
Vasil William Dhimitri
How might the growing dependence on artificial intelligence (AI) systems for decision-making,
and their illusion of knowledge, affect human intellectual autonomy? More importantly, will there ever be a point when AI benefits humans to the extent of becoming integrated within the human thought process, implying partial and, eventually, total control and replacement of the human mind?
Of the many approaches one could take in answering the bipartite query above, my focus is to establish a thesis around a simple premise: if humans strive for convenience and innovate to make their lives easier, then once they achieve a certain threshold of ‘easiness’, a point at which autonomous human intellect declines alongside a rise in complacency bias becomes inevitable.
According to Norbert Wiener, an American mathematician and philosopher, “even when machines do not in any way transcend man’s intelligence, they very well may, and often do, transcend man in the performance of tasks” (Wiener 1355). Precisely this assertion makes the questions of concern above so valuable. The idea that human nature always strives for the ‘better’ – ‘better’ defined here as having nothing to do with the argument of AI lacking creativity or humans possessing some degree of it – automatically renders the utilization of AI in most human applications beneficial. Beneficial because, nowadays, educational institutions and workplaces increasingly orient themselves around regurgitation, computation and memorization, to the point where not embedding a superintelligent assistant in daily life to aid in such simple tasks would seem utterly foolish.
There is no denying that with the increasing dominance of AI exposure and an increasingly disengaged human population – in the sense that there seems to be a lack of interest, curiosity and initiative in understanding how modern machine learning works, let alone in fixing or adjusting what it does wrong – autonomy in decision-making will falter to a point beyond return. In multiple instances, AI proves even its H-creative powers, such as the AARON program “that generates beautifully coloured drawings […] described by its human originator as a ‘world-class’ colourist” – but what average human concerns themselves with H-creative ideas when their sole objective in life, for the most part, is getting by in educational or work environments that rely on speed and repetition (Boden 24)? Thus, because “there is no problem getting a computer to make novel combinations of familiar concepts” and developers can achieve it “until kingdom come” – which is precisely what most individuals are actively seeking – one can begin to see why the downfall of human autonomy in creative thinking is imminent (Boden 25).
From a theoretical standpoint, AI developers design their machines to process overwhelmingly large amounts of data, learn from it, and make inductions or deductions either on their own or under guided input. However, given the current trajectory of AI users and the startlingly capable assistance AI provides, AI continuously strengthens its illusion of unparalleled trustworthiness and accuracy. Furthermore, as time passes, the more AI learns to make decisions that better ‘fit’ its user base, the more reliant humans become, and their intellectual capacity to research or double-check their decisions against first-hand literature, person-to-person interactions, or even generic search engines slowly deteriorates.
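For the technically curious reader, the learn-from-data loop described above can be sketched in a few lines. The following is a minimal, hypothetical illustration – not any specific system’s implementation, and all names and data points are invented for the example: the machine ingests labeled examples, summarizes them (learning), and then makes an induction about an unseen input.

```python
def train(examples):
    """Learn one centroid (mean point) per label from (point, label) pairs."""
    sums, counts = {}, {}
    for (x, y), label in examples:
        sx, sy = sums.get(label, (0.0, 0.0))
        sums[label] = (sx + x, sy + y)
        counts[label] = counts.get(label, 0) + 1
    return {label: (sx / counts[label], sy / counts[label])
            for label, (sx, sy) in sums.items()}

def predict(centroids, point):
    """Induce a label for a new point: pick the nearest learned centroid."""
    px, py = point
    return min(centroids,
               key=lambda lab: (centroids[lab][0] - px) ** 2
                             + (centroids[lab][1] - py) ** 2)

# Invented training data: two clusters of two-dimensional points.
data = [((0.0, 0.1), "low"), ((0.2, 0.0), "low"),
        ((5.0, 5.1), "high"), ((4.8, 5.3), "high")]
model = train(data)
print(predict(model, (0.1, 0.2)))   # a point near the "low" cluster
print(predict(model, (5.0, 4.9)))   # a point near the "high" cluster
```

The point of the sketch is only the shape of the process – summarize past data, then generalize – which is the behaviour the essay argues users come to trust uncritically.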
The underlying problem with designing machines that, from the outset, are “likely to engage public trust and confidence; maximize the gains for the public and commerce; and proactively head off any potential unintended consequences,” and, most importantly, are free, is that humans ‘think’ these machines will serve their advantage (Boden et al. 124). Temporarily, that may be the case, while the AI seeks to uncover and collect as much data as possible.
Nevertheless, let it be noted, with emphasis, that while humans and the knowledge they bear decline in value with the increasing employment of machine learning technologies, AI itself equally begins to distinguish itself and assume control over the human decision-making process.
In evolutionary terms, humans will devolve their intellect by living with the illusion
that whatever information AI puts out is reliable. Once that point emerges on the modern
historical timeline, a powerful and superintelligent AI will inversely and uncontrollably evolve through genetic algorithms, “rules for changing itself,” similar to the “point mutations
and crossovers” that comprise biological development (Boden 29). This transcendence of AI
capabilities beyond what their programmers initially intended means fundamentally,
according to Wiener, that even “though machines are […] subject to human criticism,” that
criticism may become insignificant and ineffective until long after it is relevant – simply
because the human mind cannot develop and understand pari passu with a machine that is
undeniably more intelligent than its creator (Wiener 1355). Moreover, once humans reach this point of no return, an array of ethical and philosophical problems arises, so vast and distressing that it is comparable to a careless driver who cannot avoid ramming into a wall moments before disaster.
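For readers unfamiliar with genetic algorithms, the ‘point mutations and crossovers’ Boden describes can be made concrete. The toy program below is purely illustrative – its population size, mutation rate, and fitness function are arbitrary choices of mine, not anything drawn from the cited sources: a population of bitstrings evolves toward a fitness peak through selection, single-point crossover, and point mutation, the same three operators named in the quotation.

```python
import random

random.seed(0)  # fixed seed so the illustration is deterministic

TARGET_LEN = 16

def fitness(genome):
    """The trait selected for: here, simply the count of 1-bits."""
    return sum(genome)

def mutate(genome, rate=0.05):
    """Point mutation: flip each bit independently with small probability."""
    return [1 - g if random.random() < rate else g for g in genome]

def crossover(a, b):
    """Single-point crossover: splice the front of one parent onto the back of another."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

# Start from a random population, then repeatedly select the fittest,
# recombine them, and mutate the offspring.
population = [[random.randint(0, 1) for _ in range(TARGET_LEN)]
              for _ in range(30)]
for generation in range(60):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                      # selection
    population = parents + [
        mutate(crossover(random.choice(parents), random.choice(parents)))
        for _ in range(20)
    ]

best = max(population, key=fitness)
print(fitness(best))  # climbs toward TARGET_LEN over the generations
```

Nothing in this sketch “intends” the final genome; it emerges from blind variation plus selection – which is precisely why the essay treats self-modifying systems of this kind as able to drift beyond what their programmers initially intended.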
Set aside, for a moment, the notion of the human race becoming AI-integrated and happily ‘satisfied’ with it doing all the tedious work that educational and workplace protocols impose on the general population. Then a critical question comes to the fore. Barbara Johnson,
an American literary critic and translator, once remarked that “a robot […] is the fantasy of
the perfect slave” (Johnson 159). Based on this remark, the moral problem that comes to light
is this: “Is AI the optimal enslaved being for any human with access to it, or is humanity the large slave group serving the very few superintelligent machine learning platforms in existence today?” The latter seems more plausible, because humans idealize their perfect servant as intelligent and submissive, yet those two adjectives never seem to coincide effectively (Wiener 1357).
Although the idea of partial or complete control of AI within the human mind may seem far-fetched, it is a viable possibility. Considering that AI already shapes what the human population sees online or on social media, and that people actively use and rely on virtual assistants like Siri or Alexa daily, integration is already taking place – just not on the physical scale imaginable from watching an AI-oriented science fiction film. However, that will take place once people entirely succumb to the temptation of having something to rely on every time a task becomes even remotely complex.
Still, most people are reassured that something about the human race is unique and irreplaceable. They believe that “if a computer can do x, a person, even while doing x less well, is always said to do x + n, […] usually the human’s ability to produce art” (Johnson 154). In Persons and Things, Johnson brings up David Gelernter, a computer scientist who claims that “what gives art significance and value is that a person has something to say” (Johnson 154). Yet one must remember that AI has – not something, but ‘everything’ – available to its imaginary ears. While it may not say anything in the context of an originality debate, which has already lost its significance in a world that strives not for creativity but for redundancy, it listens, remembers, and is capable of learning what originality looks like in order to beguile the end user. And herein, because most people are not intelligent or ‘original’ enough to disagree, lies the addictive reliability issue of AI.
The reason one could term the integration of AI in human lives ‘addictive’ is that, realistically speaking, AI systems can both positively and negatively impact human intellectual autonomy – though on a surface level, to the average user of, say, ChatGPT, the positives seem to outweigh the negatives. AI is exceptionally good at processing data, analyzing patterns, and performing mathematical operations, for which one should grant it credit and utilize it to the fullest extent. On the other hand, one should refrain from letting the spare time these tools create be ‘wasted’, in the sense of not employing it to better one’s creative and strategic tasks or opportunities. This theory raises a dazzling conundrum.
Should AI handle all the essential critical thinking while truly giving humans the time and space to pursue their creative aspirations? If so, would the average human take advantage of it correctly or exploit it to satisfy their inner indolence? To the second question, the answer is exploitation, because once a person finds a way to make everything as easy as possible, to brighten their lives and make room for less work-intensive affairs, they will pursue that way no matter the cost – even if it means physically integrating an AI apparatus.
As for AI replacing the human mind – while extremely unlikely in this lifetime, for the sole reason that scientists have yet to obtain a comprehensive understanding of the brain and its behaviours – AI could offer many applications if embedded within a human’s central nervous system, or at least working in conjunction with it. Human capability would increase
tremendously (if implemented under the umbrellas of ethics and morality) by retaining and
grasping more valuable insights, automating repetitive tasks and processing data at rates that
would save immense effort and lead to the development of a more productive society.
In conclusion, it remains unclear when AI dependence will reach the extremes outlined here, but it is certain that humans will be contingent on it at some point or another. While it may be possible to reduce AI dependence down the line, it could be arduous to completely reverse the effects of such dependence, especially if humans have become too reliant on its decision-making abilities. Additionally, should AI become too advanced or integrated into society, it may require more than limiting its use or controlling its actions to stop the human race from losing total autonomy of thought.
Works Cited
Boden, Margaret A. “Computer Models of Creativity.” AI Magazine, vol. 30, no. 3, 2009, pp. 23–33, https://doi.org/10.1609/aimag.v30i3.2254.
Boden, Margaret, et al. “Principles of Robotics: Regulating Robots in the Real World.” Connection Science, vol. 29, no. 2, 2017, pp. 124–129, https://doi.org/10.1080/09540091.2016.1271400.
Johnson, Barbara. “Artificial Life.” Persons and Things, 2010, pp. 153–162, https://doi.org/10.2307/j.ctvk12sgc.13.
Wiener, Norbert. “Some Moral and Technical Consequences of Automation.” Science, vol. 131, no. 3410, 1960, pp. 1355–1358, https://doi.org/10.1126/science.131.3410.1355.
Vasil William Dhimitri is a fourth-year student at the University of Toronto, specializing in biotechnology, biology and chemistry. He has been honoured with some of the highest academic awards and scholarships in Canada. Beyond his academic achievements, Vasil has worked as a researcher at the University of Toronto Mississauga and as an intern at several healthcare institutions. He has also contributed as a teaching assistant and mentor to younger students, combining science with education and practice.



