Exponential AI: Synthetic Apocalypse or a Golden Age for Humanity?
Algorithmic conditioning as an evolutionary catalyst for human consciousness
New tools such as ChatGPT demonstrate a principle common to all technology: it is a double-edged sword, increasing productivity while atrophying the human faculties it replaces. This time it is not only human muscle but language itself that is being outsourced.
Humans use language not only to think, communicate, coordinate and collaborate but also to undermine thinking through flawed reasoning and miscommunication, sowing the seeds of chaos and division in the information ecology. Language is a psychotechnology that can be used for good or ill. With the birth of ChatGPT, both of these possibilities have been raised to the exponential power of algorithmic processing.
The issue of having our speech conditioned by AI and machine learning is not new. Tools built into text editors, such as autocorrect, spell-checking and the likes of Grammarly, have long relieved human writers of the cognitive load of their own thinking. Why do something a machine can do for us, right? For thousands of years we have used physical tools as extensions of our limbs to make our lives easier and more prosperous. But is a psychotechnology such as language qualitatively different from any other tool?
Tool versus weapon
What if ease and prosperity can be amplified by misusing a tool, i.e. using it so that a short-term gain produces harmful long-term externalities? When the misuse of language and communication yields self-interested benefits, it not only undermines the integrity of the whole information ecosystem but, more importantly, undermines our capacity for thinking, since cognition is contingent on articulated speech. And if we can't think clearly, we can't make well-informed choices either.
Yet that problem arises where we still bother to exercise our speech ourselves, even if in unsavoury ways, such as lying and deception. Outsourcing our speech to a machine is worse: it doesn't even allow us to make the errors from which we could learn to correct our thinking. If thinking is to be done properly, it has to be done in good grammar; if a machine takes care of our grammar, it also takes care of the quality of our thinking. It follows that outsourcing our writing skills creates a dependency on something that cannot comprehend the human dimension of language to its full extent: it can only imitate it through semantic analysis.
In light of the above, here's my own view: human nature is intrinsically lazy, in the sense that it seeks efficiency and ease to conserve energy. If so, we will have to reflect seriously on that tendency in order to survive the plight of AI, because these new tools are far too powerful for us to resist their allure of convenience unless we draw a line between where they are useful and where they become harmful. The mind, like a muscle, must be trained in order to be strong. With the most recent digital offerings we are seeing the exact opposite: the ideology of ease, accessibility and convenience has had a detrimental impact on our mental health, motivation, critical faculties, attention span, self-knowledge, civility and capacity to communicate effectively.
And yet, it’s hard to argue against convenience in a world of accelerating complexity where technological tools can buy us the ever more precious time of our finite busy lives. It seems irrational to ignore the opportunity of convenient technological solutions in a world in which cognitive complexity has outpaced the sensemaking capacity of even the best of experts, thus in a sense nullifying the very idea of being one. The question remains, however, is language just another tool or is it significantly distinct from all other tools? I argue for the latter.
Human values in the age of The Machine
It seems that the battle we are in the midst of is not about developing ever more efficient technology to save the world, as techno-optimist nihilists would like to believe. We already have the tools, and they are too powerful for us to handle with due responsibility.
If we agree with E.O. Wilson, the father of sociobiology, that "the fundamental problem of humanity is that we have paleolithic emotions, medieval institutions and godlike technology," then a reasonable reply seems to have been formulated by Daniel Schmachtenberger, who notes that "we cannot have the power of gods without the love, the wisdom and the prudence of gods." In other words, to paraphrase Tristan Harris, co-founder of the Center for Humane Technology: in order to break out of the vicious circle of mutually reinforcing crises driven by exponential technology, we need the kind of wisdom that lets us make sense of the complexity of the world at the level of that complexity, while acknowledging that runaway technology had outpaced our experts' sensemaking as early as 2008.
Our problem is that we have developed tools of extreme power, and now we need an equal measure of character, discernment and responsibility to steward that power toward the highest good of all. From this perspective, the default path of early-21st-century technological society is not a viable one. We ought to accept that if we continue with business as usual, we are headed for an existential disaster.
Nihilistic versus Axiological
Technological tools are mostly designed with profit as their goal: no matter how useful or good an invention is, it won't be developed unless it can produce profit. Following this logic to the bottom, we must acknowledge that if profit is the ultimate goal of a business model, and human welfare happens to stand in the way of that profit, then the good of the human user becomes secondary to the technical process of capital acquisition. In this sense the design model is nihilistic: it lacks values superior to and independent of its purely instrumental race for monetary and materialistic gain, and it places the humanity of its users in jeopardy.
To sum up this part: there is a conflict between the need for automation, which facilitates the intellectual and operational functioning of human ecosystems whose complexity transcends our biological capacity to understand them, and the need to preserve what is essential about being human. While taking part in this process, we must devote an equal degree of attention to what automation should enhance rather than undermine. And it is not clear exactly where the point lies at which technology starts to drain the human soul for the sake of petty comfort and the vain satisfaction of the basest pleasures.
A humane model of tech design must therefore be axiological (grounded in human values) rather than nihilistic (devoid of human values). Without values there is no way to develop the wisdom we need to navigate the protocols of the technium. The tricky bit is that the Matrix analogy of the red versus blue pill is a false dichotomy: we must take both pills whether we like it or not, which is to say that we must engage with nihilism until we purge it from our ecosystems, technological and otherwise.
These issues drive us to the bedrock argument made by Heidegger several decades ago: the essence of technology is not in itself anything technical but rather metaphysical. Without seeking to define the essence of humanity in contrast to the essence of technology, we will not be able to derive criteria for the boundary humans should not cross as they expand their reliance on technics. It seems clear to me that one such boundary must be language.
By all means we should use the linguistic tools which AI and machine learning offer to ease our lives and improve services, but only insofar as we don't lose those same biological linguistic capacities in the absence of the tools. To be able to think, we must be capable of articulating what we think, so as to check whether it even makes sense; part of that process is discussing our ideas with others to further correct our thinking through the feedback loop of communication.
From human downgrading to brainwashing
My prediction is that human beings who choose to use language tools for every purpose, at all times, are bound to be downgraded to automatons whose spontaneous capacity for independent critical thought goes extinct. This bears a myriad of risks, some of which I will discuss in the following parts. Part of the problem with the Matrix, as Baudrillard remarks, is not that it is clearly distinct from reality (as the Hollywood blockbuster misrepresents it) but that it is indistinguishable from it. As we'll learn in future parts of this series, even when we don't use online tools of an explicitly linguistic nature, our language gets conditioned through content suggestions driven by algorithmic selection: behavioural predictive models which the supercomputer behind the screen applies to the data it harvests from our online activity. This subliminal, invisible use of AI bears severe long-term existential risks.
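The feedback loop described above can be sketched in miniature. This is a toy illustration, not any platform's actual system; all names and data here are hypothetical. The point is only to show how selection by predicted engagement feeds a user more of whatever already captured their attention:

```python
from collections import Counter

def recommend(history, catalogue, k=3):
    """Rank catalogue items by overlap with topics the user has
    already engaged with: a crude stand-in for the behavioural
    predictive models discussed above."""
    # Build a behavioural profile: topic -> past engagement count
    profile = Counter(topic for item in history for topic in item["topics"])
    # Score each candidate by how strongly it matches past behaviour,
    # so the user is shown more of what already held their attention
    scored = sorted(
        catalogue,
        key=lambda item: sum(profile[t] for t in item["topics"]),
        reverse=True,
    )
    return scored[:k]

history = [
    {"title": "Outrage clip", "topics": ["politics", "outrage"]},
    {"title": "Outrage thread", "topics": ["outrage"]},
]
catalogue = [
    {"title": "Calm explainer", "topics": ["science"]},
    {"title": "Hot take", "topics": ["politics", "outrage"]},
    {"title": "Rant compilation", "topics": ["outrage"]},
]
print([i["title"] for i in recommend(history, catalogue, k=2)])
# → ['Hot take', 'Rant compilation']
```

Note that nothing in the loop asks whether the content is true or good for the user; the objective is engagement, and the "calm explainer" never surfaces. That value-blindness is precisely what the essay calls nihilistic design.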
Algorithmic cult generation
The most extreme and destructive of these risks is algorithmic cult generation. A cult relies on blind submission to an unquestionable dogma which misuses rational processes for nefarious ends, creating a fictional reality within which a target population lives, ideologically bound to a self-contained view of the world that rejects all other perspectives.
A cult weaponises human evolutionary vulnerabilities by exploiting cognitive biases to create a fictional reality detached from empirical and logical verification, both of which serve as checks on the accuracy of our thinking. In order to control thinking, the cult leader has to control the subjects' emotions, their access to information and, ultimately, their use of language. Cults become effective by isolating the target population from the possibility of empirical and logical verification through doublespeak, doublethink and gaslighting, i.e. imposing a fictional version of reality under threat of existential consequences for those who do not submit to it.
The illusion of social media, for example, is that the fake world of digital avatars becomes aspirational for the person behind the smartphone screen. Aspiring to be an influencer, as 86% of American youth claim to do, is in itself a cultish process, with all the algorithmically engineered conditions in place: echo chambers, a focus on the perception of reality instead of lived reality, and incentives to pretend, misrepresent and play zero-sum games, i.e. whatever works in the disembodied, narcissistically hollow world of skewed social media perceptions. There is a rationale for doing what people do on social media; the process is surely rational. But as I'll argue with Tristan Harris and Daniel Schmachtenberger, while it is rational in an algorithmic sense, i.e. rule-governed, calculated, strategic and following its own logic, it is a cultish rationality that is essentially anti-wisdom: it does not allow us to make good choices.
From night to day
While the situation is currently bleak, I believe we should see these early shortcomings of humanity's new technological era as a catalyst for new beginnings. Every civilisational collapse brings about a loss of literacy, and severe hardship demands that human cognition evolve; the exponential power of AI can thus be seen as an opportunity for an exponential evolution of life, overcoming the death cult of total technocracy.
Things will get much worse before they get better, but hitting rock bottom will inform the new ways in which we evolve into our full humanity.
Coming up
In the next parts we’ll discuss (1) what wisdom is from the cognitive-scientist perspective, how algorithmic conditioning undermines our capacity for wisdom through exploitation of cognitive biases, (2) why the issue of attention economy is critical to this issue, (3) the ways in which human cognition is different from algorithmic processing, (4) and finally, how the insights from the above three areas map onto the neuropsychology of the communication between left and right brain hemispheres, and what it might mean in terms of both positive and negative future implications for humanity.
As human ecosystems become ever more integrated with exponential technology, and our consciousness is increasingly mediated via AI protocols, we ought to exercise our faculty of speech to sustain a dialogue on what defines our humanity in the age of automation.