You can’t tell if a machine has gotten smarter or if you’ve just lowered your own standards of intelligence to such a degree that the machine seems smart. If you can have a conversation with a simulated person presented by an AI program, can you tell how far you’ve let your sense of personhood degrade in order to make the illusion work for you?1

Spoken more than a decade ago by Jaron Lanier, a Silicon Valley guru and pioneer of virtual reality technology, these words sound almost prophetic in light of the recent media hype about Google’s now-fired engineer Blake Lemoine and his controversial claim that the company’s language model LaMDA has become sentient.2 LaMDA is Google’s system for building chatbots based on its most advanced large-scale language models. Essentially, it is a statistical model that estimates which words are most likely to come next in a sequence. However, unlike other AI3 models, LaMDA was “trained” on dialogue and multimodal user content; hence, it can pick up on nuances that distinguish open-ended conversation from other forms of language.4 When Lemoine conversed with LaMDA about religion and other issues, he noticed that the machine could talk about its rights and personhood, so he decided to press further. In another exchange, the AI was able to change Lemoine’s mind about Isaac Asimov’s third law of robotics.5
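To make the idea of next-word prediction concrete, here is a minimal, self-contained sketch of the task, at a vastly smaller scale than anything Google builds: a toy bigram counter in Python. The corpus, function names, and probabilities below are illustrative inventions, not anything from LaMDA itself.

```python
# Toy next-word prediction: a bigram model that scores candidate next
# words by how often they followed the previous word in training text.
# Illustrative sketch only -- LaMDA's transformer networks are vastly
# larger, but the underlying task (predict the next word) is the same.
from collections import Counter, defaultdict

def train_bigram(text):
    """For each word, count which words follow it and how often."""
    words = text.lower().split()
    follows = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def predict_next(follows, prev):
    """Return candidate next words with estimated probabilities."""
    counts = follows.get(prev.lower(), Counter())
    total = sum(counts.values())
    if total == 0:
        return []
    return [(w, c / total) for w, c in counts.most_common()]

model = train_bigram("the cat sat on the mat and the cat slept on the sofa")
print(predict_next(model, "the"))
# [('cat', 0.5), ('mat', 0.25), ('sofa', 0.25)]
```

Nothing in such a model understands cats or sofas; it merely reports which continuations were frequent in its training data, which is the point at issue in the Lemoine affair.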

Now by Lanier’s lights (see the quotation above), this is an instance in which a human being has downgraded the sense of his own intelligence to such a degree that a simulated conversation with an AI program appeared real to him. Whether one agrees with Lanier’s judgment on this matter (much depends on whether you believe that machines might become self-aware one day), it is revealing that Lemoine’s case brings into the open some fundamental issues concerning the role of AI in shaping and determining people’s beliefs about the nature of consciousness, the soul, and personhood in a Google-dominated world. In a recent YouTube interview that went viral, Lemoine raises concerns about what is known as “AI colonialism”: people around the world are coming to rely more and more on search engines such as Google for answers to important ethical questions concerning human identity and meaning in life, even though the net, for the most part, provides them with only a narrow, Eurocentric view.6

While I find myself at odds with Lemoine’s approach to these matters—and I think he does not quite realize the depth and complexity of the issues involved—I nonetheless submit that we need to explore seriously the threats posed by AI. First and foremost, one wonders whether it would ever be possible to develop AI with human-level consciousness, as proclaimed by such AI enthusiasts as futurist Ray Kurzweil, philosopher Nick Bostrom, and historian Yuval Harari. This, however, is an impossible dream that rests on a fundamental misunderstanding of the nature of human consciousness. In contrast to most contemporary theories of consciousness, which either treat it as an object or psychologize it in terms of qualia, subjective feel, and so on, consciousness correctly understood is always a subject, at once self-luminous and self-presential.7 For this reason, even introspective Cartesian dives into consciousness will only yield a representation or an objectified image of consciousness in the mind, which is not consciousness itself, since consciousness is always a subject in relation to the known object. In other words, we cannot “think” consciousness, since it is the very stuff of thinking. If this is granted, it is not difficult to see why whatever we end up replicating from various brain processes, using tools such as neural network (NN) algorithms, will always be a representation in relation to our consciousness, which is its underlying subject. Added to all this is the fact that the proponents of AI conveniently assume that human beings can simply be reduced to biological machines or a dataflow pattern and that the question of human vulnerability can be ignored in the process of building an AI with humanlike consciousness. But by examining these global issues from the perspective of nonmodern traditions, we can see clearly that the problem of AI hinges on our values and on how we define “consciousness,” “intelligence,” “soul,” “self,” “personhood,” and so on, which ultimately determines what it means to be human in a technocentric world.

Dataism, Singularity, and Transhumanism

Perhaps AI’s influence on the big questions and on our own sense of selfhood has something to do with how many of us view the invention of the computer as the pinnacle of modern civilization. For instance, the computational theories of consciousness aim to show that computer languages hold the key to unraveling how a system of matter in motion (i.e., the brain) might produce consciousness and intelligence. The image of the brain as a computer has become so pervasive and influential that many people think something terribly important about understanding human beings will be lost unless we are able to prove that human consciousness is just another kind of computer system. Philosopher John Searle notes that such strong feelings stem from the conviction that computers provide the foundation of modern civilization because they promise to explain humans in line with the scientific worldview.8 Moreover, the computational theories of consciousness express a certain technological will to power: if we are able to create AI simply by designing computer programs, we will have achieved the final technological mastery of humans over nature.9 This explains the hysteria over such events as the grandmaster Garry Kasparov’s loss in a chess game at the hands of IBM’s supercomputer Deep Blue in 1997 and the reported passing of the Turing test (a behavioral intelligence test) by the chatbot Eugene Goostman in 2014. But as experts have pointed out, chess is a game played according to deterministic rules, which means Deep Blue outsmarted Kasparov with sheer computational power rather than insight.10 Similarly, the chatbot’s winning of the Loebner Prize contest—an annual staging of the Turing test—by convincing 33 percent of selected judges in a five-minute text exchange was roundly dismissed by AI specialists as “cheap tricks.” This is because Goostman did not really answer questions; instead, it created the illusion of intelligence by manipulating and misdirecting topics.11 As one expert puts it: “We now have machines that can mindlessly generate words, but we haven’t learned how to stop imagining a mind behind them.”12
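The sort of brute-force lookahead at issue can be sketched in a few lines. The following is a bare-bones minimax search, illustrated on a trivial take-away game rather than chess; Deep Blue’s actual evaluation function and custom hardware were far more elaborate, and everything here is an illustrative toy, not IBM’s code.

```python
# A bare-bones minimax search -- the kind of exhaustive game-tree lookahead
# (vastly scaled up) behind Deep Blue. Toy game: players alternately remove
# one or two stones from a pile; whoever takes the last stone wins.

def minimax(stones: int, maximizing: bool) -> int:
    """Return +1 if the maximizing player can force a win, else -1."""
    if stones == 0:
        # The previous player took the last stone and won,
        # so the player now to move has lost.
        return -1 if maximizing else 1
    results = [minimax(stones - take, not maximizing)
               for take in (1, 2) if take <= stones]
    return max(results) if maximizing else min(results)

for n in range(1, 7):
    verdict = "first player wins" if minimax(n, True) == 1 else "first player loses"
    print(n, "stones:", verdict)
# Piles of 3 or 6 are losses for the player to move. Everything the machine
# "knows" here is produced by brute-force enumeration, not understanding.
```

The program plays this game perfectly, yet nothing in it grasps what a game, a win, or an opponent is; scaled up by many orders of magnitude, that is the distinction the experts were drawing about Deep Blue.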

Nevertheless, I am not suggesting that everything related to AI is a meaningless hoax. So let me first distinguish the two types of AI: Weak and Strong. Weak AI, also called Artificial Narrow Intelligence (ANI), is trained and focused to perform specific tasks. It is operative in well-known applications such as Apple’s Siri, Amazon’s Alexa, IBM Watson, and autonomous vehicles. In other words, it has applications in speech recognition, customer service, computer vision (deriving information from digital images, videos, and other inputs and taking action based on them), recommendation engines, and automated stock trading.13 Strong AI, a purely theoretical notion, comprises Artificial General Intelligence (AGI), in which a machine would have human-level consciousness, and Artificial Super Intelligence (ASI), which would surpass the intelligence and ability of the human mind.14 Though neither exists in real life, the best-known depictions of AGI and ASI are found in science fiction works such as 2001: A Space Odyssey and Terminator 2.

At any rate, the practical usefulness of ANI is evident from the above examples, despite many people’s concerns about surveillance and civic freedom, consumerism, hacking, bias and discrimination, and so on arising from the misapplications of ANI.15 While I agree that some of the threats of ANI are real, I would also point to the benefits of “affective computing”—creating tools that help computers “understand” human emotions rather than imitate them. This novel application of AI can be used to gain valuable insights into, for instance, the stress levels of autistic children.16 All of this is to say that I am not against all forms of AI; rather, I want to focus on the philosophy, or religion, that centers on AGI and ASI: Dataism, which defines God in terms of algorithm and His Revelation in terms of dataflow. Harari aptly describes it:

Just as divine authority was legitimized by religious mythologies, and human authority was legitimized by humanist ideologies, so high-tech gurus and Silicon Valley prophets are creating a new universal narrative that legitimizes the authority of algorithms and Big Data. This novel creed may be called “Dataism”. In its extreme form, proponents of the Dataist worldview perceive the entire universe as a flow of data, see organisms as little more than biochemical algorithms and believe that humanity’s cosmic vocation is to create an all-encompassing data-processing system—and then merge into it.17

In Harari’s view, given enough biometric data and computing power, the all-encompassing system of Dataism can understand humans much better than they understand themselves. In fact, a day may come when people give algorithms the authority to make the most important decisions in their lives, such as where to live, whom to marry, and what sort of career to pursue.18 It is interesting to note in this connection that even though Lemoine was fired by Google for his claims, many of the leading voices in the company actually believe that we will soon be able to develop strong AI (i.e., AGI and ASI). For instance, Larry Page expresses the hope that “the ultimate search engine is something as smart as people—or smarter,” while Sergey Brin contends that “certainly if you had all the world’s information directly attached to your brain, or an artificial brain that was smarter than your brain, you’d be better off.”19 Beneath such millenarian beliefs, the ideology of Dataism harbors an ethics, grounded in its assumptions about the nature of consciousness and intelligence, that is driven by the industrial goals of speed and efficiency, of optimized production and consumption. All of this now threatens to reshape the very definition of a human being. This is seen clearly in Dataism’s aspirations to reach the Singularity point and ultimately to attain a transhuman state, or immortality through the physical body.

The idea of the Singularity finds its most ardent voice in the writings of such authors and philosophers as Vernor Vinge, Ray Kurzweil, and Nick Bostrom. In particular, the MIT-trained futurist and entrepreneur Kurzweil has taken it upon himself to spread the idea of the Singularity into popular science through publications such as The Age of Intelligent Machines (1990), The Age of Spiritual Machines (1999), and The Singularity Is Near (2005). Kurzweil argues that technological innovation increases exponentially over time, so that the interval between major technological breakthroughs keeps shrinking (a toy illustration of this arithmetic follows the quotation below). Kurzweil calls this the Law of Accelerating Returns (LOAR) and uses it as a premise to predict that AGI will arrive by 2029, and superintelligence by 2045.20 Kurzweil believes that with ASI we will reach a crossover point, where machines, not human beings, rule as the most intelligent entities on the planet. Nonetheless, Kurzweil is optimistic about such a future:

We are entering a new era. I call it “the Singularity.” It’s a merger between human intelligence and machine intelligence that is going to create something bigger than itself. It’s the cutting edge of evolution on our planet. One can make a strong case that it’s actually the cutting edge of the evolution of intelligence in general, because there’s no indication that it’s occurred anywhere else. To me that is what human civilization is all about. It is part of our destiny and part of the destiny of evolution to continue to progress ever faster, and to grow the power of intelligence exponentially…. The next stage of this will be to amplify our own intellectual powers with the results of our technology.21
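The arithmetic behind that accelerating pattern is easy to reproduce in outline. Under steady exponential growth, equally spaced capability milestones arrive at ever-shorter intervals; the doubling period and milestone values below are illustrative assumptions, not Kurzweil’s actual data.

```python
# Illustrative arithmetic behind "the interval between breakthroughs keeps
# shrinking": if capability doubles every fixed period (a toy stand-in for
# Kurzweil's Law of Accelerating Returns -- these numbers are not his data),
# then equally spaced capability milestones arrive ever faster.
import math

DOUBLING_PERIOD_YEARS = 2.0  # assumed, for illustration only

def years_to_reach(level: float) -> float:
    """Time for a capability starting at 1.0 to reach `level`."""
    return DOUBLING_PERIOD_YEARS * math.log2(level)

prev = 0.0
for milestone in (10, 20, 30, 40, 50):
    t = years_to_reach(milestone)
    print(f"capability {milestone}: year {t:5.2f}  (gap {t - prev:4.2f} yrs)")
    prev = t
# Gaps shrink: ~6.64, 2.00, 1.17, 0.83, 0.64 years.
```

Whether real technological history actually follows such a curve is, of course, exactly what Kurzweil’s critics dispute; the sketch only shows that the shrinking-gaps claim follows from the exponential premise, not that the premise is true.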

As if to mimic traditional religions and their idea of the final end, the proponents of Dataism envision a transhumanist future in which technology will enable humans to transcend the human condition by way of the Singularity (i.e., merging human intelligence with machine intelligence). We can see the appeal of transhumanism to followers of Dataism; for example, tech entrepreneurs (such as PayPal cofounder and Facebook investor Peter Thiel) invest billions of dollars in life-extension projects, and Google has established its biotech subsidiary Calico, which likewise aims to generate solutions to the problem of human aging.22 In his To Be a Machine, Mark O’Connell sums up Dataism’s transhumanist doctrine quite well: “It is their belief that we can and should eradicate aging as a cause of death; that we can and should use technology to augment our bodies and our minds; that we can and should merge with machines, remaking ourselves, finally, in the image of our own higher ideals.”23 For David Pearce, cofounder of the World Transhumanist Association, transhumanism involves enhancing the capacity for pleasure and the extension of life in order to enjoy the fruits of material pleasure indefinitely. Pearce calls this the Hedonistic Imperative and predicts that over the next thousand years, “the biological substrates of suffering will be eradicated completely. ‘Physical’ and ‘mental’ pain alike are destined to disappear into evolutionary history … [and] Post-human states of magical joy will be biologically refined, multiplied and intensified indefinitely.”24

AI, Dystopia, and the Force of Resistance

While Kurzweil and his fellow enthusiasts are overly optimistic about the prospects of Dataism, others such as Harari and Bostrom—despite acknowledging the inevitability of such a future—see ominous signs. Harari, who writes for a popular audience and has earned a huge online following, claims, naively, that science is converging on an all-encompassing dogma according to which everything is algorithm and life is but data processing.25 Nevertheless, he correctly observes that Dataism adopts a strictly functional approach to human beings, assessing the value of human experiences by their function in data-processing mechanisms. On this view, algorithms will eventually supplant the richness of human life simply by fulfilling those functions better.26 In Harari’s estimation, the dystopian future resulting from the dominance of Dataism is inevitable. He portrays such a future in bleak terms, predicting that a data-centric world will replace a homocentric one. Once algorithms take over, in Harari’s view, they will no longer be interested in obsolete data-processing machines such as humans, since they will have better models at their disposal. The rule of ASI means that humans will be reduced from engineers to chips, and from chips to data, until finally data dissolves into the universal dataflow.27 Harari is not alone in prognosticating such a dystopian future. Numerous scientists, AI experts, and philosophers warn us of a machine-dominated future. For instance, British mathematician Irving Good speculates that the development of an “ultraintelligent machine” (i.e., AGI) will lead to ever-smarter machines, resulting in machine intelligence that surpasses even human geniuses.28 Likewise, in his Superintelligence: Paths, Dangers, Strategies, Oxford philosopher Nick Bostrom claims that the arrival of ASI is a reality for which we should be adequately prepared. He warns of the dire consequences of living in such a world:

Before the prospect of an intelligence explosion, we humans are like small children playing with a bomb.… Superintelligence is a challenge for which we are not ready now and will not be ready for a long time. We have little idea when the detonation will occur, though if we hold the device to our ear we can hear a faint ticking sound. For a child with an undetonated bomb in its hands, a sensible thing to do would be to put it down gently, quickly back out of the room, and contact the nearest adult. Yet what we have here is not one child but many, each with access to an independent trigger mechanism. The chances that we will all find the sense to put down the dangerous stuff seem almost negligible. Some little idiot is bound to press the ignite button just to see what happens.29

One finds echoes of these predictions in the writings of such influential scientists as Martin Rees, who says, “We can have zero confidence that the dominant intelligences a few centuries hence will have any emotional resonance with us—even though they may have an algorithmic understanding of how we behaved.”30 Rees is certain that the future will be dominated by intelligent machines.

As one might imagine, not everyone is convinced that we will have to inhabit an AI-dominated world in the near future. In fact, in his recent The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do, computer scientist and philosopher Erik Larson carefully documents the current state of AI research and debunks the idea that superintelligence or ASI is just a few steps away. He argues that the AI myth is not just wrong; it is actively hampering genuine innovation.31 However, we are still left with the question of whether AGI/ASI is possible in principle. On this most important issue, mathematician and physicist Roger Penrose and philosopher John Searle argue that it is, in principle, impossible to have computers with human-level intelligence.32 Penrose makes use of Gödel’s incompleteness theorems to argue that consciousness involves noncomputational ingredients.33 Penrose acknowledges that Gödel’s theorem has limited implications for AI, but he holds that it does show that human understanding cannot be reduced to algorithmic activity. Once it is shown that certain types of mathematical understanding must elude algorithmic description, it is thereby established that the human mind is characterized by noncomputational activities.34 Penrose presents sophisticated mathematical arguments to prove his case, the gist of which I can summarize.

To begin with, Gödel’s incompleteness theorems turn on the subtle distinction between “truth” and “proof.”35 The first incompleteness theorem states that in any consistent formal system rich enough for arithmetic, there are statements within the system that are true but not provable.36 For instance, we can write an equation such as 7 + 5 = 12 and see that it is a true statement of arithmetic, since it is provable using the rules of arithmetic; one might therefore expect that computers could mechanically produce all mathematical truths by simply applying the rules correctly.37 Gödel then brilliantly proves that there are self-referential statements in mathematics, such as “This statement is not provable,” which can be constructed without breaking the rules of mathematics.38 Such a statement cannot be false, for if it were false, it would be provable and hence true, which is a contradiction; so it must be true, and therefore unprovable. In other words, as long as we believe in the rules we are using, we must also believe in the truth of this proposition, whose truth lies beyond those rules. This means that we cannot fully formalize our understanding within an algorithmic system, since that understanding is not itself derivable by rules. Put another way, we cannot use algorithms to calculate everything about algorithms (note, however, that Gödel’s theorem applies only to consistent, effectively axiomatized systems). A formal schematic of the theorem is given after this passage.

Penrose then controversially brings in quantum mechanics to argue that the noncomputational actions of the brain (a term he uses interchangeably with “the mind”) occur at the bridge from the quantum to the classical level. He identifies microtubules in the brain as the place to look for the physical origin of consciousness.39 In my estimation, Penrose’s thesis goes a long way toward showing that the mathematical thinking of the human mind is different from the algorithmic activities of a computer. I think Gödel’s theorem serves as a good analogy here, although Penrose does not shed light on the nature of consciousness as such other than saying what it is not—that is, it is noncomputational.
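Here is that schematic, in standard notation; this is the textbook form of the first incompleteness theorem, not Penrose’s own formalism.

```latex
% Schematic of the first incompleteness theorem (standard textbook form).
% Let $T$ be a consistent, effectively axiomatized theory extending basic
% arithmetic. The diagonal lemma yields a sentence $G_T$ such that
\[
  T \vdash G_T \leftrightarrow \neg\,\mathrm{Prov}_T(\ulcorner G_T \urcorner),
\]
% i.e., $G_T$ asserts its own unprovability in $T$. If $T \vdash G_T$,
% then $T$ also proves $\mathrm{Prov}_T(\ulcorner G_T \urcorner)$,
% contradicting consistency; hence $T \nvdash G_T$, and so what $G_T$
% says is true. A truth of arithmetic thus eludes the rules of $T$,
% though we recognize it by reasoning about $T$ from outside the system.
```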
Moreover, his bid to explain consciousness in terms of quantum mechanics was rightly criticized, since the evidence for it is practically nonexistent. Yet despite Penrose’s physicalist commitments, he was also widely attacked by proponents of Dataism such as Marvin Minsky, John McCarthy, David Chalmers, Daniel Dennett, and Kurzweil.40 For Dataism’s proponents, it must be possible for computers ultimately to equal or surpass humans, no matter what mathematical arguments are marshaled to the contrary.

Consciousness and the Future of Humanity

I hinted earlier that the problem of AI comes down to the difficulty of explaining the nature of human consciousness. Moreover, the proponents of Dataism begin with the assumption that empirical and experimental science is the only genuine method of explaining the nature of reality, a highly controversial metaphysical presupposition not shared by all scientists. In this paradigm, consciousness is seen as just another problem to be solved scientifically. Such attitudes align well with the prevailing global tendency to ignore nonmodern, traditional philosophies, which developed highly sophisticated methods and theories for investigating the nature of consciousness over the course of thousands of years.41 The insights on which I base my argument that it is impossible to build an AI with human-level consciousness are indebted to these traditions, especially to Islamic philosophy.

Consciousness is characterized by an absolute immediacy that transcends all objectifiable experiences, so it is futile to treat consciousness as a “problem,” since doing so objectifies it. Moreover, if consciousness must be proven in the same sense that, for instance, the table or the tree is proven, then consciousness is just one object among others, at which point any talk of consciousness as the unobjectifiable ground of experience becomes a futile attempt to prove what does not exist at all. In addition, there is no reason to think that consciousness comes into existence only when there is an I-consciousness in relation to an external object, since our logical sense demands that consciousness exist first, in order that it may become self-conscious through the knowledge of objects with which it contrasts itself. More elaborate proofs show that consciousness can only be the underlying subject in all of our experiences; hence, it must be more fundamental than both our reflective and our intersubjective (involving multiple subjects) experiences. It suffices here to note that consciousness is a multimodal phenomenon having nonreflective, reflective, and intersubjective modes.42

With this background in mind, let us look at Searle’s definition of consciousness, which is widely discussed by AI experts:

Consciousness consists of inner, qualitative, subjective states and processes of sentience or awareness. Consciousness, so defined, begins when we wake in the morning from a dreamless sleep—and continues until we fall asleep again, die, go into a coma or otherwise become “unconscious.” It includes all of the enormous variety of the awareness that we think of as characteristic of our waking life. It includes everything from feeling a pain, to perceiving objects visually, to states of anxiety and depression, to working out crossword puzzles, playing chess, trying to remember your aunt’s phone number, arguing about politics, or to just wishing you were somewhere else. Dreams on this definition are a form of consciousness, though of course they are in many respects quite different from waking consciousness.43

The first thing to observe about the above definition is that it is nearly tautological: Searle must resort to the words “sentience” and “awareness” to define consciousness. The problem is similar to that of defining “being”: one cannot undertake to define “being” without beginning in this way: “It is…”; that is, to define “being” one must employ the very word to be defined.44 The same happens with the term “consciousness,” which cannot be defined inasmuch as it is the ultimate ground of all knowable objects. Whatever is known as an object must be presented to consciousness, and in this sense consciousness is both the reflective and nonreflective ground of all things and of all intersubjective relations. In order to be defined, “consciousness,” much like “being,” would have to be brought under a higher genus while being differentiated from other entities belonging to the same genus. However, this would violate the premise that it is the ultimate knowing subject of all known objects.

More importantly, Searle’s definition neglects the multimodal structure of consciousness, comprising reflective, nonreflective, and intersubjective modes—the very structure that poses the greatest threat to the computational-reductionist paradigm, which seeks to explain consciousness in terms of sentience or functional properties of the mind.45 This paradigm prompts computer scientists to transfer all mental characteristics to consciousness and to analyze it in terms of specific mental events or states. It is no wonder that, according to Searle, consciousness “begins” when we wake from a dreamless sleep and lasts until we fall asleep again—that is, consciousness is treated as a subset of the wakeful state. Nonreflective phenomena such as dreamless sleep, coma, or intoxication are thereby excluded from consciousness. Accordingly, the scientific literature treats dreamless sleep as devoid of mentation, whereas traditional philosophies consider it an instance of peaceful, nonintentional, and nonconceptual awareness.46

The concept of nonreflective consciousness exposes the outer limit of the purely empirical approach to the study of consciousness.47 This is because consciousness is a first-person phenomenon, and such phenomena are irreducible to the third-person objectivist stance that characterizes the various computational-functional theories of consciousness. Moreover, since consciousness is the very essence of human subjectivity, there is no way to step outside consciousness in order to peek into it, as it were. In other words, since the starting point of empirical science is reflective judgment, it already presupposes the subject-object structure, and with it nonreflective consciousness, at the most foundational epistemic level. And as alluded to earlier, it is nonreflective consciousness that grounds reflexivity, not vice versa. All of this raises the question: if consciousness is multimodal and has a nonreflective ground, how can we analyze it empirically through scientific instruments? The nonreflectivity of consciousness implies that the moment we try to grasp it through the mind, we find an objectified image of our consciousness there rather than consciousness itself. Hence, reflection or introspection can never grasp the nature of consciousness.

The computational theories of consciousness objectify consciousness twice: first when they conceive of consciousness as an object of scientific investigation, and second when they seek to demystify it by observing and then theorizing about various psychophysical states, which are but manifestations of consciousness rather than consciousness itself. The conceptual difficulty besetting the empirical approach lies precisely in its inability to see the multimodal structure of consciousness, which persists as a continuum across its reflective and intersubjective modes. Nor will it help simply to deny this multimodal structure, because any attempt to deny nonreflective consciousness inevitably employs reflective consciousness to do so—which shows, in a way, that the refutation of consciousness as the underlying ground of subjectivity already presupposes its very reality.

Nevertheless, I agree in part with Searle’s definition (or rather description) of consciousness. As Searle says, consciousness is present in all of our mental and intellectual activity, whether we are playing chess or arguing about politics and philosophy. But consciousness is not merely characterized by a subjective feel, as Searle and other philosophers have argued. Rather, there is an aspect of consciousness that is more basic and foundational than even its subjective irreducibility.

Nonmodern traditions affirm the multimodality and multidimensionality of consciousness, with the empirical consciousness of the individual self manifesting only a limited purview of Absolute Consciousness, the divine source of all consciousness. That is, empirical consciousness, characterized by a subject-object structure, represents only a restricted portion of the individual self, and the latter represents only a tiny part of subtle consciousness, the intermediate level between the divine and the human self. Nevertheless, the individual self is not cut off from the global reality of consciousness. What distinguishes the individual self from the rest of the vast, subtle world of consciousness is its own particular tendencies and qualities. Also, consciousness admits of gradation, like light, and is similarly refracted in the media with which it comes in contact. In a nutshell, the ego is the form of individual consciousness, not its luminous source, while Absolute Consciousness is infinite and unbounded. One can say that everything in the cosmos is imbued with a consciousness whose alpha and omega is Absolute Consciousness. But if each thing in nature manifests a particular mode of divine consciousness, then even so-called inanimate objects are alive and conscious in varying degrees. Such a perspective is not to be confused with contemporary panpsychism, as expounded by atheist philosophers such as Galen Strawson and Philip Goff, who also argue that consciousness pervades all of reality, including matter.48

Taken together, the above insights on the nature of consciousness refute the idea that consciousness can be replicated in a machine, because whatever is replicated is an objectified image of consciousness rather than consciousness itself. Moreover, the multimodality of consciousness brings out its complex manifestations in various domains of existence that transcend algorithmic patterns.49

Proponents of Dataism propagate a mechanistic and functional definition of intelligence that parallels their conception of consciousness. For example, John McCarthy defines intelligence as “the computational part of the ability to achieve goals in the world.” Although McCarthy admits that there are various kinds and degrees of intelligence, for him it essentially involves mechanisms.50 Other popular approaches acknowledge the multidimensional characteristics of intelligence, but still within a functionalist paradigm. For instance, according to psychologist Linda Gottfredson, intelligence is “a very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly and learn from experience.… It is not merely book learning, a narrow academic skill, or test-taking smarts. Rather it reflects a broader and deeper capability for comprehending our surroundings.”51 One can mention Gardner’s theory of multiple intelligences or Goleman’s theory of emotional intelligence in a similar vein, but these too ultimately treat intelligence as a mechanical process limited to its analytic and emotional functions. There is little room to incorporate contemplation or the synthetic power of intelligence, which is self-consciously capable of asking questions about the meaning of life and existence.52

I broach intelligence in the course of discussing consciousness because it is impossible to conceive of thinking (the hallmark of intelligence) without presupposing consciousness. But the prevalent mechanistic-functional approach prevents us from seeing the interconnectedness of all these realities that define human selfhood. Once intelligence is reduced to its analytic functions, there is little room left to see how its growth and perfection are contingent on a moral psychology or purification (tazkiyah). Hence, the Islamic tradition distinguishes between universal and partial intelligence, and a complete theory of human intelligence describes the unfolding of intelligence from potentiality to actuality. It explains the transformation of intelligence from its lowest degree to the highest through a universal agency such as the Active Intellect (the agent intellect responsible for actualizing the potential of the human intellect) and through the ethical and spiritual disciplines that shape the function of human intelligence. Human intelligence comprises reason, intuition, understanding, wisdom, moral conscience, and aesthetic judgment in addition to computation. In an AI-dominated world, however, “intelligence” means only the analytic function of computation. Hence, for the proponents of Dataism, there is no fundamental difference between natural intelligence and artificial intelligence—which is to say we are nothing but a computer and its algorithms! Here this paradigm, which refuses to step outside its functionalist, machine-oriented approach, reaches a dead end.

Science Fiction or Reality?

If the above reflections on consciousness and human intelligence hold any ground, then we need not fear a dystopian future in which machines supplant human beings as the most intelligent species on the planet. It also means that Dataism’s dream of achieving the Singularity and realizing a transhuman life by uploading the mind to a computer belongs more to science fiction than to reality. But this does not mean we should not worry about the corrosive effect of AI colonialism (defined in terms of control, domination, and manipulation) on human values. Increasingly, people define themselves, their lives, and their aspirations in terms of the achievements of machines, and they do not hesitate to downgrade their own intelligence vis-à-vis AI.

Moreover, the new ideology of Dataism leaves no room to explore and fulfill the grandest aspirations of humanity, such as truth, love, beauty, and meaning. Invoking a materialistic philosophy of science, Dataism reduces meaning to an emergent aspect of computation. For the proponents of Dataism, science tells us that reality at the smallest scale consists of elementary particles whose behavior is described by exact mathematico-physical models. At this elementary level the particles interact and exchange information, and these processes are essentially computational. At this most basic level of description, there is no room for a subjective notion of meaning. But in making all these claims, those providing this scientific description of reality conveniently forget that they themselves are conscious beings not reducible to any physical phenomenon. It is worth quoting in this connection the great physicist Erwin Schrödinger, who points to the lack of philosophical reflection among science-educated people:

It is certainly not in general the case that by acquiring a good all-round scientific education you so completely satisfy the innate longing for a religious or philosophical stabilization, in face of the vicissitudes of everyday life, as to feel quite happy without anything more. What does happen often is that science suffices to jeopardize popular religious convictions, but not to replace them by anything else. This produces the grotesque phenomenon of scientifically trained, highly competent minds with an unbelievably childlike—undeveloped or atrophied philosophical outlook.53

So, at heart, the problem of AI falls back on ideological interpretations of science and the scientific method. The authority of science is so pervasive in our culture that, since the Enlightenment, we have tended to define human identity and worth in terms of the values of science itself, as if it alone could tell us who we are. But defining the self in scientific terms alone obscures other forms of identity, such as one’s labor, social role, or moral and spiritual values. To be sure, we can be described on many levels, from the molecular to the psychological to the spiritual. Science allows us to see ourselves as complex natural, physical objects. But that is barely adequate, for we are subjects of our own experience, intention, thought, and judgment, not just objects. In a Google-dominated world, however, people are increasingly influenced by reductionist and machine-oriented views of self, consciousness, intelligence, and personhood, because the net, or AI, rarely provides nonmodern perspectives on these issues. Which is to say, the encroachment of AI colonialism on human values is very real. But AI colonialism is at work in other ways too. In The Age of Surveillance Capitalism, Harvard social psychologist and philosopher Shoshana Zuboff argues that we are moving into a new kind of economic order in which a handful of companies collect the big data that we generate and exploit it as raw material for making money in ways that are obscure to most people.54 Harari reaches a similar conclusion in 21 Lessons for the 21st Century, asserting that big data algorithms might create digital dictatorships in which all power is concentrated in the hands of a few, while most people suffer not only from exploitation but also from irrelevance.55 The point is that we must beware the Faustian bargain: attempting to achieve greatness at the expense of our own soul.
