The Schizophrenia of Strong AI

If a person went to a psychiatrist and said “I think I am a machine,” the psychiatrist would be quite right to think he had his work cut out for him. The belief resembles those of the brain-damaged patients described by Oliver Sacks in books like The Man Who Mistook His Wife for a Hat. One man thinks he is a machine; another mistakes his wife for a hat.

Proponents of Strong AI, or artificial general intelligence, regard people as machines and oscillate between extreme self-hatred and god fantasies. This cries out for a diagnosis as much as an explanation. In many ways, it turns out, this is just a particular variant of an omnipresent modern tendency.

Eric Voegelin makes much of Plato’s notion of the metaxy – man as the in-between, neither beast nor god; a finite being confronted by intuitions of the infinite, neither omniscient nor completely oblivious. The metaxy can only exist if something is in fact recognized as transcending Man.

In a similar fashion, Nikolai Berdyaev comments that without the idea of God there can be no idea of Man. The sense of the metaxy is lost, and man can no longer locate his existential situation.

Strong artificial intelligence is the ambition of creating a computer that can think at a human level or beyond – often including the possession of imagination and consciousness. Narrow AI, which is what exists right now, consists of algorithms – step-by-step instructions – for one particular application within a strictly rule-bound environment. The difference is comparable to that between an idiot savant who can do just one or two things very well and a normally functioning person capable of intelligent responses to unpredictable events requiring creativity, initiative, and the ability to improvise.

One tendency among proponents of Strong AI is to talk human intelligence down to the level of algorithms. However, Kurt Gödel and Alan Turing proved that not all human thought can be captured by algorithms – in Gödel’s case, not even all truths of ordinary arithmetic. Mathematical truth must, at times, be perceived outside of axiomatic, and therefore algorithmic, proof.

The intent of Strong AI is to reduce the imagined distance between human minds and computers, to make it seem more plausible that the gap can be bridged. While Strong AI cannot plausibly claim that general intelligence has been achieved, some proponents seem to think it will emerge from sufficiently complex mindless algorithms.[1] A computer can beat the world champion Go player partly because games like Go are strictly rule-bound and contained. The computer played itself at Go millions of times and was essentially just optimizing an equation. Of course, the computer does not even know that it is playing Go. The tendency to define mind on behavioristic or operational grounds – as can be seen in the Turing test too – is essential here, even though behaviorism was tried as a complete theory of mind (actually anti-mind) by psychologists for fifty years and was finally rejected as completely inadequate. Cognitive behaviorism survives today, but usually in a limited, purely therapeutic role.
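The claim that such a system is “essentially just optimizing an equation” can be made concrete with a toy sketch. This is not AlphaGo’s actual method, only the same idea in miniature, applied to Nim (take 1–3 stones; whoever takes the last stone wins) – a strictly rule-bound, contained game. Every name here is illustrative; the program improves purely by win-rate bookkeeping and at no point “knows” it is playing anything.

```python
import random

random.seed(0)
N = 10                # starting stones; taking the last stone wins
wins, plays = {}, {}  # running tallies per (stones, action) pair

def choose(stones, explore):
    """Pick 1-3 stones: mostly greedy on observed win rate, sometimes random."""
    actions = [a for a in (1, 2, 3) if a <= stones]
    if explore and random.random() < 0.2:
        return random.choice(actions)
    return max(actions,
               key=lambda a: wins.get((stones, a), 0) / plays.get((stones, a), 1))

def play_game():
    """One self-play game; afterwards, credit every move made by the winner."""
    stones, history, player = N, [], 0
    while stones > 0:
        a = choose(stones, explore=True)
        history.append((player, stones, a))
        stones -= a
        player ^= 1
    winner = player ^ 1  # the player who just took the last stone
    for p, s, a in history:
        plays[(s, a)] = plays.get((s, a), 0) + 1
        wins[(s, a)] = wins.get((s, a), 0) + (p == winner)

for _ in range(50_000):
    play_game()

print(choose(N, explore=False))  # the move the statistics now favor from 10 stones
```

After enough games the tallies happen to encode good play (in this form of Nim, the winning strategy is to leave a multiple of four stones), but nothing in the program represents stones, turns, or winning; it is statistics over a lookup table – which is the point.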

David Deutsch and a tender youth (uppity pipsqueak?) named Sam Ginn[2] know that artificial general intelligence is not going to be just more, and faster, narrow AI. Deutsch is a professor of physics at Oxford University and thinks what is needed is a better epistemological theory to tell us how a brain turns something into knowledge, while Ginn is a sophomore computer science major at Stanford. Ginn seems a step above some of his peers because he recognizes that consciousness is necessary for responding flexibly to truly unpredictable environments. It is only possible to write an algorithm for a problem that has been anticipated; thus there can be no algorithm for the genuinely novel.

Inserted in Ginn’s discussion is a reference to David Chalmers, the Australian philosopher who has managed to convince materialists that consciousness is “the hard problem”: how physical brains can produce human subjectivity and self-awareness. But Chalmers also thinks a human being could exist who said and did everything a human does while not being conscious – the so-called zombie thought experiment. That is precisely what is not possible. Only conscious beings have ever been seen to respond intelligently and flexibly to the unpredictable. Take consciousness away and human behavior is unreplicable in principle.

The burden of proof that consciousness is redundant rests squarely on Chalmers. We know that conscious human beings can do what conscious human beings can do. We have no evidence that non-conscious human beings could do everything a conscious human being can do; in fact, this is not remotely plausible. Until Chalmers provides good evidence for his assertion, there is no reason to take the idea seriously. So we have the perverse fact that Chalmers has convinced many materialists to worry about how consciousness and brains might be related while, in the very same breath, treating consciousness as functionally unnecessary. The problem is identified and dispensed with at once.

Ginn imagines that he already possesses all the skills and knowledge to produce genuine sentience. This is strange since he thinks algorithms are not going to be sufficient to produce consciousness and that is all computer programmers are in fact trained to write. He asserts that what is needed is a theoretical breakthrough concerning consciousness and that once achieved, with no further advance in ability, he will be capable of creating his homunculus.

Ginn draws, somewhat incoherently, on Martin Heidegger. A computer that had achieved self-awareness would be a computer that is in the world concernfully. Dasein, Heidegger’s term for Man, cares how things are going for it. There is definitely something it is like to be Dasein – being there; being-in-the-world with a past; concernfully choosing among possibilities that extend into the future.


Achieving this feat would make of Sam Ginn a god. Creating a being who was in the world concernfully would far surpass the behavioristic limitations of the Turing test. The test, suggested by Alan Turing, aims to see whether humans can distinguish the typed “speech” of a human from that of a computer; there is no requirement that the computer be self-aware, as being-in-the-world concernfully would require.

In The Mechanization of Mind, Jean-Pierre Dupuy comments that for an engineer, truly understanding something means being able to model it. Modeling subjectivity and being conscious are very much the same thing. Ginn rejects the idea that an entirely new science would be necessary to achieve this. He thinks he can, in principle, do this right now – in fact, anyone with a working knowledge of computer programming can.

Producing consciousness the old-fashioned way (sexual intercourse) does nothing to diminish the mystery of consciousness. Likewise, if consciousness could be produced by following an algorithm, it would be possible to do so without understanding what was being done – just as a student can solve math problems by algorithm without comprehending them either.
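That point can be made literal with a sketch (my illustration, not the author’s): grade-school long multiplication written out as pure symbol manipulation. Each step is a table lookup, a carry, or a shift; the procedure yields correct products while containing nothing that could be called comprehension of multiplication.

```python
def long_multiply(x: str, y: str) -> str:
    """Multiply two non-negative integers given as digit strings,
    blindly following the grade-school rules: multiply digit pairs,
    carry the tens, shift by position, then read off the result."""
    result = [0] * (len(x) + len(y))          # one cell per output digit
    for i, a in enumerate(reversed(x)):       # least significant digit first
        for j, b in enumerate(reversed(y)):
            result[i + j] += int(a) * int(b)  # digit-pair product
            result[i + j + 1] += result[i + j] // 10  # carry the tens
            result[i + j] %= 10               # keep a single digit here
    digits = "".join(map(str, reversed(result))).lstrip("0")
    return digits or "0"

print(long_multiply("1234", "5678"))  # 7006652
```

A pupil – or a processor – can execute these rules flawlessly with no grasp of what multiplication is, which is exactly the gap the essay insists on.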

A question would be whether the original writer of the program that achieved consciousness would know precisely what was happening. If the programmer knew exactly what he was doing and why everything he did worked the way it did, the mystery of existence would be significantly diminished. It would seem a kind of god-like semi-omniscience. He would be up there and we down here.

The perennial philosophical questions would remain unanswered. Here we are. Now what? How should we live? But something profound would seem to have changed with awareness and consciousness in a test tube created at will.

Such delusions and hubris seem to be the result of the Enlightenment rejection of God. Here, at this point in human history, is the emotional and intellectual origin of Strong AI; a self-contradictory attitude of self-contempt and the most grandiose self-elevation.

The self-contempt is the notion that we are nothing but machines. We are not the product of any kind of divinity. We do not need God to provide goodness, beauty and truth. We have no soul. There is no afterlife. Philosophical questions that seem to face all human beings are generally the product of loose language and sloppy thinking. We are naked apes. We are animals. We are essentially irrelevant specks on a chunk of rock near a noticeably ordinary star. We are bits of nothing caught in a deterministic hell of unfreedom.

This suicide-inducing vision is intensely nihilistic. The ambiguity continues here with some agreeing that nihilism is truth, while others incorrectly insist that the death of God does not in fact entail nihilism – and they try to produce such things as morality out of biology.

Here a second crucial idea of Berdyaev’s is pertinent. God is the only ideal that does not tend to reduce man to a means. If men are made in the image of God, then they participate in divinity in some way, have an immortal soul, and are not to be turned into expendable tools. If, however, happiness or equality is the highest ideal, then killing a few million in the interests of achieving it becomes entirely plausible. Here the self-contempt half of the swing in self-opinion enters.

Grandiosity enters the picture as apotheosis – man elevated to the divine. With existentialism, man can create meaning: god-like, his life means whatever he chooses it to mean. But mostly religion’s replacement is Humanism, the belief in the perfectibility of man. It is essentially human-worship. If God can be thought of as the highest good that can possibly be imagined, humanism puts ordinary humans in this place of veneration.

Self-hatred combines with the false promise of human divinity: self-sufficient, original and autonomous. René Girard illuminates this dynamic in Deceit, Desire and the Novel. Following philosophers like Feuerbach, man tries to wrest divinity from an imaginary God upon Whom all man’s good qualities have been projected; man must take back these projections and possess them as his own.

This is not the death of God. It is Man as God. But no one can live up to this false promise. We are all weak, needy, dependent and mimetic. We need each other and rather than enjoying our omniscience and omnipotence, we must grit our teeth to bear the burden of our limited existence. To exist is to be limited. To be limited is to experience frustration at those limits and to be prone to resentment; especially when all other people are imagined to have acquired the divine inheritance of self-sufficiency, autonomy and originality.

The twentieth century saw man as God in the form of Mao Zedong and Stalin and the results were tragic.

Computer scientists cum philosophers act out this schizophrenic attitude of self-hatred and self-admiration that was very much the product of the Enlightenment – the Dark Age for the soul. What is so bad about being a machine? How about the fact that machines are not even alive, or conscious, and are mere rule-following automatons incapable of love and imagination or of appreciating beauty? Such self-contempt is just the other side of the coin from the imaginary ability to program consciousness.

The death of God produces these wild fluctuations; man loses touch with reality, alternating between believing he is nothing to imagining that he is God. His failure to achieve the divine inheritance exacerbates his self-loathing.

Getting rid of God conceptually does not rid man of the desire for transcendence and the divine. That desire just gets perverted onto inappropriate objects of worship – money, things, other people – as St. Augustine claimed.

The Great Chain of Being had man somewhere in the middle of things; God and angelic intelligences above him and animals and plants below him.

The self-loathers and self-aggrandizing are one and the same people. Many of them also enjoy calling human beings “apes.” This becomes just another rhetorical tactic for the self-loathers. This article makes a nice argument against the tendency.

Dogs are the only non-human creatures to understand what human pointing means. They also closely track our eye movements for clues to where hidden things might be and scan our faces to pick up emotional nuances in the same way that other humans do – hence the strong emotional connection people often experience with their dogs. These things make dogs our fellow travelers more than orangutans or chimpanzees, despite superficial morphological resemblances with the latter. Obviously, this is no argument for including us in the genus Canis; to homologize us with apes is equally nonsensical.


In a memorably amusing scene in Goethe’s Faust, Part 2, Act II, Wagner succeeds in making a homunculus, a little man living in a flask.

[Homunculus’s vial is] rising, flashing, piling up—
another moment and it’s done!
A grand design may seem insane at first;
but in the future chance will seem absurd,
and such a brain as this, intended for great thoughts,
will in its turn create a thinker too.[3]

Mephistopheles knocks on Wagner’s door, and Wagner announces that a man is in the making. Mephisto’s response?

“A man? And pray what couple tender
Have ye shut up in the chimney there?”

Wagner is supposed to be using alchemy to achieve his goal, which now seems quaint. Predictions of an imminent singularity, endlessly deferred, will perhaps seem likewise before very long.

[1] It is not necessary to know what you are doing in order to follow an algorithm, and computers, of course, do not know what they are doing.

[2] https://entitledopinions.stanford.edu/sam-ginn-singularity

[3] https://www.litcharts.com/lit/faust/symbols/faust-s-study-and-wagner-s-laboratory

26 thoughts on “The Schizophrenia of Strong AI”


  2. René Guénon in The Crisis of the Modern World (1927) devotes a chapter to “A Material Civilization.” In that chapter, he asserts that what he calls “materialization” belongs to the essence (or maybe anti-essence) of modernity. Materialization denotes the tendencies to claim that the universe consists only of matter and space and so to deny spirit and God; and to become exclusively interested in – or obsessed by – things material. Berdyaev, Evola, even Heidegger insofar as his work is a critique of modernity write in much the same critical vein. Materialization lends its darkness to the modern scene. The trends and themes in Strong AI that you identify strike me as participating in the Guénonian materialization. The much-awaited singularity of the Strong AI cult expresses the perverse and nihilistic wish to become free of the body, but by the bizarre method of “downloading” one’s personality into a machine, so as presumably “to live” forever. The machine, however, is not alive, as the body, which in the Tradition maintains a link with the soul, is. More than schizophrenic – although certainly it is schizophrenic – Strong AI has the morbid character of death-wish.

    • I stole this from elsewhere on the Internet:

      A guy walks into a bar, and is greeted by a robot.

      The robot says, “What’s your drink?” The man replies, “Whisky.” The robot then asks, “What’s your IQ?” The man says 150. The robot then pours his whisky and proceeds to talk to the man about the space-time continuum, time travel, and the multiverse. The man finishes his drink and leaves the bar.

      As he was walking out, he thought, “I’m gonna try that again, see if I get a different response.” So he walks back in and the robot asks him again, “What’s your drink?” The man again says “Whisky.” The robot asks him for his IQ, and this time the man says 110. The robot pours his drink and begins to talk about NASCAR and normal-people talk. He finishes his whisky and exits the bar.

      He gets the idea to try it again. He walks back in, and again the robot asks him, “What’s your drink?” The man says “Whisky.” The robot asks, “What’s your IQ?” The man replies “50.” The robot pours his drink and says, “You still upset Hillary lost?”

    • @Tom B: That’s an interesting thought, that AI fantasists wish to substitute a dead body for a living one. The Greeks and others had the idea that the soul animates the body. Perhaps the AI cult is unknowingly tapping into such a sentiment.

      The other thought is that modern materialists completely reject vitalism and élan vital. So life isn’t really anything at all, they claim. Some contend that life doesn’t even exist and assert that every attempt to explain what is unique about life fails. Computer programs “reproduce” so that can’t be one of the unique characteristics of organisms, at least one philosopher I know says.

      I’ve also heard Sam Harris talking to another scientist on a podcast reminiscing about the old days when “life” was poorly understood, with some bizarre implication that “science” has “solved” it.

      Life – and certainly the emergence of life – is actually another mystery, like consciousness, and is therefore hard to define. It is yet another mystery that must simply be accepted in order to have a workable perspective on reality. Niels Bohr wrote: “The existence of life must be considered an elementary fact that cannot be explained but must be taken as a starting point in biology, in a similar way to the quantum of action.”


    • @Dave – Well, we have people who think they are machines so that would make us even!
      But seriously, my contention is that machines “thinking” they are people, or anything else, is not going to happen. We still have absolutely no idea at all how it is that humans are conscious so recreating it, other than through sexual intercourse or test tube babies, is an impossibility.

  4. Reminds me of something Sam Harris wrote:

    “Consciousness—the sheer fact that this universe is illuminated by sentience—is precisely what unconsciousness is not. And I believe that no description of unconscious complexity will fully account for it. It seems to me that just as “something” and “nothing,” however juxtaposed, can do no explanatory work, an analysis of purely physical processes will never yield a picture of consciousness. However, this is not to say that some other thesis about consciousness must be true. Consciousness may very well be the lawful product of unconscious information processing. But I don’t know what that sentence means—and I don’t think anyone else does either.”

    https://samharris.org/the-mystery-of-consciousness/

    • Thanks for reading, winstonscrooge. Sounds good to me and, at the same time, completely inconsistent with other thoughts that Sam Harris has expressed. For instance, he claims to know that determinism is true and physical determinism is premised on a thoroughgoing physicalism.

      • What a mess! Regarding thinking, at least, Harris offers just two possibilities – determinism or randomness – and neither permits free will, he says. If “consciousness comes into the picture” via a transcendent spiritual reality – from God – then that makes nonsense of all his anti-religious crusade and his materialism. If he is not a materialist there is no need to be a determinist. If he can accept the mystery of consciousness, he should be able to accept the mystery of free will and God.

      • I don’t know. Probably. The “mystery of consciousness” quotation from Harris that you posted seems amenable to a materialist or spiritual “solution.” Roger Penrose, who I like, imagines some future science explaining consciousness. To my mind that would mean any such future science would end up being “spiritual” – whatever that would mean or look like. Reality would need to be radically reconceived in some way.

        William James wrote that psychology could either expand the idea of science to accommodate consciousness or keep science narrowly conceived and find itself unable to cope with consciousness at all, focusing mostly on physiology. Unfortunately, psychologists chose the second route for the most part such that Psych 101 ends up being mostly a physiology course.

      • @winstonscrooge: I think that’s right – that consciousness precedes materialism. I believe the brain is a material interface permitting creatures with an immaterial soul to move about in and affect the material world. Examining the brain for the origins of consciousness is like examining your TV to find out how it is generating TV programs. I’ll take a look at your article.

  5. I think we already have produced morality out of biology. It’s called Nietzscheism. The philosophers of today put biology into the alembic hoping for different results. But it’s pretty obvious how biology defines “good,” “beautiful,” and “true.” Of course Nietzsche tells us that accepting these definitions is the very opposite of nihilism, but I think he is begging the question.

    You’re quite right about the schizophrenia of a philosophy that simultaneously degrades and exalts man. Sometimes I think this is just the contradictory tendencies of materialist science (degrading) and democratic politics (exalting).

    I don’t know anything about the philosophical implications of AI, but I do sense more than a little misanthropy in AI enthusiasts. Like the early communists, they seem to loathe man as he actually is, and love man as they imagine he might be. The unwillingness (inability?) of man to be other than he actually is makes them loathe him all the more. And you know very well where that leads.

    • For all of his talk about a ‘will to power,’ Nietzsche and his followers like Jack London were focused on biology first, and considered power as an adjunct to that. They didn’t treat power as an end in itself, but as a tool for eliminating biological competition.

      If you look at white trash, they are great at reproducing themselves, which is the Nietzschean ideal. But now they are being exterminated through opioids and a dozen other tools. So clearly biological morality has failed.

      But Christianity demands ‘worthiness’ in a way that Nietzsche does not. Worthy people are temperate, so they cannot succumb to the hedonism that is used against white trash. They improve themselves instead of cutting others down, which allows them to move up the hierarchy without destroying it, as Nietzsche would have them do.

      Nietzscheism is appropriate for our democratic politics, where every group is competing to loot society. But a restoration will require a morality of worthiness, one where you don’t desire to use power to crush your enemies, but to use it to maintain and improve your own position and role in society.

      • “Full is the earth of the superfluous; marred is life by the many-too-many. May they be decoyed out of this life by ‘life eternal’ . . . . Many too many are born: for the superfluous ones was the state devised.” Thus Spake Zarathustra

      • London does celebrate the “hard man” in places, but then he’s very sentimental about the “working man” in others. I never read Sea Wolf, but read The Road many times as a boy. I also went through a Klondike phase, thanks to Jack London.

  6. @JMSmith – Yes. The AI cult merely exemplifies a ubiquitous modern tendency. All lose their moorings without God. From my POV, democratic politics is also degrading. Bizarrely, it goes from a method of political selection and organization to some kind of ideal – the cult of “equality” seen in “social” justice. Leveling and tearing down, no higher and lower, seems to be the logic of democracy once it is extended outside the narrowly political.

    Nietzsche has his own man-god and contempt for regular man and so is fully immersed in the contempt/exaltation oscillation.

    All these utopians, communists, AI cultists hate reality and wish for their second reality. They have an intuition of self-overcoming which is correct, but seek to achieve it through social engineering or computer engineering as though it can be imposed from the outside or generated out of silicon.

  7. As a CS student myself, I have seen a professor who felt compelled to introduce an intermediate ‘semantic space’ for a task as simple as image recognition. That strikes me as begging the question in a distinctively CS/Statistical manner, since this ‘semantic space’ is code for how a human would identify an object.

    At this point, he’s relying on the fact that computers are good guessers with a large enough data sample. He can’t set an objective definition of an ‘object’ without using human input.

    To me, this suggests that consciousness is based on an individual’s perception with respect to their goals. Computers don’t have this, so they need to get it from humans.

    Further, it seems that computers lack a fundamental understanding of desire and suffering. The last man to ever defeat a computer at chess did it by making the computer suffer through an abstract and pointless position. The computer didn’t understand that playing for a long time is miserable, and not how you want to play the game. So it lost, because it didn’t have a goal.
