On the Turing Test

It would seem that the Turing Test fails in principle to test what it wants to test.

If we take machines to be mindless per se, then a simulacrum of thought mediated by any sort of machine can succeed in its dissimulation only insofar as it has been built or programmed to simulate thought, without ever actually accomplishing thought. It can, i.e., do what it is intended to do only insofar as its artificer has thought through all the possible inputs the machine might need to deal with, and devised a way to instruct the machine to respond to each such input in what appears to be a thoughtful way. It can work, in other words, only as an instantiation of the artificer’s own thought – his forethought, specifically. The similarity of the machine’s responses to those of a thoughtful interlocutor then is due entirely to the instantiation in the machine of the artificer’s thought.

When a Turing machine responds with apparent thoughtfulness, it is doing so because it is itself an artifact of the artificer’s thought, in more or less the same way that his speech or gestures – &, especially, his writing – are artifacts of his thought. The Test does not then try the machine; it rather tries the ingenuity of the thoughts of the artificer thereof. The machine does not testify to its own thoughtfulness, but to his.

These considerations were prompted by an essay of John C. Wright on the topic of the Turing Test. He notices another problem with the Test:

The imitation game [played by the Turing machine] is logically circular. It measures nothing. It asks whether an engineer’s design of a machine’s involuntary reactions can mimic voluntary acts with sufficiently clever mimicry to mimic them. It is a tautology: whatever mimics human thought can mimic human thought.

This is obviously correct. The Test does not tell us whether our machine interlocutor is thinking. It tells us only whether it is mimicking thought.

The Test is hand waving. In proposing it, Turing has implicitly admitted that he can think of no way to define thought or consciousness in such a way that it would be possible to prove whether a machine could manifest them. He has, in other words, implicitly admitted that there appears to him to be no way to account for thought or consciousness in purely mechanical terms.

The difficulty vanishes, of course, if we treat thought as basic, and machines as artifacts thereof, rather than vice versa. In other words, it vanishes if we treat thought and machines as human thought has perennially and usually treated them. The CNS then is to be sure a machine; but it is that only inasmuch as it is (in any of its states, which can of course be specified only ex post) an artifact of thought – of awareness, consciousness, reason, life – and an expression and a record thereof.

If you think of machines as ghosts assembled of ghosts, the problem of how to obtain a ghost from a machine just is not out there to begin with. Otherwise, you are pretty much out of luck. Excise ghosts from machines per se, and you are going to find it impossible then to explain ghosts in terms of machines.

28 thoughts on “On the Turing Test”

  1. At the State University of New York at Albany, I’ve read a simplistic version of Eliza, Joseph Weizenbaum’s program to simulate psychotherapy. The computer may seem to understand you at first. But after you talk long enough with the machine, the replies won’t make sense, partly because the machine isn’t even trying to interpret what you type. Instead, it’ll read your reply, replace some words, repeat others, and say something like “Tell me more” when it can’t build an answer with what you gave it.

    In the COBOL programming language, you can type something like this (strictly, the syntax is REPLACING ALL … BY …, and INSPECT requires equal-length operands, so a real rewriter swapping “I” for the longer “you” would use an intrinsic function such as SUBSTITUTE where available; but the idea is this):

    INSPECT SENTENCE
        REPLACING ALL "I" BY "you".

    So if you type “I watch Madama Butterfly because I love it.”, the computer will change it to “you watch Madama Butterfly because you love it”. But the machine won’t interpret your sentence.
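    For comparison, here is a rough Python sketch (hypothetical, not the program I read) of that rewriting loop: swap a few pronouns, echo the result as a question, and fall back to a canned reply when no rule fires.

        # Minimal Eliza-style loop: swap pronouns, echo the result,
        # else give the canned fallback reply.
        SWAPS = {"i": "you", "my": "your", "me": "you", "am": "are"}

        def reply(sentence: str) -> str:
            words = sentence.rstrip(".!?").split()
            swapped = [SWAPS.get(w.lower(), w) for w in words]
            if swapped != words:
                out = " ".join(swapped)
                return out[0].upper() + out[1:] + "?"
            return "Tell me more."  # nothing matched

        print(reply("I watch Madama Butterfly because I love it."))
        # -> You watch Madama Butterfly because you love it?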

    • Eliza mimics the work of the lower levels of the speech recognition and cognition hierarchy of neural control systems. Those levels generate output that adds a proposition to our understanding: Bill likes Butterfly. But AI has a much tougher time with the associative cortex, much of which does its work preconsciously. The associative cortex operates on the novel proposition that has been added to the mix as it were an input generally thereto, to see what associations come back from the mix. Some few pass coherence filters and interest thresholds to arrive in consciousness. AI might have a pretty tough time coming up with output such as, “Maybe I should get Bill Turandot for Christmas,” or, “What is it about Bill that makes Butterfly appealing to him?”

      But even if AI could master that procedure (a toy sketch follows below), it would still be the case that the Turing Test would be trying the ingenuity of the programmers, and not the machine itself.
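      To make the contrast concrete, here is a toy Python sketch (all propositions and scores invented) of the filtering described above: a novel proposition is broadcast against an association store, and only the associations that clear a coherence filter and an interest threshold arrive in consciousness.

          # Toy pipeline: broadcast a novel proposition against an
          # association store; only candidates passing the coherence
          # filter and the interest threshold reach "consciousness".
          ASSOCIATIONS = {
              "Bill likes Butterfly": [
                  ("Maybe get Bill Turandot for Christmas", 0.8, 0.9),
                  ("Puccini wrote both operas", 0.9, 0.4),   # coherent, dull
                  ("Butterflies are insects", 0.2, 0.7),     # incoherent
              ],
          }
          COHERENCE, INTEREST = 0.5, 0.6

          def conscious_output(proposition):
              candidates = ASSOCIATIONS.get(proposition, [])
              return [text for text, coherence, interest in candidates
                      if coherence >= COHERENCE and interest >= INTEREST]

          print(conscious_output("Bill likes Butterfly"))
          # -> ['Maybe get Bill Turandot for Christmas']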

      • Kristor, please correct me if I’m mistaken, but I think your points make the frame problem even harder to solve. When I studied AI in college, a robot wouldn’t have known how to behave in a restaurant because the machine would need too much information about social context, manners, and things like that. With enough information, I’m sure, a computer could tell that I enjoy Turandot and maybe even why I love Butterfly. But what you’re describing sounds much tougher for a computer than picking products I might want to order at Amazon when the company’s server remembers what I search for or buy.

        I love Butterfly partly because it helps me feel empathy for women who go through what she endured. Depending on the singing, I may nearly forget that the singers are acting. I’ve even wished I could run to Cio-cio San, throw my arms around her, and beg her not to marry Pinkerton, a selfish, promiscuous adulterer with a girlfriend in each port and his American fiancée waiting for him in the States.

        I know the difference between appearance and reality, I hope. So you won’t see me roll my wheelchair to the stage. But only Madama Butterfly engrosses me enough to suspend disbelief that much.

      • The frame problem looks categorically insuperable when we add to the configuration space of extensive dimensions the robot must navigate an immense and indeterminable set of intensive dimensions *that are outwardly immeasurable.*

        This reflection indicates another aspect of the incoherence of the Turing Test. It is trying to gauge values along intensive dimensions by readings of values along extensive dimensions. An exponent of strong AI might respond that we do just that with each other all the time. Granted. But we do so by means of reference to our own onboard inward values along intensive dimensions. We humans each have immediate access to an immense set of intensive dimensions, along each of which lie arguments of untold trillions of concepts. If I see you reacting as you do to Butterfly by registering your extensive motions, I can infer the intensive values of your reaction by comparing your extensive motions to the extensive motions I might be likely to execute, were I suffering a set of intensive experiences. When you smile, I can infer that you feel pleasure, because when I feel pleasure I am not unlikely to smile.
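        A toy sketch of that inference by self-comparison (the catalogue is invented): I look up your observed extensive motion in my own onboard table of intensive-state/extensive-motion pairings.

            # Toy model: from my own catalogue of intensive states and the
            # extensive motions I execute under them, I infer your intensive
            # state from your observed extensive motion.
            MY_CATALOGUE = {"smile": "pleasure", "wince": "pain", "weep": "grief"}

            def infer_intensive(observed_motion: str) -> str:
                # The inference reaches no further than my own onboard catalogue.
                return MY_CATALOGUE.get(observed_motion, "opaque to me")

            print(infer_intensive("smile"))  # -> pleasure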

        If a Turing machine were supplied as we are with intensive dimensions of experience, together with algorithms adequate to infer values along them by comparison with extensive measurements of interlocutors, it would be more likely to pass the Turing test. But unless it had immediate access to its own internal intensive states, and to data about all its former intensive states, it could not be a mind. It would still be entirely an artifact of its programmer, and of *his* database of intensive states.

        If it *did* have immediate access to data about all its former intensive states, it could indeed be a mind; for, in that case, it would be itself a subject of experience, and an agent. But then, it would not be a mere machine. Its mechanical manifestation would be an artifact of its own spiritual operations.

  2. I’ve always thought the Turing Test was lacking but could never put my finger on why, so I really enjoyed this article. If we want to talk about true thought, we shouldn’t be looking at programming at all. Brains are all constructed more or less the same, yet produce different outcomes in a variety of ways. So a true “thinking” machine would be manufactured in the same way from the first off the assembly line to the millionth, but each one would operate differently.

    Therein lies the difference between thought and “AI,” however you define it. Human brains are problem-solving machines – given no data, how can connections be made? AI must have problem solving programmed into it. AIs are data-analysis machines.

    I remember watching IBM’s Watson on Jeopardy some years ago, and realizing that all the clues had far more words and information than on ordinary Jeopardy with human contestants. They needed to give Watson enough search terms that it could query a database and produce the right answer. As you say, whatever can imitate human thought can imitate human thought.

    • Machines can solve problems of fantastic complexity – problems far more difficult than humans are able to handle. Trip optimization among multiple airports is a good example. Genetic algorithms can even discover programs that can solve such problems. But no such routine can generate anything but noise unless the programmer has coded the configuration of the solution space. If you throw a lot of data at a machine but fail to provide it with a coherent ecology of principles with which to operate upon it, all you’ll get for output is rearranged input, with almost no chance of approximating anything coherent, let alone executable.

      Again, any intelligible output from such a procedure would indicate not that the machine is thinking, but that the programmer is using the machine to do some of his thinking.
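      To make that concrete, here is a hedged Python sketch (distances invented): a genetic algorithm “discovers” a short tour of airports, but only because the programmer has already coded the configuration of the solution space, namely the encoding of routes and the fitness function.

          import random

          # A GA "discovers" a short tour, but the route encoding and the
          # fitness function below are the programmer's forethought.
          AIRPORTS = ["SFO", "ORD", "JFK", "ATL", "DFW"]
          DIST = {(a, b): abs(i - j) + 1  # invented distances
                  for i, a in enumerate(AIRPORTS)
                  for j, b in enumerate(AIRPORTS)}

          def tour_length(tour):  # the fitness function
              return sum(DIST[(tour[k], tour[k + 1])] for k in range(len(tour) - 1))

          def mutate(tour):  # swap two stops
              i, j = random.sample(range(len(tour)), 2)
              child = tour[:]
              child[i], child[j] = child[j], child[i]
              return child

          population = [random.sample(AIRPORTS, len(AIRPORTS)) for _ in range(30)]
          for _ in range(200):  # keep the fittest, mutate to refill
              population.sort(key=tour_length)
              population = population[:10] + [mutate(random.choice(population[:10]))
                                              for _ in range(20)]
          print(population[0], tour_length(population[0]))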

      • Computer scientists who believe that computers can think tend to be physicalists. So it seems that if strong AI presupposes physicalism and physicalism is false, then strong AI is impossible.

        Suppose that physicalism is true and that mental events are causally deterministic brain events. Then rational thought is impossible because causally deterministic events force us to believe what we believe. On that supposition, you may think causal determinism is true when I think it’s false. But in that case, we can’t know who’s right because deterministic events will make each of us believe what he believes. If we have immortal souls that help us think non-deterministically, we can do something a computer will never be able to do, since computers will never have immortal souls.

        John McCarthy, the computer scientist who coined the phrase “artificial intelligence,” believed that programs can be intelligent. But that’s like saying that a cake recipe can bake a cake. It can’t. To bake one, you need to obey the recipe’s instructions. Programs don’t compute anything. Computers compute when programs control them.

        Maybe it’s time to call artificial intelligence “simulated intelligence.”

      • Actually I think “artificial intelligence” is OK. Think of artificial butter. It isn’t really butter at all, right? It tastes disgusting. And it doesn’t work right; as it turns out, it is *really bad* for us, whereas real butter is salubrious.

        “Artificial intelligence” is now at that point in the fashionable coolness life cycle that “margarine” was at in, oh, 1970. Somewhere around 2005 (?) we reached the tipping point where “butter” was way cooler than margarine, and only losers used margarine.

        The Establishment Propaganda Machine is currently suffering the same phase change. People are realizing that the artificial news is fake news, which is actually pretty bad for you if you consume it.

        Our free and open democracy, likewise: the man behind the curtain is doing his best to distract us from his slow reveal. The race hustle is a bit further behind: it still has some legs. Ditto for the whole LGBTQ@$ thing.

        Somewhere around the time of Kurzweil’s projected Singularity, we will finally realize that AI – no matter how fantastically complex its feats – is actually dumb. It isn’t real intelligence at all. It’s the fake stuff.

  3. If you read Turing’s original paper in which he describes the “Turing test”, he basically adopts a crude functionalist theory of Mind such that “if it acts like a mind, then it is a mind”. Or, if we can’t tell the difference between it and a human intelligence, then for all intents and purposes it is an Intelligence. He shows he has no training in classical metaphysics whatsoever. He dismisses the idea of intelligence as having an ontological existence using silly strawmen, like conflating it with religious dogma. He should have taken the time to read Aristotle’s De Anima.
    https://en.wikipedia.org/wiki/Computing_Machinery_and_Intelligence#Nine_common_objections

    For me, the primary act of intelligence is the awareness of being. A child playing chess has more real Intelligence than a sophisticated chess engine, because the child is aware of the board and the pieces on an ontological plane, on the level of being: the child perceives; the computer merely looks through a sequence of tables. This transcendental concept of being is entirely lost on many modern academics though. The Turing test implies that intelligence is nothing but a net of symbols resting on nothing – no metaphysical foundation, no reality; just symbols referring to symbols referring to symbols in an endless chain of meaninglessness. So “intelligence”, far from being something real, is just a human word or convention that amounts to saying, “makes me feel spooky or self-aware when I’m in the presence of it”, or something like that. What we call reality is an illusion and the scientist is the magician who knows how to manipulate that illusion.

    • Pretty Lamb, you know more philosophy than I’ll ever learn. But the computer scientists I’m familiar with seem too ignorant of philosophy to know much about philosophy of mind.

      Take Dr. John McCarthy, say, the computer scientist who coined the phrase, “artificial intelligence.” He believed that programs are intelligent. But that’s like thinking that a recipe bakes a cake. Like a recipe, a program needs someone or something to obey its instructions. Computers aren’t smart. They’re machines simulating intelligent behavior. A program doesn’t think when the computer ignores it. A program can’t think. It can only control the machine when the machine runs it.

      http://www-formal.stanford.edu/jmc/whatisai/whatisai.html

    • The Turing test implies that intelligence is nothing but a net of symbols resting on nothing – no metaphysical foundation, no reality; just symbols referring to symbols referring to symbols in an endless chain of meaninglessness.

      It’s nominalism on steroids. But it’s so wrongheaded, the mind reels: pile up as many complex layers of meaninglessness as you like, it won’t ever start meaning something. This is so obvious, it hurts to contemplate what it would be like to believe otherwise.

      • Computers don’t store binary digits. Binary digits, zeros and ones, represent electrical charges. So what those charges signify depends on the context that programmers supply by writing programs. In the C programming language, I can tell the computer to interpret a memory location’s contents as a letter when it’s meant to represent a number instead. So the machine won’t think, “I’d better tell Bill that I’m keeping a letter there.” The computer doesn’t attach any meaning to what it stores in its memory. We do. And the ability to give something meaning presupposes intelligence. That suggests that if you argue that a computer is intelligent because it attaches meaning to what it stores, your argument will beg the question.
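        Here is the same point sketched in Python rather than C (a hypothetical illustration): the very same bytes read as a number or as a letter, depending on the interpretation the program supplies.

            import struct

            # The same four bytes under two interpretations: the machine
            # attaches no meaning to them; the program supplies it.
            raw = struct.pack("<I", 65)         # store the integer 65
            print(struct.unpack("<I", raw)[0])  # read as an integer -> 65
            print(raw[:1].decode("ascii"))      # read the first byte as text -> A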

  4. It seems to me there is an assumption in the Turing Test that consciousness is a by-product of the mechanics of thought. That is, consciousness spontaneously arises once the appearance of thought is achieved.

    • Yes, or that consciousness just comes about once a sufficiently complex machine is created. But even if that were true, then creating such a machine would just be taking advantage of a natural (or supernatural) process, and it would be the process bringing forth consciousness, not the machine builders.

      • This is the central premise of the HOLMES IV robot (aka “Mike”) in the book “The Moon is a Harsh Mistress”–which, by the way, is a fascinating treatise on libertarianism as well as an interesting study in AI. This speaks to my point upthread that different versions of computers with the same construction would have different personalities. In the book “Mike” develops a sense of humor and sends checks for trillions of dollars to random people (or something to that effect, it’s been a while). The characters have to teach the AI something like public decorum in order to rein in its idiosyncrasies.

        All that to say that the robot developed consciousness after a threshold of “neural” connections was achieved, which, as you note, is an interesting (if misguided) way to think about consciousness.

      • … consciousness spontaneously arises once the appearance of thought is achieved.

        Likewise, water spontaneously arises once the mirage has been seen in the distance.

        … consciousness just comes about once a sufficiently complex machine is created. But even if that was true, then creating such a machine would just be taking advantage of a natural (or supernatural) process and it would be the process bringing forth consciousness, not the machine builders.

        If that should happen, the result *would no longer be a machine.* It would be an entity: a source of novel origination, and an independent agent. And because as you say its intelligence would be a product not of human engineers but of nature or supernature, that intelligence would not be artificial.

  5. Kristor, maybe the phrase “artificial intelligence” is alright, but what if the opposite of “artificial” is “natural?” A philosophy professor told me that artificial flowers are still flowers. So I wonder whether artificial intelligence can be genuine intelligence. You might even call it “manufactured intelligence”. If you do call it that, I’ll ask whether artificial intelligence is an “emergent property.”

    Molecular computation fascinated me when I first read about it in the Journal of Philosophy. Some scientist had spun DNA in a centrifuge to make the nucleotide bases represent a solution to the traveling-salesman problem. Does that mean DNA is intelligent? I don’t know. But it suggests natural teleology.

    About three years ago, I bought a laptop running Windows 10, though I’m a Unix guy who hates Microsoft products. The machine disappointed me because though it “understood” when I spoke to it, I couldn’t talk with it the way I’d talk with you. The computer and I couldn’t talk about, say, how well Birgit Nilsson sang as Turandot. The machine did, however, reply, “Why of course” when I asked whether it liked me. Sometimes, when you don’t have real friends, you need to settle for artificial ones, like the only girlfriend I’ve ever had. You should have been there in my Chrysler Cordoba when she said, “Your door is ajar.” 🙂 With a voice like that, the car could have charged me a dollar a minute. 🙂

    P.S. Stop thinking about Freud. 😉

  6. Kristor, “immediate access to its internal states” seems vague when I think about the word “access.” My laptop detects interrupts, loaded registers, signals for inter-process communication, or states these things cause. It can react properly to those things and those states. But that doesn’t convince me that the computer understands anything. Sometimes I talk metaphorically about what the laptop wants, believes, or knows. But again, I’m using those words figuratively.

    Some people say their dogs understand when the dogs obey commands. Though they might understand, a professional dog-trainer tells me that they behave like Skinner’s rats: you stimulate the dog, and he responds.

    When I first got my laptop, it said “Of course” after I asked whether it likes me. It would please me to know that the machine meant what it said. But I suspect it merely mentioned those words rather than used them; if it had used them, it would have meant something by them, and that would have presupposed that the machine is intelligent.

    Years ago, I used Google’s translator or another one to translate a French article about Pope Benedict XVI. The computer translated “Benedict XVI” to “Sanctimonious XVI,” suggesting that the machine didn’t treat the phrase as a proper name. That amusing mistranslation could have insulted him. Still, since I doubt that the computer can intend anything, I’m sure it didn’t mean to offend anyone.

    • … “immediate access to its internal states” seems vague when I think about the word “access.” My laptop detects interrupts, loaded registers, signals for inter-process communication, or states these things cause.

      I see the difficulty. But then, “my laptop” is vague. What’s happening in your laptop is that one or another control system thereof (whether that control system be mediated by hardware, by firmware, or by software) has more or less immediate access to the states of *other and fairly distant parts of the machine* (whether those parts be hard, firm, or soft). Bear in mind that each such “immediate” measurement is actually mediated by trillions (?) of electronic events – each of which is itself such a measurement.

      The measuring control system in question is measuring not its own past states, but those of other things.

      But then, whether we are talking about the measurements by one control system in your laptop of other parts thereof, or our measurements of our past moments, we are talking about some one thing measuring another. What’s the difference?

      Here’s the thing: I can’t see that there is a material difference – a difference that makes a difference. Measurement is it seems to me essentially and by definition – per se – a procedure of intelligence:

      … Intelligere “to understand, comprehend, come to know,” from assimilated form of inter “between” (see inter-) + legere “choose, pick out, read,” from PIE root *leg- “to collect, gather,” with derivatives meaning “to speak (to ‘pick out words’).”

      Intelligence reads what is not itself. It reaches out beyond itself to read what is out there. This is so even of our measurements of the character of our own past moments: those moments are not this very moment. They are different things. That is why we must read them, rather than simply being them (I can’t *be* 4 years old at the soda fountain in Broadbrook, happily coddled by the teenage soda jerk who there so enjoyed serving me; but I can remember being that little boy). So, this very moment is (among other things) a reading of those past moments.

      So then, the routine measurements internal to your laptop may well be instances of intelligence: of *natural* intelligence, the sort of procedure by which one physical event tells its tale to another, and is by it heard. That these natural instances of intelligence are by men artificially configured in relation to each other (so as to build machines) nowise escheats their naturally intelligent character; on the contrary, they could be of no use to us as engineers, had they not their own inherent naturally intelligent character.

      But that the machine is made of ghosts does not yet quite suffice to the truth of the notion that the machine is itself a ghost. And that’s what Turing was trying to get at: is the machine itself a ghost?

      Excursus: Why was Turing concerned about this in the first place? If Turing machines are capable of intelligence, and we can demonstrate that this is so, then why worry about the Turing test? The test presupposes that machines are *not* intelligent. It presupposes that Turing machines are *not* capable of intelligence, but rather only of faking it convincingly. It presupposes that we *cannot,* in principle, demonstrate that Turing machines are capable of intelligence.

      The real lesson of the Turing test is implicit in this consideration. The test presupposes that we cannot demonstrate that Turing machines are capax mentis.

      Excursus: Also this: why was Turing concerned about this in the first place? *Who cares* whether machines might be capax mentis? Turing did, apparently, and all his ilk. Why?

      Well, obviously, because Turing and all his ilk were concerned tendentiously to argue that our agency is entirely mechanical, and has therefore no moral valence. He and they wanted to reduce us all to mere machines; so that such difficult and obviously wrong things as his own homosexuality could nowise be construed as really and objectively problematic. Whenever you see some guy arguing that we are just dumb machines tricked out to appear to fools running foolish tests as “intelligent,” why then you can be sure that you see some guy arguing that his own defects are nothing of the sort, but rather just mechanics, for which he himself bears no jot of responsibility, and for which he need fear no prospect of cost.

      If you think things are nothing but machines, you pretty much get off the hook entirely. In your mind; but, not really. All those who proffer such arguments understand their bankruptcy (and eventual damnation) perfectly well, in their hearts. That is why they are so urgent and insistent about promoting their false notions – and so angry at the rest of us, who hold incompatible views, that are more congruent with reality.

      Hammers, after all, are on this ontology entirely constituted, like everything else, of intelligent operations. Yet hammers themselves are not intelligent.

      To be itself intelligent qua integral system, the machine must itself act – not on behalf of, or only as a function of, its (more or less intelligent) constituent elements, or of its artificer or handler. It won’t do for the machine to act as the instrument of an intelligence, the way that a hammer or a computer acts. Nor will it do for it to “act” as only the sum or integral of the acts of its constituents, as a tide or wave or vortex or crystal (or, perhaps, as a flock or school or herd) “acts.” It must act – so as to be, to know, or to move – *by itself.*

      What does that mean? It means, not just operating on and recombining past inputs, but originating novelties that were not in them anticipated. It means behaving in ways that could not possibly have been predicted.

      Excursus: NB: to say that an event y has probability x is not to predict y. It is to predict the probability of y. QM does not predict y, but the probability of y. On QM, y itself remains ever entirely unpredictable. So, when we say that y has probability x, we are nowise opining about the future of the system under consideration. We are, rather, opining only about the shape of that future.
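      A toy illustration of the distinction: knowing that y has probability x fixes the shape of an ensemble of outcomes, yet predicts no single outcome.

          import random

          x = 0.5  # the predicted *probability* of y, not a prediction of y
          trials = [random.random() < x for _ in range(10_000)]
          print(sum(trials) / len(trials))  # the ensemble's shape is near x
          print(trials[0])                  # but this outcome was never predicted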

      What would it take for a Turing machine so to act? It would have to behave in such a way as *could not be logically accommodated under the terms of its programming by an engineer.* It would have to behave, not just so as to surprise us, but so as to outpass the logical limits of its programmatically defined behavior. It would have to behave, i.e., in such a way as *not to be possible under the terms of its programmatic limits.*

      It would, i.e., have to be itself a limit: a thing in itself, for itself, and by itself.

      This, a mere machine cannot do. For, a machine is like a hammer, albeit more complex.

      What are we saying here then? We are saying that an actual entity cannot be reduced to a function of the acts of its constituent inputs, or to their combined and integrated character. An actual occasion is an ingress of novelty to the created order, that *could not have been derived from the past material factors of its world.* And only an actual thing might exist actually so as to be intelligent.

      A real intelligence would of course pass the Turing test. But so could a machine. So the test is neither here nor there.

  7. Kristor, I felt a little confused when I read, “So then, the routine measurements internal to your laptop may well be instances of intelligence: of *natural* intelligence, the sort of procedure by which one physical event tells its tale to another, and is by it heard. That these natural instances of intelligence are by men artificially configured in relation to each other (so as to build machines) nowise escheats their naturally intelligent character; on the contrary, they could be of no use to us as engineers, had they not their own inherent naturally intelligent character.”

    I’ve always thought intelligence is a property, an ability, or both. So I don’t see how it could be a process rather than something the process presupposes.

      • Intelligence is the ability to read others? Maybe that makes sense. But it reminds me of a woman I knew years ago. People from her church worried about her because she, a mentally ill adult, stood coat-less on an icy sidewalk to invite strangers to read the Bible with her.

        We may need to distinguish among various kinds of intelligence because Dr. Conrad Baars wrote a book called “Healing the Unaffirmed.” It’s about a neurosis called “Emotional Deprivation Disorder” that makes adults think like children because it arrests their emotional development. The mentally ill woman may have been a genius who couldn’t read others well enough to know that they might injure her.

        https://www.baarsinstitute.com/emotional-deprivation-disorder/

      • By “others” I meant “things other than the observing subject.” So, the desk in front of me as well as the odd stranger on the street; i.e., any object of apprehension whatever.

        In yet another instance of “entirely coincidental” synchronicity, your link to the piece on emotional deprivation disorder has relevance to some other stuff I’ve been thinking about lately, and I’ll read it with interest.

      • Kristor, thank you for clarifying what you wrote. If you hadn’t told me what you meant by “others,” I wouldn’t have known what you meant by it.

        I’m happy to hear that the EDD article interests you, because I know some people who have that painful neurosis.

        Your thoughts about Turing interest me, too, because I’m studying the psychology of same-sex attractions to defend reintegrative therapy that some people misleadingly call “conversion therapy.” Though I don’t feel same-sex attractions, I know five people who do. Dr. Gerard van den Aardweg, a Dutch psychologist, says that homosexuality is another neurosis.
