So Much for the Turing Test … and for Consequentialism

In a few sentences, and with his characteristic penetrating trenchancy, Chastek demolishes the Turing Test – and, for that matter, all arguments from similarity of causal effects. I post his entire argument here without apology, on account of its brevity, precision, and devastation:

One principle of (strong/sci-fi) AI seems to be that what can replicate the effects of intelligence is intelligence, e.g. the Turing test, or the present claim by some philosophers that a Chinese room would be intelligence.

So imagine you rig up a track and trolley to accelerate at 9.8 m/s². This perfectly replicates the effects of falling, and so is artificial falling. It deserves the name too: you could strap a helmet to the front of your train and drive at a wall 10 feet away, and it will tell you what the helmet would look like if dropped from 10 feet. But for all that the helmet at the front of your train is obviously being pushed and not falling – falling is something bodies do by themselves and being pushed isn’t. The difference is relevant to AI, for just as falling is to being pushed so thinking for oneself is to being a tool, instrument or machine. Both latter are acted on by others, and have the form by which they act in a transient way and not as a principal agent.
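For what it is worth, the replication Chastek describes really is exact, arithmetically. Here is a quick numeric sketch (mine, not his) in Python, assuming standard constant-acceleration kinematics and taking 10 feet ≈ 3.048 m:

```python
import math

g = 9.8    # Chastek's acceleration, in m/s^2
h = 3.048  # 10 feet, in metres

# Time to traverse 10 feet from rest under constant acceleration g --
# the same whether the helmet falls or rides the front of the trolley:
t = math.sqrt(2 * h / g)

# Speed at the wall / at the ground -- likewise identical in both cases:
v = g * t

print(f"time:  {t:.3f} s")    # → time:  0.789 s
print(f"speed: {v:.2f} m/s")  # → speed: 7.73 m/s
```

The numbers come out the same either way; that is Chastek’s point – identical effects, quite different causes.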

Arguments from similarity of causal effect are all species of affirmation of the consequent, a basic and stupid logical fallacy. The fallacious character of the Turing Test should be apparent to every student of logic. That Turing – himself no slouch at logic, forsooth – proposed it in the first place attests to the absurdity of the physicalist – or, at least, the tendentiously agnostic – proposal, which his other metaphysical commitments could not excuse.

The same devastating critique lands upon Utilitarianism, and upon the other consequentialist moral theories. To think that a thing is good because it happens to (seem to) work out well is to put the cart before the horse. Indeed, it is to mistake the cart for the horse.

++++++

Post Scriptum: a sodden, sad and anxious thought: is not empiricism per se a sort of consequentialism, and thus epistemologically unreliable? An experiment, however well or badly designed, turns out with a certain result. Sure, OK. Induction and all that, I get it. But is it not the case that *reasoning from results is per se affirmation of the logical consequent, and thus logically fallacious?* I’m sure this topic has been exhaustively explored by the specialists. But I’m not a specialist in the topic, so …

31 thoughts on “So Much for the Turing Test … and for Consequentialism”

    • Once you see that it’s silly, you can’t unsee it, and you wonder how you ever failed to see it. But until you do see it, the silliness just isn’t there for you, so you take it seriously, and that can make it all the harder to see the silliness. “But, but,” you sputter, “but it’s *Turing*! Surely that means it must be on the right track! He’s a serious guy, and way smarter than I. If I try harder to understand how his argument works, why then …”

  1. Again, you demonstrate a talent for finding examples and metaphors that illustrate the opposite of what you think they do (in this case the source is your linked page, but still): https://www.einstein-online.info/en/spotlight/equivalence_principle/ It’s kind of impressive!

    And saying “Turing was no slouch at logic” is just embarrassing. He entirely revolutionized logic. That doesn’t mean that anything he said is automatically right, but maybe it means one should make a minimal effort to understand him before concluding one has cut him down with a simplistic argument.

    The most important sentence in his famous paper on computational intelligence is: “The original question, ‘Can machines think?’ I believe to be too meaningless to deserve discussion.” That is the hallmark of a sophisticated thinker who knows the scope of applicability of concepts – consider it an English math nerd’s version of nonduality.

    • Acceleration is always acceleration, obviously. To suggest otherwise would be to engage in your sort of nondualist “logic.” But just as obviously, that this is so does not entail that all acceleration is gravitational. This is simple. Equivalence is not identity.

      In his discussion of the question of whether machines can think, Turing says that he thinks the question of whether machines can think is too meaningless to discuss. OK. So his discussion of the question of whether machines can think is – together with its assertion of the meaninglessness of that question – quite meaningless.

      As with all assertions of nonduality, the act of assertion contravenes the substance of the assertion.

      Go ahead and knock yourself out with your “rejection” of the Law of Noncontradiction. Watching a dog chase his tail is totally funny.

      • The point is: Artificial gravity (e.g., the kind generated by the acceleration of a spaceship) is indistinguishable from “natural” gravity and serves the same purposes. So it’s a singularly bad metaphor to use in an attempt to prove that artificial intelligence is not real intelligence, or whatever point you think you are making.

        Note that there are plenty of valid criticisms of the Turing model of intelligence, just not this one.

        Have a good New Year!

      • Happy New Year to you, too, my valuable friend. I do so fondly appreciate you, you irascible cur! May God bless you and keep you, and make his countenance to shine forth upon you, and bring you to everlasting peace.

        Believe me, I totally get – I got 50 years ago – how one sort of acceleration is indistinguishable from another, *so long as one is ignorant of, or chooses to ignore, the differences in the accelerations.* That much is obvious, and it should not have taken an Einstein to notice it. It didn’t, probably. But it misses the point. There are *in fact* different sorts of accelerations, *all of which are accelerations.* That accelerations are all accelerations does not obviate their differences. Horses and dogs are mammals, but horses are not dogs. This is so simple. I’m not sure why you so want to avoid seeing it.

        Actually, that’s not true. I think I do see why you want to avoid seeing it. You want the Turing test to be indicative, so that you can stop worrying about whether your materialism might be false. Dude: it’s false. It has to be: on materialism, there is no such thing as the thought of materialism. Get over your worry about materialism. Abandoning materialism does not mean abandoning science or your other cherished notions. It means rather a wonderful expansion of your intellectual powers. Materialism proper, of the Aristotelian sort, is fantastically more competent than the Modernist sort. You should give it a try.

      • I’ve figured one thing out. When you say things like:

        This is so simple. I’m not sure why you so want to avoid seeing it.

        it is a sure tell that you are avoiding something that I am trying to convey to you.

        Projection is a hell of a drug.

        I have very little interest in defending some cartoon version of materialism. The question of whether or not a machine could be intelligent is not interesting, it’s obviously true, because we are such machines ourselves. Only fools and madmen think otherwise. We are bodies, we are subject to the requirements and insults of physicality, hunger, pain, death, as well as its pleasures and rewards. Our thoughts are accomplishments of the meat computers in our heads, as evidenced by what happens when you mess up the machinery with drugs or injury.

        The question is are we only bodies, or Something More? But if you reflect on that question, you can see that it is based on the kind of denigration of physicality that I’ve mentioned before. On the deeply held conviction that bodies are Low and mind is High, that bodies are dead without some animating spirit to make them go.

        I don’t think this attitude is peculiar to Christianity or Descartes or any of the usual culprits; the conceptual split between spirit and body is a human universal, although how their relationship is conceptualized varies quite a bit. It probably stems from the universal experience of seeing people die, when what was a whole person is transformed into a dead hunk of rotting meat plus an absence. Given such experiences, it makes sense to think of humans as matter + spirit.

        Thus you see materialism as some kind of denigration of existence as such: if the cosmos is composed of matter and spirit both, then materialism is a denial of spirit; it makes everything dead.

        But modern materialism (misnamed really, “naturalism” would be better) is not about denial of spirit or mind, it is asserting that the body and mind are one thing and not separable into components, contra this common but incorrect human conceptualization. That life and mind are fully material and there is nothing wrong with that. No supernatural Something More required.

      • You seem to be saying that I’m avoiding understanding your point that all accelerations are entirely alike, instead suggesting that you are avoiding understanding my point that accelerations are not all entirely alike. But I do quite well understand “all accelerations are entirely alike.” It’s a simple idea. And wrong, obviously. A push is not a pull, even though they are alike in their effects on what is pushed or pulled. This, too, is an extremely simple idea. Furthermore, neither pushes nor pulls are gravitational accelerations. They are electromagnetic. There is nothing tricky about this.

        So, I have directly and substantively responded to your point. Whereas, you have not responded at all to mine. Indeed, you have not even noticed it. Instead you tried to change the subject to a critique of me.

        So, who’s projecting?

        I am indeed genuinely puzzled about your avoidance of the simple notion that not all accelerations are entirely alike. I conjectured that you don’t want to grapple with the notion because you are a materialist, committed to the notion that we are nothing but machines, and that because machines of our sort can be intelligent, therefore machines of other sorts – such as our computers – can be likewise intelligent. This is materialism of a sort – specifically, it is the doctrine of physicalism. So, it looks like my conjecture was correct.

        The question is are we only bodies, or Something More? But if you reflect on that question, you can see that it is based on the kind of denigration of physicality that I’ve mentioned before.

        So what? Even if that were so – as for some (albeit not orthodox Christians) it certainly has been – the question would still stand. That some Gnostic types denigrate the body does not demonstrate that we are only our bodies. It is in fact quite irrelevant to the question.

        You answer the question with assertions that we are only our bodies, but you offer no arguments to warrant that conclusion other than the logically fallacious ad hominem that only fools or madmen would think otherwise than you do. You point out that we are bodies, but of course the question of whether that’s all we are presupposes this fact, and no serious philosopher has disputed it. That we are bodies goes without saying. It does not at all dispose of the question whether or not there are different sorts of bodies – alive, dead, intelligent, etc.

        It is as if when asked whether the body lying on the rug is alive or not, you responded, “Dude, it’s a *body;* of *course* it’s alive.”

        So, you’ve completely avoided Chastek’s challenge to your answer to the question.

        But modern materialism (misnamed really, “naturalism” would be better) is not about denial of spirit or mind, it is asserting that the body and mind are one thing and not separable into components, contra this common but incorrect human conceptualization. That life and mind are fully material and there is nothing wrong with that. No supernatural Something More required.

        Hard materialism has indeed been sidling in the last century closer and closer to an approximation of orthodox Christian anthropology. It’s been smuggling concepts like final and formal causation into its toolkit, without realizing it has been doing so.

        NB: to say that we have souls is to say only that we are formed as living animals. It is not to say that there is a ghost in the machine of the body – that’s a Gnostic idea. This has been fully nailed down in classical Christian thought since Aquinas. We are indeed fully material – this was settled by Aristotle. But – *and* – we are fully final, fully formal, and fully efficient. The four sorts of causal factor are all integral in us. You can’t obtain an actual contingent being that is not formal, final, material, and efficient.

      • I am indeed genuinely puzzled about your avoidance of the simple notion that not all accelerations are entirely alike.

        I’m not avoiding it; it’s entirely irrelevant. My point was not “all accelerations are exactly the same”, but that they are sufficiently alike in their effects as to be indistinguishable.

        You answer the question with assertions that we are only our bodies, but you offer no arguments to warrant that conclusion other than the logically fallacious ad hominem that only fools or madmen would think otherwise than you do.

        That’s not what I said and I’m sure your reading capabilities are such that you can grasp what I actually meant.

        NB: to say that we have souls is to say only that we are formed as living animals

        That is not what having a soul means in common usage. It’s transparent weaseling, because if that were what a soul was, there would be no materialist anywhere who disbelieved in them.

        But you know, never mind. It appears we actually have no metaphysical conflict here. You think people are material, and they have a living form which you call soul. Aside from terminological quibbles, I have no disagreement with that.

        The question then becomes, why can’t a machine made out of silicon and software also have an intelligent form, something soul-like? It might not be a strictly human form, but maybe it is still intelligent and soulful in its essence, just like all accelerations are similar in their essence if not their details.

        Turing’s view was that this essence was mathematical and algorithmic, and so could be fully captured as an effective computation aka Turing machine, and a program that could fully mimic human conversation had perforce a form itself that was close enough to a human form to be indistinguishable from an actual human.

        I happen to think there is a lot wrong with this. A Turing machine is not a very good model of a human mind. It’s too dualistic; it just replicates traditional mind/body dualism in a sort of software/hardware dualism. But that’s a separate consideration from whether it is possible in theory to make an intelligent creature out of synthetic components.

        I guess if you are religious in a simple-minded sort of way, this isn’t a big problem. Humans are humans and have souls because they have a spark of the divine granted them by their creator; machines do not, end of story. If you are religious but less simple-minded, you will notice that since man was created in the image of god, he has a propensity for creating beings in his own self-image.

        However if you are a naturalist, and reject supernatural explanations, there needs to be some systemic or structural reason that a machine that displays what seems to be a fully human intelligence should not be considered fully human. If a silicon machine is an incarnation of the same algorithm as a meat machine, as judged by its behavior or structure, why should it not be considered as authentically intelligent?

      • Wow, thanks a.morphous for a truly engaging snarkless reply. It moves the dialogue forward, which is just great. OK, on to the fisking, which I shall try to make as constructive and indeed as friendly as possible. Sorry for my delay in posting it; real life has been a continued and peremptory challenge of late.

        My point was not “all accelerations are exactly the same,” but that they are sufficiently alike in their effects as to be indistinguishable.

        Leibniz noticed that whatever things are indistinguishable are identical. If there is no way to tell the difference between x and y, why then x = y; for, to say “there is no way to tell any difference between x and y” *just is* to say that *there is no difference between x and y;* so that, x = y.

        The Identity of Indistinguishables is right up there with the Law of Noncontradiction. All maths hang upon it.
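        In the standard second-order formulation (my gloss, not Leibniz’s own wording), the principle says that if x and y agree in every property F, then they are one and the same:

        \[ \forall F \,\bigl( F(x) \leftrightarrow F(y) \bigr) \;\rightarrow\; x = y \]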

        If accelerations cannot be distinguished at all, in any way, then they are all identical. They are the same, single acceleration (there is I suspect something fruitfully conciliatory in that notion for advaita Vedanta and for the Christian doctrines of Divine Simplicity and Omniscience – but tace that for the nonce). But of course they are in fact quite different, and so can be somehow distinguished, and so are not identical: some, e.g., are electromagnetic, while others are gravitational. Accelerations do of course all conform to Newtonian Law, and are insofarforth equivalent in that way, per Einstein. But they are no more identical than the quite disparate lights that fall on my eyes from my desk lamp, the window, and my monitor, even were those lights totally the same as to wavelength, amplitude and valence.

        Excursus: Valence, see; can’t forget or omit that. Valence → ¬ equivalence. Particularity rules out improper reduction.

        Excursus: Tat tvam asi; but, then, also, if neither this and nor that, then no tat tvam asi. This can’t be that if there are no differences between this and that. I.e., this can’t be that (in one way) unless this is *not* that (in another). Interesting. Thanks, a.morphous. I would not have seen this, or that, if I had not been arguing with you.

        Thanks, STA and all you other Scholastics, for teaching me about distinctions.

        You answer the question with assertions that we are only our bodies, but you offer no arguments to warrant that conclusion other than the logically fallacious ad hominem that only fools or madmen would think otherwise than you do.

        That’s not what I said and I’m sure your reading capabilities are such that you can grasp what I actually meant.

        Actually, I was pretty careful to check out what you actually did say. I shall therefore stand by my characterization of what you said. You did in fact say:

        The question of whether or not a machine could be intelligent is not interesting, it’s obviously true, because we are such machines ourselves. Only fools and madmen think otherwise.

        Gotta be careful here at the Orthosphere.

        Notwithstanding that, I understand what you were trying to say. But I think I responded to that, with some adequacy. Let me know, if not.

        NB: to say that we have souls is to say only that we are formed as living animals.

        That is not what having a soul means in common usage. It’s transparent weaseling, because if that were what a soul was, there would be no materialist anywhere who disbelieved in them.

        Sorry, I can’t help you with this. The philosophical and theological record is what it is, and it disagrees with your impressions of it (which, I feel sure, you gained honestly, from a popular and profane discourse that knows but little of the sacred doctrines it discusses and deplores). So what? In respect to philosophical truth, common usage is indicative, but not dispositive. When we are doing metaphysics or theology, it behooves us to employ the technical terms of that discourse – such as “soul” – not under their vernacular meanings but under their precise technical meanings. It is the same in physics, where it is inapt and confusing, e.g., to take “energy” to mean what the New Agers mean by it.

        When vernacular usage of technical theological terms is investigated through the lens of orthodox theology, it is found to be entirely apt to reality, as doctrine comprehends it and as hoi polloi live it. When the Thomist doctrine of the soul is explained to untutored Christians, they have no trouble with it at all.

        Materialists disagree with orthodox Christian anthropology because they do not understand it. They are like New Agers trying to read physics without learning what physicists mean by the terms they read. But humble Christians get it, right away.

        Thus we arrive at an ironic agreement. When materialists understand materialism fully, they end by a faint recalcitrant echo of orthodox Christian metaphysics. A win! Are you not happy at this result? Is it not nice, to find that we agree about this?

        But you know, never mind. It appears we actually have no metaphysical conflict here. You think people are material, and they have a living form which you call soul. Aside from terminological quibbles, I have no disagreement with that.

        Amen, amen. Like I was just saying. So nice to agree with you, my friend.

        The question then becomes, why can’t a machine made out of silicon and software also have an intelligent form, something soul-like? It might not be a strictly human form, but maybe it is still intelligent and soulful in its essence, just like all accelerations are similar in their essence if not their details.

        That’s the $64 question. I’m perfectly willing to admit that any congeries of systematically ordered entities could be intelligent, and even conscious – could, i.e., together constitute a person. The only, and crucial, qualification I would enter is that, if that were to happen – whether in respect to a brain, or an artificial computer – the intelligence that then made itself felt in the causal nexus would be, not artificial, and *not mechanical,* but natural, and organic. It would be alive, and would experience its life. That its life was manifest in registers of silicon rather than in registers of neurons would matter not at all. What matters is the formal configuration, and not so much the character of the particular sorts of constituents that form configures. Mangan, Tipler and Powers have each covered this topic exhaustively.

        Much here depends upon our definition of “machine.” I take a machine to be an artifact of an actor that has no motive original in and from itself, but rather only in and from its author. An axe, e.g., is a machine: until I impart to it some agency from myself, it lies idle.

        If machines are only and nothing but tools like the axe, then of course they can’t be intelligent. They can in that case be nothing more than tools, like an axe, albeit more complex. But if a thing – that once we called only a machine – were to generate outputs that *could not in logic be derived from the inputs of its artificers* – that, to be precise, *could not possibly have been produced by the programs they wrote for it or the data they put into it* – why then it would obviously be an agent in and of itself – i.e., itself a source of novel origination – and not entirely derived from the causal inputs that had been to it supplied ex ante by its artificers.

        NB: if such a thing were to happen in a Turing test, then we might rightly infer that the machine thus tested was intelligent. But, NB: if such a thing were to happen in a Turing test, we would also rightly infer that the Turing machine thus tested was *not in fact nothing but a machine.*

        It is this difference, exactly, between an entirely deterministic machine and an agent indeterminant ex ante, which the Turing test wants to overlook. The Turing test argues that if we can’t tell a difference between those two radically different sorts of things, then there just is no difference between them, so that they are the same sorts of things: machines are minds, and minds are machines. Prima facie, that seems to adhere to Leibniz’ Identity of Indistinguishables. But not so fast. If the Turing machine has acted in a manner altogether reducible to the effects of its programs and data, *then it is a machine,* despite all contrary appearances. If it has acted in a manner not altogether thus reducible, *then it is not a machine.*
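        The determinism at issue is easy to exhibit. Here is a toy sketch (mine – a drastic simplification of Turing’s formalism, not his own definition) of a deterministic tape machine in Python; given the same program and the same tape, it cannot but yield the same output:

```python
def run_tm(tape, program, state="scan"):
    """Run a deterministic tape machine until it halts."""
    cells = list(tape) + ["_"]  # "_" marks the blank at the end
    pos = 0
    while state != "halt":
        # Each (state, symbol) pair fixes exactly one next move:
        state, write, move = program[(state, cells[pos])]
        cells[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(cells).rstrip("_")

# Program: flip every bit, halt at the blank.
program = {
    ("scan", "0"): ("scan", "1", "R"),
    ("scan", "1"): ("scan", "0", "R"),
    ("scan", "_"): ("halt", "_", "R"),
}

first = run_tm("1011", program)
second = run_tm("1011", program)
print(first, second, first == second)  # → 0100 0100 True
```

        Every run retraces the same path; nothing in the output outruns the program and the data supplied ex ante.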

        The question then boils down to this: is there anything in the acts of minds that is not altogether and exhaustively caused by their mundane antecedents? If no, then minds are machines, period full stop. In that case, Kristor and a.morphous do not actually exist, and so we are not actually conversing here. We are in that case rather only deterministic routines running on physical substrates.

        If yes, they are not. In that case, we two do indeed exist, and can parley here at the Orthosphere. Are we in fact talking to each other? Do either of us actually exist, so as thus to talk? It reduces at last to that. What say you? It surely seems to me that we do both indeed exist. Surely the contrary notion bears the greater burden of proof, no? Extraordinary claims call for extraordinary evidence …

        This in a nutshell I think captures the controversy between us. You think that we are machines, and as such entirely – nothing more than – functions of our data and programs; whereas I think that we, and indeed all actual events, are not entirely functions of our data and programs, but are, rather, each ourselves occasions of ontological innovation, real acts that take our data systematically into account, but that then introduce something more, something novel, that was not present in our mundane data.

        I can’t honestly see how you can obtain cosmogony in any other way than by such novel origination. No true innovation → no events → no cosmogony. But, cosmogony, ergo, etc.

        Actually, I bet you are a lot closer to my perspective than to that of the eliminative materialist. You sure talk that way. Plus it’s only common sense.

        Turing’s view was that this essence was mathematical and algorithmic, and so could be fully captured as an effective computation aka Turing machine, and a program that could fully mimic human conversation had perforce a form itself that was close enough to a human form to be indistinguishable from an actual human.

        I get it, but it just won’t do. This, in just the way that a man accelerated by a rocket is just not the same thing as a man accelerated by an elevator, even though their accelerations can be formalized in the same way. The argument seems to have some force, but in the end it just doesn’t.

        A Turing machine is not a very good model of a human mind. It’s too dualistic; it just replicates traditional mind/body dualism in a sort of software/hardware dualism.

        That is a brilliant insight.

        But that’s a separate consideration from whether it is possible in theory to make an intelligent creature out of synthetic components.

        It is possible. Every component is synthetic. And every occasion of mind then synthesizes disparate component inputs.

        The question really is whether it is possible to make an intelligent creature out of a bunch of stuff that is altogether unintelligent. It answers itself: no, it isn’t. A machine can’t be a mind. That doesn’t mean that a mind cannot be mediated in silicon circuits, rather than carbon circuits. It means only that you can’t get a mind out of mindless stuff, of any sort; whereas, from mindful stuff, you can assemble a mind.

        Machine and mind are categoreally incommensurable. A mind can’t be a machine, nor can a machine be a mind. The difference is easy to see: machines can’t do anything by themselves, because they are nothing but tools of their artificers and users (and, so, *expressions* or *manifestations* of the minds of their artificers and users); whereas minds can. Thus if one of our computers were to become mindful, it would no longer be a machine. In like manner, if a lifeless brain were to be returned to life and so become again mindful, it would no longer be dead and therefore mindless.

        That’s what the Turing test tries to get at, and fails. It wants to say that if a Turing machine were ever to act like a true intelligence, by doing something all intelligence does – by doing, i.e., something that could not have been predicted (even in principle) from its data and programming – why then it would be intelligent. Sure. But in that case, *it would no longer be a machine.* Sorry to belabor the point, but it is important, and warrants emphasis. Computers can *seem* to act like minds, but they cannot *be* minds unless they *are* minds, and so *not machines.*

        Humans are humans and have souls because they have a spark of the divine granted them by their creator; machines do not, end of story.

        Allowing for lots of terminological and ontological distinctions that cry out to be noticed, I think we here pretty much agree. E.g., humans are living persons, while machines are not. The “spark of the divine” enjoyed by the former is but the form of a living person. If a computer were to become such a person, it would no longer be a machine. It would no longer be entirely a function of its artificial causal antecedents, but rather free: able to originate acts at variance with the logical determinations of its data and programming.

        However if you are a naturalist, and reject supernatural explanations, there needs to be some systemic or structural reason that a machine that displays what seems to be a fully human intelligence should not be considered fully human. If a silicon machine is an incarnation of the same algorithm as a meat machine, as judged by its behavior or structure, why should it not be considered as authentically intelligent?

        It should; the chemical substrate of a person is neither here nor there. But that a person is subvened by chemistry does not entail that he should then be considered as a merely chemical *machine.* He should rather be considered as a free person, a more or less independent agent, a source of some jot of true and temporally unanticipated ontological novelty. If the actions of an entity are given entirely and completely in and by their chemical antecedents and components, then the entity has no acts of its own, and so simply does not exist. It rather amounts in that case *only* to its causal antecedents, and has neither actual existence nor therefore capacity to act. In that case, there is nothing of it that calls for explanation; it rather simply is not, and there is no more to be said about it.

        But, in that case, there is nothing like a.morphous or Kristor.

        Manifestly, there are both a.morphous and Kristor. So …

      • This stuff about identity is just stupid, sorry. I’ll stand by my analogy, which since it is only an analogy is not true or false, just insight-generating or not.

        That is to say, an artificial acceleration is close enough to gravity to stand in for it for practical purposes, no metaphysical reasoning about identity required. And analogously to that, Turing is saying that an artificial intelligence that passes some human-complete task (conversation) serves the same practical purposes as natural intelligences.

        Re what I said or didn’t say. You said that I “answer the question with assertions that we are only our bodies”, and to support that you quoted me as saying: “The question of whether or not a machine could be intelligent is not interesting, it’s obviously true, because we are such machines ourselves” conveniently omitting the next paragraph where I said “The question is are we only bodies, or Something More?”, and explained that this is a malformed question.

        So, OK, maybe I’m being way too subtle for you lot; Mikvet made a very similar misreading of something I thought was clear. I am definitely not saying “we are only our bodies”. I’d never say that; it’s your projection of what you think I believe.

        That is to say, you are arguing with the atheist that lives in your head, not me.

        This question of what a soul is: you introduced the word, so OK, you can mean what you like by it: “we are formed as living animals”. Fine. If a soul is just a form, then it isn’t supernatural and no atheist or materialist has a problem with it. But then it says absolutely nothing about whether a machine can’t also have a form that engages in intelligent human-like behavior.

        Thus we arrive at an ironic agreement. When materialists understand materialism fully, they end by a faint recalcitrant echo of orthodox Christian metaphysics. A win! Are you not happy at this result? Is it not nice, to find that we agree about this?

        Not really, I don’t come here looking for agreement. And I suspect you don’t understand me well enough to say I am in agreement with you.

        I’m perfectly willing to admit that any congeries of systematically ordered entities could be intelligent, and even conscious – could, i.e., together constitute a person. The only, and crucial, qualification I would enter is that, if that were to happen … the intelligence … would be, not artificial, and *not mechanical,* but natural, and organic.

        I find this truly mystifying; it doesn't make a lick of sense. There is nothing inherently different about the mechanical and the organic; that is the basis for all of science, certainly since Wöhler demonstrated the synthesis of urea in 1828 and destroyed vitalism. And there is in principle no difference between the artificial and the natural; a synthesized molecule of urea has the exact same effect as a natural one (in fact they are both synthesized, just through different processes).

        I can't even understand what you are getting at. Are you saying that if we somehow build an intelligent machine, it somehow ceases to be artificial when it first achieves intelligence? Presumably not, but then what are you trying to say?

        Much here depends upon our definition of “machine.” I take a machine to be an artifact of an actor that has no motive original in and from itself, but rather only in and from its author. An axe, e.g., is a machine.

        This is just another attempt to win the argument by definition. There are of course already machines with a degree of autonomy – a thermostat, a Roomba.

        Machine and mind are categoreally incommensurable. A mind can’t be a machine, nor can a machine be a mind.

        Why not? I mean, if you define machines as you do above, then sure, it’s trivially true, but that is not the definition that Turing was using or that I am using or is in common usage. And it’s an attempt to win an argument by altering the definition of terms, which is tiresome. It’s thinly disguised question-begging.

        The whole point of cybernetics, computation and AI is that we have learned how to make machines that can do some things that seem tantalizingly mind-like. That’s not equivalent to proving that you can make a human-equivalent machine, but it’s suggestive of the possibility. To pretend that machines haven’t progressed beyond the hand-ax is kind of dumb, isn’t it?

        Re eliminative materialism, I am pretty sure we have gone over this exact same thing multiple times in the past. Oh yeah, to quote myself from 2019 (can’t find the comment now):

        Eliminative materialism is just stupid. But normal materialism does not entail that “there are no thoughts or beliefs”, just that they are implemented via material means.

        This is obvious to anyone who understands computers, and apparently completely impossible to understand if you don’t. Consider the computer you are using now – it is in principle describable in purely physical terms, as patterns of electrical states at the hardware level, and also as patterns of symbolic implication at the software level. Both of these are true and describe real phenomena. An eliminative materialist would presumably say that the software level is not real, but again, this is a stupidity that would only occur to a philosopher.

        Note that this is just an analogy. Computers show us how the same system can be both physical and symbolic/representational at the same time. Humans (whether or not they can be fully modelled by computation) share this quality.
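        The two-level claim above can be made concrete with a minimal sketch (the particular float encoding below is chosen arbitrarily, purely for illustration): one and the same machine state admits both a “physical” description, as a raw bit pattern, and a “symbolic” description, as a number, and neither description falsifies the other.

        ```python
        import struct

        # One and the same 32-bit state, described at two levels.
        state = struct.pack(">f", 3.14)  # the "hardware": four raw bytes

        # Physical-level description: a pattern of bits.
        bits = "".join(f"{b:08b}" for b in state)

        # Symbolic-level description: a number.
        (value,) = struct.unpack(">f", state)

        print(bits)   # 01000000010010001111010111000011
        print(value)  # ~3.14 (as a 32-bit float, 3.140000104904175)

        # Both descriptions are true of the same state; neither eliminates the other.
        ```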

  2. It seems to me that there is a parallel trend of opinion, generally held by the same people, that human beings do not think, but are simply demonstrating the occurrence of random, unconscious, deterministic epiphenomena, but that some of the machines manufactured by these human beings are capable of actual thought.

    • Yes. To believe that machines think, it helps to think of oneself as a machine. So you end up with eliminative materialism, at least implicitly; and, so, at the zero of thought.

    • If you turned the machine on and waited and gave it no programming and then it cranked some output on its own, that would be a start. Of course, if that were to happen, it would indicate not artificial intelligence, but rather natural intelligence.

      • Fascinating. Someone might say a relationship with a robot would be an artificial one and a lie. Likewise, pairing “artificial” with “intelligence” would make it only an empty approximation of intellect. So artificial intelligence can only be a well-crafted lie. You convinced me.

  3. It would be better to actually read the paper, since it challenges the notion of intelligence. Birds, bats, insects and pterosaurs all have wings; biologically they differ, yet they are still wings. Different causes but the same effect.

    Turing’s first challenge was to distinguish between a man and a woman by asking questions; that seems to be quite a challenge nowadays.

    How many people do you know who believe their dogs think and have human-like intelligence? A few years ago, people were impressed by a dog who got on a bus and got off at the doggie park. That does not demonstrate the critter’s intelligence; rather, it shows how little intelligence it takes to ride a bus. Surely you know people who cried at the end of ET, demonstrating that they cannot distinguish between human intelligence and its simulation.

    So like a wing, if you cannot distinguish between a human and a bot, then they are both “thinking” in some sense. If you are serious, Roger Penrose wrote a nice book or two about thinking about thinking; they show more understanding of the issues.

    At best, AI can only simulate rational, logical thinking. However, the human mind transcends ratio with intellectus. That is where the refutation lies.
    (I have link to the paper in my Library.)

  4. I think the onus is on the materialists to demonstrate that a machine can be fully human, not on transcendentalists to prove that they aren’t.

    Perhaps a.morphous might explain to this simple-minded person why your assertion that a machine is intelligent in a human way must be assumed, and why it is up to me to prove that it isn’t?

    Is this just another progressivist just-so diktat, such as we’re to accept that two people of the same sex can marry or that a man with his testicles still hanging must be accepted as a woman on his own say-so?

    • Really not sure what you are talking about. What onus? You can assume whatever you like, and you don’t “have to prove” anything, I’m not holding a gun to your head.

      I guess this is just a kind of whiny way to say that you don’t share my assumptions about what is more plausible. That’s fair enough, but why put it in this form that somehow assumes I’m forcing you to do something? I have zero power here, other than that of argument and rhetoric, how then can I impose on you a “progressivist just-so diktat”?

      Aside from that, this is a philosophical question, not math, the only area of human thinking that is really amenable to proof.

      I guarantee that even if the materialists manage to make a machine that passes the Turing test, and exhibits emotion, self-consciousness, creativity, and all the other hallmarks of the human – there would still be people who would deny that it is a real intelligence and not a mere simulation of one. That is to say, even that would not constitute an ironclad proof that all would accept.

      • To quote yourself: “…there needs to be some systemic or structural reason that a machine that displays what seems a fully human intelligence should not be considered fully human.” That seems to be laying the onus on others to provide the reasons for what is only your assertion.

      • Um, why did you leave out the preceding qualifier? What I said was:

        However if you are a naturalist, and reject supernatural explanations, there needs to be some systemic or structural reason that a machine that displays what seems a fully human intelligence should not be considered fully human

        Which, if you are not a naturalist, implies no onus whatsoever.

        I can’t tell if you are deliberately trying to distort what I am saying (pretty dumb when the full quote is right there), or really can’t read plain English.

  5. I wonder if Turing was influenced in his thinking in these matters by his background in mathematical logic. Purely formal operations are one domain where two different representations *are* equivalent (at least in terms of their formal properties).

    For instance, one can take the same arithmetical calculations that are written on paper and do them on an abacus and they are the same calculations.

    Of course that is not true for things that have content and are not purely formal. Even the same algebraic equation is different if different numbers are substituted. And this idea really fails for something that has a subjective element, which we know thought does.

    Turing’s got it backwards. Thought isn’t a computer program; a computer program is a thought.
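    The representation-independence of formal operations noted above can be sketched in a toy example (the “abacus” here is just a pile of bead tokens, an assumption of this illustration, not anything from Turing): addition carried out on ordinary numerals and addition carried out by combining beads agree on every input, because the formal content is the same however it is represented.

    ```python
    # Two representations of the same formal operation: addition.

    def add_numeral(a: int, b: int) -> int:
        """'Paper' representation: ordinary integer arithmetic."""
        return a + b

    def add_abacus(a: int, b: int) -> int:
        """'Abacus' representation: numbers as piles of beads.

        Each number becomes a pile of bead tokens; adding is just
        combining the piles and counting the result.
        """
        beads_a = ["bead"] * a
        beads_b = ["bead"] * b
        return len(beads_a + beads_b)

    # The calculations agree, however they are represented.
    for a, b in [(2, 3), (17, 25), (0, 9)]:
        assert add_numeral(a, b) == add_abacus(a, b)

    print(add_numeral(17, 25))  # 42
    ```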

  6. Another consideration is that a scientist will take a question he has no idea how to answer and modify it, reframing it into a question he thinks he can make progress towards. All well and good. Since scientists want to make *progress*, one does not want to spend all one’s time on a question that is too difficult.

    But then, this practical decision is adapted into a metaphysical stance and the original philosophical question is declared to be meaningless because the scientist in question can’t see how to make incremental progress towards solving it. Well, that doesn’t follow.

    I have not read Turing’s original paper, so I can’t say this is what he was doing, but it appears to be a common attitude among AI researchers that, because they can make incremental progress with the problems they have set for themselves and cannot make progress with the philosophical problems, those philosophical problems have in fact been superseded by the other problems. It’s a peculiar form of positivism. This attitude also seems to be common in physicists when their work impinges on philosophical matters.

      • Turing’s got it backwards. Thought isn’t a computer program; a computer program is a thought.

        Well, accurately and precisely said. Thus you can reduce a computer program to a thought, but not vice versa.

  7. At a.morphous: in both your replies to me, you have directed personal insults at me. Rather than call me a whiner or insult my command of the English language, perhaps you might consider the possibility that the fault in communication might be at your end? The fact still remains, though perhaps you don’t comprehend what you’ve actually written, that regardless of your qualifier you are asserting that an assumption without proof is adequate for a naturalist. To put it another way, what would make sense is to say ‘that there needs to be a reason that a machine that displays what seems to be a fully human intelligence should be considered fully human’. The absence of ‘not’ in the sentence makes a crucial difference. Of course, maybe that only applies to the simple-minded, like myself.

    • You started this little interchange by accusing me of being some kind of progressive totalitarian bent on imposing my views by force, so I think my mild insults were not very out of line. You still have not backed up your accusation, btw.

      But you are right, if there’s a failure in communication the fault could be mine. I said (with more context):

      I guess if you are religious in a simple-minded sort of way, this isn’t a big problem. Humans are humans and have souls because they have a spark of the divine granted them by their creator; machines do not, end of story. If you are religious but less simple-minded, you will notice that since man was created in the image of god, he has a propensity for creating beings in his own self-image.

      However if you are a naturalist, and reject supernatural explanations, there needs to be some systemic or structural reason that a machine that displays what seems a fully human intelligence should not be considered fully human

      This is the opposite of putting an onus on you; it is offering you an easy out. You can be a supernaturalist or a naturalist. In the first case, you can explain anything you want by attributing it to God or gods, end of story. However, if you are a naturalist, you don’t have that easy option, and you need to explain how minds work in terms of other natural phenomena, like the structure and activity of neural tissue or the processes of evolution.

      With its qualification, this shouldn’t even be a controversial statement; it’s really just defining two different stances towards the world. I guess the word “need” triggered you, but (a) it doesn’t apply to you, and (b) it’s a “need” driven solely by intellectual curiosity, not a moral imperative or government marching order.

      Turing is saying, if you can replicate the structure and function of the mind using different materials, have you not replicated the human? If not, why not? Supernaturalists have an easy answer to this, naturalists do not. A curious intelligent naturalist seeks to understand how the mind works, and computation offers at least the possibility of some answers. But if you are a supernaturalist, why should you care? You have an “Explain everything for free” card in the game of thought.

      • I asked a question, outlining some implications of what you said, on an anonymous blog site, or whatever we call it, and now you’re accusing me of using ‘force’ against you. I’m saying no more; anyone interested in reading these posts can make up their own mind.

      • Where did I accuse you of using force against me? I said that YOU accused ME of wanting to use force (a “progressive diktat”) on others.

        I think the philosophy will have to be tabled until we solve basic reading comprehension.

  8. I know hardly anything about the Turing test, but I find the trolley analogy brilliant. Kudos to Dr. Chastek! Looking him up, I found out about the Chesterton Academy movement. It gladdens the heart, though it’d be better if they were TLM renegades. Maybe they are??? Anyways, many blessings to them all. Also, as a side note, I noticed the “house system” for the school (https://chestertonacademy.org/chesterton-student-life/house-system/) . . . love it. Two eastern and two western fathers — and fitting heraldry. After looking around the site, I sent this to my family:

    “In three hundred years, will people realize that their (future-)contemporary schools have internal ‘houses’ (a medieval tradition) largely because of one woman’s work at the turn of the twenty-first century (Rowling)?

    “I know that certain elite universities kept their house systems through the 20th century, but they were weakened and dismantled throughout the West. And it appears to me that that process stopped and reversed because of Rowling’s stories.

    “But that’s just the sort of thing that time will forget (except for specialists with quirky niche interests).”

    And that seems to be how history often works — as little rivers determine the landscape of mighty continents (thanks, Prof. Smith!), so such “trivial” occurrences (in the grand scheme) shape civilization.
