In a few sentences, and with his characteristic penetrating trenchancy, Chastek demolishes the Turing Test, and for that matter all arguments from similarity of causal effects; I post here without apology his entire argument, on account of its brevity, precision, and devastation:
One principle of (strong/sci-fi) AI seems to be that what can replicate the effects of intelligence is intelligence, e.g. the Turing test, or the present claim by some philosophers that a Chinese room would be intelligence.
So imagine you rig up a track and trolley to accelerate at 9.8 m/s². This perfectly replicates the effects of falling, and so is artificial falling. It deserves the name too: you could strap a helmet to the front of your train and drive at a wall 10 feet away, and it will tell you what the helmet would look like if dropped from 10 feet. But for all that the helmet at the front of your train is obviously being pushed and not falling – falling is something bodies do by themselves and being pushed isn’t. The difference is relevant to AI, for just as falling is to being pushed so thinking for oneself is to being a tool, instrument or machine. Both latter are acted on by others, and have the form by which they act in a transient way and not as a principal agent.
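For the numerically inclined, here is a minimal sketch (my illustration, not Chastek’s) of the kinematic equivalence he invokes: under a constant acceleration of 9.8 m/s², the pushed helmet and the dropped helmet arrive with identical speed after identical time. The effects coincide exactly; only the causes differ.

```python
import math

# A helmet pushed at a constant 9.8 m/s^2 and a helmet in free fall
# traverse the same distance profile; their impact states are
# numerically indistinguishable.
a = 9.8                 # acceleration in m/s^2, same in both cases
d = 10 * 0.3048         # 10 feet, converted to meters

# Starting from rest: v^2 = 2*a*d and d = (1/2)*a*t^2 in both cases.
impact_speed = math.sqrt(2 * a * d)
impact_time = math.sqrt(2 * d / a)

print(f"Impact speed: {impact_speed:.2f} m/s")  # same whether pushed or dropped
print(f"Time elapsed: {impact_time:.2f} s")     # same whether pushed or dropped
```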
Arguments from similarity of causal effect are all species of affirmations of the consequent: a basic and stupid logical fallacy. The fallacious character of the Turing Test should be apparent to every student of logic. That Turing – himself no slouch at logic, forsooth – proposed it in the first place attests to the absurdity of the physicalist – or, at least, the tendentiously agnostic – proposal that his other metaphysical commitments could not excuse.
The same devastating critique lands upon Utilitarianism, and upon other consequentialist ethics. To think that a thing is good because it happens to (seem to) work out well is to put the cart before the horse. Indeed, it is to mistake the cart for the horse.
++++++
Post Scriptum: a sodden, sad and anxious thought: is not empiricism per se a sort of consequentialism, and thus epistemologically unreliable? An experiment, however well or badly designed, turns out with a certain result. Sure, OK. Induction and all that, I get it. But is it not the case that *reasoning from results is per se affirmation of the logical consequent, and thus logically fallacious?* I’m sure this topic has been exhaustively explored by the specialists. But I’m not a specialist in the topic, so …
What the Turing test really demonstrates is the frequent stupidity of the highly intelligent. A child could see it was nonsense. Something that seems to be something must be that thing. What?!
Once you see that it’s silly, you can’t unsee it, and you wonder how you ever failed to see it. But until you do see it, the silliness just isn’t there for you, so you take it seriously, and that can make it all the harder to see the silliness. “But, but,” you sputter, “but it’s *Turing*! Surely that means it must be on the right track! He’s a serious guy, and way smarter than I. If I try harder to understand how his argument works, why then …”
Again, you demonstrate a talent for finding examples and metaphors that illustrate the opposite of what you think they do (in this case the source is your linked page, but still): https://www.einstein-online.info/en/spotlight/equivalence_principle/ It’s kind of impressive!
And saying “Turing was no slouch at logic” is just embarrassing. He entirely revolutionized logic. That doesn’t mean that anything he said is automatically right, but maybe it means one should make a minimal effort to understand him before thinking you’ve cut him down with a simplistic argument.
The most important sentence in his famous paper on computational intelligence is “The original question, ‘Can machines think?’ I believe to be too meaningless to deserve discussion” – which is the hallmark of a sophisticated thinker who knows the scope of applicability of concepts. That is to say, consider it an English math nerd version of nonduality.
Acceleration is always acceleration, obviously. To suggest otherwise would be to engage in your sort of nondualist “logic.” But just as obviously, that this is so does not entail that all acceleration is gravitational. This is simple. Equivalence is not identity.
In his discussion of the question of whether machines can think, Turing says that he thinks the question of whether machines can think is too meaningless to discuss. OK. So his discussion of the question of whether machines can think is – together with its assertion of the meaninglessness of that question – quite meaningless.
As with all assertions of nonduality, the act of assertion contravenes the substance of the assertion.
Go ahead and knock yourself out with your “rejection” of the Law of Noncontradiction. Watching a dog chase his tail is totally funny.
The point is: Artificial gravity (e.g. the kind generated by the acceleration of a spaceship) is indistinguishable from “natural” gravity and serves the same purposes. So it’s a singularly bad metaphor to use in an attempt to prove that artificial intelligence is not real intelligence, or whatever point you think you are making.
Note that there are plenty of valid criticisms of the Turing model of intelligence, just not this one.
Have a good New Year!
Happy New Year to you, too, my valuable friend. I do so fondly appreciate you, you irascible cur! May God bless you and keep you, and make his countenance to shine forth upon you, and bring you to everlasting peace.
Believe me, I totally get – I got 50 years ago – how one sort of acceleration is indistinguishable from another, *so long as one is ignorant of, or chooses to ignore, the differences in the accelerations.* That much is obvious, and it should not have taken an Einstein to notice it. It didn’t, probably. But it misses the point. There are *in fact* different sorts of accelerations, *all of which are accelerations.* That accelerations are all accelerations does not obviate their differences. Horses and dogs are mammals, but horses are not dogs. This is so simple. I’m not sure why you so want to avoid seeing it.
Actually, that’s not true. I think I do see why you want to avoid seeing it. You want the Turing test to be indicative, so that you can stop worrying about whether your materialism might be false. Dude: it’s false. It has to be: on materialism, there is no such thing as the thought of materialism. Get over your worry about materialism. Abandoning materialism does not mean abandoning science or your other cherished notions. It means rather a wonderful expansion of your intellectual powers. Materialism proper, of the Aristotelian sort, is fantastically more competent than the Modernist sort. You should give it a try.
I’ve figured one thing out. When you say things like:
it is a sure tell that you are avoiding something that I am trying to convey to you.
Projection is a hell of a drug.
I have very little interest in defending some cartoon version of materialism. The question of whether or not a machine could be intelligent is not interesting, it’s obviously true, because we are such machines ourselves. Only fools and madmen think otherwise. We are bodies, we are subject to the requirements and insults of physicality, hunger, pain, death, as well as its pleasures and rewards. Our thoughts are accomplishments of the meat computers in our heads, as evidenced by what happens when you mess up the machinery with drugs or injury.
The question is are we only bodies, or Something More? But if you reflect on that question, you can see that it is based on the kind of denigration of physicality that I’ve mentioned before. On the deeply held conviction that bodies are Low and mind is High, that bodies are dead without some animating spirit to make them go.
I don’t think this attitude is peculiar to Christianity or Descartes or any of the usual culprits; the conceptual split between spirit and body is a human universal, although how their relationship is conceptualized varies quite a bit. It is probably based on the universal experience of seeing people die, when what was a whole person is transformed into a dead hunk of rotting meat plus an absence. It makes sense, given those experiences, to think of humans as matter + spirit.
Thus, you see materialism as some kind of denigration of existence as such. If the cosmos is composed of matter and spirit both, then materialism is a denial of spirit; it makes everything dead.
But modern materialism (misnamed, really; “naturalism” would be better) is not a denial of spirit or mind. It asserts, contra this common but incorrect human conceptualization, that body and mind are one thing and not separable into components: that life and mind are fully material, and that there is nothing wrong with that. No supernatural Something More required.
You seem to be saying that I’m avoiding understanding your point that all accelerations are entirely alike; I suggest, rather, that it is you who are avoiding understanding my point that accelerations are not all entirely alike. I do quite well understand “all accelerations are entirely alike.” It’s a simple idea. And wrong, obviously. A push is not a pull, even though they are alike in their effects on what is pushed or pulled. This, too, is an extremely simple idea. Furthermore, neither pushes nor pulls are gravitational accelerations. They are electromagnetic. There is nothing tricky about this.
So, I have directly and substantively responded to your point. Whereas, you have not responded at all to mine. Indeed, you have not even noticed it. Instead you tried to change the subject to a critique of me.
So, who’s projecting?
I am indeed genuinely puzzled about your avoidance of the simple notion that not all accelerations are entirely alike. I conjectured that you don’t want to grapple with the notion because you are a materialist, committed to the notion that we are nothing but machines, and that because machines of our sort can be intelligent, therefore machines of other sorts – such as our computers – can be likewise intelligent. This is materialism of a sort – specifically, it is the doctrine of physicalism. So, it looks like my conjecture was correct.
So what? Even if that were so – as for some (albeit not orthodox Christians) it certainly has been – the question would still stand. That some Gnostic types denigrate the body does not demonstrate that we are only our bodies. It is in fact quite irrelevant to the question.
You answer the question with assertions that we are only our bodies, but you offer no arguments to warrant that conclusion other than the logically fallacious ad hominem that only fools or madmen would think otherwise than you do. You point out that we are bodies, but of course the question of whether that’s all we are presupposes this fact, and no serious philosopher has disputed it. That we are bodies goes without saying. It does not at all dispose of the question whether or not there are different sorts of bodies – alive, dead, intelligent, etc.
It is as if when asked whether the body lying on the rug is alive or not, you responded, “Dude, it’s a *body;* of *course* it’s alive.”
So, you’ve completely avoided Chastek’s challenge to your answer to the question.
Hard materialism has indeed been sidling in the last century closer and closer to an approximation of orthodox Christian anthropology. It’s been smuggling concepts like final and formal causation into its toolkit, without realizing it has been doing so.
NB: to say that we have souls is to say only that we are formed as living animals. It is not to say that there is a ghost in the machine of the body – that’s a Gnostic idea. This has been fully nailed down in classical Christian thought since Aquinas. We are indeed fully material – this was settled by Aristotle. But – *and* – we are fully final, fully formal, and fully efficient. The four sorts of causal factor are in us all integral. You can’t obtain an actual contingent being that is not formal, final, material, and efficient.
I’m not avoiding it; it’s entirely irrelevant. My point was not “all accelerations are exactly the same”, but that they are sufficiently alike in their effects as to be indistinguishable.
That’s not what I said and I’m sure your reading capabilities are such that you can grasp what I actually meant.
That is not what having a soul means in common usage. It’s transparent weaseling, because if that were what a soul was, there would be no materialist anywhere who disbelieved in them.
But you know, never mind. It appears we actually have no metaphysical conflict here. You think people are material, and they have a living form which you call soul. Aside from terminological quibbles, I have no disagreement with that.
The question then becomes, why can’t a machine made out of silicon and software also have an intelligent form, something soul-like? It might not be a strictly human form, but maybe it is still intelligent and soulful in its essence, just like all accelerations are similar in their essence if not their details.
Turing’s view was that this essence was mathematical and algorithmic, and so could be fully captured as an effective computation, aka a Turing machine; and that a program which could fully mimic human conversation perforce had a form close enough to a human form to be indistinguishable from an actual human.
I happen to think there is a lot wrong with this. A Turing machine is not a very good model of a human mind. It’s too dualistic; it just replicates traditional mind/body dualism in a sort of software/hardware dualism. But that’s a separate consideration from whether it is possible in theory to make an intelligent creature out of synthetic components.
I guess if you are religious in a simple-minded sort of way, this isn’t a big problem. Humans are humans and have souls because they have a spark of the divine granted them by their creator; machines do not, end of story. If you are religious but less simple-minded, you will notice that since man was created in the image of god, he has a propensity for creating beings in his own self-image.
However if you are a naturalist, and reject supernatural explanations, there needs to be some systemic or structural reason that a machine that displays what seems to be a fully human intelligence should not be considered fully human. If a silicon machine is an incarnation of the same algorithm as a meat machine, as judged by its behavior or structure, why should it not be considered as authentically intelligent?
Wow, thanks a.morphous for a truly engaging snarkless reply. It moves the dialogue forward, which is just great. OK, on to the fisking, which I shall try to make as constructive and indeed as friendly as possible. Sorry for my delay in posting it; real life has been a continued and peremptory challenge of late.
Leibniz noticed that whatever things are indistinguishable are identical. If there is no way to tell the difference between x and y, why then x = y; for, to say “there is no way to tell any difference between x and y” *just is* to say that *there is no difference between x and y;* so that, x = y.
The Identity of Indiscernibles is right up there with the Law of Noncontradiction. All maths hang upon it.
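In the standard second-order notation (a textbook formalization, not Leibniz’s own wording), the principle and its converse, the Indiscernibility of Identicals, run:

```latex
% Identity of Indiscernibles: sharing every property suffices for identity.
\forall x \, \forall y \, \bigl[ \forall F \, ( Fx \leftrightarrow Fy ) \rightarrow x = y \bigr]
% Indiscernibility of Identicals (the converse): identicals share every property.
\forall x \, \forall y \, \bigl[ x = y \rightarrow \forall F \, ( Fx \leftrightarrow Fy ) \bigr]
```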
If accelerations cannot be distinguished at all, in any way, then they are all identical. They are the same, single acceleration (there is I suspect something fruitfully conciliatory in that notion for advaita Vedanta and for the Christian doctrines of Divine Simplicity and Omniscience – but tace that for the nonce). But of course they are in fact quite different, and so can be somehow distinguished, and so are not identical: some, e.g., are electromagnetic, while others are gravitational. Accelerations do of course all conform to Newtonian Law, and are insofarforth equivalent in that way, per Einstein. But they are no more identical than the quite disparate lights that fall on my eyes from my desk lamp, the window, and my monitor, even were those lights totally the same as to wavelength, amplitude and valence.
Excursus: Valence, see; can’t forget or omit that. Valence → ¬ equivalence. Particularity rules out improper reduction.
Excursus: Tat tvam asi; but, then, also, if neither this nor that, then no tat tvam asi. This can’t be that if there are no differences between this and that. I.e., this can’t be that (in one way) unless this is *not* that (in another). Interesting. Thanks, a.morphous. I would not have seen this, or that, if I had not been arguing with you.
Thanks, STA and all you other Scholastics, for teaching me about distinctions.
Actually, I was pretty careful to check out what you actually did say. I shall therefore stand by my characterization of what you said. You did in fact say:
Gotta be careful here at the Orthosphere.
Notwithstanding that, I understand what you were trying to say. But I think I responded to that, with some adequacy. Let me know, if not.
Sorry, I can’t help you with this. The philosophical and theological record is what it is. It disagrees with your impressions of it (which, I feel sure, were by you gained honestly, from the popular and profane discourse upon it that as profane knows but little of the sacred doctrines it discusses and deplores); so what? In respect to philosophical truth, common usage is indicative, but not dispositive. When we are doing metaphysics or theology, it behooves us to employ the technical terms of their discourse – such as “soul” – not under their vernacular meanings but under their precise technical meanings. It’s the same in physics, where it is inapt and confusing, e.g., to take “energy” to mean what the New Agers mean by it.
When vernacular usage of technical theological terms is investigated through the lens of orthodox theology, it is found to be entirely apt to reality, as doctrine comprehends it and as hoi polloi live it. When the Thomist doctrine of the soul is explained to untutored Christians, they have no trouble with it at all.
Materialists disagree with orthodox Christian anthropology because they do not understand it. They are like New Agers trying to read physics without learning what physicists mean by the terms they read. But humble Christians get it, right away.
Thus we arrive at an ironic agreement. When materialists understand materialism fully, they end by a faint recalcitrant echo of orthodox Christian metaphysics. A win! Are you not happy at this result? Is it not nice, to find that we agree about this?
Amen, amen. Like I was just saying. So nice to agree with you, my friend.
That’s the $64 question. I’m perfectly willing to admit that any congeries of systematically ordered entities could be intelligent, and even conscious – could, i.e., together constitute a person. The only, and crucial, qualification I would enter is that, if that were to happen – whether in respect to a brain, or an artificial computer – the intelligence that then made itself felt in the causal nexus would be, not artificial, and *not mechanical,* but natural, and organic. It would be alive, and would experience its life. That its life was manifest in registers of silicon rather than in registers of neurons would matter not at all. What matters is the formal configuration, and not so much the character of the particular sorts of constituents that form configures. Mangan, Tipler and Powers have each covered this topic exhaustively.
Much here depends upon our definition of “machine.” I take a machine to be an artifact of an actor that has no motive original in and from itself, but rather only in and from its author. An axe, e.g., is a machine: until I impart to it some agency from myself, it lies idle.
If machines are only and nothing but tools like the axe, then of course they can’t be intelligent. They can in that case be nothing more than tools, like an axe, albeit more complex. But if a thing – that once we called only a machine – were to generate outputs that *could not in logic be derived from the inputs of its artificers* – that, to be precise, *could not possibly have been produced by the programs they wrote for it or the data they put into it* – why then it would obviously be an agent in and of itself – i.e., itself a source of novel origination – and not entirely derived from the causal inputs that had been to it supplied ex ante by its artificers.
NB: if such a thing were to happen in a Turing test, then we might rightly infer that the machine thus tested was intelligent. But, NB: if such a thing were to happen in a Turing test, we would also rightly infer that the Turing machine thus tested was *not in fact nothing but a machine.*
It is this difference, exactly, between an entirely deterministic machine and an agent indeterminate ex ante, which the Turing test wants to overlook. The Turing test argues that if we can’t tell a difference between those two radically different sorts of things, then there just is no difference between them, so that they are the same sorts of things: machines are minds, and minds are machines. Prima facie, that seems to adhere to Leibniz’ Identity of Indiscernibles. But not so fast. If the Turing machine has acted in a manner altogether reducible to the effects of its programs and data, *then it is a machine,* despite all contrary appearances. If it has acted in a manner not altogether thus reducible, *then it is not a machine.*
The question then boils down to this: is there anything in the acts of minds that is not altogether and exhaustively caused by their mundane antecedents? If no, then minds are machines, period full stop. In that case, Kristor and a.morphous do not actually exist, and so we are not actually conversing here. We are in that case rather only deterministic routines running on physical substrates.
If yes, they are not. In that case, we two do indeed exist, and can parley here at the Orthosphere. Are we in fact talking to each other? Do either of us actually exist, so as thus to talk? It reduces at last to that. What say you? It surely seems to me that we do both indeed exist. Surely the contrary notion bears the greater burden of proof, no? Extraordinary claims call for extraordinary evidence …
This in a nutshell I think captures the controversy between us. You think that we are machines, and as such entirely – nothing more than – functions of our data and programs; whereas I think that we, and indeed all actual events, are not entirely functions of our data and programs, but are, rather, each ourselves occasions of ontological innovation, real acts that take our data systematically into account, but that then introduce something more, something novel, that was not present in our mundane data.
I can’t honestly see how you can obtain cosmogony in any other way than by such novel origination. No true innovation → no events → no cosmogony. But, cosmogony, ergo, etc.
Actually, I bet you are a lot closer to my perspective than to that of the eliminative materialist. You sure talk that way. Plus it’s only common sense.
I get it, but it just won’t do. This, in just the way that a man accelerated by a rocket is just not the same thing as a man accelerated by an elevator, even though their accelerations can be formalized in the same way. The argument seems to have some force, but in the end it just doesn’t.
That is a brilliant insight.
It is possible. Every component is synthetic. And every occasion of mind then synthesizes disparate component inputs.
The question really is whether it is possible to make an intelligent creature out of a bunch of stuff that is altogether unintelligent. It answers itself: no, it isn’t. A machine can’t be a mind. That doesn’t mean that a mind cannot be mediated in silicon circuits, rather than carbon circuits. It means only that you can’t get a mind out of mindless stuff, of any sort; whereas, from mindful stuff, you can assemble a mind.
Machine and mind are categoreally incommensurable. A mind can’t be a machine, nor can a machine be a mind. The difference is easy to see: machines can’t do anything by themselves, because they are nothing but tools of their artificers and users (and, so, *expressions* or *manifestations* of the minds of their artificers and users); whereas minds can. Thus if one of our computers were to become mindful, it would no longer be a machine. In like manner, if a lifeless brain were to be returned to life and so become again mindful, it would no longer be dead and therefore mindless.
That’s what the Turing test tries to get at, and fails. It wants to say that if a Turing machine were ever to act like a true intelligence, by doing something all intelligence does – by doing, i.e., something that could not have been predicted (even in principle) from its data and programming – why then it would be intelligent. Sure. But in that case, *it would no longer be a machine.* Sorry to belabor the point, but it is important, and warrants emphasis. Computers can *seem* to act like minds, but they cannot *be* minds unless they *are* minds, and so *not machines.*
Allowing for lots of terminological and ontological distinctions that cry out to be noticed, I think we here pretty much agree. E.g., humans are living persons, while machines are not. The “spark of the divine” enjoyed by the former is but the form of a living person. If a computer were to become such a person, it would no longer be a machine. It would no longer be entirely a function of its artificial causal antecedents, but rather free: able to originate acts at variance with the logical determinations of its data and programming.
It should; the chemical substrate of a person is neither here nor there. But that a person is subvened by chemistry does not entail that he should then be considered as a merely chemical *machine.* He should rather be considered as a free person, a more or less independent agent, a source of some jot of true and temporally unanticipated ontological novelty. If the actions of an entity are given entirely and completely in and by their chemical antecedents and components, then the entity has no acts of its own, and so simply does not exist. It rather amounts in that case *only* to its causal antecedents, and has neither actual existence nor therefore capacity to act. In that case, there is nothing of it that calls for explanation; it rather simply is not, and there is no more to be said about it.
But, in that case, there is nothing like a.morphous or Kristor.
Manifestly, there are both a.morphous and Kristor. So …
This stuff about identity is just stupid, sorry. I’ll stand by my analogy, which since it is only an analogy is not true or false, just insight-generating or not.
That is to say, an artificial acceleration is close enough to gravity to stand in for it for practical purposes, no metaphysical reasoning about identity required. And analogously to that, Turing is saying that an artificial intelligence that passes some human-complete task (conversation) serves the same practical purposes as natural intelligences.
Re what I said or didn’t say. You said that I “answer the question with assertions that we are only our bodies”, and to support that you quoted me as saying: “The question of whether or not a machine could be intelligent is not interesting, it’s obviously true, because we are such machines ourselves” conveniently omitting the next paragraph where I said “The question is are we only bodies, or Something More?”, and explained that this is a malformed question.
So, OK, maybe I’m being way too subtle for you lot; Mikvet made a very similar misreading of something I thought was clear. I am definitely not saying “we are only our bodies”. I’d never say that; it’s your projection of what you think I believe.
That is to say, you are arguing with the atheist that lives in your head, not me.
This question of what a soul is: you introduced the word, so OK, you can mean what you like by it: “we are formed as living animals”. Fine. If a soul is just a form, then it isn’t supernatural and no atheist or materialist has a problem with it. But then it says absolutely nothing about whether a machine can’t also have a form that engages in intelligent human-like behavior.
Not really, I don’t come here looking for agreement. And I suspect you don’t understand me well enough to say I am in agreement with you.
I find this truly mystifying; it doesn't make a lick of sense. There is nothing inherently different about the mechanical and the organic; that is the basis for all of science, certainly since Wöhler demonstrated the synthesis of urea in 1828 and destroyed vitalism. And there is in principle no difference between the artificial and the natural; a synthesized molecule of urea has the exact same effect as a natural one (in fact they are both synthesized, just through different processes).
I can't even understand what you are getting at. Are you saying that if we somehow build an intelligent machine, it somehow ceases to be artificial when it first achieves intelligence? Presumably not, but then what are you trying to say?
This is just another attempt to win the argument by definition. There are of course already machines with a degree of autonomy – a thermostat, a Roomba.
Why not? I mean, if you define machines as you do above, then sure, it’s trivially true, but that is not the definition that Turing was using or that I am using or is in common usage. And it’s an attempt to win an argument by altering the definition of terms, which is tiresome. It’s thinly disguised question-begging.
The whole point of cybernetics, computation and AI is that we have learned how to make machines that can do some things that seem tantalizingly mind-like. That’s not equivalent to proving that you can make a human-equivalent machine, but it’s suggestive of the possibility. To pretend that machines haven’t progressed beyond the hand-ax is kind of dumb, isn’t it?
Re eliminative materialism, I am pretty sure we have gone over this exact same thing multiple times in the past. Oh yeah to quote myself from 2019 (can’t find the comment now)
My dear a.morphous, as is not uncommon in our symposia – I’m working on a whiskey right now – it is clear to me that we are at once both basically at one in our fundamental notions, and also in their outworks completely at odds. It’s a funny thing.
Do not therefore, I beg you, take what follows as simply inimical. It is intended only as respectfully and honestly responsive. Albeit, not so as to overlook your manifest errors. I am sure you’d feel a bit cheated, and indeed disrespected, were I to overlook them.
OK, on to the fisking.
So Leibniz is stupid. OK. Get as far with that as you can, and good luck to you. I’ll bet on him, and not on you. Discovered the calculus lately, have you, or something comparable?
Your rejection of the Identity of Indiscernibles is right up there with your rejection of the Principle of Noncontradiction. Dude, if you reject these principles, you reject reasoning as such. Does that not worry you, a bit? Does it not make you feel rather … out on a limb, and sawing at it with all your might?
OK, sure. Analogy is not demonstration, to be sure. But it is indicative, nonetheless. If your analogy doesn’t work the way you want it to – as it does not – then that should tell you something.
So for all practical purposes, an elevator is just the same sort of thing as a rocket ship. Right? NASA will be glad to hear about that; they’ll be able to call up Otis and save a bundle of my money. An elevator to the Moon. Piece of cake! Cheap!
I’m not being facetious, despite my little rhetorical victory dance there. There are real differences between elevators and rocket ships – and planets – that differentiate the similar purely formal accelerations that they all effect. The concrete cannot reduce entirely to the formal, on account of its ineluctable particularity.
If I hit a nail with a hammer, it can have the same inertial effect as if the nail was hit by a meteoric pebble. But from the fact that the same inertial effect on the nail was achieved in both cases it simply does not follow that the causes of that effect were equivalent in every respect, and therefore identical, so that there is *no difference whatever between the two disparate events that had that same effect on the nail.*
A meteor strike is just not the same sort of thing as my hammer strike, despite the fact that they are both strikes, and have the same inertial result. This is obvious, no? What could be the reason of disputing it? Why should you quibble at it?
If you kiss your girl, is that the same thing exactly as a robot kissing your girl, or for that matter my kissing your girl, to the same effect in her? Are you and I and the robot the same thing identically? Of course not.
You see what I mean here? Probably not, I suppose.
Put it this way. The following syllogism is invalid:
1. X → Y.
2. A → Y.
3. Y.
4. ∴ X, or A.
Y would not entail that X, because perhaps A; and vice versa. But then, there might furthermore be other things than either X or A that could cause Y.
This form of invalid argument – of logical fallacy – is affirming the consequent. It is what the Turing test does. It affirms the consequent.
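You can check the invalidity mechanically, if you like. A minimal sketch (my own illustration; X and Y are bare propositional variables) that enumerates every truth assignment and finds the counterexample to “X → Y, Y, therefore X”:

```python
from itertools import product

def implies(p, q):
    # Material implication: p -> q is false only when p is true and q is false.
    return (not p) or q

# Affirming the consequent: premises "X -> Y" and "Y", conclusion "X".
# The form is invalid if some assignment makes both premises true
# while the conclusion is false.
for x, y in product([False, True], repeat=2):
    if implies(x, y) and y and not x:
        print(f"Counterexample: X={x}, Y={y} (premises true, conclusion false)")
```

Running it prints the single counterexample, X=False, Y=True: the effect Y obtains while its supposed cause X does not.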
Now, because you reject the Law of Noncontradiction, this basic logical fallacy might not trouble you. But it should.
Sure, of course. The Turing test presupposes all that. I get that, to begin with; thus, the original post, above. The question is whether it is justified in so doing. It is not. A hammer strike is just not the same sort of thing as a meteor strike, even though both strikes have exactly the same physical effects. Likewise, a faked conversation is just not the same sort of thing as a real conversation, even when they appear to be the same.
The Turing test presupposes that the Turing machine is generating only a convincing fake of a real conversation, and then suggesting that a sufficiently convincing fake *is the same thing* as what it fakes. It’s a foolish notion. “Hey, this *looks* like a diamond, even though it is nothing of the sort; but because it looks so much like a diamond, then it *is* a diamond.” Silly.
The Turing test proposes that we jettison all our hard won epistemological humility, caution, and skepticism.
OK, great, you agree with me. Nuff said re that.
Hey, I am not the one who came up with the notion that the soul is the form of a living body. It goes way, way back. Educated writers have used the term as I do for many centuries. It’s great, because as usual, and as you have here said, once you define terms properly, all sorts of controversies – such as the controversy between materialism proper and classical anthropology – melt away.
An assemblage of circuits of silicon *could* have a form such that it engages in intelligent behavior. It could, i.e., have a form such that it not only *appeared* to engage in intelligent behavior – this being the criterion of the Turing test – but in fact actually *did* engage in intelligent behavior. But then, an artifact that was thus alive – that, i.e., had a form that was the form of a living body – *would not be a machine.* It would be a living organism.
This should not be a troubling notion to an advocate of strong AI. The whole gist of strong AI is that we might devise an artifact that was in fact a living organism, and *not a mere stupid dead machine.* I do grant that we could in principle do that; after all, the rest of the cosmos did it in and by us. Why should we not be a participant occasion of the same sort of procedure that generated us?
We are agreeing here pretty hard, like it or not. The only remaining difference between us is in re the definition of “machine.” On Turing’s definition, a Turing machine cannot be a mind, for it is *nothing more than a routine predetermined ab initio by its own logic, its own nature and definition.*
It is not itself, i.e., an actual thing. It is, rather, a procedure, a program, *a form,* enacted by other things, who *are* actual.
So Turing machines as such are not and cannot be by themselves concretely real. They are rather only and ever formal. Their instantiations here or there in tape or silicon or for that matter neurons are neither here nor there; just as an instance in paper and leather of Hamlet is not itself Hamlet; the book of Hamlet on my living room shelf does not *do* anything; it is not itself a real agent, not an organism. It is a record, a trace or fossil of acts by organic agents, rather than itself such an agent.
So also with Turing machines. As only and merely formal, Turing machines – like the information encoded in the books on my shelves – likewise are not and cannot be themselves actual – i.e., cannot be in themselves really generative of novelty. Forms (howsoever encoded) don’t *do* anything, they just *are.* So, they can’t live, or therefore be organisms, or agents, or causal factors. They cannot contribute to reality, except insofar as they are the forms of concrete active agents, who then act in accord with their essential forms; with their natures.
But then, strong AI is not shooting for a Turing machine that merely *acts* like an organism. It is not shooting for a *pretense* of organism. It aims far higher. It is shooting for *an artificial organism,* that is therefore *not a Turing machine,* but rather – like us – *far more.* It is shooting for life.
Strong AI is not aiming for margarine. It aims for butter.
Apart from my prudential practical worry about the Faustian hazards implicit in such a project – most innovations are lethal – I have no doubt that it is possible in principle. After all, and again, the cosmos did it in and by us. Faust, call your office: your worst nightmares have already been realized in man; in, i.e., Faust himself, and likewise Frankenstein, and Nimrod (and, at the end of the day, and at least a bit, in a.morphous and his buddy Kristor). When it comes to Faustian moral hazard, Skynet has nothing on us. Skynet is just man fed back to man. It is a machine by which we injure ourselves with ourselves.
I think I understand you – or, at least, the notions you here characteristically propose – pretty well. And I think you understand me, by mine own proposals. I mean, come on: the ideas we each offer are not too complicated for intellects at more than 2 standard deviations, no? You are a talented and skillful writer. If I misunderstand you, it can only be because you have not yet here fully expressed yourself. So, unless one of us is prevaricating – are you a reactionary, LARPing as an opponent of reaction? – we must presume that we understand each other pretty well.
But, because we do in plain fact obviously understand each other, you retreat from any true grapple with the notions I propose, by way of an attack on my understanding. That is I must say rather a sad thing to see, from such a capable mind as yours. Have you nothing substantive to offer? If not, what does that tell you about your avowed commitments? How sturdy and reliable can they be, if this is the best rhetoric you can offer in their defense? Where’s the knock down logical refutation? Got one?
The organic is mediated by the mechanical, to be sure, and the organic and the mechanical implicate each other. But the organic does not reduce to the mechanical without jot of remainder. If it did, then the notion that the organic reduces altogether to the mechanical would be itself entirely mechanical, and thus not an organic motion in the first place, at all, properly speaking, but rather nothing more than a blind mindless mechanical motion. It could not in that case ever have occurred to a mind such as yours. Nobody would ever have thought it. Nor would they have thought anything else. There would in that case be no thought.
There are mindless motions – perhaps; there is no way we could tell whether that were so, or not – and there are mindful motions – those we can be sure of, for of such are ours. Whether or not there are such things as altogether mindless motions (I doubt it), mindful motions cannot be nothing but mindless motions. Because why? Because then – duh – *they would be nothing but mindless.* There would be nothing mindful in them, at all.
Mind *cannot* reduce to mindlessness.
If there are minds, then – as manifestly there are – they cannot be reduced to mindless machines.
The question then is whether there can be mental machines. On Turing’s definition of his machines, the answer must be no. Turing’s definition of his machines presupposes the excision of any jot of intelligence from their purely formal operations. How then their merely *specious* appearance of mental intelligence (when properly programmed by a mind) could suffice to a credible warrant of their *actual* mental intelligence is mysterious. Turing and most of his sympathetic commentators skate past this difficult bit of thin ice as fast as they can.
Turing is arguing that a really excellent fake probably just is what it fakes – or good enough for government work, anyway. This is just silly, a total nonstarter. The Turing test is a stupid idea, and we should all just drop it.
Excursus: Turing’s definition of his machines *presupposes* their mindlessness. Indeed, his machines cannot work as they are meant to do unless they are mindless.
“… as they are meant to do.” Turing machines must be operated by agents who are not themselves Turing machines and who mean to do something. For, Turing machines cannot mean to do anything. Indeed, they cannot move at all unless they are meant and urged to do something by others who are not themselves Turing machines.
If we organisms are mechanically mediated – as our sort of organism seems certainly to be – then it would seem that Turing’s formal definition of machines was too narrow to be adequate to reality, for Turing machines cannot mediate organisms such as Turing. We’ll need a more expansive definition of machines, so that we can be mediated in them. Or else, we’ll need to abandon the notion that we are machines. Note that these two options are tantamount to the same thing.
An artifice that becomes intelligent does not thereby cease to be artificial. But an artificial machine that becomes intelligent does thereby cease to be a machine. It is still artificial, but it is no longer a machine. Strong AI has been saying basically this for years. It’s not a new idea.
Intelligence is not mechanical, even when it is mediated mechanically, as we are.
If the proper careful definition of terms disposes of an argument, so much for the argument. I should think that as yourself a seeker of truth you would take that as a win for the dialectical procedure as a whole, and for the human search for understanding of truth. And so, then, happily, move on to the next topic.
O please. Give us a break, already. Thermostats and Roombas have no autonomy. They do not act. They put out the output of functions input to them by their artificers. They do not determine their own operations, any more than axes do.
The same would go for a quantum computer that was just a computer.
Honestly, a.morphous. I have been reading cybernetics since 1972. You can’t get this kind of thing past me, OK?
Turing’s definition is entirely formal, and so *cannot* define a mind; for, minds are not just formal, but also final, efficient, and material. Take any system of formulae you like – or, far more parochially, any program of operations on tape – no matter how vast and no matter how complex, it is not a mind. It is just a bunch of symbols, that can have no meaning or effect except insofar as they are cognized and implemented in and by some mind.
I’m happy to use Turing’s own definition:
Turing’s machine is *entirely formal.* The tape, and the medium of record of the table of rules, are entirely ex post instantiations of the Turing machine.
I get this, I really do. Strong AI seems credible to me, with the caveat that an artificial intelligence would not be a machine. Intelligence – life – does more than its causal inputs might have done, howsoever recombined. It generates novelty, that could not have been predicted from its past, even despite the fact that every such novelty was one possibility implicit therein, and so compossible thereto, thus ensuring cosmic coherence (and practical universal and actual repudiation of the Law of Noncontradiction … heh!).
As a fan of Prigogine, you should totally dig that.
Yeah. But then, I never suggested any such thing, did I now?
Like an axe, a computer is effecting the intentions of its operator. There is nothing untoward or scandalous in this realization, except to those who – mystified, perhaps, by artificial computation – think artificial computation categoreally different from artificial cutting.
Excursus: computation is intellectual cutting.
Like an axe, a computer is effecting the intentions of its operator, and not its own. So like the axe, it is entirely a tool of its operator.
Agreed. I know, shocking, right?
The problem is that normal materialism entails eliminative materialism. If thoughts reduce without jot of residue to their material substrate, why then there are no such thoughts, but only their material substrate. That’s what reduction *means.* If thoughts are nothing but the motions of molecules, then *thoughts are nothing but the motions of molecules.* They are not thoughts, i.e.; they are just and only motions of molecules.
What you reach for with your notion of “normal materialism” is not in fact materialism at all. Rather is it something closer to the ancient Aristotelian ontology, according to which material causes are but one sort, added to formal, final, and efficient causes. Your “normal materialism” wants to smuggle into its system a bit of formality and finality, to rescue itself from the stupidity of merely material causation. But most materialists, of any sort, have no notion of Aristotle. They don’t know that they are echoing him. So they labor on in the darkness toward their own discoveries of his insights.
I understand computers. Hell, I learned computers when the only input UI was Hollerith cards.
Understanding computers is inadequate to understanding minds. This is obvious to anyone who understands both computers and minds, and apparently impossible for all others to understand.
Sure. Aristotle again. And Whitehead. Whitehead at least understood his debt to Aristotle.
It seems to me that there is a parallel trend of opinion, generally held by the same people, that human beings do not think, but are simply demonstrating the occurrence of random, unconscious, deterministic epiphenomena, but that some of the machines manufactured by these human beings are capable of actual thought.
Yes. To believe that machines think, it helps to think of oneself as a machine. So you end up with eliminative materialism, at least implicitly; and, so, at the zero of thought.
Thought-provoking article. Thank you for posting. If the Turing test is not a good measure of artificial intelligence, what would you recommend in its place?
If you turned the machine on and waited and gave it no programming and then it cranked some output on its own, that would be a start. Of course, if that were to happen, it would indicate not artificial intelligence, but rather natural intelligence.
Fascinating. Someone might say a relationship with a robot would be an artificial one and a lie. Likewise, pairing “artificial” with “intelligence” would make it only an empty approximation of intellect. So artificial intelligence can only be a well-crafted lie. You convinced me.
It would be better to actually read the paper, since it challenges the notion of intelligence. Birds, bats, insects and pterosaurs all have wings; biologically they differ, yet they are still wings. Different causes but the same effect.
Turing’s first challenge was to distinguish between a man and a woman by asking questions; that seems to be quite a challenge nowadays.
How many people do you know who believe their dogs think and have human-like intelligence? A few years ago, people were impressed by a dog who got on a bus and got off at the doggie park. That does not demonstrate the critter’s intelligence, but rather it shows how little intelligence it takes to ride a bus. Surely you know people who cried at the end of E.T., demonstrating that they cannot distinguish between human intelligence and its simulation.
So like a wing, if you cannot distinguish between a human and a bot, then they are both “thinking” in some sense. If you are serious, Roger Penrose wrote a nice book or two about thinking about thinking; they show more understanding of the issues.
At best, AI can only simulate rational, logical thinking. However, the human mind transcends ratio with intellectus. That is where the refutation lies.
(I have a link to the paper in my Library.)
I think the onus is on the materialists to demonstrate that a machine can be fully human, not on transcendentalists to prove that they aren’t.
Perhaps a.morphous might explain to this simple-minded person why his assertion that a machine is intelligent in a human way must be assumed, and why it is up to me to prove that it isn’t?
Is this just another progressivist just-so diktat, such as we’re to accept that two people of the same sex can marry or that a man with his testicles still hanging must be accepted as a woman on his own say-so?
Really not sure what you are talking about. What onus? You can assume whatever you like, and you don’t “have to prove” anything, I’m not holding a gun to your head.
I guess this is just a kind of whiny way to say that you don’t share my assumptions about what is more plausible. That’s fair enough, but why put it in this form that somehow assumes I’m forcing you to do something? I have zero power here, other than that of argument and rhetoric, how then can I impose on you a “progressivist just-so diktat”?
Aside from that, this is a philosophical question, not math, the only area of human thinking that is really amenable to proof.
I guarantee that even if the materialists manage to make a machine that passes the Turing test, and exhibits emotion, self-consciousness, creativity, and all the other hallmarks of the human – there would still be people who would deny that it is a real intelligence and not a mere simulation of one. That is to say, even that would not constitute an ironclad proof that all would accept.
To quote yourself: “…there needs to be some systemic or structural reason that a machine that displays what seems a fully human intelligence should not be considered fully human.” That seems to be laying the onus on others to provide the reasons for what is only your assertion.
Um, why did you leave out the preceding qualifier? What I said was:
Which, if you are not a naturalist, implies no onus whatsoever.
I can’t tell if you are deliberately trying to distort what I am saying (pretty dumb when the full quote is right there), or really can’t read plain English.
I wonder if Turing was influenced in his thinking in these matters by his background in mathematical logic. Purely formal operations are one domain where two different representations *are* equivalent (at least in terms of their formal properties).
For instance, one can take the same arithmetical calculations that are written on paper and do them on an abacus and they are the same calculations.
Of course that is not true for things that have content and are not purely formal. Even the same algebraic equation is different if different numbers are substituted. And this idea really fails for something that has a subjective element, which we know thought does.
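A toy sketch of that formal equivalence (my own example, assuming nothing beyond grade-school arithmetic): the same sum computed directly, and computed by the explicit digit-and-carry procedure an abacus embodies, must coincide.

```python
def abacus_add(a, b):
    # Add two non-negative integers digit by digit with explicit carries,
    # the way one moves beads column by column on an abacus.
    digits_a = [int(c) for c in reversed(str(a))]
    digits_b = [int(c) for c in reversed(str(b))]
    result, carry = [], 0
    for i in range(max(len(digits_a), len(digits_b))):
        da = digits_a[i] if i < len(digits_a) else 0
        db = digits_b[i] if i < len(digits_b) else 0
        carry, digit = divmod(da + db + carry, 10)
        result.append(digit)
    if carry:
        result.append(carry)
    return int("".join(str(d) for d in reversed(result)))

# Two representations, one calculation: the results coincide.
assert abacus_add(123, 456) == 123 + 456
print(abacus_add(123, 456))  # 579
```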
Turing’s got it backwards. Thought isn’t a computer program; a computer program is a thought.
Another consideration is that a scientist will take a question he has no idea how to answer and modify it, reframing it into a question he thinks he can make progress towards. All well and good. Since scientists want to make *progress*, one does not want to spend all one’s time on a question that is too difficult.
But then, this practical decision is adapted into a metaphysical stance and the original philosophical question is declared to be meaningless because the scientist in question can’t see how to make incremental progress towards solving it. Well, that doesn’t follow.
I have not read Turing’s original paper, so I can’t say this is what he was doing, but it appears to be a common attitude among AI researchers that, because they can make incremental progress with the problems they have set for themselves and cannot make progress with the philosophical problems, those philosophical problems have in fact been superseded by the other problems. It’s a peculiar form of positivism. This attitude also seems to be common in physicists when their work impinges on philosophical matters.
Well, accurately and precisely said. Thus you can reduce a computer program to a thought, but not vice versa.
At a.morphous: in both your replies to me, you have directed personal insults at me. Rather than call me a whiner or insult my command of the English language, perhaps you might consider the possibility that the fault in communication might be at your end? The fact still remains – though perhaps you don’t comprehend what you’ve actually written – that, regardless of your qualifier, you are asserting that an assumption without proof is adequate for a naturalist. To put it another way, what would make sense is to say ‘that there needs to be a reason that a machine that displays what seems to be a fully human intelligence should be considered fully human’. The absence of ‘not’ in the sentence makes a crucial difference. Of course, maybe that only applies to the simple-minded, like myself.
You started this little interchange by accusing me of being some kind of progressive totalitarian bent on imposing my views by force, so I think my mild insults were not very out of line. You still have not backed up your accusation, btw.
But you are right, if there’s a failure in communication the fault could be mine. I said (with more context):
This is the opposite of putting an onus on you; it is offering you an easy out. You can be a supernaturalist or a naturalist. In the first case, you can explain anything you want by attributing it to God or gods, end of story. However, if you are a naturalist, you don’t have that easy option, and you need to explain how minds work in terms of other natural phenomena, like the structure and activity of neural tissue or the processes of evolution.
With its qualification, this shouldn’t even be a controversial statement; it’s really just defining two different stances towards the world. I guess the word “need” triggered you, but (a) it doesn’t apply to you, and (b) it’s a “need” driven solely by intellectual curiosity, not a moral imperative or government marching order.
Turing is saying, if you can replicate the structure and function of the mind using different materials, have you not replicated the human? If not, why not? Supernaturalists have an easy answer to this, naturalists do not. A curious intelligent naturalist seeks to understand how the mind works, and computation offers at least the possibility of some answers. But if you are a supernaturalist, why should you care? You have an “Explain everything for free” card in the game of thought.
I asked a question, outlining some implications of what you said, on an anonymous blog site, or whatever we call it, and now you’re accusing me of using ‘force’ against you. I’m saying no more – anyone interested in reading these posts can make up their own mind.
Where did I accuse you of using force against me? I said that YOU accused ME of wanting to use force (a “progressive diktat”) on others.
I think the philosophy will have to be tabled until we solve basic reading comprehension.
Serves me right for reading in a hurry, you actually said that I accused you of imposing your views by force. Right.
I know hardly anything about the Turing test, but I find the trolley analogy brilliant. Kudos to Dr. Chastek! Looking him up, I found out about the Chesterton Academy movement. It gladdens the heart, though it’d be better if they were TLM renegades. Maybe they are??? Anyways, many blessings to them all. Also, as a side note, I noticed the “house system” for the school (https://chestertonacademy.org/chesterton-student-life/house-system/) . . . love it. Two eastern and two western fathers — and fitting heraldry. After looking around the site, I sent this to my family:
“In three hundred years, will people realize that their (future-)contemporary schools have internal ‘houses’ (a medieval tradition) largely because of one woman’s work at the turn of the twenty-first century (Rowling)?
“I know that certain elite universities kept their house systems through the 20th century, but they were weakened and dismantled throughout the West. And it appears to me that that process stopped and reversed because of Rowling’s stories.
“But that’s just the sort of thing that time will forget (except for specialists with quirky niche interests).”
And that seems to be how history often works — as little rivers determine the landscape of mighty continents (thanks, Prof. Smith!), so such “trivial” occurrences (in the grand scheme) shape civilization.
This really must be my last contribution to this discussion. It’s getting tedious, and if we can’t agree on what a machine is, then we certainly will not agree on the more difficult and interesting question of whether they can think.
Or perhaps if you cast all these quibbles aside we would basically be in agreement, as you suggest. If so, great, then we can agree to agree and leave it at that. You did say “I have no doubt that it [AI] is possible in principle. After all, and again, the cosmos did it in and by us.” so yeah I’m not even sure we have anything to argue about.
I assume you’ve seen 2001, which features both artificial intelligence and artificial gravity (via centrifuge). The latter has been part of proposed design of space habitats and vehicles for a very long time.
I know you didn’t. Educated writers have also noticed that the notions of the soul as (a) the form of the body and (b) something that is immortal and can exist separately from the body are in conflict. And yes, I’m sure there is millennia’s worth of theology that purports to resolve this contradiction.
You are just playing around with fake definitions here, it’s a humpty-dumpty argument and I already addressed it.
Why not? This is question-begging, you are just asserting (over and over) the thing to be demonstrated, without giving any argument.
Why should it be a “reduction”? Organic processes don’t “reduce” to mechanical processes, they *are* mechanical processes, there is no magic remainder.
They have, as I said, a degree of autonomy and suggest that machines with a greater degree of autonomy are possible. I don’t think this is worth arguing about since “autonomy” is not well-defined, and we can’t even agree on what “machine” means.
I’d say you haven’t understood it very well. Don’t mean to be insulting, but it’s a founding insight of cybernetics that machines, organisms, and humans have important structural things in common (which is to say they are commensurable, if not identical).
[continued, I think this was too long so had to break it up]
It’s true that Turing’s definition is formal in the mathematical sense. But we have physical computers that can implement Turing machines, and are perfectly material, efficient, etc. That’s kind of the whole point, computers demonstrate that physical causation and semantic causation are not different things but aspects of the same thing (ok, that may be too much of a metaphysical leap. But they certainly demonstrate that you can *implement* semantic causation on top of mechanical causation, that’s just what computers *are*).
You do realize that this is sheer nonsense don’t you? It’s also really boring since it’s not making any substantive point, just playing definitional games. Why should I care if you have some idiosyncratic definition of “machine”? If I can make an artificial intelligence, why should I care what label you want to put on it?
Maybe the problem is you are confusing “machine” and “Turing machine”, but you seem to know the difference.
I think the only thing we really disagree about is whether a machine can be a mind. You think that by definition it can, I think that by definition it can’t. You are irked at my use of definitions. But if our definitions are not clear, then neither can our thinking be clear. If our definitions are incorrect, then we cannot reason from premises to true conclusions: we can’t know anything.
Excursus: Perception is a *logical* operation. It is an exercise of *reasoning.* We *conclude* that we see a circle, and not a rabbit, on the basis of *inference.* Perception then is applied philosophy. If our concepts as encoded in the structure of the neural control systems of the CNS – e.g., in the pattern recognition circuits of the visual cortex – are inconsistent with each other or inaccurate, our perception will be muddled, confused, mistaken. We’ll be more or less insane. This is why cognitive dissonance – confusion – is a sort of pain for us, and a source of anxiety: we are built so as to drive out inconsistency and inaccuracy.
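A toy rendering of that claim, with wholly made-up numbers: the percept “circle” is the conclusion of an inference over encoded concepts, drawn from a noisy feature of the visual field.

```python
# Toy model: perceiving is concluding. Given a noisy feature, infer
# which concept ("circle" or "rabbit") is seen. All numbers here are
# invented purely for illustration.

priors = {"circle": 0.5, "rabbit": 0.5}
# Probability of observing a round contour, given each concept:
likelihood = {"circle": 0.9, "rabbit": 0.2}

def perceive(round_contour_seen=True):
    """Return the posterior over concepts: perception as inference."""
    unnorm = {
        h: priors[h] * (likelihood[h] if round_contour_seen else 1 - likelihood[h])
        for h in priors
    }
    total = sum(unnorm.values())
    return {h: p / total for h, p in unnorm.items()}

print(perceive())  # {'circle': 0.818..., 'rabbit': 0.181...}: we "see" a circle
```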
You make my argument. Centrifugal force is obviously not the same thing as gravitational force, even though they might have the same inertial effects upon a body. Likewise, that a machine *appears* to be a mind does not suffice to demonstrate that it *is* a mind.
Correct. And it succeeds. The apparent contradiction vanishes when we use crystal clear definitions and distinctions. The key distinction, to my mind, is to be found in the recollection that the soul is defined as the form of a *living* organism. It is then first the form of a *life,* and only second the form of a *carnate* life. The question then shifts to whether there can be lives that are not carnate. And there are good reasons to think there can be such lives – reasons not only metaphysical and anthropological, but also empirical (viz., the many accounts of out of body experience).
But even were it otherwise, since when have you been bothered by contradiction? But then, come to think of it, a man who does not credit the Law of Noncontradiction can engage in the inconsistency of caring about contradiction without caring about that inconsistency. So convenient! So easy!
I don’t know what a humpty-dumpty argument is, and obviously it is not clear to me that you have addressed this argument; not adequately, anyway.
You’ll have to clarify what is fake about the definitions and arguments I’ve employed. Just pointing at them and spluttering and calling them names won’t do. What exactly is wrong about them? Again, it won’t do just to repeat that they are wrong. You’ll need to *demonstrate* that they are wrong, if you want to convince anyone to take you seriously. NB: “I think differently” is not a demonstration.
If you define a mind as a Turing machine – this being, mind, just *your definition,* and not the usual definition, nor my own definition – why then obviously a Turing machine can be a mind, and there is nothing more to be said. But if you define a mind as a subject of experience of its environment, that can act in ways underdetermined by that environment, then obviously a Turing machine cannot be a mind – albeit, that its motions can of course be extended *operations* of a mind, in just the way that the motions of an axe are an extended operation of the axman.
Turing machines are utterly determined in their operations by their causal antecedents. Their motions are *entirely* functions of their causal inputs – the tape, the tape reader, the holes in the tape punched by the taper (as programmers once were called), and so on. In this respect, Turing machines are just like an axe. They are implements of those who do have lives, and nothing more.
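To make that determinism vivid, here is a minimal sketch of such a machine – a toy, with an invented transition table, but faithful to Turing’s definition in the one respect that matters here: the next configuration is a pure function of the current one.

```python
# A toy Turing machine that flips 0s and 1s until it reaches a blank.
# The moral: given the same tape and the same table, the machine's
# entire career is fixed before it starts.

# (state, symbol read) -> (next state, symbol written, head move)
TABLE = {
    ("flip", "0"): ("flip", "1", +1),
    ("flip", "1"): ("flip", "0", +1),
    ("flip", "_"): ("halt", "_", 0),
}

def step(state, tape, head):
    """One step: the output is entirely a function of the inputs."""
    next_state, write, move = TABLE[(state, tape[head])]
    return next_state, tape[:head] + write + tape[head + 1:], head + move

state, tape, head = "flip", "0110_", 0
while state != "halt":
    state, tape, head = step(state, tape, head)
print(tape)  # -> 1001_
```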
Understanding what a Turing machine is, I don’t see how you can disagree with the foregoing.
NB then: the accusation that “you are just playing with definitions” therefore cuts both ways. We are *both* offering definitions. You don’t like my definitions, and I don’t like yours. The question then is, which set of definitions looks more like reality?
On that score, a physicalist like you has a much tougher row to hoe than a realist like me. The physicalist says to us all, “you don’t actually do anything, all your acts are programmed.” But that’s just not how life feels. It is false to empirical fact. Physicalism asks us to disregard *all our experience* in favor of a theory that prima facie – and secunda facie, and tertia facie – radically disagrees with reality as we encounter it at every moment of our lives.
Dude, that’s what “reduction” just *means* in this context. To say that organic processes *just are* mechanical processes *just is* to reduce organism entirely to mechanism. And that is to eliminate organism as a real category. This is why materialism tends to eliminative materialism. Eliminative materialism, which as you have said is just stupid, eliminates organisms, minds, indeed even waves and tides and currents and vortices; it eliminates the Tao, writ large and small.
NB also that the total reduction of organism to mechanism which you have here invoked eliminates the recourse to emergence favored by materialists properly wary of the eliminativism that is implicit in their reductions, and that threatens to suck them in. On your sort of total reduction, there are no features of things that emerge once they attain a threshold level of complexity. Indeed, a consistent eliminative materialism must eliminate also such things as thresholds and complexity. But enough about that for now.
Thermostats and Roombas do not determine their own motions. The programs built into them wholly determine their motions. Like an axe, they do not themselves act. They are to be sure automatic: the government of their motions is built into them, like the governor on a steam engine; so that they don’t need to be moved by a human agent at every moment of their operations, the way an axe does. But thermostats and Roombas and governors on steam engines don’t make things up as they go. They are not underdetermined by their causal antecedents. They are *totally* determined by their causal antecedents.
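A sketch of the point, assuming illustrative figures for the setpoint and deadband: the whole “behavior” of a thermostat fits in a few lines, none of them chosen by the thermostat.

```python
# A thermostat's governance in a few lines: automatic, but the rule it
# follows was put there by its maker. Setpoint and hysteresis values
# are illustrative assumptions.

SETPOINT = 20.0    # degrees C, chosen by the artificer
HYSTERESIS = 0.5

def thermostat(temperature, heater_on):
    """The command is wholly a function of its causal antecedents."""
    if temperature < SETPOINT - HYSTERESIS:
        return True           # too cold: call for heat
    if temperature > SETPOINT + HYSTERESIS:
        return False          # too warm: stop heating
    return heater_on          # in the deadband: carry on as before

print(thermostat(18.0, heater_on=False))  # True
print(thermostat(22.0, heater_on=True))   # False
```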
On the contrary, “autonomy” is quite specifically defined. It means: giving the law to oneself – from the Greek autos, self, and nomos, law – so that a thing is autonomous only insofar as it moves according to its own rule.
The Roomba moves according to the nomos of its artificers, and not according to its own rule. It is not autonomous. It is automatic.
As for the definition of “machine,” I’ll take Turing’s definition for the purposes of this discussion. A Turing machine is a sort of machine. Not all machines are Turing machines. Although there is a case that they can be construed as such. I’ll get to that some other time, perhaps.
Well, of course. Duh. This goes without saying, no? But it is not the least bit dispositive of anything here under discussion. Indeed, it is not even indicative. The question at odds between us arises in the first place only on account of those important structural things that humans, machines and organisms have in common. Those commonalities are fetching to properly parsimonious thinkers. The question though is whether those commonalities are completely dispositive, so that organism reduces without jot of remainder to mechanism. I suggest that they are not. Physicalists such as yourself suggest that they are.
Again: physicalists such as you have a tougher row to hoe than realists such as I, because *all human experience agrees with realism,* and *no experience whatsoever agrees with physicalism.* Thus *any experience whatever* is a defeater counterexample to physicalism. Then experience per se contradicts physicalism.
That’s not a strong basis for a metaphysical system. And make no mistake, physicalism is a system – a deluded system, impossible under its own terms – of metaphysics.
Michelangelo’s David has important structural things in common with a living decathlete. But … well, you get the drift, right? Impressive and beautiful though they may be, David and the machine are *not alive.* The decathlete and his mind are *alive.* The living decathlete and David are alike in some ways, but their differences are such as to render them categoreally incommensurable. They are altogether different sorts of thing.
So likewise with the mind of the decathlete and the program of the machine that pretends to be a mind, but is really a mindless routine.
Ah, but are Turing machines finally caused as well? Are they *about* anything in and of themselves, or do they just roll along blindly – i.e., *unintelligently* – according to the programs that, according to the intellects of their programmers, *are* about something that matters to those programmers? That’s the $64 question.
Turing machines can be really implemented in physical reality. But that does not mean that they are about or intend anything in themselves and apart from the intentions of their programmers. The final causation of the operations of Turing machines derives, not from the machines themselves, but from the intentions of their programmers.
Here we are very close to agreement. But NB that the semantic causation is, as you say, “on top” of mechanical causation. It *supervenes* upon mechanism, but is not entirely reducible thereto. That is why we are able to differentiate it from its mechanical substrate, and specify it with a different term. It is semantic, and thus *just not quite the same thing* as mechanical.
The machine qua mechanism does not mean anything, is not about anything. The semantic character of its operations and outputs derives entirely from the semantic character of the operations and inputs of its human operators.
To the extent that an artifact did begin to exhibit intelligence of its own that was not entirely derived from the intelligence of its artificers – i.e., that could not, in logic, have been generated by operations of the programs coded in it by its users, so that it was *demonstrably not just a Turing machine* – it would to the extent of that intelligence be intrinsically semantic, as well as mechanical.
In that event, the intelligent artifact would not consist of two things, one of which was mechanical and the other semantic. It would be, as you say, one thing, that had both semantic and mechanical aspects. It would in that respect be like us.
I get that you don’t like my definitions. I reciprocate. You think minds are just and only machines – i.e., you are an eliminative materialist, even though you think eliminativism is “stupid.” So you think that my talk of minds as different from machines is just nonsense, by your definitions of mind and machine.
I on the other hand think minds are not wholly reducible to machines, so that they are not machines, but something else.
That’s a difference of definitions. It is the whole issue between us.
If you want others to credit your definitions, and thus your overall eliminative materialist thesis, you are going to have to show how they agree better with reality as we actually experience it than the definitions men have always used for “mind” and “machine.” Your problem, again, is that your definitions of those terms are radically at odds with human experience as such. Whereas mine agree perfectly with human experience as such. Your definitions ask us to believe that we are nothing like what we experience ourselves to be. Mine do not.
Your problem is that I *don’t* have an idiosyncratic definition of “machine.” My definition is the one that men have always used. A Turing machine is a special sort of machine that runs programs, but in the wholly determined character of its operations it is just like all other machines.
It is not my definition of machine that is idiosyncratic, but rather your definition of mind.
I’m honestly perplexed that you do seem so much to care. You care enough to have worked hard on your comments in this thread. Your definitions of “mind” and “machine” seem super important to you. My hunch is that you care because you *really want eliminativism to be true.* You *really want to be confident in the notion that minds just are machines.* I’m not sure what that notion achieves for you. Excuse from moral culpability for your acts? It’s a puzzle.
Excursus: Modernism appears pretty consistently in minds as an amalgam of several philosophical notions: positivism, materialism, physicalism, nominalism, liberalism, and atheism. There may be others. It is interesting that, on any one of those notions, there can be no such thing as moral culpability. Modernism then looks like a retreat from morality, and so from responsibility, on every philosophical front. Implicitly, modernism makes shame and guilt inapposite to reality. That might account for its memetic success.
I said I wouldn’t continue this, but I can’t resist taking apart one of the points of your last message. You said:
Who is saying that gravity and centrifugal force are “the same thing”? Obviously they are different things, but the point is that one is so functionally similar to the other that it can be substituted and works just as well for practical purposes. They are different mechanisms, but the force they produce is the same (or close enough). Nobody on the spaceship is going to complain that the artificial gravity is not real, because it *is* real – it’s real acceleration, and that’s all that they care about.
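For concreteness, the arithmetic of the substitution – centripetal acceleration is a = ω²r, so you just solve for the spin rate that yields 1 g (the habitat radius here is an assumed figure, for illustration only):

```python
import math

# Spin rate needed for a rotating habitat to imitate Earth gravity.
# The 100 m radius is an illustrative assumption.

g = 9.8      # m/s^2, the acceleration being imitated
r = 100.0    # m, assumed radius of the ring

omega = math.sqrt(g / r)           # rad/s, so that omega^2 * r = g
rpm = omega * 60 / (2 * math.pi)   # revolutions per minute

print(f"{omega:.3f} rad/s, about {rpm:.1f} rpm")  # 0.313 rad/s, about 3.0 rpm
```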
I know you aren’t too stupid to understand an analogy, so it appears you are being deliberately dense here. But if I have to spell it out – the analogous situation is that a computer capable of simulating a human *is just as good at being a mind* as a human. Mind in this analogy is like acceleration, capable of being produced by quite different mechanisms but ultimately the same thing.
An analogy of course does not prove or disprove anything. But your insistence that gravity and centrifugal force are “not the same thing” means you don’t understand the very basic nature of what I’m saying. And I see I’ve made multiple attempts to say the same thing in these comments, and it’s getting extremely repetitive and wearisome. If you won’t engage with what I am saying, then yes, I’m losing whatever motivation I might have had for continuing.
There are a number of interesting questions you raise, like: how do we reconcile our material (mechanical) nature with our experience of ourselves as autonomous actors? I don’t think we are capable of having an interesting discussion about it, however. Our assumptions and vocabulary are too different, as is the basic question we are interested in.
If I can summarize your stance: you think you have a very clear idea of what minds are, what machines are, and that they are radically incommensurable, end of story. This produces some absurdities, like the assertion that we can make artificial intelligences but they won’t be machines, but you don’t seem bothered by that.
My stance on the other hand: we don’t really know what minds are or what machines are capable of, and it is (at minimum) interesting to try to explore the areas where they overlap. We will get better ideas about what minds are and how they work. We will understand ourselves better.
Thanks, a.morphous. I understand your frustration, and I am grateful that you have soldiered on in spite of it. I have learned a lot from responding to your comments.
The basic problem with the Turing test is that it affirms the consequent. That is a logical fallacy. The archetypal exemplar of affirming the consequent runs:
1. All men are mortal.
2. Rex is mortal.
3. Rex is a man.
The problem of course is that both the premises could be true, and the conclusion could nevertheless be false, so that it did not follow from those premises: Rex might be a dog.
The valid form of the argument – affirming the antecedent – runs:
1. All men are mortal.
2. Rex is a man.
3. Rex is mortal.
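Fittingly, a machine can be made to check the two forms mechanically. Here is a minimal sketch that brute-forces every one-individual world; the predicate names are illustrative only. A form is invalid just in case some world makes all the premises true and the conclusion false.

```python
from itertools import product

# Brute-force countermodel search over one-individual worlds. Each
# world assigns truth values to Man(rex) and Mortal(rex).

def countermodels(premises, conclusion):
    worlds = [{"man": m, "mortal": t}
              for m, t in product([True, False], repeat=2)]
    return [w for w in worlds
            if all(p(w) for p in premises) and not conclusion(w)]

all_men_mortal = lambda w: (not w["man"]) or w["mortal"]   # Man -> Mortal
rex_is_mortal  = lambda w: w["mortal"]
rex_is_a_man   = lambda w: w["man"]

# Affirming the consequent: a countermodel exists (Rex the dog).
print(countermodels([all_men_mortal, rex_is_mortal], rex_is_a_man))
# -> [{'man': False, 'mortal': True}]

# Affirming the antecedent: no countermodel; the form is valid.
print(countermodels([all_men_mortal, rex_is_a_man], rex_is_mortal))
# -> []
```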
Casting the Turing test into a syllogism, it runs:
1. All minds can carry on convincing conversations with humans.
2. Properly programmed Turing machines can carry on convincing conversations with humans.
3. Properly programmed Turing machines are minds.
The first premise is a bit controversial, because animals have minds, but never mind that: the form of the argument is invalid. The conclusion might be true, of course, but it just does not follow.
You also said that a computer capable of simulating a human “is just as good at being a mind as a human.” Let’s cast that into a syllogism:
1. Minds can simulate human minds.
2. A properly programmed computer can simulate a human mind.
3. A properly programmed computer is a mind.
Again, one of the premises is problematic – dogs have minds, and cannot simulate human minds – but it doesn’t matter. The argument is invalid. The conclusion might be true, but it just does not follow from the premises.
Actually, the way you put it in the passage just quoted has an additional problem, for it presupposes the conclusion as an effectual premise, so that it is circular: for a thing to be “just as good at being a mind as a human,” it must be a mind in the first place. A thing that is not a mind at all can’t be either good or bad at being a mind.
I get what you are trying to do here, I really do. I am not unsympathetic. I agree furthermore that mind can in principle be instantiated, and realized, in and by various material substrates. And I agree that mind is the same sort of thing regardless of its material substrate.
But I disagree that mind can be reduced to mechanism.
Excursus: I think indeed that even matter – and, ergo, the material substrates of corporeally realized minds – can’t be reduced to mechanism. I think mechanism is fundamentally inadequate as an account of reality – of *physical* reality. But that’s a topic for some other day.
Among other things, minds are agents: they can act in ways not reducible in toto to motions or output or functions of their causal antecedents; which is to say that they can originate novel motions not first implicit in the motions of their causal antecedents.
Excursus: Notice that this character of minds does not require that they be conscious, or even aware.
Minds are also actual entities in their own right. Actuality comes along integrally with agency. A thing that does not actually exist cannot itself do anything, because it isn’t there in the first place. And a thing that cannot itself originate novel acts, but rather moves only and entirely as a function of its causal antecedents, is not an actual entity. It is a work or assemblage of other entities, that do actually exist and act. Thus an axe for example is not an entity: it can do nothing on its own, and can move only insofar as it is moved by another. A thing that *does* move in ways that are not wholly reducible to effects of its antecedents is then probably a mind.
Turing machines are quite definitely – according to Turing’s own definition – not agents, and so not actual entities. Like other machines – simple machines like axe heads and complex machines like clocks or adding machines or the Antikythera mechanism – their outputs are entirely functions of their causal antecedents. Thus while some complex machines might be able as properly programmed by a sufficiently sagacious taper to generate simulacra of mental acts, so long as their motions are entirely reducible in logic to functions on their inputs – as is certainly the case with Turing machines – they are not agents, so not actual entities, and so not minds.
I don’t feel as though my idea of what minds are is very clear. But I do feel that it is clear that my own mind is not in its motions entirely a function of its causal antecedents. Which is to say that I feel I am not entirely determined by my causal antecedents, but rather to some degree always free to act as I would. I feel that I act; and in order to act I have to exist actually; lo, I find that I feel that I actually exist. I can’t feel otherwise. And this my feeling of what it is to be a mind is the only information I can possibly have about what it is to be a mind; of what minds are, and how they work. Every other mind is in the same situation as I, epistemologically. So, to propose that minds are mechanical – that their motions are entirely implicit in and determined by the motions of their causal antecedents, so that they don’t actually do anything, or therefore exist – is to contradict *all the evidence we can possibly have about what minds are.*
Excursus: Indeed, we ought at all times to remember Hume’s demonstration that we cannot adduce empirical evidence of any sort sufficient to the conclusion that *any* contingent motion is entirely a function of its causal antecedents. The notion of causation as such then – of anything, by any other – is a logically unwarranted albeit reliable inductive inference to the best explanation of what we experience.
I’m relying on Turing’s definition of Turing machines. It seems pretty clear to me.
The motions of Turing machines are entirely functions of their causal antecedents. So, they are not at all like what I know mind is like, from my experience of being a mind.
The proposition that artificial intelligence can’t be merely mechanical seems absurd to you only on your presupposition that merely mechanical cybernetic machines *can* be minds. But if minds are categoreally different from machines, it isn’t absurd, but rather, obviously true. The question then is whether minds are the same sorts of things as merely mechanical cybernetic machines. I say no; you say yes. You have a tougher row to hoe, because your assertion contradicts all human experience of being a mind, whereas mine agrees with it.
I don’t disagree with that at all. We know some things about minds, but it seems they are incorrigibly mysterious at bottom, like all the other basic ideas. We understand quite well what machines are, but by no means do we know what they are capable of doing. It is definitely interesting and fruitful to explore the similarities between minds and cybernetic machines. That effort can help us understand minds better.
This post and the discussion in the comments were helpful in understanding AI better.
I believe there are two questions in AI: the philosophical question of whether it is possible to make a conscious artifact, and the practical question of how successful current AI researchers can be at making machines that imitate human behavior.
I think part of the contention between philosophers and AI enthusiasts is that the AI enthusiasts mix up the two questions. Their actions show that they aren’t really interested in the first question: their goal is to find clever methods for making computers mimic humans. But for whatever reason, they want to have their cake and eat it too: they want to claim they have solved the philosophical problem as well.
I write some more about this here: https://nolongerreading.blogspot.com/2022/01/the-two-questions-of-ai.html