I would like to thank Robot Philosopher for providing this mental workout and for keeping me honest by pushing back on various assertions. The argument continues with a quotation from me:
“What on earth would “a good life” mean for a mindless automaton with no free will? Or, if it has a mind, a mind that is trapped within the automaton with no ability to alter a single thing about its life?”
Again, experience matters. Whether or not we make actual choices, we still experience good or bad. We didn’t decide what we experience as good or bad, but we still DO experience. With me so far? Not complicated.
- You are not directly responding to the two rhetorical questions I asked. I suggest they are immensely powerful.
- What makes a good life good is a far-reaching topic. Some suggestions from the past are that a good life is a flourishing life where we realize our human potential. We develop friendships, engage in self-chosen projects, make mistakes, learn from those mistakes and act differently next time, and we fall in love and have children. Without agency, none of those things are possible. If I decide to cultivate someone as a friend, it is important to me that it is me making that decision and also that their friendship is voluntary. If it is compelled and, for instance, someone is forcing them to be friendly, or perhaps bribing them to spend time with me, then that would ruin what was good about any of it.
- If determinism is true, I cannot actually do anything. If consciousness, and thus experience, is supposed to exist, then under determinism not only must I be completely passive with regard to the world; even how I react to these experiences has been determined by something else.
- Imagine there is someone you love. I, for instance, love my son. If I had the choice (a real one) between him dying or him living in the universe as you describe it, I would prefer that he die. I would have to say to him, “Look, Nick. Your consciousness can continue on but not in any way that you are currently familiar with. It will be completely passive. It will be a kind of bare awareness. You will lack all agency. Someone or something else will make every single decision about every single aspect of your life, from the most trivial to the most important. You will not get to choose who you will date, or whether you will have sex with that person either. Should you have sex, it will effectively be rape, both on your part and that of the person you are having sex with, since consent is meaningless in this context. You cannot decide on your job, or your friends, or whether to play a musical instrument, or to look at a sunset. Not even whether, or when, to brush your teeth. No element of your existence will be too trivial for someone or something else to decide for you. Prisoners in jail will have infinitely more freedom than you ever will. But it gets worse…”
- Stoicism, particularly the kind espoused by Epictetus, is an extraordinarily passive philosophy. Epictetus had been a slave, though he was later freed. As a slave, he needed to find a way of coping with a life he had no control over. He suggested treating life as though one were playacting, interestingly, in the manner of schizophrenics who feel this way spontaneously. Epictetus wrote that if one finds that one is poor, short, and ugly, one should be the best poor, short, and ugly person one can be. Good actors are only too happy to stick on an ugly nose and play the role they have been assigned. The trouble with this view of things is that it is too passive. The Serenity Prayer says, “O God and Heavenly Father, Grant to us the serenity of mind to accept that which cannot be changed; courage to change that which can be changed, and wisdom to know the difference, through Jesus Christ our Lord, Amen.” The prayer distinguishes between things we cannot change and things we can. Epictetus does not. He treats all things as things he has no control over, presumably as an adaptation to living a life of slavery. Most of us are not slaves, so this attitude would be foolish.
- But, the one thing we can change, and thus have agency over, is how we respond to life experiences, and Epictetus has some very interesting and profound things to say on that topic. He suggests, for instance, that if someone calls you insulting names, you should respond in a positive manner and never get annoyed. If the insults are false, you can merely point out that they are false, or you could say, “that is not true, but I have other defects you have not mentioned yet,” and then list some. If the insults are true, you should simply own up to them. How can someone saying something true about you harm you? If you are male and 5’4” tall and someone calls you short, they have not harmed you. If you are 6’4” and someone calls you short, you can simply point out that they are mistaken. If you find yourself getting annoyed, then this person has identified a wound, a sensitive spot, in you, and you should thank them for providing this educational experience.
- Under determinism, the level of passivity reaches its maximum. Not only can you not decide anything at all about how to live your life and who to befriend, but you cannot even choose how you react to anything. You cannot follow Epictetus’ advice about how to respond to insults because you can’t do anything. You are a mannequin, a marionette, with something akin to a photoreceptor cell attached recording experiences.
- Imagine you take a date out for a meal and you find her enchanting, delightful and interesting. Under determinism, none of that has any meaning. You did not choose to take her out, and you did not choose to find her enchanting. The latter point is complicated because as with preferences, perhaps only a relatively small part of finding someone charming is voluntary, but as Winstonscrooge says, we only need that small part for free will. Regardless, your response to your date has nothing to do with you per se, but is merely part of a chain of causation.
- Under determinism, you could be taking Miss Piggy from the Muppets out, not because you wanted to, and deterministic forces will make you feel her to be the most wonderful person you have ever met; or not.
- The life of an automaton with no ability to alter a single thing would be hell on earth. Euthanasia for such an entity would be the greatest kindness.
- Our programming decided what the experience would be, under determinism. Then our programming decided what “we” would think about that experience. The “we” of course being a bunch of circuits indistinguishable from a robot.
We can quite easily make a program with machine learning, these days. In such programs, we provide a reward for achieving tasks we wish them to achieve, which reinforces the behaviors the code performed. If they fail, there is either a “punishment” or no change – I forget.
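The “reward” mechanism described here can be made concrete with a minimal sketch (a toy epsilon-greedy learner of my own devising, not any particular library): the “reward” is just a number that nudges stored estimates upward, and the “punishment,” or no change, is just a zero or negative number.

```python
import random

# Toy "reward"-driven learner (an illustrative sketch, not a real ML library):
# a numeric signal nudges per-action value estimates up or down.
def train(reward_of, actions, steps=1000, epsilon=0.1, lr=0.1, seed=0):
    rng = random.Random(seed)
    value = {a: 0.0 for a in actions}          # learned estimates, all start at 0
    for _ in range(steps):
        if rng.random() < epsilon:             # occasionally explore at random
            a = rng.choice(actions)
        else:                                  # otherwise pick the best-valued action
            a = max(actions, key=value.get)
        r = reward_of(a)                       # "reward" is merely a number
        value[a] += lr * (r - value[a])        # nudge the estimate toward r
    return value

# "Reward" the behavior we wish to reinforce; give the other behavior 0.
vals = train(lambda a: 1.0 if a == "fetch" else 0.0, ["fetch", "idle"])
```

Nothing in this loop wants, feels, or is incentivized by anything; the word “reward” names a floating-point number and an update rule, which is the point at issue below.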
- The use of the term “reward” in this context must be metaphorical to the highest degree possible. It is an essentially mind-dependent phenomenon involving incentives. I’ll come back to this shortly.
- On the philosophical significance of machine learning, the following is from A.I. and the Dehumanization of Man:
- Following algorithms does not require real understanding. The instructions are provided by the programmer and the machine does what it is told. The phrase “machine-learning,” however, sounds like some way out of this dictatorship of the programmer.
With bottom up algorithms, a procedure is laid down for the machine to “learn from experience.” The system must be run many times, performing its actions on a continuing input of data with the rules of operations being continually modified in response.
The goal of this “learning” has been clearly set in advance, e.g., to identify a person’s face, or a species of animal, or to diagnose cancer from an X-ray. After each iteration, an assessment is made and the system is modified with a view to improving the quality of the output. This assessment requires that the correct answer is known beforehand.
But “the way in which the system modifies its procedure is itself provided by something purely computational specified ahead of time.” This is why the system can be implemented on an ordinary computer. In real human learning from experience, no one knows what he will learn in advance. Nobody would call being told beforehand what the right thing to think was “learning from experience.” That would be learning from someone else. And when it is added that exactly how you will use data to reach this foregone conclusion is also determined by someone else, this resembles learning by rote. Real learning from experience has no goal known in advance, nor is it set in stone how learning will take place. The phrase “machine learning” sounds like the machine will be learning by itself, and reaching its own conclusions. It is not. Neither the conclusion, nor how the conclusion will be reached, is self-determined. Real learning from experience is very interesting because no one knows what conclusions experience will teach. That is why it would be fascinating to consult one’s eighty-five-year-old self about many topics: no one knows what life lessons will have been learned by that point in one’s life.
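The “bottom-up” procedure just described can be sketched with a toy perceptron (a hypothetical illustration, not any particular framework): the correct answers are fixed in advance, and a purely computational modification rule, also fixed in advance, adjusts the system after each assessment.

```python
# Sketch of "bottom-up" learning: the right answers (labels) are known
# beforehand, and the rule for modifying the system is specified ahead of time.
def fit(samples, labels, epochs=20, lr=0.1):
    w, b = [0.0] * len(samples[0]), 0.0
    for _ in range(epochs):                       # "the system must be run many times"
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred                        # assessed against the known correct answer
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err                         # modification rule fixed in advance
    return w, b

# "Learn" logical AND — whose right answers we supplied ourselves.
w, b = fit([(0, 0), (0, 1), (1, 0), (1, 1)], [0, 0, 0, 1])
```

The weights are the “memory of previous performance” Penrose mentions below; everything else, including what counts as success, was laid down by the programmer before the first iteration.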
Penrose writes that the key distinguishing feature of bottom up programming as opposed to top down is that “the computational procedure must contain a memory of its previous performance (‘experience’), so that this memory can be incorporated into its subsequent computational actions.” It should be noted that mistakes and machine learning are inextricably interlocked. Without “mistakes” the system could not function. It is only with top down programming that computers have significantly outperformed humans, in tasks such as numerical calculation or in calculating games like chess.
Machine learning does not really change anything and neither does parallel versus serial processing. Whether computational actions are performed one at a time or a task is divided into sections and the sections are tackled simultaneously makes no philosophical difference.
Roger Penrose provides a pithy summary of why artificial “intelligence” is a misnomer. There is no intelligence without understanding, and understanding requires awareness. An operational definition of “understanding” is not sufficient because, for instance, a mathematical algorithm can be followed with or without understanding. Only the person who understands would be able to formulate a new algorithm for achieving the same task since the person in the dark does not even know the purpose of the algorithm.
Computer programs, as products of human intelligence, can thus appear to be intelligent themselves. But effective computer simulations almost always are exploiting some significant human understanding of the underlying mathematical ideas. Computer algorithms that can differentiate knotted string from simple heaps depend on very complex and recently developed (twentieth century) geometric ideas – though a person can often test the string with simple manual manipulation or common sense.
- The concepts of reward and punishment do not make sense in a deterministic universe. I’m afraid those are yet more things you are not allowed to have without contradicting your own metaphysics. Physical determinism involves cause and effect pushing from behind. A reward or punishment resembles a goal in relevant ways. Goals exist in the future and pull us towards them, as do rewards, rather than pushing us from behind. This can be described as backwards causation. However, goals and purposes have been eliminated by modern science; their elimination is, in fact, a defining feature of what makes modern science modern. At best, goals and purposes are placeholders until a fully mechanical explanation becomes available.
- Rewards and punishments depend on the existence of minds. Dog training typically involves rewards. Dogs have minds. But, for nonsentient entities, the concept of “reward” is simply inappropriate. You do not “reward” water for running down the drain the right way, or your car for a good performance. You do not “punish” hurricanes to deter them from destroying cities.
- Talking about “reward” in the context of a computer algorithm as though the computer scientists are meaning anything like “reward” in human beings is misleading.
- If you give someone a reward, you are giving them something that they want. Computer programs do not want anything. “Hey, computer. If you do what I want you to do, I will reward you with a memory upgrade. How would you like that? Now will you do what I want you to do?”
- A plausible response from a determinist is, “Well, human beings are just robots following their programming,” so if reward does not apply to robots/computers, it does not apply to human beings either. That logic would be correct. Either way the concept of “rewards” is eliminated as redundant and inaccurate.
Regardless, here is an example of your apparently enigmatic “good life” for a “mindless automaton”. Reward.
- “Reward” is not a meaningful concept without minds and agency. It assumes that I have choice and that my behavior is goal driven, namely to earn a reward, and that without that reward I would have made a different choice.
- If minds are causally efficacious, then we have escaped from the confines of physical determinism.
- The term “reward” in this context is a very odd metaphorical use of it. In humans, it indicates a good feeling, perhaps a smile, a dopamine hit. No such thing is happening in machine learning. Machines feel nothing.
- If reward just means another manipulation of the automaton by its ruler and programmer, then this is not a desirable thing. There is no “good life” for someone whose life is significantly worse than that of the most pathetic slave, since a slave’s thoughts and feelings, his experiences, are his own. In Brave New World, people are manipulated by pleasure and take the drug Soma whenever they start to feel down. No one thinks the life envisaged in Brave New World would be good or desirable.
- What a “reward” might mean for machine learning and what it would mean for a human being cannot be plausibly compared.
- In 1984, Winston Smith is tortured by fear to the extent that he supposedly learns to love what his torturers want him to love. This defeat of the human spirit at the end of the novel represents the worst of outcomes. Dying with dignity is at least admirable.
With that simple variable, I have solved your philosophy riddle, prof!
We humans experience the good life as a sufficient amount of reward, with a minimal amount of punishment.
- A reward for what? Dogs, in training, are rewarded for their actions. What am I being rewarded for? The dog can think “I’m a good dog.”
- A reward has to be the reward for something. I have not done anything.
- I have no choice in any of my behaviors, if determinism is true, thus I am not being rewarded for making good choices nor punished for making bad choices. None of this language has any legitimate referent in this context.
It does not matter that we didn’t choose what rewards would satisfy us or by how much. It does not matter if you think we shouldn’t possess a sense of self if this were true. It’s irrelevant.
- I dispute those assertions with every fiber of my being.
- I encourage you to read Brave New World and 1984, and, for the father of both, We by Zamyatin.
We are separate entities that can experience reward, just like our basic A.I.’s can. And you would agree that (for now), A.I.’s do not have free will, yes?
- A.I.s are not conscious and thus do not have free will. They cannot experience rewards.
So, assuming that I haven’t yet lost you, what is the reason that even in your world, where determinism does not exist, machines can act exactly how I’m contending the world works, and yet you deny this could possibly be the case with human experience as well?
- Machines can and do act as you describe, but they cannot do many of the things human beings do. They have no understanding and merely follow their programming.
- David Chalmers imagines that there could be someone behaving just like him, but without consciousness. The fact is that there never has been such a person and, I would claim, there never can be. Machines are rule-following devices. You can have a rule only for things you can predict. We are unable to predict the future, so we are routinely faced with finding solutions to novel problems. Sometimes we succeed, sometimes we fail.
- The difference between a conscious person and a machine becomes a matter of pain when dealing with computerized customer service devices. Almost always when I have bothered to call some company it involves a problem that the computer cannot solve. When the Amazon website asks why you are returning a product the list of options is simply not long enough. They have not anticipated the reason for every return, and they could not do so. Computers have no common sense and they will not be able to improvise a solution that will please a human being in this context.
What I’ve heard from you has mostly been semantics and non sequiturs and arbitrary rules, but what specific rule would you cite here for why robots regularly do exactly what you purport humans to be incapable of – which is follow their programming, without additional spooky Deepak Chopra woo woo magically providing us with free will?
- You don’t seem to be making sense here. I have never disputed that machines can follow their programming.
- I accept that for free will to exist something spiritual must be introduced. That I also have never disputed. That is why free will cannot be proved. Arguing for determinism, however, is a performative contradiction. Arguments exist to persuade. Persuasion does not exist for determinists. Persuasion operates on minds and reasons. If minds are independent of physical causation, then determinism is false. If minds are nothing but physical processes, then persuasion is nonsense. Only causation exists.
- If the unprovable metaphysical position called “materialism” is true, then determinism is true.
Where do we gain free will? At what point? What variables? What did we freely choose that affects our decision making?
- You won’t like the answer, and I don’t expect you to like it. The human soul has a connection to the Ungrund, so the moment we come into existence we have free will. This is not provable, but, at least, it does not involve a performative contradiction.
- I cannot point to any one thing that would prove to someone skeptical of free will that it was freely chosen. A determinist can probably find determined things, but he will be unable to prove that free will never obtains.
- From: Does the Concept of Metaphysical Freedom Make Sense?
- “What proceeds from The Great Mystery must be causeless in order to be free – otherwise physical determinism is simply replaced by spiritual determinism. If creativity were explainable, it would no longer be creativity. Freedom too is inexplicable. And it is the postulate that is the precondition for postulating anything since only agents can postulate. Berdyaev uses the phrase “creative dogmatism” at one point in his writing. If ever there were a right moment for creative dogmatism, the postulate of Freedom is surely one of them.
- Though the Ungrund is by definition The Great Mystery and unknowable, one way of thinking about it that could make it a little more imaginable, is to compare it to another dimension that you can reach into, like a wormhole. It is another dimension that you cannot see inside, but you can reach your hand in and pull something out. What you pull out will be related to you, and your desires, preferences, personality, knowledge, and life experiences. Einstein had to know a lot about physics and mathematics to generate the theory of relativity, and he had to have a great imagination. As a young teenager, he had read an encyclopedia that combined physics and biology and in it was the thought experiment of what it would look like to ride a beam of light. He never forgot this and it inspired thoughts that led to his breakthrough discovery, along with working in a patent office where clocks were being patented, getting him to think about time in a new way. What Einstein discovered was also related to his knowledge, desire to know, and life experience. When Beethoven composed music, he knew a lot about previous music, and also a great deal about music theory. His style of music reflected him, his personality, his cultural environment, and his preferences; and even the nature of his creative and imaginative impulses. Einstein’s insights into the Logos; the beauty of Beethoven’s music, represent something transcendent. Highly trained composers can compose in the style of Beethoven, but this is strictly imitative. It is possible to pull from the Ungrund something similar to Beethoven, but only in conscious imitation of him, and the results are derivative. What each musician pulls from the Ungrund, ideally, is a reflection of him and his interaction with the Great Mystery. It is a gift from the divine; a gift uniquely chosen for the recipient and in cooperation with him.”
“If you did not choose a good life, why should it worry you if you are denied a good life? I have no choice about how I argue, or what I do, and neither do you. What does “you” are “worried” mean in this context anyway? Automatons are neither “you” nor “worried.” The illogicality of determinists is one of the most abhorrent and repulsive aspects of determinism. As a fan of logic, properly applied, I admit I find this distressing.”
“A good life”, put another way, is simply satisfying one’s preferences. You are incapable of “choosing” anything except attempting to satisfy your preferences. People have chosen lives which they’ve mistakenly thought would lead them to their preferences, but nobody has ever consciously chosen a bad life for themselves, where they knew their preferences wouldn’t get met. We are not actually capable of doing such a thing.
- So, we are back to preferences again. This is my response.
- We try to determine, using our minds, what we think the good life will look like. This we choose. Our preferences often follow these freely chosen goals. So, while the preference might not be chosen per se, what it is a preference for is.
- Plus, as stated before, we choose if and when to follow our preferences. To imagine otherwise is to commit a self-sealing fallacy.
- Your argument reminds me of the claim that all men are selfish, that everything we do, we do for pleasure. I wonder if you believe that? That turns out to be a self-sealing fallacy (No True Scotsman Fallacy) and thus definitively false.
- I wonder if this is the reason Sam Harris chooses “preferences” as the thing that means we are not free? Sometimes, I do what I need to do rather than what I prefer. I drive my spouse to the airport, though I might prefer to sleep in. Someone can then say, that just means your “real” preference is to not make your wife angry or disappointed with you. Whatever you actually end up doing expresses your real, overriding preference. But, that becomes a tautology. It becomes true by definition that we follow our preferences. Since no exceptions can be found, due to the meaning of the word “preferences,” one is no longer making a statement about how the world functions, but just the meanings of words. As such, no counterexample will ever be possible. In the world of facts, however, hypothetical counterexamples are always possible. That is how we know we are dealing with the world of empirical reality.
And if you argue against that, you’d reveal your ignorance of how determinism actually works,
- I am arguing against physical determinism. You seem to have a kind of preference determinism in mind which I am satisfied I have refuted as being mind-dependent, while you regard conscious experience as causally inefficacious.
““Preferences” are irrelevant. Who gave you those preferences? You are a slave, a mechanism, and a nullity. Its do not have meaningful preferences. You cannot act on those preferences, since only agents act. You, unfortunately, are caught up in a meaningless charade. You are not enough of a determinist. Get with the program!”
We’ll ignore the tedious repetition of the embarrassing semantical rhetoric, but I wanted to bring up another point: “Its do not have meaningful preferences”. What does “meaning” have to do with anything? This is skirting an appeal to emotion. Because you don’t find it “meaningful” enough, it therefore must be untrue? Come, now, Mr. New York Philosophy Professor. Tsk tsk.
- Nihilism is the belief that life has no meaning; that it is a meaningless joke. Wanting to avoid that conclusion is understandable. You are suggesting that it is enough to have preferences to have a good life. I am arguing that meaningless preferences are undesirable because nihilism is undesirable. There is nothing particularly emotional about this claim.
- Philosophy necessarily has an intuitive component unless we stick to mathematical logic. Actually, that involves intuitions too. New insights into mathematical logic are derived from creative, intuitive, and imaginative thinking, see Gödel’s Theorem. And the solution to long proofs frequently involve “aha” moments.
- If the life of a slave (actually worse than that) and a mechanism does not repulse you, then that is your prerogative. This style of argument is called a reductio ad absurdum. We assume, for argument’s sake, the truth of an assertion and see if we can derive a contradiction. In this case, your claim that determinism is an utterly fine position should conflict with the belief that you do not want to be a slave or a mere mechanism programmed to do what you are told. There is no conflict for you, so the argument does not work in your case.
- This cannot all be a matter of mere logic. What we picture as a worthwhile life is relevant to this discussion also. Some of that evaluation will occur on an intuitive and partly emotional basis.
- Given your failure to find this repulsive, you are happy to continue being a determinist.
- Obviously, neither determinism nor free will can be proved, so one must have a motivation to argue for one rather than the other. My motivation is that I want to continue to lead a meaningful human life where I make my own decisions and suffer the consequences.
- One possible attraction of determinism is that it avoids moral responsibility. People do not like the feeling of guilt and determinism provides a handy way of avoiding it.
Subjective meaning, too, has no bearing on the truth – for any argument.
- That would be true if we were debating a mere matter of scientific fact. But, we are doing philosophy. In particular, we are debating the meaning of human existence. You seem to have been arguing that a good life is a reward of some kind and that lots of other things do not matter, like the fact that we will not have chosen that life, and that someone or something else has decided that such and such is a reward. That is a judgment call and thus subjective. When someone introduces subjective elements like that, and we both have, you can try to nudge them in your direction by asking questions like, “Do you really think such a life would be worth living?” Your answer is yes. Mine is no. And I am busy providing reasons for that “no.” Some of them have to do with feelings since we humans care about such things.
You may think it’s meaningless. I don’t. Regardless of whether you find preferences you did not choose “meaningful”, you possess them. You experience them. And sure, this may, indeed, be a charade of sorts, but if programs can feel a sense of reward for meeting preferences they did not choose, why do you insist we cannot? What law of logic or nature is that breaking?
- You and I have fundamentally different intuitions concerning meaning here, hence partly why we are on opposite sides of this debate. I don’t want my life to be a charade and will fight to avoid that outcome.
- For anyone who is bothering to read this, this ethical disagreement about the nature of the good life should be regarded as most edifying.
- I dispute that the word “reward” applies to machines in any relevant sense to this discussion for reasons already stated.
I swear to god if you say “bUT wHat Is “WE”?”… lol. Try an actual argument, this time.
“There is no you. There are no choices. Every single determinist will back me on that one. Choice is a pure illusion. It does not exist! You already said so in your opening statement. You are doing what you are programmed to do by genes and environment, but really, the Big Bang. Please make up your mind if you really want to be a determinist or not.”
I’d like to talk to these mystical “100% of determinists” who believe that. Strange, then, that none of the big names or any of the smattering of lesser names in this discussion that I’ve read/heard has ever mentioned not being allowed sense of self. No real choices, sure. You’re conflating that with “no sense of self is therefore possible”, as you feel it makes determinism much easier to defeat. Aka: Strawman.
- The position that there is no meaningful “self” is one I am claiming is logically implied from the position of determinists. It is not one actively embraced by most determinists. As stated elsewhere, without agency, all that really exists is a sequence of events. There is a stream of physical cause and effect and what we call “you,” and “I” are metaphysically indistinguishable from that stream. Except, you think experience is significant, though it is not an experience that belongs to anyone to any real degree. I have analogized it to “locked-in” syndrome and thus a kind of living hell.
- Sure enough, by the end of this discussion you accept that “I” and “you” are meaningless concepts designed to make us feel better about living in a deterministic universe.
If this is how you are presenting determinism to your students, no wonder they think it’s batshit.
- There being no self is the least of it.
- Actually, I present to them two main articles proving that arguing for determinism is a performative contradiction and one of the more insane things philosophers have ever done. The articles are this one and this one.
- My students are not sanguine about the existential consequences in the manner that you are and that is another reason they do not embrace determinism.
We choose things but have no choice over what we choose. “Pick a number between one and one” is not a free choice.
“So, you have “preferences.” What does that even mean in this context? Who cares? Does a computer have “preferences?” No. You are a computer. Nothing more.”
Yes, computers do have preferences, as discussed. I wrote all of the above before I read this sentence and now I fear that you might not actually know how computers work.
- Computers do not have preferences in any way analogizable to human preferences. They are, by definition, rule following devices. They do what they are programmed to do. They do not follow their “preferences.” If this happens, do this. If that happens, do that.
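The rule-following just described — “If this happens, do this. If that happens, do that.” — is, concretely, nothing more than branching laid down in advance. A minimal sketch (names and responses are hypothetical):

```python
# A computer's "preference" is just a fixed branch written by the programmer:
# if this input arrives, do this; if that one, do that.
def respond(event):
    if event == "greeting":
        return "hello"
    elif event == "insult":
        return "goodbye"
    else:
        # No improvisation and no understanding: anything the programmer
        # did not anticipate falls through to a canned fallback.
        return "no rule for this input"
```

Whatever the machine “prefers” was decided entirely by whoever wrote the branches, which is the sense in which it does what it is told and nothing more.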
“There is no “I.” Just circuits. So, no there is no “I” perceiving anything.”
Sorry to beat a dead horse (to be fair, I feel I’m only returning the favor), but I just want to clarify your thoughts: You obviously take EXTREME issue saying “I”, or anything of the sort. I get that. Perfectly. Period. Crystal clear. Oh my god I understand that you don’t like it. But would you agree that there is something – some entity – doing some kind of perceiving, yes?
- I’m not going to agree that there is a deterministic entity that perceives. Perception takes place in minds. We perceive through our eyes, but with our minds. Kepler “solved” the problem of perception by stopping before it got to the mind. See this article here.
That is “I”, for future reference.
“What exactly is “experience?” I do not believe that is well-defined. That would require consciousness and an “I.””
I’m not sure I could concisely define “experience”, either, but what is it when a computer program can perceive data? Record data? Recall data? Interpret data? I’ll ask again: Does a computer have consciousness? Do you think they are a “them”? No, right? Yet they can do the exact same things as would be included in any definition of human “experience”, and yet you also maintain that a consciousness and an “I” are required. Arbitrarily. It’s a double standard.
- The words “perceive,” “record,” “recall,” and especially “interpret,” as applied to computers, are strongly metaphorical and not what human beings are doing. A computer has no understanding, and thus all these things are projections we make onto the computer from our human point of view. The zeros and ones might have meaning to us as “recording” something. The computer has no such conception. In fact, it has no concept of anything.
- Experience takes place within consciousness. An unconscious experience is perhaps an oxymoron. Maybe the passive and active nous distinction would apply. Signals from our skin inform us about the feel of the shirts on our back, but we usually pay no attention and it forms no part of our experience. It remains a potential only. Computers are not, as yet, conscious.
- If human beings are computers, then I deny them the honor of personal pronouns. We turn computers off. If humans are computers, we can turn them off too. The only thing stopping us would be some woo woo magical thinking.
I know you haven’t yet answered, but I’m expecting some form of moving the goalposts in response to this. What else could you do which would preserve your beliefs?
“At this point in the argument, you are describing a horror show. You admit you have no control over anything. Events are simply happening.”
More appeals to emotion. Why “horror”? At the risk of sounding especially unintellectual: You could have just as easily mentioned situations out of your control leading to booze and sex and blowjobs, and yet you want to prime your students to think that only “stupid” people believe in determinism, so “horror” it is.
- When the topic is human life, emotion and horror are applicable. If this were some topic in mathematical logic, not so much.
- I am attempting to persuade students and other readers that your depiction of human existence is absolutely horrible. The horror entailed by your worldview is relevant.
- Imagine that someone has to choose between two realities to inhabit, and they must inhabit one of them. If I care about this person, then an entirely neutral and “objective” description will not be sufficient if one of those realities turns life into a meaningless joke. In that reality, all behavior is compulsive, all decisions made by mindless physical forces and even how you feel about it decided by physics.
- I imagine you have seen science fiction scenarios where prisoners have their minds altered to think that they are enjoying their captivity. Most viewers will find this especially creepy. It is one thing to be a slave. It is another thing to be compelled to enjoy it and maybe sing songs of praise to your master.
- Compulsory sex is rape. Compulsory booze would be most unpleasant. Is it still rape if you enjoy it? Yes. And where consent is impossible, there can only be rape.
- Only illogical people believe in arguing for determinism because of all the logical contradictions involved in arguing for it as outlined in The Illogicality of Determinism.
- I am indeed trying to appeal to your right hemisphere which is relevant here since we are partly debating what a worthwhile life might look like. And what a horrible life might be like. Intuition and emotion necessarily come into those topics, not mere facts.
Just an observation – not an argument. I just want to make you a little more aware of the slight dishonesties, mental gymnastics, and straw men you are committing, apparently obliviously.
- Readers can decide for themselves about all of that.
- Some of what you write here is predicated on the misapprehension that I think the robot, Sunny, from Isaac Asimov’s I, Robot is possible.
““You” are not doing anything. You have no agency. Therefore, there is no “you.” “
“You” are doing all the things you said “you” were doing. You simply cannot claim ownership over the choice to do them. If you want to claim that this isn’t “agency” – again, that’s up to you.
- To be an agent is to be the center of conscious decision making. So, no. Agency does not apply here.
It doesn’t matter. There is a clear delineation – as far as human experience is concerned – between human action and a puck on a plinko wall. We collectively refer to that difference as “agency.” If you don’t want to define it that way, so be it, but the rest of the world does. You do you bud. Be a semantic contrarian if it satisfies your preferences. See if I care!
- Determinism denies that human action is significantly different from a puck on a plinko wall, whatever that is, not me.
- Agency is what most people assume to be the case. Determinists are arguing against it.
“In this mixed up way of thinking, “you” are somehow conscious, but trapped. “You” have “experiences,” whatever those are, since they have not been scientifically defined, and you think some are “good” and some “bad.” But, someone/something decided that for you.”
And you disagree? I know I’ve asked this already but provide me right now with where your preferences which guide your beliefs and subsequent actions come from.
“Your opinion (again there is no you, but let’s just run with it) that something is “good” or “bad” has been assigned by someone or something else.”
You start to catch on, here. You don’t believe the self can coexist with determinism, and yet even you are capable of saying “I don’t agree with this, but this language is useful for explaining what I’m trying to explain, therefore […]”. Something certainly exists, even if determinism also does. It’s the same thing you begrudgingly refer to as “you” in your above quote.
- I’m not catching on to anything. I am doing a “for argument’s sake” move. I don’t agree with this, but, counterfactually, assuming it were true for a second, then…
“They could have assigned you to think something else.”
A bit off the path but no. There is no “they”. It is physics – nothing else.
- Let’s just go with physics. Treat my “they” there as being a placeholder for whatever you want to put there, which is what it was intended to be. In one of my articles, I argue that the Big Bang would have to be effectively omniscient for the world to operate as it does and for determinism to be true. Hence, the “they.”
“There is no “convincing” if determinism is true. There are merely sequences of events. You move that way. I move this way. You feel X. I feel Y.”
You just explained the psychological mechanics of “convincing” someone within determinism whilst pretending you cannot convince someone within determinism. And you’re exactly right in the mechanism – that’s what it is to convince someone. I say x, you perceive y, and you reevaluate. I then might say z, “move that way” and you “move this way” and you’ve become convinced.
- There are no “psychological” mechanics here. In a world of automatons with no free will, psychology is a meaningless category. There is no such thing as “reevaluating” given the limits of your metaphysics. You cannot make mind causally efficacious when it suits you while believing that physics alone is what makes things happen. “Mind” and “convincing” are just meaningless phrases in this context.
There are literally “arguments” in computer programming.
Computer virus code can ‘argue’ with antivirus code. If the virus code successfully “convinces” the antivirus, it will be let into the system. – An example of a deterministic, causal, mechanical “automaton” successfully “convincing” another.
- These are very obviously metaphors and nothing more. Even people do none of those things in physical determinism, let alone computer programs.
“You wrote: “When the robot makes a “choice” to veer left to follow the line, it has done nothing but reference its programming and equations and variables to their inevitable conclusions.””
I thought it obvious, but I was using “choice” the way you do.
- This whole debate is about the existence of choice. There is a reason you put scare quotes around “choice.” Obviously, we do not use that word in the same manner. I believe in choice. You believe in “choice.”
Humans do the same thing – follow the same mechanism – when making decisions as does this robot, and yet we call one a “choice” and one “following programming”. My point is that they are literally the same thing. There is no additional magic sprinkled by an invisible wizard which makes our choices free of causes we did not choose. Though, you might as well be arguing as much. It seems without your semantic contrariness (hmm.. Google Docs thinks that’s a word? Sure.) you don’t actually have much of anything.
- I understand you think they are the same thing. Obviously, I do not, so I am not going to concede that they are the same thing!
- We both have difficulties with our positions. Arguing for yours involves a performative contradiction. Arguing for mine involves something uncomfortably close to an “additional magic sprinkled by an invisible wizard.” I would love to say otherwise, but I cannot. Since we are in fact arguing, then, to be consistent, in bothering to argue you are accepting that additional magic. I need a little magic, while you describe a life of utter nihilism and pointlessness of the sort described by autistic people and schizophrenics who have lost the sense that either they or the people around them are real, and instead experience everyone as a robot and imposter; as mere automatons simulating human beings. Choose your poison.
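For concreteness, the line-following robot’s “choice” to veer can be sketched as a deterministic control calculation. This is a hypothetical proportional controller, not any particular robot’s code; it shows what “reference its programming and equations and variables to their inevitable conclusions” amounts to.

```python
def steering(sensor_left, sensor_right, gain=0.5):
    # Proportional line-follower: the "choice" to veer is a fixed
    # function of the sensor readings. Same inputs, same output, always.
    error = sensor_left - sensor_right  # positive: line is to the left
    return gain * error  # positive result: veer left by that amount
```

Whether a human decision is “literally the same thing” as this calculation, or something categorically different, is exactly the disagreement between the two sides above.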
“A preference implies goals and purposes. I prefer wine to sewer water. I prefer pleasure to pain. I intend to strive for one rather than the other. There are no goals and purposes in determinism. Determinism is predicated on cause and effect, not goal-driven behavior.”
We can know with absolute certainty that this is false, as computer programs HAVE goals and preferences, and we both would agree they have no free will. Therefore, goals and preferences necessarily are possible within a deterministic system. Not to mention: Why do you arbitrarily believe that cause and effect cannot lead to goal-driven behavior?
- Computers have no minds with which to have goals or preferences. Since they understand nothing, they simply follow the instructions presented by others. Programmers have goals and preferences and use computers as tools to achieve them. Demis Hassabis agrees with this “tool” description.
- Demis Hassabis, CEO of DeepMind Technologies, responsible for AlphaGo and AlphaFold (protein folding), has been described by Lex Fridman as the person most likely to preside over the emergence of Artificial General Intelligence (AGI), were that ever to happen. In his interview with Fridman, the roboticist, former Stanford professor, and podcast host, Hassabis made two pertinent comments. One, computers are tools and nothing more. Two, he has never seen any evidence of sentience in a computer program. He regards the existence of life and consciousness as a complete mystery. Hassabis thinks that intelligence might be separable, in principle, from consciousness, since he does not think dogs are very intelligent. I disagree. Hassabis does not pretend that his claim is anything other than speculative. We have no evidence that intelligence and consciousness can be separated. And Hassabis is perhaps overloading the term “intelligence” when he claims that dogs are not intelligent. They are goal-driven, like all organisms, and can improvise intelligent solutions to problems, just as single-cell organisms, such as white blood cells, can and do. Hassabis calls dogs unintelligent only in comparison with humans.
- It is not arbitrary to think that cause and effect cannot lead to goal-driven behavior. Causes push from behind. Goals pull towards the future. Goals are yet-to-exist states of affairs. If something does not physically exist, it cannot cause anything in the manner of physical determinism. Teleology has been eliminated from the scientific approach to the world. Science, in general, denies the existence of goal-driven behavior. Aristotelian “science” included “ends” (a telos) and goals. Modern science lets “goals” exist only as a place-holder until a mechanical explanation can be found. Once it has been found, scientists eliminate any talk of “goals.” Unfortunately for them, biology, depending on the specific area, is unworkable without reference to goal-driven behavior.
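What those who say programs have “goals” usually have in mind is a feedback loop that mechanically reduces the distance to a target state; whether that deserves the word “goal” is precisely what is in dispute above. A hypothetical sketch:

```python
def seek(target, current, step=1.0):
    # Nothing here "pulls from the future": at each iteration the loop
    # compares two presently existing values and nudges `current` one
    # step closer to `target`, a number supplied by the programmer.
    while abs(target - current) > step:
        current += step if target > current else -step
    return current
```

Read one way, this is goal-seeking; read the other way, it is nothing but a chain of present-tense causes, with the “goal” existing only in the programmer’s mind.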
“That does not make your preference good. It gives no reason for thinking your preferences should be satisfied. And, incidentally, there are no “shoulds” in determinism.”
Absolutely true, on all 3 points. I never claimed otherwise, unless I misspoke with a rogue “should”, or something.
A little far afield, again, but Kant’s hypothetical imperative works perfectly fine within determinism. Nobody can say they ought to have the preferences they have, but IF they are trying to satisfy their preferences, there are certain “oughts” that can, more or less, be proven – though not necessarily very specific ones. Maybe that’s too off topic.
- Hypothetical imperatives involve goals. See above. Goals can, however, be provided by programmers, though it is their goals, not the computer’s.
“Computers cannot “attempt” anything. That is a word borrowed from the language, metaphysics, and ontology of agency. That is intentional language involving goals.”
Entirely incorrect. “Attempting to establish a connection…” What is going on there? What is the computer actually doing, then? I mean, I realize that a human wrote that message in the code, but what else would you call it that the computer is doing? I’m just trying to figure out if this is more semantic nonsense or if you genuinely believe that a computer doing something that resembles in every way an “attempt” to do something is not actually an “attempt”, somehow.
- In many instances, using intentional language is highly efficient. However, it becomes problematic if we are trying to figure out something’s metaphysical status. Then, we need to be extremely careful with our language and not introduce mental categories into inorganic entities. An ax, a tool, is not attempting to cut down the tree, the woodsman is. A computer does what it is programmed to do and it is convenient to call that “attempting” for the purposes of communicating what the programmer wants it to do to other humans. It is, of course, not literally attempting anything for real.
- I realize that you don’t think we are literally attempting anything either. Our actions are perfectly mechanical and robotic for the determinist. A leads to B, B leads to C, and so on. Philosophically, it would make more sense for you to eliminate words like “attempt” both from descriptions of humans and computers. In daily life, however, it will remain useful to retain intentional language in both instances.
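As an illustration of how cheaply intentional language gets attached to machines: behind a message like “Attempting to establish a connection…” there is typically nothing but a bounded retry loop. A hypothetical sketch, where `dial` stands in for whatever callable reports success:

```python
import time

def connect_with_retries(dial, attempts=3, delay=0.0):
    # The "attempting" is exhausted by: call, check, repeat, give up.
    for n in range(1, attempts + 1):
        print(f"Attempting to establish a connection... ({n}/{attempts})")
        if dial():
            return True
        time.sleep(delay)
    return False
```

The word “attempting” in the printed message is addressed to the human reader; the loop itself just executes, which is the point made above about intentional language being a convenience of communication.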
- I know a determinist who thinks computers can “argue.” Since computers have no understanding, they cannot argue. If I reply in phonetic Mandarin but I don’t know what I am saying, I cannot be said to be arguing even if what I say is in fact a premise supporting a conclusion. I am, however, verbalizing.
“Nobody can be convinced of anything if free will does not exist. “You” can cause “me” to alter my programming, but “you” are not actually “doing” anything and neither am “I.” A sequence of events has occurred. End of story.”
If you want to think of it like that – a sequence of events – fine. Your cross to bear. The rest of us use language most suited to conveying the ideas we wish to convey, and in a manner that pleases us – satisfying preferences. “You”, “me”, “doing”, and “I” are certainly among them, and not at all mutually exclusive with the possibility of determinism.
- As I say, one mode of speech is suitable in one context, and another in another. I don’t believe in any of this deterministic stuff, so I don’t use that language at all, except when interacting with determinists on the topic of determinism!
Your view of how determinism would work:
“It was the best of times, it was the worst of times. A series of events happened. The end.”
- Except without the concepts of “best” and “worst.”
Strangely, that is not as rewarding to us humans as pretending we all have agency and free will and all of that which you are clutching to your chest. Our preferences are better met with all the gooey middle parts. With pretending we have vast choices and wallowing in our ignorance of the complexity of cause and effect. With heaping meaning onto our inevitable fates. That’s why we use words like “we” and “I” and “convince” and “goals” and such. But it is nothing more than a reward system attempting to satisfy our preferences.
- Determinists are hypocrites. They say one thing and do another. A good test of what someone really believes is how they act.
- Most determinists seem to acknowledge that they cannot live their philosophy.
- Some pretty good comedy could be garnered from having a determinist talk to his girlfriend in a manner consistent with his determinist beliefs. The left hemisphere is mostly devoted to the perception of, and thinking about, inanimate objects. Since girlfriends would have interiors of no real significance, a character playing the determinist could say, “My serotonin seems to be edging upward, my dopamine is at levels indicating that my system is enhanced by your proximity. I guess this means my biological processes are pushing me to say things like “I love you,” whatever that means.”
- We don’t have any choice about what we pretend and don’t pretend under determinism. That particular contradiction – that we supposedly have a choice about whether to use a certain kind of language or not, and so on – is one of the things I most dislike about determinists. My metaphor for this behavior is that determinists point to a river of causation leading to its inexorable destination, and then they pretend to step out of that river and make real decisions concerning real choices.
- Determinism is nihilistic. Since it is an unprovable hypothesis, just like the existence of free will, positive optionality suggests going with the option that does not render all of human existence meaningless.
- “But it is nothing more than a reward system attempting to satisfy our preferences.” Thus, I was correct to isolate the topic of preferences and give it its own little article. I disagree that “preferences” make any sense under determinism, or “rewards.” They have no more ontological reality than “I,” “we,” and “goals.” Preferences and rewards make sense as concepts only in a world of causally efficacious consciousness. They are, by their nature, mental items implying top-down causation; the mind affecting the body. “Rewards” are incentives to a certain kind of behavior, and “incentives” imply minds. We do not talk about rewards and incentives when it comes to inanimate objects with no minds. It would seem to make the most sense for determinists to deny the existence of life as an unnotable and unidentifiable element in the vast causal stream of events. One determinist I know does exactly that, describing life as a “poorly defined concept.” Rejecting the existence of things that cannot be defined is a distinctly left-hemisphere tendency. We cannot define knowledge either, and yet knowledge can and must exist, especially if one claims to know that knowledge is impossible. No such things as rewards and preferences exist for a hardcore, consistent materialist, and thus determinist, so they must revert to physical causes only.
“Randomness is a problem for physical determinists. It does not contribute to agency.”
That’s actually a really good way to phrase that. I may steal that in the future, as I certainly could have used it in a few past discussions. Cheers.
This explanation is based on Roger Penrose, Shadows of the Mind, pp. 18–19.
Penrose, Shadows of the Mind, p. 19.
Penrose, Shadows of the Mind, pp. 38–39.
Penrose, Shadows of the Mind, p. 60.