
Alan Turing
Kurt Gödel’s Incompleteness Theorem[1] was inspired by David Hilbert’s question “Are the axioms of a formal system sufficient to derive every statement that is true in all models of the system?” Hilbert played a similar role in Alan Turing’s proof that the halting problem is unsolvable. Hilbert had asked: “Is there some mechanical procedure [an algorithm] for answering all mathematical problems, belonging to some broad, but well-defined class?”[2] In German this is called the Entscheidungsproblem – the decision problem.[3]
Turing found that he could answer this question by framing it in terms of a Turing machine[4] – could there be a program that could determine whether any other arbitrary computer program and input would eventually stop or just loop forever? This was called the halting problem.
“Alan Turing proved in 1936 that a general algorithm to solve the halting problem for all possible program-input pairs cannot exist.”[5]
The terms “algorithm,” “computer program,” “mechanical decision procedure,” “mechanical procedure,” and “effective procedure” are all synonyms.[6]
Algorithms can be contrasted with heuristics.
A heuristic is a loosely defined rule of thumb often with a teleological (purposeful, goal-driven) element. E.g., “if you want to win at chess, try not to play just defensively.” Such a heuristic does not tell someone how to actually accomplish this in detail.
An algorithm, though, is a set of instructions about what precisely to do in a given well-defined circumstance that is guaranteed to give a result.
An algorithm is a method for finding, say, the largest number by which two other numbers are both divisible – their greatest common divisor. It is a set of instructions; a procedure that can be written down as a series of steps. Step one – divide the larger number by the smaller number. Step two – replace the original two numbers with the smaller number and the remainder, and repeat until the remainder is zero …etc.
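The steps just described are Euclid’s algorithm for the greatest common divisor. A minimal sketch in Python (the function name and layout are illustrative, not from the original text):

```python
def gcd(a, b):
    """Euclid's algorithm: repeatedly replace the pair (a, b)
    with (b, a % b) until the remainder is zero."""
    while b != 0:
        a, b = b, a % b  # step two: the remainder replaces the smaller number
    return a

# gcd(48, 18) -> 6: both 48 and 18 are divisible by 6, and by nothing larger.
```

Note that this algorithm is guaranteed to halt, because the remainder strictly decreases at every step and cannot fall below zero.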
The validity of an algorithm such as this one is not determined algorithmically but by an external test. Algorithms do not decide mathematical truth.[7] It will be necessary to use some other method to test whether a given algorithm is producing the correct result.
Quite often mathematics is (badly) taught as a series of such techniques – algorithms, mechanical decision procedures – while the students have no idea what they are really doing. Sometimes they are taught algorithms for a whole area of mathematics like algebra, while never knowing what algebra even is, what it is for, or why they are doing it, which is surely a spectacular waste of time.[8]
David Hilbert’s question is effectively – is there an all-purpose algorithm for finding and testing all other algorithms?

Roger Penrose – mathematical physicist, major contributor to the physics of black holes, and winner of the Wolf Prize in physics.
This question can be framed in multiple equivalent ways. Is it possible to completely automate mathematical discovery and proof through the use of computer programs? Is formalism in mathematics possible? Can mathematics be reduced to rules for symbol manipulation that completely ignore mathematical meaning and truth?
The answer to all these questions is “no.”
To repeat, Turing answered the question by framing it in terms of asking whether a Turing machine could be built that would determine whether other Turing machines would ever halt. This tester machine would thus be called a “halting machine.”
It is crucial that an algorithm comes to an end. It is only when it ends that the question that has been asked is answered. An algorithm with no end, that never stops, is not an algorithm.[9] A method that fails is not a method at all.
Turing’s concept of a Turing machine helped to define what a mechanical procedure (algorithm) was – it was anything that could be done by a Turing machine; an abstractly described computer.
It is not permitted to stop a Turing machine to check its output before it has completed the task it is working on because the final result is always potentially subject to change before the process is complete. A Turing machine that does not stop is not functioning as a Turing machine and its results would be worthless – hence the importance of halting.
There is nothing mysterious about this injunction. If long division, which is done using an algorithm, is stopped halfway through, the answer is guaranteed not to be the correct answer to the problem. Someone wanting to know the answer must wait for the process to finish before the output can be relied upon.
Gödel’s Theorem proves that mathematical truth goes beyond any axiomatic system and rules of procedure and thus that thinking must involve more than mechanical procedures. The halting problem demonstrates that though many problems may ultimately have a mechanical method for solving them, finding those methods cannot be fully automated. Gödel and Turing show the limits of algorithms.
If the halting problem were solvable, then computer programmers, who are just writing algorithms, could write an algorithm that could solve all the intractable and unsolvable outstanding problems in mathematics. Or, determine once and for all, which ones were solvable and which ones were not, putting mathematicians out of a job.

Christian Goldbach
An example given by Roger Penrose is the Goldbach conjecture,[10] which is that every even number greater than 2 can be written as the sum of two prime numbers. Each even number is split into pairs of numbers that add up to it, and the conjecture is that “for each such even number, it splits into some pair for which both members are prime.” There is an algorithm, some finite sequence of steps, for deciding whether a number is prime or not. That is not a problem. So, an algorithm is written that runs through the even numbers one by one and stops only when it finds an even number with no splitting pair in which both members are prime. If there were some way to determine whether this algorithm (program, Turing machine) ever stops, then the problem of proving or disproving the Goldbach conjecture would have been solved.
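Penrose’s construction can be sketched as a short program. Assuming an ordinary trial-division primality test, the search below halts if and only if the Goldbach conjecture is false (the names are illustrative):

```python
def is_prime(n):
    """Finite, guaranteed-to-halt primality test by trial division."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def goldbach_search():
    """Runs through the even numbers and halts only on finding one
    that has no splitting pair with both members prime. If the
    Goldbach conjecture is true, this loop never terminates."""
    n = 4
    while True:
        if not any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1)):
            return n  # a counterexample: the program halts
        n += 2
```

Whether a call to `goldbach_search()` ever halts is exactly the open question; no other program can be written to tell us the answer in advance.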
The reader can probably intuit that this is simply not going to be possible. Has the algorithm not stopped because every even number really does split into two primes, or because the program has just not found the exception yet? There is no way to tell the difference in all cases between an answer that is not there and an answer that just has not been found yet. This question is not “decidable” (provably true or false) as far as is known. If there turns out to be a proof of the Goldbach conjecture, it is not one that is going to be found by running another algorithm.
There is an irreducible role for insight and intuition when it comes even to the areas of human thought that are typically considered the most certain and precise. Albert Einstein wrote:
The supreme task of the physicist is the discovery of the most general elementary laws from which the world-picture can be deduced logically. But there is no logical way to the discovery of these elemental laws. There is only the way of intuition, which is helped by a feeling for the order lying behind the appearance, and this Einfühlung [literally, empathy or ‘feeling one’s way in’] is developed by experience.[11]
Discovery, scientific or otherwise, involves taking something obscure, unknown, dark, mysterious and rendering it relatively intelligible, known, clear, light-of-day.
The halting problem and Gödel’s Incompleteness Theorem offer definitive, logical, left hemisphere[12] proofs about the limit of definitive, logical, left hemisphere proofs. Anybody with a normally functioning right hemisphere should understand this intuitively – lived experience has already demonstrated this truth. An algorithm for a successful romantic relationship does not exist and if it did exist, it would kill any interest or life in the relationship, so it is not even good as a fantasy.
These rebuttals to David Hilbert’s desire for an all-knowing mechanical procedure offer salve for all those who have had to deal with autistic, Asperger Syndrome sufferers and their attempts to get rid of the ambiguous; their dislike of words with multiple meanings, ambiguity, metaphor, poetry, the connotative, and nuance; their preference for machines versus people and thus their interest in turning people into machines, even if just metaphorically.
Generally, it is too much to ask that those misguided enough to want mechanical procedures for everything and absolute certainty recognize the error of their ways and start reading novels and taking a greater interest in the role of emotion in life. Nonetheless, the rest of us can take some moral satisfaction that logicians have had to recognize the severe limits of mere logic. It is not rational to be too rationalistic.
Paradoxically, it is possible to be very certain that certainty in all things is not achievable. Outright contradiction is avoided because it is never claimed that nothing can be proven to be true: just not all things can be so proven.
Gödel’s Theorem proves that an axiomatic system of sufficient complexity can be consistent and incomplete, or inconsistent and complete, but not both consistent and complete.
Completeness
In a complete system, everything true is provable. Unfortunately, completeness and consistency together are impossible, so not everything true is provable. Any axiomatic system at the level of complexity of multiplication and beyond must contain statements not provable within that system: A + P, “A” being the axiomatic system and “P” being the statements expressible within the axiomatic system that are true yet not provable within it. Not all truth is susceptible of proof.
Consistency
In a consistent system, everything that is provable is true.
If inconsistency is permitted in a system, then the system can be “complete” but inconsistency will mean “proving” things that are not true.
It is better to prove too little than too much.
Inconsistency would mean, for instance, ordering rocket engines to simultaneously shut down and ignite; a surgeon who would both try to kill a patient and to cure him – a disastrous and nonsensical state of affairs.
Decidability
Completeness involves decidability. A problem is decidable if an answer of “true” or “false” can be given rather than “don’t know,” crashing, or looping forever in an interminable state of indecision.
The halting problem proves that there can be no halting machine – no algorithm – that looks at any given program and input and can determine in all cases whether the program will halt or just loop forever on that input.
The fact that an algorithm has not found the answer to the question yet does not mean it never will. Should the operator keep waiting or give up? Nobody knows. No halting machine can exist to tell the operator one way or the other.
Such a “decider” algorithm would be incredibly useful, but unfortunately, it is impossible.
The halting problem proves that there is more to cognition than computation. If computation were sufficient for all thought, the halting machine could do its hypothetical job. There is no set procedure, no algorithm, no computer program, for finding and identifying all set procedures, algorithms and computer programs. The rational alone is sterile. Insight and intuition must be brought in to provide the shortfall.
Universal Turing Machine
Alan Turing gave a purely abstract description of a computer, which he called a Universal Turing Machine, in 1936. It consisted of an infinitely long paper tape divided into squares and a read/write/erase head. The symbols in the squares tell the machine what to do next.
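A machine of this sort can be sketched in a few lines of Python. The representation below (a dictionary for the tape, a table mapping state and symbol to an action) is one conventional choice, not Turing’s own notation:

```python
def run_turing_machine(program, tape, state="start", max_steps=1000):
    """Minimal Turing machine sketch. `program` maps
    (state, symbol) -> (symbol_to_write, move, next_state).
    Returns the final tape if the machine halts within max_steps."""
    cells = dict(enumerate(tape))
    pos = 0
    for _ in range(max_steps):
        if state == "halt":
            return [cells[i] for i in sorted(cells)]
        symbol = cells.get(pos, "_")  # "_" stands for a blank square
        write, move, state = program[(state, symbol)]
        cells[pos] = write
        pos += 1 if move == "R" else -1
    return None  # did not halt within the step budget

# A machine that flips every bit and halts at the first blank square.
flip = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
# run_turing_machine(flip, ["1", "0", "1"]) -> ["0", "1", "0", "_"]
```

The `max_steps` budget is there only so the sketch itself always returns; a genuine Turing machine has no such cap, which is precisely why the question of whether it halts arises.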

Universal Turing Machine
The trouble is that some programs never stop running – they do not halt. Everyone is familiar with the little green circle and the like that tells the computer operator that a program is loading or a page is being saved. Sometimes the little circle halts and sometimes it does not, in which case the operator just has to exit the program. The operator never knows if he should just wait a little longer or if waiting is pointless.
Before the first modern computer was ever built, Alan Turing proved that there can never be an effective procedure for deciding whether in all cases p*i, a program and input, will halt. This is true no matter how fast or powerful a computer is or what programs it runs. There can never be a “decider” program, a halting machine, that tells the operator whether another program will halt or not. So, as long as people and computers of any kind or construction exist, they will be looking at the equivalent of little green circles and wondering if the program will halt or not and having no way to know.
The halting problem is about far more than an annoying feature of computers.
The implication of the halting problem is that no one will ever have enough information to make a decision that is certain.
Even an infinite amount of information would not suffice.
It is always possible that there is some missing information that shows the wrong decision has been made. To determine whether this might be the case, a search is made to see if there is some missing information, e.g., are all swans white? The search will continue until relevant missing information is found. Is there some way of deciding if a search program will halt or not?
If a search is made for something that does not exist, then there is no way of knowing when to stop searching.
If the information does not exist, the program will run forever. If it does exist, and assuming the program is well-written and fit for purpose, the program will halt. But to know if the program will halt, it would be necessary to know that there is indeed missing information which is the very thing that is trying to be determined.
Decidability would mean that when making a decision in all instances it would be possible to know in advance if relevant information exists that has just not been found yet. If this were possible, then decisions that were certain in all cases could be made, but it is not.
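The open-ended search described above can be written down directly. The sketch below halts if and only if a counterexample exists in the stream it is given; over an endless stream containing no counterexample, it simply runs forever (the swan example is, of course, just an illustration):

```python
def search_for_counterexample(candidates, is_counterexample):
    """Unbounded search: halts exactly when a counterexample turns up.
    If none exists in an endless stream, the loop never terminates,
    and no general procedure can warn the operator of that in advance."""
    for item in candidates:
        if is_counterexample(item):
            return item  # found: the search halts

# "Are all swans white?" over a small, finite stand-in population:
swans = ["white", "white", "black", "white"]
found = search_for_counterexample(swans, lambda colour: colour != "white")
# found == "black"
```

With a finite list the search must end one way or the other; the halting problem concerns the realistic case where the supply of candidates is unbounded.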
“We cannot be sure for any arbitrary well-defined question and procedure to answer it whether that procedure will ever give us an answer.”
This “shows how a fully deterministic system can be in principle unpredictable!”[13]
Presumably, this would apply to Laplace’s Demon. Laplace’s Demon is imagined to know the position of all the particles in the universe and to know the real and full laws of physics. It is imagined that the Demon would be able to predict everything that was going to happen or to explain everything that had happened. Since the halting problem is not solved even by an infinite amount of information, it will apply to the Demon too. If the Demon starts to try to solve a problem he will not know in advance, in all cases, if it is solvable nor will he know when to stop looking. As will be seen below, imagining that the halting problem can be solved leads to a contradiction.
If Turing Machines exist in the universe of Laplace’s Demon, then, since the halting “decider” program does not exist, Laplace’s Demon will not know which problems are solvable either. The limitations on computation mean that unpredictability remains.
A related problem might be answering the question: “Are the security measures taken to protect an operating system sufficient to prevent hacking?” Imagine constructing a program to determine that! Not finding an answer does not mean there is no answer. Laplace’s Demon might start developing a migraine.
The Impossible Machine Version of the Halting Problem
- Impossible Machine Version. Assumption for reductio: assume that there is a program for our UTM (Universal Turing Machine) that can determine for any arbitrary program+data combination whether it will halt.
- Call this program H for Halt-Check. H takes as its input a string p*d which is just some program followed by its data. We can write U(H*p*d) to say the input to U is H and the concatenation[14] of p and d with the conventional marker for their separation.
- Now, from this machine it is trivial to create the following machine C (for Crazy-Machine): C consists of this machine U and H plus just a little bit of extra code: if H says that p*d will halt, loop forever; if H says that p*d will never halt, then halt.
- Make just one more modification to create the machine R (for Really Crazy): R takes some input I and just concatenates it (by whatever method we are using to put program and data together) and feeds it to C. So, if you put input I into R it feeds (I*I) to C and we have C(I*I).
- Now, feed R to R. What happens? If R(R) halts, then by construction R(R) loops forever; and if R(R) loops forever, then by construction R(R) halts. Either way, a contradiction.
The machine R is impossible. Given H, the creation of C and of R was trivial, so something else must have gone wrong: namely, it must be the assumption that there is a program H.
- This result is shocking and very powerful. It means that we cannot be sure for any arbitrary well-defined question and procedure to answer it whether that procedure will ever give us an answer. There is no way now to realize Hilbert’s dream: we have agreed on what an — or, at least, what one kind of — effective procedure is, but we cannot be sure it will ever yield an answer in any arbitrary case. Since there is no effective way to determine what the effective procedures are (the good computer programs), it appears reason cannot discover the limits of reason. (A philosophical consequence of interest is also that this is a nicely clear example of how a fully deterministic system can be in principle unpredictable!)
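The reductio in the steps above can be mimicked in Python, with the caveat that H is precisely the program that cannot be written – the `halts` function below is a placeholder for an impossible machine, and all the names are illustrative:

```python
def halts(program, data):
    """The hypothetical halt-checker H. It is supposed to return True
    exactly when program(data) would eventually halt. No such program
    can exist, which is what the construction below demonstrates."""
    raise NotImplementedError("no such program can exist")

def crazy(pair):
    """The machine C: do the opposite of whatever H predicts."""
    program, data = pair
    if halts(program, data):
        while True:   # H said "halts", so loop forever
            pass
    # H said "loops forever", so halt immediately

def really_crazy(i):
    """The machine R: duplicate the input and hand it to C."""
    return crazy((i, i))

# Feed R to R: really_crazy(really_crazy). If H says it halts, C loops;
# if H says it loops, C halts. Either answer contradicts itself, and
# since C and R are trivial given H, the fault lies with assuming H exists.
```

Everything except `halts` is a few honest lines of code; the contradiction therefore falls entirely on the assumption that `halts` could be implemented.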
What this argument is showing is that assuming that a halting machine could exist leads to a contradiction. Assuming the opposite of what the thinker is trying to prove and deriving a contradiction from it is called an “indirect proof.”
In this case one begins with the imaginary existence of a perfectly functioning halting machine; a “decider” program, program H, that can determine whether any given p*d[15] will halt or whether it will loop forever.

The halting machine
Another way of picturing it is this:
The halting machine, H, is an algorithm; the decider;
that takes in a program
and an input for the program
Can it tell the operator whether the program halts or loops forever when run on the input?
The next step in Professor Delancey’s explanation is that if “H” is possible, then H+ is also possible. In fact, it is obvious and straightforward (trivial) to create.
H+ is a crazy machine that can be created from H. H+ says that when H says “Yes, the program with that input will halt,” then H+ will loop forever (i.e., not halt). When H says “No, the p*d will run forever,” then H+ will halt.

The crazy machine
The question that might arise for some people at this point is “Why would anyone in his right mind create H+?” It seems like a lovely and useful decider program, a halting machine, is being taken and messed up. What’s the point? Leave H alone!
It is useful to remember that the halting machine cannot exist and we are trying to prove that it cannot exist. So we are not “ruining” an otherwise great machine. We are half wrecking a machine that is impossible and are trying to prove definitively, once and for all, that it could never have existed in the first place.
Gödel’s Incompleteness Theorem constructs a proposition that says “this statement is not provable within this axiomatic system.” If this is proved, then it is proved to be unprovable, since that is what the statement says. If it cannot be proved, then once again, it is unprovable. Hence, not all true statements in a reasonably complex axiomatic system are provable.
In the case of the halting problem, it is being asked whether an algorithm can exist that could tell in all cases whether an answer can be found to all well-defined questions. We are then showing that answering “yes” to that question leads to a contradiction. We are going to feed the modified halting machine a question that it cannot answer without generating contradictory and impossible results, proving that the imagined halting machine cannot determine in all cases whether programs with given inputs will halt or not.
It should be remembered that Gödel’s Theorem does not apply at the level of simple addition. Arithmetic with addition alone is complete and decidable; it is once multiplication is included that unprovable statements arise.
Likewise, the halting problem does not apply to incredibly simple programs that say, for instance, “print ‘hello’.” This will stop for sure. Or “while (true) continue” will not halt. But in more complex cases the answer can be unknowable.
It is sometimes also possible to show with a specific algorithm whether it will halt or not. “But each [such] proof has to be developed specifically for the algorithm at hand; there is no mechanical, general way to determine whether algorithms on a Turing machine halt.”[16]
Returning to the issue of why anyone would make the crazy program H+: we are just following the logic, and logically, if H is possible, then H+ is trivial to create – H+ is completely logically consistent with H and can be easily derived from H. If the halting machine, the decider program, could exist, then H+ definitely could exist also.

The crazy machine 2
The next step is to create the really crazy machine, H++. H++ takes any input, concatenates it, I*I, and feeds it to H+.
“The trick is that the impossible machine (really, it’s an impossible program) must also consider itself considering itself as input. Suppose you have the halt-check program H, then you make the crazy program H+. But there’s another step. You make a program H++ that takes any input and concatenates it and feeds that to H+. So, if you feed any program P to H++, then H++ feeds PP to H+.
Now: feed H++ to H++. So H++ is asking: “Will I halt or will I run forever, when considering my own program?” It will not halt if looping, and it will loop if halting; a contradiction. Making H+ and H++ are trivial if you have H, so the source of that contradiction must be the supposition that there could be an H”.[17]
What does it mean to feed H++ to H++?
Again, H++ takes any input, concatenates it, I*I, and feeds it to H+.
This time the input is itself; H++.
It is important to know that when an input is concatenated, it is not just added or doubled. I*I means a program is running itself as input.
Windows runs Word as an input. The operating system program runs Microsoft Word which is also a program. These days we call programs like Word an “app,” as in “application” to distinguish it from an operating system; an OS, like Windows or Apple iOS.[18] But they are all just programs running other programs as input.
The really crazy machine is taking the really crazy machine as input and running it. But what is the really crazy machine doing in this case?
When H++ is given itself, H++, as input, H++ creates H++*H++ and submits H++*H++ to H+.
After all, all H++ does is add an input to itself and submit the result to H+. I*I = the input running the input. The input H++ is functioning as both the operating system and the app in this case.
The question being asked is: if H++ runs H++ and the result is submitted to H+, will H+ say it will halt or loop?
Again, will the program H++ that concatenates inputs, I*I, halt or run forever when running itself as input? The program that concatenates is being taken and being made to concatenate itself. Will that process ever finish?
Now using a program as its own input is pretty crazy, but H++, the really crazy machine, is obvious and straightforward to create from H+. It is logically consistent with and derived from H+.[19]
If H is possible, then H+ is possible and H++ is possible. And if H++ takes itself as input, then this involves feeding the result to H+. The result is that if H+ says it will halt, then it will run forever. If H+ says it will loop forever, then it will halt. This is a contradiction.
Since everything that has been said follows from assuming H exists; that a halting machine exists; and we have derived a contradiction from this assumption, then this assumption must be erroneous. No such machine; no “decider” program exists or could ever exist.
“There can be no effective procedures for finding all and only effective procedures.”[20] That is what the “decider” program, the halting machine, would be trying to do vis-à-vis other programs and inputs.
Infinite regress and the halting problem
Imagine the decider program, the imagined halting machine, has been given another program and an input to determine whether the program halts given that input.[21] The decider program is run for a second. It does not halt. It is run for a minute. It does not halt. The program runs for an hour. It does not halt. Is this because it is never going to halt, or because it has not been given long enough? Instead of solving the problem, it has just been reiterated.
The decider program becomes just another instance of the halting problem. Will the halting machine halt or not?
Perhaps another halting machine could be introduced to decide if the first halting machine was going to halt or to loop forever. But it would still not be possible to know if this new halting machine was going to halt or loop forever, so it would be necessary to submit this mess to yet another halting machine, etc.
Concluding Implications
The requirement that the results of thinking must be certain would make thinking impossible. Speaking to the London Mathematical Society in 1947 Alan Turing said “…if a machine is expected to be infallible, it cannot also be intelligent.[22] There are several mathematical theorems which say almost exactly that. But these theorems say nothing about how much intelligence may be displayed if a machine makes no pretence at infallibility.”[23]
Turing seems to have thought that intelligence requires more than computing:
“Intelligent behaviour presumably consists in a departure from the completely disciplined behaviour involved in computation, but a rather slight one, which does not give rise to random behaviour, or to pointless repetitive loops.”[20]
Peter Kugel writes:
Turing (1948, p. 21)[21] saw it more than fifty years ago when he suggested that, in order to be intelligent, computers would have to be given what he called “initiative.”
Turing (1948, pp. 21-23) discussed initiative, but he did not really define it other than to say that it was not “discipline” – the ability to follow orders given by others. Then he gave an example (Turing, 1948, p. 22):
A very typical sort of problem requiring some sort of initiative consists of those of the form ‘Find a number n such that …’ This form covers a very great variety of problems. For instance problems of the form ‘See if you can find a way of calculating the function which will enable us to find the values for arguments…’ are reducible to this form.
Finding such a number is, he wrote, “clearly equivalent to finding a program.”
It is not surprising that intelligence in computers might require them to be given some initiative. You have to let people exercise initiative if you want them to behave intelligently. You cannot expect intelligence from people if you insist that they do exactly what you tell them to do in exactly the way you tell them.
Likewise, Max Planck writes:
‘… empiricism is unassailable on the fundamental ground of pure logic; and its conclusions are equally impregnable. But if we look at it purely from the viewpoint of knowledge it leads into a blind alley, which is called solipsism. In order to escape from this impasse there is no other way open but to jump the wall at some part of it, and preferably at the beginning. This can be done only by introducing, once and for all, a metaphysical hypothesis which has nothing to do with the immediate experience of sense-perceptions or the conclusions logically drawn from them.’[22]
Pascal writes:
It is rare that the rationalists achieve subtlety and that subtle minds are rationalistic, because the rationalists want to treat matters of intuition rationalistically, and make fools of themselves, wanting to start with definitions and then move on to principles, which is not the way to deal with this kind of reasoning. Not that the mind does not do so, but it does it implicitly, naturally, and without artifice; for it is beyond man’s wit to say how, and even to intuit it belongs only to a few.[23]
All of these points are related to Iain McGilchrist’s comments about the functioning of the left and right hemispheres of the human brain in The Master and His Emissary.
If intuition and insight are often necessary to find algorithms and to derive testable hypotheses, what exactly are intuition and insight? Well, they clearly involve intelligence and they are not merely random. They plainly work, at least some of the time, since math and logic work and they ultimately rest on intuition and insight. They are not the result of algorithms in the brain because this supposition simply reiterates the assumption that the halting problem disproves.
One model of consciousness can be found in the allegory of Plato’s Cave. The levels outlined arguably correlate with body, mind, soul and spirit. There, consciousness extends downward into the shadows and the physical realm, and upward towards God. Conscious awareness at any given moment is a searchlight centered on some part of this picture. The rest is unconscious. Insight and understanding seem to be a more broadly focused right hemisphere affair. They often come after sleep or when someone is in a relaxed meditative state, perhaps out for a walk – in other words, indirectly. They will often be preceded by a conscious effort to understand but then arrive after the foot comes off the accelerator pedal. As a very minor example, having tried to remember a person’s name, another word, or a phrase, a person will sometimes find the information bursting unbidden into consciousness three days later while thinking about something else entirely. Perhaps such things are a matter of tapping into universal consciousness – most of which is subconscious to any given individual at a particular moment in time.
Formalism in mathematics means ignoring mathematical truth and meaning, and reducing mathematics to symbol manipulation following certain rules – which is what computers do. From the point of view of the symbol manipulator, what is being done is meaningless. If mathematics were formalizable, then algorithms would be quite sufficient. So long as the correct procedures were followed, the right result would be ascertained.
Claiming that so long as computers get the right result they have “understood” the question is like writing an equation on the board, altering some of the numbers, getting someone to mechanically copy what they are reading with some small changes and asserting that the result means the question and answer are comprehended. Such a person does not understand the question being asked, nor what the answer means. Such mindlessness can function for a while until the time comes when it is necessary to know what is going on.
Computers are stuck in a world of formalism. They are not dealing with numbers per se. They do not know what they are doing at all. Zeros and ones are being manipulated according to rules and those zeros and ones are just “on,” and “off,” not numbers. If all mathematics were capable of being turned into a formal system there would be no halting problem. An algorithm could be written to deal with all problems in mathematics. However, mathematics contains metamathematical elements to do with truth and meaning. Choosing axioms and checking that axiomatic systems are consistent involve appeals to intuition, “seeing” the truth of Gödelian propositions, and the notion of self-evidence.
This indicates that a computer running a program cannot attain artificial general intelligence (Strong AI) even in the realm of mathematics. This is what the halting problem means.
[1] Gödel’s Theorem at https://orthosphere.wordpress.com/2018/05/19/godels-theorem/ should be regarded as “Part 1” relative to the current essay.
[2] Penrose, Roger. The Emperor’s New Mind, p. 57.
[3] Turing got his proof after studying Gödel, and Gödel was well aware of Bertrand Russell’s paradox (p. 111): “R is the set of all sets that do not have themselves as members.” If R is not a member of itself, then it is a member of itself. If R is a member of itself, then it cannot be a member of itself. (p. 101) A key part of Gödel’s Theorem, the Gödelian proposition that cannot be proved within the system, turned this kind of paradoxical reasoning into part of a valid mathematical argument. Penrose, p. 111.
[4] An abstract description of a computer, a Universal Turing Machine, described below.
[5] https://en.wikipedia.org/wiki/Halting_problem
[6] One working programmer just calls them loosely “procedures.”
[7] Penrose, p. 64.
[8] Penrose, p. 59.
[9] Driving directions to the nearest large city are an algorithm. Whether you have actually arrived at the nearest large city is not answered by the driving directions themselves.
[10] An operating system might not qualify as an algorithm if it is regarded as not terminating. From https://stackoverflow.com/questions/35111283/os-as-algorithm: “Yes, an OS is an algorithm… since the OS is an algorithm for computing the next state of the kernel given the previous state [together with] a hardware or software interrupt. The computer runs this algorithm every time an interrupt occurs. But if you are being asked to ‘focus on the finiteness property’, then whoever is asking probably wants you to say ‘no’, because the OS doesn’t necessarily ever terminate… (except that when you characterize it as I did above, it does 🙂” – Matt Timmermans
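Timmermans’s point can be pictured as two nested pieces: an inner step that is a genuine algorithm (it always terminates), and an outer loop that need not. A minimal sketch, with invented state and interrupt names:

```python
def next_state(state, interrupt):
    """The inner step: compute the kernel's next state from the previous
    state plus one interrupt. This part always terminates, so it IS an
    algorithm. (The state representation here is invented.)"""
    return state + [interrupt]

def os_loop(interrupts):
    # The outer loop runs once per interrupt. Over an unending stream
    # of interrupts it never finishes, which is why the OS as a whole
    # fails the "finiteness" test for being an algorithm.
    state = []
    for irq in interrupts:
        state = next_state(state, irq)
    return state
```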
[11] Planck, M., Where is Science Going? (with a preface by Albert Einstein), trans. J Murphy, Allen & Unwin, London, 1933 p. 12.
[12] See the next article based on The Master and His Emissary by Iain McGilchrist. The left hemisphere of the brain deals with the familiar and the known; the right hemisphere with the anomalous, the ambiguous, the uncertain, and often the unprovable. It is associated with problem-solving – mathematical and otherwise.
[13] Craig Delancey. http://www.oswego.edu/~delancey/309_DIR/LLT_LECTURES/11_turing_out.html
[14] “Concatenation” just means stringing things together in a linear sequence. In the case of computers, it means one program running another program.
[15] “Program + data,” or “program + input.”
[16] https://en.wikipedia.org/wiki/Halting_problem
[17] Personal communication from Prof. Craig Delancey.
[18] As previously mentioned, it is hard to say if an operating system is a Turing machine or not. If it is conceived of as a set of instructions responding to various inputs and terminating – giving a specific output – after each such input, then the OS is frequently “starting” and “stopping.” If the OS is considered as a continuous loop, an OS is not a Turing machine.
[19] “Not every program would be impossible if fed to H+. Many would just loop or just stop.” From Prof. Delancey. And just looping or stopping is not a contradiction.
[20] Craig Delancey – private communication.
[21] https://www.youtube.com/watch?v=wGLQiHXHWNk
[22] The following quotations and comments about Turing are from Peter Kugel’s Computing Machines Can’t Be Intelligent (…And Turing Said So) http://hilltop.bradley.edu/~chris/ComputersCannotBeIntelligent.pdf
[23] Turing, A.M. (1947), Lecture to The London Mathematical Society On 20 February 1947, in Carpenter and Doran (1986), p. 124.
[24] Turing, A.M. (1950), Computing Machinery And Intelligence, Mind 59 (N.S. 236), 433-460, p. 459
[25] Turing, A.M. (1948), Machine Intelligence, in Meltzer and Mitchie (1969).
[26] Planck, M., Where is Science Going? (with a preface by Albert Einstein), trans. J Murphy, Allen & Unwin, London, 1933, p. 128.
[27] Pascal, Pensées, §1 (Lafuma §512).
Pingback: The Halting Problem | @the_arv
Richard — I am unsure whether I follow all the subtle turns of mathematical logic in your exposition, but based on what of it I do understand, and again on conversations that we have recently had, I have the intuition that in a layman’s summary, the Q.E.D. would be: It is an open universe, predictable here and there, unpredictable elsewhere, hence requiring for its navigation a measure of guesswork and faith; and that, at last, this character of things guarantees that freedom is real.
That would be about right! Since one computer programmer and one former Royal Navy weapons and computer systems engineer also have found the mathematical reasoning hard to follow, a little bit of faith that it is legitimate is necessary there too.
Pingback: The Halting Problem | Reaction Times
I’m reminded of when people complain “Oh, you Catholics are so dogmatic. You think you know the truth about everything.” No, we think we know the truth about some things. Incomplete knowledge is still knowledge.
Like Penrose, the lesson I like to take from Goedel’s theorems is not about limits to what we can know, but that we know more than we should if our minds were limited to a finite set of axioms and algorithmic manipulations upon them. He gives an example of an algorithm, I believe in “The Road to Reality” (I’m just going from memory), that the reader can quickly convince himself will never terminate although there is no general algorithm for settling this. The conclusion Penrose draws in his other books is that rational thought is something broader than computation. It would appear to be a compelling argument, given that philosophers of mind like John Searle are willing to put the reliability of mathematics into doubt to avoid its conclusion.
Hi, Bonald: I agree that the halting problem and Goedel’s incompleteness theorem are more about the limits of rationalism than of knowledge. They would only be an absolute limit on knowledge if knowing were restricted to algorithms. Knowing that the discovery of algorithms cannot itself be done algorithmically, nor even their validity tested algorithmically, and that unprovable assumptions are present even in mathematics beyond the level of addition, is for me liberating and optimistic, and points to the greater depths of the human soul and wider reality.
In maths, how is “truth” defined as being distinct from “provable”?
Starting from a set of axioms, what else is mathematical truth apart from the set of provable statements?
You say
“Any axiomatic system at the level of complexity of multiplication and beyond must contain axioms not provable within that system.”
Surely, an axiomatic system takes axioms to be given. They don’t need to be proven. The proofs start from assuming the axioms. This is what “axiom” means. So, I don’t think this statement can be right.
It is not axioms that are implied in “Not all truth is susceptible of proof.”
An axiomatic system has certain statements that can be seen to be true but are not provable – that is, they are not derivable from the axioms. These are what are called Goedel propositions.
The incompleteness refers to the Goedel propositions and not to the axioms.
That seems to equate with what has been my understanding. I have, for long, understood the theorem to be that “in any system above the most simple, there is at least one statement or proposition not provable by the axioms contained therein”. Of course, one shouldn’t expect much certainty or consistency from me.
I have understood the theorem as pointing towards God, in that if one takes the Universe as a system, it will inevitably contain propositions that cannot be proven within itself. If one accepts the Universe as consistent (itself an unprovable assumption, I suppose), then these unprovable propositions can only be proven by Something outside the system. I believe Kurt Goedel converted from atheism to Christianity and I assume this might have been one of his lines of reasoning, but I’d be very grateful for any detailed account of this aspect of his life.
Hi, mickvet: I also think Goedel’s Theorem points in this direction.
The Goedelian proposition is true. So your statement should be slightly modified:
“in any system above the most simple, there is at least one true statement or proposition not provable by the axioms contained therein.”
Thanks, Bedarz Iliachi. I think you are right. See my reply after the Goedel’s Theorem article. It contains a couple of questions.
There are a number of fundamental misconceptions in this article.
[Thank you for taking the time to comment and thanks for reading!]
You wrote, The terms “algorithm,” “computer program,” “mechanical decision procedure,” “mechanical procedure,” “effective procedure,” or just “procedure” according to one computer programmer,[5] are all synonyms.
Then this programmer isn’t making distinctions inherent in the terms. A “computer program” isn’t necessarily an “algorithm.” By definition, an algorithm contains a finite number of steps. A computer program need not. In the same way, “procedures” need not be “effective”.
[By claiming that computer programs are not algorithms you are making a controversial claim. If this counts as a “misconception,” I seem to be in excellent company.]
[Certainly a “procedure” need not be “effective” – it might be crap; a non-algorithm. Apparently in the programmer’s line of work, they don’t bother calling things that don’t work “procedures” at all. He’s not a theoretician or academic, just a run-of-the-mill actual working programmer. I’m not saying the programmer is saying they are all synonyms {note the placement of commas} – just that he uses the word “procedure” to refer to effective procedures in his line of work.]
[A computer program with an infinite number of steps is caught in an infinite loop. Certainly, non-functional computer programs are not algorithms. And dead people don’t require oxygen for respiration. Should doctors always refer to “living” patients instead of just “patients?” This point seems pedantic.]
[Or, based on your last comment, are you thinking of a search program that is searching for something that is not there? Google seems to have found a way around that – but I’m not sure how they do it. Please explain if you know.]
“Algorithms can be contrasted with heuristics.”
This is a category error. A heuristic is simply a means of making a selection out of a group of possible choices. Algorithms can use heuristics as part of their steps.
[But that doesn’t mean that algorithms are the same things as heuristics does it? In which case they can be contrasted. Paragraphs employ sentences but they are not the same thing either. But that is interesting to know.]
[If a heuristic is a rule of thumb to be amended according to circumstance, computers can’t do this, unless exactly what to do is specified in advance according to each circumstance, in which case they turn back into algorithms. “Making a selection out of a group of possible choices” doesn’t sound so “simple” if the selection is to be intelligent and appropriate. – let me know where I am going wrong here.]
“An algorithm, though, is a set of instructions about what precisely to do in a given well-defined circumstance that will give a guaranteed result.”
This is phrased poorly. An algorithm, by definition, is guaranteed to give a result. It is not guaranteed to give the same result each time (that would be a mathematical function, which is a subset of an algorithm). An algorithm can, for example, throw dice to get to a result. Note that mathematicians use function one way, programmers another.
[Thanks for the editing suggestion. I am aware that every time someone does long division they will not get the same result. That would be a bit pointless. Perhaps I can write “is guaranteed to give a result” so no one gets confused. I was not aware that “guaranteed result” would imply the same result each time.]
“If there turns out to be a proof for the Goldbach conjecture, an algorithm, it is not one that is going to be found by using another algorithm.”
It isn’t this simple. General search procedures (which, by definition, aren’t algorithms, because they aren’t guaranteed to terminate) produce algorithms if the search succeeds. Since a search knows its starting point and has found an ending point, the path from start to end is a finite series of steps leading to the desired result. Which is an algorithm. So general searches that succeed become algorithms in hindsight.
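The commenter’s point – that a successful search yields an algorithm in hindsight – can be illustrated with a breadth-first search that records its path. The maze and all names here are invented for illustration:

```python
from collections import deque

def find_path(graph, start, goal):
    """General search: not guaranteed to succeed in an arbitrary graph,
    but if it does succeed, the recorded path is a finite, repeatable
    series of steps from start to goal: an algorithm, in hindsight."""
    frontier = deque([[start]])
    seen = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None  # the search failed; no algorithm was produced

maze = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
```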
[The halting problem shows that if Goldbach’s conjecture is to be proven, there is no all-purpose algorithm (a halting machine) for proving it. The eventual solution might be an algorithm – in fact it has to be {I think} – but that is not how the algorithm is discovered. See Einstein’s comment about how his discoveries are all beautifully clear and objectively true, but he discovered them using intuition based on lots of experience. The point is supposed to be that a general search procedure is not going to be how the outstanding problems in mathematics are going to be solved.]
Thank you for taking the time to comment and thanks for reading!
You’re certainly welcome.
“By claiming that computer programs are not algorithms you are making a controversial claim. If this counts as a “misconception,” I seem to be in excellent company.”
I recommend Scott Aaronson’s The Toaster Enhanced Turing Machine where he writes, in part: “The prompt for this debate was a question asking for opinions about Peter Wegner and Dina Goldin’s repetitive diatribes claiming to refute “the myth of the Church-Turing Thesis”—on the grounds that, you see, Turing machines can only handle computations with static inputs and outputs, not interactivity, or programs like operating systems that run continuously.” [emphasis mine]. See also Fortnow’s UBIQUITY SYMPOSIUM ‘WHAT IS COMPUTATION?’ THE ENDURING LEGACY OF THE TURING MACHINE”. He says, “Computation is about process, about the transitions made from one state of the machine to another. Computation is not about the input and the output, point A and point B, but the journey. Turing uses the computable numbers as a way to analyze the power and limitations of computation but they do not reflect computation itself. You can feed a Turing machine an infinite digits of a real number (Siegelmann [2]), have computers interact with each other (Wegner-Goldin [5]), or have a computer that perform an infinite series of tasks (Denning [1]) but in all these cases the process remains the same, each step following Turing’s model.” [emphasis mine]
By definition, “an infinite series of tasks” is not an algorithm — but it is a computation described by a Turing machine, or the Lambda Calculus, or other equivalent.
A computer program with an infinite number of steps is caught in an infinite loop.
But that may be exactly what should happen. Just because something is in an infinite loop doesn’t mean it isn’t doing something useful. The control system for the Mars rover Curiosity has been running a loop for around six years.
But that doesn’t mean that algorithms are the same things as heuristics does it?
A heuristic is just a sequence of steps for making a choice. That’s all. In that sense, it’s an algorithm, but it’s usually embedded in a larger algorithm. Just like single machine instructions, like “DIV” and “ADD” and “MOV” are algorithms, that are parts of larger algorithms. Algorithms used by algorithms used by….
“Making a selection out of a group of possible choices” doesn’t sound so “simple” if the selection is to be intelligent and appropriate. – let me know where I am going wrong here.
Well, you’re now including ill-defined terms, like “intelligent”. Intelligence is a spectrum of behaviors. We happen to be at one end, my dog is on another part, and computers are coming up fast. A heuristic is simply an algorithm for making a choice, usually to try and reduce the size of a search space. The more a heuristic can successfully prune a search tree, the more intelligent it appears.
“I was not aware that “guaranteed result” would imply the same result each time.”
We generally class the “random()” function as an algorithm. And if the “random()” function takes input from a quantum device so that the result is truly random, well, that’s fine.
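A concrete instance of an algorithm that terminates on every run yet rarely returns the same answer twice is a Monte Carlo estimate. This sketch uses Python’s standard `random` module; the function name is invented:

```python
import random

def estimate_pi(trials, rng=None):
    """Performs exactly `trials` steps and always returns a result, so
    it is an algorithm in the 'guaranteed to give a result' sense; yet
    two runs with different seeds will rarely return the same value."""
    rng = rng or random.Random()
    # Count random points in the unit square that land inside the
    # quarter circle of radius 1; the fraction approximates pi/4.
    inside = sum(
        1 for _ in range(trials)
        if rng.random() ** 2 + rng.random() ** 2 <= 1.0
    )
    return 4 * inside / trials
```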
“The halting problem shows that if Goldbach’s conjecture is to be proven, there is no all-purpose algorithm (a halting machine) for proving it.”
That’s not what the halting problem says. It simply says that some problems are structured such that a form of “the liar’s paradox” is embedded in the problem. These kinds of problems can’t be solved by search. The halting problem does not say whether a particular search will be successful or not. We have sophisticated software tools that examine code for bugs. The tools can say “that part of the code will halt, that part of the code will loop forever, and I can’t tell about this other part of the code.” We just don’t have a tool that can successfully analyze every conceivable program, because that’s impossible.
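The three-valued behavior described here (“halts,” “loops forever,” “can’t tell”) can be caricatured in a few lines. The patterns below are invented and far cruder than anything a real analyzer uses, but the “unknown” escape hatch is exactly what the halting problem forces on every such tool:

```python
def analyze(source):
    """Toy three-valued halt checker over Python source text, using
    crude syntactic patterns (illustrative only). Real tools are far
    more sophisticated, but must keep the same 'unknown' answer."""
    if "while True" in source and "break" not in source:
        return "loops forever"   # an unconditional loop with no exit
    if "while" not in source and "for" not in source:
        return "halts"           # straight-line code runs to the end
    return "unknown"             # the honest answer for the rest
```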
[I’ve come across this claim before from a proponent of strong AI – that Goedelian propositions are of the same type as the liar paradox. They are not. The liar paradox – “this statement is false” – is undecidable, period. Goedelian propositions are decidable, just not with reference to the axiomatic systems that give rise to them. Their truth could be described as metamathematical.]
The point is supposed to be that a general search procedure is not going to be how the outstanding problems in mathematics are going to be solved.
And here I must disagree, because that’s exactly how humans solve them. We have individuals trying to solve these things — searching for an answer — and our books are our “tape” that keeps track of results. Our shared experiences are our heuristics. These searches have been going on for thousands of years. Computers haven’t begun to match the capability of billions of minds searching for answers over thousands of years. Any more than dogs have. But dogs have short “tapes” (unlike us, they don’t use external storage), and aren’t increasing in computational complexity. But computers are. It’s just a matter of time. However, I suspect I won’t convince you that the difference between man and machine is one of degree and not kind. We’re both waiting for the other side to strike the decisive blow. If you could do something that has been proven a computer cannot do in principle then I’d admit defeat. If I could build HAL-9000, or his great-great-grandson, I would hope you would admit defeat.
What you describe is not a “general search procedure” – just a general search. Remember that “procedure,” to stick to Turing, is an effective procedure.
Regarding Mr. Turing, it is also worth remembering that his motivation was to find a soul, a software, that could be realized in any hardware, due to the death of a beloved classmate. AKA wishful thinking. As Dana Carvey’s Bush says: “Not gonna happen.” Turing also mentions the necessity for computers to discover “initiative.” Good luck with that!
My Goedel’s Theorem article is very much “part 1” to this one, BTW.
Part of me wants to say “I have no idea why someone like you wants to be a machine.” But another says “I too, perhaps lately, am sick of life and am happy to leave it to the machines.” At least I get to say, in this argument, I am literally on the side of the angels. Ivan in “The Grand Inquisitor” says most of us can’t stand freedom and wish to be slaves.
You and I do things a computer can’t do every second, much of it related to the right hemisphere of our brains, so I guess I win. A lot of it would be things an autistic person cannot do, but a dog or cat can. But, certainly, if you can come up with HAL-9000 or Data from STNG I’ll pay respectful attention!
Regarding Goldbach’s conjecture, the halting problem says exactly what I say it says. I agree that “these kinds of problems can’t be solved by search.” [Consistent with my claim] I agree “The halting problem does not say whether a particular search will be successful or not.” It says “there is no all-purpose algorithm (a halting machine) for proving it.” Actual thinking is required. You know, that thing computers don’t do.
I love the “ill-defined” gambit. That is what Godel’s Theorem and the halting problem are all about. Much of thinking, especially innovative thinking involving mathematical discoveries is “ill-defined.” Both Goedel and Turing signal the death of David Hilbert’s dream of mechanical procedures for solving all mathematical problems. The truth and validity of algorithms is not determined by algorithms. Stanley Jaki: “The fact that the mind cannot derive a formal proof of the consistency of a formal system from the system itself is actually the very proof that human reasoning, if it is to exist at all, must resort in the last analysis to informal, self-reflecting, intuitive steps as well. This is precisely what a machine, being necessarily a purely formal system, cannot do, and this is why Gödel’s Theorem distinguishes in effect between self-conscious beings and inanimate objects.” (p. 220, Brain, Mind, and Computers)
“Machines merely perform functions which man correlates with thought. Their output is devoid of any intellectual content and becomes meaningful only if the human operator is there to interpret it.” (p. 228, Ibid) AKA The Chinese Room argument.
What you describe is not a “general search procedure” – just a general search. Remember that “procedure,” to stick to Turing, is an effective procedure.
I’m not sure what you’re trying to say. If you’re looking for your misplaced keys, that’s a general search. If you find them, that’s an effective search. A search is a search is a search, regardless whether it’s my dog looking for his bone, a computer looking for a winning move in chess, or you looking for your keys.
[No. An effective procedure is an algorithm. An algorithm is a universally valid method for achieving a result. An example is long division. All who accurately follow the step-by-step instructions will be able to get a correct result. There is no algorithm for finding keys, writing a novel, or responding in such a way that your wife is satisfied with your response instead of being angry at you. If I tell you how I found my keys on a particular occasion, this is not a universally valid method that all can follow. All you have is a description of my search on this particular occasion; a historical record of what I did. The fact that I happened to be successful does not transform my ad hoc search into an algorithm that all can follow. A chess-playing computer is following some kind of algorithm, either the same one each time, or ones written for it by its programmers about how to deal with certain situations it might encounter. Computers can play chess because chess is a strictly rule-bound activity. It is known in advance which moves are permitted and which are not. Real-life situations frequently have no such rules and thus pre-programmed responses are unlikely to work.]
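The point about universally valid methods can be made concrete with the essay’s earlier example, Euclid’s method for the greatest common divisor – a genuine algorithm in this sense, since anyone who follows the steps accurately reaches the same correct result:

```python
def gcd(a, b):
    """Euclid's algorithm: repeatedly replace the pair of numbers with
    (smaller number, remainder) until the remainder is zero. A
    universally valid, step-by-step procedure: every accurate follower
    of these instructions gets the correct answer."""
    while b != 0:
        a, b = b, a % b
    return a
```

There is no comparable step-by-step recipe for finding lost keys; that contrast is the one being drawn.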
Part of me wants to say “I have no idea why someone like you wants to be a machine.”
It isn’t a question of “want”. It’s solely a question of where all of the evidence leads. And if the evidence runs afoul of a cherished intuition, then I may have to re-inform my cherished intuition. (Do you ever wonder why people are so adamant that they can’t possibly be a “machine”, as if that were a fate worse than death?)
[I think you are being disingenuous. “All the evidence” does not lead to this conclusion. In a later comment you claim that following an algorithm is the same thing as figuring out an algorithm in retrospect, as though following Google directions is the same thing as writing Google directions after the event. This is not at all logical. It seems to be a desperate attempt to maintain your contention that all thinking is algorithmic. Ad hominems are not nice and are usually fallacious in response to arguments. On this particular topic, however, the question of emotional intelligence seems relevant. Stanley Jaki, Roger Penrose and indirectly perhaps, Turing and Goedel, are pointing out that even when it comes to something as simple as arithmetic, there is a role for informal thinking, intuition and self-reflection. Low emotional intelligence people have trouble accessing what is going on in their right hemispheres and thus encounter a kind of absence. They are then apt to make false generalizations about the nature of thinking by extrapolating from their own experience. This autistic defect makes the contention that we are all just rule-following devices (machines) seem plausible. Only someone who already experiences life in a somewhat mechanized manner could not look in horror at the prospect of turning into a machine, or perhaps, as I indicated, someone who wishes to escape the pain of human existence.]
[Cordwainer Smith’s story “Scanners Live in Vain” is on this topic. “Over and over again, Smith describes the horror and inhumanity of the haberman and the scanner. Martel says:
‘But our lives, Luci. What can you get out of being the wife of a scanner? Why did you marry me? I’m human only when I cranch. The rest of the time—you know what I am. A machine. A man turned into a machine. A man who has been killed and kept alive for duty. Don’t you realize what I miss?” . . . How will I know if I’m dead?’
A man without feelings is an abomination: a machine; indistinguishable from a dead man.”
From my article: http://literatefreedom.org/13-2-literary-analysis3/ ]
Ivan in “The Grand Inquisitor” says most of us can’t stand freedom and wish to be slaves.
We know from computability theory that non-deterministic Turing machines have deterministic equivalents. So maybe it’s just a matter of perspective. I have no problem being free in Christ, I also have no problem confessing Jesus as Lord.
[Jesus would presumably also be a machine. So that would make you literally a machine-worshipper. What being free in Christ would mean in this context I have no idea. It certainly seems a sudden orthogonal tangent. I’m reminded a bit of Ernst Junger who loved mechanized warfare in WWI. He found it sublime. He felt he was witnessing some kind of suprahuman heaven.]
You and I do things a computer can’t do every second…
Sure. But the question is whether this is because of a difference in principle or a difference of degree. A computer program is determined by the complexity and arrangement of the wiring. Our brains are far more complex and have a different arrangement of wires from today’s computers. There is nothing, in principle, that says that a computer with a more complex arrangement of wires won’t have any less abilities than we do.
[This was the response a self-confessed low emotional intelligence woman once made when she saw my “Yes-Man” doll. It was a parody of the company man and arse-licker. You bang it on the head and out comes a pre-programmed response sounding like it is commending what a boss-figure has just said. E.g., “I couldn’t agree with you more.” It is literally a non-thinking machine. Like you she commented that we are just like that, just “more complicated,” as though everything we do and say is also pre-programmed in the same manner. She apparently found the idea plausible leading me to speculate what it must be like to be her – bearing in mind that direct access to consciousness is exclusively first-person – though we have plenty of indirect access.]
Regarding Goldbach’s conjecture, the halting problem says exactly what I say it says. … It says “there is no all-purpose algorithm (a halting machine) for proving it.” Actual thinking is required. You know, that thing computers don’t do.
You’re making the halting problem say what it doesn’t say. The halting problem does not say whether or not you’ll find your lost keys, or whether you’ll end up in a cul-de-sac where you go round and round endlessly. [No, but there is no lost key algorithm – wouldn’t that be handy!] And you’re making a distinction of kind, not degree, when you say that thinking is something that computers don’t do. [Correct.] They certainly manage to come up with novel solutions to problems. I’d have to dig through my textbooks to find it, but in the early days of AI, there was a theorem prover that came up with a proof in geometry that had never been seen before.
[There is no such thing as a mechanical “theorem prover.” That’s what the halting machine establishes. It sure would make life easier if there were!]
Both Goedel and Turing signal the death of David Hilbert’s dream of mechanical procedures for solving all mathematical problems.
So what? Just because there is no mechanical procedure to solve all mathematical problems doesn’t mean that mechanical procedures can’t solve some mathematical problems (and they have). And not only have humans not solved all mathematical problems (does P == NP? Is the Riemann Conjecture true?) there’s no guarantee that we’ll solve them, either.
[The truth and validity of algorithms is not determined algorithmically. If someone writes an algorithm that “solves” a mathematical problem, whether the proof is successful or not is still going to have to be proved by a person otherwise you would just be taking the purported success of the algorithm on faith – and that isn’t how mathematical proofs work.]
In closing, I’ll comment on the Chinese Room argument. When I first read it, I was baffled why so many people believed it. In a nutshell, the argument says “we can build a machine that can translate from one language to another but that doesn’t understand either language. Therefore, we can’t build a machine that can translate from one language to another that understands either language.” The conclusion simply doesn’t follow. Years later, I read John McCarthy’s take on Searle’s argument and he says what I concluded, but more eloquently.
And, since I had to search for McCarthy’s response to Searle, while doing so, I came across this note on Turing and Gödel:
“In the 1930s mathematical logicians, especially Kurt Gödel and Alan Turing, established that there did not exist algorithms that were guaranteed to solve all problems in certain important mathematical domains. Whether a sentence of first order logic is a theorem is one example, and whether a polynomial equations in several variables has integer solutions is another. Humans solve problems in these domains all the time, and this has been offered as an argument (usually with some decorations) that computers are intrinsically incapable of doing what people do. Roger Penrose claims this. However, people can’t guarantee to solve arbitrary problems in these domains either. See my Review of The Emperor’s New Mind by Roger Penrose. More essays and reviews defending AI research are in [McC96a].”
[You say this is your review of The Emperor’s New Mind, but this is by John McCarthy – deceased]
David Hilbert asked a question. Turing answered it “no.” It turns out math is not completely formalizable. Now we know. Likewise, Goedel proved that axiomatic systems capable of generating the simple truths of arithmetic (at the level of multiplication and above) cannot be both complete and provably consistent.
If anyone were to read your account of the Chinese Room argument they would be none the wiser concerning it. A computer understands neither the question nor its own answer. Neither the input nor the output. A human operator is necessary to make sense of the latter. The point is supposed to be that what you are doing and what the computer is doing is completely different. This does require an intuitive appreciation of the difference… It’s not a proof – just an appeal to your basic experience of being a living, breathing, thinking person.
When Thomas Hobbes encountered a calculating machine he responded “behold, it thinks,” or words to that effect. Its inventor – a far better mathematician than Hobbes, who was apparently rather bad at mathematics – completely disagreed. Likewise with Pascal’s calculator. Pascal’s description of it was that it takes out much of the boring, mechanical aspect of doing math – not that the wheels and gears were “thinking.” So this tendency to attribute “thinking” abilities to machines goes back a way.
I think you gave the game away in the last round with the phrase “ill-defined.” That is precisely what Goedel and Turing proved – that ill-defined thinking, AKA intuition, is needed. In fact, choosing the axioms of an axiomatic system relies on notions like “self-evidence,” based on the “meanings” of mathematical propositions and terms, which are also “ill-defined.” THAT is not an algorithmic process.
Congratulations on your hero winning the Turing prize. My hero, Penrose, won the Wolf prize for physics. The topic is whether thought can be completely mechanized. Computer scientists and mathematical physicists and philosophers all potentially have something to contribute to this cross-disciplinary question. Computer scientists are not necessarily at their strongest when it comes to the thinking part – many of them having difficulty identifying and describing emotional states and other right hemisphere phenomena and thus ending up with a funny idea about how humans actually function.
It is a matter of opinion whether I am “misstating” the implications of Goedel’s Theorem or not. I, of course, could apply the same comment to you. I agree with Penrose – surely an expert. You do not, and appeal to yours.
Your definition of searching for lost keys as an algorithm is not a good one. See my mention of Einstein commenting on the difference between discovering something nice and objective and the process of making that discovery – words like “insight,” and “intuition,” arise. It is the difference between concrete operational rule following and formal operational: the master’s ability to transcend rules and improvise an effective response to what might be a one-off situation that is highly unpredictable. The master is not following a pre-established step-by-step procedure, he is being creative. In retrospect only, to refer to one of your points, an algorithm could perhaps be derived. It’s likely to be a pretty useless one since it was good for that one occasion e.g., a street fight or writing a creative essay. In this case, doing something backwards is not the same as doing it forwards. Doing it backwards means you know in advance how to do it.
Pingback: AI and the Dehumanization of Man – The Orthosphere
Pingback: Nikolai Berdyaev: the Primacy of Freedom – The Orthosphere
Pingback: Tit for Tat – The Orthosphere