ChatGPT4

ChatGPT4 is supposed to be an improvement over ChatGPT3. One thing it can now do is correctly state what will happen if a glass containing a ring is inverted over a bed: the ring will fall out. If the ring is embedded in ice, it will not fall out. If the glass with the ice and ring is left in a warm room for long enough, both the water and the ring will fall out. Amazing! It reminds me of a scene from Asterix the Gaul. The Romans have captured Asterix and his druid, Getafix, and are trying to get them to make the magic potion that gives the Gauls superhuman strength. Naturally, the Gauls do not want to do this. Getafix gives them a hair-growing potion instead. But, before the deception is discovered, the Roman leader, who thinks he has drunk the correct potion, wants to test his new abilities, trying ever lighter loads until he finally picks up a small rock, convinced he has Herculean strength.

[Image: panel from Asterix the Gaul]

We know that mathematics cannot be reduced to the mere manipulation of symbols without considering their meaning. Goedel's Theorem proved this: any consistent axiomatic system rich enough to express basic arithmetic will generate Goedelian propositions that are not provable within that system. Their truth can only be seen by human intelligence, and even then only by someone sufficiently smart and trained in the appropriate kind of mathematics.

The Halting Problem was discovered by Alan Turing at the same time as his invention of a formal description of what is now the modern computer, the so-called Turing machine: a read/write head, a program, and a (hypothetically) infinite memory. His question was whether a halting machine could be created that could test any program to see if it was valid or not, where an invalid program is one that never halts. The failure to halt can be seen when we wait for a program to load, or some such activity, and a little green circle indicates that the task has not been completed. Is it caught in an infinite loop and we should bail out, or will it resolve itself? It turns out that there is no way to automate this decision. There is no algorithm that can test the validity of all other algorithms. Again, human intelligence must step in to check that an algorithm is valid and thus will not result in an infinite loop. In fact, imagining that a halting machine could ever be invented results in a contradiction.

Machines cannot and will never be able to do what the human mind can do in this regard. Anyone claiming real machine intelligence must explain how they will avoid the Halting Problem and Goedel's Theorem. The developers of ChatGPT do not claim that it is genuinely intelligent. It is merely engaged in symbol manipulation. It is not thinking, nor is it able to solve novel problems. If you ask it how to develop workable cold fusion, it will not provide the ability to do so.
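To make the contradiction concrete, here is a minimal sketch of Turing's argument in Python (the function names `would_halt` and `contrary` are purely illustrative): suppose a perfect halting tester existed; then a program built to do the opposite of whatever the tester predicts about it leaves the tester no correct answer.

```python
# Sketch of Turing's contradiction: assume a universal halting tester exists.

def would_halt(program, argument):
    """Hypothetical oracle: True if program(argument) would eventually halt.
    Turing showed no general-purpose version of this function can exist."""
    raise NotImplementedError("this is exactly what cannot be written")

def contrary(program):
    """Do the opposite of whatever the oracle predicts about program(program)."""
    if would_halt(program, program):
        while True:      # oracle says "halts", so loop forever
            pass
    return "halted"      # oracle says "runs forever", so halt immediately

# The contradiction: consider would_halt(contrary, contrary).
# If it returned True, contrary(contrary) would loop forever, so True is wrong.
# If it returned False, contrary(contrary) would halt at once, so False is wrong.
# Either way the assumed tester gives a wrong answer, so it cannot exist.
```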

On Lex Fridman's podcast (3/25/2023), Sam Altman, the CEO of OpenAI and co-creator of ChatGPT4, agreed with this last statement. He suggested that it would only be when a program like ChatGPT4 could make new discoveries in science that he would want to call it AGI (artificial general intelligence, i.e., human-like intelligence, as opposed to the narrow abilities of current AI).

AI works great for closed loops, like chess or Go, which are tightly rule-bound. What "success" looks like is defined in advance. It can figure out how to get from point A to point B because the question is well-defined and the end point is determined in advance. The end point of machine learning is determined by the computer scientists using it. Training the bot is done by humans, who have to judge how reliable the bot's answers are. It is not self-learning.

13 thoughts on "ChatGPT4"

  1. I once saw a hilarious blog or Reddit post where some very wicked fellow had specced out a job which didn't even attempt to disguise that it was the development of an application to solve the Halting Problem, and posted it to a popular freelancing website where Indian IT sweatshops bid for work. Needless to say, there was an avalanche of takers.

    If you’re familiar with that pedagogical masterpiece of an introduction to Computation, The Little Schemer by Friedman and Felleisen, have a look at the just-released “The Little Learner — A Straight Line to Deep Learning” by Friedman and Mendhekar (who would know the Halting Problem if he bumped into it).

    • Haha – if they cracked it, reality would dissolve.

      I'm not familiar with the book. My son is a programmer, but not me. I spent months, some years ago, getting the best understanding of Goedel's Theorem and the Halting Problem that I could manage. A friendly Indian PhD commenter got me to buy and read Stanley Jaki's Brain, Mind and Computers, which was excellent.

  2. This is not really a manipulation of symbols but rather statistical prediction: after having read a lot of text, it can predict which word is most likely to follow the words "the sun is shining and the sky is…"

    This was shown to be theoretically possible a century ago; what happened recently is basically that computers became powerful enough: https://en.wikipedia.org/wiki/Markov_chain
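    As a rough illustration of the statistical idea (a toy Markov-chain word counter, not how GPT actually works internally; the corpus here is made up), a few lines of Python can already do crude next-word prediction:

    ```python
    from collections import Counter, defaultdict

    # Toy Markov-style predictor: count which word follows which in some text.
    corpus = "the sky is blue . the sun is shining and the sky is blue . the grass is green"
    words = corpus.split()

    followers = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        followers[current][nxt] += 1

    def predict_next(word):
        """Return the statistically most likely word to follow `word` in the corpus."""
        return followers[word].most_common(1)[0][0]

    print(predict_next("is"))   # -> 'blue', because it follows "is" more often than any other word
    ```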

    The interesting part is that the Scott Alexander types say the human mind does not do anything else but statistical prediction either; they define rationality as something approaching a perfect Bayesian reasoner. He is just saying the AI is not good enough yet. Interestingly, Scott's argument is not from the strengths but from the weaknesses of AI. ChatGPT, when asked to write a numbered list, went 1, 2, 3, 4, 2, 1, 3, 4; i.e., it was bad at precisely something computers are usually good at and small human children are usually bad at. Another weakness is hallucination, when it comes up with something wrong and then doubles down and defends it. That is also a human-like trait.

    I don't know. Look, when I was bored during the lockdown I looked into music theory, and found that you need no talent to compose a basic pop-rock song; it is all settled "science": chord progressions, inversions for a simpler bass line, arpeggiating your chords for melody, and so on. And I asked GPT to give me moody melodic chord progressions, and it did. From there on, basically the entire basic pop-rock production process could be automated.
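    For what it's worth, the "settled rules" part really is mechanical. A minimal sketch (standard triads in a major key; the function names are just illustrative) that spells out a IV-ii-V-I progression in any key:

    ```python
    # Spell out a IV-ii-V-I progression as basic triads in a given major key.
    NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
    MAJOR_SCALE_STEPS = [0, 2, 4, 5, 7, 9, 11]  # semitone offsets of the major scale

    def triad(key, degree):
        """Build the triad on a scale degree (1-7) by stacking scale thirds."""
        root = NOTES.index(key)
        scale = [NOTES[(root + step) % 12] for step in MAJOR_SCALE_STEPS]
        return [scale[(degree - 1 + offset) % 7] for offset in (0, 2, 4)]

    def progression(key, degrees):
        return [triad(key, d) for d in degrees]

    # IV-ii-V-I in C major: F major, D minor, G major, C major.
    print(progression("C", [4, 2, 5, 1]))
    # [['F', 'A', 'C'], ['D', 'F', 'A'], ['G', 'B', 'D'], ['C', 'E', 'G']]
    ```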

    The question is: is the difference between yet another IV-ii-V-I rock song and Bach merely a matter of some kind of quantity (Bach had more "skill points"), or is it something fundamentally different?

    OTOH I have some pretty good arguments why consciousness cannot be physical. Basically, consciousness confers no evolutionary fitness advantage over a hominid with the same spear-throwing skills but no awareness.

    AI is basically a simulation. At some point a simulation might become indistinguishable from the real thing for all observers.

    • Thanks, Dividualist. Yes. I am continuing to listen to podcasts on the topic and just listened to Stephen Wolfram. Someone earlier described ChatGPT as autocomplete on steroids. It searches for the statistically most likely word to come next in a sentence – no understanding necessary – which must make it one of the most derivative and mindless of activities.

      A machine can write something in the style of Bach, but doing so is also just derivative, a facsimile. It takes an actual personality and talent to create a really good style in the first place. The right hemisphere makes creativity, insight, and imagination possible. The left hemisphere merely represents and follows algorithms. It is possible to factory-produce pop songs, and to do a pastiche of Bach or Beethoven, but not to do what Bach or Beethoven did, which was not a mere copy of anything.

      Human beings have helped the machine identify "moody chord progressions," so we are seeing human intelligence being embodied, with no intrinsic intelligence in the machine.

      You write: “Scott Alexander types say the human mind does not do anything else but statistical prediction too.” That is at the heart of my article AI and the Dehumanization of Man. The technique is to denigrate what humans do and talk up what computers do and try to make them meet in the middle.

      Computers can replicate a lot of what the left hemisphere is doing. People who have had their right hemispheres deactivated, whether by transcranial stimulation, disease, or developmental problems, also confabulate. The LH hallucinates because it has no access to the RH, which is responsible for direct intuitive awareness of reality. The LH will hallucinate a justification after the fact and cannot be persuaded otherwise. The LH loves certainty and hates ambiguity.

      The LH is also responsible for dealing with things, i.e., inanimate objects, and is not good with living creatures like us. Those with pathologies of the RH are unaware of their deficits. Those with LH pathologies are perfectly aware of what they are missing. Technically minded people often approach thinking from a LH perspective and are not good at all with intuition, insight, creativity, and imagination. Those last two words were never EVER mentioned in 12 years of formal education in analytic philosophy. At most, Scott Alexander is describing aspects of the LH. Before a skill is mastered, it is a "problem." Once a problem is solved, a rule/algorithm can often be derived after the fact. The mistake is to think that the problem was solved using an algorithm in the first place.

      • Thanks, Richard. Your hemispheres remind me of the idea that autism and psychosis form one spectrum, with normalcy in the middle.

        Psychosis is the overly social mind that detects intent where there is none. ("It is raining, surely the universe is sending me a message with it. He looked at me – maybe he plans to kill me.")

        Autism is the inadequately social mind that does not detect intent well when it is there, tries to solve social interaction through algorithms and rules, and tends to treat people as objects. The psychotic type is more likely to be an artist or mystic, and the autistic type more likely to be an engineer.

        This explains a lot of things, for example why the New Atheist movement smells so much like autism. For the autistic, a huge intent behind the whole universe is just plain scary. They really prefer a mechanical universe, because that is what they are good at dealing with.

        Also, in this model men tend a little bit towards autism and women a little bit towards psychosis.

        Consider politics. Leftists talk about people. Conservatives also talk about people. Women like that, in both cases. Libertarians talk about systems, not people; hence very few women are libertarians.

      • Also, in my model it is really interesting how totally the autists used to win, until lately, when it got reversed. I mean, in the primitive tribal past, life was like high school: your outcomes depended on your relationship with the popular kids / powerful people.

        Then the autists came around and, over thousands of years, came up with ideas like: how about we resolve things through written laws and court procedures; how about we turn the state into a machine, the economy into a machine, science into a machine, everything into a machine. The result was that Victorian Britain was an autist's paradise, complete with its own autistic "superhero," Sherlock Holmes.

        Then the psychotics launched a counter-attack, first with Communism. The Politburo was very much like high school in the above sense. Wokeness is very much like high school in the above sense too: one has to accept what the popular kids say, regardless of whether it makes any sense. Wokeness is the psychotic's paradise. Everything depends on "reading the room," figuring out whether the black guy or the trans "woman" has more political clout in this particular room.

        I wonder whether Catholic Europe was a good balance between the two.

      • I would disagree with the rationalist types and would say that many of them hamstring their thinking. They have trained themselves to avoid thinking in terms of human beings and their motivations. But it’s not just a quirk of psychology that people naturally think in those terms. Thinking of human beings and their motivations is actually an accurate way to understand human interactions in one’s own life as well as society as a whole.

        Even though society is complex because of the interactions of many motivations at many levels, individual motivations are typically simple and easily comprehensible.

        And so, by thinking of AI as a self-sustaining system (which it isn’t) rather than in terms of the motivations of the human beings who make it, fund it, and market it, they are missing a big piece of the puzzle.

        As for the claim that the human mind does nothing but statistical prediction, I believe Alexander is mistaken. The rationalists and people like them appear to believe that they can take concepts from models which have been useful and then somehow use those concepts to stand outside of normal human thinking.

        Where did the models come from in the first place? Human thinking. So we’re right back to where we started from.

        It’s like the idea that a telescope somehow stands beyond human vision. Yes, it can amplify the eye, but what do we use to look in the telescope?

        Concepts taken from statistics or anywhere else can still be useful in understanding nature and human society. But trying to use those concepts and those alone in one’s understanding isn’t strengthening thinking; it’s artificially restricting it.

      • Agreed. I have to say that I am really impressed with the quality of the comments I have received on this topic, apart from one person who didn't seem to grasp the idea of "reasons" or of who has the burden of proof. I.e., once I've given an argument, he has to poke holes in it. He can't just say, "Skynet, here we come!"

    • If it ever becomes possible to replicate/simulate right-hemisphere abilities, my worldview would be pretty much destroyed, at which point you can expect a truckload of cognitive dissonance from me.

  3. The 'AGI' label is only relevant for marketing purposes. There is an argument to be made that, by several definitions, it is simply impossible. However, if the distinction between an enormous system with near-unlimited resources running neural nets and the like, and 'the real deal,' cannot be made, it hardly matters.

    What matters is that it does not become acceptable for these things to get used outside of their role as tools: assistance with database management, perhaps replacing defunct search engines to some extent. Discord wants to test out automated channel moderation, and unlike most people, I do not believe the road from there to automated parking-ticket handling is very long. Obviously, it won't stop at that point either.

  4. People are lauding this nonsense as the great fix of our times, which is not surprising at all, but we should take heed lest society get steered into a kind of 'Lex Fridman dream gone wild'. I am mostly referring to his inane claims that 'true evil is incompetent' and that 'stacking computing power will lead to actual intelligence, and even wisdom'. Yikes.

    • One response to the Halting Problem is that you just need to combine lots of little specialized programs to solve it. Roger Penrose pointed out that lots of little programs just become one big program when combined, and the Halting Problem already covers that case: it is not possible for one program to test whether all other programs are valid.
