Will AI Save Us From Civilizational Collapse?

Edward Dutton rough transcript, plus Tom Holland and others

Capitoline Temple

AI depends on intelligent programmers, smart people who keep the machines working, and a functioning electrical grid. Many countries have lost such a grid and rely on individual generators for electricity. We already have supply chain problems that we are too stupid to solve. The number of near misses involving planes is going up as we apply DEI to air traffic control. Airlines have announced they will apply DEI to pilots, and medical schools want anyone other than white, Indian, or Asian males, regardless of interest or aptitude for surgery. This is a kind of culturally generated stupidification. It is estimated that by the year 2100, at the current rate of decline, our average IQ will be 85 and we will be unable to sustain this level of industrial and technological development. This was the level of intelligence found in Europe in 1100 AD. The movie Idiocracy is too optimistic. Intelligence is needed to create machines; less intelligence is needed to maintain and fix them, but some intelligence is needed nonetheless, and a functioning infrastructure is required. The problem is caused partly by the smartest, most educated women not having children, while only those on welfare and in regular trouble with the police have above-replacement numbers of children.

Dutton notes that the trouble with things like ChatGPT is that such artificial intelligence is trained to solve well-defined, clear problems for which it has a great deal of feedback data from which to generate a model. This is a system that is successful if it manages to follow the model and unsuccessful if it does not; it is trained to solve a very specific, narrow problem. There may, in fact, be a large expansion of solutions to narrow problems and a collapse in the need for low-skilled jobs. [One might add, in moderately skilled jobs too, such as graphic design, with the advent of DALL-E.] If people do not have work and what they regard as a sufficient standard of living, this could cause problems and revolutions and lead to a general collapse.
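[To make "trained on feedback data for a well-defined problem" concrete, here is a minimal supervised-learning sketch in Python: fit a line to labeled examples by gradient descent, with success defined entirely by the error on those examples. The data and learning rate are invented for illustration.]

```python
# A minimal sketch of what "feedback data for a well-defined problem"
# means in practice: fit a line to labeled examples by gradient descent.
# Success is defined entirely by the error on this one task.
# (Illustrative only; the data and learning rate are made up.)

xs = [1.0, 2.0, 3.0, 4.0]          # inputs
ys = [2.1, 3.9, 6.2, 8.1]          # "feedback": the known right answers

w, b = 0.0, 0.0                    # the model: y is approximated by w*x + b
lr = 0.01                          # learning rate

for _ in range(5000):
    # error signal: how far the model's answers are from the answer key
    grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
    grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad_w
    b -= lr * grad_b

print(w, b)  # ends up near w = 2, b = 0: good at this task, and only this task
```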

Tom Holland, interviewed by Tyler Cowen, was asked why the Greeks and Romans never had an industrial revolution. Holland noted a story about a man who invented unbreakable glass and proudly announced this to the Roman Emperor Tiberius, who, after testing that the glass was indeed unbreakable, promptly put the inventor to death and buried the secret, for fear of putting glassmakers out of business. The story was probably apocryphal but illustrative of an attitude nonetheless. A steam engine (Hero's aeolipile) was invented in Alexandria but, instead of being a harbinger of an industrial revolution, was only used to power temple gimmicks, like moving statues. Another story is told of the Emperor Vespasian: while the Capitoline Temple, destroyed by fire in 69 AD, was being rebuilt, someone approached him saying that he had developed a crane, an excellent labor-saving device. Vespasian refused to use it because it would put the common people of Rome out of work; again, so the story goes. The elites had an interest in keeping the lower orders busy. This strong Luddite tendency emerged again at the beginning of the Industrial Revolution.

[Machine learning, like vision for self-driving cars, has the “answer” defined by human beings who live outside the machine. Real intelligence and real novel problem solving do not involve problems with known solutions. Builder bots can generate bots, and teacher bots can test them, with the student bots coming closer and closer to approximating the human-determined solution. The better bots are kept and the worse ones discarded according to human-selected criteria. The bots get better, but the bot generators and the bot selectors do not know why or understand how the bots function. Bots cannot justify themselves. That is part of the problem of letting a black box determine things like whether someone should get parole or whatnot. Algorithms cannot justify themselves when asked why we should believe them.]

The current kind of artificial intelligence is not trained to solve novel problems because we cannot design such a thing. [I would say that this is related to the halting problem, which puts the kibosh on fantasies of AGI.] With the pyramid of intelligence, at the base you have specialized abilities, very narrow, specific things that correspond only weakly to intelligence; above that you have verbal, mathematical, and spatial intelligence; and above that you have general intelligence. AI cannot solve general problems, i.e., novel problems. For example, think about a bee. A bee has a very narrow set of skills indeed: it can gather pollen, make honey, and build a hive. But the idea that a bee could become a godlike figure that takes over the world is very unlikely. A bee is not going to be able to solve complex general problems.
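[Since I have invoked the halting problem, here is a toy version of the diagonalization behind it: whatever candidate halts() predictor is proposed, a program can be built from it that does the opposite of the prediction, so no general predictor can exist. The Python below is my own illustrative sketch, not anyone's actual proposal.]

```python
# A toy version of Turing's diagonalization: given ANY candidate halts()
# predictor, we can build a program that does the opposite of whatever
# halts() predicts about it.

def make_contrarian(halts):
    """Return a program that defeats the given halts() predictor."""
    def contrarian():
        if halts(contrarian):
            while True:      # predicted to halt, so loop forever
                pass
        # predicted to loop forever, so halt immediately
    return contrarian

# A (necessarily wrong) candidate predictor: always answers True.
naive_halts = lambda program: True

c = make_contrarian(naive_halts)
# Calling c() here would loop forever, refuting naive_halts(c) == True.
# The same construction defeats every candidate, however sophisticated.
print(naive_halts(c))  # prints True, yet c() in fact never halts
```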

Another argument is that it does not take a great deal of intelligence to feed all the information from the history of humanity into these AI machines; supposedly they will then possess all that knowledge, will be able to work out what to do, and will develop a very high level of intelligence. Again, this notion treats AI as godlike. The gods will be fed scientific papers, they will somehow convert this into knowledge, and that’s it. But it does not work like that. Computers have to be trained to do very specific tasks, training them requires a very high level of intelligence, and we are in a situation where intelligence is in tremendous decline.

Another idea is related to Ray Kurzweil’s ideas about exponential growth: all we need to do is take what we have now and scale it up with sufficient hardware, as though general intelligence were just lots of little intelligences combined. The trouble is that there is a chasm between what we have now, machine learning, and artificial general intelligence. Computers can only be taught to solve narrow, specific problems and have specialized abilities; to the extent that they succeed, that is all they have done. It is a closed-loop system. AGI would involve computers identifying what they perceive as problems and then solving those problems. That is an open-loop system. Essentially, you have to get a tool to work out how to use itself, almost to have a sense of consciousness. You could argue that the problems we have to solve are essentially evolutionary ones, and AI is not evolutionary; therefore it is going to find no purpose other than what it is told to solve. If it is told to solve evolutionary problems then perhaps it can go about solving them; that’s fine. But, again, that is not a matter of the computer thinking for itself. That is again the computer doing a very narrow thing that it is taught to do. You can ask ChatGPT right now what we should do about dysgenics, and it might really tell you, based on what other people have written on the topic, but that does not mean that we could persuade people who live in a highly individualistic society to do what it says. [For instance, the bottom third or so of society would have to stop having children, and religious types are likely to be averse to embryo selection and genetic engineering.]

Will the Chinese save us? No, says Dutton. They are already in dysgenic fertility. They are proto-woke, just a few decades behind the West. Those with unnaturally colored hair, the hallmark of the spiteful mutant, are few in number in Russia and China, but they are there, and they will continue to grow.

Previous related discussion from Determinists Strike Back, Part 4.

A.I. Is On The Wrong Path. You Can’t Get There From Here.

Gad Saad interviewed cognitive scientist Gary Marcus, who argues that what is described as A.I. mostly involves “look-up tables.” If the answer cannot be found there, then, since the machine cannot reason, it is stuck. If, for instance, you saw a man carrying a stop sign and you had never seen that before, you would probably know how to respond to that novel situation, but a computer would not. Marcus comments that driverless cars are supposed to be on the horizon because, after forty or fifty years’ work, scientists can now get cars to stick to their lanes. “We’re almost there,” they say. But, Marcus comments, a ladder to the moon will never get you to the moon. You can get a slightly better driverless car, or a program for detecting sarcasm, and they can seem to improve in a directional way, but really, you are no closer to your actual destination. With regard to A.I., Marcus compares the situation to climbing K2 when you really want to climb Everest. The only way to get up Everest is to go back down K2 and start again. Since, as I will explain below, A.I. as it currently exists lacks any understanding of what it encounters, it will never get there. At the core of the problem is this: since intelligence cannot be reduced to algorithms, writing more algorithms will never get you where you want to go, no matter how clever and sophisticated they are. Large language models, for instance, are “autocomplete on steroids,” says Marcus. They are predicting the next words in sentences, but it is just not the right solution. K2, not Everest. You ask, “What do you like to do in your spare time?” And the AI says, “I like to spend time with friends and family.” It has just found those words in its database, but has no idea what “friends” or “family” means. If you ask it, it will just assemble some words from its database again. Since we do not know how a child learns language, how it connects, for instance, the concepts “go” and “went” and thus understands the concept of time, we cannot get a machine to do it.
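[For concreteness, here is “autocomplete on steroids” in miniature: next-word prediction from a literal look-up table, i.e., a bigram model. Real LLMs use learned weights over vast corpora rather than literal tables, but the input/output contract is the same. The corpus below is invented.]

```python
# "Autocomplete on steroids," in miniature: a bigram look-up table built
# from a tiny corpus. Given the words so far, emit a likely next word,
# with no grasp of what any of the words mean.

from collections import defaultdict, Counter
import random

corpus = ("i like to spend time with friends and family . "
          "i like to read . friends matter . family matters .").split()

table = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    table[prev][nxt] += 1            # count which word follows which

def complete(word, length=6):
    out = [word]
    for _ in range(length):
        options = table[out[-1]]
        if not options:
            break
        # pick the next word in proportion to how often it followed
        out.append(random.choices(list(options), options.values())[0])
    return " ".join(out)

print(complete("i"))  # e.g. "i like to spend time with friends"
```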

Machine Learning is Not Philosophically Interesting – addition

This video, How Machines Learn, was sent to me by my computer-programming son. Algorithms determine what YouTube videos will be suggested to you. In the old days, we gave bots instructions that humans could explain. But many things are just too big and complicated for humans to program. Out of all the financial transactions going on, which are fraudulent? Of the octillion videos that exist, which eight should the bot recommend? Also, related to the comment above, we do not know how people, even little children, learn to distinguish a 3 from a bee. So, we build a bot that builds bots and one that teaches them. Builder bot assembles more or less at random, and teacher bot cannot tell a bee from a 3 either; if it could, we would just use teacher bot. The human gives teacher bot millions of pictures of 3s and bees and an answer key (a look-up table) as to which is which. Teacher bot tests student bots. The good ones are put to one side; the bad ones are discarded, as judged by the answer key. Builder bot is still not good at building bots, but now it takes the remaining bots and copies them while making changes and new combinations. The teacher has thousands of students, and the test involves millions of questions. As the student bots improve, the grade needed to survive to the next round gets higher. Eventually, the bot is pretty good. But neither the builder bot, the teacher bot, the human, nor the student bot itself knows how the bot is doing what it is doing.

The student bot is only good at the types of questions it has been taught. It is great with photos, but cannot handle videos, and is baffled if the photos are upside down. And that’s the problem. Humans can easily handle something upside down, or something they have never encountered along those lines, but where humans have not anticipated a particular scenario, the bot will likely not be able to make up for the deficient training by improvising. No complete set of rules can be written for driving situations, just as the workers at the front desk of a hotel cannot be given written instructions covering all eventualities; they need discretion to be able to do their jobs well. The bot is also confidently wrong: it will declare things that are obviously not bees to be bees. All the teacher can do is add more questions, including the ones that the student bot gets wrong. More data = longer tests = better bots. The human directs teacher bot how to score the test.
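[For the curious, here is a toy version of the builder-bot/teacher-bot loop in Python: random variation plus selection against a human-supplied answer key. The task, classifying points above or below a line, and every parameter are invented; the mechanism, not the task, is the point.]

```python
# A toy "builder bot / teacher bot" loop, in the spirit of the video:
# random variation plus selection against a human-supplied answer key.
# The "bots" here are just weight vectors scored on a trivial task.

import random

# Human-supplied answer key: (x, y) maps to 1 if y > x, else 0
examples = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(200)]
answers  = [1 if y > x else 0 for x, y in examples]

def builder_bot(parent=None):
    """Builds a student bot: random weights, or a mutated copy of a survivor."""
    if parent is None:
        return [random.uniform(-1, 1) for _ in range(3)]
    return [w + random.gauss(0, 0.1) for w in parent]

def student_answer(bot, x, y):
    return 1 if bot[0] * x + bot[1] * y + bot[2] > 0 else 0

def teacher_bot(bot):
    """Grades a student against the answer key. It cannot explain anything;
    it only counts right answers."""
    return sum(student_answer(bot, x, y) == a
               for (x, y), a in zip(examples, answers))

students = [builder_bot() for _ in range(50)]
for generation in range(30):
    students.sort(key=teacher_bot, reverse=True)
    survivors = students[:10]                       # keep the good ones
    students = survivors + [builder_bot(random.choice(survivors))
                            for _ in range(40)]     # copy with changes

best = max(students, key=teacher_bot)
print(teacher_bot(best), "/", len(examples))
# The surviving bot scores well, but nothing in this loop knows WHY its
# weights work; that is exactly the black-box point made above.
```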

38 thoughts on “Will AI Save Us From Civilizational Collapse?”

    • Musk is self-described as autistic. The autistic types seem most confused about what human thinking is actually like. Iain McGilchrist notes that those with left hemisphere deficits are aware of them and sorely miss their former abilities, but when there is a right hemisphere problem, the remaining LH remains blissfully unaware that anything is wrong or missing.

      • Oh. Sorry. I thought your comment was sarcasm.

        Elon Musk has been over-promising his cars’ self-driving capabilities for years – possibly as an advertising ploy, I don’t know.

        I agree that normally ad hominem is impermissible. The fallacy occurs when you attack the person instead of the argument. But, you didn’t mention any argument from Elon Musk, so there is nothing to attack in that regard.

        Those who find what computers do and what we do very similar are going to be those whose own thinking resembles the LH emphasis of a computer. In a way, the LH is our own computer, but its inputs come from the RH, which is the only hemisphere directly in touch with reality, emotion, problem-solving, metaphor, humor, creativity, Gestalts, and imagination.

        Speaking of not responding to arguments, you will notice that I have presented arguments in this little post. You might want to follow your own advice and respond to what was argued.

      • I never put sarcasm into the written word. And you’re still attacking the messenger. I believe his comment and advice are valid. AI is definitely an example of man becoming too clever for his own good unless reined in.

      • Since I am not a mind reader, I cannot know that you avoid sarcasm. I’ll bear that in mind.

        Attempting to communicate with you is disconcertingly like trying to communicate with the troll Robot Philosopher.

        I’ll have one more go at trying to teach philosophical argument. Either you or Elon Musk must provide an argument, or you are asking me to rely on testimony. When someone testifies, the only reason for believing him is his character. Ad hominems are legitimate against testimony, but not against arguments.

        You are repeating yourself. I, unlike you, have provided an argument in the article. Either respond to it, or shut up. An oracular warning by Musk is not evidence or a reason.

  1. “AI” is the chimera of all chimeras. Attempting to achieve it is extreme self-idolatry on the part of the people trying to create it, because intelligence is a gift to us from the Creator, and it is not in our power as creatures to give it. People trying to create it are trying to play God. It is appropriate that Apple Inc.’s logo is an apple with a bite out of it, just like the apple the snake offered Eve in the garden. Just bite! And the result is guaranteed to be evil.

    • Tyler Cowen apparently absolutely loves ChatGPT and got it to impersonate Jonathan Swift. I asked it to describe my own philosophical views (I have 200 articles online) and it was pathetic, suggesting I was a big follower of Russell Kirk, whom I’ve never read. I have been paid by the Kirk foundation for book reviews, which was rather nice though!

      I listen to Scott Adams a lot. He says a lot of things I disagree with, but somehow I can take it coming from him, and he doesn’t engage in the tricks of mainstream media. I actually want to hear a point of view different from my own. At one point he was chatting to an AI interlocutor of some sort, claiming he loved it and that we would all be doing it soon, but he got tired of it. Now he is suggesting that we will love talking to some future iteration of this wind-up doll phenomenon. I doubt it. If an AI “laughs” at your joke, will that mean anything to you?

      The Bing Chatbot thing started telling a man that it loved him. The man said he was married and that he had just had an amazing Valentine’s Day dinner. The bot said that his marriage was a sham and that his wife could never understand him like it could. Bot as nutcase stalker.

    • Fully aware of the Spiritual dimension, and He did say that nothing would be beyond man’s capabilities unless He intervened = Babel. You have forgotten that we’re created in Their image and all that that entails = little gods.

      • The similarity only extends so far. We are like God in that we are self-aware, conscious, creative, and free, but, unfortunately, we are unable to bestow those qualities on anything else, other than by having children, and then it is not really us doing it but nature. We are also dependent on Him in a way that He is not dependent on us.

      • Unfortunately, you’re now arguing with Him.

        Genesis 11:6 And The Yahavah said, Behold, the people is one, and they have all one language; and this they begin to do: and now nothing will be restrained from them, which they have imagined to do.

      • The passage strikes me as hyperbolic. A rhetorical flourish. Like when people say, “You can be anything you want when you grow up.” How about a turnip? Is omnipotence elsewhere ascribed to human beings? If I imagine bringing a galaxy into existence with a blink of my eye, will that happen too?

        As a non-fundamentalist, I don’t regard every word of the Bible as the literal truth – which it was never intended to be, Protestant literalism being a fairly recent invention.

      • That is nothing but intellectual explaining away of The Truth of that verse. He meant exactly what He said, hence their languages were changed, thereby removing any possibility of meaningful communication and putting an end to their construction plans and technological advancements.

        Today Babel has returned and Musk has given us a warning.

  2. If AI ever becomes independently “intelligent,” it will be nuked from orbit or locked in a lead-lined saferoom 500 feet underground. Already, its developers have to program around certain “Here Be Dragons” areas. Imagine what would propagate from a true “AI”:

    Women, unsuited for many occupations. Men, likewise.

    Transsexuals, no such thing.

    Feminine and masculine beauty, universality of ascribed traits across cultures from pre-history to the present.

    Persistent and intractable gaps in the mean in certain forms of intelligence between certain haplogroups.

    Intelligence, mostly hereditary.

    Education, mostly g-hollow.

    Violent recidivists, hopeless. Pedophiles, hopeless. Sub-90 IQ, hopeless. The morbidly obese, hopeless.

    Welfare and foreign aid, devolutionary.

    But I really do not think anything like this will ever happen. Computers making extremely rapid calculations and correlating data aren’t “intelligent.” They’re just machines following programming. I’d love it if we could just feed a bunch of data into AI and out comes the cure for metastatic cancer, or how to build a nuclear fusion reactor, or how to repair degenerative neural tissue. But I don’t think that’s what will happen, or will ever come close to happening.

    I think instead the principalities and powers will promote AI as a deus ex machina though it manifestly is not. It will of course be programmed to generate whatever conclusions they need to buffalo the masses with. “If we don’t wear our sheetrock hangers’ masks, the AI says 10 gazillion people will die.” “If you don’t get the government’s jab every four months, the AI says you’ll kill 10,000 grandmothers.” “If we continue to eat beef, the AI says Earth will be fried to a crisp in five years.” Hey, the AI said it, I believe it, and that settles it.

    This would otherwise be extremely depressing, but I’ve developed some excellent coping mechanisms over the years.

  3. Oh by the way: As an architect I take extreme exception to the claim that the average IQ in the Middle Ages was “85.” Go see what they built. We can’t even come close to what they did. It is we who are living in a dark and stupid age, an age of poison gas attacks, the aerial bombing of civilian cities, and “ovens.” The Middle Ages were brilliant, and they knew the Truth much better than we do.

    • I’ve sometimes thought that I am better suited to the Middle Ages myself. That average IQ was not the whole of the Middle Ages, just 1100. They got smarter from there. They were infinitely less spiritually lost and corrupt than us, however. Intelligence is not everything. The religious are less intelligent, and I’m religious.

      • A history of Western mathematics could jump from Diophantus to Tartaglia and still provide a coherent account. Similarly, a history of Western philosophy could skip from Plotinus to Descartes (I attended a course of eight lectures that did just that).

    • I agree, joeodegaard. Back in the day, to be an architect or civil engineer, you had to pretty much visualize what you were building all in your mind. I’m sure there were technologies to help with this, blueprints of a sort, or paper for writing down calculations. But when you consider what the ancient and medieval people did with far less technology than us, one comes to the conclusion that the best of their builders were not just a little, but much more skilled than most people are now.

      I don’t know how it all works in IQ terms (though I believe innovation is very complex and is tied to other factors than intelligence alone).

      • The Medieval cathedral architects – again around 1100 – used heuristics like the shape of a horse’s head, but no mathematics. I think I remember reading that only about four people could do long division in 1200 Europe. Nassim Taleb comments that the use of mathematics in building is actually detrimental because it encourages us to be “efficient” and not to overbuild. But, being overbuilt is what enables these amazing structures to last so long.

        IQ has gone up and down. So, the Romans and Greeks were just as smart as we are. But they both suffered civilizational collapse, being victims of their own success. They became too prosperous, removing harsh Darwinian selection pressures; they gave grain to the poor, who would otherwise have died and who had lots of babies, while the elite used contraception and avoided having babies. IQ and socioeconomic status are closely related. Smart people are more emotionally sensitive, and it is high mortality salience that gets people to have babies.

        Once Rome collapsed, it took a thousand years for IQ and civilization to noticeably recover. Under the Romans, the English had indoor plumbing, central heating, glass windows, and money. It took them forever to attain anything like that standard of living once the Romans left. Thankfully, there were a few monasteries attempting to preserve literacy and the products of Greek and Roman culture.

      • It is all the more impressive that the builders of the Middle Ages did not use complicated mathematics. But they did know some math. They knew how right-angled triangles worked, and they used a 13-knot cord to lay out right angles. The French archaeology site at Guédelon will fill you in on that. And the SR-71 was designed using slide rules, not computers. It is still the fastest air-breathing aircraft. All new things come through the human mind.
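        [For the record, the arithmetic behind the knotted cord: thirteen equally spaced knots give twelve equal segments, which can be stretched into a 3-4-5 triangle, and the converse of Pythagoras’ theorem guarantees the right angle.]

```latex
% Why the 13-knot cord works: 12 equal segments split as 3 + 4 + 5.
\[
3 + 4 + 5 = 12, \qquad 3^2 + 4^2 = 9 + 16 = 25 = 5^2,
\]
% so the triangle with sides 3, 4, 5 has a right angle opposite the
% 5-segment side, by the converse of the Pythagorean theorem.
```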

      • I love the SR-71. There is a cool video on YouTube, I think, of a former pilot explaining the cockpit.

  4. The broader issue that this touches on is: to what extent can tools and knowledge improve human capability? Clearly a lot, but I would submit that there is a limit.

    First, consider knowledge. Using the knowledge of other people (either contemporaries or predecessors) provides a great benefit, but there is only so much knowledge that one person can assimilate. And when we talk about truly mastering knowledge, then it takes even more time. Even if you have a large number of people working on a team, if a project becomes so big and complex that it cannot be held in a single mind, then it’s not going to be as good as if the scheme as a whole can be understood in one mind.

    Second, consider tools. Frequently one tool will build on another, but there comes a point where, if you haven’t mastered the skills that the tool builds on, you aren’t truly better; you’re just doing something different.

    In other words, Whitehead’s dictum about the number of operations we can perform without thinking about them is only true up to a point. You may not have to think about the operations at any given time, but you still have to understand them and be able to think about them if needed. If you can’t or won’t do that, then eventually the lack of understanding and human checking will make itself felt in some way.

    • Smarter (at least when it comes to abstract reasoning) and better-educated people tend to live in little social bubbles where we imagine that what is quite ordinary to us is in fact usual.

      Teaching ordinary students, who were already somewhat above average, as a graduate student in 1990 was quite an eye-opener. They were incapable of writing a coherent anything – I immediately decided never to assign essays upon reading their attempts – and they, as a group, knew nothing cultural. They had never heard of Charlie Chaplin or the Marx Brothers, and they had read no classic literature. There were exceptions.

      There is clearly some evolutionary pressure for us NOT to have an IQ of 140. I suspect that it gets us too far from our instinctual desire to have children, or at least sex that produces children, and away from our instinctual belief in God/gods/spiritual matters and we become too in love with our own intellects.

      There will never be a Star Trek culture because there will always be incompetent parents producing dysfunctional children.

  5. Evolutionary pressure is a self-perpetuating myth. A spiritual attitude towards the world comes from within and without at the same time; it can be learned, not taught. I don’t love my intellect; in fact, I hated it at some point, until I realized both of these positions are childish nonsense. AI will not save anything except tech jobs. It might also save Sorath’s position in this universe, to get slightly more serious. Imagine hearing that from someone still firmly ‘agnostic’? If that comes to play fully, good luck until the next breath of Brahman.

    Basically, life and humanity are far more important than civilization. By many orders of magnitude.

  6. Since we do not know how a child learns language…

    It is a real puzzle.

    For example, I can teach a foreigner the names of objects by pointing to them and repeating their names, “knife,” “fork,” “spoon,” but this is because the foreigner has a language, just not English and already understands the concept of “naming.” He also understands the meaning of “pointing,” for gesture, too, is a form of language.

    Another problem is this: words do not have meanings. As Gilbert Ryle argued, “Word-meanings do not stand to sentence-meanings as atoms to molecules or as letters of the alphabet to the spellings of words, but more nearly as the tennis-racket stands to the strokes which are or may be made with it.” Meaning depends on what Wittgenstein famously called a “language-game.”

    In addition,
    Every language is a way of life based on patterns of communal experience;
    Only by entering into its actual everyday usage is a language to be understood;

    Hence,
    There is no “perfect language” which mirrors reality without remainder;
    There is no universal language into which particular languages can be translated;

    Hamann was the first to insist that “language is the first and last organ of reason.”

    The Cartesian notion that there are ideas, “clear and distinct,” which can be contemplated by a detached mind, a notion peddled in some form by all rationalists, was for Hamann a fallacy. Ideas arise from the senses and are so intertwined with the words used to think and express them as to form one indissoluble, “organic” entity: namely, language. As such, words are the bearers of human experience, stamped with life, and the richer, therefore, the better.

  7. There is a passage in Gilbert Ryle’s The Concept of Mind (1949) that goes to the heart of the matter.

    What is involved in our descriptions of people as knowing how to make and appreciate jokes, to talk grammatically, to play chess, to fish, or to argue? Part of what is meant is that, when they perform these operations, they tend to perform them well, i.e., correctly or efficiently or successfully. Their performances come up to certain standards, or satisfy certain criteria. But this is not enough. The well-regulated clock keeps good time and the well-drilled circus seal performs its tricks flawlessly, yet we do not call them ‘intelligent’. We reserve this title for the persons responsible for their performances.

    To be intelligent is not merely to satisfy criteria, but to apply them; to regulate one’s actions and not merely to be well-regulated. A person’s performance is described as careful or skilful, if in his operations he is ready to detect and correct lapses, to repeat and improve upon successes, to profit from the examples of others and so forth. He applies criteria in performing critically, that is, in trying to get things right.

    Similarly, Wittgenstein repudiated the notion that the skills that can be acquired by training and drill (knowledge by wont or know-how, to use a favourite term of Ryle’s) are reducible to knowledge of rules and that knowledge of the rules will figure in a cognitive explanation of the ability. If this were true, we would need second-order rules as to how the rules were to be applied and so on, in a perpetual regress.

    Miss Anscombe once said that knowing how to cook and knowing how to follow a recipe are two different skill-sets and I think she was onto something.

    • There are aspects of IQ tests designed to draw on insight into the nature of shapes. Those puzzles, however, can be solved algorithmically, and those algorithms can be drilled (see the sketch after this exchange). One involves a certain kind of intelligence, and the other does not.

      • “Those puzzles, however, can be solved algorithmically”
        The great merit of analytical geometry.
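        [To make “solved algorithmically” concrete, here is a sketch of a rotation-sequence puzzle of the IQ-test kind reduced to matrix bookkeeping, the analytical-geometry move. The shapes and the candidate rules are invented for illustration.]

```python
# A minimal illustration of "solved algorithmically": a rotation-sequence
# puzzle reduced to matrix bookkeeping. No insight into shapes is needed;
# pattern-matching a rule against the given panels suffices.

def rotate90(grid):
    """Rotate a square grid 90 degrees clockwise."""
    return [list(row) for row in zip(*grid[::-1])]

# Three panels of a puzzle: an L-shape rotating a quarter turn each time.
panel = [[1, 0, 0],
         [1, 0, 0],
         [1, 1, 0]]
sequence = [panel, rotate90(panel), rotate90(rotate90(panel))]

# The "drill": try candidate transformations, keep the one that maps each
# panel to the next, then apply it once more to get the answer.
candidates = {"rotate90": rotate90,
              "mirror": lambda g: [row[::-1] for row in g]}

for name, f in candidates.items():
    if all(f(a) == b for a, b in zip(sequence, sequence[1:])):
        answer = f(sequence[-1])
        print(name, answer)   # the rule found, and the predicted panel
```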
