WSJ July 31, 2021: Artificial Intelligence’s Big Chill
The WSJ article is worth reading, even if you have to pay for it. I don’t want to steal its thunder, so consider this a recommendation. I don’t get any compensation for it. I can’t link to the article itself, because it’s behind a paywall, but the Wikipedia article on the AI winter covers the definition nicely enough.
An “AI Winter” is what follows the irrational exuberance caused by overblown, sweeping claims implying that computers are going to somehow “be intelligent” like people. Eventually, people get sick of being misled (or actively lied to, but we’ll call it “mistakenly over-optimistic”) and the party ends.
So, to be clear, the marketing term “AI” is outrageously stupid.
What So-called AI Really Does
Under the umbrella of AI are two important areas:
- Statistical optimization (think big data search for coefficients)
- Expert systems of matching and unification (think Prolog or CLIPS)
Both of those are search problems. Fundamentally, AI attempts to find patterns and either identify something or predict something. Phrase it that way, and one has to wonder: where is the intelligence?
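To make the first bullet concrete, here is a minimal sketch of a coefficient search on toy data I invented for illustration: plain gradient descent, stepping through coefficient space to shrink the squared error. Nothing proprietary is assumed; it’s just numpy.

```python
# A toy "search for coefficients": gradient descent on squared error.
# All data here is invented purely for illustration.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))                 # 100 observations, 2 features
true_w = np.array([3.0, -1.5])                # coefficients we hope to recover
y = X @ true_w + 0.1 * rng.normal(size=100)   # noisy observations

w = np.zeros(2)                               # start the search at the origin
for _ in range(500):
    grad = 2 * X.T @ (X @ w - y) / len(y)     # gradient of mean squared error
    w -= 0.1 * grad                           # take one step downhill

print(w)  # close to [3.0, -1.5]: no insight, just iterated arithmetic
```

The “learning” is that loop: arithmetic repeated against a precisely defined objective.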
When all the talk is done, AI is implemented in code. The code either calculates numbers (following precisely defined steps) or calculates symbolically (following precisely defined steps). How do I make such a strong claim? What’s the alternative?
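The symbolic side is the same story. Here is a minimal sketch of Prolog-style matching, with facts and a rule I made up for illustration; the “?” names are variables that bind during the match:

```python
# A toy of the symbolic kind: Prolog-style pattern matching over facts.
# The facts and the rule are invented purely for illustration.
FACTS = [("parent", "tom", "bob"), ("parent", "bob", "ann")]

def match(pattern, fact, bindings):
    """Match one pattern against one fact; variables start with '?'."""
    bindings = dict(bindings)                 # don't mutate the caller's copy
    for p, f in zip(pattern, fact):
        if p.startswith("?"):                 # variable: bind it, or check it
            if bindings.setdefault(p, f) != f:
                return None
        elif p != f:                          # constant: must match exactly
            return None
    return bindings

# Rule: parent(?x, ?y) and parent(?y, ?z) implies grandparent(?x, ?z).
for f1 in FACTS:
    for f2 in FACTS:
        b = match(("parent", "?x", "?y"), f1, {})
        if b is not None:
            b = match(("parent", "?y", "?z"), f2, b)
        if b is not None:
            print("grandparent:", b["?x"], b["?z"])   # grandparent: tom ann
```

Numbers or symbols, the mechanism is the same: precisely defined steps, blindly followed.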
What AI Doesn’t Do At All
AI does not think. It is not intelligent. It does not enable a CPU or GPU to acquire self-awareness or pursue its own goals. It does not become self-aware and take over the world.
And it can’t understand why people are fixated on pictures of cats. Even after being trained to identify cats in many pictures, it can’t use what it learned to identify dogs. For that, it would have to start from scratch.
In short: AI does not do what people do. It also doesn’t do what ants do, or what amoebas do.
I keep having to give a simple example of why the fear that AI is going to take over the world is misplaced.
A Restatement of the Chinese Room
I originally read Searle’s description of the Chinese Room in his book. I’ll link to the Wikipedia article on it, since it summarizes the argument sufficiently. I just change it a bit …
I carry around index cards. Index cards and sharpies are great for doing analysis and explanations in person, in ways that even whiteboards aren’t: unless the whiteboard is digital, I can’t move things around on it, but I can always rearrange cards.
So, if I represent something on the cards, and do so for a very long time, perhaps years, and each card represents some concept (written on it in English) and some rules for how to produce a new card or replace an older one … how many cards does it take before the cards become self-aware?
All I did was define computation, carried out manually. Replace the person who is blindly following the rules with a CPU that is blindly following the rules, and how many “virtual” cards does it take before the machine becomes self-aware?
It’s the same question. Yet we don’t worry that index cards are going to become self-aware and take over the world.
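To make the card game literal, here is the same procedure as a short program. The cards and rules below are placeholders I invented; any rule set behaves identically:

```python
# The index-card procedure, made literal. The cards and rules here are
# invented placeholders; any rule set works the same way.
RULES = {
    "card: greeting received": "card: look up a reply",
    "card: look up a reply": "card: write the reply",
    "card: write the reply": "card: done",
}

card = "card: greeting received"
while card in RULES:          # the person (or CPU) just follows the next rule
    card = RULES[card]
    print(card)
# However many cards or rules we add, it is still only rule-following.
```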
Real Concerns About Computer Applications
This doesn’t mean there aren’t very valid concerns about what computer programs do with data.
The concerns, however, aren’t about the computer’s “intentions” (which don’t exist) but about the people who are using the computers and the data.
Whether a computer should be told to fire live weapons without human activation is a valid concern. Governments and military forces are facing these decisions, and there’s a real tension here: if the computer requires that a human be “in the loop” before it activates weapons, then defensive systems may not be able to respond fast enough.
The challenge of fully autonomous passenger cars is still ongoing. The vendors made spectacular claims, but we still don’t have commercial cars we can buy (or summon via app) that will take people to arbitrary locations on demand. And we’re not particularly close.
I’m a lot more bothered by statistical software being used to predict human behavior, such as granting or rejecting parole requests, because “the computer said so.” A statistical model is consistent, in that it will never be drunk or bored, but it can absolutely be biased if the data it was optimized on reflected bias. Worse, statistical systems can’t explain their reasoning. They are black boxes. Granted, there is some work toward making them capable of giving causes, and I look forward to that succeeding.
It’s a very common mistake to think that mastery of a model is mastery of reality. The simplest example is playing chess. Chess is trivial compared to war. Chess has a fixed arena (64 squares), full knowledge of all materials in that arena, and precise rules on interactions within it. If anyone thinks that’s representative of war, then they have a lot more problems than their view on AI technology.
All Models Are Wrong, Some Are Useful
The aphorism “all models are wrong, some are useful” is crucially important, because AI always works only on a model.
Chess is a model. Facial recognition builds the coefficients that make up a model. An expert system that explains payroll configuration follows a model of payroll configuration.
In fact, all human understanding (real intelligence) is based on models.
I challenge anyone to explain the atomic or chemical structure of mathematics. It’s a fantastic tool, but math is not reality. It’s a model for explaining what we believe we understand. It has tremendous predictive power, which is a big part of why physics is intensely useful. But theories such as General Relativity are still models. This is also why those theories can evolve over time.
It’s a terrible mistake to assume that a model is the reality. We need models, or we couldn’t function as living beings. But those models must change as our understanding grows. We call this learning, and science is one of the approaches used.
Science isn’t the only approach used. Most of our internal models (what Kahneman calls System 1 in his book Thinking, Fast and Slow), the ones that handle our most basic habits, are “what has worked.” There are no hypotheses or proofs in System 1. There is basically “what has worked over and over again.”
When a computer optimizes a set of data into a model, it doesn’t understand the data or the model. It does, however, rapidly apply the mathematical computation to new data and generate numerical responses. Some of those responses are useful.
Some of the responses are catastrophically dysfunctional.
When a recommendation algorithm recommends a song that isn’t a good match, it’s annoying.
When a recognition algorithm identifies a person as a monkey (a non-human), and some human (or some automation, such as a weapons system) treats them differently because of it, injuring or killing them, it’s much worse than annoying.
And since all models are wrong … it can and will happen. There will never be “perfect” anything.
Why I Don’t Claim to Do AI
My partner and I built amazing technology that is “the most current 1970s technology.” We built it on the foundations of Prolog and Lisp. It’s purely symbolic. We don’t even claim it’s AI. Because we built it, we know there’s no intelligence in it. We can trace and log it. It’s just code.
I can run the same libraries that anyone else can to get the various “machine learning” (statistical) algorithms. Look into them, and they are, amazingly, the same algorithms that have been taught in statistics since the ’60s. The math is the math. I don’t advise on those tools and techniques because I’m a software developer, not a statistician. I am grateful that people smarter in statistics than I am make the tools available, though.
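Here’s a quick way to see that the math is the math (toy data invented here; assumes numpy and scikit-learn are installed): the normal-equations solution from a 1960s statistics course and a modern library’s linear regression agree on the coefficients.

```python
# "The math is the math": the textbook least-squares solution and a modern
# "machine learning" library agree. Toy data, for illustration only.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
y = X @ np.array([2.0, 0.5, -1.0]) + 0.1 * rng.normal(size=200)

# Classic statistics: solve the normal equations (X'X) b = X'y.
b_textbook = np.linalg.solve(X.T @ X, X.T @ y)

# Modern library call, same mathematics underneath.
b_library = LinearRegression(fit_intercept=False).fit(X, y).coef_

print(np.allclose(b_textbook, b_library))    # True
```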
Even when I do Prolog and Lisp work, symbolic work, I don’t claim AI. I claim, instead, that I can solve problems, often using a computer as part of the solution.
There are real researchers working on extending our understanding of how to represent knowledge, how to make coefficients capture nuances over time or space, etc. I’m perfectly happy to let them use the term AI if they want, so long as they don’t warn me that the computers are going to become self-aware and take over the world.
I’m a practitioner who builds software that helps real people achieve their goals.
That’s not going to make a machine somehow magically self-aware (so-called “strong AI”). But my tools allow me to express intentions and represent knowledge in powerful and useful ways.
And my tools will work even when (not if) the next “AI Winter” hits. Because my clients will be served regardless of the marketing hype.
Keep the Light,
Otter
Brian