Prensky on ChatGPT: THINKING—FAST, SLOW, and now, STATISTICALLY
Humans now have a new way of “thinking.” Its impact is profound.
I’M A BIG FAN of Daniel Kahneman’s book Thinking, Fast and Slow. It really helps to finally know, after 250,000 years of being human, that we have at least two thinking systems: one based on heuristics, and one based on more critical analysis. The lesson (at least for me) is never to use only one of them, but always to try, if possible, to combine the powers (and limitations) of both.
Thinking, Fast and Slow came out in 2011. Starting many years before that, however, humans—including Albert Einstein, no slouch when it comes to thinking—have been telling us, in various ways, that “We cannot solve the problems we face with the kinds of thinking we currently have.” Some have been trying to invent “new kinds” of thinking. A company I invest in called Pattern says it has invented “new mathematics” for explaining AI reasoning.
LLMs
But now, most of us actually have a “new way” of thinking. Large Language Models—a recently released branch of AI that includes ChatGPT—seem to me to represent a third kind of thinking, one that is unlike the others. It is thinking that is purely statistical and algorithmic. It does not happen entirely “in our heads,” but as long as we can access the web, we all have it.
So the majority of humans now have three thinking types. In addition to fast (heuristic/“rule-of-thumb”) thinking and slow (analytical/critical) thinking, we now have statistical (algorithmic) thinking. Seeing it as a new type of thinking helps explain its incredibly rapid spread to billions. (Yet another type of thinking may be imaginative fantasizing, and there may be more.)
Thinking? Really?
I know some balk at calling what the new LLMs do “thinking.” But what those processes produce is remarkably similar—even though it is done through a new and different method, developed over decades rather than evolved over millennia. Give ChatGPT (or a sibling) something to ponder—aka a “prompt”—and it instantly creates a cogent answer, one that is slightly different each time you ask. In producing that answer (or “thought”), it does not just retrieve pre-written (and therefore pre-thought-about) information verbatim, but puts words together in new, statistically based ways to produce thinking results that we couldn’t get before. Importantly, they are not just results that we couldn’t get as fast, but results that we couldn’t previously get EVER by our other means. The reason for this—and precisely what is new, different, and perhaps most remarkable about this new type of thinking—is that when we use it we incorporate into our individual thinking all—or just about all—the world’s written knowledge. With the other thinking types, we use only what was in our heads. (Putting more into our heads is what often makes, or made, wide and deep readers “better” thinkers. This is why we encourage reading. It is why Ph.D. candidates, who often become our professors, are expected to have read almost everything in their field.)
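To make “statistical” concrete, here is a minimal toy sketch, in Python, of the one operation an LLM repeats over and over: sampling the next word from a probability distribution. The words and probabilities below are invented purely for illustration (a real model computes such a distribution over tens of thousands of tokens at every step), but the sketch shows why the same prompt can yield a slightly different answer each time you ask.

```python
import random

# Invented toy distribution: plausible next words after the prompt
# "The capital of France is", with made-up probabilities.
next_word_probs = {
    "Paris": 0.90,
    "located": 0.05,
    "a": 0.03,
    "beautiful": 0.02,
}

def sample_next_word(probs: dict) -> str:
    """Pick the next word at random, weighted by its probability."""
    return random.choices(list(probs), weights=list(probs.values()), k=1)[0]

# Run it a few times: usually "Paris", occasionally something else.
# An LLM chains thousands of such weighted draws, which is why its
# answers are generated afresh each time, not retrieved.
for _ in range(5):
    print(sample_next_word(next_word_probs))
```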
But…?
The full processes by which the LLMs reach their “thoughts” and conclusions remain hidden from us. Their “thinking”—really our thinking, because they do it on the fly, not just for us but with us—takes place in a hidden “black box” of software that we cannot see or, as yet, understand. But this is not very different from the other types of thinking that happen inside the human brain. The precise processes our brain-based thinking uses are also mostly still hidden from us, despite oft-made claims that we can “explain” our reasoning. For example, we still have no firm idea of how a brain produces a single “I think” about a complex situation.
Another issue is that LLMs also sometimes invent things without telling us that they are doing so (some call this “hallucinating”). Of course, that is also something human brains do. Humans are not especially adept at separating fact from fiction, and we often fail when we try. Much of our recall is, in fact, explanatory “stories” we make up. We still lack a reliable means of lie detection, other than fact-checking. We can, of course—and should when using an LLM—always ask it to check whether it is hallucinating. Perhaps a required (or built-in) follow-up prompt should always be: “Are you telling the truth, the whole truth, and nothing but the truth?” Or perhaps all LLMs should have an “under oath” mode that can be turned on when asking about existing things like legal cases.
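As a sketch of what such an “under oath” mode might look like in practice, the pattern is simply to append an automatic verification turn to every exchange. The example below assumes OpenAI’s Python SDK; the model name and the wording of the verification prompt are illustrative choices, not a prescription.

```python
# A minimal "under oath" sketch, assuming OpenAI's Python SDK
# (pip install openai) and a configured API key.
from openai import OpenAI

client = OpenAI()

VERIFY = ("Review your previous answer. For each factual claim, say whether "
          "you are certain, unsure, or may have invented it, and correct "
          "anything you cannot stand behind.")

def chat(history, model="gpt-4o-mini"):
    """Send the running conversation to the model and return its reply."""
    resp = client.chat.completions.create(model=model, messages=history)
    return resp.choices[0].message.content

def ask_under_oath(question):
    """Ask a question, then automatically make the model audit itself."""
    history = [{"role": "user", "content": question}]
    answer = chat(history)
    history += [{"role": "assistant", "content": answer},
                {"role": "user", "content": VERIFY}]
    return answer, chat(history)  # the original answer plus its self-audit

answer, audit = ask_under_oath("Which US court cases established fair use?")
```

A model auditing itself is, of course, no real oath; the audit catches some inventions, but independent fact-checking is still required.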
It’s the Combination That Counts
No matter what happens (and much certainly will), humans have now added a new way of dealing with thought problems and accomplishing tasks that require thinking. It is a way that we never had before, and I don’t see why we shouldn’t call it a new form of “thinking.” It is perhaps a new form of “thinking outside the box”—with the “box” being our cranium.
Just as with fast, slow, and imaginative thinking, we humans are at our most powerful when we combine all the kinds we have—which suggests that we should always use statistical thinking in combination with the other types. A friend recently wrote me, “I use ChatGPT every day.” Great—so do I. But do we both use it for everything we think about?
I’ve also heard advice to “use ChatGPT as you would an assistant or intern—don’t fully rely on it or trust it.” Of course, that is exactly how we should use our brain—except that most of us assign to our own internal thinking the quality of being “right.” (Anyone married knows how often this is not true.)
New Uses and Integration
Certainly, we all need to figure out as quickly as possible good ways to use this new thinking mode and integrate it with the others. Many of the heuristics of fast thinking are passed down culturally—often as “common sense.” When professors say, “My job is to teach my students to think,” what they really mean is “my job is to teach them to think analytically and critically.” To teach us how to think statistically, a new profession of “prompt engineer” has already emerged. Since AI is rapidly evolving—and statistical thinking with it—we will all have to keep up. Just as with the other modes of thinking, there will be inequalities among us in the use of statistical thinking, but we can do a lot to prevent that by embracing it more rapidly.
Some worry that the new thinking—and AI in general—will get out of control and lead to human annihilation—as our old thinking came close to doing with nuclear bombs. But even though there are certainly some troubling scenarios, I think this new thinking—or any thinking—is unlikely to lead to annihilation, mostly because humans love themselves too much to ultimately allow it.
Now, rather, is the time to let this new form of thinking flourish and integrate with the others. It is a time to be sure all our young people—or as many as we possibly can—start to use this new type of human thinking, in combination with the others, from their earliest years.
Labelling this new type of thinking “artificial” won’t help us. We need a better name for this new way of thinking humans have invented—and we need it sooner rather than later. It took humanity far too long to come up with “fast” and “slow”—and we gave their namer a Nobel Prize for describing how they influence decision-making! Perhaps it will help if we consider statistical thinking a new, external “brain component”—one that also influences decision-making—and that works well with, and strongly complements, humans’ newly evolving AORTA (= Always-On Real-Time Access)? Other ideas?