On June 29, 2007, I stood in line outside an AT&T store in Rockford, Illinois, hoping to purchase one of the first iPhones. Hours later, I finally reached the door, only to have an employee inform me that the last iPhone they had in stock had just been bought by the man in front of me, a man who had revealed some hours earlier that Rick Nielsen, the guitarist for Rockford’s own Cheap Trick, had paid him a thousand dollars to stand in line and purchase what some in the press had dubbed the “Jesus Phone.” (Think “long expected,” not “resurrected.”)
Undaunted, I drove to Schaumburg the next day and picked up an iPhone at the Apple Store in Woodfield Mall. As primitive, slow and clunky as it seems today, that first iPhone captivated me (and much of the rest of the world) with its new interface metaphors and a tactile experience that made it seem alive.
Something similar, I think, has happened with generative artificial intelligence, the most famous example of which, ChatGPT, has captured the popular imagination. Ask Google a question, and you’ll get back a list of search results, often with one highlighted at the top of the page, with a snippet of text that the search engine’s algorithm has chosen as the answer you were most likely looking for. That’s pretty impressive even now, nearly a quarter of a century after Google went live in September 1998, but it hardly seems like magic anymore.
Ask ChatGPT a question, however, and you’re likely to get a multi-paragraph essay, composed as you watch, that is often surprisingly accurate and sometimes dead wrong. (I asked ChatGPT to “Tell me about Kyle Hamilton, CEO of Our Sunday Visitor.” It replied, incorrectly, that “Kyle Hamilton is not the CEO of Our Sunday Visitor” and then went on, amusingly, to say that “Our Sunday Visitor is a Catholic publishing company based in Indiana, USA, and its current CEO is Scott P. Richert.”)
The answers that ChatGPT generates are passable as ninth-grade essays from 40 years ago or as college-level senior essays today, which is why much of the debate over generative AI has focused on the potential for students to use it to cheat. More puzzling, though, for someone who has paid attention to the development of this technology, is the widespread expectation that ChatGPT and its cousins are somehow poised to help mankind leap into a new age, in which AI models will develop new ideas or insights that will advance human knowledge. They won’t, for a pretty simple reason: They aren’t designed that way.
As Stephen Wolfram, the British-born physicist and mathematician, recently explained: “Those essays that [ChatGPT is] writing, it’s writing one word at a time. As it writes each word it doesn’t have a global plan about what’s going to happen. It’s simply saying ‘what’s the best word to put down next based on what I’ve already written?’”
“Best,” here, means “most statistically likely,” the word that best fits the patterns ChatGPT has perceived across the billions of words it has been fed by its creators. “Best” is not “most insightful,” much less “most likely to delight” or “to advance human thought.”
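To make that word-by-word process concrete, here is a minimal sketch in Python, using the small, openly available GPT-2 model as a stand-in for ChatGPT’s far larger one, and simple greedy selection in place of the sampling that production systems actually use. At each step the model assigns a score to every word in its vocabulary, the highest-scoring word is appended, and the loop repeats with no plan beyond the next word:

    # A sketch of next-word generation: GPT-2 stands in for ChatGPT's far
    # larger model, and greedy selection stands in for sampling. At each step
    # the model scores every token in its vocabulary and we append the single
    # highest-scoring one; there is no global plan, only the next "best" word.
    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    input_ids = tokenizer.encode("The first iPhone captivated the world because",
                                 return_tensors="pt")

    with torch.no_grad():
        for _ in range(20):                       # twenty tokens, one at a time
            logits = model(input_ids).logits      # a score for every token in the vocabulary
            next_id = logits[0, -1].argmax()      # the single most likely next token
            input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

    print(tokenizer.decode(input_ids[0]))

Run again and again, the same prompt yields the same words, because “best” here is nothing more than the statistically most likely continuation of what has already been written.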
In his early work “Poetic Diction: A Study in Meaning,” Owen Barfield, the philosopher, linguist and friend of C.S. Lewis and J.R.R. Tolkien, delves deeply into how humans come to understand the world and uncover meaning, particularly through the use of metaphor. To put Barfield’s insights into the simplest terms, the unexpected juxtaposition of two or more words tickles our fancy and allows us to uncover some bit of reality that we haven’t seen before.
But the very premise of a ChatGPT, built into the way its models are trained, is, as Wolfram points out, that there is a most likely next word to place after the current sequence of words. What ChatGPT generates is, by definition, not insight but more of what came before.
In that sense, ChatGPT is the perfect product of our times, of a world in which men and women are so locked into certain habits of thought — in a word, ideologies — that we cannot “break on through to the other side,” to create a new but true metaphor that allows us to see reality as it is, and not as we’ve been conditioned to believe it to be.
Scott P. Richert is publisher for Our Sunday Visitor.