Over the past few months, I’ve introduced artificial intelligence into the hobby life of my seven-year-old son, Peter. On Saturdays, he takes a coding class, in which he recently made a version of rock-paper-scissors, and he really wants to make more sophisticated games at home. I gave ChatGPT and Claude a sense of his skill level, and they instantaneously suggested next steps.
…
I was in college from 1998 to 2002, at the apex of the first dot-com boom; I paid much of my college tuition by running a small startup with my roommates, mainly making websites and applications for other startups. Then, as now, countless companies offered products that didn’t add up. (We worked for some of them.) It was easy to predict that many of these businesses would fail, and that investors at all scales would lose a lot of money. Still, the underlying technology—the internet—was unquestionably powerful. It’s hard not to say the same about A.I. today.
And yet, compared to the dot-com boom, the story of artificial intelligence is weirder. When the internet arrived, people weren’t sure how to make money with it. Even so, there was a sense in which the technology itself was somewhat complete. It seemed clear that connectivity would get faster and more pervasive; beyond that, the uses to which the internet might be put—streaming media, e-commerce, collaboration, cloud storage, and so on—were already broadly apparent. (Around the year 2000, for example, our little company was hired to create a workplace-collaboration system that had many of the capabilities we now associate with Slack.) Over the following decades, the engineering efforts required to create the modern internet would be prodigious; it would take extraordinary ingenuity to build the cloud, for example. But, from the beginning, the fundamental nature of the internet itself was more or less settled.
With A.I., it’s different. From a scientific perspective, the work of building and understanding A.I. is far from complete. Experts in the field differ on important issues, such as whether increases in the scale of today’s A.I. systems will deliver substantial increases in intelligence. (Perhaps new systems, shaped by further breakthroughs, will be required.) They disagree on conceptual issues, too, such as what “intelligence” means. On the all-important question of whether today’s A.I. research will lead to the invention of systems capable of human-level thinking, they hold strong, divergent views. People who work in A.I. tend to articulate their opinions clearly and forcefully, and yet there is no consensus. Anyone who weaves a scenario is disagreeing with a large cohort of her colleagues. Researchers will be answering many questions about A.I. empirically, by trying to build better A.I. and seeing what works. The A.I. bubble, in short, is more than just a bubble—it’s a collision between scientific uncertainty and evolving business thinking.
There are, at this moment, two big unknowns about artificial intelligence. First, we don’t know whether and how companies will succeed in getting value out of A.I.; they’re trying to figure that out, and they could get it wrong. Second, we don’t know how much smarter A.I. will become. About the first unknown, though, we have some clues. We can say, from firsthand experience, that having an A.I. available to you can be really useful; that it can help you learn; that it can make you more capable; that it can assist you in better utilizing your human capital, and even in expanding it. We can also say, with some confidence, that A.I. cannot do many of the important things people do—that, except in certain narrow circumstances, it is better at enabling human beings than at taking their place…