No human celebrating a first birthday is as verbose, knowledgeable, or prone to fabrication as ChatGPT, which is blowing out its first candle as I type these words. Of course, OpenAI’s game-changing large language model was precocious at birth, tumbling into civilization’s ongoing conversation like an uninvited guest busting into a dinner party and instantly commanding the room. The chatbot astonished everyone who prompted it with fully realized, if not always completely factual, responses to almost any possible query. Suddenly, the world had access to a Magic 8 Ball with a PhD in every discipline. In almost no time, 100 million people became regular users, delighted and terrified to realize that humans had suddenly lost their monopoly on discourse.

The response shocked ChatGPT’s creators at the AI startup OpenAI as much as anyone. When I was interviewing people at the company for WIRED’s October cover feature this year, virtually everyone admitted to wildly underestimating the chatbot’s impact. From their view inside the AI bubble, the truly big reveal was expected to be the just-completed text-generation model GPT-4. ChatGPT used a less powerful version, 3.5, and was seen as merely an interesting experiment in packaging the technology into an easier-to-use interface. This week Aliisa Rosenthal, the company’s head of sales, tweeted out striking evidence of the degree to which OpenAI’s leaders didn’t understand what they were about to unleash on the world. “A year ago tonight I got a Slack letting me know we were silently launching a ‘low key research preview’ in the morning and that it shouldn’t impact the sales team,” she wrote. Ha! Another OpenAI employee posted that people were taking bets on how many users would access it. 20K? 80K? 250K? Try the fastest-growing user base in history.

In my first Plaintext column of 2023, I made the observation (too obvious to be a prediction) that ChatGPT would own the new year. I said that it would kick off a wet, hot AI summer, dispelling whatever chill lingered from an extended AI winter. To be sure, it was a triumph not solely of science but of perception as well. Artificial intelligence had been a thing for almost 70 years already, at first taking baby steps in limited domains. Researchers built robots that stacked blocks. An early chatbot called Eliza beguiled people into sharing their personal lives using the simple trick of parroting their words back to them as questions. But as the millennium approached and passed, AI became more adept and built momentum. A computer clobbered the greatest human chess champion. Robots came to dominate Amazon’s warehouses. Daring Tesla owners snoozed while their cars drove them home. A computer program managed a feat that might have taken humans centuries to accomplish: solving the scientific mysteries of protein folding. But none of those advances packed the visceral wallop of asking ChatGPT to, say, compare the knives of the Roman Empire to those of medieval France. And then asking if the shockingly detailed bullet-pointed response could be recast in the way that historian Barbara Tuchman might do it, and getting an essay good enough to prove that homework will never be the same.

Millions of people tried to figure out how to use this tool to improve their work. Many more simply played with it in wonder. I can’t count the number of times journalists asked ChatGPT itself for comment on something and dutifully reported its response. Beyond bolstering word count, it’s hard to say what they were trying to prove. Maybe one day human content will be the novelty.

ChatGPT also changed the tech world. Microsoft’s $1 billion gamble on OpenAI in 2019 turned out to have been a masterstroke. Microsoft CEO Satya Nadella, with early access to OpenAI’s advances, quickly integrated the technology behind ChatGPT into its Bing search engine and pledged billions more in investment to its maker. This triggered an AI arms race. Google, which earlier that same November had publicly bragged that it was going slow on releasing its LLMs, went into a frantic “Code Red” to push out its own search-based bot. Hundreds of AI startups launched, and contenders like Anthropic and Inflection raised hundreds of millions or even billions of dollars. But no company benefited more than Nvidia, which built the chips that powered large language models. ChatGPT had scrambled tech’s balance of power.

Maybe most significantly, ChatGPT was a shrieking wake-up call that a technology with impact at least on the scale of the internet was about to change our lives. Governments in the US, Europe, and even China had been nervously monitoring AI’s rise for years; when Barack Obama guest-hosted an issue of WIRED in 2016, he was eager to talk about the technology. Even the Trump White House released an executive order. All of that was mostly talk. But after ChatGPT appeared, even politicians realized that scientific revolutions don’t care much about bluster, and that this was a revolution of the first order. In the last year, AI regulation rose to the top of the stack of must-deal-with issues for Congress and the White House. Joe Biden’s expansive executive order seemed to reflect the sudden urgency, though it’s far from clear that it will change the course of events.

Meanwhile, during this year of ChatGPT, many AI scientists themselves have come to believe that their brilliant creations could bring about disaster. Dozens of leading AI thinkers signed letters either urging a pause in developing new models or just noting that AI poses a potentially existential danger to humanity. Notably, Geoffrey Hinton, dubbed the godfather of AI, spoke publicly of a change of heart—the technology he helped invent and champion urgently needs more oversight, he now says. It was a little confusing to see how many of those signatories kept working on AI anyway.

The recent OpenAI boardroom drama—where its directors fired CEO Sam Altman, only to back down after employees threatened to walk—seemed to neatly cap off a year of excitement and tumult. Five days of chaos doesn’t seem to have hobbled OpenAI’s ability to move the science of AI forward, or harmed its for-profit product development. (Though it would be a definite blow to the project if it loses Ilya Sutskever, the chief researcher who turned on his cofounders only to later recant. His fate is still uncertain.) But OpenAI’s vaudeville version of governance did tarnish what might have been an overly trusting view of the wizards injecting AI into humanity’s collective bloodstream.

I appreciate ChatGPT for many things, but especially the clarity it provided us in an era of change. In the Before Days, meaning anytime prior to November 30, 2022, we already had long passed the turning point in digital technology’s remodeling of civilization. AI was already running zillions of systems, from airplanes to the electric grid. With mobile phones seemingly Gorilla-glued to our palms, we had attained cyborg status. All of that sneaked up on us. We were frogs in pots of increasingly warm water, oblivious to the enormity of this transformation. Then OpenAI turned up the heat. We found ourselves face-to-face with an alien form of intelligence—and a possibly parboiled future. Please don’t ask ChatGPT what happens next. It’s up to us.

Time Travel

My first Plaintext letter of 2023 grappled with the release of ChatGPT and what it might mean in the coming months and beyond. The “wet hot AI summer” that I predicted indeed arrived—and it was hotter and wetter than anyone, including the OpenAI adepts who triggered it, could have predicted.

Something weird is happening in the world of AI. In the early part of this century, the field burst out of a lethargy—known as an AI winter—thanks to the innovation of “deep learning,” led by three academics. This approach to AI transformed the field and made many of our applications more useful, powering language translations, search, Uber routing, and just about everything that has “smart” as part of its name. We’ve spent a dozen years in this AI springtime. But in the past year or so there has been a dramatic aftershock to that earthquake, as a sudden profusion of mind-bending generative models has appeared.

One thing is [clear] … Granting open access to these models has kicked off a wet hot AI summer that’s energizing the tech sector, even as the current giants are laying off chunks of their workforces. Contrary to Mark Zuckerberg’s belief, the next big paradigm isn’t the metaverse—it’s this new wave of AI content engines, and it’s here now. In the 1980s, we saw a gold rush of products moving tasks from paper to PC application. In the 1990s, you could make a quick fortune by shifting those desktop products online. A decade later, the movement was to mobile. In the 2020s, the big shift is toward building with generative AI. This year thousands of startups will emerge with business plans based on tapping into the APIs of those systems. The cost of churning out generic copy will go to zero. By the end of the decade, AI video-generation systems may well dominate TikTok and other apps. They may not be anywhere near as good as the innovative creations of talented human beings, but the robots will quantitatively dominate.

Ask Me One Thing

Pawan asks, “Why does it feel like we’re always losing the privacy fight these days? Are Silicon Valley’s inventions always going to be necessarily adversarial to a desire to be left unknown?”

Thanks for asking, Pawan. One big reason we’re losing is that our regulators and legislators have failed to protect us. In the US, there’s nary a congresscritter who doesn’t think that citizens deserve more privacy in the digital age. Yet due to lobbyists, partisanship, and arguments over who gets credit for the laws, the long-needed, long-in-the-making federal privacy bill hasn’t appeared.

The second part of your question is more interesting. The moguls of the Valley didn’t set out to create a privacy dystopia. But a lot of the innovations they add to products just happen to depend on snooping. It starts, of course, with targeted advertising, which pays for basically all our search and social media. No one loves that, but we do love plenty of inventions that wind up compromising our privacy. If you had to jump through hoops every time you talked to your computer, it wouldn’t be as handy in answering your questions on the spot. How could navigation happen if your phone didn’t know where you were? What’s the use of a home security camera if it isn’t vigilant? Even face recognition—which seems, like it or not, destined to become a standard part of air travel, and maybe how we get into office buildings—will speed things up. The aggregate result, as you note, is that privacy is indeed a fight that we’ve lost. Maybe Silicon Valley should concentrate more on inventing stuff that actually enables “a desire to be left unknown.”

You can submit questions to mail@wired.com. Write ASK LEVY in the subject line.

End Times Chronicle

What if we had an international summit on climate change and neither the US president nor the Chinese president bothered to show up? News flash: All the stuff that churns carbon dioxide into the atmosphere isn’t sitting things out.

Last but Not Least

Maybe not as flashy as ChatGPT, but DeepMind just announced an AI system that dreams up new inorganic materials, rewriting the book on that science.

Here’s the lawyer suing OpenAI on behalf of writers, artists, and comedian Sarah Silverman.

For $61,000 you can buy a Cybertruck and hear your friends weigh in on whether it’s ugly.

Hey, WIRED is having a conference in San Francisco next week. Here’s how to participate.
