The Limitations of AI: Things it Cannot Accomplish
A few months ago, I was called in at the last minute to participate in an onstage fireside chat at an Authors’ Guild event. (I’m on the nonprofit’s council, but of course I speak here only for myself.) Guild CEO Mary Rasenberger and I spent much of the session exploring the implications of a future where AI robots could create viable literary works. For writers, it’s a terrifying scenario. As we discussed the prospect of a marketplace flooded by books authored by prompting neural nets, I had a revelation that seemed to mitigate some of the anxiety. It may not have been an original thought, and I may have even come up with it myself earlier and forgotten about it. (My ability to retain what’s in my training set falls short of that of ChatGPT or Claude.) But it did frame the situation in a way that transcended issues like copyright and royalties.
I put it to the audience something like this: Let’s say you read a novel that you really loved, something that inspired you. And only after you were done were you told that the author had not been a human being, but an artificial intelligence system … a robot. How many of you would feel cheated?
Almost every hand went up.
The reason for that feeling, I went on, is that when we read—when we take in any piece of art, actually, in any medium—we’re looking for something more than great content. We are seeking a human connection.
This applies even when an author is long dead. If anyone is still reading Chaucer (Has he been canceled yet?), somehow over centuries we can vibe into the mind of some dude that lived in the 14th century and would have been amazing to talk to over a beer or a goblet of mead. In fact, we get to know him better through reading him, even if we have to struggle a bit with Middle English. (Props to Ann Matonis, my rock star of a Medieval Lit professor at Temple University. Tough grader, though.)
That epiphany about the meaning of human authorship has been my north star as I work my way through the challenging AI issues that seem to besiege us every day. I thought about it this week when I sat in on a press briefing from Google product managers explaining some new AI features of its large language model–powered chatbot Gemini. (For those not keeping score at home, that’s the bot formerly known as Bard; these companies change names more than spies with safe-deposit boxes full of passports.) The new, enhanced Gemini promises, they said, “to supercharge your productivity and creativity.”
Productivity is a slam dunk win for algorithms. No quibble there. Creativity we have to talk about.
Google provided some illustrative examples. One was organizing snacks for a kids’ soccer team. Gemini could figure out who brings what to which game, send personalized emails to the right people, and even map out the destinations. That seems like a great way to save time on what can be a thankless time suck. Productivity!
A second example involved the creation of “a cute caption” for a picture of the family dog. Gemini provided: “Baxter is the hilltop king! Look who’s on top of the world!” That’s a reasonably fun caption. But it makes me think about the purpose of posting to social media, which is all about human connections. Sharing a remark pinned to your dog’s picture is part of a conversation. Using a ghostwriter invariably distances you from friends and followers who read the caption. Having a robot provide your part of the conversation seems like outsourcing to the extreme.
No problem hiring someone to walk your dog. But hiring, um, something to talk about your dog? Weird. What if everyone did this? I bet we would not enjoy captions so much. A friend who replied to the automated caption with a comment might feel silly if they later learned that they were responding to something concocted by an artificial neural net, not a squishy biological one. Or maybe your friend asks their Gemini to come up with a cute reply. Then the humans could sit back while their robots conversed. The repartee might have the rapier wit of a Tom Stoppard play. But there’d be no human connection.
No doubt these language models are going to provide tremendous benefits. Automate those grant proposals! Summarize those sales reports! Tutor those kids on algebra! Tell us when a spreadsheet reports something dicey! Code up a storm! But some content is contingent on connection. Another use case offered in Google’s briefing: “Help me write a document for a job.” Gemini can do terrific things with that. But employers read those things to get a sense of the applicant’s reasoning skills, grasp of the job requirements, and basic sanity. When everyone is generating those letters with AI, those factors will become opaque. Don’t bother with the letter and just send a résumé. For a real connection, the recruiter will have to do the Zoom—and hope you don’t send your deepfake double.
In his just-published book Literary Theory for Robots, Dennis Yi Tenen, an associate English professor at Columbia University, notes that fears like mine—and the Authors’ Guild’s—have precedent. He says that, despite its highbrow associations, much of the work of writing is a conventional form of labor that’s prone to automation. He cites the pre-computer-age development of “template” techniques that could speed the process and even provide an author’s plots. In 1895, for instance, a French writer named Georges Polti published a template book called The Thirty-Six Dramatic Situations. Other works broke down the elements of mystery stories. More complex systems appeared, like an entire file cabinet full of notes you could mix and match to create your own work. “They seemed scandalous to people at the time,” Tenen tells me when I reach him by phone. “They said, Oh, my God, you can’t, you’re ruining the genius of authorship!” Of course, that hasn’t happened. And even in some areas of writing that rely on rigid formulas, like pulp fiction and television sitcoms, writers have managed to express themselves uniquely within those confines and forge true connection. Tenen expects the same thing to happen with AI. “Some practitioners who use these AI tools and the human intellect will rise above the automated possibilities,” he says.
I’d love to believe that. No matter how good those robots get, writers, musicians, and artists will smear their brilliant and messy fingerprints on the output. Audiences will sense and respond to the humanity expressed by those works. On the amateur level, people might even get inured to clever AI-produced captions of their photos and realize it’s more fun to put their own twists on them.
Nothing is going to stop our inexorable slog toward a world where much of what we read, see, and hear will be coproduced, if not entirely concocted, by robots. In many, many ways these systems will lift the burden of rote work from our keyboard-weary wrists. Still, I am wary when I hear representatives of AI companies tell us that we will be “inspired” by our language models, a word I heard more than once during that Google briefing. Humans are inspired by great prose, great images, great music, and other forms of art. Maybe one day our AI systems will be capable of producing artworks as fantastic as, or even more fantastic than, those imagined by the best human artists. But the point of it all is human connection. God help us if we can’t tell the difference.
Time Travel
I keep returning to a story I wrote for WIRED in April 2012, as AI’s post-winter thaw was just beginning. I examined a Chicago-based company called Narrative Science that produced news articles with AI, an idea that at the time seemed ludicrous. The headline was “Can an Algorithm Write a Better News Story Than a Human Reporter?” We still don’t have an answer, but the needle is much closer to “Yes” than it was 12 years ago.
When Narrative Science was just getting started, meta-writers had to painstakingly educate the system every time it tackled a new subject. But before long they developed a platform that made it easier for the algorithm to learn about new domains. For instance, one of the meta-writers decided to build a story-writing machine that would produce articles about the best restaurants in a given city. Using a database of restaurant reviews, she was able to quickly teach the software how to identify the relevant components (high survey grades, good service, delicious food, a quote from a happy customer) and feed in some relevant phrases. In the space of a few hours she had a bot that could churn out an endless supply of chirpy little articles like “The Best Italian Restaurants in Atlanta” or “Great Sushi in Milwaukee…”
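The template approach described above can be sketched in a few lines of Python. Everything here is hypothetical—the field names, phrase bank, and scoring threshold are invented for illustration, not Narrative Science’s actual platform—but it shows the basic move: pick out the relevant components from structured data and slot them into canned phrases.

```python
import random

# Hypothetical phrase bank a "meta-writer" might supply for the restaurant domain.
OPENERS = [
    "Looking for a great meal in {city}? Look no further than {name}.",
    "{name} is a standout on the {city} dining scene.",
]
PRAISE = {
    "service": "Diners rave about the attentive service.",
    "food": "The {cuisine} dishes earn consistently high marks.",
}

def write_review(record):
    """Turn one structured restaurant record into a chirpy little blurb."""
    parts = [random.choice(OPENERS).format(**record)]
    # Identify the "relevant components": only praise what scored well.
    for component, score in record["scores"].items():
        if score >= 4 and component in PRAISE:
            parts.append(PRAISE[component].format(**record))
    # Close with the obligatory quote from a happy customer.
    parts.append(f'One happy customer put it simply: "{record["quote"]}"')
    return " ".join(parts)

record = {
    "name": "Trattoria Roma",
    "city": "Atlanta",
    "cuisine": "Italian",
    "scores": {"service": 5, "food": 4},
    "quote": "Best pasta in town!",
}
print(write_review(record))
```

Point the same skeleton at a database of restaurant records and you get an endless supply of “Best Italian Restaurants in Atlanta” articles—which is roughly what made the platform so quick to retarget at new domains.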
[Narrative Science CEO] Hammond believes that as Narrative Science grows, its stories will go higher up the journalism food chain—from commodity news to explanatory journalism and, ultimately, detailed long-form articles. Maybe at some point, humans and algorithms will collaborate, with each partner playing to its strength. Computers, with their flawless memories and ability to access data, might act as legmen to human writers. Or vice versa, human reporters might interview subjects and pick up stray details—and then send them to a computer that writes it all up. As the computers get more accomplished and have access to more and more data, their limitations as storytellers will fall away. It might take a while, but eventually even a story like this one could be produced without, well, me. “Humans are unbelievably rich and complex, but they are machines,” Hammond says. “In 20 years, there will be no area in which Narrative Science doesn’t write stories.”
Ask Me One Thing
Quentin asks, “Today there is so much data falsification, deflection, and misrepresentation online by businesses seeking more profit. Don’t software developers want to provide software that tries to circumvent these huge web commercial pitfalls?”
Hi, Quentin, and thanks for the question. There’s plenty of software that tries to limit spammy or deceptive pitches, and my guess is the majority of us have some sort of filtering that directs most of these straight to our built-in spam bins. Some legitimate stuff ends up getting tagged as spam too, but at least that gives us an all-purpose excuse for not responding to an invitation. It was in my spam filter!
But we still have to process way too much stuff that gets put in front of us, and a lot of it is sophisticated fraud. Spam that looks like authentic messages from PayPal or Amazon is so prevalent that I’m sure the real companies behind those are at their wits’ end. Fortunately, a surprising number of those can be detected by simply looking at the sender’s email address, which often betrays a sender who is decidedly non-corporate.
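That sender-address check is easy enough to automate. Here’s a minimal sketch, assuming you maintain your own allowlist of domains the genuine companies are known to send from (the domains listed below are illustrative, not an authoritative list):

```python
from email.utils import parseaddr

# Hypothetical allowlist: domains the real companies are assumed to mail from.
LEGIT_DOMAINS = {
    "paypal": {"paypal.com", "e.paypal.com"},
    "amazon": {"amazon.com", "amazonses.com"},
}

def looks_spoofed(from_header: str, claimed_brand: str) -> bool:
    """Flag mail that name-drops a brand but comes from an unrelated domain."""
    _, address = parseaddr(from_header)          # strip the display name
    domain = address.rpartition("@")[2].lower()  # text after the last "@"
    allowed = LEGIT_DOMAINS.get(claimed_brand.lower(), set())
    # Accept exact matches and subdomains of an allowed domain; flag the rest.
    return not any(domain == d or domain.endswith("." + d) for d in allowed)

print(looks_spoofed('"PayPal Support" <help@paypal-secure-login.xyz>', "PayPal"))  # True
print(looks_spoofed("service@paypal.com", "PayPal"))  # False
```

Real mail filters lean on much stronger signals (SPF, DKIM, and DMARC records rather than a hand-kept list), but the display-name-versus-domain mismatch alone catches a surprising share of these fakes.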
A knottier problem is that even legitimate companies pitch their wares in slimy ways. Even companies that pride themselves on being “trusted brands” all too often bust into our inboxes like carnival barkers. Offers are always last-chance (yeah, like a Who farewell tour), and “discounts” are seldom actual bargains. Marketers offer us free trials designed to lock us into long-term subscriptions that are near impossible to cancel. They promise to fatten our bank accounts with products that end up diminishing them. And even if we wind up buying something that doesn’t rip us off, they follow up with a fusillade of demands that we rate the experience, and then quadruple their offers. AI can assuredly be developed to detect and warn us of such tactics—but it’s also being tapped to design still more infernal commercial traps. In short, I feel your pain, Quentin. Our inboxes are Hobbesian nightmares, and I don’t see how this is going to change soon. Sorry!
You can submit questions to mail@wired.com. Write ASK LEVY in the subject line.
End Times Chronicle
What? A Grammy ceremony where not everyone is griping about what a travesty it was? (Jay-Z excepted, but he did say that his complaints were made “with love.”) Plus Joni and Tracy!
As Google rebrands Bard as Gemini, CEO Sundar Pichai shares what’s next. It may not be search.
There’s an advanced Gemini model, too. Will people pay for it?
Start here for a stunning six-part fictional tale taking place in 2054, where the singularity rules.
Sixteen years after the Bitcoin paper, and no one has figured out who Satoshi is. Now a judge will weigh in.
Artificial Intelligence (AI) has made significant advancements in recent years, revolutionizing various industries and transforming the way we live and work. From self-driving cars to virtual assistants, AI has shown immense potential in replicating human-like intelligence and performing complex tasks. However, despite its remarkable capabilities, there are still limitations to what AI can accomplish. In this article, we will explore some of the key limitations of AI and the challenges it faces.
1. Common Sense Reasoning: While AI systems excel at processing vast amounts of data and making predictions based on patterns, they often struggle with common sense reasoning. Humans possess a deep understanding of the world, enabling us to make logical deductions and infer information even when it is not explicitly stated. AI systems, on the other hand, lack this innate ability and struggle to interpret context and make sense of ambiguous or incomplete information.
2. Emotional Intelligence: Emotions play a crucial role in human decision-making and social interactions. However, AI systems are unable to experience emotions or understand them in the same way humans do. While AI can recognize facial expressions or analyze sentiment in text, it cannot truly comprehend emotions or empathize with others. This limitation poses challenges in areas such as customer service or healthcare, where emotional intelligence is essential.
3. Creativity and Innovation: AI systems are excellent at analyzing existing data and patterns to generate insights or make predictions. However, they struggle when it comes to creativity and innovation. The ability to think outside the box, come up with original ideas, or create something entirely new is a uniquely human trait. While AI can assist in certain creative tasks like generating music or art, it lacks the depth of imagination and intuition that humans possess.
4. Ethical Decision-making: AI systems are designed to follow predefined rules and algorithms, making them highly efficient in executing tasks. However, they lack the moral compass and ethical judgment that humans possess. AI cannot make subjective decisions or consider the broader ethical implications of its actions. This limitation raises concerns in areas such as autonomous weapons, where AI may be used to make life-or-death decisions without human intervention.
5. Adaptability and Flexibility: AI systems are typically trained on specific datasets and perform well within those predefined boundaries. However, they struggle to adapt to new or unfamiliar situations. Humans, on the other hand, can quickly learn and adapt to new environments, apply knowledge from one domain to another, and handle unforeseen circumstances. This limitation makes it challenging for AI systems to operate in dynamic or rapidly changing environments.
6. Intuition and Insight: Humans often rely on intuition and gut feelings to make decisions or solve complex problems. These intuitive leaps are difficult to replicate in AI systems, as they are based on subconscious processing and pattern recognition. While AI can analyze vast amounts of data and provide recommendations, it cannot replicate the intuitive leaps that humans make, which often lead to breakthroughs or innovative solutions.
In conclusion, while AI has made remarkable progress and continues to transform various industries, it still has limitations that prevent it from fully replicating human intelligence. Common sense reasoning, emotional intelligence, creativity, ethical decision-making, adaptability, and intuition are some of the areas where AI falls short. Recognizing these limitations is crucial to ensure responsible and ethical development and deployment of AI technologies. By understanding the boundaries of AI, we can leverage its strengths while also acknowledging the unique capabilities that make us human.