In the decades to come, 2023 may be remembered as the year of generative AI hype, when ChatGPT became arguably the fastest-spreading new technology in human history and expectations of AI-powered riches became commonplace. The year 2024 will be the time for recalibrating those expectations.

Of course, generative AI is an impressive technology, and it provides tremendous opportunities for improving productivity in a number of tasks. But because the hype has gone so far ahead of reality, the setbacks of the technology in 2024 will be more memorable.

More and more evidence will emerge that generative AI and large language models provide false information and are prone to hallucination, in which a model simply makes things up and gets them wrong. Hopes of a quick fix to the hallucination problem via supervised learning, in which these models are taught to stay away from questionable sources or statements, will prove optimistic at best. Because the architecture of these models is based on predicting the next word or words in a sequence, it will prove exceedingly difficult to anchor those predictions to known truths.
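To see why next-word prediction is hard to anchor to truth, consider a deliberately tiny sketch (my illustration, not anything from the article): a bigram model trained purely on word counts will assemble fluent-sounding statements from statistics alone, with no representation of whether those statements are true.

```python
from collections import defaultdict

# Toy corpus: one true sentence and one false one. The model sees only
# word sequences; "true" and "false" are not categories it can learn.
corpus = (
    "the capital of france is paris . "
    "the capital of australia is sydney . "  # false, but statistically fluent
).split()

# Count how often each word follows each other word.
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word(prev):
    # Greedy prediction: the most frequent continuation, true or not.
    return max(counts[prev], key=counts[prev].get)

def generate(start, n=6):
    out = [start]
    for _ in range(n):
        out.append(next_word(out[-1]))
    return " ".join(out)

print(generate("the", 6))        # a memorized, fluent sentence
print(generate("australia", 3))  # "australia is paris ." — a fabricated
                                 # claim stitched together from statistics
```

Real large language models are vastly more sophisticated than this, but the training objective is the same in kind: predict the next token. Nothing in that objective distinguishes a true continuation from a merely plausible one, which is the point the paragraph above is making.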

Predictions of exponential improvements in productivity across the economy, or of the much-vaunted first steps towards “artificial general intelligence”, or AGI, will fare no better. The tune on productivity will shift to blaming failures on businesses’ faulty implementation of generative AI. We may start moving towards the (much more meaningful) conclusion that one needs to know which human tasks can be augmented by these models, and what types of additional training workers need to make this a reality.

Some people will start recognizing that it was always a pipe dream to reach anything resembling complex human cognition on the basis of predicting words. Others will say that intelligence is just around the corner. Many more, I fear, will continue to talk of the “existential risks” of AI, missing what is going wrong, as well as the much more mundane (and consequential) risks that its uncontrolled rollout is posing for jobs, inequality, and democracy.

We will witness these costs more clearly in 2024. Generative AI will have been adopted by many companies, but it will prove to be just “so-so automation” of the type that displaces workers but fails to deliver huge productivity improvements.

The biggest use of ChatGPT and other large language models will be in social media and online search. Platforms will continue to monetize the information they collect via individualized digital ads, while competition for user attention will intensify. The amount of manipulation and misinformation online will grow. Generative AI will then increase the amount of time people spend in front of screens (and the mental health problems that inevitably come with it).

There will be more AI startups, and the open source model will gain some traction, but this will not be enough to halt the emergence of a duopoly in the industry, with Google and Microsoft/OpenAI dominating the field with their gargantuan models. Many more companies will be compelled to rely on these foundation models to develop their own apps. And because these models will continue to disappoint due to false information and hallucinations, many of these apps will also disappoint.

Calls for antitrust and regulation will intensify. Antitrust action will go nowhere, because neither the courts nor policymakers will have the courage to attempt to break up the largest tech companies. There will be more stirrings in the regulation space. Nevertheless, meaningful regulation will not arrive in 2024, for the simple reason that the US government has fallen so far behind the technology that it needs time to catch up. That shortcoming will become more apparent over the course of the year, intensifying discussions around new laws and regulations and even making them more bipartisan.

Prepare Yourself for the Impending Disappointment of AI

Artificial intelligence (AI) has been a buzzword in recent years, with promises of transforming industries and revolutionizing our daily lives. From self-driving cars to virtual assistants, AI has shown great potential. However, it is important to temper our expectations and prepare ourselves for the disappointment that may come with this technology.

One of the main reasons for potential disappointment is the hype surrounding AI. Media coverage and company marketing often exaggerate the capabilities of AI, creating unrealistic expectations. While AI has made significant advancements, it is still far from achieving human-level intelligence. It is crucial to understand that AI systems are designed to perform specific tasks and lack the general intelligence and common sense that humans possess.

Another factor contributing to potential disappointment is the limitations of current AI technologies. Despite significant progress, AI systems still struggle with certain tasks that humans find easy. For example, natural language understanding and context comprehension are areas where AI often falls short. This can lead to frustrating experiences when interacting with AI-powered devices or services.

Ethical concerns surrounding AI also play a role in potential disappointment. As AI becomes more integrated into our lives, questions arise about privacy, security, and bias. Issues such as data breaches, algorithmic biases, and the potential for misuse of AI raise legitimate concerns. It is essential to be aware of these ethical considerations and demand transparency and accountability from companies developing AI technologies.

Furthermore, the pace of AI development may not meet our expectations. While breakthroughs in AI research occur regularly, the translation of these advancements into real-world applications takes time. The complexity of implementing AI systems in various domains, along with regulatory and safety considerations, can slow down progress. It is important to have patience and realistic expectations regarding the timeline for AI adoption.

To prepare ourselves for the potential disappointment of AI, it is crucial to educate ourselves about its capabilities and limitations. Understanding what AI can and cannot do will help manage our expectations and avoid unrealistic assumptions. Engaging in critical thinking and questioning the claims made by AI developers and companies is essential.

Additionally, it is important to stay informed about the ethical implications of AI. Being aware of the potential risks and advocating for responsible AI development can help mitigate disappointment and ensure that AI technologies are developed in a way that benefits society as a whole.

Lastly, embracing a mindset of continuous learning and adaptation will be beneficial. AI technologies are constantly evolving, and new breakthroughs may occur that surpass our current expectations. By staying open-minded and adaptable, we can navigate the potential disappointments and embrace the positive aspects that AI brings to our lives.

In conclusion, while AI holds great promise, it is crucial to prepare ourselves for the potential disappointment that may come with it. Managing our expectations, understanding its limitations, being aware of ethical concerns, and staying informed will help us navigate the evolving landscape of AI. By doing so, we can make the most of this transformative technology while minimizing any potential disappointments along the way.
