In 2024, we will see courts and regulators around the world demonstrate that tech exceptionalism, when it comes to the applicability of legal rules, is magical thinking. The tide has already started to turn on the assumption that law and regulation cannot keep up with technological innovation. But, in 2024, the sea change will come: not through new rules, but by old rules being applied aggressively to new problems.
In the United States, in the absence of federal privacy legislation, regulators have already started to repurpose laws and rules they do have at their disposal to address some of the most egregious examples of Big Tech playing fast and loose with our rights and personal data. In 2023, the US Federal Trade Commission (FTC) continued to expand the regulatory heft of consumer protection regulations.
It took on the problem of dark patterns—deceptive design that apps and websites use to trick users into unintended actions, such as purchases or subscriptions—with a half-billion-dollar fine against Fortnite maker Epic Games. The FTC also issued substantial fines to Amazon for significant privacy breaches involving its Alexa and Ring doorbell devices. There are no signs that the FTC will slow down in 2024, with rules in the pipeline to govern commercial surveillance and digital security. In 2024, we’ll see regulators in other fields and other parts of the world follow suit, bolstered by the FTC’s successes.
In 2022, the French Data Protection Authority, the CNIL, fined Clearview AI a record €20 million (around $21.9 million) for failure to comply with an earlier 2021 ruling, which had ordered the company to stop collecting and using the data of people on French territory. Overdue penalties continued to rack up in the millions of euros through 2023. In 2024, we will see regulators such as the CNIL taking more radical legal steps to show that no company is above the law.
OpenAI’s CEO, Sam Altman, started 2023 with a call for global AI regulation, but balked at the actual prospect of EU regulation in the shape of the EU AI Act. While AI doomers asked for a pause on innovation to allow regulation to catch up, regulators including the Italian DPA found ways to clip the industry’s wings, using existing regulations to halt ChatGPT on their territory, albeit temporarily. Ongoing intellectual property lawsuits, such as the one against Microsoft alleging that the company illegally used code written by others, may well make 2024 a turbulent year for the fundamental business model of generative AI.
It is not only the individual impacts of technology that courts and regulators have in their sights. In 2024, they will also be considering the impacts on society, markets, and businesses. For instance, antitrust actions in the US and the EU launched in 2023 call into question Google’s dominance in the ad tech market, potentially shaking the monolithic logic of the programmatic advertising model that has helped create the internet as we know it today.
In 2024, we will see the regulatory void long enjoyed by Big Tech come to an end. While new laws and regulations like the AI Act, the Digital Services Act, and the Digital Markets Act in the EU start to take shape, courts and regulators will continue to apply existing law and regulation to the new ways that technology affects our daily lives. We will see the full panoply of legal tools coming to meet the challenges. Human rights and civil liberties law, competition law, consumer rights law, intellectual property, defamation, tort, employment law, and a plethora of other fields will be engaged to tackle the real-life harms already being caused by existing technology, including AI.
Regulatory Authorities Are Making Progress in Addressing Big Tech Concerns
In recent years, concerns about the power and influence of Big Tech companies have grown significantly. Companies like Facebook, Google, Amazon, and Apple have become household names, dominating entire sectors of the economy and amassing vast amounts of user data. As a result, regulatory authorities around the world have been closely monitoring these tech giants and taking steps to address the concerns their dominance raises.
One of the primary concerns is these companies’ ability to stifle competition. With their immense resources and market power, Big Tech firms can acquire or crush potential competitors, reducing innovation and limiting consumer choice. To tackle this, regulatory authorities have been actively reviewing mergers and acquisitions involving Big Tech to ensure fair competition in the market.
For instance, in 2020, the European Commission launched two separate antitrust investigations into Apple. The first examined whether the App Store’s rules and practices unfairly favored Apple’s own apps over those of competitors. The second looked at Apple Pay, investigating whether the company restricted access to near-field communication (NFC) technology on iPhones, limiting competition in mobile payment services. These investigations underline regulators’ commitment to tackling anti-competitive behavior by Big Tech.
Data privacy is another significant concern associated with Big Tech. These companies collect vast amounts of user data, raising questions about how that data is used and protected. In response, regulatory authorities have introduced stricter data protection rules and increased scrutiny of Big Tech’s data practices.
The General Data Protection Regulation (GDPR), which took effect in the European Union in 2018, is a prime example of these regulatory efforts. The GDPR gives individuals greater control over their personal data and imposes strict obligations on companies that handle it. Non-compliance can result in hefty fines, pushing Big Tech companies to prioritize data privacy and protection.
Furthermore, regulatory authorities have been exploring ways to enhance transparency and accountability in Big Tech’s algorithms and content moderation practices. Concerns have been raised about bias, discrimination, and the spread of misinformation through these systems. Regulators are therefore pushing for more transparency in algorithmic decision-making and urging companies to take responsibility for the content shared on their platforms.
In the United States, the Federal Trade Commission (FTC) has been actively investigating Big Tech companies for potential antitrust violations and anti-competitive behavior. In 2020, the FTC sued Facebook, alleging that the company pursued a systematic strategy to eliminate competition by acquiring potential rivals like Instagram and WhatsApp. The lawsuit reflects regulators’ growing determination to hold Big Tech accountable for its actions.
While progress is being made, regulating Big Tech remains a complex task. These companies operate globally, making it difficult for any single regulatory authority to enforce rules effectively. Coordinated efforts across jurisdictions are crucial to ensure a level playing field and prevent regulatory arbitrage.
In conclusion, regulatory authorities are actively addressing concerns about Big Tech’s dominance: promoting fair competition, protecting user data, enhancing transparency, and holding these companies accountable. But the challenges of regulating global tech giants demand ongoing collaboration and coordination among regulators worldwide. Only through such collective effort can we balance innovation with a fair and competitive digital landscape.