Last week, OpenAI and the German media conglomerate Axel Springer signed a multi-year licensing agreement. It allows OpenAI to incorporate articles from Axel Springer–owned outlets like Business Insider and Politico into its products, including ChatGPT. Although the deal centers on journalists’ work, the reporters whose stories will be shared under the agreement were not consulted beforehand.

Four Business Insider employees told WIRED that they found out about the AI deal at the same time it was announced publicly. PEN Guild, the US union that represents around 280 workers at Politico and at E&E News, another Axel Springer publication, says it was “not consulted or informed about the decision to have robots summarize our work.”

At a Business Insider all-hands meeting on December 13, the day the news broke, the deal was “a very quick line item,” according to a Business Insider staffer in attendance who spoke to WIRED on condition of anonymity. “How it works, when it’s starting, I have no idea.”

This isn’t the first deal struck between an AI company and a media company over data licensing—OpenAI made a similar agreement with the Associated Press in July, for example—but it’s a significant one. Right now, most major AI companies gather their training data by scraping the internet without first licensing the copyrighted materials they use. That practice has triggered a wave of lawsuits arguing that it infringes on the rights of copyright holders.

Instead of acquiring Axel Springer’s articles through permissionless scraping, OpenAI is now paying to integrate news stories into its products. This agreement demonstrates that companies like OpenAI are willing to cut deals with media companies—and that, even while arguing it’s legal to scrape web content, OpenAI is preparing for a future in which the current scraping approach stops working so well. (A number of news outlets, including the BBC and The New York Times, have taken steps to block OpenAI’s web crawler, in an effort to prevent scraping.)
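
Blocking typically happens through a site’s robots.txt file, which tells crawlers that identify themselves, such as OpenAI’s GPTBot, which pages they may visit. As a rough illustration, a publisher that wanted to shut GPTBot out entirely could publish rules like these:

User-agent: GPTBot
Disallow: /

Compliance is voluntary, though: robots.txt is a convention rather than an enforcement mechanism, so the rule only keeps out crawlers that choose to honor it.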

Other outlets are in talks with AI companies to strike their own deals, too. News Corp CEO Robert Thomson, for example, said in a November earnings call that the company was in “advanced discussions” about licensing.

Some writer advocacy groups have pushed for this kind of licensing as an alternative to data scraping. The Authors Guild, for example, is currently agitating for collective licensing agreements to ensure that writers are paid when their work is used as training data for AI companies. The News Media Alliance, a trade association that represents over 2,000 newspapers and magazines in the US, praised Axel Springer’s deal with OpenAI. “These business arrangements are a good start in setting benchmarks for payment, demonstrating precedent of value,” CEO Danielle Coffey said in a statement.

Axel Springer characterized the partnership as a win for journalists, a way to introduce new audiences to their work and help the company prosper. “This benefits the journalists as well as the journalism of the brands involved in the partnership,” Axel Springer spokesperson Julia Sommerfeld says.

Does it, though? Mike Masnick, editor of the tech policy website Techdirt, has doubts. “It looks like a strategy that we’ll likely see repeated elsewhere, a ‘partnership’ that is effectively the AI companies convincing publishers not to sue them in exchange for some level of access to the technology,” he says. “That access might help the journalists very indirectly, but it’s not flowing into paychecks or realistically making their jobs any easier.”

Axel Springer declined to comment on specifics of the deal. “I can only reiterate our reasons for entering this partnership, which is that we see a paradigm shift in journalism: For the first time, there’s a revenue stream from an AI company to a media company for the use of recent content,” Sommerfeld says. “This is exactly what media companies failed to establish back in the day with Google or Facebook—and we’re still chasing those platforms for compensation.”

Bloomberg reported last week that OpenAI will pay Axel Springer tens of millions of euros, but it is entirely unclear whether individual journalists will see any of that money. When asked if reporters would benefit from any revenue-sharing or additional compensation as a result of the licensing arrangement, Axel Springer did not directly answer the question. “The deal is set to be structured in a way that does not infringe on any individual IP or copyright,” Sommerfeld said. So, as of now, it is unclear whether a writer whose work is incorporated into ChatGPT will receive a one-time payment, a recurring royalty-like payment, or no payment at all.

In the media industry, where a growing number of newsrooms (including WIRED’s) are unionized, labor leaders are keeping close tabs on these deals. “Even before the news of the Axel Springer deal broke, we were looking at how companies’ use of AI impacts us as a union—especially in workplaces with new contracts or where we are still bargaining for first contracts,” says Susan DeCarava, president of the NewsGuild of New York. “We are reviewing all our options on how to best protect the integrity of the work our members produce.” Even at outlets where many staffers are “work-for-hire”—meaning their employers own the work they produce on the job—there will still be opportunities to negotiate for revenue-sharing and other compensation agreements.

Whether those types of arrangements will be a net good is another matter of debate. Even if individual writers are compensated under licensing agreements, the actual dollar amount may be so minuscule that it wouldn’t meaningfully impact their finances. “We know from artist compensation programs on other platforms, such as Spotify, that most artists receive very little, and only a small proportion of the most popular creators may receive more substantive payments,” says Nick Diakopoulos, a computational journalism professor at Northwestern University.

Even if these deals do benefit news outlets, and journalists do get paid when their work is rolled into AI tools like ChatGPT, the longer-term impact of the rise of AI services may still imperil the media. If people who might ordinarily subscribe to an outlet like Business Insider, or at the very least click on a few links per month, instead read summaries of articles within ChatGPT, how will that affect the number of readers actually clicking on an article? If reading the news within AI tools becomes more popular, it could crater traffic-driven digital advertising revenue and generate a whole new media crisis.

And what about journalists who don’t want their work used by OpenAI? “It’s not just about the money,” copyright activist Neil Turkewitz says. “An individual writer who’s just adamantly opposed to AI training on their works should be able to say no.”

Axel Springer did not respond to questions about whether reporters would be able to opt out of the training agreement. There will likely be more opportunities to ask, though: if the company has its way, this deal will be just the first of many. “It’s fair to assume we’re in ongoing conversations with all large AI companies,” Sommerfeld says.
