A silent but significant cold war has emerged between leading news organizations, including Disney, and OpenAI, the creator of the groundbreaking AI chatbot ChatGPT.
While no shots have been fired, newsrooms are taking defensive actions to protect their content from potential exploitation by AI, as concerns grow about the impact on an already struggling news industry.
A recent investigation has uncovered that several major news outlets, including CNN, The New York Times, Reuters, Disney, Bloomberg, The Washington Post, and many others, have implemented code to block GPTBot, OpenAI’s web crawler, from scanning their platforms for content.
The move highlights the deep concern among news giants about the use of their content to train AI models like ChatGPT.
The intellectual property and extensive archives held by these news organizations play a pivotal role in training AI models to provide users with accurate information.
However, the concerns stem from the fear that AI technology might further destabilize the embattled news sector.
A news executive, speaking anonymously, said, “Traditional media publishers offer quality content, driven by facts. Most of the internet is garbage.”
Danielle Coffey, president and CEO of the News Media Alliance, representing nearly 2,000 US publishers, confirmed the industry’s alarm at the rapid advancement of AI technology.
Coffey acknowledged that newsrooms feel secure under copyright protections, but there’s an urgent need to address the potential misuse of their content by AI companies.
She explained, “An existential threat is looming; there isn’t a viable business model with certain uses of A.I.”
Though the industry is cautious, action against OpenAI has been limited. Media mogul Barry Diller and The New York Times are reportedly considering legal action, while the Associated Press opted for a licensing deal with OpenAI. The details of the agreement, however, remain undisclosed.
The implications are vast. If not resolved, the publishing industry could suffer irreparable damage, risking the already fragile information environment globally.
The integration of AI bots into search engines, apps, and smart devices could inadvertently push newsrooms out of business, even though these very outlets supply the information the AI systems depend on. The absence of authoritative sources could lead to a proliferation of misinformation.
Danielle Coffey remarked, “If quality content disappears, we’re headed for a bleak future.”
While the stakes are high, most news organizations are choosing to address the matter discreetly. Blocking GPTBot serves as a strong signal, conveying a simple message: “Talk to us.”
The standoff between newsrooms and OpenAI underscores the complex challenges AI advancements pose to traditional media and the urgency to find a balanced solution.