In an ambitious endeavor, Google is currently testing an AI-powered news writing tool, internally known as "Genesis". According to recent reports, Google has pitched the tool to prominent publications including The New York Times, The Washington Post, and News Corp, the owner of The Wall Street Journal.
This news writing tool uses AI to process information and produce written content. It aims to serve as a personal assistant for journalists by automating certain tasks and freeing up their time for more critical aspects of reporting. Google has characterized the technology as "responsible," but some executives who were pitched on the tool reportedly found the presentation "unsettling," expressing concern that AI-generated content might undermine the rigorous effort that goes into producing accurate and reliable news stories.
A Google spokesperson clarified the company's stance, stating that it is exploring ways to offer AI-enabled tools to assist journalists, especially smaller publishers. The goal is to give journalists options such as generating headlines or exploring different writing styles. Google intends to ensure that these emerging AI tools enhance journalists' work and productivity rather than replace their essential role in reporting, creating, and fact-checking articles.
This development comes at a time when several news organizations, including NPR, are contemplating how AI can responsibly be integrated into their newsrooms. While AI-generated content has been used by some news outlets for specific applications, such as reporting on corporate earnings, it represents only a fraction of their overall output; the vast majority of articles are still written by human journalists.
However, there is growing concern about the potential consequences of AI-generated news stories that lack thorough fact-checking and human editorial oversight. Instances like CNET's experiment earlier this year serve as cautionary tales. When CNET used generative AI to produce articles, the effort backfired, resulting in corrections to more than half of the AI-generated pieces: some contained factual errors, while others may have included plagiarized material. These experiences underscore the need for vigilant fact-checking and human involvement in the news writing process.
As Google's Genesis news writing tool continues to be refined and evaluated, it is essential for stakeholders to approach AI's integration into newsrooms cautiously. Ensuring that AI-generated content adheres to strict editorial standards and is consistently fact-checked remains paramount. Responsible usage of AI in journalism can potentially complement and enhance journalists' capabilities. However, it should not overshadow their indispensable role in delivering accurate and trustworthy news to the public. As the industry navigates the possibilities and challenges of AI, transparency and accountability must remain at the forefront to maintain the public's trust in journalism.