
AI in Newsrooms: A Call For Improved Innovation and Media Literacy


By Oluwaseye Ogunsanya

Artificial intelligence is evolving daily, with a full-blown influx into virtually every aspect of human endeavour. It spans the sub-fields of machine learning and deep learning, disciplines whose algorithms build expert systems that make predictions or classifications from input data. This development has propelled AI into mainstream discussion, with generative AI tools such as ChatGPT and Bard amassing over 100 million users in 2023 alone, even as experts warn that people who use them may inadvertently spread falsehoods.

With the aid of computer science and robust datasets, machines now seamlessly perform tasks typically associated with human intelligence, such as reading, writing, talking, creating, playing, analyzing, and making recommendations.

The newsroom is also experiencing this radical change, with journalists using AI to optimise their work despite concerns about bias and accuracy.

A survey reported by The Verge revealed that 90 percent of newsrooms already use some form of AI in news production, 80 percent in news distribution, and 75 percent in news gathering. News-gathering tasks include automated transcription and translation, extracting text from images, and scraping the web or using automated summarization tools. News production can include translating articles into other languages, proofreading, writing headlines, or writing full articles. Distribution includes AI-driven search engine optimization as well as tailoring content to a specific audience.
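To make one of these task categories concrete, here is a minimal Python sketch of automated summarization using the open-source Hugging Face transformers library. The model choice and sample text are illustrative assumptions, not tools named in the survey.

```python
# A minimal sketch of one news-production task the survey mentions:
# automated summarization with an off-the-shelf model. The model name
# and sample text below are illustrative, not from the survey.
from transformers import pipeline

# Load a general-purpose summarization model from the Hugging Face hub.
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

article_text = (
    "News organizations worldwide are increasingly integrating AI into "
    "news gathering, production, and distribution, from transcription "
    "and translation to headline writing and audience targeting."
)

# Generate a short abstract; length limits keep the output headline-sized.
summary = summarizer(article_text, max_length=40, min_length=10, do_sample=False)
print(summary[0]["summary_text"])
```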

In the same vein, DW Akademie reported on how media organizations use artificial intelligence to debunk fake news.

“News organizations worldwide are increasingly integrating AI into various facets of news production, spanning from generating topic ideas and translating content to editing and, in some instances, crafting articles. The utilization of AI for content creation, however, remains a contentious topic, with numerous outlets viewing it as a considerable risk to their reputation,” it stated.

The Daniel Smith Debacle 

In journalism and fact-checking, there is no doubt that AI has come to stay. It offers fact-checkers and journalists a significant asset, but it also challenges their profession, especially where disinformation exploits sophisticated AI features and capabilities to manufacture not only compelling narratives but entirely fabricated identities.

[Image: Daniel Smith’s photo used for interviews]

This concern was further heightened by Southeast Europe (SEE) Check, which cited an interview published in early 2023 by Danas, a Serbian newspaper, with Daniel Smith, a figure introduced as a “British international affairs and security expert” who spoke about Russian influence within Serbia and the broader Western Balkans.

The report narrated how Smith, who had previously shared his insights with Al Jazeera Balkans through two interviews and had found his way into the discourse within Albanian media circles, made a compelling case for the Serbian president, Aleksandar Vučić, to acknowledge the independence of Kosovo, framing it as an undeniable reality already in effect.

Owing to this sequence of media appearances, Smith was soon painted “as a figure of authority and expertise, convincing Danas of his legitimacy,” the report stated. There was, however, a problem with Smith’s interviews: as the report would later reveal, he “had consistently offered the same visage across all his interactions, presenting a singular image to represent his identity.”

The Danas article sowed doubt among readers, who questioned Smith’s authenticity. According to SEE Check, his interviews were conducted exclusively in writing, and when a journalist proposed a follow-up video interview amid rising suspicions, Smith vanished, along with his Twitter presence, which had served as the primary avenue for engaging with him.

The controversy then prompted the Serbian fact-checking outlet Fake News Tragač (FNT) to investigate Smith’s photograph. FNT’s investigative report revealed that the image “bore the hallmarks of being crafted by artificial intelligence (AI),” with specific reference to the website This Person Does Not Exist.

It also pointed to the uniform square shape of the photograph and the peculiar alignment of the eyes, traits that mirror those found in other fabricated personas from the same digital creation site.

Detailing the indicators of AI-generated imagery, FNT further noted that such photos often have abstract, blurred backgrounds and display errors in the integration of accessories like glasses, jewellery, and scarves.

FNT’s Analysis 

Putting it more explicitly, it said: “The photograph of Smith, for instance, showcased precisely centered eyes, a pair of glasses marred by an anomalous frame, and an odd blurring effect beneath the right eyepiece. Moreover, an unusual bulge was noticeable upon his neck, further arousing suspicion regarding the image’s authenticity.”
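To illustrate, here is a rough Python sketch of two of the heuristics FNT describes: the uniform square format and suspiciously fixed eye positions typical of StyleGAN-style generators. The file name, the 46 percent eye-height figure, and the tolerance are illustrative assumptions, not values from FNT’s report; real verification combines many more signals and human judgment.

```python
# A rough sketch of two loose indicators of AI-generated face images.
# Thresholds are illustrative guesses, not values from FNT's analysis.
import cv2

def check_ai_face_heuristics(path: str) -> dict:
    """Flag two loose indicators of StyleGAN-style AI face images."""
    img = cv2.imread(path)
    if img is None:
        raise FileNotFoundError(f"Could not read image: {path}")
    h, w = img.shape[:2]
    results = {"square_format": h == w}  # generated faces are often perfect squares

    # Detect eyes with OpenCV's bundled Haar cascade (dated but widely available).
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")
    eyes = eye_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    # Aligned StyleGAN faces place eye centers near a fixed height; the 0.46
    # figure and 5 percent tolerance here are assumptions for illustration.
    centered = [abs((y + eh / 2) / h - 0.46) < 0.05 for (x, y, ew, eh) in eyes]
    results["eyes_at_fixed_height"] = len(centered) > 0 and all(centered)
    return results

print(check_ai_face_heuristics("suspect_photo.jpg"))
```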

Danas, having discovered that Daniel Smith was a fabricated entity, later apologized to their readers and the public and admitted in their correction that they had foregone thorough verification, swayed by Al Jazeera’s prior interviews with Smith, which seemingly lent him credibility.

“We apologize to our readers and the public for misleading them, but we’ve learned the lesson that journalism in the internet age is quite challenging and that we must be much more cautious and verify even what seems to be authentic. Based on our own experience, we will, in collaboration with investigative and fact-checking platforms, pay more attention to the education of journalists and the audience to avoid such situations in the future,” Danas acknowledged.

Tough and Threatening Journey Ahead

While the Daniel Smith revelation poses a serious threat to fact-checking and journalism, it also offers fact-checkers and journalists a window of opportunity to brace themselves for what lies ahead as they combat disinformation spread through fabricated identities.

As SEE Check aptly puts it, it is “imperative for journalistic and fact-checking communities to adapt and evolve in response to the burgeoning threat of artificial narratives, ensuring the integrity of information in an increasingly complex digital landscape.”

According to Wired, there are now nearly 400 fact-checking initiatives in over 100 countries, with two-thirds of those within traditional news organizations, but growth has slowed, according to the Duke Reporters’ Lab’s latest fact-checking census. Quoting Mark Stencel, the Reporters’ Lab’s co-director, the report added that around 12 fact-checking groups shut down each year on average. While new launches of fact-checking organizations have slowed since 2020, the space is far from saturated, Stencel says, particularly in the US, where 29 out of 50 states still have no permanent fact-checking projects.

What’s more, there is a greater need than ever for fact-checkers to move beyond over-reliance on traditional verification methods. In the face of growing and increasingly sophisticated AI-driven disinformation, the need for advanced verification tools and methodologies arises.

“To date, fact-checkers are predominantly using the traditional methods, ensuring triangulation of data and approaching each case individually, supplemented with some tools specifically designed for AI content. Still, this approach has its limitations,” SEE Check states.

Fact-Checking Organizations’ Efforts

In fairness, fact-checking organizations around the world are responding to this challenge positively, regularly infusing AI into various aspects of their work and looking for ways to automate fact-checking.

The quest started in 2013, when Bill Adair, founder of the American fact-checking organization PolitiFact, first experimented with an instant verification tool called Squash at the Duke University Reporters’ Lab.

Squash is a system under development that fact-checks video of politicians as they speak, with the goal of displaying related fact checks on viewers’ screens within seconds.

Squash listens to what politicians say and transcribes their words into searchable text. It then compares that text to previously published fact checks to look for matches. Its utility was limited, however: it didn’t have access to a big enough library of fact-checked pieces to cross-reference claims against, and its transcriptions were full of errors that humans needed to double-check.

“Squash was an excellent first step that showed us the promise and challenges of live fact-checking,” Adair tells WIRED. “Now, we need to marry what we’ve done with new advances in AI and develop the next generation.”
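The matching step Squash performs can be approximated with off-the-shelf sentence embeddings. The sketch below is not Squash’s actual code, just one plausible way to compare a transcribed statement against a library of published fact checks; the model name, sample claims, and threshold are all assumptions.

```python
# A plausible (not Squash's actual) implementation of the matching step:
# embed a transcribed statement and previously published fact checks,
# then rank fact checks by cosine similarity.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative model choice

fact_check_library = [
    "Claim that the unemployment rate doubled last year: rated False.",
    "Claim that the city budget tripled since 2019: rated Misleading.",
]
transcribed_statement = "Unemployment has doubled in the past year."

# Encode once; a real system would precompute and index the library.
library_embeddings = model.encode(fact_check_library, convert_to_tensor=True)
statement_embedding = model.encode(transcribed_statement, convert_to_tensor=True)

scores = util.cos_sim(statement_embedding, library_embeddings)[0]
best = scores.argmax().item()
if scores[best] > 0.5:  # arbitrary threshold; tuning would be required
    print("Possible match:", fact_check_library[best], float(scores[best]))
```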

Another instance is Newtral’s multilingual AI language model, ClaimHunter, developed in 2020 and funded by the profits from its TV wing, which produces a show fact-checking politicians as well as documentaries for HBO and Netflix.

Using Google’s open-source BERT language model, ClaimHunter’s developers trained the system on 10,000 statements to recognize sentences that appear to include declarations of fact, such as data, numbers, or comparisons.

ClaimHunter automatically detects political claims made on Twitter, while another application transcribes video and audio coverage of politicians into text. Both identify and highlight statements that contain a claim relevant to public life that can be proved or disproved—as in, statements that aren’t ambiguous, questions, or opinions—and flag them to Newtral’s fact-checkers for review.
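ClaimHunter itself is not public, but the general pattern it describes, a BERT-style classifier that flags checkable factual claims and filters out opinions and questions, can be sketched as below. The model ID is a placeholder and the label names are assumptions; a real model defines its own.

```python
# A sketch of the general claim-detection pattern ClaimHunter describes:
# a fine-tuned BERT-style classifier that flags sentences which look like
# checkable factual claims. The model ID below is a placeholder, not real.
from transformers import pipeline

claim_detector = pipeline(
    "text-classification",
    model="your-org/bert-claim-detection",  # hypothetical fine-tuned model
)

sentences = [
    "Inflation fell to 3.1 percent in December.",   # checkable claim
    "I believe our policies are the most humane.",  # opinion
    "Will the minister resign?",                    # question
]

for sentence in sentences:
    result = claim_detector(sentence)[0]
    # Assumed labels: "CLAIM" vs "NOT_CLAIM"; flagged items would go to
    # human fact-checkers for review, as in Newtral's workflow.
    print(f"{result['label']:>10} ({result['score']:.2f})  {sentence}")
```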

According to Newtral’s chief technology officer, Rubén Míguez, the system isn’t perfect and occasionally flags opinions as facts, but its mistakes help users to continually retrain the algorithm. It has cut the time it takes to identify statements worth checking by 70 to 80 percent.

Newtral is also working with the London School of Economics and the broadcaster ABC Australia to develop a claim “matching” tool that identifies repeated false statements made by politicians, saving fact checkers time by recycling existing clarifications and articles debunking the claims.

Similarly, Full Fact, a UK fact-checking charity founded in 2009, offers several fact-checking tools, including some automated with artificial intelligence, having won the Google.org AI Impact Challenge in May 2019 alongside Africa Check, Chequeado and the Open Data Institute.

With the support of Google, the organisation is using machine learning to improve and scale fact-checking by working with international experts to define how artificial intelligence could transform the work, develop new tools and deploy and evaluate them.

It is also building AI tools to help fact-checkers identify the most important and check-worthy information of the day, and it aims to design an algorithm that can identify when somebody repeats something they know to be false.

It is also worth noting that in the run-up to the 2023 general elections in Nigeria, Full Fact offered its artificial intelligence suite, consisting of three tools that work in unison to automate lengthy fact-checking processes, to greatly expand fact-checking capacity in the country.

The three tools from Full Fact, covering search, alerts and live functions, work in real time to detect claims, alert fact-checkers when false claims are repeated, and instantly transcribe television or radio interviews, cross-referencing what is said with existing fact checks.
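The alerting idea, flagging near-repeats of already debunked claims, can be sketched with nothing but Python’s standard library. This toy version uses fuzzy string matching; Full Fact’s production tools are far more sophisticated, and the claims and threshold here are invented for illustration.

```python
# A toy sketch of the "alerts" idea: fuzzy-match incoming statements
# against claims already rated false and flag near-repeats. The example
# claims and the threshold are invented for illustration.
from difflib import SequenceMatcher

known_false_claims = [
    "the election results were uploaded to a foreign server",
    "the new vaccine contains microchips",
]

def alert_on_repeat(statement: str, threshold: float = 0.8) -> None:
    for claim in known_false_claims:
        ratio = SequenceMatcher(None, statement.lower(), claim).ratio()
        if ratio >= threshold:  # arbitrary illustrative cutoff
            print(f"ALERT: near-repeat of debunked claim ({ratio:.2f}): {claim}")

alert_on_repeat("The election results were uploaded to foreign servers")
```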

Equally, FactCheckAfrica is developing innovative tools to further enhance the work of fact-checkers and help the general public build the habit of fact-checking. One such tool is its award-winning AI-powered chatbot, MyAIFactchecker.

MyAIFactchecker harnesses a synergy of artificial intelligence and reputable news sources. Combining Google’s fact-checking API with the GPT-4 model, it also offers a clean user interface that supports French, Swahili and Nigerian local languages to break language barriers, as well as a voice option for fact-checking.
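One half of that combination, Google’s Fact Check Tools API, is publicly documented and can be queried as sketched below. The API key and query are placeholders, and this is a generic illustration of the claims:search endpoint, not MyAIFactchecker’s actual code; the GPT-4 step is left out.

```python
# A minimal sketch of querying Google's Fact Check Tools API
# (the claims:search endpoint) for existing fact checks of a statement.
# The API key and query string are placeholders.
import requests

API_KEY = "YOUR_GOOGLE_API_KEY"  # placeholder
URL = "https://factchecktools.googleapis.com/v1alpha1/claims:search"

params = {"query": "Nigeria 2023 election results", "key": API_KEY}
response = requests.get(URL, params=params, timeout=10)
response.raise_for_status()

# Each returned claim carries one or more published reviews with a rating.
for claim in response.json().get("claims", []):
    for review in claim.get("claimReview", []):
        print(review.get("publisher", {}).get("name"),
              "-", review.get("textualRating"),
              "-", claim.get("text"))
```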

These and many more are some of the tools developed by news organizations and fact-checking initiatives alike to tackle AI-driven disinformation by bad actors.

To-Do List

While these efforts are commendable, fully automated fact-checking is “very far off,” according to Michael Schlichtkrull, a postdoctoral research associate in automated fact verification at the University of Cambridge. Even so, fact-checkers and researchers say there is a real urgency to the search for tools that scale up and speed up their work, as generative AI increases the volume of misinformation online by automating the production of falsehoods.

Fact-checking organizations must incorporate media literacy into their work by educating the public about the various forms of AI-generated misinformation and equipping them with the skills to spot fake content before it can cause harm, thereby actively halting its spread.

To maintain the integrity and transparency of the digital information ecosystem, a comprehensive strategy that spans ethical considerations and legal frameworks is also necessary.
