3 Big Concerns About Using AI in the Newsroom

In the rapidly evolving landscape of journalism, the integration of artificial intelligence (AI) has brought forth a multitude of both promise and peril. As newsrooms strive to keep up with the demand for timely and personalized content, AI technologies have emerged as powerful tools, capable of automating tasks, enhancing data analysis, and even generating news articles. However, this expanding role of AI in journalism raises critical questions about its potential harms and benefits. While some hail it as a revolutionary force that can revolutionize news production and consumption, others express concerns about bias, ethics, and the erosion of human journalistic values.

In case you couldn't tell, the entire paragraph above was written by the AI platform ChatGPT. As it turns out, spotting AI-generated content is not a skill most people have. Yet.

While we were using the platform to generate the first paragraph of this post, a warning beneath the input field read: “ChatGPT may produce inaccurate information about people, places, or facts.” Tools like ChatGPT, which are built on large language models (LLMs), are growing in popularity in the workplace, including in journalism. The dichotomy of benefits and harms is stark when it comes to AI in journalism: the technology can be an effective aid for problems like writer's block, but the core of journalism ethics is truth in reporting, and these models have been found to spout misinformation. Many also worry about AI replacing journalism jobs.

Below, we break down a few of the biggest potential issues with AI to demonstrate that human involvement remains essential to the journalistic process.

Data Leaks

A major concern about AI is that LLM providers store users' conversations, which may be used to train future versions of the model. While this helps the platforms improve and take on more expansive uses, companies can suffer greatly if employees use LLMs improperly.

According to The Verge, companies such as iHeartMedia, Verizon, Samsung, and Apple have blocked ChatGPT on corporate computers for fear that employees could leak intellectual property or confidential information to competitors by entering company data into the LLM.

In journalism, this could mean exposing sensitive sources or letting information circulate before it's published. If a press release is embargoed because it contains private information or market-moving financial news, a leak could be incredibly damaging for journalists and the institutions they represent.

Hallucinations

OpenAI is currently being sued by a Georgia resident over false and defamatory information ChatGPT produced about a legal case. Asked to summarize the case, the LLM fabricated details, stating that a man who was not involved in the case at all was being sued for embezzlement.

As noted above, ChatGPT states explicitly on its website: “ChatGPT may produce inaccurate information about people, places, or facts.” For journalism, this is perhaps the strongest argument against relying on AI as a primary tool for writing articles. Fabricated information not only violates ethical journalism standards; it can also create a multitude of legal liabilities.

Biases

According to MIT researchers, LLMs can exhibit bias, and in extreme cases they have produced racist and sexist output.

LLMs mimic the patterns in the human interactions and datasets they are trained on. As human beings, we are all susceptible to personal bias, but an LLM does not distinguish between fact and bias; it treats everything it ingests as fact and regurgitates it. Efforts are underway to develop “ethical AI,” but for now these models cannot be relied on to be fully impartial in the data or language they use.

If a journalist uses an LLM as a primary writing tool, unwanted biases may seep into the language, requiring careful review or even a rewrite to avoid discriminatory framing.

Takeaway

Ultimately, while AI can be a helpful tool, its place in journalism is not as a replacement for humans. At some point, a person still has to verify the information the model provides, carefully vet sources, and avoid interactions that could leak sensitive information. AI is exciting for the multitude of options and resources it offers, but like all things, it should be used in moderation.

Subscribe to Beyond Bylines to get media trends, journalist interviews, blogger profiles, and more sent right to your inbox.

Caroline Kouneski, Cision PR Newswire

Caroline is an outgoing Content Editor for PR Newswire from Baltimore, MD. Caroline is a Randolph-Macon College graduate, with a bachelor's degree in Political Science, Communication Studies, and Religious Studies. She enjoys game shows, sushi, and visiting the city in her free time.
