Artificial Intelligence: Raising New Ethics Questions in Media and Journalism

The last decade has seen a dramatic increase in our daily use of artificial intelligence.

Nowadays, AI systems are being used in leading newsrooms around the world.

I recently detailed AI’s current roles in journalism, and the new possibilities they present.

With these new possibilities come questions about the ethical use of AI in newsrooms and about what factors should guide how it is applied to today's news.

Here, we'll examine the emerging ethics of AI use in journalism as media professionals confront challenges the modern newsroom has never faced before.

The need for speed

Journalism today is a speed game. The frequency and breadth of news in 2018 require journalists to break news only minutes after an event takes place.

This new reality makes AI-assisted reporting an attractive prospect to overworked journalists who must write compelling copy in little time.

But our current news cycle suffers from an emphasis on speed over accuracy – a problem likely to be exacerbated by increased AI use.

If journalists aren't careful when considering the ethical implications of using AI algorithms to augment their reporting, then unforeseen biases hidden in the AI's programming could jeopardize the story's objectivity.

AI systems are far from advanced enough to form their own opinions, so any bias would be a pre-existing disposition built into the system by its human creators. And that bias could be entirely unintentional.
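To see how that can happen, consider this toy sketch (hypothetical data and a made-up scoring script, not drawn from any real newsroom tool): a simple story-ranking system trained only on past editorial picks will faithfully reproduce whatever skew those picks contained, even though no one programmed a preference on purpose.

```python
from collections import Counter

# Hypothetical training data: headlines a past editor chose to promote or skip.
# Note that the historical choices skew heavily toward tech stories.
training_examples = [
    ("tech layoffs hit major firms", "promote"),
    ("tech giant unveils new phone", "promote"),
    ("tech stocks rally on earnings", "promote"),
    ("city council debates school budget", "skip"),
    ("local clinic expands free care", "skip"),
]

# A trivial "model": count how often each leading topic word was promoted.
promoted_topics = Counter(
    headline.split()[0] for headline, label in training_examples if label == "promote"
)

def score(headline: str) -> int:
    """Score a new headline by how often its topic was promoted in the past."""
    return promoted_topics.get(headline.split()[0], 0)

# The system reproduces the historical skew without any explicit instruction:
# tech stories outrank local civic stories.
print(score("tech startup raises funding"))     # 3
print(score("city library faces funding cut"))  # 0
```

The same dynamic plays out at far larger scale in real machine-learning systems, which is why the source of the training data matters so much.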

Last month, concerns over AI prompted the UK government to publish a paper on ethical use of AI. The paper addresses issues that arise when existing prejudices are built into a new system.

The paper argues that AI systems should be painstakingly designed with input from the most diverse group of people possible to provide a safeguard against unintended prejudice.

Data collection: A hot-button issue

Transparency is a central precept in western media.

The media are expected to be transparent about how stories are reported and written, and adding AI to the mix can make that transparency tricky.

If a newsroom draws a conclusion about a story using an AI process that involves proprietary information, that newsroom is responsible for navigating the line between transparency and protecting the proprietary information.

An AI system is only as good as the data it uses to learn, and the processes involved in gathering that data have become a hot-button issue.

As newsrooms become more dependent on AI, we must consider how data collection can affect a system’s ability to perform in the way that’s expected — and how to ethically secure that data.

Facebook has recently come under scrutiny for its data-collection practices, pushing ethical questions about data collection into the public eye.

In an effort to unify data-collection practices across all companies operating in the European Union, the European Parliament and Council have agreed upon the General Data Protection Regulation (GDPR), which will go into effect later this month. This wide-reaching legislation will govern the way American tech companies do business in the EU.

The reality of getting left behind

AI is expensive.

This ultimately limits which news organizations can benefit from the technology: Larger newsrooms can afford these tools; smaller, local newsrooms may not be able to.

AI tech in newsrooms will widen the gap between have and have-not news organizations. If AI-assisted reporting advances to the point where it becomes necessary to stay competitive, smaller news organizations could fail when they otherwise might have survived.

One solution could be for smaller news organizations to outsource their AI needs to third-party tech firms specializing in newsroom systems.

But allowing AI to endanger local news could have dire consequences, and it is already a concern for many organizations connected to the news industry.

This, of course, comes on top of the fact that many communities already face losing their source of local news as traditional print journalism declines.

AI is already prevalent in most forms of media and will become ubiquitous as we move further into the 21st century. As we welcome AI into our lives, we should keep questioning the ways it changes our relationship with the news.



Julian Dossett is a Cision editor and black coffee enthusiast. He’s based in New Mexico.
