The Rise of Newsroom Smart Machines: Optimizing Workflow with Artificial Intelligence
As computer algorithms become more advanced, artificial intelligence (AI) has grown increasingly prominent in the workplace.
Top news organizations now use AI for a variety of newsroom tasks.
Americans have grown up on decades of science fiction stories in which intelligent robots turn on their human creators, and in recent years, prominent technologists have spoken out against AI threats (see Elon Musk’s recent warnings).
But current AI systems are still largely dependent on humans to function correctly, and the most pressing concern is learning to operate these systems properly as they proliferate across media-related industries.
New Challenges in Machine Learning
Artificial intelligence refers to the study of making machines “smart.” Today, that work has largely taken the form of machine learning (ML) systems, the type of AI most commonly used in the workplace.
ML systems continue to show promising results in their ability to assist with increasingly complex tasks, but their current capabilities are strictly limited.
The main issue is determining the scope of an ML system’s task. ML systems are suited only to well-defined roles and will still require significant human direction for the immediate future.
So, while ML systems will soon become ubiquitous in many professions, they won’t replace the professionals working in those fields for some time; rather, they will become advanced tools that aid in decision making.
This is not to say that AI will never endanger human jobs. Automation will always find a way.
As the technology improves, ML systems will undoubtedly replace rather than supplement many human professionals, a concern that has been creeping up in the minds of media professionals in recent years. But the ML systems currently used in journalism and media are far from advanced enough to replace human writers.
Can a #drone drop a robot into a war zone to report? Doesn’t take a meal break- but can’t empathise with a story! Relationship between man and machine- our future co workers #FMLSummit @AlJazeera pic.twitter.com/Bs2NLWcoNp
— Morwen Williams (@morwenw) March 5, 2018
Working with AI Reporters
The commercialization of AI has produced better, more affordable ML systems that can be adapted to a variety of news-related functions.
Newsrooms can use these systems to optimize their workflows and increase their output.
The Associated Press has been at the forefront of AI adoption in newsrooms and began using AI to automate reporting on sports and corporate earnings in 2013. Using machine learning algorithms, AP dramatically expanded its quarterly earnings reports to cover virtually every company in the stock market – a feat that was difficult, if not impossible, before machine learning systems became available.
In the summer of 2016, the Washington Post debuted Heliograf during the Rio Olympic Games. Heliograf is an ML system the Post developed to analyze medal counts and game scores in real time. Using raw Olympic statistics as the input, Heliograf generated news stories in a matter of seconds, allowing the Post to provide up-to-the-minute coverage with a fraction of the manpower previously required.
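To make that pipeline concrete, here is a minimal sketch of template-based story generation, the general technique behind systems like Heliograf. The function name, template, and medal figures are assumptions for illustration, not the Post’s actual code.

```python
# A toy template-based story generator: structured data in,
# readable sentence out. (Illustrative only; not Heliograf itself.)
def medal_story(country: str, gold: int, silver: int, bronze: int) -> str:
    """Fill a sentence template from structured medal-count data."""
    total = gold + silver + bronze
    return (
        f"{country} has won {total} medals in Rio so far: "
        f"{gold} gold, {silver} silver and {bronze} bronze."
    )

# Structured input, standing in for a live feed of Olympic statistics.
print(medal_story("United States", 12, 9, 10))
```

A production system layers many such templates with rules for variety and accuracy, but the core idea is the same: the machine fills in the blanks, and it does so in milliseconds.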
Facebook has also shown interest in AI. The tech giant began investing in AI research in 2013 – and today, it uses its own ML systems to fight the spread of fake news. With powerful algorithms that can analyze thousands of news stories per second, Facebook uses ML to build complex profiles of fake stories, hoping to flag and ban misleading content with increased speed and accuracy.
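For a rough sense of how that kind of flagging works under the hood, here is a minimal text-classification sketch using the open-source scikit-learn library. The toy headlines and labels are assumptions for illustration; Facebook’s actual models are far larger and proprietary.

```python
# A toy misleading-headline classifier: TF-IDF features plus
# logistic regression, a standard baseline for text classification.
# (Illustrative only; not Facebook's actual system.)
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled training headlines: 1 = misleading, 0 = legitimate.
headlines = [
    "Miracle pill cures every disease overnight",
    "City council approves new transit budget",
    "Scientists confirm the moon is made of cheese",
    "Local school wins state robotics championship",
]
labels = [1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(headlines, labels)

# Score a new headline; a real system would route high scores to review.
score = model.predict_proba(["Doctors hate this one weird miracle trick"])[0][1]
print(f"Probability the headline is misleading: {score:.2f}")
```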
Just this month, Reuters debuted Lynx Insights, its new automation tool. With Lynx Insights, Reuters has adapted ML to the concept of cybernetic reporting — human writers augmenting their reporting using smart machines. By combining humans’ aptitude for natural writing with smart machines’ ability to crunch giant datasets with amazing speed, Reuters hopes to bring a modern edge to its newsroom.
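The “insights” half of a tool like Lynx Insights can be imagined as simple anomaly detection over a dataset: flag what looks unusual and hand it to a reporter. The column names, figures, and 20 percent threshold below are assumptions for illustration, not Reuters’ actual logic.

```python
# A toy lead-surfacing pass: flag companies whose revenue swung
# sharply between quarters so a human reporter can investigate.
import pandas as pd

# Hypothetical quarterly revenue figures, in millions of dollars.
df = pd.DataFrame({
    "company": ["Acme", "Globex", "Initech", "Umbrella"],
    "q1_revenue": [100, 250, 80, 400],
    "q2_revenue": [104, 160, 82, 415],
})

# Quarter-over-quarter change; swings beyond 20 percent become leads.
df["change"] = (df["q2_revenue"] - df["q1_revenue"]) / df["q1_revenue"]
leads = df[df["change"].abs() > 0.20]

for _, row in leads.iterrows():
    print(f"Possible lead: {row['company']} revenue moved {row['change']:.0%}")
```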
These organizations all use ML systems in different ways, but the tools those systems use to accomplish their tasks are very similar. The two most important ML processes used in journalism today are Natural Language Processing (NLP) and Natural Language Generation (NLG).
Essentially, NLP and NLG are the tools that ML systems use to read and write.
For ML systems to learn, they need data, and a lot of it. NLP allows ML systems to use written, natural language as the data input. Considering the complexity behind human language and how we use it, this is no small feat.
ML systems analyze written language using predetermined algorithms, which tell the system what the input is for, how to interpret it, and how it connects to the desired output. In AI journalism, that output is produced through NLG.
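For a sense of what that input stage looks like in practice, here is a minimal sketch using spaCy, a widely used open-source NLP library (chosen here for illustration; the newsrooms above do not disclose which toolkits they use).

```python
# A minimal NLP pass: turn a raw sentence into structured pieces an
# algorithm can work with -- named entities, amounts, and dates.
# Requires: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Acme Corp. reported quarterly earnings of $2.3 billion on Tuesday.")

for ent in doc.ents:
    print(ent.text, ent.label_)  # e.g. "Acme Corp." ORG, "$2.3 billion" MONEY
```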
The internal language of computers is incomprehensible to humans, so NLG allows ML systems to write in language that we can understand. NLG is how ML systems communicate the conclusions they draw from the data they receive.
For ML systems used in journalism and media, that NLG output is automated news stories, or insights that journalists can use to enrich their reporting.
Automation Marches On
The key to successfully operating an ML system in a newsroom is finding the correct correlation between the input and the output.
ML systems are prone to the same biases as their human creators, and media professionals must be able to question their results to ensure accuracy and transparency within their news organizations. That’s why understanding how ML systems function is critical.
Last week, the Associated Press released a report on how ML systems should be used in news organizations, emphasizing the importance of understanding those systems’ limitations.
ML systems can only understand what they are programmed to understand, and complex social language constructs such as irony are still beyond their grasp.
Leaders in world news are already taking steps to address the ethical dilemmas presented by AI reporting. During this month’s Future of Media Leaders’ Summit, representatives of Al Jazeera and the BBC spoke about the near-term ethical implications of AI, such as considering which judgement calls, traditionally handled by human journalists, could be responsibly automated – and how to alert audiences to which stories are AI-written, so readers understand how AI affects their news.
The more accurate the data, the better the results, but no algorithm will ever be 100 percent accurate. Reality is infinitely more complex than the model created by an ML system – there are always more variables than can be completely measured.
Artificial intelligence is not inherently dangerous, but using ML systems without a fundamental understanding of how they work could be disastrous, as they are still largely dependent on humans to function correctly. And even when they evolve to function independently, we will need to understand them to ensure that their goals and functions align with our own.
Julian Dossett is a Cision editor and black coffee enthusiast, based in New Mexico.