In newsrooms across the globe, AI technology has become an essential part of the story-creation process, commonly used to speed up news production.
While AI technology can be a useful aid in creating stories, it lacks the awareness and judgment to avoid bias or to catch incorrect information as it writes.
According to a survey published by JournalismAI, 90 per cent of newsrooms use AI in news production, 80 per cent in news distribution, and 75 per cent in news gathering.
That means a large majority of newsrooms rely on AI for some of their most important tasks. These are all tasks that must be done with care and consideration, since one wrong move could lead to misinterpretation of important information or unintentional bias on serious topics.
Humans, of course, can distinguish right from wrong, and journalists know what a story needs to be truthful, fair, and informative. AI, by contrast, is simply programmed to insert whatever information matches a preset category, without ever thinking twice.
A telling example of AI producing flawed stories is CNET’s experiment with the technology. The goal was for AI to take the wheel: given only the information it needed, it would produce stories from the ground up, all on its own.
According to a CNN report from January 2023, the technology had been used quietly for about two months, since November 2022. While the idea may have seemed excellent at first, editors and readers eventually found multiple errors in the AI-generated stories, including incorrect information and uncredited sources.
CNET editor-in-chief Connie Guglielmo said the company used its own internally designed AI engine, not ChatGPT, which many assumed was behind the stories since it is well known as a quick and easy tool for producing stories and essays.
Even though CNET confirmed the system was its own and the effort was only a test project, it still harmed the outlet’s credibility as a reliable news source, since many of the stories readers found contained false facts or failed to credit other sources.
However, this does not mean newsrooms should disregard AI technology entirely. It can bring real benefits to story creation.
It can help with data searching and fact-checking, both essential tasks that can take a human journalist considerable time to complete.
CNET sees this potential as well. According to its AI policy, the company plans to continue testing AI technology to see how it can benefit newsrooms with important tasks such as organizing large amounts of information and generating images and videos, with editors heavily moderating that work before publishing so the process for creating news stories stays fast and reliable.
AI technology can still serve as an extra tool for human journalists, but it should never fully take over a newsroom. There is too much at stake.