Use of Artificial Intelligence (AI) in Journalism
Artificial Intelligence (AI) can have a role in ethical, responsible and truthful journalism, serving as a tool to assist your newsroom's work. However, it should not replace human judgment and critical thinking, which are essential elements of trusted reporting.
If news organizations are going to use AI, RTDNA recommends they have a clear policy for how AI is to be used in newsgathering, editing and distributing content across platforms. AI intersects with core journalism principles like accuracy, context, trust, and transparency. Carefully weigh all of these issues before integrating AI into your news organization. Because this is an emerging and fast-changing area, newsrooms and RTDNA should expect to review these guidelines regularly.
Below is an outline of critical areas of focus and questions you should consider when drafting a policy:
Accuracy, Context and Clarity
AI programs can modify every element of content: audio, video, still pictures, and words. In many cases, AI programs may enhance your media. Without thoughtful guidelines, however, AI output may lack proper context, misstate facts, or confuse the end user.
The following questions should help guide your decision-making around accuracy, context, and clarity:
- What do your newsroom/station/parent company guidelines say about AI use surrounding accuracy, context, and clarity? Are they up to date?
- Can you fully understand the capabilities and source material for the AI program before implementation? Also consider:
  - What are your safeguards to protect against inadvertent plagiarism?
  - Can you independently verify the AI tool’s accuracy?
  - Are there opportunities to test the AI tool prior to publication?
- How have you taken ownership of the disclosure language presented to the consumer?
- What is your newsroom system and set of expectations for human review before publication?
Transparency and Disclosure
In establishing policies around the use of artificial intelligence in newsrooms, consider the importance of transparency to the trust you build with your audience.
In general, disclosing how you use artificial intelligence is preferable to non-disclosure.
The following questions should help guide your decision-making around transparency and AI use:
- Does the benefit to the public of your use of AI outweigh the risk to trust in news if you do not disclose its use?
- How does your audience feel about the use of this technology?
- What falls under the definition of AI? Examples: content generation vs. content distribution/organization vs. video or text editing vs. grammar/spelling tools.
- Where will you provide disclosure about your use of AI? Examples: website, social media, on-air.
- Are journalists able to review any and all AI-influenced content before it reaches the consumer? If not, can you justify its use to the audience?
Journalists have long-standing practices to weigh the public’s right to know against an individual’s right to privacy. It is unlikely AI can properly consider these issues in the same way. It is important to ensure that AI is programmed to operate within ethical and legal boundaries and that its use does not violate privacy or other fundamental rights.