Big media's AI experiments: Lessons for every creator

Leading news organizations, including the Associated Press, the BBC, the Washington Post, and the Financial Times, are strategically adopting AI for tasks like data analysis and content testing while maintaining strict human oversight and editorial standards as they navigate the promises and pitfalls of generative AI in journalism.

It’s no exaggeration to say that we are living through a new industrial revolution. Generative AI’s rapid transition from sci-fi dream to everyday staple is already having a profound effect on countless aspects of human life, and nowhere is this more apparent than in the newsroom. Over the past month, we’ve explored how some of the world’s most respected news organizations are navigating the promises and pitfalls of AI. From automating routine tasks to chronicling AI’s complex impact on the civic sphere, these stories offer a nuanced look at journalism in the age of AI.

Lessons from the AP and BBC

The necessity of human oversight in automated workflows

The Associated Press treats AI output strictly as “unvetted source material” and bans its use on sensitive topics like crime, finance, and health. Every draft must be flagged in their content management system, then reviewed and approved by a human editor before publication. Their approach reminds us that humans always need to be in the driver’s seat with AI.

Creator takeaway: Don’t treat AI as some infallible oracle. Always check the information it gives you. Use it as a launchpad instead of a final product.

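If you’re building AI into your own publishing workflow, that principle can be baked in as a simple gate: anything AI touched doesn’t go out until a human signs off. Here’s a minimal Python sketch of the idea; the Draft class and publish function are hypothetical illustrations, not the AP’s actual content management system.

```python
# A minimal sketch of a human-review gate for AI-assisted drafts.
# All names here (Draft, publish) are hypothetical, not the AP's real CMS.

from dataclasses import dataclass


@dataclass
class Draft:
    title: str
    body: str
    ai_assisted: bool = False      # flagged whenever AI contributed to the text
    editor_approved: bool = False  # flipped only by a human editor's review


def publish(draft: Draft) -> None:
    """Refuse to publish AI-assisted copy that no human has signed off on."""
    if draft.ai_assisted and not draft.editor_approved:
        raise PermissionError("AI-assisted draft needs editor approval before publishing.")
    print(f"Published: {draft.title}")


# Usage: the AI-flagged draft is blocked until an editor approves it.
story = Draft(title="Local election results", body="...", ai_assisted=True)
try:
    publish(story)                 # raises: not yet approved
except PermissionError as err:
    print(err)

story.editor_approved = True       # a human editor reviews and signs off
publish(story)                     # now allowed
```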

Bridging the comprehension deficit in AI implementation

The BBC’s internal research found that most of its journalists engaged with AI as users of pre-made tools because they found the technology’s inner workings hard to grasp, creating a disconnect between technical capability and practical understanding. This illustrates the importance of looking beyond the hype and the hope to build a genuine understanding of what AI can and can’t do.

Creator takeaway: Take time to learn about the strengths (and weaknesses) of the tools you use and keep your expectations realistic.


Insights from the Washington Post’s Haystacker and Bandito

Computational analysis as a journalistic discovery method

Haystacker lets journalists sift through large amounts of data and surface newsworthy trends or patterns, while Bandito lets editors experiment with different headlines, blurbs, and photos, effectively automating A/B testing. Together, they show how AI excels at work that needs to happen at scale.

Creator takeaway: Look for the repetitive or data-heavy parts of your workflow; those are often the places where AI can streamline things the most.
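
To make the A/B-testing idea concrete, here’s a toy Python sketch of the kind of logic an automated headline test can use, based on a simple epsilon-greedy strategy: mostly show the best-performing headline so far, but keep trying the alternatives occasionally. Every name and number in it is made up for illustration; it is not the Post’s actual Haystacker or Bandito code.

```python
# Toy epsilon-greedy headline test: mostly serve the best performer,
# but keep exploring the alternatives with a small share of traffic.
# Illustrative only -- not the Washington Post's actual implementation.

import random

headlines = ["Council passes budget", "What the new budget means for you"]
shown = {h: 0 for h in headlines}    # times each headline was displayed
clicks = {h: 0 for h in headlines}   # clicks each headline received
EPSILON = 0.1                        # 10% of traffic keeps exploring


def click_rate(h: str) -> float:
    return clicks[h] / shown[h] if shown[h] else 0.0


def pick_headline() -> str:
    if random.random() < EPSILON:
        return random.choice(headlines)        # explore
    return max(headlines, key=click_rate)      # exploit the current best


def record_impression(headline: str, clicked: bool) -> None:
    shown[headline] += 1
    if clicked:
        clicks[headline] += 1


# Simulated traffic: pretend the second headline genuinely performs better.
for _ in range(1000):
    h = pick_headline()
    record_impression(h, clicked=random.random() < (0.08 if h == headlines[0] else 0.12))

for h in headlines:
    print(f"{h!r}: shown {shown[h]} times, click rate {click_rate(h):.1%}")
```

The same measure-and-compare pattern extends to blurbs, thumbnails, or anything else you can track.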

Warnings from Politico’s “Bots and Ballots”

Information authenticity in the era of synthetic media

Politico’s “Bots and Ballots” series demonstrated the fragility of truth in an age of easy fakes. Democracy relies on informed voters making rational decisions about the policies they support, but it’s hard for people to make rational decisions when they’re in the midst of a fog of mis- and disinformation.  

Creator takeaway: We all have a role to play if democracy is to endure in the age of easy fraud. Treat everything you see online with a degree of skepticism, and think carefully about what you decide to share and amplify.


Takeaways from the Financial Times’ Ask FT Prototype

Mitigating hallucination risk in retrieval systems

Ask FT runs on 140+ years of FT editorial content in an attempt to address two shortcomings of AI: information reliability and copyright compliance. Even though their tool was limited to a select pool of high-quality content, its output could still be highly flawed.

Creator takeaway: AI can help your audience connect with your back catalog. However, setting up a chatbot requires a degree of tech-savviness, so it’s not for everyone, and it isn’t something you can set up once and walk away from. Because AI can hallucinate, you’ll need to keep monitoring and tuning your chatbot to reduce that risk.
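
For the curious, the general pattern behind a tool like Ask FT is retrieval-augmented generation: pull the most relevant pieces from your archive, then instruct the model to answer only from those pieces. The Python sketch below shows that pattern in miniature, with a deliberately naive keyword retriever and a placeholder generate_answer function standing in for whatever model you’d actually call; it’s an illustration of the concept, not how Ask FT itself is built.

```python
# A minimal sketch of grounding a chatbot in your own back catalog:
# retrieve the most relevant archive pieces, then answer ONLY from them.
# The retrieval here is naive keyword overlap, and generate_answer() is a
# placeholder for a real model call.

archive = [
    {"title": "2019 guide to budgeting", "text": "Start by tracking every expense for a month..."},
    {"title": "Interview: a decade of indie publishing", "text": "We talked about pricing, audiences, and burnout..."},
]


def retrieve(question: str, top_k: int = 2) -> list[dict]:
    """Rank archive entries by crude keyword overlap with the question."""
    words = set(question.lower().split())
    ranked = sorted(archive, key=lambda doc: -len(words & set(doc["text"].lower().split())))
    return ranked[:top_k]


def generate_answer(question: str, sources: list[dict]) -> str:
    """Placeholder for a model call; the key is telling it to stay inside the sources."""
    context = "\n\n".join(f"{d['title']}: {d['text']}" for d in sources)
    prompt = (
        "Answer using ONLY the sources below. "
        "If they do not contain the answer, say you don't know.\n\n"
        f"Sources:\n{context}\n\nQuestion: {question}"
    )
    return f"[model response to a {len(prompt)}-character prompt]"  # stand-in output


question = "How should I start budgeting?"
sources = retrieve(question)
print(generate_answer(question, sources))
print("Sources cited:", [d["title"] for d in sources])
```

Even with the restriction to your own sources, the model can still hallucinate, which is why this isn’t a set-and-forget setup.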

Conclusion

For better or worse, AI is here to stay. With all the hype and hope surrounding it, it can be tempting to view it through a simplistic lens where it’s either a glowing force for good or a shadowy force of ruin. The stories we’ve told in this series reveal a far more complicated reality. AI can help and hurt in a multitude of different ways. The technology that is making it harder to differentiate truth from fiction is also helping us cope with personal tragedies. 

The series also reminds us that AI is not a zero-sum game. Choosing to use AI doesn’t mean the Washington Post, the Associated Press, and the Financial Times have completely turned their newsrooms over to the robots. Rather, they’re focusing on finding the niches where AI can do the most good. If there’s one overarching moral to these stories, it’s that we need to approach AI clear-eyed, with a proper understanding of its strengths and weaknesses, in order to use it to best advantage.
