Dispatch

AI can aid journalism — but only with strict oversight, says Mila scientific director

November 27, 2025

AI may offer cash-strapped newsrooms “an infinite amount of interns,” but those digital interns — like real ones — will need close supervision and rigorous fact-checking, a leading Canadian AI researcher told attendees on Parliament Hill.

Hugo Larochelle, scientific director at Mila, Canada’s largest AI research institute, gave the keynote at AI & The Press: Threats and Opportunities, hosted by World Press Freedom Canada on Nov. 6.

Journalists should think of AI like an intern, Larochelle said — one that is not yet trustworthy without training, guidance and plenty of fact-checking.

WPFC President Heather Bakken opened the morning session with a warning: “Artificial intelligence is reshaping journalism and is doing so at a pace that demands immediate attention so we can decide how and where it should be used.”

Canada has long been a leader in AI research and development. Ottawa is preparing a new AI strategy; a task force appointed by AI Minister Evan Solomon is due to report in November.

Senator Andrew Cardozo, who hosted the session, said Canada must assert “digital sovereignty,” so that AI does not fall under the control of massive U.S. tech firms that have a stranglehold over social media, data storage and other internet functions. In an era of trade wars and America First bellicosity, Cardozo said such dominance is particularly concerning.

“We must get back that leadership, both in terms of developing AI, but also developing the regulations that we need to save Canada,” he told the symposium.

Larochelle’s keynote focused on the use of AI in the media business, calling it a tool that supports a range of cognitive tasks — from summarizing long documents to quickly accessing information from diverse data sources.

AI, he said, is still prone to “hallucinating” — producing false information. It struggles to judge the reliability of online sources and is unable to detect irony. Larochelle pointed to a case where an AI system mistook satire from The Onion for real news.

Since AI must be closely supervised, Larochelle said, its effect on newsroom jobs will be more about augmentation than replacement.

Larochelle highlighted two major AI threats to an already fragile media business: the “zero-click future” and the “liar’s dividend.”

AI-generated summaries on platforms like Google threaten media revenues. Until now, Google searches delivered links — often to news sites — that produced valuable ad impressions and potential subscriptions.

Now Google shows AI-generated summaries, often built from creators’ content without consent. No clicks mean no revenue for the publisher.

“Gathering facts, presenting these facts, doing investigations — that’s very costly,” Larochelle said. “If all the money goes to what is somewhat a simpler task, which is summarizing that information, that is obviously a legitimate concern.”

As AI-driven fakes circulate widely on social media and beyond, they threaten to corrode trust in the press and in democratic life itself, especially when political actors use false and malicious content to target their opponents.

Warning about deepfakes also opens the door to the “liar’s dividend.”

Politicians and other high-profile individuals can simply wave away evidence of wrongdoing by calling it an AI fake.

Larochelle said society needs broad AI literacy to manage its risks and share its benefits.

