
Public interest must drive Canada’s approach to AI


October 30, 2025


Regulatory oversight is needed to mitigate risk and ensure benefits are widely shared

World Press Freedom Canada submission to 30-Day National Sprint Consultation on Artificial Intelligence (AI) Strategy

Introduction: AI brings both threats and opportunities

Artificial intelligence is reshaping our world at a pace that demands both caution and speed in deciding how it should be used and governed.

The deployment of AI brings both threats and opportunities to the gathering and dissemination of the fact-based news that is essential to our democracy.

World Press Freedom Canada (WPFC) is a non-profit, volunteer-led organization that defends the right of journalists to report without fear or interference.

In pursuit of those aims, we publish a bimonthly newsletter on media freedom and host advocacy events such as our annual Press Freedom Day awards luncheon that draws a crowd of parliamentarians, diplomats and journalists.

On November 6, we will host a symposium titled “AI & the Press: Threats and Opportunities” on Parliament Hill, featuring speakers from the technology world and news media. Out of our sessions, we will produce a report with policy prescriptions designed to ensure AI works for the benefit of Canadian journalism.

As WPFC committee members, we have witnessed how the erosion of business models for traditional newsrooms has undermined the ability of journalists to fulfill their essential role as information providers and watchdogs.

More recently, the spread of misinformation and disinformation has polluted the media ecosystem and eroded trust in all media platforms.

WPFC welcomes the opportunity to submit our views to the federal government’s AI Strategy Task Force. Our perspective on AI is driven by our firm belief that Canada must have a robust, independent news media to inform our society, and safeguard our economy and democracy.

Balanced approach to AI development and guardrails needed

We encourage the task force to recommend the government pursue a balanced approach that promotes AI development while erecting guardrails against abuse. 

In a recent podcast featured on The Globe and Mail website, AI pioneer Geoffrey Hinton warned that the technology poses an existential threat to humanity unless it is developed with humanistic principles at its core. Only government can mitigate those risks.

We recognize the federal government must ensure the country remains at the forefront of AI technology and its deployment. 

We need to break the stranglehold that a handful of foreign — mostly American — mega-corporations maintain on the digital lives of Canadians, especially on social media platforms. AI deployment cannot be allowed to follow that same trajectory of concentrated ownership that serves the interests of foreign tech billionaires.

We warn, however, against pursuing sovereignty aims by boosting domestic providers while giving short shrift to the interests of consumers and content creators, including journalists and news media companies.

Protection of Canadians’ privacy and their intellectual property must be a hallmark for any federal approach to AI. Large language model systems that scrape websites and essentially steal content are anathema to the maintenance of a robust news ecosystem.

AI-powered platforms are offering summaries and synthetic content that divert traffic from publishers. They can turbocharge efforts to spread disinformation and pollute public discourse. AI companies should be held accountable for the accuracy of their content and provide full disclosure about sourcing.

While there are risks associated with AI, the technology will also provide enormous benefits that must be shared widely among the Canadian population to avoid further widening digital divides. 

Most Canadians are in a state of constant news deprivation and 2.5 million people in this country have almost no local news, according to a recent report by the Canadian Centre for Policy Alternatives.

Can AI help fill these gaps? Possibly. Large language models can synthesize and organize vast amounts of data in support of journalism. 

There’s a glimmer of hope with a rise in independent digital outlets and a growing appetite for community-driven journalism that can be AI-assisted.

But the audience must come first. Media outlets that "touch grass" by building relationships in the community and publishing news that serves the local public interest could restore trust in journalism at the grassroots level.

Ethical guidelines must guide every deployment of AI in newsrooms. 

Recommendation: Government should incorporate News Media Alliance’s Global Principles on Artificial Intelligence into its strategy

WPFC therefore endorses the international News Media Alliance's Global Principles on Artificial Intelligence and urges the task force and the government to incorporate these principles into the national AI strategy.

Those principles include:

  • Developers, operators and deployers of AI systems must respect intellectual property rights of content creators. 
  • Publishers should receive adequate remuneration for the use of copyrighted content.
  • Existing markets for licensing creators’ and rightsholders’ content should be recognized. 
  • Providers and deployers of AI systems should ensure accountability for their outputs. AI systems pose risks for competition and public trust in the quality and accuracy of content. 
  • Quality and integrity are fundamental to establishing trust in the application of AI tools and services. These values should be at the heart of the AI lifecycle, from the design and building of algorithms, to the inputs used to train AI tools and services, to those used in the practical application of AI.
  • AI systems should be trustworthy. AI systems and models should be designed to promote trusted and reliable sources of information produced according to the same professional standards that apply to publishers and media companies. AI developers and deployers must use best efforts to ensure AI-generated content is accurate, correct and complete. 
  • AI systems should be safe and address privacy risks. The collection and use of personal data in AI system design, training, and use should be lawful with full disclosure to users that is easy to understand. 

Conclusion: Public interest must drive Canada’s approach to AI

We understand many tech firms will resist regulatory oversight, and we understand our American trading partners will fight any effort to constrain their dominance. 

However, these core principles cannot be left to voluntary compliance. 

As our prime minister noted in his book Value(s), markets typically do not embrace the non-monetary values of society. It is the role of government to ensure technological innovation does not harm essential institutions and instead benefits the broadest reaches of the Canadian population.

The public interest must drive Canada’s approach to adopting and implementing AI. Regulatory oversight is needed to mitigate risks and ensure the benefits of AI are widely shared.

