EDITORIAL: Belling the cat called AI

Apr 11, 2024 | Editorial, OP-ED

On April 7, Prime Minister Justin Trudeau announced $2.4 billion in federal spending to develop Canada’s artificial intelligence (AI) infrastructure.

“These investments in Budget 2024 will help harness the full potential of AI so Canadians, and especially young Canadians, can get good-paying jobs while raising our productivity, and growing our economy,” Trudeau said.

With rapid transformations in the AI sector, this announcement could pave the way for a legal framework for AI development and regulation.

Trudeau’s office, on its official website, broke down how the funding package will be used. One item flagged the “risks of advanced or nefarious AI systems.”

To this end, $50 million is set aside to establish the Canadian AI Safety Institute, which will support the “safe development and deployment of AI.”

That amounts to a small fraction of the $2.4 billion, but it’s a start, even if the proportion seems modest for now.

With the Artificial Intelligence and Data Act (AIDA) tabled in 2022, Canada acknowledged the uncertainty regarding AI regulation. The problem with AIDA, at the time, was that it provided little clarity as to the scope of its regulatory measures.

At the World Governments Summit held in Dubai in February, OpenAI CEO Sam Altman said he was concerned about the harm that could flow from the “subtle societal misalignments” AI potentially presents.

Last month saw two major developments in the domain of AI regulation.

Beijing hosted the second International Dialogue on AI Safety (IDAIS-Beijing) where AI scientists and governance experts convened to discuss a roadmap for international AI safety.

The European Parliament approved the Artificial Intelligence Act, the regulatory framework for AI first proposed by the European Commission in 2021.

With the funding boost, and with other governments around the world cognizant of AI regulation, it is time for Canada to enact a legal framework of its own.

In this post-digital age, where human-like interfaces and machine learning are merging into a seamless reality, the importance of safe AI cannot be ignored.

AI already shapes a major portion of our daily functioning. Unlocking our phones with facial recognition, asking Siri to play our favourite music and talking to chatbots standing in for customer care executives are only a handful of examples of how machine learning has entered our lives.

More specific to journalism, AI is being used for text and voice translation, transcription and turning text into audio.

Across the globe, broadcasters are introducing AI news presenters. Zae-In for South Korean broadcaster SBS, Sana for Indian broadcaster Aaj Tak and Fedha for Kuwait News are some examples of AI news presenters.

While AI offers problem-solving, cost-cutting and time-saving advantages, it should also make us tread with caution. Are we giving it more power than we can monitor?

Even at this nascent stage, are we enabling it to take the very jobs we seek its assistance with?

Is it a helpful ally, or will it bite the hand that codes it?

As far as governments are concerned, are countries playing Frankenstein by giving machines so much control?

On the other hand, if we don’t stay ahead of the curve, someone else will, leaving us at a disadvantage.

AI’s scope cannot be neglected, but a regulatory legal framework is needed to keep its use in check. It remains to be seen whether Canada will step up and revise AIDA to tame the ever-changing AI landscape.