CCI: Officer needed to steer responsible AI strategy

The Council of Canadian Innovators (CCI), an advocacy group focused on helping high-growth Canadian tech companies scale globally, sees more than $200 billion in value for “AI opportunities” domestically, with a projected jump in valuation to more than $2 trillion by 2030.

But for any Canadian business steeped in artificial intelligence to reach its full market potential, the CCI advocates, there needs to be a “strong marketplace framework,” as well as greater access to talent, capital and customers.

These are just a few of the challenges the CCI has outlined in its new report, A Roadmap for Responsible AI Leadership in Canada. At the heart of the matter is the fact that almost three quarters of all intellectual property rights (IPRs) created through Canada’s current federal AI Strategy are actually owned by foreign (read: American) corporations, from Uber to Google.

The answer to bringing the benefits of a thriving AI ecosystem home to Canada is setting up an environment that allows domestic companies to scale up into “global leaders,” as the CCI sees it. This calls for the creation of “a policy and regulatory framework that encourages the rapid growth of domestic companies” as well as a responsible AI framework that features “high trust, clear rules, fast action and global leadership.”

A new officer to enforce AIDA

The CCI recognizes that creating a larger Canadian ecosystem that produces global AI leadership is not just a matter of creating responsive and scalable market frameworks for businesses.

According to the report, the public needs to be brought into the fold, with language that “convey[s] trust and certainty for the public.” This involves baking a clear statement of user and citizen rights into any AI strategy, offering assurance (and data protection) when automated decision systems are deployed.

The group’s report also calls for the creation of a Parliamentary Technology and Science Officer, which would act as an independent government advisor specifically addressing issues related to the Artificial Intelligence and Data Act (AIDA).

Building on Canada’s Artificial Intelligence and Data Act

AIDA was first introduced in June 2022 as part of the Canadian government’s Bill C-27, also known as the Digital Charter Implementation Act, 2022.

Bill C-27 also introduced language supporting the Consumer Privacy Protection Act (CPPA) and the Personal Information and Data Protection Tribunal Act (PIDPTA), which similarly aim to offer a responsible framework for developing and commercializing new consumer tech. AIDA, however, would be the first piece of legislation in Canada to regulate the development and deployment of AI systems in the private sector.

The legislation outlines the purpose of AIDA as follows:

  1. To regulate international and interprovincial trade and commerce in AI systems by establishing common requirements applicable across Canada for the design, development and use of those systems; and
  2. To prohibit certain conduct in relation to AI systems that may result in serious harm to individuals or harm to their interests.

In addition, “harm” is defined in AIDA as (a) physical or psychological harm to an individual, (b) damage to an individual’s property, or (c) economic loss to an individual.

To that end, AIDA will apply to persons carrying out a “regulated activity,” which the legislation defines as:

  • processing or making available for use any data relating to human activities for the purpose of designing, developing or using an artificial intelligence system;
  • designing, developing or making available for use an artificial intelligence system or managing its operations.

The CCI’s report calls for the regulatory development and implementation of these new responsible AI standards to be accelerated (i.e., completed within 12 months of Royal Assent) so that Canadian businesses don’t lose pace with the rapid global development of new technologies such as large language models.

Responsible AI takes a global village

Any business leader looking to explore AI without risking legal violations should tread carefully. While there aren’t yet many rules specifically regulating the use of AI, a great deal of attention is being paid to creating exactly those kinds of guidelines.

Startups in particular have a lot to gain if they can responsibly leverage AI to increase automation, scale operations and ultimately speed up innovation. This is especially true when it comes to driving R&D, where teams that are just getting off the ground may be able to use AI for quality control or even in place of human practitioners.

To learn more about business growth strategies and how teams can take advantage of non-dilutive R&D funding options, book a call with Boast today. 

Talk to an expert from Boast AI today to learn more about how we combine cutting edge technology with years of expertise—and a founder’s POV—to optimize your R&D and fund your innovation.
