AI Regulation: What are the rules for using AI to drive innovation?

The concept of Artificial Intelligence (AI) has graduated from science fiction to novel application to business buzzword in rapid succession over the past few years.

Once the subject of dystopian blockbusters, AI made waves in consumer culture last year when DALL-E 2 and other image-generating applications allowed virtually anyone to create realistic (and at times problematic) graphics using short keyword prompts.

Since then, the conversation around AI and machine learning (ML) has only grown louder, with the introduction of ChatGPT expanding the footprint of potential applications to encompass the written word and even vocalization—and, in the process, putting creatives of all stripes on edge about the potential implications. 

As AI applications in the larger cultural conversation have moved from novelties to more practical, tactical uses, concerns have been raised by artists, politicians, and business leaders alike.

AI concerns: From jobs to security

On the one hand, there’s the chorus of voices who oppose AI for a classic reason that automation in general gets a bad reputation: it takes jobs away from humans who need them. While there are many claims to be made for or against this stance, there is little existing evidence to support it, to say nothing of the many jobs expected to come online as a direct result of greater AI adoption.

Upon closer examination, however, the primary concerns about AI relate to data privacy and security. For any AI application to be effective in production, it needs to ingest a significant amount of relevant data. The “open secret” of many of the most controversial AI applications has been that they source their data from across the internet, often with little or no consideration of usage permissions (to say nothing of data attribution). 

This has made AI use and adoption tricky to regulate. Beyond struggling to keep pace with the latest technological advancements related to AI, legislators in the United States and Canada are also years behind their global peers in establishing general data protection rules.

EU pitches first Artificial Intelligence Act

While the European Union led the world by establishing the effective and enforceable General Data Protection Regulation (GDPR) in 2016, there is still no equivalent rule of law in the U.S. or Canada (though it’s worth noting that any entity doing business with EU citizens must adhere to GDPR standards by default).

The EU also appears to be lapping the US and Canada when it comes to proposing AI-specific data regulation, having officially pitched the Artificial Intelligence Act (AIA) in April 2021. As with GDPR, the EU’s AIA has acted as a forcing function (and source of inspiration) for other governments to map out how they would regulate AI without hindering potential innovation.

As such, lawmakers in North America have introduced proposals on both sides of the border that they hope will be adopted as part of future legislation to ensure safe and responsible use of AI going forward. 

Canada’s Artificial Intelligence and Data Act (AIDA)

In June 2022, to bring a new focus on data protection alongside regulation for AI adoption, the Canadian government released Bill C-27, also known as the Digital Charter Implementation Act, 2022.

In addition to the Consumer Privacy Protection Act (CPPA) and the Personal Information and Data Protection Tribunal Act (PIDPTA), Bill C-27 introduces the Artificial Intelligence and Data Act (AIDA), which would be the first piece of legislation in Canada to regulate the development and deployment of AI systems in the private sector.

In many ways, Bill C-27 covers similar ground to the Blueprint for an AI Bill of Rights released in the US in October 2022 (more on that later), calling for a wealth of consumer protections as well as establishing “rights” for the government to directly audit or intervene in any AI systems in production. Unlike the US Blueprint, however, which largely amounts to a wishlist from the White House, the AIDA is part of a legislative package (Bill C-27) that is already on a path to becoming law.

The legislation outlines the purpose of AIDA as follows:

  1. To regulate international and interprovincial trade and commerce in AI systems by establishing common requirements applicable across Canada for the design, development and use of those systems; and
  2. To prohibit certain conduct in relation to AI systems that may result in serious harm to individuals or harm to their interests.

In addition, “harm” is defined in the AIDA as (a) physical or psychological harm to an individual, (b) damage to an individual’s property, or (c) economic loss to an individual.

To that end, AIDA will apply to persons carrying out a “regulated activity,” which the legislation defines as:

  • processing or making available for use any data relating to human activities for the purpose of designing, developing or using an artificial intelligence system;
  • designing, developing or making available for use an artificial intelligence system or managing its operations.

What these “persons” will be responsible for doing is a bit less clear, as the language in AIDA is geared toward mitigating the risks of harm and bias presented by the use of “high-impact” AI systems. The act stops short of defining what “high-impact” means, however, and the term will require further definition as AIDA inches closer to law.

Broadly, however, individuals managing high-impact AI must “establish measures to identify, assess and mitigate the risks of harm or biased output that could result from [the AI system’s] use.” These individuals also must implement ways to “monitor compliance with and effectiveness of risk mitigation measures.”

The act also calls for greater transparency about AI, specifically as it relates to consumer data. Where an AI system is made available for use, for instance, the person responsible must publish (on a publicly available website) a plain-language description of the system, explaining:

  • how the system is to be used 
  • the types of content that it is intended to generate
  • the types of decisions, recommendations or predictions it is intended to make
  • the risk mitigation measures in place. 

Once passed, AIDA will also call for the appointment of a Minister who will have broad enforcement powers. These include ordering organizations using high-impact AI to:

  • Produce records
  • Complete an audit, or engage an independent auditor to conduct an audit
  • Implement any measure specified in an audit report
  • For a high-impact system, cease using the system or making it available if the system gives rise to a serious risk of imminent harm
  • Publish on a publicly available website certain information about an audit, as long as it does not disclose confidential business information

The U.S. pitches a Blueprint for an AI Bill of Rights

While Canada’s proposed legislation is opaque on certain definitions (i.e., what specifically qualifies as “high-impact” AI), its specific language around penalties and enforcement, along with its pairing with broader data protection legislation, demonstrates a much more actionable plan than anything pitched so far in the United States.

Still, the US’s new AI framework marries themes from previous, non-US legislation around data privacy, all through a lens of social justice and equity that many experts argue hasn’t been prioritized to date.

The Blueprint is broken out into five pillars that any organization developing or using AI should adhere to:

  • Safe and Effective Systems: Citizens shouldn’t be exposed to untested or poorly qualified AI systems that could have unsafe outcomes, whether to individuals personally, to specific communities, or to the operations leveraging individual data.
  • Algorithmic Discrimination Protections: Simply put, AI models shouldn’t be designed in ways that encode bias, nor should systems be deployed that haven’t been vetted for potential discrimination.
  • Data Privacy: Organizations mustn’t engage in abusive data practices, nor should the use of surveillance technologies go unchecked.
  • Notice and Explanation: Individuals should always be informed when (and how) their data is being used and how it will affect outcomes.
  • Human Alternatives, Consideration, and Fallback: Individuals should not only have the authority to opt out of data collection, but there should be a human practitioner they can turn to when concerns arise.

No one government can safeguard data and promote innovation

The biggest takeaway for business leaders looking to understand how they can explore AI without risking legal violations is to tread extremely carefully. While there aren’t many rules in place specifically regulating the use of AI, a lot of attention is being paid to creating exactly those kinds of guidelines.

Startups in particular have a lot to gain if they can responsibly leverage AI to increase automation, scale operations, and ultimately speed up innovation. This is especially true when it comes to driving R&D, as teams that are just getting off the ground may be able to leverage AI for quality control or even to supplant human practitioners.

Understanding how to characterize the use of AI in the context of R&D requires expertise, specifically when it comes to outlining which activities could qualify for the tax credits or government grants that founders can use to extend their runway.

To learn more about business growth strategies and how teams can take advantage of non-dilutive R&D funding options, book a call with Boast today. 
