White House Executive Order for AI, explained

White House Executive Order for AI, explained

/ /

The Biden Administration dropped an executive order this week that promises to be “the strongest set of actions any government in the world has ever taken on AI safety, security, and trust,” according to a statement from White House deputy chief of staff Bruce Reed.

While the mandate is broad, the crux of the federal government’s involvement is to put safeguards around the technology before ‘bad actors’ leverage it at the risk of national security.

The measures also come along with data security protections for consumers that have long been missing at the federal level. While legislation like the European Union’s Global Data Protection Regulation (GDPR) was enacted almost a decade ago to protect personal data against encroaching digital technologies, the United States has lacked any formal equivalent legislation, despite offering Blueprints and pitches for a path forward. 

The move comes at a time where the market for artificial intelligence has never been more valuable, with generative AI—that is, responsive tools like ChatGPT—alone expected to reach a $1.3 trillion valuation by 2032, while the broader market for artificial intelligence is already valued at more than $150 billion today. 

The big question for founders is: How will this executive order impact their ability to grow a startup in the AI space?

We’ll unpack the mandate in more detail, but the executive order lays out 8 key pillars that will dictate their AI strategy going forward, including:

  • Creating new AI safety and security standards for AI
  • Protecting consumer privacy
  • Advancing AI equity and civil rights
  • Providing consumer protections
  • Supporting the AI workforce
  • Promoting AI innovation and competition
  • Working with international partners to implement AI standards
  • Developing guidance for federal agencies’ use and procurement of AI

AI companies must both test for safety and practice transparency

Before any AI driven technologies get exposed to consumers, developers must perform safety tests (known as “red teaming”) to uncover any potential user threats. In these tests, teams must literally attempt to break the AI model to uncover potential vulnerabilities. The results must be shared with the federal government, who then has the power to enforce modifications or halt operations entirely.

This mandate actually expands on the nearly 75-year-old Defense Production Act, which gives the White House broad administrative oversight for industries that are considered linked to national security, according to the executive order. 

A new set of industry standards—with some still TBD

The National Institute of Standards and Technology (NIST) is the government body tasked with managing the rollout of the executive order, with a few key provisions already outlined in the mandate—and many more expected to roll out soon. 

First is the aforementioned call for businesses to share the results of their red teaming with the NIST before products are launched. What hasn’t been determined, however, is a requirement for businesses to follow a set testing standard or adhere to a designated method, making much of the provision reliant on voluntary compliance.

What has been determined by the mandate, however, is the need for better signifiers of technologies and products that are powered by AI. This includes the use of watermarks to inform consumers when images have been generated using artificial intelligence, for instance, in an attempt to limit so-called ‘imposter content’ and deep fakes from proliferation. 

Restrictions around biotechnology—but not much enforcement, yet

Another pillar of the executive order is to put safeguards around the use of biological materials when developing new AI products. While the details here are light, they hinge on the development of new biological synthesis screening techniques in a private-public partnership between biotechnology companies and federal agencies. 

In this instance (as with many of the other features of the announcement) enforcement doesn’t hinge so much on formal penalties but instead come as a condition of government-backed funding. So while biotechnology companies may not face legal action for ignoring the new mandates (yet, at least), not complying can result in a loss of government funding.

This is a similar tactic being pursued to enforce other parts of the measure: With government funding playing such a key role in helping new technologies take flight, there’s little incentive to not follow the new guidelines. 

Still, so much of the document promises further details and guidance, especially around much-needed consumer data protections. While the document goes long in outlining the importance of new data privacy rules, it puts the onus on Congress “to pass bipartisan data privacy legislation to protect all Americans.”

Targeting big tech, but impacting everyone

At the heart of these rules is the need for federal leadership to wrangle some control over a private AI sector that’s growing at a breakneck pace. This is largely being driven by major investments from American Big Tech companies like Google, Microsoft and Alphabet, who—along with emerging tech giants like Open AI and chipmaker NVIDIA—have been able to explore new models and techniques largely unchecked. 

Because many of these companies have access to a wealth of consumer data (and data fuels AI), it only makes sense that the government would want to put protections in place to both ensure consumer safety and limit unfair competition.

But Big Tech is hardly the only frontier for new AI. As the Washington Post recently reported “every startup is now an AI company” as developers across industries leverage machine learning techniques to advance their innovation. 

AI for Startups: Tread carefully!

The biggest takeaway for any business leaders looking to understand how they can explore AI without risking legal violations is to tread extremely carefully. While there aren’t many rules in place specifically regulating the use of AI, there is a lot of attention being paid to creating exactly those kinds of guidelines. 

Startups specifically have a lot to gain if they are able to responsibly leverage AI to increase automation, scale operations and ultimately speed up innovation. This is especially true when it comes to driving R&D, as teams may be able to leverage AI for quality control or to even supplant human practitioners for startups that are just getting off the ground.

Understanding how to characterize the use of AI in the context of R&D requires expertise—specifically when it comes to outlining what activities could qualify for tax credits or government grants that founders can reap to extend their runway.

To learn more about leveraging cutting-edge technologies to create industry-changing solutions, download our ebook, How to use AI for R&D.

About: White House Executive Order for AI, explained

Understand more about business growth strategies and how teams can take advantage of non-dilutive R&D funding options by booking a call with Boast today

Boast

Boast Logo