The current debate around artificial intelligence (AI) is riddled with contradictory stances. While some politicians characterize AI and machine learning (ML) tools as job killers, industry groups tout automation as an avenue to drive innovation and support an entirely new kind of labor market.
What makes this larger debate so unique is that even stakeholders who support the adoption of AI don’t always share an opinion on the best ways to bring new ML tools into the mainstream.
This is all playing out currently in a series of United States Senate hearings where the technologists behind the highest-profile AI use cases are not just pitching their stance on AI’s implications, but actually explaining how AI works to lawmakers.
This comes as adoption of tools like OpenAI’s ChatGPT outpaces lawmakers’ ability to understand the immediate and long-term implications of AI’s use. What’s missing from all of this—both in the US and really around the globe—is any kind of government regulation that effectively puts guardrails around massively popular (but still relatively nascent) generative AI technologies.
Technologists advocate for AI regulation
OpenAI CEO Sam Altman has emerged as one of the strongest proponents of government regulation for technologies like his own ChatGPT.
“As technology advances, we understand that people are anxious about how it can change the way we live,” Altman said in the hearing. “We are too, but we believe that we can and must work together to identify and manage the potential downsides so that we can all enjoy the tremendous upsides.”
Altman was joined at the May 16 hearing by IBM VP and chief privacy and trust officer Christina Montgomery and New York University professor emeritus Gary Marcus, and it was there that he outlined his own three-part solution to AI governance.
First, Altman is calling for the creation of an entirely new government agency with the power to grant and revoke licenses for the use of large AI models. Second, Altman wants a set of safety standards that would govern that agency’s compliance and licensing decisions, building on the currently unenforceable Blueprint for an AI Bill of Rights the White House put forward in late 2022.
Finally, Altman thinks there should be independent audits—by independent experts—to ensure license-holding AI practitioners aren’t overstepping in their use of AI models or breaking key tenets of the AI blueprint. These experts would watch, for instance, for algorithms that can self-replicate or “exfiltrate in the wild,” which would essentially indicate that an AI model is acting independently and may be out of the control of human practitioners.
Even among AI evangelists, approaches to regulation differ
One place Altman’s proposal differed from those of the other experts taking part in the hearing was on demands for greater transparency into the data that AI models are trained on.
NYU’s Marcus, for instance, has called not just for greater transparency but for any transparency at all into how solutions like ChatGPT, and graphical generative AI tools like Midjourney or DALL-E 2, rely on copyrighted works to fuel their models without appropriate permissions.
Like Altman, however, Marcus advocates for the creation of an agency tasked specifically with regulating AI, referencing the model created at the Food and Drug Administration (FDA) for ensuring food safety before products go to market.
IBM’s Montgomery, on the other hand, is against creating a new agency, arguing that standing one up would simply take too long. She makes the case that we’re already seeing AI adoption on a massive scale across industries, and existing regulatory bodies are simply not keeping up.
A hunger for AI to drive innovation
Despite the current lack of guidance on how to deploy AI for business, leaders from SMBs to the enterprise are still turning to AI and ML tools to drive the creation of new products and services.
More than two-thirds of executives polled by Gartner say the benefits of implementing generative AI outweigh the potential risks, for instance. Similarly, industry leaders across the globe recently told the World Economic Forum (WEF) that AI’s impact will actually be positive, as new jobs in big data analytics, cybersecurity and business operations will spring up as a result.
Startups are even using generative AI tools to help drive their research and development, introducing automation to help speed up productization and derive new use cases. In fact, the WEF cited AI as a means to accelerate R&D in emergent industries like cleantech, which both the US and Canadian governments have targeted as a priority for investment.
If you’re working to deliver new solutions in cleantech or any emerging sector, it’s likely that you might qualify for government funding that can help you save money (and fuel your runway) while you drive innovation. To learn more about how your startup may qualify for innovation funding, schedule a call with our team today.