
AI expert in Congress warns against rush to regulation: ‘We’re not there yet’

One AI expert in Congress is warning against the rush to regulate artificial intelligence, and says Congress should know why it is regulating AI before setting up a bureaucracy.

The only member of Congress with an advanced degree in artificial intelligence says lawmakers should move slowly to impose new regulations on AI, in part because policymakers and even experts in the field have yet to lay out clear regulatory objectives.

Rep. Jay Obernolte, R-Calif., says this deliberate approach is a good thing, despite pressure from high-profile tech leaders to halt AI development until its dangers are better understood. In an interview with Fox News Digital, Obernolte said it makes no sense to start regulating until Congress knows precisely what dangers it’s trying to avoid.

"Before we can create a regulatory framework around AI, we have to very explicit about what our goals are with our regulation," Obernolte said. "In other words, what kind of bad behavior and bad outcomes are we trying to prevent? What are we afraid might happen?"

ELON MUSK'S WARNINGS ABOUT AI RESEARCH FOLLOWED MONTHS-LONG BATTLE AGAINST ‘WOKE’ AI

Obernolte likely has the most informed take in Congress on the challenges AI poses to society – he is vice chair of the House AI Caucus, holds a master's degree in AI from UCLA and owns a video game development studio. Obernolte warns there is a massive gulf between the dire predictions about AI made last week by tech luminaries such as Tesla CEO Elon Musk and Apple co-founder Steve Wozniak, and what lawmakers in Congress should be doing about it.

The tech letter from last week warned about advanced AI that would automate away "all the jobs," "outsmart" its human masters and lead to "loss of control of our civilization," and said AI developers must show these risks are "manageable." In the months leading up to the letter, Musk had been warning that AI is being used to program systems like ChatGPT to avoid controversial answers, which he said marks the dangerous development of "woke" AI.

But Obernolte said the broad warnings in the tech letter aren’t specific enough to be actionable.

"If you read the letter, you get the definite sense that we’re not there yet," Obernolte said about its lack of specificity. "Even these deep thinkers don’t have a clear understanding of exactly what they fear about AI."

ELON MUSK, APPLE CO-FOUNDER, OTHER TECH EXPERTS CALL FOR PAUSE ON ‘GIANT AI EXPERIMENTS’: ‘DANGEROUS RACE’

He argued that this lack of specific purpose behind regulation is a big reason why he worries about the European Union’s approach. The EU Parliament is considering a bill that would require AI developers to apply for a permit with regulators, but Obernolte said repeating that move in the U.S. would only pass the buck from lawmakers to regulators who still don’t have a clear sense of purpose.

"If we take that step before we really understand why we’re regulating, I fear we haven’t accomplished anything," he said. "We’re just kind of shifting responsibility from the legislative branch to the executive branch at that point. What makes us think that a bureaucracy could do a better job in thinking about these issues given how… theoretical and poorly defined they are?"

Obernolte said none of this means Congress is doing nothing. He said Republicans and Democrats are working together on a federal data privacy bill that could start moving in the House by this summer.

That bill is aimed at telling people what data of theirs is being collected and giving them some control over how that data is collected and used. Obernolte said the bill would go a long way toward addressing a major AI concern: the ability of AI systems to pierce data privacy walls.

AI EXPERTS WEIGH DANGERS, BENEFITS OF CHATGPT ON HUMANS, JOBS AND INFORMATION: ‘DYSTOPIAN WORLD’

"AI has this uncanny ability to pierce through digital data privacy in ways that enable malicious actors like, for example, China, to create surveillance states that are truly Orwellian," he said. "They can use AI to create loyalty scores that predict future loyalty to the government. That’s something we certainly would not want to have happen here."

But Obernolte downplayed the possibility of meeting the tech giants’ call for a "pause" in new AI development until its dangers are more fully understood. "I don’t think that’s realistic," he said.

"First of all, you’ve got the rule followers and you’ve got the people who don’t follow the rules. Those unscrupulous actors are still going to develop AI," he said. As others in Congress have admitted already, countries like China would be unlikely to delay AI development along with the U.S., even if the U.S. could somehow enforce a moratorium.

"Our authority to mandate a pause like that is… really quite limited," Obernolte said.
