OpenAI CEO Altman politely declines job as top AI regulator: ‘I love my current job’

OpenAI CEO Sam Altman on Tuesday declined a senator's suggestion that he serve as the top federal regulator for artificial intelligence, even as he agreed that a new agency is needed to police AI.

The CEO of the company that delivered ChatGPT to the world said Tuesday he was not interested in becoming the federal government’s top regulator of artificial intelligence technology.

Altman and other witnesses at a Senate Judiciary subcommittee hearing were asked what they would do to ensure the government has a firm grip on how AI is developed and deployed. Altman said his first step would be to create a new federal agency.

"I would form a new agency that licenses any effort above a certain scale of capabilities and can take that license away and ensure compliance with safety standards," he said in response to a question from Sen. John Kennedy, R-La.

Altman said he would also create a set of federally enforced safety standards for AI, and require AI companies to be independently audited to ensure compliance.

"Would you be qualified, if we promulgated those rules, to administer those rules?" Kennedy asked Altman.

"I love my current job," Altman said, adding that he would send recommendations for people who are qualified to run a new agency. After being pressed on how much money he makes at OpenAI, Altman said he only makes enough to have health insurance and has no equity in OpenAI.

"I’m doing this because I love it," Altman said.

The question of whether a new federal agency is needed came up several times during the hearing. Both Altman and New York University Professor Emeritus Gary Marcus agreed that one is needed.

However, Christina Montgomery, IBM's chief privacy and trust officer, argued against a new agency, saying AI risks should be managed using the existing infrastructure of the federal government. She also argued that AI should be regulated based on how it is used, with tougher rules imposed on riskier applications.

Marcus expressed alarm at the prospects of emerging AI technology and argued that the companies building AI should not be trusted to recommend how their own products are regulated. He also recommended that independent scientists be brought in to verify that companies are complying with AI rules.

Data & News supplied by www.cloudquote.io
Stock quotes supplied by Barchart
Quotes delayed at least 20 minutes.
By accessing this page, you agree to the following
Privacy Policy and Terms and Conditions.