OpenAI Promises to Provide US Government Early Access to Future Foundational AI Models


OpenAI’s CEO, Sam Altman, has taken to social media to say that United States regulators will be provided with early access to OpenAI’s next foundational AI model.

This public promise comes as part of OpenAI’s broader safety initiative, which includes claims that the company will allocate “at least 20%” of its computing resources to company-wide safety efforts.

“Our team has been working with the US AI Safety Institute on an agreement where we would provide early access to our next foundation model so that we can work together to push forward the science of AI evaluations,” Altman writes on X, formerly Twitter. “Excited for this!”

The U.S. AI Safety Institute was unveiled by Vice President Kamala Harris last fall, just two days after President Biden signed the administration's first executive order on AI.
TechCrunch notes in its coverage that Altman's promise echoes one OpenAI made to regulators in the United Kingdom in June.

Some tech industry observers wonder whether these public moves are a response to concerns about OpenAI's commitment to AI safety. In May, TechCrunch reported that the company had diverted resources it said would go toward safety initiatives to launch new products instead, including GPT-4o later that month and its Sora video generator, whose training data the company has consistently declined to discuss.

In response to criticism, and to the departure of two long-time OpenAI employees who left to form the safety-focused Safe Superintelligence Inc., OpenAI created a new safety group in late May. However, Altman and other board members oversee it, raising fresh concerns over the company's sincerity on safety.


“We want current and former employees to be able to raise concerns and feel comfortable doing so,” Altman continues in his new post on X. “This is crucial for any company, but for us especially and an important part of our safety plan.”

“In May, we voided non-disparagement terms for current and former employees and provisions that gave OpenAI the right (although it was never used) to cancel vested equity. We’ve worked hard to make it right,” Altman concludes.

OpenAI’s promises arrive in a complicated context for AI safety and regulation. On the one hand, respondents to Altman’s post who favor increased AI regulation question whether his words will amount to genuine action. On the other, some respondents believe the government should have no control over AI at all and that Altman is wrongly bowing to outside pressure.

“We’ve heard this promise before. Talk is cheap. Your top safety-focused employees quit and became whistleblowers because they don’t trust you. What, specifically, will you do to earn our trust back?” writes @AISafetyMemes.

“What does ‘safety’ mean? A. Censorship. B. Propaganda. C. Compliance with the government. D. All of the above,” asks @legitknuckle.


“So you’re saying we won’t get any future models until some fed Oks it. Great,” adds @deepwhitman.

“Can you address the accusation that 20% of compute was never allocated to safety efforts?” writes @alexkaplan0.

The responses reveal an interesting split between people who think OpenAI hasn’t done nearly enough on safety and those who believe the company shouldn’t do anything at all if it could impact its products.

As for American citizens at large, a recent YouGov survey shows that the majority of respondents are concerned, cautious, and skeptical about AI, while a much smaller proportion are excited or hopeful. Further, one in seven Americans is “very concerned” that AI could end humanity, a problem the departed OpenAI executives hope to address with Safe Superintelligence Inc.


Image credits: Featured image created using OpenAI’s logo and an asset licensed via Depositphotos.
