Non Cult Crypto News


Area 51 for AI? US gov secures early access to cutting-edge models

It’s unclear whether the government would be required to inform the general public in the event an AI model becomes sentient.


Artificial intelligence firms OpenAI and Anthropic recently agreed to give the US AI Safety Institute early access to any “major” new AI models they develop.

While the deal was purportedly signed over mutual safety concerns, it’s unclear exactly what the government’s role would be in the event of a technology breakthrough with significant safety implications. 

OpenAI CEO and co-founder Sam Altman described the deal as a necessary step in a recent post on the X social media platform.

“We are happy to have reached an agreement with the US AI Safety Institute for pre-release testing of our future models. For many reasons, we think it’s important that this happens at the national level. US needs to continue to lead!”

Frontier AI models

OpenAI and Anthropic are both working towards the same goal: the development of an artificial general intelligence (AGI). There’s no scientific consensus as to exactly how AGI should be defined, but the basic idea is an AI capable of doing anything a human could do given the necessary resources. 

Both companies have separate charters and mission statements declaring their independent goals of creating AGI safely, with humanity’s interests at the core. If one or both companies were to succeed, they would become the de facto gatekeepers of technology capable of imbuing machines with human-level intelligence.

In inking a deal with the US government to share their models prior to any product launches, both companies have effectively punted that responsibility to the federal government.

Area 51

As Cointelegraph recently reported, OpenAI may be on the verge of a breakthrough. The company’s “Strawberry” and “Orion” projects are purported to feature advanced reasoning and new techniques for dealing with AI’s hallucination problem.

According to a report from The Information, the US government has already seen these tools, as well as early internal iterations of ChatGPT implementing them.

A blog post from the US National Institute of Standards and Technology indicates that this deal and the safety guidelines surrounding it are all voluntary for the companies participating. 

The benefit of this form of light-touch regulation, according to the companies and analysts supporting it, is that it spurs growth and allows the sector to regulate itself. Proponents, such as Altman, cite this agreement as an example of exactly how such cooperation can benefit both the government and the corporate world.

However, one of the potentially concerning side effects of light-touch regulation is a lack of transparency. If OpenAI or Anthropic succeed in their mission, and the government feels the public doesn’t have a need to know, there doesn’t appear to be any legal impetus requiring disclosure. 

Related: Anthropic CEO says future of AI is a hive-mind with a corporate structure

This article first appeared at Cointelegraph.com News

Written by Outside Source
