Raimondo's announcement came on the same day Google touted new data highlighting the performance of its latest artificial intelligence model, Gemini, showing it outperforming OpenAI's GPT-4, which powers ChatGPT, on several industry benchmarks. The U.S. Department of Commerce could receive early warning of Gemini's successor if the project makes full use of Google's extensive cloud computing resources.
Rapid advances in the AI field over the last year have led some AI experts and executives to call for a pause on development of anything more powerful than GPT-4, the model currently used by ChatGPT.
Samuel Hammond, senior economist at the Foundation for American Innovation, a think tank, said a key challenge for the U.S. government is that a model does not necessarily have to cross a computational threshold during training to be potentially dangerous.
Dan Hendrycks, director of the nonprofit Center for AI Safety, says the requirement is reasonable given recent developments in AI and concerns about its capabilities. “Companies are spending billions of dollars on AI training, and their CEOs are warning that AI could become superintelligent in the coming years,” he says. “It seems logical for the government to be aware of what AI companies are up to.”
Anthony Aguirre, executive director of the Future of Life Institute, a nonprofit organization that works to ensure innovative technology benefits humanity, agrees. “Right now, we're experimenting on a large scale with virtually no external oversight or regulation,” he says. “Reporting these AI training runs and associated safety measures is an important step. But much more is needed. There is strong bipartisan agreement on the need for AI regulation, and we hope Congress can act on this soon.”
Raimondo said at a Hoover Institution event on Friday that the U.S. National Institute of Standards and Technology (NIST) is currently developing standards for testing the safety of AI models as part of the U.S. government's new AI Safety Institute. Determining the risk of an AI model typically involves scrutinizing the model and attempting to provoke problematic behavior or output, a process known as “red teaming.”
Raimondo said her department is working on guidelines to help companies better understand the potential risks in the models they are developing. These guidelines could include ways to ensure that AI cannot be used to violate human rights, she suggested.
The October executive order on AI gives NIST until July 26 to have these standards in place, but some working with the agency say it lacks the funding and expertise needed to do so properly.