An anonymous reader cites a report from Ars Technica: After much speculation, the US AI Safety Institute, part of the National Institute of Standards and Technology (NIST), has finally announced its leadership team. Appointed as head of AI safety is Paul Christiano, a former OpenAI researcher who pioneered a foundational AI safety technique called reinforcement learning from human feedback (RLHF), but who is also known for predicting that “there's a 50 percent chance AI development could end in ‘doom.'” While Christiano's research pedigree is impressive, some fear that by appointing a so-called “AI doomer,” NIST risks encouraging the kind of non-scientific thinking that many critics view as mere speculation.
There have been rumors that NIST staffers oppose the hiring. A controversial VentureBeat report last month cited two anonymous sources claiming that some NIST staffers were in “revolt,” apparently over Christiano's so-called “doomer” views. Some staff members and scientists reportedly threatened to resign, VentureBeat reported, fearing that Christiano's association with effective altruism and “longtermism” could compromise the institute's objectivity and integrity. NIST's mission is rooted in advancing science by working to “promote U.S. innovation and industrial competitiveness by advancing measurement science, standards, and technology in ways that enhance economic security and improve our quality of life.” Effective altruists believe in “using evidence and reason to figure out how to benefit others as much as possible,” and longtermists hold that “we should be doing much more to protect future generations”; both positions are more subjective and opinion-based. On the Bankless podcast last year, Christiano said he believes there is a “10 to 20 percent chance of AI takeover” that results in humans dying, adding that “overall, maybe you're getting more up to a 50/50 chance of doom shortly after you have AI systems that are human level.” “The most likely way we die involves not AI coming out of the blue and killing everyone, but involves us having deployed a lot of AI everywhere... [And] if for some reason, God forbid, all these AI systems were trying to kill us, they would definitely kill us,” Christiano said.
As head of AI safety, Christiano will presumably monitor for current and potential risks. According to a Department of Commerce press release, he will “design and conduct tests of frontier AI models, focusing on model evaluations for capabilities of national security concern,” steer evaluation processes, and implement “risk mitigations to enhance frontier model safety and security.” Christiano has experience with mitigating AI risks: he left OpenAI to found the Alignment Research Center (ARC), which the Commerce Department describes as “a nonprofit research organization that seeks to align future machine learning systems with human interests by furthering theoretical research.” Part of ARC's mission is to test whether AI systems are evolving to manipulate or deceive humans, according to ARC's website. ARC also conducts research to help AI systems scale “gracefully.” “In addition to Christiano, the safety institute's leadership team will include Mara Quintero Campbell, a Commerce Department official who led projects on COVID response and CHIPS Act implementation, as acting chief operating officer and chief of staff,” Ars reports. “Adam Russell, an expert focused on human-AI teaming, forecasting, and collective intelligence, will serve as chief vision officer. Rob Reich, a human-centered AI expert on leave from Stanford University, will be a senior advisor. And Mark Latonero, a former White House global AI policy expert who helped draft Biden's AI executive order, will be head of international engagement.”
U.S. Secretary of Commerce Gina Raimondo said in a press release: “To safeguard our global leadership on responsible AI and ensure we're equipped to fulfill our mission to mitigate the risks of AI and harness its benefits, we need the top talent our nation has to offer. That is precisely why we've selected these individuals, who are the best in their fields, to join the U.S. AI Safety Institute's executive leadership team.”