The U.S. government must take “swift and decisive” action to avert significant national security risks posed by artificial intelligence (AI), which in the worst case could pose an “extinction-level threat to the human species,” according to a government-commissioned report published on Monday and obtained by Time magazine ahead of its release. “Current frontier AI development poses urgent and growing risks to national security,” the report says. “The rise of advanced AI and AGI [artificial general intelligence] has the potential to destabilize global security in ways reminiscent of the introduction of nuclear weapons.” AGI is a hypothetical technology that could carry out most tasks at or above the human level. Such systems do not yet exist, but the leading AI labs are working toward them, and many expect AGI to become a reality within the next five years.
The report's three authors worked on it for more than a year, interviewing over 200 government officials, experts, and employees of frontier AI companies, including OpenAI, Google DeepMind, Anthropic, and Meta, as part of their research. Accounts from some of those conversations paint a disturbing picture, suggesting that many AI safety workers inside cutting-edge labs are concerned about perverse incentives driving the decision-making of the executives who control their companies. The completed document, titled “An Action Plan to Increase the Safety and Security of Advanced AI,” recommends a set of sweeping and unprecedented policy actions that, if enacted, would radically disrupt the AI industry. Among them, it recommends that Congress make it illegal to train AI models using more than a certain level of computing power.
The report recommends that the threshold be set by a new federal AI agency, though it suggests, by way of example, that the agency could set it just above the level of computing power used to train today's cutting-edge models, such as OpenAI's GPT-4 and Google's Gemini. The new AI agency should require AI companies at the industry's “frontier” to obtain government permission to train and deploy new models above that threshold, the report adds. Authorities should also “urgently” consider outlawing the publication of the “weights,” or inner workings, of powerful AI models, for example under open-source licenses, with violations possibly punishable by jail time, the report says. It further recommends that the government tighten controls on the manufacture and export of AI chips, and channel federal funding toward “alignment” research aimed at making advanced AI safer.