The European Parliament passed a sweeping AI law in March. But in the U.S., "Microsoft, Google, and OpenAI leaders all want AI regulation," writes CIO magazine. According to the article, "Even the Chamber of Commerce, which often opposes corporate regulation, is calling on Congress to protect human rights and national security as the use of AI expands," while the White House has published a blueprint for an AI Bill of Rights.
However, even though the U.S. Congress has not passed an AI bill, 16 U.S. states have, and "state legislatures have already introduced more than 400 AI bills across the U.S. this year, six times the number introduced in 2023."
Goli Mahdavi, a lawyer at global law firm BCLP, which founded an AI Working Group, said much of the legislation targets both the developers of AI technology and the organizations that use AI tools. And with populous states such as California, New York, Texas, and Florida passing or considering AI legislation, companies operating across the country won't be able to avoid regulation. Companies developing and using AI must be ready to answer questions about how their AI tools work, even when deploying simple automated tools like spam filters, Mahdavi says. "These questions are going to come from consumers, and they're going to come from regulators," she added. "Clearly there will be increased scrutiny across the board here."
According to the article, the bills fall into several categories: sector-specific bills, bills requiring transparency (about both development and deployment), and a third, broad category of comprehensive AI bills, often focusing on transparency, anti-bias measures, impact-assessment requirements, consumer opt-out provisions, and other issues.
One example the article points to is Senate Bill 1047, introduced in the California state legislature in February, which would "require safety testing of AI products before they are released" and require AI developers to prevent others from creating derivative models of their products that could be used to cause serious harm.
Adrien Fisher, an attorney at Basecamp Legal, a Denver law firm that is monitoring state AI bills, told CIO that while much of the legislation promotes best practices in privacy and data security, the fragmented regulatory environment "highlights the need for national standards or laws to provide a consistent framework for the use of AI."
Thanks to Slashdot reader snydeq for sharing the article.