Google's investors have every right to be outraged by the company's shockingly incompetent deployment of its Gemini artificial intelligence system. For everyone else, including this grateful Google user and avid technology optimist, it was a blessing.
The Gemini chatbot's hilarious image-generating fiasco — racially diverse Nazi soldiers? — provided a useful glimpse into an Orwellian dystopia. In doing so, it also highlighted issues such as opacity, reliability, scope, and truth that deserve more attention in thinking about where AI is headed.
AI is a disruptive and potentially transformative innovation, and like all such innovations it could lead to significant advances in human well-being. A decade or two of AI-powered economic growth is exactly what the world needs. Still, enthusiasm is running ahead of real-world AI. The concept is so exciting and the intellectual accomplishments so impressive that it's easy to get swept away. Innovators, actual and potential users, and regulators all need to think more carefully about what is happening and, in particular, about what purposes AI will serve.
One of the difficulties in grappling with the full implications of AI is that a great deal of effort has gone, perhaps for marketing reasons, into designing AI models that express themselves like humans. “Yes, I can help you with that.” Thank you, but who is this “I”? The proposition is that AI can understand and respond to humans the way humans understand and respond to each other, except that AI is infinitely smarter and more knowledgeable, and therefore claims a certain authority over its dim-witted users when it comes to decision-making. There is a crucial difference between AI as a tool that humans use to improve their own decisions (decisions for which they remain responsible) and AI as a decision maker in its own right.
Over time, AI will be granted ever-wider decision-making powers, not only over the information (text, video, and so on) it passes to human users but also over its own actions. Eventually, Tesla's “full self-driving” will actually mean fully self-driving; at that point, Tesla would presumably be responsible for poor driving decisions. Somewhere between advisory AI and AI as autonomous actor, it becomes much harder to say who or what should be held responsible when the system makes a critical mistake. No doubt the courts will take this up.
“hallucination”
Responsibilities aside, as AI advances we will want to judge how well it makes decisions. That, too, is a problem. For some reason, AI models are not said to make mistakes; they “hallucinate.” But how do we know when they are hallucinating? We know for sure only when they present findings so absurd that even the least informed would laugh. The things AI systems make up, however, aren't always that obviously stupid. Even their designers cannot account for all such errors, and finding them may be beyond the power of mere humans. You could ask an AI system to help, but it might hallucinate.
Even if errors could be reliably identified and counted, the criteria by which an AI model's performance should be judged are unclear. People make mistakes all the time. Is it enough if AI makes fewer mistakes than humans? For many purposes (including fully autonomous driving) I'd be tempted to say yes, but the range of questions AI is asked to answer would need to be narrow. One question I don't want AI to answer is, “If AI makes fewer mistakes than humans, is that enough?”
Importantly, such judgments are not simply about facts; that distinction goes to the heart of the matter. The validity of opinions and actions is often a matter of values. These may concern the act itself (am I violating someone's rights?) or its consequences (is this outcome more socially beneficial than the alternative?). AI addresses these complex questions by implicitly attaching values to actions and outcomes. But it must infer those values either from some kind of consensus embedded in the information it is trained on or from instructions issued by its users and designers. The problem is that neither consensus nor directives carry ethical authority. Even when the AI gives an opinion, it's still just an opinion.
For this reason, AI has arrived at an unfortunate time. The once-clear distinction between facts and values is under attack from all sides. Prominent journalists say they no longer know what the word “objective” means. The “critical theorists” who dominate many university social-studies programs invoke “false consciousness,” “social construction,” and “lived experience” to question the very existence of facts, and treat values as instruments of oppression. Effective altruists problematize values in a very different way: in effect, they claim that outcomes can be judged along a single dimension, thereby nullifying every value other than “utility.” Algorithmic ethicists, rejoice!
As these ideas permeate what AI claims to know, further encouraged by designers pushing for a cultural realignment around race, gender, and equity, systems can be expected to make value judgments (just like humans), present them as truths meant to guide moral thinking, and make errors (just like humans). As Andrew Sullivan points out, Google initially promised that its search results would be “unbiased and objective.” The main goal now is to be “socially beneficial.” When choosing between the truth and something socially beneficial, an AI system might infer that it should choose the latter, or it might choose the latter and lie to you about having done so. Who knows? After all, AI is so smart that whatever it calls “truth” must really be true.
Gemini has proven otherwise, in a memorable and beneficial way. Thank you, Google, for screwing up so badly. — Clive Crook, (c) 2024 Bloomberg LP