Google's Gemini AI chatbot has come under fire for producing historically inaccurate images, especially when depicting people from different eras and nationalities. Google says it is aware of the issue and is actively working to improve Gemini's accuracy; while the company emphasizes diversity in image generation, it acknowledges that adjustments are needed to meet historical accuracy standards. 9to5Google reports: The Twitter/X post that brought this issue to light included prompts asking Gemini to generate images of women from Australia, the United States, the United Kingdom, and Germany. All four prompts produced images of dark-skinned women. Google's Jack Krawczyk pointed out that this isn't wrong, but it may not be what users expect.
However, a bigger problem that emerged in the wake of that post is that Gemini has a hard time accurately portraying people in historical contexts: the people it depicts often have skin tones or nationalities that are not historically accurate. In a statement posted to Twitter/X, Google acknowledged that Gemini's AI image generation is "off the mark" when it comes to historical depictions, and the company said it is working on improvements. Google also maintains that the diversity represented in images produced by Gemini is "generally a good thing," but it's clear that some tweaking is needed. References: Why Google's new AI Gemini was accused of refusing to acknowledge the existence of white people (The Daily Dot)