Google has temporarily paused the image generation feature in its Gemini AI platform following concerns over inaccuracies. Reports showed the tool producing historically inaccurate images, such as racially diverse figures wearing Nazi-era uniforms in response to requests for images of a “1943 German soldier.”
I’ve never been so embarrassed to work for a company. pic.twitter.com/eD3QPzGPxU
— St. Ratej (@stratejake) February 21, 2024
A person identifying themselves as a Google employee expressed embarrassment over the situation, though their employment has not been independently verified. Google acknowledged the issue, conceding that the tool fell short when handling historical prompts.
Prabhakar Raghavan, Google’s senior vice president of knowledge and information, addressed the matter in a blog post. He explained that while Gemini is designed to show a diverse range of people, it failed to account for cases where that range was clearly inappropriate. Raghavan assured users that prompts specifying particular individuals, professions, or cultural contexts should yield accurate results.
We’re aware that Gemini is offering inaccuracies in some historical image generation depictions. Here’s our statement. pic.twitter.com/RfYXSgRyfz
— Google Communications (@Google_Comms) February 21, 2024
This incident adds to the ongoing debate surrounding AI technology. Recent controversies, such as sexually explicit AI-generated images circulating online and AI-generated calls impersonating President Joe Biden reaching voters in New Hampshire, have fueled calls for regulation to ensure responsible use of the technology.