Google recently announced a temporary halt to its Gemini AI model’s ability to generate images of people. This decision came after significant backlash due to the model’s failure to accurately represent white individuals. The controversy arose when users noticed that images created by Gemini predominantly depicted people of color, even in historical contexts where white individuals would be expected.
In response to the criticism, Google issued a statement on Thursday explaining that it would pause the Gemini model's generation of images of people while it works on improving accuracy. The company acknowledged the issue, noting that while Gemini is designed to produce a diverse range of images for users worldwide, it had fallen short in this instance.
It’s embarrassingly hard to get Google Gemini to acknowledge that white people exist pic.twitter.com/4lkhD7p5nR
— Deedy (@debarghya_das) February 20, 2024
The images generated by Gemini circulated widely on social media, drawing ridicule and criticism. Users accused Google of prioritizing "wokeness" over accuracy, citing historically inaccurate examples such as depictions of non-white Swedish women and scenes featuring Black and Asian Nazi soldiers.
Debarghya Das, a founding engineer at Glean, further highlighted the issue by sharing several images generated by Gemini that failed to acknowledge the existence of white people.
This incident adds to Google's challenges in the field of AI as it strives to compete with rivals like OpenAI. Last year, Google faced criticism and issued apologies for inaccuracies in its AI products, including an instance in which its chatbot Bard mistakenly claimed that the James Webb Space Telescope had taken the first images of a planet outside the solar system.