Google has temporarily paused its Gemini AI model's ability to generate images of people following sharp criticism. Users objected that the images overwhelmingly depicted people of color, even in contexts where white people would be expected to appear.
In a statement released on Thursday, Google acknowledged the problem, admitting that Gemini's image generation had not worked as intended, and promised to fix the issue and release an improved version soon.
The controversy began when images generated by Gemini circulated on social media. Prompts produced, for example, non-white depictions of Swedish women and portrayed Black and Asian individuals as Nazi-era soldiers. Many people found the results both laughable and upsetting, and some accused Google of pursuing political correctness at the expense of accuracy.
Debarghya Das, a well-known figure in the tech world, highlighted how difficult it was to get Gemini to depict white people at all, sharing examples of generated images lacking any white representation.
This incident is part of a larger discussion about AI models that often fail to include people of color and sometimes reinforce stereotypes. Google, in its competition with companies like OpenAI, has faced setbacks before: just last year, it had to apologize after its AI chatbot Bard made a false claim about the James Webb Space Telescope during a demonstration.
By pausing Gemini's ability to generate images of people, Google is signaling that it takes these concerns seriously and is committed to making its AI technologies both more inclusive and more accurate.