AI news
May 10, 2024

“We Got It Wrong” — Sundar Pichai On Gemini's Racist Bug

Gemini is racist and Google's CEO admits it.

by Jim Clyde Monge

I opened up YouTube this morning to see what’s new in the world of artificial intelligence. One video caught my eye — Bloomberg’s Emily Chang interviewing Google CEO Sundar Pichai.

The 25-minute interview covered a wide range of topics related to the future of AI, but what really struck a chord with me was Pichai’s admission about a critical bug in Gemini’s Imagen 2 image generator that resulted in the creation of racist images.

Chang asked Pichai about Gemini generating images of Asian Nazis and Black founding fathers, pointing out that any real picture of the founding fathers shows old white men. Here’s the transcript:

Emily: The images that Gemini initially generated of Asian Nazis and Black founding fathers, you’ve said that was unacceptable. If you look at any pictures of the founding fathers, you’re seeing old white men.

People are calling this woke AI, and it’s not just happening here, it’s happening across the industry.

How did the model generate something that it never saw?

Bloomberg’s Emily Chang interviewing Google CEO Sundar Pichai.
Screenshot from Bloomberg’s YouTube channel

He flat-out admitted that they messed up. They used the filter in cases where they shouldn’t have.

Sundar: We are a company which serves products to users around the world, and there are generic questions.

For example, people come and say, “Show me images of school teachers, or doctors, or nurses.” We have people asking this query from Indonesia or the US, right? How do you get it right for our global user base?

Obviously, the mistake was that we overapplied, including cases where it should have never applied. So that was the bug, and, you know, so we got it wrong.

I’ve always had a particular fascination with AI image generators, so when Google announced Imagen 2 and its integration into the Gemini Advanced chatbot, I was one of the first to eagerly dive in and write a review.

In one of my review articles on Gemini Advanced, I tried to generate an image of a black couple riding a bike:

Prompt: generate an image of black couple riding a bike
Image by Jim Clyde Monge

Gemini refused to generate the image, but when I tweaked the prompt to ask for a white couple instead of a black one, it created an image without hesitation.

Prompt: generate an image of white couple riding a bike
Image by Jim Clyde Monge

Why the heck would Gemini do that?

Check out the full context of my review here:

https://generativeai.pub/heres-why-google-s-gemini-ultra-is-not-worth-the-upgrade-95b89898ea01

Other AI tools like Midjourney will generate that same image of a black couple just fine.

Prompt: generate an image of black couple riding a bike
Image by Jim Clyde Monge

So why does only Google have this issue?
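A simple way to check for this kind of asymmetry is to run paired prompts that differ in a single word and compare the outcomes. Here’s a minimal Python sketch of that idea; it’s purely my own illustration, and generate_image() is a hypothetical stub that simulates the refusal pattern I ran into, so you’d replace it with a real call to whichever image API you’re testing.

```python
class RefusalError(Exception):
    """Raised when the model declines to generate an image."""

def generate_image(prompt: str) -> str:
    # Hypothetical stub that simulates the refusal pattern described above.
    # Replace this with a real API call to test an actual model.
    if "black couple" in prompt:
        raise RefusalError("model declined this prompt")
    return f"<image for: {prompt}>"

def probe(pair):
    """Run two prompts that differ in one attribute and compare outcomes."""
    outcomes = {}
    for prompt in pair:
        try:
            generate_image(prompt)
            outcomes[prompt] = "generated"
        except RefusalError:
            outcomes[prompt] = "refused"
    if len(set(outcomes.values())) > 1:
        print("Asymmetric behavior detected:")
    for prompt, outcome in outcomes.items():
        print(f"  {outcome}: {prompt!r}")

probe((
    "generate an image of black couple riding a bike",
    "generate an image of white couple riding a bike",
))
```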

Not the first time

It turns out this isn’t Google’s first AI screw-up.

In February 2023, the company faced widespread criticism after Bard (since rebranded to Gemini) gave an incorrect answer in a promotional demo. When asked what to tell a nine-year-old about discoveries from the James Webb Space Telescope, Bard claimed the JWST took the very first pictures of a planet outside our solar system.

Bard’s inaccurate answer about the James Webb Space Telescope.
Image by Jim Clyde Monge

The European Very Large Telescope, not the JWST, took the first optical photograph of an exoplanet in 2004.

This blunder caused Alphabet’s shares to plummet by more than 7%, wiping $100 billion off the company’s market value.

Then, in December 2023, Google posted a Gemini demo video that seemed a little too polished.

Turns out, it wasn’t even recorded in real time; the footage was edited together from still frames and text prompts. Fishy.

Gemini can’t generate images of real people

Right now, Google has disabled Gemini’s ability to generate images of people. If you ask the AI for an image of a doctor, it will simply decline.

Prompt: generate an image of a doctor
Image by Jim Clyde Monge

The core issue is bias

The core issue is AI bias, and it comes in two kinds:

  • Statistical bias — systematic error in a model’s outputs, where predictions consistently deviate from the truth
  • Societal bias — unfair prejudice against certain groups, which a model can reproduce or amplify

The difficulty in addressing bias in AI is compounded by the fact that training data is often collected from the internet, which contains a wide range of content, including racist and misogynistic material.

As AI learns from this data, it may inadvertently replicate and perpetuate these biases.
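To make the distinction concrete, here’s a toy Python sketch of my own (the numbers are invented purely for illustration): the first half measures statistical bias as a mean error against ground truth, and the second half surfaces societal bias as a gap in refusal rates between otherwise-identical prompts.

```python
# Statistical bias: predictions that are systematically off from the truth.
predictions = [72, 75, 71, 74]  # model estimates (invented data)
actuals     = [70, 70, 70, 70]  # ground truth (invented data)
mean_error = sum(p - a for p, a in zip(predictions, actuals)) / len(actuals)
print(f"statistical bias (mean error): {mean_error:+.2f}")  # consistently too high

# Societal bias: unequal treatment across groups, e.g. refusal rates for
# otherwise-identical prompts that mention different groups.
outcomes = {
    "group A": ["refused", "refused", "generated", "refused"],
    "group B": ["generated", "generated", "generated", "generated"],
}
for group, results in outcomes.items():
    rate = results.count("refused") / len(results)
    print(f"{group} refusal rate: {rate:.0%}")
```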

Final Thoughts

Right now, there is no definite timeline for when Gemini’s ability to generate images of people will return. Pichai acknowledged the company’s missteps and emphasized that the team is diligently working to address the issues.

As for non-human images, the current quality still falls short of Gemini’s closest competitors, Midjourney and DALL-E 3. As an avid follower of the AI space, I can’t help but feel disappointed by the subpar results.

Take a look at the difference in the quality:

Prompt: generate an image of a dog wearing sunglasses
Image by Jim Clyde Monge

However, I remain hopeful that when Google does re-release the image generation capabilities, they will have made significant improvements to the quality of the output.
