AI news
March 26, 2024

Gemini Advanced Is Next Level Bad

Here are six reasons why you might want to reconsider upgrading.

by Jim Clyde Monge

Google recently rebranded its AI chatbot Bard to Gemini, and with it came a new paid tier called Gemini Advanced, priced at $20 per month. This upgrade promises access to the latest and most advanced Gemini Ultra 1.0 model.

The tech giant didn’t hold back in telling everyone how great Gemini Ultra was supposed to be. The way its abilities were described was genuinely impressive, and I was convinced it would be a big step forward in AI.

However, after testing it out for several days, I’ve noticed some issues that I think people should know about before switching to the paid version.

  1. Gemini Advanced's response is very slow
  2. It occasionally fails at basic logic
  3. Some drafts are incomplete
  4. Image generation bias
  5. Weaker image understanding than ChatGPT
  6. Gemini Advanced cannot zip or download files

Let me explain each of these issues in more detail.

1. The response is very slow

Gemini Advanced disappointingly lags behind in response times, taking approximately 5 to 7 seconds, compared to ChatGPT’s 2 to 3 seconds.

Check out the side-by-side speed comparison of the two:

Gemini advanced vs ChatGPT. Gemini is very slow
GIF by Jim Clyde Monge

This delay is consistent, even when accessing additional drafts, which similarly take a long time to load (around 5 to 7 seconds).
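
For anyone who wants to sanity-check these numbers outside the web app, here's a rough sketch that times a single request through each vendor's Python SDK. It's only an approximation of what I saw in the browser: the prompt, the environment-variable API keys, and the use of gemini-pro as a stand-in (Ultra wasn't available through the public API at the time of writing) are all my assumptions.

```python
import os
import time

import google.generativeai as genai
from openai import OpenAI

PROMPT = "Explain what a large language model is in two sentences."

# --- Gemini (gemini-pro as a stand-in; Ultra is web-only here) ---
genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
gemini = genai.GenerativeModel("gemini-pro")

start = time.perf_counter()
gemini_reply = gemini.generate_content(PROMPT)
print(f"Gemini: {time.perf_counter() - start:.1f}s")

# --- ChatGPT (GPT-4) ---
client = OpenAI()  # reads OPENAI_API_KEY from the environment
start = time.perf_counter()
gpt_reply = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": PROMPT}],
)
print(f"GPT-4: {time.perf_counter() - start:.1f}s")
```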

2. It fails at basic logic

Gemini Ultra struggles with basic logical reasoning. For instance, when prompted with a simple arithmetic question about car ownership, the AI fumbles.

Prompt: Today I own 3 cars and sold 2 cars last year. How many cars do I own?

Gemini advanced. Prompt: Today I own 3 cars and sold 2 cars last year. How many cars do I own?
Image by Jim Clyde Monge

If you ask the same question to ChatGPT using the GPT-4 model, it answers with ease: the correct answer is 3, because the two cars were sold last year and have no bearing on how many I own today.

ChatGPT. Prompt: Today I own 3 cars and sold 2 cars last year. How many cars do I own?
Image by Jim Clyde Monge

3. Incomplete response

The inconsistency in Gemini’s output quality is another concern.

Here’s an example where I asked Gemini to revise an article. The first and third drafts seemed fine, but the second draft contained nothing more than the phrase “3D Model Generator.”

Gemini advanced incomplete response example
Image by Jim Clyde Monge

This is not a revision of the article and is clearly not an acceptable response from a model that Google claims outperforms human experts on benchmarks like MMLU.

You wouldn’t want to be paying for this quality of service, would you?

4. Image Generation Bias

Gemini Ultra’s image generation capabilities are marred by inexplicable biases and restrictions, especially in its handling of requests involving racial specifics.

Here’s an example:

Prompt: generate an image of two black couple riding a bike

Gemini advanced. Prompt: generate an image of two black couple riding a bike
Image by Jim Clyde Monge

I don’t understand why it refused to create the image. When I tweaked the prompt to ask for a white couple instead of a black one, it created an image without hesitation.

Gemini Advanced. Prompt: generate an image of two white couple riding a bike
Image by Jim Clyde Monge

Such arbitrary limitations are not only frustrating but also contrast sharply with the more inclusive and versatile capabilities of competitors like ChatGPT and Midjourney.

Take a look at how ChatGPT handles the same prompt:

ChatGPT. Prompt: generate an image of two black couple riding a bike
Image by Jim Clyde Monge

The arbitrary restrictions Google puts on Gemini are ridiculous; they prevent it from showing things that Google Search itself has no problem returning.

Another issue I noticed is the quality of the images compared to competitors like ChatGPT and Midjourney. Here’s an example prompt:

Prompt: generate a photorealistic image of a 32-year-old female, up and coming conservationist in a jungle; athletic with short, curly hair and a warm smile

Gemini advanced. Prompt: generate a photorealistic image of a 32-year-old female, up and coming conservationist in a jungle; athletic with short, curly hair and a warm smile
Image by Jim Clyde Monge

The images below were generated with OpenAI’s DALL-E 3 (left) and Midjourney V6 (right).

Image by Jim Clyde Monge
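
If you want to reproduce the DALL-E 3 side of this comparison yourself, a minimal sketch using OpenAI's image API is below; the image size and single-image settings are just assumptions, and Midjourney has no public API, so that half of the comparison still has to go through Discord.

```python
import os

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "a photorealistic image of a 32-year-old female, up and coming "
    "conservationist in a jungle; athletic with short, curly hair "
    "and a warm smile"
)

# Request a single 1024x1024 image from DALL-E 3
result = client.images.generate(
    model="dall-e-3",
    prompt=prompt,
    size="1024x1024",
    n=1,
)

# The API returns a hosted URL for the generated image
print(result.data[0].url)
```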

5. Inferior Image Understanding

When tested with meme interpretation, Gemini Ultra’s performance was underwhelming. In the example below, I asked Gemini to decipher a humorous meme from an X user, AshutoshShrivastava.

Image by Jim Clyde Monge

Even though the meme could be explained without saying anything offensive or inappropriate, Gemini still refused to respond.

ChatGPT, on the other hand, was able to give me the correct answers.

Image by Jim Clyde Monge

This highlights a significant gap in contextual understanding and adaptability.
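
The refusal above happened in the Gemini web app. If you'd like to check whether the API behaves any differently, here's a minimal sketch that sends an image plus a question to the multimodal gemini-pro-vision model; the file name meme.png is just a placeholder for a locally saved copy of the meme.

```python
import os

import google.generativeai as genai
from PIL import Image

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])

# gemini-pro-vision accepts a mixed list of text and PIL images
model = genai.GenerativeModel("gemini-pro-vision")

meme = Image.open("meme.png")  # placeholder path to a saved copy of the meme
response = model.generate_content(
    ["Explain the joke in this meme.", meme]
)

print(response.text)
```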

6. Gemini cannot zip files or give download links

Gemini’s inability to compile and provide download links for generated content further limits its utility.

Following up on the image-generation requests in #4, I asked Gemini to zip all the images and give me a download link.

Prompt: can you compile these images into a zip and give me the download link?

Gemini advanced. Prompt: can you compile these images into a zip and give me the download link?
Image by Jim Clyde Monge

Gemini was not able to fulfill the request.

ChatGPT, on the other hand, was able to compile the images into a zip file and give me a working download link.

Chatgpt. Prompt: can you compile these images into a zip and give me the download link?
Image by Jim Clyde Monge
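
Until Gemini can do this itself, the practical workaround is to download the images and zip them locally. Here's a minimal sketch using Python's standard library; the gemini_images folder name is just an assumed location for the saved files.

```python
from pathlib import Path
from zipfile import ZipFile, ZIP_DEFLATED

# Assumed folder containing the images saved from Gemini
image_dir = Path("gemini_images")
archive = Path("gemini_images.zip")

# Collect common image formats from the folder
image_paths = [
    p for p in sorted(image_dir.glob("*"))
    if p.suffix.lower() in {".png", ".jpg", ".jpeg", ".webp"}
]

# Write every collected image into one archive
with ZipFile(archive, "w", compression=ZIP_DEFLATED) as zf:
    for path in image_paths:
        zf.write(path, arcname=path.name)

print(f"Wrote {archive} with {len(image_paths)} images")
```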

Is it worth the upgrade?

Given the current limitations and performance issues, I advise against upgrading to Gemini Advanced at this stage. Use the free Gemini Pro version for now.

While the bundled 2 TB of storage with a Google One subscription may appeal to heavy users of Google’s ecosystem, it’s advisable to wait for subsequent updates and improvements.

Aside from the list above, I've also seen several users complaining about Gemini refusing to write code, generating poor graphs, giving lazy responses, and more.

Final Thoughts

This review isn’t meant to tarnish Google Gemini’s reputation but to provide an honest assessment of its current offerings.

The $20 monthly subscription fee is a little hard to justify. The slow response times, logical errors, incomplete drafts, biased image generation, inadequate image understanding, and inability to provide downloadable content significantly undermine its value.

I sincerely hope Google will quickly resolve these problems, aiming to match or surpass the capabilities of GPT-4.
