AI news
January 20, 2024

Mark Zuckerberg Is Spending Billions On AI Chips

By the end of 2024, Meta AI aims to purchase an additional 350,000 H100 GPUs.

by Jim Clyde Monge

Today, Mark Zuckerberg announced Meta AI’s ambitious new plans: the company is investing tens of billions of dollars in AI-optimized computer chips. By the end of 2024, Meta AI aims to purchase an additional 350,000 H100 GPUs, bringing their total compute count to an impressive 600,000 GPUs.

These H100 GPUs, produced by NVIDIA, come with a hefty price tag of about $30,000 each.

350,000 × $30,000 = $10.5 billion. That's one heck of a purchase!
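That back-of-the-envelope math checks out. Here's the same estimate in a few lines of Python, using the approximate figures cited above (the $30,000 unit price is a rough market estimate, not an official NVIDIA list price):

```python
# Rough cost estimate for Meta's reported H100 order.
additional_gpus = 350_000   # H100s Meta aims to buy by end of 2024
price_per_gpu = 30_000      # approximate USD price per H100
total_fleet = 600_000       # reported total GPU count after the purchase

order_cost = additional_gpus * price_per_gpu
print(f"Order cost: ${order_cost / 1e9:.1f} billion")  # Order cost: $10.5 billion
print(f"Total fleet: {total_fleet:,} GPUs")
```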

But this move isn’t just a purchase.

It’s a proclamation that Zuckerberg is betting big on AI, and he’s playing to win.

If you haven’t seen the video announcement yet, here’s a link.

Here are the takeaways from that video:

  • Meta’s AI research group, FAIR, is getting moved to the same part of the company as the team building generative AI products.
  • Mark Zuckerberg confirmed Meta is currently training Llama 3, part of its roadmap for future AI models.
  • Meta is significantly increasing its infrastructure, aiming for up to 600,000 Nvidia H100s for computational support.
  • Meta is focusing on integrating AI with everyday devices, like glasses, to enhance user interaction with AI.

“Our long term vision is to build general intelligence, open source it responsibly, and make it widely available so everyone can benefit” — Mark Zuckerberg

While Meta’s timeline for achieving AGI remains unclear, this announcement makes it evident that Zuckerberg aims to compete with AI leaders like OpenAI and Google.

It’s quite ironic that a company built on human-to-human connection now wants to pivot toward humans talking to machines.

What are H100 GPUs?

The H100 GPU, developed by NVIDIA, is currently the most powerful GPU chip on the market and is designed specifically for AI applications.

The H100 contains 80 billion transistors, up from the 54 billion in its predecessor, the A100 chip.

H100 GPU from NVIDIA

According to a Financial Times report, NVIDIA was expected to ship roughly 550,000 of its latest H100 GPUs worldwide in 2023. That's a lot of GPUs.

Developing the Next LLaMA

Llama is an open-source large language model that’s available for research and commercial use. Although its largest 70B version can outperform other open-source language models, it still trails OpenAI’s GPT-4 and Google’s PaLM 2.

That’s why Meta is building the next generation of Llama. According to Zuckerberg, it will add code generation capabilities while improving its reasoning and planning abilities.

Final Thoughts

Aside from Stability AI, there aren’t many companies I know of that are building open-source language models capable of competing with proprietary giants like Google and Microsoft. So it’s encouraging to see Meta make big investments in open-sourcing AI.

Historically, technology has connected people by facilitating human-to-human communication. But as Zuckerberg notes, the future will increasingly involve humans conversing with AI systems too. He seems convinced that this AI future is imminent and positive, even if the public isn’t fully on board yet.

Time will tell whether open-sourcing leads to safer, more aligned AI or not. But you have to admire Meta’s willingness to open up their research for the greater good. And if they can democratize access to advanced AI, that could be empowering for countless developers and startups worldwide.