The Nvidia GB200 Grace Blackwell Superchip
Nvidia has unveiled a "superchip" for training artificial intelligence models, the most powerful it has ever produced. The US computing firm, which has recently rocketed in value to become the world's third-largest company, has yet to reveal the cost of its new chips, but observers expect a high price tag that will make them accessible to only a few organisations.
The chips were announced by Nvidia CEO Jensen Huang at a press conference in San Jose, California, on 18 March. He showed off the company's new Blackwell B200 graphics processing units (GPUs), each of which has 208 billion transistors (the tiny switches at the heart of modern computing devices), compared with the 80 billion transistors of Nvidia's current-generation Hopper chips. He also revealed the GB200 Grace Blackwell Superchip, which combines two of the B200 chips.
"Blackwell is just going to be an amazing system for generative AI," said Huang. "And in the future, data centres are going to be thought of as AI factories."
GPUs have become coveted hardware for any organisation seeking to train large AI models. During AI chip shortages in 2023, Elon Musk spoke of GPUs being "considerably harder to get than drugs" and some academic researchers without access bemoaned being "GPU poor".
Nvidia claims its Blackwell chips can deliver a 30-fold performance improvement over Hopper GPUs when running generative AI services based on large language models such as OpenAI's GPT-4, all while using a 25th of the energy.
It says that whereas training GPT-4 required approximately 8000 Hopper GPUs drawing 15 megawatts of power over a 90-day run, the same AI training could be done using just 2000 Blackwell GPUs consuming 4 megawatts of power.
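Those figures allow a rough back-of-envelope comparison of whole-run energy use, since energy is simply power multiplied by time. The sketch below uses only the numbers quoted above and assumes constant power draw for the full 90 days; the function name is illustrative:

```python
# Back-of-envelope check of Nvidia's GPT-4 training comparison,
# using only the figures quoted in the article (and assuming
# constant power draw over the full 90-day run).

HOURS_PER_DAY = 24
DAYS = 90

def training_energy_mwh(power_mw: float, days: int = DAYS) -> float:
    """Total energy for a training run at constant power, in MWh."""
    return power_mw * days * HOURS_PER_DAY

hopper_mwh = training_energy_mwh(15)    # 8000 Hopper GPUs at 15 MW
blackwell_mwh = training_energy_mwh(4)  # 2000 Blackwell GPUs at 4 MW

print(f"Hopper run:    {hopper_mwh:,.0f} MWh")     # 32,400 MWh
print(f"Blackwell run: {blackwell_mwh:,.0f} MWh")  # 8,640 MWh
print(f"Energy ratio:  {hopper_mwh / blackwell_mwh:.2f}x")  # 3.75x
```

Note that the roughly 3.75-fold whole-run saving implied by these numbers is a different quantity from the "25 times less energy" claim, which refers to chip-level efficiency on inference workloads rather than the total draw of a training cluster with far fewer GPUs.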
The company hasn't yet revealed the cost of the Blackwell GPUs, but the price tag is likely to reach eye-watering levels, given that the Hopper GPUs already cost between $20,000 and $40,000 each. This focus on developing more powerful and expensive chips means they "will only be accessible to a select few organisations and countries", says a researcher at Hugging Face, a company that develops tools for sharing AI code and datasets. "Apart from the environmental impacts of this already very energy-intensive tech, this is truly a Marie Antoinette, 'let them eat cake' moment for the AI community," she says.
The electricity demand from data centre expansions, largely driven by the generative AI boom, is expected to double by 2026, matching the energy consumption of Japan today. That can also come with steep rises in carbon emissions if the data centres supporting AI training continue to rely on fossil fuel power plants.
Global demand for GPUs has also meant geopolitical complications for Nvidia amid growing tensions and strategic competition between the US and China. The US government has implemented export controls on advanced chip technologies to delay China's AI development efforts in a move that it describes as vital to US national security, and that has forced Nvidia to create less powerful versions of its chips for Chinese customers.