Nvidia shows off ridiculous supercomputer built to create next-gen AI
Nvidia has revealed its next-generation supercomputer, the DGX GH200, which is built to handle the development of gigantic next-generation AI models.
The ceaseless rise of AI has arguably been catalyzed by the world’s most valuable chip maker. Nvidia had been creating AI products for years before the AI arms race came to prominence, and it is now reaping the rewards. Years ahead of the competition, Nvidia seeded companies like OpenAI with the tools they needed to create products like ChatGPT, and it is not stopping there.

At Computex 2023, Nvidia CEO Jensen Huang revealed a new AI supercomputer packed with the latest hardware to train next-generation AI products. The system is built around the company’s Grace Hopper “superchips”: 256 GH200 chips, all speaking to each other via NVLink and acting as one gigantic GPU.
Together, the chips deliver over 1 exaflop of performance and 144 terabytes of shared memory, nearly 500 times the memory of the previous-generation DGX A100, which launched in 2020.

“DGX GH200 AI supercomputers integrate NVIDIA’s most advanced accelerated computing and networking technologies to expand the frontier of AI,” Huang said. Nvidia is already supplying over 1,600 companies worldwide with hardware to power their generative AI aspirations.
Google, Microsoft, and Meta are already lined up
According to Nvidia, customers like Meta, Microsoft, and Google are already lined up to be among the first to use these incredibly powerful DGX GH200 supercomputers. The companies hope that the supercomputer will be able to handle bigger, better datasets to power the next generation of AI products.

“Training large AI models is traditionally a resource- and time-intensive task,” said Girish Bablani, corporate vice president of Azure Infrastructure at Microsoft. “The potential for DGX GH200 to work with terabyte-sized datasets would allow developers to conduct advanced research at a larger scale and accelerated speeds.”
Nvidia is also building an even bigger supercomputer based on the system. Named Nvidia Helios, the supercomputer is tipped to combine four DGX GH200 AI supercomputers for use by Nvidia’s own AI research and development teams, who helped create tech like DLSS 3.