NVIDIA announced that the NVIDIA H100 Tensor Core GPU is in full production, with global tech partners planning to roll out the first wave of products and services based on the NVIDIA Hopper™ architecture in October. Unveiled in April, H100 is built with 80 billion transistors and benefits from a range of technology breakthroughs. Among them are the powerful new Transformer Engine and an NVIDIA NVLink® interconnect to accelerate the largest AI models, such as advanced recommender systems and large language models, and to drive innovations in fields such as conversational AI and drug discovery.
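
As an illustrative sketch (not part of the announcement), the short CUDA program below shows how an application might detect a Hopper-class GPU such as H100 at runtime using the standard CUDA runtime API. The only assumption beyond standard API calls is the compute-capability check: Hopper GPUs report SM 9.x.

```cuda
// Minimal sketch: enumerate CUDA devices and flag Hopper-class GPUs (SM 9.x),
// the compute capability reported by H100. Uses only the standard runtime API.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        std::printf("No CUDA devices found.\n");
        return 1;
    }
    for (int dev = 0; dev < count; ++dev) {
        cudaDeviceProp prop{};
        cudaGetDeviceProperties(&prop, dev);
        bool is_hopper = (prop.major == 9);  // assumption: Hopper reports SM 9.x
        std::printf("Device %d: %s (SM %d.%d, %d SMs)%s\n",
                    dev, prop.name, prop.major, prop.minor,
                    prop.multiProcessorCount,
                    is_hopper ? " -- Hopper architecture" : "");
    }
    return 0;
}
```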

In addition to the Hopper architecture and Transformer Engine, several other key innovations power the H100 GPU to deliver the next massive leap in NVIDIA's accelerated compute data center platform, including second-generation Multi-Instance GPU, confidential computing, fourth-generation NVIDIA NVLink and DPX Instructions. A five-year license for the NVIDIA AI Enterprise software suite is now included with H100 for mainstream servers. The suite optimizes the development and deployment of AI workflows and ensures organizations have access to the AI frameworks and tools needed to build AI chatbots, recommendation engines, vision AI and more.
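
DPX Instructions accelerate dynamic programming, whose core pattern is a fused add-and-compare (min or max) recurrence used in algorithms such as Floyd-Warshall and Smith-Waterman. The sketch below, which is ours and not from the announcement, shows one such recurrence in plain CUDA: a single Floyd-Warshall relaxation step over a distance matrix. It illustrates the kind of workload DPX Instructions target, not the DPX intrinsics themselves, and the kernel and function names are hypothetical.

```cuda
// Minimal sketch of a dynamic-programming inner loop: one Floyd-Warshall
// relaxation step (add, then min) over an n x n distance matrix. This is the
// add-and-compare pattern that Hopper's DPX Instructions are designed to
// accelerate; the code here uses only plain CUDA, not DPX intrinsics.
#include <cuda_runtime.h>

__global__ void floyd_warshall_step(int* dist, int n, int k) {
    int i = blockIdx.y * blockDim.y + threadIdx.y;
    int j = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n && j < n) {
        int through_k = dist[i * n + k] + dist[k * n + j];  // add ...
        dist[i * n + j] = min(dist[i * n + j], through_k);  // ... then min
    }
}

// Host-side driver (hypothetical helper): relax through every intermediate
// vertex k in turn; d_dist is a device pointer to the n x n matrix.
void all_pairs_shortest_paths(int* d_dist, int n) {
    dim3 block(16, 16);
    dim3 grid((n + block.x - 1) / block.x, (n + block.y - 1) / block.y);
    for (int k = 0; k < n; ++k)
        floyd_warshall_step<<<grid, block>>>(d_dist, n, k);
    cudaDeviceSynchronize();
}
```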