The Leibniz Supercomputing Centre (LRZ), Cerebras Systems, and Hewlett Packard Enterprise (HPE) announced the joint development and delivery of a new system featuring next-generation AI technologies to significantly accelerate scientific research and innovation in AI for Bavaria. The new system is funded by the Free State of Bavaria through the Hightech Agenda, a program dedicated to strengthening the tech ecosystem in Bavaria and fueling the region's mission to become an international AI hotspot. The system is also an additional resource for Germany's national supercomputing center and part of LRZ's Future Computing Program, which represents a portfolio of heterogeneous computing architectures across CPUs, GPUs, FPGAs, and ASICs.

Delivering next-generation AI with scalable and accelerated compute features: The new system is purpose-built to process large datasets and tackle complex scientific research. It combines the HPE Superdome Flex server with the Cerebras CS-2 system, making it the first solution in Europe to leverage the Cerebras CS-2. The HPE Superdome Flex server delivers a modular, scale-out solution to meet computing demands and features specialized capabilities for the large, in-memory processing required to handle vast volumes of data.

Additionally, the HPE Superdome Flex server's pre- and post-processing capabilities for AI model training and inference are ideal for supporting the Cerebras CS-2 system, which delivers the deep learning performance of hundreds of graphics processing units (GPUs) with the programming ease of a single node. Powered by the largest processor ever built, the Cerebras Wafer-Scale Engine 2 (WSE-2), which is 56 times larger than its nearest competitor, the CS-2 delivers more AI-optimized compute cores, faster memory, and more fabric bandwidth than any other deep learning processor in existence. The CS-2's chip packs 850,000 compute cores, the scale of computing power that modern AI methods and machine learning demand.

Currently, the complexity of the neural networks used to analyze large volumes of data is doubling in a matter of months, yet such applications have so far run primarily on general-purpose and graphics processors (CPUs and GPUs).

Offering a powerful system and software for AI development: To support the Cerebras CS-2 system, the HPE Superdome Flex server provides large-memory capabilities and unprecedented compute scalability to process the massive, data-intensive machine learning projects that the Cerebras CS-2 system targets.

The HPE Superdome Flex server also manages and schedules jobs according to AI application needs, enables cloud access, and stages larger research datasets. Its software stack includes programs for building AI procedures and models. Beyond AI workloads, the combined technologies from HPE and Cerebras will also be considered for more traditional HPC workloads in support of larger, memory-intensive modeling and simulation needs.