Traditionally, compute with direct-attached storage has been used to feed data to AI workflows. But scaling that kind of storage can mean disruption and downtime for ongoing operations, hurting the productivity of data scientists and data engineers. Downtime or slow AI performance can set off a chain reaction that reduces developer productivity and drives operational expenses out of control.

Advances in GPU computing, on individual systems and in clusters built from NVIDIA DGX systems, have made GPUs the preferred platform for workloads such as high-performance computing (HPC), deep learning (DL), video processing, and analytics. Maximizing performance in these environments requires supporting infrastructure, including storage and networking, that can keep the GPUs fed with data, delivering dataset access at ultralow latency and high bandwidth.

NetApp® EF-Series AI tightly integrates DGX A100 systems, NetApp EF600 all-flash arrays, and the BeeGFS parallel file system with state-of-the-art InfiniBand networking. NetApp EF-Series AI simplifies artificial intelligence deployments by eliminating design complexity and guesswork. You can start small and scale seamlessly from science experiments and proofs of concept to production and beyond.

EF600-powered BeeGFS building blocks have been verified with up to eight DGX A100 systems. By adding building blocks, the architecture can scale to multiple racks supporting many DGX A100 systems and petabytes of storage capacity. This approach offers the flexibility to alter compute-to-storage ratios independently, based on the size of the data lake, the DL models that are used, and the required performance metrics.
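
As a rough illustration of how this building-block model scales, the following Python sketch estimates how many building blocks a deployment might need to meet a capacity target and a bandwidth target. The per-building-block figures are assumptions for illustration only: one building block is treated as a single EF600 enclosure, and the usable capacity value is a placeholder to be replaced with numbers from the NVA design guide.

    # Hypothetical sizing sketch for EF600-based BeeGFS building blocks.
    # All per-block figures are illustrative assumptions, not NVA-verified values.

    import math

    BLOCK_READ_GBPS = 42.0     # per-enclosure sequential read (cited later in this article)
    BLOCK_CAPACITY_TB = 360.0  # assumed usable capacity; depends on drive configuration

    def building_blocks_needed(capacity_tb: float, read_gbps: float) -> int:
        """Smallest block count that satisfies both the capacity and bandwidth targets."""
        for_capacity = math.ceil(capacity_tb / BLOCK_CAPACITY_TB)
        for_bandwidth = math.ceil(read_gbps / BLOCK_READ_GBPS)
        return max(for_capacity, for_bandwidth)

    # Example: a 2PB data lake that must sustain 150GBps of aggregate reads.
    print(building_blocks_needed(2_000, 150))  # 6 -- capacity-bound in this sketch

Because the capacity and bandwidth ceilings are computed independently, the same model shows how the compute-to-storage ratio can shift: a bandwidth-hungry workload on a small dataset becomes bandwidth-bound instead.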

Investing in state-of-the-art compute demands state-of-the-art storage that can handle thousands of training images per second. You need a high-performance data services solution that keeps up with your most demanding DL training workloads.

The NetApp EF600 all-flash array gives you consistent, near-real-time access to data while supporting any number of workloads simultaneously. To enable fast, continuous feeding of data to AI applications, EF600 storage systems deliver up to 2 million cached read IOPS, response times of under 100 microseconds, and 42GBps sequential read bandwidth in one enclosure. With 99.9999% reliability from EF600 storage systems, data for AI operations is available whenever and wherever it's needed.
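
To put "thousands of training images per second" in concrete terms, here is a minimal back-of-the-envelope check. The images-per-second rate and the average image size are assumptions chosen for illustration; only the 42GBps per-enclosure figure comes from the specifications above.

    # Back-of-the-envelope check: can one EF600 enclosure keep a DL training job fed?
    # Workload numbers below are illustrative assumptions, not measured values.

    IMAGES_PER_SEC = 20_000     # assumed aggregate image ingest rate across all GPUs
    AVG_IMAGE_MB = 0.15         # assumed average size of a preprocessed training image
    ENCLOSURE_READ_GBPS = 42.0  # EF600 sequential read bandwidth (from the text above)

    required_gbps = IMAGES_PER_SEC * AVG_IMAGE_MB / 1_000  # MB/s -> GB/s
    print(f"required: {required_gbps:.1f} GBps")                    # required: 3.0 GBps
    print(f"headroom: {ENCLOSURE_READ_GBPS / required_gbps:.0f}x")  # headroom: 14x

Under these assumptions a single enclosure has ample headroom; real sizing should of course use measured image sizes and pipeline ingest rates.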

BeeGFS is a parallel file system whose flexibility is key to meeting diverse and evolving AI workloads. NetApp EF-Series storage systems supercharge the BeeGFS storage and metadata services by offloading RAID and other storage tasks, including drive monitoring and wear detection.
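
For readers evaluating such a deployment, one quick way to confirm that the BeeGFS services are up and registered is to query them from a client with the standard beegfs-ctl utility. The following sketch assumes the BeeGFS client packages are installed and beegfs-ctl is on the PATH; it simply wraps the CLI rather than using any NetApp-specific interface.

    # Minimal health sketch: list the registered BeeGFS management, metadata,
    # and storage nodes from a client. Assumes the beegfs-ctl utility is installed.

    import subprocess

    def list_nodes(node_type: str) -> str:
        """Run 'beegfs-ctl --listnodes' for one service type and return its output."""
        result = subprocess.run(
            ["beegfs-ctl", "--listnodes", f"--nodetype={node_type}"],
            capture_output=True, text=True, check=True,
        )
        return result.stdout

    for node_type in ("management", "meta", "storage"):
        print(f"--- {node_type} nodes ---")
        print(list_nodes(node_type))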

The DGX A100 system is a next-generation DL platform that requires equally advanced storage and data management capabilities. By combining DGX A100 systems with BeeGFS building blocks based on NetApp EF600 arrays, this verified architecture can be implemented at almost any scale: a single DGX A100 paired with a single BeeGFS building block, or as many as 140 DGX A100 systems backed by a scalable number of building blocks presenting a single storage namespace.

Combined with the outstanding cloud integration and software-defined capabilities of the NetApp product portfolio, NetApp storage solutions enable a full range of data pipelines that span the edge, the core, and the cloud for successful DL projects. To learn more, read our two related NetApp Verified Architecture documents, NetApp EF-Series AI with NVIDIA DGX A100 Systems and BeeGFS: NVA Design and NetApp EF-Series AI with NVIDIA DGX A100 Systems and BeeGFS: NVA Deployment.
