Organizations today are generating massive volumes of data at the network edge. To gain maximum business value from smart sensors and IoT data, they are looking for a real-time event-streaming solution that enables edge computing. Computationally demanding jobs are increasingly performed at the edge, outside of data centers. Artificial intelligence (AI) inferencing is one of the drivers of this trend. Edge servers provide sufficient computational power for these workloads, especially when paired with accelerators, but limited enterprise-class storage is often an issue, particularly in multiserver environments.

NetApp and Lenovo have partnered to develop a validated edge inferencing solution that is simple, smart, secure, and easy to manage at an affordable price. The solution helps you meet requirements with a modern all-flash array that offers comprehensive data services, integrated data protection, seamless scalability, high performance, and cloud integration.

The Lenovo ThinkSystem SE350 is an edge server designed to support traditional IT and OT applications as well as new transformative IoT and AI systems. Built on the Intel Xeon D-2100 processor, the ThinkSystem SE350 is a compact, rugged system designed to fit into any environment. The NetApp® AFF C190 system is optimized for flash and delivers 10 times faster application response than hybrid arrays. If more storage capacity or faster network speeds are needed, the NetApp AFF A220 or NetApp AFF A250 can also be used.

All-flash storage enables you to run more workloads on a single system without compromising performance. This validated solution demonstrates high performance and optimal data management with an architecture that uses one or more Lenovo ThinkSystem SE350 edge servers interconnected with a single NetApp AFF storage system. The solution can address use cases such as autonomous vehicles, patient monitoring, cashierless payment, and inventory monitoring.

The design in Figure 1 shows how multiple ThinkSystem SE350s can be deployed in an edge environment, such as a group of retail stores. Model management can be handled through a single storage node (AFF C190) and pushed out to each of the compute nodes, simplifying model management for your AI workloads (a sketch of this pattern follows Figure 1). This design also provides local data storage, so you don't have to move all of the data from the compact edge servers back to the cloud. This tiering can reduce your storage costs by retaining data locally and moving only the necessary data back to the cloud.

Figure 1) Physical architecture overview.
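To make the model-distribution pattern concrete, the minimal Python sketch below shows one way a periodic job on each SE350 might pull updated models from an NFS export on the central AFF system. The mount points, directory layout, and ONNX file extension are illustrative assumptions, not part of the validated design.

```python
#!/usr/bin/env python3
"""Sketch: refresh a local model cache from a central NFS share.

Assumed layout: the AFF system exports a models directory that each
SE350 mounts at /mnt/aff/models; the inference runtime reads its
working copies from /var/lib/models. Both paths are placeholders.
"""
import hashlib
import shutil
from pathlib import Path

SHARED_MODELS = Path("/mnt/aff/models")  # NFS mount from the AFF storage node (assumed)
LOCAL_CACHE = Path("/var/lib/models")    # local copy used by the inference runtime (assumed)


def file_digest(path: Path) -> str:
    """Return a SHA-256 digest so unchanged models are not re-copied."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()


def sync_models() -> None:
    """Copy new or updated model files from the share to the local cache."""
    LOCAL_CACHE.mkdir(parents=True, exist_ok=True)
    for src in SHARED_MODELS.glob("*.onnx"):  # ONNX chosen only for illustration
        dst = LOCAL_CACHE / src.name
        if not dst.exists() or file_digest(src) != file_digest(dst):
            shutil.copy2(src, dst)
            print(f"updated {dst}")


if __name__ == "__main__":
    sync_models()
```

Run from cron or a systemd timer on each edge server, this keeps every compute node current while the single storage node remains the one place where models are published.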

The NetApp and Lenovo solution is a flexible scale-out architecture that is ideal for enterprise AI inference deployments. NetApp storage delivers performance equal to or better than local SSD storage and offers the following benefits to data scientists, data engineers, and IT decision makers:

  • Effortless sharing of data between AI systems, analytics, and other critical business systems. This data sharing reduces infrastructure overhead, improves performance, and streamlines data management across the enterprise.
  • Independently scalable compute and storage, which minimize costs and improve resource utilization.
  • NetApp data compaction and deduplication, which reduce the amount of storage needed, and automatic cold-data tiering, which lowers storage costs (see the configuration sketch after this list).
  • Seamless scalability, easy cloud connectivity, and integration with emerging applications, which help you meet demanding and constantly changing business needs.
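As one illustration of how these data services might be driven programmatically, the sketch below uses ONTAP's REST API (available in ONTAP 9.6 and later) to set a volume's cold-data tiering policy and inline efficiency settings. The host, credentials, and volume name are placeholders, and the field names should be verified against the ONTAP REST API reference for your release.

```python
#!/usr/bin/env python3
"""Sketch: enable cold-data tiering and inline efficiency on an ONTAP volume.

Assumptions: an ONTAP 9 cluster reachable at ONTAP_HOST, HTTP basic
auth, and a FabricPool-capable aggregate behind the volume. All names
and credentials below are placeholders.
"""
import requests

ONTAP_HOST = "https://cluster-mgmt.example.com"  # placeholder management LIF
AUTH = ("admin", "password")                     # placeholder credentials
VOLUME_NAME = "ai_models"                        # placeholder volume name

session = requests.Session()
session.auth = AUTH
session.verify = False  # lab-only shortcut; use a CA-signed certificate in production

# Look up the volume's UUID by name.
resp = session.get(f"{ONTAP_HOST}/api/storage/volumes", params={"name": VOLUME_NAME})
resp.raise_for_status()
records = resp.json()["records"]
if not records:
    raise SystemExit(f"volume {VOLUME_NAME!r} not found")
uuid = records[0]["uuid"]

# Tier cold blocks to the cloud automatically and keep inline efficiency on.
patch = {
    "tiering": {"policy": "auto"},
    "efficiency": {"compression": "inline", "dedupe": "inline"},
}
resp = session.patch(f"{ONTAP_HOST}/api/storage/volumes/{uuid}", json=patch)
resp.raise_for_status()
print(f"updated volume {VOLUME_NAME} ({uuid})")
```

The same settings can of course be applied once through System Manager or the ONTAP CLI; the REST sketch simply shows how the tiering and efficiency features described above could be managed as part of an automated deployment.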

To learn more about this joint solution, read the technical report and visit www.netapp.com/ai.
