NetApp : How to speed up deep learning model training in the automotive sector

06/11/2021 | 11:05am EDT

Enabling lane detection at scale with NetApp, Run:AI, and Microsoft Azure

Today's automotive leaders are investing heavily in data-driven software applications to advance the most important innovations in autonomous and connected vehicles, mobility, and manufacturing. These applications require an orchestration solution and a shared file system for their massive datasets in order to run distributed training of deep learning models on GPUs. Training AI models in the automotive industry involves enormous numbers of images: each 2D color image becomes a 3D matrix of pixels and color (RGB) channels, which is analyzed at the pixel level to detect objects such as pedestrians, other cars, and traffic lights.
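The 3D-matrix representation described above can be sketched in plain Python. The tiny synthetic images and helper names here are illustrative only; real pipelines use NumPy or framework tensors, but the layout idea is the same:

```python
# Sketch: each 2D color image is a height x width grid of (R, G, B)
# pixels, i.e. a 3D matrix; stacking N of them gives a training batch
# of shape (N, H, W, 3). Tiny 4x4 images keep the example readable.

def make_rgb_image(height, width, fill=(0, 0, 0)):
    """A 2D color image as a height x width grid of (R, G, B) pixels."""
    return [[list(fill) for _ in range(width)] for _ in range(height)]

def stack_batch(images):
    """Stack images into a batch with layout (N, H, W, 3)."""
    return list(images)

def shape(batch):
    n = len(batch)
    h = len(batch[0])
    w = len(batch[0][0])
    c = len(batch[0][0][0])
    return (n, h, w, c)

batch = stack_batch([make_rgb_image(4, 4, (255, 0, 0)) for _ in range(8)])
print(shape(batch))  # (8, 4, 4, 3)
```

A detection model then convolves over the H, W, and channel dimensions of each image in the batch.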

GPUs need to be kept at high utilization to reduce training times, permit fast experimentation, and minimize the cost of usage. In addition, a high-performance, easy-to-use file system that prevents GPUs from idling while they wait for data ('GPU starvation') is imperative for accelerating model training in the cloud and optimizing cost.
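The overlap between data loading and compute that prevents GPU starvation can be illustrated with a minimal background prefetcher. `load_sample` and the buffer size are hypothetical stand-ins, not NetApp or Run:AI APIs:

```python
# Sketch: a background thread fills a bounded buffer while the consumer
# (standing in for the GPU) drains it, so I/O overlaps with compute
# instead of stalling it.

import queue
import threading

def load_sample(i):
    # Placeholder for reading one training sample from shared storage.
    return {"id": i, "pixels": [0] * 16}

def prefetch(num_samples, buffer_size=4):
    q = queue.Queue(maxsize=buffer_size)
    sentinel = object()

    def producer():
        for i in range(num_samples):
            q.put(load_sample(i))  # blocks when the buffer is full
        q.put(sentinel)

    threading.Thread(target=producer, daemon=True).start()
    while True:
        item = q.get()
        if item is sentinel:
            break
        yield item

steps = sum(1 for _ in prefetch(10))
print(steps)  # 10
```

Framework data loaders apply the same idea with multiple worker processes; a sufficiently fast file system is what lets the producers keep the buffer full.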

Run:AI, Microsoft, and NetApp have teamed up to address a lane-detection use case by building a distributed training deep learning solution at scale that runs in the Azure cloud. This solution enables data scientists to fully embrace the Azure cloud scaling capabilities and cost benefits for automotive use cases.

How we set up our deep learning model training

Here are the tools we used, and how we used them:

  • Azure NetApp Files provided high-performance, low-latency, scalable storage through NetApp® Snapshot copies, cloning, and replication.
  • Azure Kubernetes Service (AKS) simplified deploying and orchestrating a managed Kubernetes cluster in Azure.
  • Azure compute SKUs with GPUs: specialized VMs available with single or multiple GPUs.
  • Run:AI enabled pooling of GPUs into two logical environments: one for build and one for training workloads. A scheduler manages the compute requests that come from data scientists, enabling elastic scaling from fractions of a GPU to multiple GPUs and multiple GPU nodes. The Run:AI platform is built on top of Kubernetes, enabling simple integration with existing IT and data science workflows.
  • NetApp Trident integrates natively with AKS and its Persistent Volume framework and was used to seamlessly provision and manage volumes from systems running on Azure NetApp Files.
  • Finally, we did machine learning (ML) versioning by using Azure NetApp Files Snapshot technology combined with Run:AI. This combination preserved data lineage and allowed data scientists and data engineers to collaborate and share data with their colleagues.
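As a toy illustration of the pooling and fractional-allocation idea behind the GPU scheduler described above (this is a conceptual sketch only, not Run:AI's actual scheduler or API):

```python
# Toy sketch: jobs request fractions of a GPU (or more than one GPU),
# and the pool packs the request onto free devices, rolling back if it
# cannot be satisfied. Illustrates the concept of elastic GPU pooling.

def allocate(pool, request):
    """pool: list of free capacity per GPU (1.0 = a whole, idle GPU).
    request: GPUs needed; may be fractional or span multiple devices."""
    grants = []
    remaining = request
    for gpu, free in enumerate(pool):
        if remaining <= 0:
            break
        take = min(free, remaining, 1.0)
        if take > 0:
            pool[gpu] -= take
            remaining -= take
            grants.append((gpu, take))
    if remaining > 1e-9:  # not enough capacity: roll back partial grants
        for gpu, take in grants:
            pool[gpu] += take
        return None
    return grants

pool = [1.0, 1.0]            # two pooled GPUs
print(allocate(pool, 0.5))   # [(0, 0.5)]  -- a fractional build job
print(allocate(pool, 1.5))   # [(0, 0.5), (1, 1.0)]  -- a training job
print(allocate(pool, 0.25))  # None -- pool exhausted
```

A real scheduler also handles queueing, preemption, and fairness across projects; this only shows the packing step.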

What we found

By working with Run:AI, Azure, and NetApp technology, we enabled distributed computation in the cloud, creating a high-performing distributed training system. The system worked with tens of GPUs that communicated simultaneously in a mesh-like architecture. And, to optimize cost, we were able to keep them fully occupied at about 95% to 100% utilization.
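The simultaneous, mesh-like gradient exchange among workers is typically a collective all-reduce. A minimal sketch of the result each worker sees (real systems use NCCL or MPI over the GPU interconnect; this only shows the arithmetic):

```python
# Sketch: after an all-reduce, every worker holds the element-wise mean
# of all workers' gradients, so all model replicas take identical steps.

def all_reduce_mean(per_worker_grads):
    """Return one identical averaged-gradient copy per worker."""
    n = len(per_worker_grads)
    summed = [sum(vals) for vals in zip(*per_worker_grads)]
    mean = [s / n for s in summed]
    return [list(mean) for _ in range(n)]

grads = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]  # 3 workers, 2 parameters
print(all_reduce_mean(grads))  # [[3.0, 4.0], [3.0, 4.0], [3.0, 4.0]]
```

Keeping this exchange fast relative to the compute step is what allows the sustained 95-100% utilization reported above.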

We were able to saturate GPU utilization and keep the GPU cycles as short as possible (GPUs are one of the highest-cost components in the architecture). Azure NetApp Files provides various performance tiers that guarantee sustained throughput at submillisecond latency. We started our distributed training job on a small GPU cluster and later added GPUs to the cluster on demand, without interrupting the training, by using the dynamic service level change capabilities of the Run:AI software to provide optimal GPU utilization.
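Adding GPUs mid-run without restarting the job boils down to resharding the global batch whenever the worker count changes. A hedged sketch with illustrative names (not the actual Run:AI mechanism):

```python
# Sketch: the training loop re-reads the current worker count each step
# and splits the global batch into near-equal shards, so new GPUs can
# join without the job restarting.

def shard(batch, num_workers):
    """Split a global batch into num_workers near-equal shards."""
    k, r = divmod(len(batch), num_workers)
    shards, start = [], 0
    for w in range(num_workers):
        size = k + (1 if w < r else 0)
        shards.append(batch[start:start + size])
        start += size
    return shards

batch = list(range(10))
print(shard(batch, 2))  # [[0, 1, 2, 3, 4], [5, 6, 7, 8, 9]]
print(shard(batch, 4))  # [[0, 1, 2], [3, 4, 5], [6, 7], [8, 9]]
```

When the scheduler grants two more GPUs, the next step simply shards four ways instead of two; no samples are lost or duplicated.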

Different data science and data engineering teams were able to use the same dataset for different projects. One team was able to work on lane detection, while another team worked on a different object detection task using the same dataset. Researchers and engineers were able to allocate volumes on demand.
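Snapshot-based sharing of one dataset across teams relies on copy-on-write semantics. A conceptual sketch of that idea (not the Azure NetApp Files API; class and key names are hypothetical):

```python
# Sketch: two teams "clone" the same read-only snapshot; each clone's
# writes land in its own overlay, so the shared base data is never
# duplicated or modified -- the essence of copy-on-write cloning.

class Clone:
    def __init__(self, base):
        self._base = base   # shared, read-only snapshot
        self._overlay = {}  # this clone's private writes

    def read(self, key):
        return self._overlay.get(key, self._base.get(key))

    def write(self, key, value):
        self._overlay[key] = value

snapshot = {"frame_001": "lane-image-bytes"}
lanes_team = Clone(snapshot)
objects_team = Clone(snapshot)
objects_team.write("frame_001", "relabeled-for-objects")
print(lanes_team.read("frame_001"))    # lane-image-bytes
print(objects_team.read("frame_001"))  # relabeled-for-objects
```

Each team sees its own view of the dataset while the underlying storage holds a single copy, which is what makes on-demand volumes cheap to hand out.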

We had full visibility of the AI infrastructure. Using Run:AI's platform, we could see all pooled GPUs at the job, project, cluster, and node levels.

Looking to get started?

In this use case, lane detection for autonomous vehicles, we were able to use NetApp, Run:AI, and Azure to create a single, unified experience for accelerating model training in the cloud, reducing costs while improving training times and simplifying processes for data scientists and engineers. Details are available in this technical report and apply to model training across industries and verticals.


NetApp Inc. published this content on 11 June 2021 and is solely responsible for the information contained therein. Distributed by Public, unedited and unaltered, on 11 June 2021 15:04:00 UTC.

© Publicnow 2021