So far in this blog series, I’ve focused on the nuts and bolts of planning AI deployments, building data pipelines from edge to core to cloud, and the considerations for moving machine learning and deep learning projects from prototype to production. Over the next several months, I want to focus on real-world AI use cases in specific industries, including automotive, healthcare, financial services, and manufacturing.

I’ll be starting with the automotive industry, exploring how companies are applying the data engineering and data science technologies I’ve been discussing to transform transportation. I’ll take a closer look at the problems companies are trying to solve, and explore approaches for gathering data from a variety of sensors and other sources as well as building appropriate data pipelines to satisfy both training and inferencing needs.

AI Use Cases in Automotive

Even when you focus on a single industry like automotive, the number of possible AI use cases is large. NetApp divides AI in the auto industry into four segments with multiple use cases in each segment:

  • Autonomous driving
  • Connected vehicles
  • Mobility as a Service
  • Smart manufacturing

Naturally, there are overlaps between some of these segments; success in one area can yield benefits in another. For example, autonomous driving may be an essential element of a mobility-as-a-service strategy. There are also many requirements that all segments have in common, including infrastructure integration, advanced data management, and security/privacy/compliance.

I’ll look at each of these segments in more detail in upcoming blogs, but I want to introduce them here and highlight some of the key challenges and use cases in each.

Autonomous driving

When you think about AI in automotive, self-driving is likely the first use case that comes to mind. While the holy grail in the industry is full self-driving, most companies are already offering increasingly sophisticated advanced driver assistance systems (ADAS) as stepping stones toward Level 5 autonomy.

Figure 1: The five stages of autonomy

But the challenges to achieving full self-driving are significant. Each car deployed for R&D generates a mountain of data (1TB per hour per car is typical). Teams can expect to accumulate hundreds of petabytes to exabytes of data as autonomous driving projects progress, resulting in significant challenges:

  • How do you create a pipeline to move data efficiently from vehicles to train your neural network?
  • How do you efficiently prepare (image quality, resolution) and label data for neural network training?
  • How much storage and compute will you need to train your neural network? Should your training cluster be on-premises or in the cloud?
  • How do you correctly size infrastructure for your data pipelines and training clusters including storage needs, network bandwidth, and compute capacity?
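To get a feel for the scale behind these questions, here is a back-of-the-envelope sizing sketch. Only the 1TB per hour per car figure comes from this post; the fleet size, duty cycle, retention fraction, and retention window are illustrative assumptions, not sizing guidance:

```python
# Back-of-the-envelope data volume estimate for an autonomous driving
# R&D fleet. Only the 1 TB/hour/car rate is from the text above; the
# fleet size, hours, days, and retention figures are assumptions.

TB_PER_CAR_HOUR = 1.0   # typical raw sensor capture rate cited above
CARS = 100              # assumed R&D fleet size
HOURS_PER_DAY = 8       # assumed daily test-driving duty cycle
DAYS_PER_YEAR = 250     # assumed test-driving days per year

raw_tb_per_year = TB_PER_CAR_HOUR * CARS * HOURS_PER_DAY * DAYS_PER_YEAR
raw_pb_per_year = raw_tb_per_year / 1000

print(f"Raw capture: {raw_tb_per_year:,.0f} TB/year (~{raw_pb_per_year:.0f} PB/year)")

# Even after curation discards most frames, multi-year retention for
# regression testing pushes totals toward hundreds of petabytes.
KEEP_FRACTION = 0.25    # assumed fraction retained after curation
YEARS = 5               # assumed retention window
retained_pb = raw_pb_per_year * KEEP_FRACTION * YEARS
print(f"Retained after curation: ~{retained_pb:.0f} PB over {YEARS} years")
```

Even with these modest assumptions, a 100-car fleet lands in the hundreds-of-petabytes range, which is why pipeline and infrastructure sizing questions dominate the list above.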

I’ll cover many of these autonomous driving topics in depth in the next several blogs, including how to architect data pipelines for gathering and managing data, DL workflows, and the various models that researchers are exploring to achieve autonomous driving.

Connected vehicles

We increasingly expect all our devices to be as connected and intelligent as our smartphones. Cars and other vehicles are quickly transforming into connected devices, and there are a number of immediate use cases for AI in connected cars:

  • Personal assistants / voice-activated operations
  • Telematics and predictive maintenance
  • Infotainment/recommenders

Today, cars use cellular and Wi-Fi connections to upload and download entertainment, navigation, and operational data. In the near future, we’ll also see cars connecting to each other, to our homes, and to infrastructure. Audi has already introduced technology to connect cars to stoplight infrastructure, enabling drivers in select cities to catch a “green wave” by timing their drives to avoid red lights.

That’s just one of many opportunities to use data from connected cars. While not every use case requires artificial intelligence, in an upcoming blog I’ll focus on several important use cases that do, including predictive maintenance. We’ll explore approaches to efficiently gather and process information from cars around the globe.

Mobility as a Service

In the future, car ownership may decline in favor of various forms of ride sharing, particularly in dense urban areas. Car companies will need to become mobility companies to address changing consumer demand. Many car companies are already branching out, acquiring scooter- and bike-sharing companies and creating delivery services.

The machine learning and deep learning problems in mobility-as-a-service models are significantly different from those in autonomous driving:

  • How do you predict customer demand?
  • How do you optimize fleet efficiency and minimize customer wait times?
  • How do you dynamically set prices in response to demand?
  • How do you ensure passenger physical security?
  • How do you protect customer data, prevent fraud, and balance privacy versus convenience?
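To make the first question concrete, one of the simplest demand-forecasting baselines is an average of historical ride requests per hour and zone. This is a hypothetical illustration, not a production approach; real mobility services layer on much richer features such as weather, events, and pricing:

```python
# Minimal demand-forecasting baseline: mean historical ride requests
# per (hour_of_day, zone) bucket. Hypothetical illustration only.
from collections import defaultdict

def hourly_baseline(history):
    """history: iterable of (hour_of_day, zone, requests) observations.
    Returns {(hour_of_day, zone): mean requests} as the forecast."""
    totals = defaultdict(lambda: [0, 0])  # (hour, zone) -> [sum, count]
    for hour, zone, requests in history:
        bucket = totals[(hour, zone)]
        bucket[0] += requests
        bucket[1] += 1
    return {key: s / n for key, (s, n) in totals.items()}

# Example: three morning observations for a downtown zone, one for the
# airport (zone names and counts are made up for illustration).
history = [(8, "downtown", 120), (8, "downtown", 100),
           (8, "downtown", 110), (17, "airport", 60)]
forecast = hourly_baseline(history)
print(forecast[(8, "downtown")])  # → 110.0
print(forecast[(17, "airport")])  # → 60.0
```

Even a baseline this simple highlights the infrastructure question: the history table grows with every trip in every city, and the forecasts must be served with low latency wherever riders open the app.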

From an infrastructure standpoint, these distributed problems call for different strategies and may require smart algorithms on the consumer’s device (smartphone), in the vehicle, and in the cloud, plus long-term, secure data management for compliance.

Smart Manufacturing

The auto industry has a lot on its plate. Companies must look for ways to increase operational efficiency to free up capital for investments like those described above. Industrial Internet of Things (IIoT) and Industry 4.0 technologies are the key to streamlining business, automating and optimizing manufacturing processes, and increasing the efficiency of the supply chain.

Common manufacturing use cases include:

  • Increased use of computer vision for anomaly detection
  • Process control for improved quality/reduced waste
  • Predictive maintenance to maximize productivity of manufacturing equipment

I’ll explore the applications of AI for smart manufacturing across all industries, including automotive, in a future blog.

Meet NetApp at TU-Automotive Detroit, June 4-6

NetApp is an exhibitor at TU-Automotive Detroit, the world’s largest auto tech conference and the only place to meet the most innovative minds in connected cars, mobility & autonomous vehicles under one roof. Visit us at booth C224 to meet with our auto subject matter experts.

Learn how NetApp is partnering with NVIDIA, systems integrators, hardware providers, and cloud partners to put together smart, powerful, trusted AI automotive solutions to help you achieve your business goals.

Attend the panel discussion: AI & the Brains Behind the Operation on June 6, 2:45 pm, with Thomas Carmody, Head of Transport and Infrastructure at our partner Cambridge Consultants (booth B140). Thomas will address—amongst other topics—how to anticipate the data storage challenges of meeting autonomous vehicle (AV) grade-level requirements.

More Information and Resources

NetApp is working to create advanced tools that eliminate bottlenecks and accelerate results—results that yield better business decisions, better outcomes, and better products.

NetApp ONTAP AI and NetApp Data Fabric technologies and services can jumpstart your company on the path to success. Check out these resources to learn about ONTAP AI.

Previous blogs in this series:

  1. Is Your IT Infrastructure Ready to Support AI Workflows in Production?
  2. Accelerate I/O for Your Deep Learning Pipeline
  3. Addressing AI Data Lifecycle Challenges with Data Fabric
  4. Choosing an Optimal Filesystem and Data Architecture for Your AI/ML/DL Pipeline
  5. NVIDIA GTC 2018: New GPUs, Deep Learning, and Data Storage for AI
  6. Five Advantages of ONTAP AI for AI and Deep Learning
  7. Deep Dive into ONTAP AI Performance and Sizing
  8. Bridging the CPU and GPU Universes
  9. Make Your Data Pipeline Super-Efficient by Unifying Machine Learning and Deep Learning

Santosh Rao

Santosh Rao is a Senior Technical Director for the Data ONTAP Engineering Group at NetApp. In this role, he is responsible for the Data ONTAP technology innovation agenda for workloads and solutions ranging from NoSQL and big data to deep learning and other 2nd and 3rd platform workloads.

He has held a number of roles within NetApp, leading the original ground-up development of Clustered ONTAP SAN as well as a number of follow-on ONTAP SAN products for data migration, mobility, protection, virtualization, SLO management, app integration, and all-flash SAN. Prior to joining NetApp, Santosh was a Master Technologist at HP, where he led the development of a number of storage and operating system technologies, including HP’s early-generation products across a variety of storage and OS areas.