Leading Lap Times in the Premier League of AI Applications
If you want to set the fastest lap times in MotoGP, you have to fulfill a few requirements: the rider needs excellent reaction time and must handle the g-forces optimally. The bike itself has to be perfectly tuned, which means determining the right setup data. The bike must also be robust and capable of high speed—on a racetrack that accommodates it.
Such a scenario is not restricted to motorcycles in MotoGP racing. Extremely fast, real-time data movement with low latency is also required for certain applications of artificial intelligence (AI), machine learning (ML), and deep learning (DL) with neural networks. When you have hot datasets—that is, important information that is accessed frequently—along with increasingly demanding algorithms, it is a challenge to provision them optimally and to manage the associated tasks.
In this case, companies usually make do with closed “server only” systems. These systems work at high speed. However, they are very limited in scalability and in the flexibility with which they can serve data.
A more powerful solution with the BeeGFS file system
Converged infrastructures such as FlexPod® Select from NetApp and Cisco, complemented by the BeeGFS file system from ThinkParQ, are considerably more powerful for these applications.
FlexPod consists of the following components:
- High-performance NetApp® EF-Series storage
- Cisco Unified Computing System (UCS) technology of the M5 class with Fabric Interconnect and powerful Cisco Nexus switches
The converged infrastructure FlexPod is inherently well suited to the most demanding AI, ML, and DL applications and heavy analytics workloads. But BeeGFS extends the system for specific use cases. When a system analyzes video data, for example, BeeGFS distributes and manages the metadata. Capacity can be expanded so conveniently and efficiently that high-performance computing (HPC) and AI cluster solutions can easily handle peak I/O loads. This capability enables fast access to the hot data. All-flash NetApp E-Series storage significantly improves system performance. In addition, it can be scaled and made fail-safe.
Latencies of less than 100 microseconds
The combination of the BeeGFS file system and the NetApp EF570 all-flash array ensures that the metadata requirements of a parallel file system are met. It requires only two rack units to deliver extreme IOPS. Latencies are less than 100 microseconds, and bandwidth reaches up to 21Gbps. With optional NVIDIA GPUs in the Cisco UCS servers, for example, 90,000 high-resolution images can be processed per second in an analytical recognition and categorization application. The decisive factor is demand-driven, parallel data paths—for example, with Spark. Results can then be written back efficiently to BeeGFS.
The BeeGFS parallel file system with NetApp technology offers additional benefits:
- Cost control: The BeeGFS base is free, and several features can be added for a fee.
- Professional BeeGFS support is available if you experience any problems.
- The BeeOND (BeeGFS On Demand) parallel file system aggregates the local drives of the compute nodes and accelerates AI training.
- Data is securely stored thanks to buddy mirroring.
- Storage pools enable manual tiering based on access frequency.
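To illustrate how storage-pool tiering is driven, the following sketch shows typical `beegfs-ctl` invocations. The pool name, target IDs, pool ID, and directory path are placeholders, and exact options may vary between BeeGFS versions—treat this as a hedged example, not a definitive procedure.

```shell
# Group fast flash targets into a dedicated storage pool
# (target IDs 101 and 102 are placeholders for real targets)
beegfs-ctl --addstoragepool --desc="hot-flash" --targets=101,102

# List the configured pools and the targets they contain
beegfs-ctl --liststoragepools

# Pin a directory of hot data to the flash pool;
# new files created below this path land on the fast targets
beegfs-ctl --setpattern --storagepoolid=2 /mnt/beegfs/hot-data
```

Because the tiering is manual, an administrator decides which directories belong on the hot flash pool, matching the access-frequency policy described above.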
Furthermore, BeeGFS can be set up in a few minutes. It requires no kernel modifications, and its services run in user space. Setup essentially consists of pointing each node to the IP address of the management node and starting the appropriate service for management, metadata, storage, or clients. NetApp SANtricity® on-box management makes installation easy and provides the necessary high availability.
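As a rough illustration of how quick that setup is, the sequence below sketches bringing up a minimal BeeGFS cluster, assuming the standard BeeGFS packages are already installed. The host name `mgmt-host`, the data paths, and the service IDs are placeholders; consult the BeeGFS documentation for the options appropriate to your version.

```shell
# On the management node: initialize and start the management service
/opt/beegfs/sbin/beegfs-setup-mgmtd -p /data/beegfs/mgmtd
systemctl start beegfs-mgmtd

# On each metadata node: point the metadata service at the management host
/opt/beegfs/sbin/beegfs-setup-meta -p /data/beegfs/meta -s 1 -m mgmt-host
systemctl start beegfs-meta

# On each storage node: register a storage target with the management host
/opt/beegfs/sbin/beegfs-setup-storage -p /data/beegfs/storage -s 1 -i 101 -m mgmt-host
systemctl start beegfs-storage

# On each client: configure and mount the file system
/opt/beegfs/sbin/beegfs-setup-client -m mgmt-host
systemctl start beegfs-client
```

Each step does little more than record the management node's address and launch a user-space daemon, which is why a working cluster can be assembled in minutes.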
FlexPod Select with BeeGFS as the file system is one of the NetApp AI solutions for achieving deep analytics for AI, ML, or DL applications with maximum performance—through a fast, flexible, and secure platform.
To find out more, visit the FlexPod Select product page.