High-performance computing (HPC) is the ability to process data and perform complex calculations at high speeds. HPC is the foundation for scientific, industrial, and societal advancements. It is through data that groundbreaking scientific discoveries are made, game-changing innovations are fueled, and quality of life for billions of people around the globe is improved.

As more companies in all industries around the globe implement HPC solutions, there are three critical components they need to consider: compute, networking, and storage. In this five-part blog series, we’ll take a look at the storage component and the five key characteristics you should look for when selecting storage to support your HPC operations:

  1. Speed
  2. Reliability
  3. Simplicity
  4. Scalability
  5. Cost

I’ll start by talking about the most obvious characteristic: speed.

The story of the tortoise and the hare is widely used to prove that fast doesn’t always win the race, but in the highly competitive world of business, speed is everything. Being the first to make a groundbreaking discovery helps labs and biomedical companies secure grants to continue their research. Getting new products to market first gives manufacturers a significant edge over the competition. Being the first to locate new reserves helps oil and gas companies take a larger share of the global energy market. Coming in first is so important that several organizations are using Summit, the world’s fastest supercomputer, to help accelerate research and make their mark in science.

However, even the fastest supercomputer cannot meet expectations if it doesn’t have equally fast storage to support it. The compute component of the HPC solution can work only as fast as the storage that feeds it the raw data and the storage that ingests the processed data. A lag in the storage results in a lag in processing. To get the most value from their HPC investments, organizations need high-performance storage.

Whether you’re working to discover a cure for cancer, to deliver mind-blowing special effects for the next blockbuster film, or to develop new materials to build skyscrapers that can withstand the most powerful earthquakes, the NetApp® HPC solution can help accelerate the process.

Built on a modular NetApp E-Series storage architecture, the solution can support multipetabyte datasets and file systems such as Lustre, IBM Spectrum Scale, and BeeGFS. Enterprise-grade E-Series systems deliver top performance in industry benchmarks, offering:

  • Less than 50ms latency
  • Up to 1 million random read IOPS
  • Up to 13GBps sustained write bandwidth per scalable building block

With nearly 1 million systems shipped, NetApp E-Series storage is a proven solution for the most extreme workloads.
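To put the per-building-block bandwidth figure in perspective, here is a rough back-of-the-envelope sketch, not a benchmark: it simply estimates how long ingesting a dataset would take at a constant sustained write rate, ignoring real-world factors such as file-system overhead and I/O patterns. The function name and units are illustrative, not part of any NetApp tooling.

```python
def ingest_time_hours(dataset_tb: float, bandwidth_gbps: float = 13.0) -> float:
    """Rough estimate of ingest time in hours, assuming a constant
    sustained write bandwidth in GB/s (decimal units throughout)."""
    dataset_gb = dataset_tb * 1_000          # 1 TB = 1,000 GB (decimal)
    seconds = dataset_gb / bandwidth_gbps    # time at sustained bandwidth
    return seconds / 3600

# A 100TB dataset at 13GBps sustained: roughly 2.1 hours per building block.
print(round(ingest_time_hours(100), 1))
```

Because the architecture is modular, adding building blocks divides that time accordingly, which is why per-building-block bandwidth matters once datasets grow to multiple petabytes.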

Learn more about the NetApp HPC solution, and find out how customers around the world are maximizing performance in their HPC environments with NetApp E-Series storage.

And stay tuned for the second part of this five-part series, in which I will talk about the importance of reliability when you choose storage for your HPC deployment.

Julie Fagan

Julie Fagan has a long career in high-tech solutions marketing. She loves working at NetApp, where she gets to focus on video surveillance and on bringing the best video storage solutions to the world alongside her awesome co-workers.