In the race for global competitiveness and technology leadership, the Department of Energy (DOE) recently unveiled its Summit supercomputer, a joint solution from Oak Ridge National Laboratory, IBM, and Nvidia. Capable of more than 200 petaflops of peak performance and 3.3 exaops of mixed-precision performance, Summit gives scientists an unmatched platform for groundbreaking discoveries in astrophysics, materials science, cancer surveillance, and human health and disease.


Summit is just the beginning. In March, President Trump requested $636 million in funding for exascale activities, with additional funding increases anticipated.


Although supercomputers like DOE's Summit and Lawrence Berkeley National Laboratory's Edison are reinventing what's possible, reaping the benefits of high-performance computing (HPC) can be fraught with challenges. In most laboratories, any downtime is disastrous: a single outage can cost millions of dollars, or even put lives at risk. As the transition to exascale accelerates, labs need infrastructure that can keep up with extreme, real-time workloads and that is easy to deploy, manage, and scale. But those capabilities come at a cost, and no lab has the budget to build data center after data center just to keep pace. As a result, many labs outsource analytics and services to contract laboratories, which is neither cost-effective nor sustainable.


With the NetApp® HPC solution, you get industry-leading price/performance in a true pay-as-you-grow solution for your high-performance data workloads. Built on the modular NetApp E-Series storage architecture, the solution supports multipetabyte datasets and parallel file systems such as Lustre, IBM Spectrum Scale, and BeeGFS. You get consistent, near-real-time access to research data, with the ability to deploy and configure performance and capacity on the fly as your needs dictate. And with flexible support for 100Gb InfiniBand, 100Gb NVMe-oF, 32Gb FC, and 12Gb SAS connectivity, you don't have to stand up expensive equipment or rack up power, cooling, and support costs for gear you use only for a limited time.


NetApp HPC enables you to:

  • Accelerate performance. Deploy systems that deliver top performance in SPC-1 and SPC-2 benchmarks.
  • Simplify operations. Integrate with your existing infrastructure and applications by using plug-ins, APIs, and orchestration tools.
  • Improve reliability. Enjoy 99.9999%+ availability and industry-leading durability, based on nearly 1M units shipped.
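To put the availability figure above in concrete terms, here is an illustrative back-of-the-envelope calculation (not a NetApp tool; the function name is ours): at 99.9999% ("six nines") availability, the maximum downtime works out to roughly 31.5 seconds per year.

```python
# Illustrative sketch: convert an availability percentage into the
# maximum allowed downtime per year.
SECONDS_PER_YEAR = 365 * 24 * 60 * 60  # 31,536,000 (non-leap year)

def max_downtime_seconds(availability_pct: float) -> float:
    """Seconds of allowed downtime per year at a given availability %."""
    return SECONDS_PER_YEAR * (1 - availability_pct / 100)

print(round(max_downtime_seconds(99.9999), 1))  # six nines: ~31.5 s/year
print(round(max_downtime_seconds(99.9), 1))     # three nines: ~31,536 s (~8.76 h)
```

The gap between three and six nines is the difference between hours and seconds of outage per year, which is what makes the figure meaningful for labs where any downtime is costly.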

To learn more about NetApp high-performance computing solutions for labs, visit

Julie Fagan

Julie Fagan has a long career in high-tech solutions marketing. She loves working at NetApp, where she gets to focus on bringing the best video surveillance and high-performance computing storage solutions to the world, along with her awesome co-workers.
