Welcome to the third part of this five-part series on choosing storage for your high-performance computing (HPC) solution. In the first two blog posts, I talked about speed and reliability. Now, I will cover a characteristic that seems to go against all basic aspects of HPC: simplicity.
HPC environments are inherently complex. Hundreds or even thousands of systems run in parallel to complete millions of jobs per day in the largest environments. Managing all of these compute nodes along with the network and storage components is a daunting task—not to mention the complexities involved when the environment needs to scale to meet growing demand.
To relieve IT staff of some of these burdens, many companies look toward the cloud for a solution. According to Intersect360 Research, HPC in the cloud grew by 44% in 2017 and is expected to continue growing for the foreseeable future.
Although HPC solutions in the cloud can help simplify operations, the cloud comes with its own pitfalls:
- Performance bottlenecks from lack of high-bandwidth storage capabilities and low-latency networking
- Lack of control over data, including difficulty (or inability) to move it out of the cloud
- High cost of storing and moving data in the cloud
In an environment where complexity rules, simplicity is key. Whether you deploy your compute resources on premises, on the edge of the cloud, or in the cloud, the NetApp® HPC solution makes enterprise storage easy.
Built on a modular NetApp E-Series storage architecture, the solution offers:
- A single platform that is easy to install and support
- Nondisruptive scaling of performance and capacity without complex deployments or migrations
- Dynamic replication for simpler configuration and faster deployment
- Automation of common tasks
- Proactive monitoring and support that automates issue resolution and reduces management overhead
Stay tuned for the fourth part of this series, in which I will talk about the importance of scalability when you choose storage for your HPC deployment.