With so many “next-generation” data management solutions available today, why would anyone choose SAN (Storage Area Network) over something like HCI or just moving everything straight to the cloud?

I see SAN as the solution to a real problem: it takes away the complexity of a multi-server environment by consolidating storage onto a single, redundant, high-performance appliance accessible over a private network.

This architecture is also useful if you have already invested significantly in compute but need to scale storage on demand. With SAN, you can maximise compute utilisation while scaling storage capacity independently, squeezing every compute cycle and gigabyte out of your investments.

Pooling resources together offers cost benefits as well: there are economies of scale, after all! Take, for example, an implementation with 20 independent VMware datastores, each with around 150GB of free space. Suppose a new VM requires ~200GB of storage capacity. While there is plenty of compute, no single datastore can accommodate it, so you would have to purchase a new server for just one VM, even though around 3TB of unused storage space is sitting idle.
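
The stranded-capacity arithmetic above can be sketched in a few lines. This is purely illustrative (the figures are the ones from the example, not output from any VMware API):

```python
# Illustrative sketch: stranded capacity across independent datastores
# versus a single pooled SAN volume. Figures mirror the example above.

datastores = [150] * 20   # GB free on each of 20 independent datastores
new_vm_need = 200         # GB required by the new VM

total_free = sum(datastores)                               # aggregate free space
fits_somewhere = any(ds >= new_vm_need for ds in datastores)

print(f"Aggregate free space: {total_free} GB")            # 3000 GB sitting unused
print(f"Any single datastore can host the VM? {fits_somewhere}")   # False
print(f"Pooled SAN volume can host the VM? {total_free >= new_vm_need}")  # True
```

With storage pooled behind a SAN, that 3TB of fragmented free space becomes one allocatable resource.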

What if CPU utilization is around 30% across all servers and RAM is at 40%, but there are no free HDD slots? You could perhaps swap in larger disks to compensate, but what do you do with the VMs while you perform that upgrade? SAN enables you to seamlessly migrate workloads from one server to another and perform maintenance on the underlying systems without any disruption. With SAN, workloads of different sizes can even sit on the same shared storage while utilizing different compute.

By choosing to put NetApp at the heart of your SAN infrastructure, you get many other great features:

  • Storage efficiencies such as data deduplication, compression, and thin provisioning help free up capacity and relieve management strain
  • Data protection features like snapshots, SnapMirror, and SnapVault let you offload analysis and data protection cycles to a secondary system, keeping your primary storage free, while also helping you stay compliant
  • Data replication technologies such as MetroCluster allow you to mirror two NetApp systems over a distance of 700 kilometres for geographically disparate disaster recovery
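
To see why thin provisioning in particular frees up capacity, here is a minimal sketch of the idea: volumes advertise more capacity than the aggregate physically holds, because space is consumed only as data is actually written. The figures below are hypothetical, not NetApp defaults:

```python
# Hypothetical sketch of thin provisioning. All numbers are made up
# for illustration; they are not NetApp configuration values.

physical_capacity = 10_000           # GB actually installed in the aggregate
provisioned = [2_000] * 8            # eight thin volumes of 2TB each
written = [600, 450, 900, 300, 750, 500, 820, 400]  # GB actually written per volume

logical_committed = sum(provisioned)             # capacity promised to hosts
physical_used = sum(written)                     # capacity really consumed
overcommit_ratio = logical_committed / physical_capacity

print(f"Committed: {logical_committed} GB on {physical_capacity} GB physical")
print(f"Actually used: {physical_used} GB ({overcommit_ratio:.1f}x overcommit)")
```

The aggregate here promises 16TB against 10TB of physical disk, yet only 4.7TB is really consumed; deduplication and compression stretch the physical side further still.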

I wanted to touch on another big benefit of SAN: speed! With the advancements in flash technology and the growing popularity of NVMe (Non-Volatile Memory Express), SAN can offer you higher IOPS and lower-latency access for your business-critical applications.

This speed of access is especially important given the ever-increasing demand for instant access to the data generated by our business-critical applications. These applications are often designed for a SAN environment and, as such, perform better there than on other solutions such as cloud.

SAN is not just for business-critical systems but also for the modern data centre, providing us with ultra-low-latency access, storage efficiencies, and data retention across the data fabric, ensuring our data is where we need it, when we need it.

Matthew Underhill

Matt is an IT Team Leader at the Liverpool School of Tropical Medicine (LSTM), the oldest school of tropical medicine in the world, where he manages the L2/L3 and Server Team. He also facilitates IT support for a number of LSTM's overseas offices, and is the lead technical contact for one of LSTM's biggest partners in Malawi. Growing up in Kenya, Matt is passionate about driving digital transformation in his native home of Africa. He has nearly a decade of IT experience, and has been working with NetApp since 2016.

When he's not designing and implementing IT infrastructure at work, he's doing it at home too! He enjoys playing around in his home lab with computers, switches, routers, APs, and whatever else he has lying around. He's also an animal lover, with a whole farm's worth of critters in his back yard. Matt would say he's an "urban chicken keeper, a Jack Russell tamer, and a rabbit wrangler."