Welcome to the age of scale-out converged systems—made possible by FlexPod® SF. Together, Cisco and NetApp are delivering this new FlexPod solution built architecturally for the next-generation data center. Architects and engineers are being asked to design converged systems that deliver new capabilities to match the demands of consolidating silos, expanding to web-native apps, and embracing new modes of operation (for example, DevOps).

 

New Criteria for Converged Systems

Until now, converged systems have been designed around integration, testing, and ease of configuration, within the confines of current IT operations and staples such as Fibre Channel. A new approach, however, focuses on a different set of design requirements:

Converged systems need to deliver on performance, agility, and value.

 

Enter the First Scale-Out FlexPod Solution Built on Cisco UCS, Nexus, and NetApp SolidFire

Cisco and NetApp have teamed to deliver FlexPod SF, the world’s first scale-out converged system, built on Cisco UCS servers, Cisco Nexus switching, Cisco management, VMware vSphere 6, and the newly announced NetApp® SolidFire® SF9608 nodes running the NetApp SolidFire Element® OS on the Cisco C220 platform. The solution is designed to bring the next-generation data center to FlexPod.

 

SF9608 Nodes Powered by Cisco C220 and the NetApp SolidFire Element OS

The critical part of bringing FlexPod SF forward is the new NetApp SF9608 node. For the first time, NetApp is producing a Cisco C220-based node appliance running the Element OS.

 

SF9608 nodes, built on the Cisco UCS C220 M4 SFF Rack Server, have these specifications:

  • CPU: 2 x 2.6GHz (E5-2640 v3)
  • Memory: 256GB RAM
  • Storage: 8 x 960GB SSD drives (non-SED)
  • Raw capacity: 6TB per node

Each node has these characteristics:

  • Block storage: iSCSI-only solution
  • Per volume IOPS-based quality of service (QoS)
  • 75,000 IOPS
  • Data kept as two copies: a primary copy and one replicated copy

Users can obtain support through 888-4NetApp or Mysupport@netapp.com.

 

The key here is that it’s the same Element OS that’s nine revisions mature, born from service providers, and used by some of the biggest enterprise and telco businesses in the world. The Element OS is preconfigured on the C220 node hardware to deliver a storage node appliance just for FlexPod. Element OS 9 delivers:

  • Scale-out clustering. You can cluster a minimum of four nodes and then add or remove nodes as needed. You get maximum flexibility with linear scale for performance and capacity, because every node contributes CPU, RAM, 10Gb networking, SSD IOPS, and capacity.
  • QoS. You can control IOPS across the entire cluster, setting minimum, maximum, and burst values per workload to run mixed workloads without performance issues.
  • Automation and programmability. The Element OS has a 100% exposed API, which makes it well suited to programming no-touch operations (see the sketch below).
  • Data assurance. The OS protects data from the loss of drives or nodes. Recovery from a drive failure takes about 5 minutes, and recovery from a full node failure takes less than 60 minutes, all without any data loss.
  • Inline efficiency. Efficiency features are always on and applied inline, reducing the data footprint through deduplication, compression, and thin provisioning.

The Element OS is also different from existing storage software. It’s important to understand that FlexPod SF is not a dual-controller architecture with SSD shelves, so roughly 93% of the usual overhead tasks simply go away.
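
Because the Element OS exposes its entire API, per-volume QoS can be driven programmatically rather than by hand. The short Python sketch below shows the idea, assuming the Element OS’s JSON-RPC style API and its ModifyVolume method with minIOPS, maxIOPS, and burstIOPS parameters; the cluster address, credentials, and volume ID are placeholders, and the details should be checked against the Element API reference for your release.

    # Sketch: set per-volume QoS through the Element OS API (assumed JSON-RPC form).
    # Endpoint, credentials, and volume ID are placeholders, not a real cluster.
    import requests

    ENDPOINT = "https://cluster-mvip/json-rpc/9.0"   # cluster management virtual IP (placeholder)
    AUTH = ("admin", "password")                     # cluster admin credentials (placeholder)

    def set_volume_qos(volume_id, min_iops, max_iops, burst_iops):
        """Apply minimum, maximum, and burst IOPS settings to a single volume."""
        payload = {
            "method": "ModifyVolume",
            "params": {
                "volumeID": volume_id,
                "qos": {
                    "minIOPS": min_iops,
                    "maxIOPS": max_iops,
                    "burstIOPS": burst_iops,
                },
            },
            "id": 1,
        }
        # verify=False only because many lab clusters use self-signed certificates
        response = requests.post(ENDPOINT, json=payload, auth=AUTH, verify=False)
        response.raise_for_status()
        return response.json()

    # Example: guarantee 1,000 IOPS, cap at 5,000, and allow bursts to 8,000.
    set_volume_qos(volume_id=42, min_iops=1000, max_iops=5000, burst_iops=8000)

Calls like this are what make no-touch operations possible: the same request can be issued from VMware vRealize Automation workflows, Cisco UCS Director tasks, or PowerShell wrappers.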

 

Use Cases Delivering the Next-Generation Data Center

As you design for the next-generation data center, you’ll find requirements that often sound like buzzwords but take on concrete technical meaning in what FlexPod SF delivers:

  • Agility. The infrastructure stack can respond to a variety of on-demand needs: additional resources, virtual machine (VM) or app builds that don’t wait on infrastructure requests, and autonomous self-healing from failures or performance issues through end-to-end QoS across compute, network, and storage.
  • Scalability. You gain scalability not just in size but in how you scale: with granularity, across generations of products, moving, adding, or changing resources such as the new storage nodes. FlexPod SF delivers scale in size (multiple petabytes, millions of IOPS, and so on) and gives you maximum flexibility to redeploy and adjust scale.
  • Predictability. FlexPod SF offers the performance, reliability, and capabilities to deliver an SLA from compute, network, and storage through VMware, so that VMs, apps, and data can be consumed without the periodic delivery issues of existing infrastructure.

With the next-generation data center, IT can simplify and automate, build for “anything as a service” (XaaS), and accelerate the adoption of DevOps. FlexPod SF delivers the next-generation data center for VMware Private Clouds and gives IT and service providers the ability to deliver infrastructure as a service.

  • VMware Private Cloud. This is different from server virtualization, where the focus is on virtualization of apps, integration with existing management platforms and tools, and optimization of VM density.
    • Instead of managing through a component UI, manage through the vCenter plug-in or Cisco UCS Director.
    • Move from silos to consolidated and mixed workloads through QoS.
    • Instead of configuring elements of infrastructure, automate through VMware Storage Policy-Based Management, VMware vRealize Automation, or Cisco UCS Director.
  • Infrastructure as a service. Today, service and cloud providers take the components of FlexPod SF and deliver them as a service. With this new FlexPod solution, you’ll be able to configure multitenancy with far greater elasticity of resources and with performance controls to construct an SLA for on-demand consumption (a provisioning sketch follows this list).
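
As an illustration of that multitenant, on-demand model, the following Python sketch provisions a tenant account and a QoS-limited volume through the Element OS API. It assumes the AddAccount and CreateVolume methods and reuses the placeholder endpoint and credentials from the earlier sketch; treat it as an outline to validate against the Element API reference, not production code.

    # Sketch: provision a tenant account and a thin, QoS-limited volume for IaaS.
    # Method names are assumed from the Element API; all values are placeholders.
    import requests

    ENDPOINT = "https://cluster-mvip/json-rpc/9.0"   # placeholder cluster endpoint
    AUTH = ("admin", "password")                     # placeholder credentials

    def call(method, params):
        """Send one request to the Element API and return its result."""
        payload = {"method": method, "params": params, "id": 1}
        response = requests.post(ENDPOINT, json=payload, auth=AUTH, verify=False)
        response.raise_for_status()
        return response.json()["result"]

    def provision_tenant(name, size_gb, min_iops, max_iops, burst_iops):
        """Create a tenant account and one volume with per-volume QoS."""
        account = call("AddAccount", {"username": name})
        return call("CreateVolume", {
            "name": f"{name}-vol01",
            "accountID": account["accountID"],
            "totalSize": size_gb * 1024 ** 3,   # size in bytes
            "enable512e": True,                 # 512-byte emulation, typical for ESXi datastores
            "qos": {"minIOPS": min_iops, "maxIOPS": max_iops, "burstIOPS": burst_iops},
        })

    # Example: a tenant with 500GB and a 2,000/10,000/15,000 IOPS QoS envelope.
    provision_tenant("tenant-a", size_gb=500, min_iops=2000, max_iops=10000, burst_iops=15000)

Because the QoS envelope travels with the volume, each tenant’s SLA is enforced by the cluster itself rather than by overprovisioning hardware.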

FlexPod SF Cisco Validated Design

A critical part of the engineering is the Cisco Validated Design (CVD), which encompasses all the details from a full validation of the design. With FlexPod SF, the validation was specific to the following configuration:

[Figure: FlexPod SF validated configuration, combining Cisco UCS compute, Cisco Nexus spine-leaf 10Gb top-of-rack switching, and scale-out NetApp SF9608 storage nodes]

As you can see, the base strength of Cisco’s UCS and Nexus platforms now combines with scale-out NetApp SF9608 nodes in a spine-leaf 10Gb top-of-rack configuration. All of this is “new school,” and the future is now. Add CPU and RAM in small, flexible increments, along with 10Gb networking and storage, 1U at a time (from a base four-node configuration).

 

Architecture and Deployment Considerations

FlexPod SF is not your average converged system. To architect and deploy it, you’ll need to rethink your work: for example, helping the organization understand workload profiles to set QoS, and creating policy automation for rapid builds and self-service. Here are some considerations:

  • Current mode of operations
    • Analyze the structure of current IT operations. FlexPod SF presents the opportunity for IT or a service provider to move past complex configurations to profiles, policy automation, and self-service so VM builders and developers can operate with agility.
  • Application profiles and consolidation
    • Help organizations align known application and VM profiles to programmable settings in QoS, policies, and tools such as PowerShell.
    • Set QoS for minimum, maximum, and burst separate from capacity settings. This granularity enables architects to apply settings that will consolidate app silos and SLAs without overprovisioning hardware resources.
  • Cisco compute and network: the same considerations as previous FlexPod solutions apply; only B-Series compute is supported at this time.
  • Storage
    • Architecting the SF9608 nodes is straightforward. With the Element OS, your design requirements are volume capacity (GB/TB) and IOPS settings through QoS. The IOPS settings are:
      • Minimum: the key ability to deliver performance SLAs. The Element OS, on a cluster of four or more nodes, governs the total capability of the cluster and induces latency on workloads that exceed their QoS settings, so that each volume’s minimum can be honored.
      • Maximum: caps the IOPS that a workload can consume.
      • Burst: allows a workload to exceed its maximum for a limited time if the cluster can supply the IOPS.
    • Capacity does not need to be projected for three-to-five-year sizing as with existing storage. SF9608 nodes are added on demand, with 1U-node granularity, as capacity and performance needs grow. Scale is linear: each node adds CPU, RAM, 10Gb networking, capacity, and IOPS (see the sizing sketch after this list).
    • Encryption is not available at this time.
    • Boot from SAN is supported.
    • You cannot field-update a C220 to become an SF9608 node.
    • DC power is not available at this time (it is on the roadmap).
  • VMware
    • In architecting for a FlexPod SF environment, focus on the move beyond server virtualization, where the emphasis is on consolidation ratios, integration with existing stack tools, and modernization to updated resources such as all-flash storage, 10Gb networking, and faster Intel processors. For VMware Private Cloud environments, align all of these attributes and capabilities to an on-demand, profile-centric, policy-driven (SPBM) environment in which VM administrators can build VMs entirely from vCenter or Cisco UCS Director.
    • FlexPod SF presents a new opportunity for operators. The interface for daily operations is VMware vCenter, Cisco UCS Director, or both. As you build, move, add, and change VMs, you’ll notice policies that go beyond templates. You’ll see granular capabilities to completely build all attributes of VMs. You’ll also be able to present self-service portals for developers and consumers of a VMware Private Cloud to operate with agility and achieve their missions.
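
To make the linear-scale point concrete, here is a small, purely illustrative Python helper that estimates node count from the per-node figures quoted earlier in this article (75,000 IOPS and 6TB raw capacity per SF9608 node, with a four-node minimum cluster). It is a back-of-the-envelope sketch only; real sizing should also account for efficiency gains from deduplication and compression, replication overhead, and headroom.

    # Illustrative sizing helper using the per-node figures from this article.
    # Back-of-the-envelope only: ignores efficiency gains, replication overhead,
    # and headroom, so treat the result as a starting point, not a design.
    import math

    NODE_IOPS = 75_000   # IOPS per SF9608 node (from the node characteristics above)
    NODE_RAW_TB = 6      # raw TB per SF9608 node (from the node specifications above)
    MIN_NODES = 4        # minimum cluster size

    def nodes_required(target_iops: int, target_raw_tb: float) -> int:
        """Return the node count that satisfies both the IOPS and capacity targets."""
        by_iops = math.ceil(target_iops / NODE_IOPS)
        by_capacity = math.ceil(target_raw_tb / NODE_RAW_TB)
        return max(MIN_NODES, by_iops, by_capacity)

    # Example: a consolidated workload needing 400,000 IOPS and 40TB raw capacity.
    print(nodes_required(target_iops=400_000, target_raw_tb=40))   # -> 7 nodes

Because each 1U node adds the same slice of CPU, RAM, networking, capacity, and IOPS, growing the estimate later means adding nodes, not re-architecting the cluster.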

Learn More

Visit www.netapp.com/FlexPodSF, where you can find solution briefs, presentations, and a host of other materials to help you get started with FlexPod SF.

Lee Howard

Lee Howard is the Chief Technology Officer for the Cisco Business at NetApp. He draws on more than fifteen years of experience in organizational leadership and enterprise technology to shape the technical direction of the collaborative intellectual property stack between NetApp and Cisco.

In his spare time, Lee serves on several start-up advisory and corporate boards and is in search of the perfect BBQ rub. Before NetApp, his career focused on merger and acquisition integrations, product development and launch, market creation, and enterprise sales at SanDisk, Western Digital, and Dell Technologies.

Martin Cooper

Martin Cooper is Senior Director, NGDC, at NetApp. Based in London, England, Martin is responsible for worldwide systems engineering for the NetApp Next Generation Data Centre (NGDC) business unit. He also leads the NetApp NGDC Office of the CTO group and NGDC Alliances, and he is a member of the NGDC Product Planning Council and the NGDC business unit leadership team.

Martin joined SolidFire in December 2012 as one of the first two employees outside the USA, charged with establishing the SolidFire business in EMEA and APJ, and he led field engineering for SolidFire in those geographies. He expanded this role to cover worldwide teams as part of NetApp after the acquisition of SolidFire in February 2016.

Prior to joining SolidFire, Martin led the technical teams at NetApp responsible for British Telecom globally.

Before moving to the vendor side, Martin spent 17 years as an IT practitioner at the international design consultancy Arup, where he held various leadership roles, including Chief Technology Officer and Global Operations Director.

Martin is a Chartered Information Technology Professional and a Fellow of the British Computer Society.