In my seven years in desktop virtualization, I have rarely come across someone who can’t recall a horror story or outright failure on a VDI project. Why is that? Let’s first consider end-user expectations:

  1. They want their virtual desktop to “feel” like their physical desktop.
  2. They expect a better experience than the one they are giving up. Nobody upgrades expecting no perceptible improvement.
  3. End users want simplicity.

So for perspective, let’s talk HD TVs.

Recently I purchased a new 4K Ultra HD TV, and, wow, the picture was great. With Blu-ray and HD programming, everything was clear and bright; I was in heaven. But then my wife changed the source to standard-definition TV, and something was not right. The grainy picture was a real turnoff. The base product had outperformed the content, creating an unsatisfactory result.


So often I have seen the same type of scenario in VDI deployments. We focus so much on the hardware (the compute, storage, and networking chosen to host our VDI software) that we miss the big picture.


The standard build for VDI will look something like this: all-flash hypervisor hosts with 20+ cores, 192+ GB of memory, and 10 GbE networking. When I look at those specs, even I get excited. “Wow, that is going to be a really fast VDI offering,” I think, and it will be. But herein lies the problem: what about the applications?


When we start a Proof of Concept (POC), it’s to prove the technology works. It’s usually built out on two ESXi hosts with a test storage array, and then the IT department simulates some workloads and tests functionality. Does that sound about right to you? Once the IT department is comfortable with its results, it will release a pilot to a select group of people. This group will consist of a few power users, a few task workers, and, quite often, a few people who are influential, either with the general population of employees (for buy-in) or with the executive team. Once that small group has satisfactory results, it is released as a production offering. The problem has not reared its head because it’s not noticeable yet.


With VDI, this is where things start to go south. There are a few reasons why, but they usually all point back to not seeing VDI as an overall solution but instead thinking smaller, treating it as a project. A project is something that’s kept in a box. You know where everything is, so you don’t reach outside the box to tweak, tune, or look for add-ons or connections. But with VDI, everything touches everything. Let’s get back to the TV experience I was talking about earlier.


See, VDI is like a 4K Ultra HD TV: you’ll get amazing quality, but if the applications live on old spinning hard drives (with their inherent latency and queue-depth congestion), then the desktop will be left waiting for a response whenever the end user makes a request. When the desktop itself is extremely fast but an individual application remains at the same speed it has always been, the application creates a perception of poor VDI performance. That perception is amplified because the gap between the high-performance desktop and the slow application has widened.

Here is the normal flow of VDI:

When an end user connects to a VDI desktop, they are using a connection broker with a protocol like PCoIP that connects to a virtual desktop hosted on one of those extremely powerful servers/hosts. Those servers are fed data from super-fast all-flash storage, and it’s all wrapped up with 10 GbE networking. Sounds great so far, right? Now the end users finally log on, and, wow, it’s running super fast … everyone is happy. They click on an application and it’s OK. They open another application and “ehhh, OK, fine.” And another and another. After a while, many end users are logged on, and they all open these applications and use them at the same time, creating a worse experience with each application. By the end of the day, when people are polled about their VDI experience, you will hear concerns and complaints. By the end of the first week, there is a list of people saying it’s not as fast as promised or that there are problems with their VDI desktops.


The IT team will test the desktops, storage, VDI software, and networking and usually come up with one or two tweaks, but for the most part it’s running very well. IT will push back on the users. Here’s where VDI will live in limbo: it’s technically working, but the end users are unhappy. Unless, of course, the executive team experiences the issues as well, in which case the VDI project will get canceled.


Why were the end users unhappy? Ultimately, because VDI was treated as a project and not an overall solution. Was the problem with VDI? In many cases the answer is “no” — it was with the rest of the infrastructure hosting the applications the users depend on.


So what is a complete VDI solution? It involves a hardware solution as described above for VDI, but also a revamp of the storage your applications sit on. If the desktop responds at 2-5 ms latency with IOPS on demand while your applications respond at 30-300 ms latency with uncertain IOPS, you have problems. You need to bring the data as close to the performance of the desktop as possible. The simplest solution is to put everything on flash.

Here is where the pushback comes in: “You can’t mix VDI and databases on the same storage. You really don’t want to mix any other workload with your VDI workloads.” I often use an analogy: a data center is like the ocean, and desktops are like a pond. The ocean has high tide and low tide, and the waves come in mostly consistent and predictable. It is easy to predict the needs of the ocean/data center, but VDI is like 1,000 kids circled around a pond tossing rocks into it. They create tons of waves, all interrupting the other waves, and no two waves are the same from one minute to the next. In other words, VDI is disruptive at the storage layer. This is why everyone says to separate your VDI from everything else. VDI is a beast!
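To make that latency gap concrete, here is a minimal sketch using the 2-5 ms and 30-300 ms figures above. The serial model and the I/O counts are assumptions for illustration only, not measurements:

```python
def response_time_ms(desktop_ios, app_ios, desktop_latency_ms, app_latency_ms):
    """Naive serial model: total wait is the sum of I/O times on each tier."""
    return desktop_ios * desktop_latency_ms + app_ios * app_latency_ms

# Hypothetical user action touching 20 desktop I/Os and 20 application I/Os.
slow_app = response_time_ms(20, 20, 2, 100)  # flash desktop, spinning-disk app
fast_app = response_time_ms(20, 20, 2, 3)    # both tiers on flash

print(slow_app)  # 2040 ms: the application tier dominates the wait
print(fast_app)  # 100 ms: the gap closes when the app data moves to flash
```

Even though the desktop tier answers in a few milliseconds, the user experiences the slow application tier; this is the storage equivalent of the 4K TV playing standard-definition content.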

To succeed you need flash, right?

I know, it’s expensive to put everything in your data center on flash storage. Or is it?


With SolidFire, we provide a true multi-tenant storage solution that can host your VDI infrastructure, your application data, and, if necessary, your files, databases, CRM, messaging tools, etc. We can do it on a single storage array with guaranteed Quality of Service (QoS). By mixing VDI and data center applications, we make it cost-effective while ensuring not just performance but predictable, guaranteed performance for every application, with no noisy-neighbor situation. An amazing part of this solution is that if an application is not performing as well as it should, SolidFire can provision additional performance to a specific application/volume on demand without migrating data or taking anything offline. We provide the ability to fine-tune your VDI or data center (or a combination of both) on the fly, with data in motion, without the headaches and painful processes of the past.
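The idea behind per-volume guaranteed QoS can be sketched in a few lines. This is not SolidFire’s actual API; the volume names, IOPS figures, and cluster capacity below are hypothetical, and the sketch only models the arithmetic that makes a minimum-IOPS guarantee honest:

```python
# Illustrative model of per-volume QoS: each volume carries a minimum
# (guaranteed) and maximum IOPS policy, and minimums are real guarantees
# only if their sum fits within the cluster's total capability.

CLUSTER_IOPS = 200_000  # hypothetical total cluster capability

volumes = {
    "vdi-desktops":  {"min_iops": 50_000, "max_iops": 100_000},
    "crm-database":  {"min_iops": 30_000, "max_iops": 60_000},
    "file-services": {"min_iops": 10_000, "max_iops": 40_000},
}

def can_guarantee(vols, cluster_iops):
    # The no-noisy-neighbor promise holds when reserved minimums fit.
    return sum(v["min_iops"] for v in vols.values()) <= cluster_iops

print(can_guarantee(volumes, CLUSTER_IOPS))  # True: 90,000 of 200,000 reserved

# "Provisioning additional performance on demand" is just a policy edit;
# in this model no data moves and nothing goes offline.
volumes["crm-database"]["min_iops"] = 45_000
print(can_guarantee(volumes, CLUSTER_IOPS))  # True: 105,000 still fits
```

The design point the sketch illustrates: because performance is a policy attached to a volume rather than a property of where the data physically sits, mixed workloads can share one array without one tenant starving another.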


As many existing customers are finding, SolidFire is the only storage solution that is cost-effective at delivering end-user compute, application data, and more in a single storage solution. Rest assured that once your data is on SolidFire, you will not have to do a data migration anytime soon, as SolidFire’s elastic scale-out storage enables you to add capacity on demand and retire older hardware without downtime.


Read more about VDI opportunities with SolidFire at the new SolidFire Learning Center.

Jeremy Hall