Mastering VMware Horizon 7(Second Edition)

Hardware-accelerated graphics for Horizon View

Early versions of virtual desktop technology faced challenges when it came to delivering high-end graphical content, as the host servers were not designed to render and deliver the size and quality of images required for such applications.

Let's start with a brief history and background. Technology to support high-end graphics was released in several phases, with the first support for 3D graphics released in vSphere 5, with View 5.0 using software-based rendering. This gave us the ability to support things such as the Windows Aero feature, but it was still not powerful enough for some of the really high-end use cases due to this being a software feature.

The next phase was to provide a hardware-based GPU virtualization solution that came with vSphere 5.1 and allowed virtual machines to share a physical GPU by allowing virtual machines to pass through the hypervisor layer to take advantage of a physical graphics card installed in the host server.

If we'd had this conversation a couple of years ago and you had a use case that required high-end graphics capabilities, then virtual desktops would not have been a viable solution. As we just discussed, in a VDI environment, graphics are delivered using a virtualized, software-based graphics driver as part of the hypervisor.

Also, don't forget that, as we are now using servers to host the virtual desktops, we are relying on the graphics card in the server, and servers aren't renowned for high-end graphics capabilities; typically, all a server's GPU needs to do is display a management console.

That has all changed now. With the release of View 5.2 back in 2013, the ability to deliver hardware-accelerated graphics became a standard product feature with the introduction of Virtual Shared Graphics Acceleration (vSGA), which was then followed with the launch of Virtual Dedicated Graphics Acceleration (vDGA).

We will discuss these two technologies in the next sections of this chapter. We will also discuss the latest installment of graphics capabilities in Horizon View, with Virtual Graphics Processing Unit (vGPU).

Virtual Shared Graphics Acceleration (vSGA)

The vSGA implementation allows for multiple virtual desktop machines to share a physical GPU card, which is installed into the ESXi host server that is hosting those virtual desktop machines.

In this model, the virtual desktop machines do not have direct access to a dedicated physical GPU card. Instead, the standard VMware SVGA 3D graphics driver that is part of VMware Tools is installed on the virtual desktop's operating system. The SVGA driver is a VMware driver that provides support for DirectX 9.0c and OpenGL 2.1.

Graphics commands from user sessions are intercepted by this driver and sent to the hypervisor, which controls the GPU in the ESXi server. In this configuration, the driver supplied by the graphics card manufacturer (VIB) is installed on the ESXi hypervisor rather than the virtual desktop machine's own operating system. Delivery to the user's endpoint works in exactly the same way, where DevTAP encodes the user experience to PCoIP or Blast Extreme, and delivers it to the end user's device, either in an HTML5 browser or Horizon Client.
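To give a feel for what vSGA configuration looks like underneath, the following is a minimal sketch of the relevant per-VM .vmx settings, assuming a 128 MB VRAM allocation. In practice, these values are managed through the desktop pool's 3D settings in View Administrator rather than edited by hand, so treat this as illustrative only:

```
mks.enable3d = "TRUE"            # enable 3D support for this virtual machine
svga.vramSize = "134217728"      # 128 MB of VRAM, expressed in bytes
mks.use3dRenderer = "automatic"  # let vSphere choose hardware (vSGA) or software rendering
```

Setting the renderer to automatic mirrors the behavior discussed later in this chapter, where the desktop can fall back to software rendering if GPU resources are unavailable.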

The following diagram shows an overview of vSGA architecture:

There are a number of configuration and support options to consider, which we will cover in the next sections.

vSGA supported configurations

vSGA supports OpenGL 2.1- and DirectX 9-based applications running on either Windows 7 or Windows 8 virtual desktop machines, virtualized on VMware vSphere 5.1 or later, using one of the following manufacturers' GPU cards:

  • Intel HD Graphics P4700
  • NVIDIA Tesla M6 and M60
  • NVIDIA GRID K1 and K2
  • AMD FirePro S4000X, S7000, S9000, S9050, and W7000

For the latest compatibility guide, see the following link: http://www.vmware.com/resources/compatibility/search.php?deviceCategory=vsga.

How many virtual desktops are supported with vSGA?

This is a question that gets asked when talking about delivering hardware-based graphics within a Horizon View environment, so let's spend some time understanding this. Within Horizon View, you can create different desktop pools depending on the use case, as we will cover in Chapter 7, Managing and Configuring Desktop Pools, where one of the desktop pools will be configured to use high-end graphics. Typically, you would not want to give all users access to a hardware-based GPU, hence the reason you would create a desktop pool for this particular use case.

So, to answer the question, the number of virtual desktops you can allocate to a GPU is dependent on the amount of video memory (VRAM) that you allocate to each virtual desktop. The thing to bear in mind is that the resources are shared, and therefore, normal VMware virtualization rules apply. The first thing to note is how memory is shared.

Note

Half of the video memory allocated to a virtual desktop machine is allocated from the GPU card's memory and the other half is from the host server's memory. When sizing your host servers, you need to ensure that you have enough memory configured in the server to allocate this as video memory.

Based on this, and with the number of virtual desktops supported being based on the amount of allocated VRAM, let's look at how that works out.

The default amount of VRAM allocated to a virtual desktop machine is 128 MB, meaning that 64 MB will come from the GPU and the other 64 MB from the host server. If you then take a GPU card that has 4 GB of VRAM on board, you will be able to support 64 virtual desktops (4 GB, or 4,096 MB, divided by 64 MB from the GPU = 64 virtual desktop machines).

Within Horizon View, you can allocate a maximum of 512 MB of VRAM per virtual desktop machine. If you apply this to the preceding example using the same 4 GB GPU card, 256 MB of each allocation now comes from the GPU, reducing the number of supported virtual desktops to 16 (4 GB, or 4,096 MB, divided by 256 MB from the GPU = 16 virtual desktop machines).
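The arithmetic above can be expressed as a short calculation. This is an illustrative sketch, not a VMware tool; it simply encodes the rule that half of each desktop's configured VRAM is drawn from the GPU card:

```python
def vsga_desktops_per_gpu(gpu_vram_mb, vm_vram_mb):
    """Estimate how many vSGA desktops one GPU card can back.

    Half of each desktop's configured VRAM comes from the GPU card,
    the other half from host RAM, so the GPU-side cost per desktop
    is vm_vram_mb / 2.
    """
    gpu_side_mb = vm_vram_mb // 2
    return gpu_vram_mb // gpu_side_mb

# A 4 GB (4,096 MB) card with the default 128 MB per desktop:
print(vsga_desktops_per_gpu(4096, 128))  # 64 desktops
# The same card at the 512 MB per-desktop maximum:
print(vsga_desktops_per_gpu(4096, 512))  # 16 desktops
```

Remember that the other half of each allocation still has to come from host RAM, so this estimate assumes the server itself has been sized accordingly.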

Note

With the AMD solutions, the maximum number of supported desktops is 15 per GPU.

We stated previously that normal VMware virtualization rules apply, so let's explain exactly what that means. What happens when there are insufficient resources to fulfil a virtual desktop machine's specification? It won't boot or power on. The same applies to the GPU configuration: if you configure a desktop pool with more virtual desktop machines than the GPU can support, the excess machines will not boot.

Note

If you do configure more virtual desktop machines in a pool than the GPU resources can guarantee, set the Hardware 3D setting in the View Administrator console to Automatic. Doing this allows Horizon View to fall back to software-based 3D rendering in order to deliver the virtual desktop machines.

Virtual Dedicated Graphics Acceleration (vDGA)

While vSGA works on a shared basis, vDGA allows for an individual virtual desktop machine to have its own dedicated access to a physical GPU card installed in the ESXi host server. This allows the virtual desktop machine to have a higher level of graphic performance, making it perfect for such use cases as CAD/CAM applications, as it supports DirectX (9, 10, and 11), OpenGL 4.4, and NVIDIA CUDA.

The following diagram shows the architecture for vDGA:

The vDGA solution makes use of a feature called VMDirectPath I/O pass-through, sometimes referred to as PCI pass-through, which allows the virtual desktop machine to pass through the hypervisor layer and directly access the hardware in the host server. In this case, the hardware in question is the NVIDIA GPU cards.

Note

As a virtual desktop machine is mapped directly to a GPU on a one-to-one basis, you cannot use vSphere features such as HA, DRS, or vMotion.

How many virtual desktops are supported with vDGA?

Unlike vSGA, which is limited by the amount of memory on the GPU card, vDGA is limited purely by the number of GPUs or GRID cards you can physically fit into the host server. This is dependent on your server vendor and what they support.

Note

Server vendors offer NVIDIA GRID-enabled servers that are prebuilt, and therefore, this technology is only available from the OEM channel. The primary reason is that servers require additional power and cooling components to drive the GRID cards.

For example, an NVIDIA GRID K2 GPU card has two GPUs on board, which means you can allocate two virtual desktop machines to this card. Depending on your server hardware platform, you could install more than one card, thereby increasing the number of users that have access to a hardware-enabled GPU in their virtual desktop.
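Because vDGA maps desktops to physical GPUs strictly one-to-one, capacity planning is simple multiplication. The following sketch assumes the GPUs-per-card figures for the GRID family (a K1 carries four GPUs, a K2 carries two); check your card's specifications for other models:

```python
def vdga_desktops(cards, gpus_per_card):
    # vDGA is strictly one virtual desktop per physical GPU,
    # so capacity is just cards multiplied by GPUs per card.
    return cards * gpus_per_card

# Two GRID K2 cards (two GPUs each) in one host server:
print(vdga_desktops(cards=2, gpus_per_card=2))  # 4 desktops
# One GRID K1 card (four GPUs):
print(vdga_desktops(cards=1, gpus_per_card=4))  # 4 desktops
```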

The vDGA-supported configurations

The following GPU cards are supported with vDGA:

  • NVIDIA GRID K1 and K2
  • NVIDIA Tesla M6 and M60
  • NVIDIA Quadro 1000M, 3000M, and 5000M
  • NVIDIA Quadro K2000, K2200, K3100M, K4000, K4200, K5000, K5200, and K6000
  • AMD FirePro S7150

For the latest compatibility guide, see the following link: http://www.vmware.com/resources/compatibility/search.php?deviceCategory=vdga

Virtual GPU (vGPU)

In the previous sections, we have talked about two different models for delivering high-end graphics. However, there are a couple of limitations with each of those solutions.

With vSGA, you get scalability in terms of the number of users that can share the GPU card; however, because it does not use the native driver provided by the GPU vendor, some independent software vendors (ISVs) will not certify their applications to run on this solution. They would need to certify against the VMware SVGA driver, as that is the driver in use.

The answer to the ISV support issue is to use vDGA, which does use the GPU vendor's native graphics driver, but this limits scalability and drives up cost: dedicating a virtual desktop machine to a GPU, with only a handful of GPUs available in each host server, makes for quite an expensive solution. Having said that, there may be use cases where that is the correct choice.

So, what we need is a solution that takes the best of both worlds, a solution that takes the shared GPU approach for scalability, yet uses the native graphics drivers.

That solution is called Virtual GPU (vGPU), and was launched as part of the Horizon 6 Version 6.1 release.

The following diagram shows the architecture for vGPU:

In this model, the native NVIDIA driver is installed in each virtual desktop machine, giving it direct access to the NVIDIA GRID card in the host server. The GPU is effectively virtualized and time-sliced, with each virtual desktop machine receiving a slice of that time.

Note

vGPU is only available with VMware vSphere 6 and Horizon View 6.1 and later.

How many virtual desktops are supported with vGPU?

With vGPU, the number of supported users/virtual desktop machines is based on different profiles. These profiles are detailed in the following diagram, and give you the number of users, number of supported monitors, and so on:
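As an illustration of how profile sizing drives density, the following sketch uses the GRID K2 profiles as an example. The framebuffer figures are indicative only; always check the current NVIDIA documentation for your card and profile set:

```python
# Illustrative GRID K2 vGPU profiles: framebuffer per desktop in MB.
K2_PROFILES = {"K280Q": 4096, "K260Q": 2048, "K240Q": 1024, "K220Q": 512}
K2_FRAMEBUFFER_PER_GPU_MB = 4096  # each of the K2's two GPUs has 4 GB

def vgpu_users_per_gpu(profile, profiles=K2_PROFILES,
                       gpu_fb_mb=K2_FRAMEBUFFER_PER_GPU_MB):
    # Density is set by how many profile-sized framebuffer
    # slices fit on a single physical GPU.
    return gpu_fb_mb // profiles[profile]

for name in K2_PROFILES:
    print(name, vgpu_users_per_gpu(name))
```

The pattern is the same as the vSGA calculation earlier in the chapter, except that here the whole slice comes from the GPU's framebuffer, and the profile also fixes the number of monitors and maximum resolution.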

As with the vDGA and vSGA solutions, you need to check that you have the correct supported hardware. In addition, you should also check that your applications are supported in these configurations. You can find the current list of supported applications by following the link to the NVIDIA website: http://www.nvidia.com/object/grid-isv-tested-applications.html