Tuesday, November 29, 2011

Virtualization options

Anyone who's heard of virtualization has probably discovered there are many different tools out there to help install and run multiple operating systems on a single computer. While many of these systems look similar, the truth is that radically different approaches to virtualization have developed over the years, each with its own advantages and disadvantages. Here's a quick rundown of the popular virtualization approaches and technologies, and reasons why you might pick one over another. I've tried to keep the terminology consistent below, although many in the industry prefer different terms. The most neutral term I could think of for the virtual machines themselves is "guest". The "host" is the operating system that guests run under.


Emulation or "Full virtualization"

Emulation is one of the oldest forms of virtualization and exists in many forms, from systems that emulate every aspect of a computer, including the CPU, to versions that use features of the host CPU to sandbox the hosted operating system, using software merely to emulate the rest of the hardware. Almost all modern emulators take the latter approach if they can get away with it.

The primary advantage of emulation is that it requires virtually no support within the guest operating system, which is made to believe it's been installed on a normal computer with no special requirements.

The disadvantages of emulation are numerous. Even with CPU support, it's slow and inefficient. As an example, if an application on a guest needs to access the network, the data it sends has to pass through two device drivers (the guest's driver for the emulated network card and the host's driver for the real one), with a virtual device emulator in between. Many CPUs don't support virtualization natively and thus can't run the faster emulators. Emulators also typically offer very clumsy controls for limiting a guest's CPU, memory, or disk usage.

Despite the disadvantages, emulators tend to be the most popular choice for virtualization today, largely because the writers of the guest operating system don't need to be involved in making their systems work. This matters less for free operating systems, where the code can be modified, but running multiple proprietary operating systems usually requires emulation unless the vendor cooperates with the virtualization system you want to run.


Para-virtualization

Para-virtualization is probably my favorite virtualization system, although it comes with some costs. In para-virtualization, the guest operating system is written so that it is aware it doesn't have complete control over the hardware, and instead cooperates with the "real" host operating system.

Implementations of para-virtualization can differ dramatically. One early system, User Mode Linux, along with a related project called Cooperative Linux, allowed a Linux-based operating system to run under another operating system such as GNU/Linux or Windows. The UML kernel would simply talk to the underlying operating system and have it do the heavy lifting.

A more advanced and more generic system, and my choice when it's available, is Xen. Xen is a host operating system that provides the bare minimum for its guests: it handles starting and stopping them, parcels out memory and CPU resources, and tells each instance what hardware it is allowed to access. An operating system of this kind is called a hypervisor. Typically, by default, all hardware resources are assigned to a special guest operating system called the "Dom0", and that operating system provides basic networking and other services to the other guests.
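
As a concrete taste of what scripting a Xen box looks like, here's a minimal sketch using the libvirt library's Python bindings (one common way to drive Xen, though far from the only one) to list the running guests and their resources. It assumes libvirt is installed with its Xen driver enabled; nothing below is specific to any particular setup.

    # Minimal sketch: list running Xen guests via libvirt's Python bindings.
    # Assumes libvirt is installed and built with the Xen driver.
    import libvirt

    conn = libvirt.open("xen:///")          # connect to the local hypervisor
    for dom_id in conn.listDomainsID():     # IDs of running guests; Dom0 is 0
        dom = conn.lookupByID(dom_id)
        state, max_mem, mem, vcpus, cpu_time = dom.info()
        print("%s: %d KiB, %d vcpu(s)" % (dom.name(), mem, vcpus))
    conn.close()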

Xen is relatively efficient, with guests able to talk directly to the hardware without going through emulation layers. The counter to that is that it's generally complicated to set up: the admin needs a high degree of knowledge about the hardware to ensure each guest runs efficiently, and the "easy" ways to administer a Xen system can result in a very slow one. And like emulation, Xen offers relatively clumsy boundaries when you need to assign resources to specific guests.

The fact that each Xen guest is aware it's running in a virtual environment brings certain huge improvements. It's easy, for example, to reboot a computer (as in shut it down, cut the power, wait ten seconds, and start it back up) without actually killing the Xen guests, which, beyond seeing the time jump forward suddenly, will act as if nothing has happened. More advanced Xen setups make it easy to migrate guests from one physical computer to another without shutting anything down. To be fair, more advanced emulators have the same capabilities, but Xen does it with the full cooperation of the guest operating system, which in theory, at least, makes the entire process more reliable. The guest is expecting an outage, and so doesn't get upset about it.
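
Scripted through libvirt again, a live migration boils down to a sketch like the following; the guest name "myguest" and the destination host "otherhost" are placeholders for whatever you actually run:

    # Hedged sketch: live-migrate a running guest to another Xen host.
    # "myguest" and "otherhost" are placeholders.
    import libvirt

    src = libvirt.open("xen:///")
    dst = libvirt.open("xen+ssh://otherhost/")   # destination hypervisor
    dom = src.lookupByName("myguest")
    # VIR_MIGRATE_LIVE keeps the guest running while its memory is copied.
    dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE, None, None, 0)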

Para-virtualization with a hypervisor like Xen's is probably the best compromise between application transparency (applications are completely unaware that they're on a virtual platform) and efficiency. However, the need for support inside the guest means it's hard to run a fully para-virtualized system: every guest kernel has to know about the hypervisor. Xen supports a fall-back mode where emulation is used to run operating systems that do not support its hypervisor, but the moment you use it you lose the advantages of the para-virtualization approach and might as well use something more geared towards emulation.


Operating system level virtualization (or virtual virtualization!)

An extremely common virtualization scenario is a single computer serving large numbers of guests that all run the same operating system (or at least the same operating system kernel). This comes about because enterprises usually make a deliberate decision not to diversify too far from a standard platform; because nerds like me usually have a favorite system we're comfortable with; and because ISPs that offer VPS services usually offer hundreds of customers the same basic systems.

Unix-style operating systems have offered tools for hosting different environments on a common kernel for quite a while, although the concept has only recently become advanced enough for system administrators to take it seriously. The original mechanism, chroot, lets a tree of processes see a branch of the core file system as its root file system; you load up that branch with all the files that make up a Unix system. While it worked in some cases, chroot is too crude for anything but the simplest applications. Networking, for example, is still shared amongst environments.
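
To make that crudeness concrete, here's a bare-bones sketch of chroot in action; the path /srv/guest is a placeholder you'd have populated with a minimal Unix tree, and it has to run as root:

    # Minimal chroot sketch; /srv/guest is a placeholder populated with
    # bin/, lib/, etc/ and so on. Requires root.
    import os

    os.chroot("/srv/guest")    # this process now sees /srv/guest as "/"
    os.chdir("/")              # step inside the new root
    # open("/etc/passwd") now reads /srv/guest/etc/passwd on the real disk,
    # but the network stack, process table, and hostname are still shared
    # with the host: chroot confines the file system view and nothing else.
    print(os.listdir("/"))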

The BSD branches of Unix implemented a system called "jails", which took the chroot concept and added all the other aspects of an operating system to ensure that each "jailed" process tree would really have an entire environment that it could play in without ever seeing any evidence it was part of a bigger whole.

It's taken a while for Linux to adopt the same concept. An early implementation is OpenVZ, which is used by many VPS providers and does exactly what you'd expect from the above: a single kernel runs multiple environments, each seeing a subtree of the file system as its root file system, each with its own network devices, and so on. OpenVZ requires a patched Linux kernel, and the patches were never integrated into official Linux, so it has limited support; it's proven to be very popular nonetheless.

What can OpenVZ run? Essentially any operating system that can use the version of the Linux kernel running on the host. The guest usually needs some small modifications, so that, for example, it doesn't start checking the disk for errors when it boots, but once up, most applications will never know the difference between the hosted operating system and the main one. You can run a mix of different distributions as long as they all support the kernel you're running.
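
For the curious, the day-to-day mechanics look roughly like this sketch driving OpenVZ's vzctl tool from Python; the container ID and template name are placeholders for whatever your template cache actually holds:

    # Hedged sketch: creating and starting an OpenVZ container via vzctl.
    # 101 and the template name are placeholders.
    import subprocess

    def vz(*args):
        subprocess.check_call(["vzctl"] + list(args))

    vz("create", "101", "--ostemplate", "debian-6.0-x86_64")
    vz("start", "101")
    vz("exec", "101", "uname -r")   # prints the HOST kernel version:
                                    # every container shares that one kernel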

While OpenVZ isn't supported by the Linux developers, the OpenVZ and Linux developers have been cooperating on a very similar project called LXC. LXC is a Linux-friendly version of the same concept, and can run on an unmodified kernel, because all the modifications needed are being integrated into the main Linux tree. LXC uses a Linux technology called "cgroups" that's designed to replace the functionality OpenVZ's kernel modifications added.
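
Under the hood, a cgroup is just a directory full of control files. A minimal sketch, assuming the memory controller is mounted at /sys/fs/cgroup/memory (mount points vary by distribution) and that you're running as root:

    # Create a cgroup, cap its memory, and move this process into it.
    import os

    cg = "/sys/fs/cgroup/memory/demo"
    os.makedirs(cg)                          # a cgroup is just a directory
    with open(cg + "/memory.limit_in_bytes", "w") as f:
        f.write(str(256 * 1024 * 1024))      # cap the group at 256 MB
    with open(cg + "/tasks", "w") as f:
        f.write(str(os.getpid()))            # place this process in the group
    # Children forked from here inherit the cap; exceeding it triggers the
    # kernel's out-of-memory handling for this group alone, not the host.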

It's important to understand that, while reliable, LXC is not yet considered production quality. That doesn't mean you can't use it, or even entrust your important data to it; by its very nature, LXC is as reliable as the host operating system. However, you need to understand that LXC still has security holes that will take time to fix. Those holes won't affect a normal application, even a bug-ridden one, but they do mean that if a guest is public facing and a hacker gains access to it, that hacker can theoretically gain access to the host computer too.

Operating system virtualization has advantages and disadvantages compared to the other approaches. Like para-virtualization, it has no need for special hardware support, since the host operating system supports the concept natively. The technology is generally much, much faster and more efficient than the alternatives, as there are no extra layers between the operating system and the hardware it runs on. And it's much, much easier to allocate resources: it's perfectly possible to increase memory and disk space in real time. You can even give all your guests unlimited resources, and start to rein in any that cause problems when the problems show up.
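
For instance, continuing the cgroups sketch above, raising a running group's memory cap is a single file write, with no restart involved (the path and numbers are again placeholders):

    # Hedged sketch: raise the "demo" group's cap from 256 MB to 512 MB
    # while its processes keep running.
    with open("/sys/fs/cgroup/memory/demo/memory.limit_in_bytes", "w") as f:
        f.write(str(512 * 1024 * 1024))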

Major downsides? There's no easy way to save a guest instance, so all the fancy Xen functionality where you can reboot your computer or move guests between computers without shutting down the guests themselves is simply impossible. The instances are too ingrained in the host operating system to be separated out and saved. That also means critical operating system updates, such as kernel updates, require you to restart the entire system, host and guests alike.

Another downside is that, as yet, no operating system level virtualization platform for Linux is completely transparent. One issue, for example, is that because all memory, real and virtual, is shared, it's difficult to give each environment an accurate picture of how much memory is available. This has caused some applications that specifically check for virtual memory (swap) to fail, because the Linux APIs that report on available memory can't sanely report what's available in this kind of environment.
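
Here's a sketch of the kind of check that goes wrong: inside a container, at least as things stand today, /proc/meminfo still describes the whole host, so a naive capacity probe like this reports numbers the guest can't actually rely on:

    # Naive memory probe that misbehaves inside a container: /proc/meminfo
    # reflects the whole host, not the guest's own allowance.
    def mem_total_kb():
        with open("/proc/meminfo") as f:
            for line in f:
                if line.startswith("MemTotal:"):
                    return int(line.split()[1])

    print("MemTotal: %d KiB" % mem_total_kb())   # host total, not the cap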


So what should you use?
  • If Xen is an option for you, I encourage you to check it out, especially if you're trying to run multiple servers.
  • For testing operating systems or running an additional desktop operating system, I recommend an emulator called VirtualBox. It's free, supported by Oracle (it was originally a Sun technology), and it's extremely good. It runs under Windows and Linux.
  • Ubuntu users who need to run servers should probably look into KVM at the moment, since Ubuntu has more native support for it. I don't particularly like the approach, which is very much emulator oriented, but it might suit what you want to do if you can't get Xen to do it.
  • If you have limited hardware, and you're not too concerned about security, LXC is a pretty decent option. If you are concerned about security, look at OpenVZ instead. For the most part, when LXC is finished, it should be capable of running OpenVZ VMs unchanged, so the only problems you'll run into with OpenVZ are the limited support, and the same "need to reboot everything from time to time" issue that applies to LXC.
Have fun!
