Virtualization In Linux Kernel 2.6.20 178
mcalwell writes with an article about the Kernel-based Virtual Machine (or KVM for short) in the release candidate Linux 2.6.20 kernel. From the article: "[T]he Linux 2.6.20 kernel will include a full virtualization (not para-virtualization) solution. [KVM] is a GPL software project that has been developed and sponsored by Qumranet. In this article we are offering a brief overview of the KVM for Linux as well as offering up in-house performance numbers as we compare KVM to other virtualization solutions such as QEMU Accelerator and Xen."
Re:Oddness in kernel release cycle (Score:3, Informative)
Re:Oddness in kernel release cycle (Score:2, Informative)
'So go get it. It's one of those rare "perfect" kernels.'
mirror (Score:0, Informative)
For only being a release candidate the Linux 2.6.20 kernel has already generated quite a bit of attention. On top of adding asynchronous SCSI scanning, multi-threaded USB probing, and many driver updates, the Linux 2.6.20 kernel will include a full virtualization (not para-virtualization) solution. Kernel-based Virtual Machine (or KVM for short) is a GPL software project that has been developed and sponsored by Qumranet. In this article we are offering a brief overview of the Kernel-based Virtual Machine for Linux as well as offering up in-house performance numbers as we compare KVM to other virtualization solutions such as QEMU Accelerator and Xen.
What has been merged into the Linux 2.6.20 kernel is the device driver for managing the virtualization hardware. The other component that comprises KVM is the user-space program, which is a modified version of QEMU. Kernel-based Virtual Machine for Linux uses Intel Virtualization Technology (VT) and AMD Secure Virtual Machine (SVM/AMD-V) for hardware virtualization support. With that said, one of the presented hardware requirements to use KVM is an x86 processor with either of these technologies. The respective technologies are present in the Intel Core series and later, Xeon 5000 series and later, Xeon LV series, and AMD's Socket F and AM2 processors.
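As a quick check, the presence of these extensions can be confirmed from the CPU flags Linux reports (vmx is Intel VT, svm is AMD SVM/AMD-V):

```shell
# Look for hardware virtualization flags in the CPU feature list:
#   vmx = Intel VT, svm = AMD SVM/AMD-V
# No output means the CPU (or the BIOS) does not expose the extensions.
grep -E -o 'vmx|svm' /proc/cpuinfo | sort -u
```

If the BIOS has virtualization disabled, the flag may be missing even on a capable processor.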
KVM also runs every virtual machine as a regular Linux process, handled by the Linux scheduler, by adding a guest mode of execution. Because each virtual machine is a standard Linux process, all of the standard process management tools can be used on it. The KVM kernel component is included in Linux 2.6.20-rc1 and newer, but the KVM module can also be built against older kernels (2.6.16 through 2.6.19). At this stage, KVM supports Intel hosts, AMD hosts, Linux guests (x86 and x86_64), Windows guests (x86), SMP hosts, and non-live migration of guests. Still being worked on are optimized MMU virtualization, live migration, and SMP guests. Microsoft Windows x64 does not work with KVM at this time.
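Because each guest is just a process, the ordinary tools apply to it directly. A hypothetical session (the process name and PID below are illustrative, not from the article):

```shell
# Each KVM guest appears as an ordinary process, so standard tools work.
ps aux | grep qemu          # find the guest's PID
renice +5 -p 12345          # lower the guest's scheduling priority
kill -STOP 12345            # pause the guest
kill -CONT 12345            # resume it
```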
Whether you are using a kernel with KVM built in or loading it as a module, the process for setting up and running guest operating systems is quite easy. After setting up an image (qemu-img works with KVM) and loading the KVM kernel component, the modified version of QEMU can be invoked with the standard QEMU arguments to get a guest running.
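A minimal sketch of that workflow (the image name, ISO path, memory size, and exact binary name are assumptions for illustration; the binary name depends on how the KVM userspace package installs its modified QEMU):

```shell
# Create a 10GB qcow2 disk image for the guest
qemu-img create -f qcow2 fedora.img 10G

# Boot the guest from an install CD using the KVM-modified QEMU;
# the arguments are the standard QEMU ones.
qemu-system-x86_64 -hda fedora.img -cdrom FC-6-i386-DVD.iso -boot d -m 512
```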
The hardware requirements for KVM are an x86/x86_64 processor with the AMD or Intel virtualization extensions and at least one gigabyte of system memory, to leave enough RAM for the guest operating system. For our purposes, we used two dual-core Intel Xeon LV processors with the Linux 2.6.20-rc3 kernel, which was released on January 1, 2007. Below is a rundown of the system components used.
Hardware Components
Processor: 2 x Intel Xeon LV Dual-Core 2.00GHz
Motherboard: Tyan Tiger i7520SD S5365
Memory: 2 x 512MB Mushkin ECC Reg DDR2-533
Graphics Card: NVIDIA GeForce FX5200 128MB PCI
Hard Drives: Western Digital 160GB SATA2
Optical Drives: Lite-On 16x DVD-ROM
Cooling: 2 x Dynatron Socket 479 HSFs
Case: SilverStone Lascala LC20
Power Supply: SilverStone Strider 560W
Software Components
Operating System: Fedora Core 6
The benchmarks we used for comparing performance were Gzip compression, LAME compilation, LAME encoding, and RAMspeed. The virtualization environments tested were QEMU 0.8.2 with the kqemu accelerator module, Xen 3.0.3, and KVM. We also compared these virtualized environments against running Fedora Core 6 Zod without any form of virtualization. During the Xen 3.0.3 testing, we used full virtualization, not para-virtualization. The image size was set to 10GB for the testing process, and the operating system used throughout was Fedora Core 6 Zod.
Looking over the virtualization performance results, KVM ...
Re:Acronym overload (Score:5, Informative)
Re:Oddness in kernel release cycle (Score:5, Informative)
No, the attention has been drawn from people actually giving a fuck.
Kernels from 2.6.9 onwards are a disaster.
The original idea was that "distributions will fork off and maintain kernels for their releases". This idea has degenerated into "only distributions can fork and maintain a kernel". Sole developers and hobbyists are being treated the same way Microsoft treats them: as a "one night stand". In fact, even distributions are unable to keep up with that. Fedora has half of these bugs in it. So do etch, Mandriva, and all the other lesser distributions. Only RHEL and SUSE ship something reasonably usable, and it is a year behind on features.
The reality is that anything past 2.6.9 should be called 2.7.x, and that is that. It may seriously be worth considering Gentoo/BSD or Debian/BSD. While the BSD crowd has its own failings, it does not change fundamental APIs for entertainment purposes every month on the stable branch.
from about a month back ... (Score:4, Informative)
Re:Simple Q: will this run Win XP as a guest? (Score:5, Informative)
After playing around with paravirtualization under Xen for the past two-plus years, I finally got the cash in August to buy a cheap AMD dual-core 64-bit system (~$800 at Best Buy: an HP system with a 4200 and 2 gigs of DDR2 RAM). I've run both Xen and QEMU on it under 64-bit Gentoo Linux. The performance of Windows XP on Xen vs. QEMU is fairly close. Where Xen seems to suffer is disk I/O: anything disk-intensive seems to eat up the CPU. I suspect this wouldn't be the case on better hardware with a high-performance SCSI/RAID system, which should at least make things a bit better.

For the time being I'm sticking with Xen, since it's just too easy to use, and I am especially interested in the live migration features. As long as you have centralized disk storage, you can move live VMs between physical hosts with less than a second of interruption (i.e. your users will never notice).

Keep in mind, I'm doing this all at home, as I'd really like to collapse many of my machines into one or two boxes and keep everything else as simple X displays where GUIs are needed. I've currently got four VMs running on the box: two fully virtualized (Windows XP SP2, for accessing DRMed crap, and Red Hat Linux 7, which still hosts some services I don't want to part with) and two paravirtualized (Domain0, which is just the VM management environment, and my Gentoo Asterisk "PBX"). Paravirtualized performance is damn amazing. I think if I used strictly paravirtualized OSes I could probably squeeze 20 VMs out of this box with decent performance. I actually just added two more gigs to the system tonight, and if I assume 128 megs per virtual machine (I've allocated 512M to the Windows XP VM), I can get up to 32 VMs running simultaneously.
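For reference, the live migration described above is driven from Xen's management tool; a sketch under a couple of assumptions (the domain name and target hostname are made up, and xend on the target must be configured to accept relocation requests):

```shell
# Move the running domain "asterisk" to another physical host with
# minimal interruption; both hosts must see the same backing storage.
xm migrate --live asterisk otherhost.example.com
```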
As far as KVM goes, I've had a good deal of experience with QEMU, and since KVM is similar, there are some limitations I hope they will overcome. (For what it's worth, the hardware-based virtualization in Xen is also a modified QEMU process, called qemu-dm.) The main one is PCI device allocation: Xen allows you to partition your PCI devices and assign individual cards to specific VMs. I don't think QEMU does this, and I expect that KVM doesn't either.
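The PCI partitioning mentioned above works by hiding a device from Domain0 and then listing it in the guest's configuration; a sketch under those assumptions (the PCI address and domain name are made up):

```
# On the Xen kernel command line, hide the device from Domain0:
pciback.hide=(01:02.0)

# Then hand it to a guest by adding this to the domain's
# config file, e.g. /etc/xen/mydomain:
pci = [ '01:02.0' ]
```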
Re:Oddness in kernel release cycle (Score:2, Informative)
Re:kqemu? (Score:4, Informative)
paravirt KVM on the way (Score:5, Informative)
Re:Oddness in kernel release cycle (Score:3, Informative)
I'm not sure in general, but I've been happily using 2.6.19 for a while with no issues.
As for kvm, I downloaded it about a week ago and manually built and installed it (on 2.6.19), and I've had no trouble with it at all. It was very easy to build and install following the instructions [sourceforge.net], and creating images and installing a new os on them is trivial. I set up a couple of images for experimenting with ubuntu and fedora (my main os is gentoo), and I set up another image on which I installed Plan 9, just to play around with that a little.
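For anyone else wanting to try it on a pre-2.6.20 kernel, the out-of-tree build is roughly the usual configure/make dance (the tarball name below is illustrative; see the SourceForge instructions linked above for the real steps):

```shell
# Build and load the KVM module against a running 2.6.16-2.6.19 kernel
tar xzf kvm-release.tar.gz
cd kvm-release
./configure
make
sudo make install
sudo modprobe kvm-intel    # or kvm-amd on an AMD host
```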
Re:Oddness in kernel release cycle (Score:3, Informative)
If you search the kernel archives for ext3 corruption you'll find a couple long threads discussing the issue and the solution.
Re:Apples to Oranges (Score:5, Informative)
Re:Apples to Oranges (Score:2, Informative)
http://fabrice.bellard.free.fr/qemu/qemu-accel.ht
Re:Mod... Parent... Up (Score:4, Informative)
Well, first off there's CentOS if you don't need the support. Secondly, while the kernel guys are happy hacking away at 2.6.x, there are other distributions like Debian and Ubuntu LTS which will support a stable API/ABI for several years.
Yes, 2.6 keeps breaking now, but does anyone remember the bad old days when distros were backporting hundreds of patches from 2.5 to 2.4? What the distros ship now is closer to a vanilla kernel, for better and for worse. They pick one version, stay with it, and stabilize it. That's what SLES, RHEL, and all the other distros do.
Re:Acronym overload (Score:3, Informative)
NATO is an acronym, KVM isn't.
Re:Mod... Parent... Up (Score:3, Informative)
http://en.wikipedia.org/wiki/Linux_kernel#Version
The fact that most people don't seem to run 2.6.16 seems to indicate that people are happy to forgo some stability in exchange for having the new features in the latest 2.6.x kernel available now.
Poor scientific practice (Score:4, Informative)
Why do they document the model of CD-ROM drive they used, but not the configuration of each emulation/virtualization environment? I was shocked by the LAME compile times, and forced to wonder and guess what the filesystem configuration was. Is the filesystem located in an image file on the host computer's filesystem? Wouldn't it be interesting to try using a comparable medium across all benchmarks (a shared NFS server, or low-level access to the same block device)?
Not enough data (CPU time vs. real time, etc.), not enough benchmarks (different filesystem media, etc.), poor documentation (configuration, anyone?), on what doesn't even amount to an official release. Correct me if I'm wrong.
Re:Mod me down! (Score:4, Informative)
Re:Oddness in kernel release cycle (Score:3, Informative)
IDE-SCSI no longer works from 2.6.10 onwards, at least up to 2.6.16.
IDE-SCSI never worked properly; I've had constant problems with it since I started CD burning on Linux. Thankfully it is now made obsolete by the new ATA drivers, since ATA devices now just show up on the system as SCSI devices. If you really need SCSI-style access to IDE devices, I highly suggest trying the new drivers.
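With the new ATA (libata) drivers, an ATAPI burner shows up as a SCSI-style device node, so burning no longer needs the ide-scsi shim; a sketch (the device node and image name are illustrative):

```shell
# Under the libata drivers the burner appears as /dev/sr0,
# and cdrecord can address the device node directly.
cdrecord dev=/dev/sr0 -v image.iso
```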
Yes, we drop support for obsolete crap. (Score:3, Informative)
The IDE-SCSI abomination is a foul and evil hack that should have been removed many years ago. Back in the early days, it was needed for CD burning. Linux no longer requires IDE-SCSI. If the cdrecord author told you otherwise... well, he was lying because he damn well knows this isn't true.
Your "fundamental APIs" are not APIs at all. They are kernel-internal details. Screwing around with unmaintained out-of-tree drivers is really not supported, and will never be supported. Go use Windows Vista if you want that... no, wait, Microsoft breaks stuff too! I guess you'll have to live in a fantasy world.
Re:Oddness in kernel release cycle (Score:3, Informative)
It's ironic that you mention IDE-SCSI as not working. The latest excitement is that devices that used to be treated as IDE are now being treated as SCSI if you build the appropriate drivers, so people are finding that the drives they thought were "IDE" are actually "ATA", and on
As far as stability is concerned, I've had exactly one kernel problem using a kernel that has ~9 patches that aren't from mainline. And that problem (it would break a device sharing a legacy IRQ with an nvidia ethernet card on a system with MSI support if you don't use msi=disable) was fixed in 2.6.18.y and 2.6.19.1.
The idea in the 2.4 days was that distros (and only distros) would fork off and maintain a release with hundreds of backported patches from the development series that won't be available in a stable vanilla kernel for several years. The idea now is that the latest 2.6.x should work for everybody. Backporting anything is a bad idea in general, and regressions should be fixed before a 2.6.x comes out, or shortly thereafter.