Virtualization In Linux Kernel 2.6.20

mcalwell writes with an article about the Kernel-based Virtual Machine (or KVM for short) in the release candidate Linux 2.6.20 kernel. From the article: "[T]he Linux 2.6.20 kernel will include a full virtualization (not para-virtualization) solution. [KVM] is a GPL software project that has been developed and sponsored by Qumranet. In this article we are offering a brief overview of the KVM for Linux as well as offering up in-house performance numbers as we compare KVM to other virtualization solutions such as QEMU Accelerator and Xen."
  • by EzInKy ( 115248 ) on Wednesday January 10, 2007 @02:54AM (#17535804)
    I've seen a lot of mentions of file corruption on their mailing list, even with ext3.
  • by marol ( 734015 ) on Wednesday January 10, 2007 @02:55AM (#17535812)
    Quoting Torvalds from the 2.6.19 release announcement:
    'So go get it. It's one of those rare "perfect" kernels.'
  • mirror (Score:0, Informative)

    by Anonymous Coward on Wednesday January 10, 2007 @02:57AM (#17535838)
    already slashdotted.

    For only being a release candidate the Linux 2.6.20 kernel has already generated quite a bit of attention. On top of adding asynchronous SCSI scanning, multi-threaded USB probing, and many driver updates, the Linux 2.6.20 kernel will include a full virtualization (not para-virtualization) solution. Kernel-based Virtual Machine (or KVM for short) is a GPL software project that has been developed and sponsored by Qumranet. In this article we are offering a brief overview of the Kernel-based Virtual Machine for Linux as well as offering up in-house performance numbers as we compare KVM to other virtualization solutions such as QEMU Accelerator and Xen.

    What has been merged into the Linux 2.6.20 kernel is the device driver for managing the virtualization hardware. The other component that comprises KVM is the user-space program, which is a modified version of QEMU. Kernel-based Virtual Machine for Linux uses Intel Virtualization Technology (VT) and AMD Secure Virtual Machine (SVM/AMD-V) for hardware virtualization support. Accordingly, one of the stated hardware requirements to use KVM is an x86 processor with either of these technologies. The respective technologies are present in the Intel Core series and later, Xeon 5000 series and later, Xeon LV series, and AMD's Socket F and AM2 processors.
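A quick way to check for the hardware requirement described above is to inspect the CPU flags in /proc/cpuinfo on a Linux host; this is a generic sketch, not a command from the article:

```shell
# Look for hardware virtualization flags in /proc/cpuinfo:
# 'vmx' marks Intel VT, 'svm' marks AMD-V.
flags=$(grep -E -o 'vmx|svm' /proc/cpuinfo | sort -u)
if [ -n "$flags" ]; then
    echo "hardware virtualization supported: $flags"
else
    echo "no VT/AMD-V flag found (unsupported, or disabled in the BIOS)"
fi
```

Note that some BIOSes ship with the extensions disabled, so an absent flag does not always mean the silicon lacks support.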

    The Kernel-based Virtual Machine also runs every virtual machine as a regular Linux process handled by the Linux scheduler, by adding a guest execution mode. Since each virtual machine is a standard Linux process, all standard process management tools can be used. The KVM kernel component is embedded in Linux 2.6.20-rc1 kernels and newer, but the KVM module can be built on older kernels (2.6.16 to 2.6.19) as well. At this stage, KVM supports Intel hosts, AMD hosts, Linux guests (x86 and x86_64), Windows guests (x86), SMP hosts, and non-live migration of guests. However, optimized MMU virtualization, live migration, and SMP guests are still being worked on. Microsoft Windows x64 does not work with KVM at this time.

    Whether you are using a kernel with KVM built in or loading it as a module, the process for setting up and running guest operating systems is quite easy. After setting up an image (qemu-img works with KVM) and loading the KVM kernel component, the modified version of QEMU can be used with the standard QEMU arguments to get you running.
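The workflow described above can be sketched as a short shell session. The image name, sizes, and the qemu-kvm binary name are illustrative assumptions, not values from the article, and the commands are printed rather than executed here:

```shell
#!/bin/sh
# Dry-run sketch of the KVM setup steps described above. Filenames,
# sizes, and the 'qemu-kvm' binary name are illustrative assumptions.
run() { echo "+ $*"; }   # swap 'echo' for real execution when ready

run qemu-img create -f qcow2 guest.img 10G   # create a guest disk image
run modprobe kvm kvm-intel                   # kvm-amd on AMD hosts
run qemu-kvm -hda guest.img -cdrom install.iso -boot d -m 512
```

Once the module is loaded and the image exists, the last command boots the guest installer exactly as stock QEMU would, which is the point the article makes about reusing the standard QEMU arguments.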

    The hardware requirements to use KVM are an x86/x86_64 processor with AMD or Intel virtualization extensions and at least one gigabyte of system memory, to allow enough RAM for the guest operating system. For our purposes, we used two dual-core Intel Xeon LV processors with the Linux 2.6.20-rc3 kernel, which was released on January 1, 2007. Below is the rundown of system components used.
    Hardware Components
    Processor: 2 x Intel Xeon LV Dual-Core 2.00GHz
    Motherboard: Tyan Tiger i7520SD S5365
    Memory: 2 x 512MB Mushkin ECC Reg DDR2-533
    Graphics Card: NVIDIA GeForce FX5200 128MB PCI
    Hard Drives: Western Digital 160GB SATA2
    Optical Drives: Lite-On 16x DVD-ROM
    Cooling: 2 x Dynatron Socket 479 HSFs
    Case: SilverStone Lascala LC20
    Power Supply: SilverStone Strider 560W
    Software Components
    Operating System: Fedora Core 6

    The benchmarks we used for comparing performance were Gzip compression, LAME compilation, LAME encoding, and RAMspeed. The virtualization environments we used were QEMU 0.8.2 with the kqemu accelerator module, Xen 3.0.3, and finally KVM. We also compared these virtualized environments against running Fedora Core 6 Zod without any form of virtualization. During the Xen 3.0.3 testing, we used full virtualization and not para-virtualization. The image size was set to 10GB during the testing process. The operating system used throughout the entire testing process was Fedora Core 6 Zod.
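The article does not list its exact benchmark invocations, but a gzip-compression timing run of the kind it measures might look like the following; the input file and its size are assumptions made here for illustration:

```shell
# Time gzip compression on a sample input file. The file name and
# 8 MB size are made up, since the article omits its exact setup.
dd if=/dev/urandom of=testfile bs=1M count=8 2>/dev/null  # sample input
time gzip -9 -c testfile > testfile.gz                    # timed run
gzip -t testfile.gz && echo "archive OK"                  # verify output
```

Running the same command natively and inside each guest is what produces the kind of comparison the article reports.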

    Looking over the virtualization performance results, K

  • Re:Acronym overload (Score:5, Informative)

    by X0563511 ( 793323 ) * on Wednesday January 10, 2007 @03:07AM (#17535904) Homepage Journal
    It's not really a problem when you have lots of letters in an acronym. It's more of a problem when you have at least three different things in the same industry with the same acronym. [wikipedia.org]
  • by arivanov ( 12034 ) on Wednesday January 10, 2007 @03:12AM (#17535924) Homepage

    No, the attention has been drawn from people actually giving a fuck.

    Kernels from 2.6.9 onwards are a disaster.

    • PIO IDE causes a deadlock on Via chipsets under heavy IO from 2.6.11 onwards. Worst in 2.6.16, but still reproducible on others.
    • IDE TAPE no longer works from 2.6.10 onwards
    • IDE-SCSI no longer works from 2.6.10 onwards at least up to 2.6.16
    • LONGHAUL is broken to some extent since 2.6.9
    • There is a change in fundamental APIs - termIO (2.6.16), locking (2.6.15), scheduling (every second f*** kernel), etc. - every release, so it takes a full-blown porting effort and untangling of unrelated changes to backport fixes to a driver.

    The original idea was that "distributions will fork off and maintain a kernel for releases". This idea has degenerated into "only distributions can fork and maintain a kernel". Sole developers and hobbyists are being treated the same way Microsoft treats them - as a "one night stand". In fact, even distributions are unable to keep up with that. Fedora has half of these bugs in it. So does etch, so does Mandriva and all other lesser distributions. Only RHEL and Suse ship something reasonably usable, and it is 1 year behind on features.

    Reality is that anything past 2.6.9 should be called 2.7.x and that is it. And it may be seriously worth it to consider Gentoo/BSD or Debian/BSD. While the BSD crowd has its own failings, it does not change fundamental APIs for entertainment purposes every month on the stable branch.

  • by Gopal.V ( 532678 ) on Wednesday January 10, 2007 @03:20AM (#17535960) Homepage Journal
    Does the Dec 12th story [slashdot.org] make this one a dupe, or was it just an early warning?
  • by eno2001 ( 527078 ) on Wednesday January 10, 2007 @03:23AM (#17535976) Homepage Journal
    My experience so far...

    After playing around with paravirtualization with Xen for the past two+ years, I finally got the cash in August to buy a cheapo AMD dual-core 64-bit system (~$800 at Best Buy: an HP system with a 4200 and 2 gigs of DDR2 RAM). I've run both Xen and QEMU on it under 64-bit Gentoo Linux. The performance of Windows XP on Xen vs. QEMU is fairly close. I would have to say that it seems to me that where Xen suffers is disk I/O. Anything that's disk intensive seems to eat up the CPU. I suspect this wouldn't be the case on better hardware with a high performance SCSI/RAID system. That should, at least, make things a bit better anyway. But for the time being I'm sticking with Xen since it's just too easy to use. And I am especially interested in the live migration features. As long as you have centralized disk storage, you can move live VMs between physical hosts with less than a second of interruption (i.e. your users will never notice). Keep in mind, I'm doing this all at home as I'd really like to collapse many of my machines into one or two boxes and keep everything else as simple X displays where GUIs are needed. I've currently got four VMs running on the box with two of them being fully virtualized (Windows XP SP2 for accessing DRMed crap and Redhat Linux 7 which still hosts some services I don't want to part with) and the other two being paravirtualized (Domain0 which is just the VM management environment and my Gentoo Asterisk "PBX"). Paravirtualized performance is damn amazing. I think if I used strictly paravirtualized OSes I could probably squeeze out 20 VMs from this guy with decent performance. I actually just added two more gigs to the system tonight, and if I assume 128 megs per virtual machine (I've allocated 512M to the Windows XP VM) I can get up to 32 VMs running simultaneously.
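The VM-count estimate at the end of the comment above is simple division; a quick sanity check of the arithmetic:

```shell
# 4 GB of RAM divided by an assumed 128 MB per paravirtualized guest.
total_mb=4096
per_vm_mb=128
max_vms=$((total_mb / per_vm_mb))
echo "$max_vms"   # 32, matching the "up to 32 VMs" estimate
```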

    As far as KVM goes, I've had a good deal of experience with QEMU, and if KVM is similar, there are some limitations I hope they will overcome. (For what it's worth, the hardware-based virtualization in Xen is also a modified QEMU process called qemu-dm.) The main one is PCI device allocation. Xen allows you to partition your PCI devices and assign individual cards to specific VMs. I don't think QEMU does this, and I expect that KVM doesn't either.
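For reference, the PCI assignment the comment above describes is done in Xen 3.x by hiding the device from dom0 (pciback) and listing it in the guest's config; the bus address below is a placeholder, not a device from the comment:

```
# Xen domU config fragment (xm): assign PCI device 00:1d.0 to this guest.
# The device must first be bound to pciback in dom0, e.g. via the
# dom0 kernel parameter pciback.hide=(00:1d.0).
pci = [ '00:1d.0' ]
```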
  • by Ryan Mallon ( 689481 ) on Wednesday January 10, 2007 @03:33AM (#17536050)
    Umm, what? According to http://www.kernel.org/ [kernel.org] 2.6.19.1 is the latest stable version. Stable versions are denoted by an even second version number; odd numbers are for development versions.
  • Re:kqemu? (Score:4, Informative)

    by popeydotcom ( 114724 ) on Wednesday January 10, 2007 @04:02AM (#17536202) Homepage
    As I understand it, kvm makes use of the VT instructions present in modern CPUs to make QEMU nice and zippy. Older CPUs don't have those instructions, so they would still "need" kqemu to make QEMU go full speed.
  • by ens0niq ( 883308 ) on Wednesday January 10, 2007 @04:08AM (#17536212)
    > [T]he Linux 2.6.20 kernel will include a full virtualization (not para-virtualization) solution. Yep. But Ingo Molnar (yes, the Hungarian kernel hacker) announced [kerneltrap.org] a new patch introducing paravirtualization support for KVM.
  • by jnana ( 519059 ) on Wednesday January 10, 2007 @04:14AM (#17536250) Journal

    I'm not sure in general, but I've been happily using 2.6.19 for a while with no issues.

    As for kvm, I downloaded it about a week ago and manually built and installed it (on 2.6.19), and I've had no trouble with it at all. It was very easy to build and install following the instructions [sourceforge.net], and creating images and installing a new os on them is trivial. I set up a couple of images for experimenting with ubuntu and fedora (my main os is gentoo), and I set up another image on which I installed Plan 9, just to play around with that a little.

  • by Spoke ( 6112 ) on Wednesday January 10, 2007 @04:51AM (#17536442)
    The file corruption talked about has been in the kernel for some time, but recent changes made it more visible and easier to trigger. It should be fixed in the latest 2.6.20rc kernel.

    If you search the kernel archives for ext3 corruption you'll find a couple long threads discussing the issue and the solution.
  • Re:Apples to Oranges (Score:5, Informative)

    by repvik ( 96666 ) on Wednesday January 10, 2007 @05:06AM (#17536526)
    Xen requires a P6 or better at this time (available for ~5 years). They hope to add support to ARM and PPC at a later time. KVM, OTOH, depends on brand-spanking new CPUs with virtualization instructions. QEmu just requires some CPU-thingy.
  • Re:Apples to Oranges (Score:2, Informative)

    by overbored ( 450853 ) on Wednesday January 10, 2007 @05:25AM (#17536620) Homepage Journal
    rtfa. not qemu, but qemu accelerator.

    http://fabrice.bellard.free.fr/qemu/qemu-accel.html [bellard.free.fr]
  • by Kjella ( 173770 ) on Wednesday January 10, 2007 @06:08AM (#17536864) Homepage
    Anyone other than SLES or RHEL is a second class Linux citizen today. Without vendor support you can forget about trying to run a stable Linux kernel anymore. Bring back the old odd / even split!

    Well, first off there's CentOS if you don't need the support. Secondly, while the kernel guys are happy hacking away at 2.6.x, there are other distributions like Debian and Ubuntu LTS which will support a stable API/ABI for several years.

    Yes, now 2.6 keeps breaking, but does anyone remember the bad old days when distros were backporting hundreds of patches from 2.5 to 2.4? What the distros are shipping now is closer to a vanilla kernel, for better and for worse. They pick one version, stay with it and stabilize it. That's what SLES, RHEL and all the other distros do.
  • Re:Acronym overload (Score:3, Informative)

    by dunstan ( 97493 ) <dvavasour@i e e . o rg> on Wednesday January 10, 2007 @06:42AM (#17537058) Homepage
    Strictly it's not an acronym unless it is commonly pronounced as a word.

    NATO is an acronym, KVM isn't.
  • by thue ( 121682 ) on Wednesday January 10, 2007 @06:50AM (#17537098) Homepage
    If people really wanted the old stable versions, then they would be using 2.6.16.y, which is still being maintained using the same old stable policies as 2.4.

    http://en.wikipedia.org/wiki/Linux_kernel#Versions [wikipedia.org]

    The fact that most people don't seem to run 2.6.16 seems to indicate that people are happy to forgo some stability in exchange for having the new features in the latest 2.6.x kernel available now.
  • by piranha(jpl) ( 229201 ) on Wednesday January 10, 2007 @07:31AM (#17537304) Homepage

    Why do they document the model of CD-ROM drive they used, but not the configuration of each emulation/simulation environment? I was shocked by the LAME compile times--and forced to wonder and guess what the filesystem configuration was. Is the filesystem located in an image file on the "host" computer's filesystem? Wouldn't it be interesting to try using a comparable medium across all benchmarks (shared NFS server, or low-level access to the same block device)?

    Not enough data (CPU time vs. real time, etc.), not enough benchmarks (different filesystem media, etc.), poor documentation (configuration, anyone?), on what doesn't even amount to an official release. Correct me if I'm wrong.

  • Re:Mod me down! (Score:4, Informative)

    by WNight ( 23683 ) on Wednesday January 10, 2007 @09:24AM (#17538060) Homepage
    There's no valid way to enforce post-sale contracts, EULAs aren't valid.
  • by gmack ( 197796 ) <gmack@noSpAM.innerfire.net> on Wednesday January 10, 2007 @09:39AM (#17538272) Homepage Journal

    IDE-SCSI no longer works from 2.6.10 onwards at least up to 2.6.16

    IDE-SCSI never worked properly. I've had constant problems with it since I started CD burning on Linux. Thankfully it is now obsoleted by the new ATA drivers, since the ATA devices just show up on the system as SCSI devices. If you really need to have SCSI support for IDE devices, I highly suggest trying the new drivers.

  • by r00t ( 33219 ) on Wednesday January 10, 2007 @11:48AM (#17540130) Journal
    Does anybody still have an IDE tape drive that hasn't died of old age? Is it actually big enough to do a backup?

    The IDE-SCSI abomination is a foul and evil hack that should have been removed many years ago. Back in the early days, it was needed for CD burning. Linux no longer requires IDE-SCSI. If the cdrecord author told you otherwise... well, he was lying because he damn well knows this isn't true.

    Your "fundamental APIs" are not APIs at all. They are kernel-internal details. Screwing around with unmaintained out-of-tree drivers is really not supported, and will never be supported. Go use Windows Vista if you want that... no, wait, Microsoft breaks stuff too! I guess you'll have to live in a fantasy world.
  • by iabervon ( 1971 ) on Wednesday January 10, 2007 @02:53PM (#17543622) Homepage Journal
    IDE-SCSI no longer works from 2.6.10 onwards at least up to 2.6.16

    It's ironic that you mention IDE-SCSI as not working. The latest excitement is that devices that used to be treated as IDE are now being treated as SCSI if you build the appropriate drivers, so people are finding that the drives they thought were "IDE" are actually "ATA", and on /dev/sda now. Not only is the functionality of treating "IDE" devices as "SCSI" still available, it doesn't require a special module, and it's becoming default. They're eventually going to ditch "IDE" entirely, because it's crufty old code that nobody really likes. Of course, with any luck, PATA controllers will be enumerated and the information put in sysfs along with master/slave, so you'll still be able to get /dev/hda out of udev for your PATA hard drive.

    As far as stability is concerned, I've had exactly one kernel problem using a kernel that has ~9 patches that aren't from mainline. And that problem (it would break a device sharing a legacy IRQ with an nvidia ethernet card on a system with MSI support if you don't use msi=disable) was fixed in 2.6.18.y and 2.6.19.1.

    The idea in the 2.4 days was that distros (and only distros) would fork off and maintain a release with hundreds of backported patches from the development series that won't be available in a stable vanilla kernel for several years. The idea now is that the latest 2.6.x should work for everybody. Backporting anything is a bad idea in general, and regressions should be fixed before a 2.6.x comes out, or shortly thereafter.
