Server Consolidation Guide via Virtualization

sunshineluv7 writes to tell us TechTarget is running a good overview of 'why, when, and how to use virtualization technologies to consolidate server workloads.' The summary provides links to several podcasts and other articles relating real-world experience with using virtualization to best meet your needs. From the summary: "Advances in 64-bit computing are just one reason that IT managers are taking a hard look at virtualization technologies outside the confines of the traditional data center, says Jan Stafford, senior editor of SearchServerVirtualization.com."
  • by Anonymous Coward on Wednesday August 09, 2006 @06:44PM (#15877189)
    If you are suggesting that a critically stressed virtual machine would somehow detrimentally affect a properly configured host machine or other properly configured VMs on the same host to any significant degree... you're demonstrating a lack of knowledge about the fundamental principles and concepts of virtualization.

    A VM only gets the resources you give it. If one VM with 512 MB of RAM eats every last bit of its memory in a blaze of glory, that doesn't affect resources dedicated elsewhere. Similarly, a properly configured host will not allow any VM to grab 100% of the host CPU either.
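    To make that concrete, the ceiling really is just per-VM configuration. Here's a minimal Python sketch that appends the relevant lines to a guest's .vmx file - memsize is a standard VMware setting, but the sched.* cap names are ESX-flavored assumptions on my part, so check your version's docs before trusting them:

        # Sketch: a VM's resource ceiling is just lines in its .vmx file.
        # "memsize" is a standard VMware setting; the "sched.*" names are
        # assumed ESX-style caps -- verify them against your own docs.
        caps = {
            "memsize": "512",        # guest sees at most 512 MB, period
            "sched.mem.max": "512",  # assumed host-side memory cap (MB)
            "sched.cpu.max": "500",  # assumed CPU cap, in MHz
        }
        with open("guest.vmx", "a") as vmx:  # hypothetical config path
            for key, value in sorted(caps.items()):
                vmx.write('%s = "%s"\n' % (key, value))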
  • by Asgard ( 60200 ) * <jhmartin-s-5f7bbb@toger.us> on Wednesday August 09, 2006 @08:37PM (#15877758) Homepage
    It's not so much one VM going bad, but that your application is totally self-contained on that VM, so you can move it (live, as with VMware ESX) to another hardware device with no worries about changing DNS, IPs, odd dependencies in /usr/lib, etc.
  • by billstewart ( 78916 ) on Wednesday August 09, 2006 @10:16PM (#15878094) Journal
    RTFA - it's about virtual machines, not virtual-domain web servers (which by now are old technology and an obvious win). Yes, virtualization does take some extra resources, and you need a disciplined approach to administration to use it successfully in a production environment - but production environments already needed disciplined administration and enough resources. The virtual-machine people's assertion is that it's actually easier than maintaining multiple boxes, especially given the extremely fast CPUs and cheap RAM available these days.


    In a Unix environment, you can argue about whether the basic multi-user permissions environment and extra tricks like jails are enough to provide security in a multi-user multi-application market or whether it's helpful to use virtual machines as well. In a Windows environment, there's really not much question, even with XP-Pro and server versions - there's just not enough help from the OS. But even in a Unix environment, there are applications that want to use specific directories, or specific TCP and UDP port numbers, and virtualization lets you run multiple instances at the same time managed by different people. It also provides you some Least-Privilege-Principle separation of powers between your administrators - you can have one person who needs root to manage the firewall, but doesn't need to muck with the database, and somebody else who needs to control the database but doesn't need to touch the web servers.


    For some applications, like virtual colo, virtualization environments really do rock, whether they're VMWare, UML, Xen, or whatever. I've seen people renting out virtual machines for ~$20/month or less, when physical colo costs would be $100, and it works fine (if there's enough cheap RAM) because usually you don't really need a big CPU full-time just to run an email server and web server or whatever.


    Running multiple OSes at once is mainly useful in a desktop environment, or for specialized tasks like running an OpenBSD firewall, a Windows domain administration system, and a Linux general-purpose environment including web server and database all on the same box. I agree that it's usually cleaner to run everything in a single environment, even if it's multiple VMs - but there are times that the tools you want to use won't all run on the same OS.

  • by tadheckaman ( 578425 ) <.moc.namakceh. .ta. .dat.> on Wednesday August 09, 2006 @11:10PM (#15878285) Homepage
    Place the server in undo mode/snapshot mode, and then just back up the vmdk. When it's placed into undo/snapshot mode, the vmdk becomes read-only, with the changes written to a separate file. Then all you need to do is copy that vmdk and, when done, commit the undo/snapshot. When restoring the backup, the system comes online as if it had lost power. On ESX it's a snap to do, and Vizioncore makes software that does this for you (ESXRanger); however, I leave VMware Server as an exercise for the reader. As I don't have any need for this, I haven't looked into actually scripting it in VMware Server. But the idea is the same, and I bet it's possible - see the sketch below.
    Doing a quick search on the forums, it sounds like vmware-cmd is the tool to use, or you could write a script to talk to VMware's SDK.
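    Something like this, maybe - a minimal sketch assuming an ESX 3-era vmware-cmd with createsnapshot/removesnapshots subcommands (older undo-mode tools used different verbs, so check your vmware-cmd's own usage output), and with made-up datastore paths:

        # Sketch of the snapshot-then-copy backup described above.
        # Assumes ESX 3-era vmware-cmd with createsnapshot/removesnapshots;
        # the vmx/vmdk paths below are hypothetical placeholders.
        import shutil
        import subprocess

        VMX = "/vmfs/volumes/datastore1/mail/mail.vmx"         # placeholder
        VMDK = "/vmfs/volumes/datastore1/mail/mail-flat.vmdk"  # placeholder
        DEST = "/backup/mail-flat.vmdk"

        def vmware_cmd(*args):
            subprocess.check_call(["vmware-cmd", VMX] + list(args))

        # 1. Snapshot: the base vmdk goes read-only; writes hit a delta file.
        vmware_cmd("createsnapshot", "backup", "nightly", "0", "0")
        try:
            # 2. Copy the now-quiescent base disk somewhere safe.
            shutil.copy(VMDK, DEST)
        finally:
            # 3. Commit: fold the delta back into the base disk.
            vmware_cmd("removesnapshots")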
  • Re:64bit? (Score:3, Informative)

    by demon ( 1039 ) on Thursday August 10, 2006 @12:02AM (#15878444)
    2k3 Datacenter can't support 128 GB on i386; it's not possible, as PAE only adds an extra 4 address bits (going from 32 to 36 bits of physical address space). Also, there are still user-process limitations that make it impossible for apps like, say, database servers to address more than 3 GB (not 4 GB; it's a limitation due to kernel address space mappings in a process). x86_64 wipes that out easily, so for healthy-sized virtualization environments it's definitely the preferred environment (and you can still run your i386 apps transparently).
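    For the record, the arithmetic checks out - a quick sanity check (plain Python, nothing Windows-specific):

        # Sanity check on the parent's numbers: PAE widens physical
        # addressing from 32 to 36 bits, so the hard ceiling is 2**36 bytes.
        pae_ceiling_gb = (1 << 36) // 2**30
        print(pae_ceiling_gb)  # 64 -- well short of 128 GB
        # And a 32-bit process still has only a 4 GB virtual space; Windows
        # keeps 2 GB of it (1 GB with the /3GB boot switch) for kernel
        # mappings, leaving user code at most 3 GB, PAE or not.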
