
VMware ESXi Available For Free Starting Today 241

Posted by ScuttleMonkey
from the free-always-sounds-better dept.
Mierdaan writes "VMware's bare-metal hypervisor is available for free starting today. ESXi, which can either be installed or run from an embedded device available in certain servers, has a 32MB footprint and gives small businesses an easy way to get into the virtualization world, with easy upgrade paths to enterprise-level features such as (H)igh (A)vailability and (D)istributed (R)esource (S)cheduler. ESXi runs on most any hardware with a server-class disk controller, and previously retailed for $495. VMware is obviously shooting to prevent Microsoft's Hyper-V technology from gaining a foothold in the marketplace."
  • more info. (Score:5, Informative)

    by stoolpigeon (454276) * <bittercode@gmail> on Monday July 28, 2008 @02:52PM (#24372933) Homepage Journal

    This ZDNet blogger [] already gave it a spin on some commodity-like hardware (and it seems to me more than a few here will be inclined to do the same). He has a nice write-up of the results, as well as some good tips on avoiding trouble spots for those not fortunate enough to be putting this on enterprise-level hardware.
    Downloading the ISO does require creating an account with a ton of required fields, so there are a few minutes of typing involved. There is also the usual EULA to agree to, which I need to go over before I do anything with the disc image I've downloaded.

  • by clang_jangle (975789) * on Monday July 28, 2008 @02:54PM (#24372957) Journal
    Oh, this is going to be fun, I can hardly wait! BTW the download link in TFA appears to be broken, you can get it here [].
  • awesome... (Score:4, Informative)

    by teknopurge (199509) on Monday July 28, 2008 @02:56PM (#24373003) Homepage

    In our testing VMWare is by far the best performing VM platform out there, especially on the networking benchmarks. This is nothing but a good thing.

  • by SuperBanana (662181) on Monday July 28, 2008 @03:06PM (#24373149)

    Don't mind the $2500 per-physical-machine-maximum-2-cpus price tag on the version which actually lets you do stuff, like manage the machines, migrate them, share storage, etc.

  • Re:more info. (Score:0, Informative)

    by Anonymous Coward on Monday July 28, 2008 @03:18PM (#24373329)
    You forgot to mention the key point of the hardware compatibility list: ESXi requires, at minimum, a storage controller that isn't present in anything but enterprise-level machines, and that costs about $250 street price to add to a compatible server (one with PCI-X slots).

    Cliffs: Don't plan on running this on anything you have lying around the house or office, unless you happen to have a spare Dell PowerEdge 1950 (the cheapest compatible hardware, retail approx. $2500 for a 'barebones' config).
  • by Anonymous Coward on Monday July 28, 2008 @03:24PM (#24373409)

    And well worth it, I might add. It is a proven enterprise level technology and it really will save you money right out of the gate. I'm running 20 Windows Server 2003 boxen on a single HP DL385 G3 with 2 AMD 2218's and 16GB RAM, and I'm still only running at about 60-70% utilization.

    For the standard version of Virtual Infrastructure you're going to spend around $2500-$6000, plus around $5000-$10000 for 1 or 2 servers to run it.

    Again, worth it.
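    For anyone who wants to sanity-check the parent's "worth it" claim, here's the back-of-the-envelope math using the high end of the quoted ranges. The $2,500 per-physical-server figure is a hypothetical comparison point, not from the comment:

```shell
#!/bin/sh
# Consolidation math from the figures quoted above (high end of each range);
# per_physical is an assumed cost for one modest standalone server.
vi_license=6000        # VI Standard, high end of the $2500-$6000 range
host_hw=10000          # 1-2 hosts, high end of the $5000-$10000 range
guests=20              # servers consolidated onto the virtual hosts
per_physical=2500      # hypothetical cost of one physical server

virtual_total=$((vi_license + host_hw))
physical_total=$((guests * per_physical))
echo "virtualized: \$$virtual_total vs physical: \$$physical_total"
```

    Even before counting rack space, power, and admin time, the consolidated setup comes out at roughly a third of the cost in this sketch.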

  • Re:awesome... (Score:3, Informative)

    by Richard_at_work (517087) <> on Monday July 28, 2008 @03:29PM (#24373499)
    Uhm, Hyper-V is not VirtualPC - it's completely different (although it can use VirtualPC and VirtualServer images if you really want it to).

    Hyper-V does have multiple LAN segments (with the ability to set up routing between them as required), and unlimited snapshots are available as standard, to respond to both your issues.
  • Re:Business Model? (Score:3, Informative)

    by QuantumRiff (120817) on Monday July 28, 2008 @03:34PM (#24373551)
    To sell you the features that extend it, such as management, hot migration to other machines, etc. ESXi is cool, but a very, very base product. If you start playing with it, you will want to pay for all the features that go along with ESX to manage, deploy, etc.
  • by jeffmeden (135043) on Monday July 28, 2008 @03:34PM (#24373563) Homepage Journal
    You are right. The management software you want is Virtual Center (included as part of ESXi). The only thing you lack is the advanced management features such as automated high availability.
  • by JayGuerette (457133) on Monday July 28, 2008 @03:37PM (#24373609)

    Don't mind the $2500 per-physical-machine-maximum-2-cpus price tag on the version which actually lets you do stuff, like manage the machines, migrate them, share storage, etc.

    When you're running 10-20 virtual servers on a single ESX host and compare that to the hardware cost, space and resource consumption, and management overhead of 10-20 physical servers... this suddenly looks cheap. We're running 100+ ESX hosts... this is an *extremely* cost-effective solution.

  • by hal9000(jr) (316943) on Monday July 28, 2008 @03:37PM (#24373611)
    You can find a FAQ [].

    I haven't looked at ESXi in depth. The biggest missing component I see is the lack of a service console--no command line. I have a few Dell 2550s(?) that for some reason have CD-ROM issues that I need console access for.

    It looks like you have plenty of time to install ESXi and play with it. As long as your virtual servers aren't resource hogs, you can save bundles on hardware. If you step up to ESX and Virtual Infrastructure, you can manage all your VMs through a single server. With VMotion you can move VMs from one hypervisor to another (live, if they use the same SAN), and take (and restore!) snapshots of running machines. Virtualization makes your life so much easier.

    Guess I am a bit of a fan-boi.
  • by moogoogaipan (970221) on Monday July 28, 2008 @03:42PM (#24373703)
    Just found this out: to use ESXi with VC you need to purchase ESX Foundation. Oh well; still, I'll try it w/o Virtual Center.
  • Re:Not FREE (Score:1, Informative)

    by Anonymous Coward on Monday July 28, 2008 @03:42PM (#24373723)

    Technically in English, "free" has no default value. If you want to avoid ambiguity, you have to say something like "free of charge"

    You mean as in "VMware ESXi Available For Free"?

  • by Feyr (449684) on Monday July 28, 2008 @04:00PM (#24374021) Journal

    Their ESX software is a hypervisor that you must install directly on the hardware to start with; if you want to run Linux/Windows under it, you need to get VMware Server.

    ESXi seems to be ESX without the "service console" (a Linux console that runs virtually and lets you manage stuff on the ESX server).

    To manage it you need the VI client, which you can download from their site. It's the same client for all of their software (except VMware Server, because it sucks).

    The VI client is, sadly, Windows-only.

  • Re:more info. (Score:5, Informative)

    by nabsltd (1313397) on Monday July 28, 2008 @04:10PM (#24374157)

    ESX or ESXi works just fine with a bunch of plain old IDE and SATA controllers...see here [] for more information.

    You can't put virtual machines on an IDE drive, but you can put them on SATA disks with the controllers listed at that link. You don't get RAID on any of them, though, even if they have some sort of RAID available. ESX(i) only officially supports storing VMs on RAID arrays if the disks appear to be SCSI of some sort (including SAS, or SATA on an SAS-capable controller).

    You could also use Openfiler [] to create iSCSI targets that ESXi can use to store VMs, and Openfiler can use any storage that any modern Linux can use, including Linux software RAID. This allows you to have a VMware ESX(i) setup permanently (ESX was available as a free 90-day trial) on some pretty cheap hardware.

  • Re:more info. (Score:5, Informative)

    by Anonymous Coward on Monday July 28, 2008 @04:11PM (#24374179)

    "3.9 Audit Rights. You will maintain accurate records as to your use of the Software as authorized by this Agreement, for at least two (2) years from the last day on which support and subscription services ("Services") expired for the applicable Software. VMware, or persons designated by VMware, will, at any time during the period when you are obliged to maintain such records, be entitled to inspect such records and your computing devices, in order to verify that the Software is used by you in accordance with the terms of this Agreement..."

    No wonder no one wants to read the EULA.

    They don't want the VMware SWAT team busting in on them to see if they're using free software in accordance with the license.

  • Re:more info. (Score:3, Informative)

    by Anonymous Coward on Monday July 28, 2008 @04:13PM (#24374219)
    You don't even need to mess with iSCSI if you don't want to: ESXi can use a plain old NFS NAS. That's not exactly a stretch.

    As I've already pointed out, ESXi also runs quite happily on a bunch of bog-standard SCSI controllers like the Adaptec AIC7xxx range, so you don't even need remote storage of any kind, and certainly not an enterprise class SAN.
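    For anyone wanting to try the NFS route, the NAS-side setup on a stock Linux box is a one-line export (the path and subnet below are hypothetical). The usual gotcha is that ESXi mounts NFS datastores as root, so the export needs no_root_squash:

```shell
# /etc/exports on the Linux NAS -- hypothetical path and subnet
/srv/vmstore  192.168.1.0/24(rw,sync,no_root_squash)
```

    Run `exportfs -ra` to publish the export, then add it as a datastore from the VI client.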
  • by mccabem (44513) on Monday July 28, 2008 @04:14PM (#24374229)

    There is no Firewire for servers or workstations.

    There's just Firewire like there's just USB. He's talking about Firewire support in VMware like there's USB support in VMware.


  • by joe_n_bloe (244407) on Monday July 28, 2008 @04:46PM (#24374757) Homepage

    Also you can surf the web for other management applications written using the VI API. There are some out there already and I think that the release of ESXi will really accelerate this. Which is a good thing because VC could use a kick in the pants (would be good for VMware too).

    BTW there is a limited built-in web management interface.

  • by J-F Mammet (769) on Monday July 28, 2008 @04:52PM (#24374823) Homepage

    For my work we wanted to set up an HA cluster with 2 (or at worst 3) servers running both a Linux and Windows environment for some DRM stuff. So after years of just toying with VMware Server and simple VMs like that, I finally jumped into the wonderful world of hypervisors.
    I of course first tried the open source solutions, and boy was that a nightmare. First Xen, on a DRBD+OCFS2+Heartbeat environment. Never managed to get it stable: I got either kernel panics from OCFS2 after some time, or the servers would hang when doing live migrations. Also tried the iSCSI route, and still no way to stabilize the thing.
    Then, since I thought the issue was with the only officially supported Xen kernel (2.6.18), I tried KVM, since it's integrated into the mainline kernel. Well, surprise, I got more or less the exact same result: kernel panic when trying to migrate a VM...
    So I gave ESX a try, not really believing it would be any better. Well, it actually works, but while it was easier to set up than KVM/Xen for HA and stuff like that, it sure wasn't trivial either. I spent a lot of time on Google researching the various issues I was having (who would think that you HAVE to use the names of the machines and not their IPs when setting up the HA stuff?), but at least I got it to work. The accounting people sure aren't happy with it, though...

  • Re:more info. (Score:5, Informative)

    by Anonymous Coward on Monday July 28, 2008 @04:56PM (#24374869)
    If YOU knew the first thing about VMWare ESX YOU'D know that they use almost unmodified Linux drivers, and any device supported by the driver will work under ESX and ESXi just as well as it will work under Linux.

    Not to mention if YOU were actually reading the thread YOU'D know that the GGP is complaining that he has to buy a $250 "Enterprise class" SAS controller and have a server with PCI-X slots in it, which is total crap. The only reason he thinks this is because the ZDNet blogger who wrote the "review" the GGP read is an idiot who has some weird fixation with SAS and totally ignores all the other available, cheaper and less troublesome storage options such as SCSI or an NFS mounted NAS.

    Last but not least, you said it yourself: VMware only supports various certified platforms, but don't expect to get much support for ESXi anyway. ESXi will be fine in an enterprise setup where you need a scratch server, or where you have a spare "supported" server lying around so you can be sure it will work. If you're expecting to throw ESXi on any old bit of whitebox crap and get an enterprise-quality server out of it, you're delusional. At the same time, whining that you can't set up a simple whitebox machine and run ESXi on it for your own uses because you have to buy a $250 SAS controller first is just uninformed crap.

    But thanks for playing.
  • simplicity (Score:2, Informative)

    by dgym (584252) on Monday July 28, 2008 @06:51PM (#24376769)
    There are many setups that should work, but don't. I have used the following extensively, and in production, so maybe it can help.

    On each node I set up LVM, from which I can allocate logical volumes for the guests (e.g. guest 1 gets /dev/guests/1 on both machines).

    I then use DRBD to mirror the logical volumes, so yes, there can be quite a lot of DRBD devices - one per guest.

    For OpenVZ the DRBD devices get ext3 (so quota works), mounted on whichever node runs the guest. This doesn't support live migration; instead I suspend to disk, copy the dump, and restore it on the other machine. With the intermediate steps of unmounting, switching primaries, and mounting, this takes about 5 seconds.

    For KVM the guests just use the DRBDs directly. I enable dual primary which lets me do live migrations over TCP. This is extremely fast, fast enough that it would be appropriate for load balancing.

    One notable benefit of this system, as opposed to cluster file systems, is that there is no locking across the network. Each logical volume is "owned" by one node at a time, so there is no need for synchronizing access for every read or write.

    Seen too many options yet?
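    The parent's per-guest mirroring scheme can be sketched in a handful of commands. Every name here (volume group "guests", DRBD resource "guest1", container ID 101, peer "node2") is hypothetical, and the DRBD resource is assumed to already be defined in drbd.conf on both nodes:

```shell
#!/bin/sh
# Backing storage: one LV per guest, mirrored by its own DRBD resource.
# Run the LVM/DRBD steps on both nodes.
lvcreate -L 10G -n guest1 guests        # creates /dev/guests/guest1
drbdadm create-md guest1                # write DRBD metadata on the LV
drbdadm up guest1                       # bring the per-guest mirror online

# OpenVZ "suspend, copy, restore" migration described above (ctid 101):
vzctl chkpnt 101 --dumpfile /tmp/guest1.dump
drbdadm secondary guest1                # after unmounting ext3 locally
ssh node2 drbdadm primary guest1        # peer takes over and mounts
scp /tmp/guest1.dump node2:/tmp/
ssh node2 vzctl restore 101 --dumpfile /tmp/guest1.dump
```

    This is only a sketch of the workflow, not a drop-in script; the KVM variant skips the checkpoint/restore steps entirely, since dual-primary DRBD allows a live migration over TCP.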
  • by martums (306333) on Monday July 28, 2008 @07:23PM (#24377177)

    Their ESX software is a hypervisor that you must install directly on the hardware to start with; if you want to run Linux/Windows under it, you need to get VMware Server.

    I disagree with the last part of what you said. The VMware Server product will let you run one or more virtual machines on top of Linux or Windows. ESXi has no underlying host OS and is (supposed to be) a bare-metal hypervisor (god, I hate that word), allowing you to run one or more virtual machines on the bare metal, using only the hypervisor, without Windows or Linux booting first. (The ongoing debate of whether ESX or ESXi leverages any *nix is not for me to engage in.) VMware Server is a completely different product from ESX and ESXi. And now that both VMware Server and ESXi are available free, it seems like VMware Server just became the red-headed stepchild.

    ESX does not require VMware Server. Two separate products, now both available free of charge.

    VMware Server might be a cheap alternative if you can't shell out the $300 for Workstation. The latter is worth every penny.

  • by martums (306333) on Tuesday July 29, 2008 @09:30AM (#24383825)

    is the lack of a service console--no command line. I have a few Dell 2550(?) that for some reason have CDrom issues that I need console access for.

    It is possible, though unsupported, to SSH into ESXi. This doesn't have the same functionality as the service console, as you're probably aware. It's enabled on one or more of the ESXi servers we use (for development, not production, lest the flames ensue), and is handy in a pinch. Paul Lalonde posted instructions in the community at;jsessionid=529C6EC4C2DAD952438F591A8052BBBB [] quoting his instructions...

    1. Boot your ESXi server and wait for it to finish loading, then do the following:
    2. Press Alt-F1 to switch to the main console
    3. Type 'unsupported' (you will not be able to see what you're typing)
    4. When prompted, enter the root user's password
    5. Type: vi /etc/inetd.conf
    6. Find the line that begins with #ssh
    7. Move the cursor over the first 's' and press 'i' (for insert mode)
    8. Press backspace (deleting the '#'), then Esc
    9. Type ':wq!' to write the file and exit
    10. Type 'ps | grep inetd' to find the inetd process ID
    11. Send the hangup signal to that process ID so inetd rereads its config: kill -s HUP <pid>
    12. You can now SSH into your ESXi server
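    Steps 5-9 amount to uncommenting one line, so the same edit can be done with a sed one-liner. The demo below runs against a scratch copy (the line's exact contents vary by ESXi build, so the service line shown is hypothetical):

```shell
#!/bin/sh
# Demo against a scratch file; on the ESXi console the target is /etc/inetd.conf.
conf=$(mktemp)
printf '#ssh stream tcp nowait root ssh-daemon ssh\n' > "$conf"  # hypothetical line
sed -i 's/^#ssh/ssh/' "$conf"        # steps 5-9: drop the leading '#'
grep '^ssh' "$conf"                  # the service line is now active
rm -f "$conf"
# On the real box, follow with steps 10-11 so inetd rereads its config:
#   kill -s HUP "$(ps | awk '/[i]netd/{print $1; exit}')"
```

    Handy if you're enabling this on more than one box, though the interactive vi route above is what Paul Lalonde's instructions describe.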

