- Installed two of the 750GB drives into new benfranklin. Need to come up with a partition layout. Maybe just one system per disk (one of which will be virtualized)? I think this is good enough for testing, so that's what I'll do.
- RHEL5 is installed, as well as the latest version of VMware Server. The current (default) configuration seems to be invalid and asks to be re-configured via /usr/bin/vmware-config.pl. Reconfiguring with the defaults makes no difference. It says it needs inetd/xinetd, but neither is installed. Yum won't work without RHN, so we've got to set that up. I'm just going to unentitle tomato; we seem to have agreed that that machine's a lost cause. (New Benfranklin). I installed the prerequisites, and now it's asking for the 20-character serial number. I got 15 of them from VMware; the link is in the text above the big shiny "Download" button. I put them in a text file in my home directory. Entered the serial number, and vmware runs.
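Since the serials live in a plain text file, a tiny sanity check avoids pasting a mangled one into the installer. This is a hypothetical helper of our own (the four-groups-of-five format is an assumption based on the installer's "20-character" prompt; the function name and file path are ours):

```shell
# Hypothetical helper: sanity-check a VMware Server serial before pasting it
# into vmware-config.pl. Assumes the format is four dash-separated groups of
# five alphanumerics (20 characters total, per the installer's prompt).
valid_serial() {
  echo "$1" | grep -Eq '^[A-Z0-9]{5}(-[A-Z0-9]{5}){3}$'
}

# Demo: flag malformed lines (in practice, pipe the saved serials file in).
printf '%s\n' 'ABCDE-FGHIJ-KLMNO-PQRST' 'not-a-serial' |
while read -r s; do
  valid_serial "$s" && echo "ok: $s" || echo "malformed: $s"
done
```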
- Making a machine on a virtual disk is easy, just follow the default settings for the most part. The virtual disks are split into 2GB files on the real disk, so disk-intensive activities might take a noticeable hit.
- Making a machine on a partition/real disk requires doing a "custom" setup.
- Choose NAT for networking so we don't have to mess with getting new IPs, etc. To use this, just select DHCP for the VM's OS.
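For reference, the NAT choice shows up in the VM's .vmx file as something like the fragment below (key names are from VMware Server's .vmx format; the adapter number is just an example, and the guest still does DHCP on its side):

```
ethernet0.present = "TRUE"
ethernet0.connectionType = "nat"
ethernet0.addressType = "generated"
```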
- It seems like vmware needs to be reconfigured whenever a new kernel is installed. Not too big of a deal, but something to keep in mind.
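A quick way to catch this before VMs fail to start: check whether the running kernel has a vmmon module built for it. This is a sketch of our own (it assumes vmware-config.pl installs its modules under /lib/modules/&lt;version&gt;/misc, which is where they ended up here; the function name is ours):

```shell
# Sketch: does the *running* kernel have VMware's vmmon module built for it?
# Assumption: vmware-config.pl installs modules under /lib/modules/$(uname -r)/misc.
needs_reconfig() {
  # $1 (optional): modules dir to check; defaults to the running kernel's misc dir
  dir="${1:-/lib/modules/$(uname -r)/misc}"
  ! ls "$dir"/vmmon.* >/dev/null 2>&1
}

if needs_reconfig; then
  echo "run /usr/bin/vmware-config.pl before starting VMs"
fi
```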
- I ran into the labelling problem that we encountered with Xen: Since GRUB was installed on TestDisk, the kernel parameter "root=LABEL=/" ended up pointing to TestDisk's root when the real new benfranklin booted. As before, the solution was to change the boot parameter to "root=/dev/sda2", where real new benfranklin's root actually is.
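Concretely, the fix is one edit in grub.conf. The fragment below is a sketch with example values (the kernel/initrd versions are placeholders; /dev/sda2 is where real new benfranklin's root lives, per the note above):

```
# /boot/grub/grub.conf -- change the kernel line's root= from the ambiguous
# label to the explicit device (example version strings):
title Red Hat Enterprise Linux Server
    root (hd0,0)
    kernel /vmlinuz-2.6.18-8.el5 ro root=/dev/sda2    # was: root=LABEL=/
    initrd /initrd-2.6.18-8.el5.img
```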
- Runs on any standard x86 hardware
- Supports 64-bit guest operating systems, including Windows, Linux, and Solaris
- Supports two-processor Virtual SMP, enabling a single virtual machine to span multiple physical processors ("experimental")
- Runs on a wider variety of Linux and Windows host and guest operating systems than any server virtualization product on the market
- Captures entire state of a virtual machine and rolls back at any time with the click of a single button
- Installs like an application, with quick and easy, wizard-driven installation
- Quick and easy, wizard-driven virtual machine creation
- Supports Intel Virtualization Technology
- Protects investment with an easy upgrade path to VMware Infrastructure
I've set up two virtual machines on new benfranklin: One is on a virtual disk on the "real" machine's hard drive (named "TestFile"), and one is on its own hard drive (named "TestDisk"). Otherwise, they're set up with the same stats: 1024MB RAM, NAT ethernet, 2 processors. Root access is the usual scheme.
- Other than being a little slow to read from CD, the installation of RHEL5 to TestFile seems as snappy as a physical installation. Boots/reboots can be very sluggish, though.
- These seem to cover the main uses of our servers:
- CPU/RAM: Compile something big, and compare nonvirtual, virtual file, and virtual disk timings.
- Network: Use ttcp to compare nonvirtual, virtual file, and virtual disk transfer rates.
- Hard Disk: Use hdparm
- This page should be helpful: . Now we won't have to write any custom tests. In that case, ttcp seems like the simplest for network tests (I had to set up the RPMforge repo (à la pepper's) to get yum to install it), and maybe one of those NASA ones to test computing performance.
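The table below reports averages, so here's a sketch of how the per-run figures can be averaged (our own wrapper, not part of hdparm; the function name is ours, and it just scrapes the "= X MB/sec" figure hdparm prints):

```shell
# Average the MB/sec figure from several hdparm runs piped in on stdin.
# hdparm prints lines like:
#   /dev/sda: Timing buffered disk reads: 270 MB in 3.00 seconds = 90.00 MB/sec
avg_mbs() {
  awk '/MB\/sec/ { sum += $(NF-1); n++ }
       END { if (n) printf "%.1f MB/sec over %d runs\n", sum/n, n }'
}

# Usage on the real box (run as root):
#   for i in 1 2 3; do hdparm -t /dev/sda; done | avg_mbs
```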
These are averages from several runs.
|Machine||ttcp †||hdparm -t||hdparm -T||compilation ‡||compilation 2 ††|
|Real||71788.49 KB/s||90 MB/s||5070 MB/s||4m||4m|
|TestFile||32901.71 KB/s||37 MB/s||2677 MB/s||11m||9m|
|TestDisk||31386.37 KB/s||37 MB/s||6091 MB/s||11m||11m|
† - From each machine to blackbody over the UNH network.
‡ - Compilation of linux-126.96.36.199: default "make menuconfig" configuration, built with "make bzImage"; "real" time reported.
†† - Same as above, but with available RAM at the maximum recommended allotment.
I've posted the full results because the 0 on Execl Throughput seems to give a final score of zero (it mentions a buffer overrun and segfaults, so that's probably why). Roentgen's results are also posted, to provide a comparison against a machine we're likely to virtualize.
|Test||Real||TestFile||TestDisk||Roentgen|
|Arithmetic Test (type = double)||314.1||194.3||113.8||155.3|
|Dhrystone 2 using register variables||1408.6||874.0||787.2||265.2|
|File Copy 1024 bufsize 2000 maxblocks||1211.4||814.8||750.2||183.2|
|File Copy 256 bufsize 500 maxblocks||906.0||622.9||584.2||159.9|
|File Copy 4096 bufsize 8000 maxblocks||1626.3||988.8||967.1||230.0|
|Shell Scripts (8 Concurrent)||3875.0||169.5||167.7||544.2|
|System Call Overhead||561.4||332.8||322.7||423.5|
Here's a Cacti appliance. We should play with this. I've got it working from hobo's IP (188.8.131.52 using bridged networking instead of NAT, password scheme "ju"), but it's listing einstein as "Unknown" status. I've tried editing /etc/snmp/snmpd.conf to allow SNMP access from that IP, but it didn't change anything. I also can't get it to work at all from a farm IP, such as 10.0.0.237 (despite real new benfranklin's ability to use farm IPs). It seems to be a problem on einstein's end; I can do an snmpwalk from okra to einstein.farm.physics.unh.edu, but not from okra to einstein.unh.edu. Firewall-related, maybe? It'd be easier if we could get the box to run from a farm IP.
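For the record, this is roughly what the snmpd.conf edit on einstein was aiming at; a sketch using net-snmp's VACM directives (the community string, group/view names, and source IP here are examples, not what's deployed):

```
# /etc/snmp/snmpd.conf on einstein -- allow read-only SNMP from the Cacti box
# (example values; hobo's IP from the note above)
com2sec cactihost  184.108.40.206   public
group   cactigrp   v2c             cactihost
view    all        included        .1
access  cactigrp   ""  any  noauth  exact  all  none  none
```

If this still doesn't answer, the firewall theory fits: check that iptables on einstein accepts udp/161 from that source before blaming snmpd.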
It does seem to work, though. I've got it graphing blackbody, so the issue is probably related to einstein's firewall.
While slower than a physical machine, VMware seems fast enough for many of our services. VMware's main weakness seems to be disk access. So something like einstein, which provides the disk-intensive service of home directories, should remain physical, while things like cacti, splunk, and other little things should be virtualized, possibly on einstein.
To turn a physical machine into a virtual one, use Converter, on Windows. We're slightly screwed for this part.
NOTE: Experimental support only is available for Linux-based physical-to-virtual machine conversions using the VMware Converter BootCD (cold cloning) if the source physical machine has SCSI disks.