Gourd
Gourd is a server with two quad-core CPUs in a 2U rackmount chassis, put together nicely by Microway. It arrived at UNH on 11/24/2009. The system has an Areca RAID card and an IPMI card, each with its own Ethernet port. The motherboard is from Super Micro.
This is the page for the new Gourd Hardware. The old Gourd is described here.
Gourd now hosts Einstein as a VM. The previous Einstein hardware is described here.
Hardware Details
Details need to be put here!
Network Settings
- IP address UNH: 132.177.88.75 (eth1)
- IP address Farm: 10.0.0.252 (eth0)
- IP address RAID: 10.0.0.152
- IP address IPMI: 10.0.0.151
npghome Network Alias
Home folders are served over an aliased network interface associated with the npghome hostname. The network scripts that manage these interfaces are called ifcfg-npghomefarm and ifcfg-npghomeunh.
- IP address npghome Farm: 10.0.0.240 (eth0:1)
- IP address npghome UNH: 132.177.91.210 (eth1:1)
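For reference, a minimal sketch of what one of these alias scripts might contain, using the UNH-side address above; the NETMASK value is an assumption and should be taken from the primary eth1 configuration:

# /etc/sysconfig/network-scripts/ifcfg-npghomeunh (sketch)
# Brings up the npghome alias on the UNH-facing interface.
DEVICE=eth1:1
IPADDR=132.177.91.210
NETMASK=255.255.255.0   # assumption - match the netmask of eth1
ONBOOT=yes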
Software and Services
This section contains details about the services and software on Gourd and their configurations. Gourd's configuration files can be found at Gourd Configuration Files.
IPTables
Gourd uses the standard NPG iptables firewall. Gourd allows ssh, svn, icmp, portmap and nfs.
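As a rough illustration only (not a copy of the actual NPG firewall script), rules allowing those services might look like the following; the port numbers are the standard ones and are assumptions about the real ruleset:

# Sketch only - the real rules live in the standard NPG iptables configuration.
iptables -A INPUT -p tcp --dport 22   -j ACCEPT   # ssh
iptables -A INPUT -p tcp --dport 3690 -j ACCEPT   # svn (svnserve)
iptables -A INPUT -p icmp             -j ACCEPT   # icmp
iptables -A INPUT -p tcp --dport 111  -j ACCEPT   # portmap
iptables -A INPUT -p udp --dport 111  -j ACCEPT   # portmap
iptables -A INPUT -p tcp --dport 2049 -j ACCEPT   # nfs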
NFS
Gourd serves two volumes over NFS:
Home folders: /home on Gourd contains home directories for all NPG users. The NFS share is accessible to all hosts in the servers, npg_clients and dept_clients netgroups, and to all hosts on the 10.0.0.0/24 network (the server room backend).
Mail: To reduce the size of the Einstein VM, the /mail directory on Gourd stores mail for all NPG users. The NFS share is accessible only to Einstein, where it is mounted at /var/spool/mail.
/etc/exports
# Share home folders (copied from old Einstein)
/home \
    @servers(rw,sync) \
    @npg_clients(rw,sync) \
    @dept_clients(rw,sync) \
    10.0.0.0/24(rw,no_root_squash,sync)

# Share /mail with Einstein
/mail \
    132.177.88.52(rw,sync) \
    10.0.0.248(rw,no_root_squash,sync)
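To check what Gourd is exporting from a client machine:

# List the volumes exported by Gourd
showmount -e gourd.unh.edu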
Subversion
Gourd hosts Maurik's subversion code repository at svn://gourd.unh.edu/gravity. The repository is stored in /home/svn/gravity. The subversion service runs under xinetd. Its configuration is located in /etc/xinetd.d/svn
/etc/xinetd.d/svn
service svn
{
    port        = 3690
    socket_type = stream
    protocol    = tcp
    wait        = no
    user        = svn
    server      = /usr/bin/svnserve
    server_args = -i -r /home/svn
    disable     = no
}
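A checkout from this repository looks roughly like:

# Check out the gravity repository served by svnserve on Gourd
svn checkout svn://gourd.unh.edu/gravity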
VMWare
Gourd runs VMware Server version 2.0.2 and acts as our primary virtualization server. The web interface is accessible at https://gourd.unh.edu:8333/, or at localhost:8222 if you are logged in on Gourd or port forwarding from Gourd over SSH.
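For example, to reach the web interface from a desktop over an SSH tunnel (the username is a placeholder):

# Forward local port 8222 to the VMware Server web UI on Gourd,
# then browse to http://localhost:8222/
ssh -L 8222:localhost:8222 username@gourd.unh.edu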
Guest VMs on Gourd
Disks and Raid Configuration
This is the post-migration Gourd disk setup (as of 03/01/10).
Disks and Raid configuration
Drive Bay | Disk Size | Raid Set | Raid level |
---|---|---|---|
1 | 750 GB | System/Scratch | Raid1 + Raid0 |
2 | 750 GB | ||
3 | 750 GB | Software RAID | Pass Through |
4 | 750 GB | Software RAID | Pass Through |
5 | Empty | None | None |
6 | Empty | None | None |
7 | 750 GB | Hot Swap | None |
8 | 750 GB | Hot Swap | None |
Volume Set and Partition configuration
Raid set | Volume set | Volume size | Partitions |
---|---|---|---|
Set 1 | System(Raid1) | 250 GB | System: (/, /boot, /var, /tmp, /swap, /usr, /data ) |
Set 2 | Scratch(Raid0) | 1000GB | Scratch space (/scratch) |
/dev/md0 | sdc1 & sdd1 | 500 GB | Home Dirs: /home |
/dev/md1 | sdc2 & sdd2 | 100 GB | Mail: /mail |
/dev/md2 | sdc3 & sdd3 | 150 GB | Virtual Machines: /vmware |
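The state of the software RAID mirrors can be checked with the usual md tools, for example:

# Quick overview of all software RAID devices
cat /proc/mdstat

# Detailed status of the /home mirror (md1 and md2 are analogous)
mdadm --detail /dev/md0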
Pre-Migration Setup Notes
This section contains notes and information from before and during the migration process. It's here mainly for reference. Once this page has been fully updated with the current Gourd setup information, these notes should either be moved to another page or removed.
Important things to remember before this system takes on the identity of Einstein
- The SSH host keys (fingerprint) of the old Einstein need to be imported (see the sketch after this list).
- Obviously, all important data needs to be moved: Home Directories, Mail, DNS records, ... (what else?)
- Fully test functionality before switching!
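A minimal sketch of carrying over the old host keys, assuming the standard OpenSSH key locations and that root SSH between the two machines is possible; the hostname and paths are illustrative:

# Copy the old Einstein's SSH host keys so clients do not see a changed fingerprint
scp 'root@einstein.unh.edu:/etc/ssh/ssh_host_*' /etc/ssh/
service sshd restart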
Configurations Needed
- RAIDs need to be set up on the Areca card.
- /home, /var/spool/mail and the virtual machines will be stored on software RAID, because Areca RAID members cannot be read without an Areca card.
- Need to copy the system drive from the pass-through disk to a Raid1 mirror.
- Mail system needs to be set up.
- Webmail needs to be set up. Uses Apache?
- DNS needs to be set up.
- Backup needs to be made to work.
- Backups work. Copied rsync.backup from Taro for now; this needs to be changed to include the home directories after the changeover from Einstein.
- rhn_config - I tried this but our subscriptions seem messed up. (Send message to customer support 11/25/09)
- Denyhosts needs to be setup.
- Appears to be running as of 1/05. Should it be sending e-mail notifications like Einstein/Endeavour?
- NFS needs to be set up.
- Home folders will be independent of any particular system. Gourd will normally serve the home folders via an aliased network interface. The automount configuration on each machine will need to change so that /net/home refers to /home on npghome.unh.edu (132.177.91.210); see the sketch after this list. The alias can be brought up on a secondary machine if Gourd needs to go down, and the switch should be transparent to users.
- Need to create a share to provide the mail spool to Einstein. The LDAP database is small enough that it may be easier to simply store it on the Einstein VM.
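A sketch of what the corresponding automount entry might look like; the map file name (/etc/auto.npg) and the mount options are illustrative, not the actual configuration copied from Taro:

# /etc/auto.master (relevant line) - mount points under /net come from this map
/net    /etc/auto.npg

# /etc/auto.npg - the "home" key makes /net/home point at /home on the npghome alias
home    -rw,hard,intr    npghome.unh.edu:/home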
Initialization
The server arrived on 11/24/2009 and was unpacked, placed in the rack, and booted on 11/25/2009.
Initial configuration steps are logged here:
- Initial host name is gourd (gourd.unh.edu) with eth0 at 10.0.0.252 and eth0.2 (VLAN) at 132.177.88.75
- The ARECA card is set at 10.0.0.152. The password is changed to the standard ARECA card password; the user is still ADMIN.
- The IPMI card was configured using the SuperMicro ipmicfg utility. The network address is set to 10.0.0.151. Access was verified with IPMIView from Taro. The grub.conf and inittab lines were changed so that SOL (serial over LAN) console access is possible at 19200 baud (see the sketch after this list).
- The LDAP configuration is copied from Taro. This means Gourd is currently an LDAP client and needs to be changed to an LDAP server before production. You can log in as yourself.
- The autofs configuration is copied from Taro. The /net/home and /net/data directories work.
- The sudoers file is copied from Taro, but it did not appear to work - REASON: pam.d/system-auth
- Added "auth sufficient pam_ldap.so use_first_pass" to /etc/pam.d/system-auth - now sudo works correctly.
Proposed RAID Configuration
Considering that "large storage" is both dangerous and inflexible, and we really don't want a single large volume for /home or /var/spool/mail, the following configuration may actually be ideal. We use only RAID1 for the main storage spaces, so that there is always the option of breaking the RAID and using one of the disks in another server for near-instant failover. This needs to be tested for Areca RAID1.

We also need to keep the Areca terminology in mind: the card deals with "physical drives", "Raid Sets" and "Volume Sets". The individual physical drives are grouped into a "Raid Set". This Raid Set is then partitioned into "Volume Sets", and the Volume Sets are exposed to the operating system as disks. These disks can then (if you insist, as you do for the system) be partitioned by the operating system using parted or LVM into partitions, which hold the filesystems.
We only need 4 of the drive bays to meet our core needs. The other 4 drive bays can hold additional storage for less critical data, extra VMs, a hot spare, and an empty bay. The empty bay could be filled with a 1 TB drive and configured like a "Time Machine" that automatically backs up /home and the VMs, so that this system no longer depends on Lentil for its core backup. (Just an idea for the future.)
Disks and Raid configuration
Drive Bay | Disk Size | Raid Set | Raid level |
---|---|---|---|
1 | 250 GB | Set 1 | Raid1 |
2 | 250 GB | Set 1 | Raid1 |
3 | 750 GB | Set 2 | Raid1 |
4 | 750 GB | Set 2 | Raid1 |
5 | 750 GB | Set 3 | Raid1 |
6 | 750 GB | Set 3 | Raid1 |
7 | 750 GB | Hot Swap | None |
8 | Empty | None | None |
Volume Set and Partition configuration
Raid set | Volume set | Volume size | Partitions |
---|---|---|---|
Set 1 | Volume 1 | 250 GB | System: (/, /boot, /var, /tmp, /swap, /usr, /data ) |
Set 2 | Volume 2 | 500 GB | Home Dirs: /home |
Set 2 | Volume 3 | 100 GB | Var: Mail and LDAP |
Set 2 | Volume 4 | 150 GB | Virtual Machines: Einstein/Roentgen/Corn |
Set 3 | Volume 5 | 250 GB | Additional VM space |
Set 3 | Volume 6 | 500 GB | Additional Data Space |
RAID Configuration Notes
- Copied the 250GB system drive pass-thru disk to a 250GB RAID 1 volume on two 750GB disks (Slots 1 and 2)
- Something may have been wrong with the initial copy of the system. It booted from the RAID a couple of times but didn't come back up on reboot this weekend. I am attempting the copy again from the original unmodified system drive using ddrescue (see the sketch after this list). -- Adam 1/11/10
- Discovered that the problem wasn't the copy, it was that the BIOS allows you to choose the drive to boot from by the SCSI ID/LUN, and had somehow gotten set to boot from 0/1 (/dev/sdb - the empty scratch partition) instead of 0/0 which had the system on it. I can still boot the original system drive without issue. -- Adam 1/12/10
- Remaining 500GB on each drive spanned to a 1TB RAID 0, mounted on /Scratch
- Two 750GB disks as pass-thru, set up as software RAID (Slots 3 and 4)
- 500GB RAID 1 for home folders (/dev/md0), temporarily mounted at /mnt/newhome; needs to be moved to /home during migration
- 100GB RAID 1 for mail (/dev/md1) mounted in /Mail
- 150GB RAID 1 for virtual machines (/dev/md2) mounted in /VMWare and added as a local datastore in VMWare
- Currently two 750GB drives in Slots 5 and 6 for testing
- Two 750GB drives in Slots 7 and 8 as hot spares
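For reference, a drive-to-drive copy with ddrescue looks roughly like the following; the device names and log file path are placeholders, not the ones actually used:

# Clone the original system drive onto the new RAID 1 volume,
# keeping a log so the copy can be resumed if interrupted.
ddrescue -f /dev/sdX /dev/sdY /root/ddrescue-system.log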