Gourd

From Nuclear Physics Group Documentation Pages

Revision as of 17:57, 5 January 2010

This is the new Gourd hardware. The old one is described at old gourd.

Note: This is the page for the NEW GOURD, an 8-core server from Microway, which will serve EINSTEIN as a VM.

The previous Einstein hardware is described on its old page at old einstein.

New Microway Server

The new einstein is a dual quad-core CPU server in a 2U rackmount chassis, put together nicely by Microway. It arrived at UNH on 11/24/2009. The system has an Areca RAID card with an ethernet port and an IPMI card with an ethernet port. The motherboard is from Super Micro. Details need to be put here!

  • IP address UNH: 132.177.88.75 (currently VLAN)
  • IP address Farm: 10.0.0.252
  • IP address RAID: 10.0.0.152
  • IP address IPMI: 10.0.0.151

Setting Up the Server

Important things to remember before this system takes on the identity of Einstein

  1. The ssh host keys (and thus the fingerprint) of the old einstein need to be imported (see the sketch after this list).
  2. Obviously, all important data needs to be moved: Home Directories, Mail, DNS records, ... (what else?)
  3. Fully test functionality before switching!
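
For item 1, a minimal sketch of carrying the host identity over, assuming RHEL-style host key locations on both machines (the scp source path is illustrative):

  # On gourd: copy old einstein's ssh host keys so clients see the same fingerprint
  scp 'root@einstein:/etc/ssh/ssh_host_*' /etc/ssh/
  chmod 600 /etc/ssh/ssh_host_*key
  chmod 644 /etc/ssh/ssh_host_*key.pub
  service sshd restart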

Configurations Needed

  1. RAIDs need to be set up on the Areca card.
    • /home, /var/spool/mail, and the virtual machines will be stored on software RAID, because Areca RAID members cannot be read without an Areca controller (Areca knowledgebase, http://faq.areca.com.tw/index.php?option=com_quickfaq&view=items&cid=2:Firmware/BIOS&id=511:Q10050910&Itemid=1: "it is not possible to access a raidset member drive without controller").
    • Need to copy the system drive from the passthrough disk to a RAID1 mirror.
  2. Mail system needs to be set up.
  3. Webmail needs to be set up. Uses Apache?
  4. DNS needs to be set up.
  5. Backup needs to be made to work.
    • Backups work. Copied rsync.backup from taro for now; it needs to be changed to include home directories after the changeover from Einstein.
  6. rhn_config - I tried this but our subscriptions seem messed up. (Sent message to customer support 11/25/09.)
  7. Denyhosts needs to be set up.
    • Appears to be running as of 1/05. Should it be sending e-mail notifications like Einstein/Endeavour?
  8. NFS needs to be set up (see the sketches after this list).
    • Home folders will be independent of a particular system. Gourd will normally serve the home folders via an aliased network interface. The automount configuration on each machine will need to change so that /net/home refers to /home on npghome.unh.edu (132.177.91.210). The alias can be brought up on a secondary machine if gourd needs to go down, and the switch should be transparent to users.
    • Need to create a share that provides the mail spool to Einstein. The LDAP database is small enough that it may be easier to just store it on the Einstein VM.
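
For item 8, a minimal sketch of the aliased interface and the client-side automount change, assuming RHEL-style network scripts and an autofs direct map (file names, netmask, and map layout are assumptions):

  # /etc/sysconfig/network-scripts/ifcfg-eth0:0 on whichever host serves /home
  DEVICE=eth0:0
  IPADDR=132.177.91.210
  NETMASK=255.255.255.0
  ONBOOT=yes

  # Client side: point /net/home at the npghome alias via a direct map
  # /etc/auto.master
  /-         /etc/auto.direct
  # /etc/auto.direct
  /net/home  -rw,hard,intr  npghome.unh.edu:/home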
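
Likewise, a sketch of the export that would hand the mail spool to the Einstein VM only; the VM's farm address below is hypothetical:

  # /etc/exports on gourd
  /var/spool/mail  10.0.0.248(rw,sync,no_root_squash)

  # activate without restarting NFS
  exportfs -ra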

Initialization

The server arrived on 11/24/2009; it was unpacked, placed in the rack, and booted on 11/25/2009.

Initial configuration steps are logged here:

  • Initial host name is gourd (gourd.unh.edu) with eth0 at 10.0.0.252 and eth0.2 (VLAN) at 132.177.88.75
  • The ARECA card is set at 10.0.0.152. The password is changed to the standard ARECA card password, user is still ADMIN.
  • The IPMI card was configured using the SuperMicro ipmicfg utility. The network address is set to 10.0.0.151. Access is verified by IPMIView from Taro. The grub.conf and inittab lines are changed so that SOL is possible at 19200 baud (see the sketch after this list).
  • The LDAP configuration is copied from Taro. This means it is currently in LDAP client mode and needs to be changed to an LDAP server before production. You can log in as yourself.
  • The autofs configuration is copied from Taro. The /net/home and /net/data directories work.
  • The sudoers file is copied from Taro, but it did not appear to work - REASON: pam.d/system-auth
  • Added "auth sufficient pam_ldap.so use_first_pass" to /etc/pam.d/system-auth - now sudo works correctly (see the stanza after this list).

Disks and Raid Configuration

Current disk usage estimates for Einstein:

Mail (/var/spool):                        approx. 30 GB
Home Folders (/home):                     approx. 122 GB
Virtual Machines (/data/VMWare on Taro):  approx. 70 GB
LDAP Database (/var/lib/ldap):            approx. 91 MB


Disk sizes in the following tables are based roughly on these current usage estimates, with plenty of extra room to grow. They can be adjusted as appropriate to better suit our needs and to make these designs more cost-effective.


Proposed Configuration

Considering that "large storage" is both dangerous and inflexible, and we really don't want one large volume for /home or /var/spool/mail, the following configuration may actually be ideal. We use only RAID1 for the main storage spaces, so that there is always the option of breaking the RAID and using one of the disks in another server for near-instant failover. This needs to be tested for Areca RAID1. We also need to keep the Areca terminology straight: the card has "physical drives", "Raid Sets", and "Volume Sets". The individual physical drives are grouped into a "Raid Set". This Raid Set is then partitioned into "Volume Sets", and the Volume Sets are exposed to the operating system as disks. These disks can then (if you insist, as you do for the system disk) be partitioned by the operating system using parted or LVM into partitions, which hold the filesystems.
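
Since /home, /var/spool/mail, and the VMs are headed for software RAID (see Configurations Needed above), a minimal sketch of building, and for failover breaking, a Linux software RAID1 mirror with mdadm; the member devices /dev/sdb1 and /dev/sdc1 are hypothetical:

  # Build the mirror and put a filesystem on it
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
  mkfs.ext3 /dev/md0
  mount /dev/md0 /home

  # Failover drill: fail one member and pull it for use in another server
  mdadm /dev/md0 --fail /dev/sdc1 --remove /dev/sdc1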

We only need 4 of the drive bays to meet our core needs. The other 4 bays can hold additional storage for less critical data and extra VMs, a hot spare, and one empty bay, which could later be filled with a 1 TB drive and configured like a "Time Machine" that automatically backs up /home and the VMs, so that this system no longer depends on Lentil for its core backup. (Just an idea for the future.)


Disks and Raid configuration

Drive Bay   Disk Size   Raid Set    Raid Level
1           250 GB      Set 1       RAID1
2           250 GB      Set 1       RAID1
3           750 GB      Set 2       RAID1
4           750 GB      Set 2       RAID1
5           750 GB      Set 3       RAID1
6           750 GB      Set 3       RAID1
7           750 GB      Hot Spare   None
8           None        (empty)     None


Volume Set and Partition configuration

Raid Set   Volume Set   Volume Size   Partitions
Set 1      Volume 1     250 GB        System: /, /boot, /var, /tmp, swap, /usr, /data
Set 2      Volume 2     500 GB        Home dirs: /home
Set 2      Volume 3     100 GB        Var: mail and LDAP
Set 2      Volume 4     150 GB        Virtual machines: Einstein/Roentgen/Corn
Set 3      Volume 5     250 GB        Additional VM space
Set 3      Volume 6     500 GB        Additional data space