From Nuclear Physics Group Documentation Pages

Taro is a data/computation server. Thinkmate serial number SN-826407.

[[Image:taro.jpg|thumb|200px|Taro: A large-leaved plant grown throughout the tropics for its edible starchy roots]]
 
= Hardware Details =

* Purchased in Jan 2009 from Thinkmate.
 
* Quad-Core Intel® Xeon® E5472 3.00GHz 1600FSB 12MB Cache (80W)
* [http://www.supermicro.com/products/motherboard/Xeon1333/5400/X7DWA-N.cfm Supermicro X7DWA-N - EATX - Intel® 5400 Chipset]
* 4 x 2GB PC2-6400 667MHz FB-DIMM
* Chenbro SR107 EATX Chassis - No PS – Black + Rack Mount Conversion Kit
* 2 x Chenbro SR107 Black 4-Bay SATA Hotswap
* PC Power and Cooling Turbo-Cool® 860 - SLI Ready
 
* 500GB SATA 7200RPM - 3.5" - Seagate Barracuda® 7200.11
* Samsung 22x DVD+/-RW Dual Layer (SATA)
* MSI nVidia GeForce N280GTX OC 1GB GDDR3 PCI Express 2.0 (2xDVI) (Removed?)
* Areca ARC-1231 12-channel RAID card at address 10.0.0.97
  
[[Media:SuperMicro_MNL-0945.pdf | Local copy of the Motherboard manual]]
  
 
= Network Configuration =

Taro's network configuration contains bridge interfaces to support KVM virtual machines.

* IP address Farm: 10.0.0.247 (eth1/farmbr)
* IP address UNH: 132.177.88.86 (eth2/unhbr)
  
 
Hostnames: <code>taro.unh.edu</code>, <code>taro.farm.physics.unh.edu</code>
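Since the Farm and UNH addresses live on bridges, the underlying configuration on a CentOS/RHEL-style system is typically a pair of ifcfg files per bridge. The sketch below is an assumption based on the interface names above, not a copy of Taro's actual files:

```
# /etc/sysconfig/network-scripts/ifcfg-farmbr (sketch; values taken from the list above)
DEVICE=farmbr
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=static
IPADDR=10.0.0.247
NETMASK=255.255.255.0

# /etc/sysconfig/network-scripts/ifcfg-eth1 (physical port enslaved to the bridge)
DEVICE=eth1
ONBOOT=yes
BRIDGE=farmbr
```

A KVM guest's virtual interface is then attached to <code>farmbr</code> or <code>unhbr</code> in its domain definition.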
 
= Software and Services =

Taro is one of the few systems that is more accessible from off-campus, so it requires additional monitoring to make sure everything is working and security has not been compromised. Taro stores a considerable amount of data on its RAID.

== Globus ==

Globus is a system for transferring data to/from JLab. See more on the [[globus]] page.
  
 
== IPTables ==

Taro uses the standard NPG iptables firewall. Taro allows ssh, icmp, portmap and nfs connections.

== NFS Shares ==

Taro serves its /data volume over NFS. It can be accessed from any system via automount, at either /net/data/taro or /net/taro/data.

/etc/exports:
 /data   @servers(rw,sync) @npg_clients(rw,sync) \
         10.0.0.0/24(rw,no_root_squash,sync)
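For quick reference, the fields in that export entry mean the following (an annotated copy, assuming standard NFS export option semantics):

```
# /etc/exports (annotated copy of the entry above)
#  @servers, @npg_clients : netgroups whose member hosts may mount /data
#  rw             : read-write access
#  sync           : the server commits writes to disk before replying
#  no_root_squash : root on a 10.0.0.0/24 client is not remapped to nobody
/data   @servers(rw,sync) @npg_clients(rw,sync) \
        10.0.0.0/24(rw,no_root_squash,sync)
```

After editing /etc/exports, <code>exportfs -ra</code> re-reads the table without restarting the NFS server.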
  
=== Drive configuration ===

; RAID
* The RAID is hardware based, with an Areca card at IP 10.0.0.97.
* The current setup is RAID-5 across 6 drives, with a 7th drive as a hot spare.
* There is a single volume on the RAID, LUN 0/0/0.
 
== Upgrade to CentOS 7 ==

# Boot from the USB stick into the installer.
## Choose one of the physical disks that were previously part of the software RAID to install the system on.
## Partition the drive; note that you have to make the installer erase the drive first.
## Install a minimal system. Set the root password.
# When the installation is done, reboot.
# Disable and mask NetworkManager.
# Set up the Farm ethernet port.
# Set up the UNH ethernet port.
# Update the system: run "yum update" and say yes to all the updates.
# Mount the old software RAID:
## yum install mdadm
## mdadm --detail --scan
## mdadm --assemble --scan
## mount /dev/md127 /mnt/olddisk
# Copy the old SSH keys to the new system:
## cd /etc/ssh ; (cd /mnt/olddisk/etc/ssh && tar czvf - .) | tar xzvf -
## systemctl restart sshd
# Copy the git user to the new machine:
## grep git: /mnt/olddisk/etc/passwd >> /etc/passwd
## grep git: /mnt/olddisk/etc/shadow >> /etc/shadow
## cd /home; (cd /mnt/olddisk/home && tar czvf - git ) | tar xzvf -
# Set up SSSD & LDAP:
## yum install -y openldap-clients sssd-ldap nss-pam-ldapd
## Copy the ldap dir from Gourd: rsync -ravH gourd:/etc/openldap .
## Copy sssd.conf from Gourd: scp gourd:/etc/sssd/sssd.conf .
## systemctl enable sssd
## systemctl start sssd
## authconfig --enablesssd --enablesssdauth --enableldap --enableldapauth --enablemkhomedir --ldapserver="ldaps://einstein ldaps://pepper" --ldapbasedn=dc=physics,dc=unh,dc=edu --enablelocauthorize --enableldaptls --update
# Set up automount:
## yum install autofs
## Copy auto.net and auto.master from Gourd.
# Set up IPtables:
## Copy iptables-npg from the old install to iptables.
## yum install iptables-services
## Copy the netgroup script: scp gourd:/usr/local/bin/netgroup2iptables.pl /usr/local/bin
## systemctl stop firewalld
## systemctl disable firewalld
## systemctl mask firewalld
## systemctl start iptables
## systemctl enable iptables
## scp gourd:/etc/init.d/iptables-netgroups /etc/init.d/
## systemctl start iptables-netgroups
# Install Fail2ban:
## yum install -y epel-release
## yum install -y fail2ban whois
## systemctl enable fail2ban
## systemctl start fail2ban
## scp gourd:/etc/fail2ban/filter.d/fail2ban.conf /etc/fail2ban/filter.d
## scp gourd:/etc/fail2ban/jail.local /etc/fail2ban/
## systemctl restart fail2ban
# Set up the NFS export:
## Copy the old /etc/exports.
## mkdir /data
## Edit /etc/fstab to add /data.
## mount /data
## Enable and start the NFS services:
### systemctl enable rpcbind
### systemctl enable nfs-server
### systemctl enable nfs-lock
### systemctl enable nfs-idmap
### systemctl start rpcbind
### systemctl start nfs-server
### systemctl start nfs-lock
### systemctl start nfs-idmap
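The iptables step above relies on netgroup2iptables.pl copied from Gourd, whose contents are not shown here. As a rough sketch of the idea (expanding a list of trusted hosts into per-host ACCEPT rules), under the assumption that the real script does the same from NIS netgroups; the host list below is a placeholder and the rules are printed rather than loaded:

```shell
#!/bin/sh
# Sketch only: the real netgroup2iptables.pl reads NIS netgroups;
# here the trusted hosts are hard-coded and rules are printed, not applied.
hosts="10.0.0.11 10.0.0.12 132.177.88.86"
for h in $hosts; do
    printf 'iptables -A INPUT -s %s -j ACCEPT\n' "$h"
done
```

Piping the output through <code>sh</code> (as root) would load the rules; the production script is the authoritative version.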
 
= ToDo =

* NFS export
* science packages
  
== Continue Upgrade ==

Latest revision as of 14:44, 8 August 2017
