Endeavour

Endeavour was purchased as part of a full cluster from Microway (quote MWYQ11029, ~$16,000 for the base system).
It came with 13 x2 Number Smasher nodes (the total was ~$90,000).
The system arrived at UNH in April 2009.

Notes on configuration status/changes and ToDo is at the bottom.

The Endeavour web server is active: Endeavour
It runs the Ganglia monitoring software: Endeavour Ganglia

The Endeavour RAID card is connected to:
The Endeavour switch is connected to:
Hardware temperature monitoring
Cacti statistics: roentgen.unh.edu/cacti

Upgrading Nodes

See: Upgrading the nodes

System Usage

This section explains some of the special uses of this system.

OpenPBS = Torque = Portable Batch System

PBS is a system for scheduling compute jobs onto nodes, a.k.a. "workload management software", that was first created by NASA in the '90s. We ran that early version on our farm back then. It is very sophisticated and thus not trivial to configure. Some things are already set up.

The company supporting the old open-source version is PBS Gridworks, which seems to be a division of "Altair". They haven't touched their free open version since 2001.
There are no manuals for OpenPBS from Altair, only for PBS Pro. To get to them, you need to create a username/password at the PBS Pro User Area; you can then get to the Documentation. Do not expect a one-to-one correspondence between the OpenPBS and PBS Pro versions (for example, you don't need a FLEX license for the open one).

The newer development of OpenPBS has been renamed Torque, which is what is installed on our systems. See Adaptive Computing (the company) and go to Torque Resource Manager; this includes documentation.

Documentation for TORQUE


Commands

pbsnodes: Gives a quick overview of all the known nodes, whether they are up, and if they are, what their status is. "pbsnodes -a" lists everything about all the nodes, "pbsnodes -l up" shows a list of the nodes that are up, and "pbsnodes -l down" does the same for the ones that are down.
xpbs: Graphical interface to PBS. Really old and probably not that useful.
xpbsmon: Graphical interface to monitor nodes. It gives a quick view of node status. Use Ganglia for more sophisticated node stats.
qsub: Command-line tool for submitting jobs to PBS.
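
A minimal sketch of checking node status and submitting a job with Torque's standard commands; the script name (test_job.sh) and the resource request line are illustrative assumptions, not our actual setup:

[user@endeavour ~]$ pbsnodes -l up                 # list the nodes that are currently up
[user@endeavour ~]$ cat test_job.sh
#!/bin/bash
#PBS -N test_job                                   # job name (illustrative)
#PBS -l nodes=1:ppn=1,walltime=00:10:00            # request 1 core for 10 minutes
cd $PBS_O_WORKDIR                                  # run from the submission directory
hostname
[user@endeavour ~]$ qsub test_job.sh               # prints the job id on success
[user@endeavour ~]$ qstat                          # show queued and running jobs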

qperf Command

qperf measures bandwidth and latency between two nodes. It can work over TCP/IP as well as the RDMA transports.

There are many more built-in tests (e.g. rc_bi_bw); see the man page for the full list.

On the first node, just run

[Tbow@node2 ~]$ qperf

On the second node, run this to test InfiniBand and Ethernet:

[Tbow@node3 ~]$ qperf -t 5 node2.farm.physics.unh.edu rc_bi_bw tcp_bw

Initial setup and Configuration

  • Set the UNH IP address (endeavour.unh.edu) on eth1. [done]
    • This made the system think of itself as "endeavour" rather than "master", causing PBS to get confused. The PBS configuration in /var/spool/pbs was adjusted, and the Maui scheduler configuration in /usr/local/maui/maui.cfg was modified. [done]
  • I switched the IP address on eth0 (since the old address is the usual gateway address, and we want to bridge the two backend networks). [done]
    • This requires ALL "hosts" files on the nodes to be modified. [done, all nodes but 25]
    • Also, the /root/.shosts, /root/.rhosts, /etc/ssh/ssh_known_hosts, and /etc/ssh/shosts.equiv files need to be copied from node2 to node*. [done, all nodes but 25]
    • The file /var/spool/pbs/server_name needs to be updated as well. [done, all nodes but 25]
    • The /etc/pam.d/system-auth-ac needs to include the ldap module. (NOT DONE; only done on nodes 2 and 3)
  • Set the root password to the standard scheme. [done, master only]
  • Set up the LDAP client side. [done, master only]
  • Recompiled PBS to include the xpbs and xpbsmon commands. [done]
  • Configured and started the iptables firewall [done,master only]
  • Integrated the backend network with the farm backend network (bridged the network switches) [done]
  • Set up automount, standard /net/data and /net/home. [done, master only]
    • TODO: We need a new rule that resolves /net/data/node2 for the disk in node2, etc. The nodes need to export their /scratch partition; a sketch of this is given after this list. The other partitions may not be needed, since the "rcpf" command (a foreach with rcp) can copy files in batch.
  • The /etc/nodes file included the "master" node. This is too dangerous: it means that in a batch copy the file is also automatically copied back to the master, with potentially dangerous results.
  • To add users to the "Microway Ganglia control" part, add them to /etc/mcms.users. The password is the login password; LDAP is honored.
  • Set up and started SPLUNK; it runs its own server and forwards to pumpkin.
  • Added the endeavour disks /data1 and /data2 to the export table and the LDAP auto.data table, so ls /net/data/endeavour{1,2} works.
  • On node2 (so far) to get users and home directories integrated:
    • Copy /etc/ldap.conf /etc/openldap/* to node from endeavour.
    • Copy /etc/nsswitch.conf to node from endeavour.
    • restart autofs (/etc/init.d/autofs restart)
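
A minimal sketch of what the /scratch export and automount TODO above could look like, assuming a standard NFS export on each node and an entry in the LDAP auto.data table (or a local map); the network range, mount options, and key names are illustrative assumptions, not our actual configuration:

# On each node (e.g. node2), add the scratch partition to /etc/exports (backend network range is a placeholder):
/scratch    10.0.0.0/255.255.255.0(rw,no_root_squash,sync)
[root@node2 ~]# exportfs -ra                       # re-read and apply /etc/exports

# In the auto.data map, an entry like this would make /net/data/node2 resolve to node2's /scratch:
node2    -rw,hard,intr    node2.farm.physics.unh.edu:/scratch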


To Do

  1. Figure out the monitoring system, Ganglia, and other Microway goodies. [partially done]
  2. LDAP (client) on nodes?
  3. Test the InfiniBand and MPICH setup; a sketch of a simple test follows this list.
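
A minimal sketch of how the MPICH setup could be tested, assuming mpicc and mpiexec are on the path and that node2/node3 can reach each other; the hostfile flag differs between MPICH versions (newer mpiexec uses -f, older mpirun uses -machinefile):

[Tbow@node2 ~]$ cat > hello_mpi.c <<'EOF'
#include <mpi.h>
#include <stdio.h>
int main(int argc, char **argv) {
    int rank, size, len;
    char name[MPI_MAX_PROCESSOR_NAME];
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Get_processor_name(name, &len);
    printf("rank %d of %d on %s\n", rank, size, name);
    MPI_Finalize();
    return 0;
}
EOF
[Tbow@node2 ~]$ mpicc hello_mpi.c -o hello_mpi
[Tbow@node2 ~]$ printf "node2\nnode3\n" > hosts
[Tbow@node2 ~]$ mpiexec -f hosts -n 4 ./hello_mpi   # each rank should report the node it ran on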

Long Term To Do

Possible long-term tasks, if manpower is available.

The long-term goal is to have Endeavour run as an independent system if need be.

  1. Run a replica LDAP server on Endeavour.
  2. Run a replica named (DNS) server on Endeavour and Roentgen; a sketch is given below.
  3. Replicate home directories for selected users (this may be too tricky, really)? Otherwise, create a local copy for each user.
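
For the DNS replica, a minimal sketch of a slave zone stanza in named.conf on Endeavour; the zone name and master address are placeholders, not our actual configuration:

zone "farm.physics.unh.edu" {
    type slave;
    file "slaves/farm.physics.unh.edu.db";
    masters { 10.0.0.1; };    // placeholder: address of the primary name server
};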