Sysadmin Todo List

This is an unordered set of tasks. Detailed information on any of the tasks typically goes in related topics' pages, although usually not until the task has been filed under Completed.

Daily Check-off List

Each day when you come in, check the following (a rough check script is sketched after the list):

  1. Einstein (script):
    1. Up and running?
    2. Disks are at less than 90% full?
    3. Mail system OK? (SpamAssassin, amavisd, ...)
  2. Temperature OK?
  3. Systems up: Taro, Pepper, Pumpkin/Corn?
  4. Backups:
    1. Did backup succeed?
    2. Does Lentil need a new disk?
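
A rough sketch of what these checks could look like as a cron-able shell script. The hostnames and the 90% threshold come from the list above; the daemon names and everything else are assumptions, not the actual einstein check script:

  #!/bin/bash
  # Daily check sketch -- an outline only, not the production einstein script.

  # 1. Are the main systems up?
  for host in einstein taro pepper pumpkin corn; do
      ping -c 1 -W 2 "$host" > /dev/null 2>&1 || echo "WARNING: $host is not responding"
  done

  # 2. Any local filesystem at 90% or more?
  df -hP | awk 'NR > 1 && $5+0 >= 90 { print "WARNING: " $6 " is " $5 " full" }'

  # 3. Mail system OK? (crude check: are the daemons running? exact process
  #    names depend on how spamassassin/amavisd are installed)
  for daemon in spamd amavisd; do
      pgrep -x "$daemon" > /dev/null || echo "WARNING: $daemon is not running"
  done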

Important

Towards a stable setup

Here are some options:

  • Test VMware server (See VMWare Progress). Specifically, I would like to know:
    1. How easy is it to move a VM from one physical machine to another? (Can you simply move the disks?) Yes.
    2. Specifically, if you need to service some hardware, can you move the VM to other hardware with little downtime? (Clearly not for large disk arrays, like pumpkin, but that is storage, not hosts.) Given the portability of the disk images, the downtime is the time it takes to move the image around and start it up on another machine.
    3. Do we need a RedHat license for each VM, or do we only need a license for the host, as with Xen? It seems to consume a license per VM. Following this didn't work for the VMWare systems. The closest thing to an official word that I could find was this.
    4. VMware allows for "virtual appliances", but how good are these really? Are they fast enough?

Miscellaneous

  • Figure out the Mailman password!
  • We should look into what software is necessary on which machines, for disk space reasons. I'm thinking of Pepper in particular: do we really want OpenOffice data taking up an eighth of the root partition?
  • Set up authenticating print queue for wigner on einstein.
  • Gourd is giving smartd errors. Should we be concerned at all, since nobody uses it anymore?
  • Move systems into the new rack.
    • Zip tie the cables in place behind the rack.
  • Set up the wigner printer queue on einstein, as well as tracking supply usage, etc.
  • Set up the new benfranklin. ATI strikes again: maybe the proprietary drivers work better, but the current ones are so bad that FC9 dies from them.
  • Pull Maurik's old data off blackbody, and then make blackbody a real workstation again.
  • Bohr is slow with PDFs. Maybe it's time to put a newer distro on it?
  • Upgrade lepton.
  • Order replacement batteries for the UPS power strips. They look like HC 1221W batteries, made by CSB Battery Co.
  • Figure out if we can RMA pauli8's mobo, since its memory controller died.
  • Move pauli8 drives into another machine so that Heisenberg can access his data.
  • Get the rest of the paulis up. Looks like NIS is in the way on at least one of them; an update to LDAP will be necessary.
  • Fix some of the older workstations (hobo, ennui, etc.)
  • Get PXE boot network installs working. Gluon should be a good testbed, since it needs to be fixed anyway.
  • Monitoring: I would like to see the new temp-monitor integrated with Cacti, and some of Cacti's capabilities fixed up:
    • Tie it in with the sensors output from pepper and taro (and tomato/einstein), and set up sensors on corn/pumpkin.
    • Have an intelligent way of being warned when conditions are too hot, a drive has failed, or a system is down.
    • I'm starting to get the hang of pulling this sort of data via SNMP: I wrote a Perl script that pulls the temperature data from the environmental monitor, as well as some nice info from einstein. It's in Matt's home directory, under code/npgmon/. We SHOULD be able to integrate a rudimentary script like this into Cacti or Splunk, getting a bit closer to an all-in-one monitoring solution (a rough SNMP polling sketch follows this list).
  • Check into smartd monitoring (and processing its output) on Pepper, Taro, Corn/Pumpkin, Einstein, and Tomato (actually, all of the systems).
  • Learn how to use Cacti by trying it in a VM appliance.
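
The Perl script under code/npgmon/ is the real thing; the following is only a minimal shell sketch of the same poll-and-warn idea, to show how an SNMP temperature check could hang together. The monitor hostname, community string, OID, and threshold are all placeholders, not the values the environmental monitor actually uses:

  #!/bin/bash
  # Minimal SNMP temperature poll -- a sketch, not the npgmon script.
  # MONITOR, COMMUNITY, TEMP_OID and MAX_TEMP are placeholders; the real values
  # have to come from the environmental monitor's documentation/MIB.
  MONITOR="envmon.example.org"
  COMMUNITY="public"
  TEMP_OID="1.3.6.1.4.1.99999.1.1.0"   # hypothetical temperature OID
  MAX_TEMP=80                          # arbitrary warning threshold

  temp=$(snmpget -v2c -c "$COMMUNITY" -Ovq "$MONITOR" "$TEMP_OID")
  if [ -z "$temp" ]; then
      echo "WARNING: could not read temperature from $MONITOR"
  elif [ "${temp%%.*}" -ge "$MAX_TEMP" ]; then
      echo "WARNING: machine room temperature is $temp"
  fi

For Cacti, the same poll would be trimmed down to a data-input script that just prints the value; the warning/mailing logic would then live in a cron job or in whatever alerting Cacti/Splunk provide.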

Ongoing

Documentation

  • Maintain the Documentation of all systems!
    • Main function
    • Hardware
    • OS
    • Network
  • Continue homogenizing the configurations of the machines.

Maintenance

  • Check e-mails to root every morning
  • Check up on security [1]

On-the-Side

  • Learn how to use ssh-agent for task automation.
  • Backup stuff: We need exclude filters on the backups, and we need to plan and execute extensive tests before modifying the production backup program. Also, see if we can implement some sort of NFS user access to the backups.
    • I've set up both filters and read-only snapshot access to backups at home. It uses what essentially amounts to a bash-script version of the fancy Perl thing we use now, only far less sophisticated. However, the filtering uses a standard rsync exclude file (syntax in the man page) and the user access is fairly obvious NFS read-only hosting.
    • I am wondering if this is needed. The current scheme (i.e. the Perl script) handles excludes by having a .rsync-filter file in each of the directories whose contents you want excluded, and this has worked well; see ~maurik/tmp/.rsync-filter. The current script also takes care of some important issues, like incomplete backups.
    • Ah. So we need to get users to keep that .rsync-filter file reasonably up to date, and to get them to use data to hold things, not home. Also, I wasn't suggesting we get rid of the Perl script; I was saying that I've become familiar with a number of the things it does. [2] (A minimal example of the filter file, and the rsync option that enables it, follows this list.)
  • Continue purging NIS from ancient workstations and replacing it with files. The following remain:
    • pauli nodes -- Low priority!
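
For the backup excludes discussed above: the per-directory .rsync-filter files are presumably rsync's standard dir-merge mechanism, so a minimal example looks like the following. The patterns and paths are made up for illustration, not what ~maurik/tmp/.rsync-filter or the production backup actually contain:

  # Example .rsync-filter, placed in any directory whose contents should be
  # trimmed from the backup (lines starting with "-" exclude matching names):
  - *.root
  - scratch/
  - tmp/

On the backup side, rsync only needs the -F option (shorthand for --filter='dir-merge /.rsync-filter') for these per-directory files to be honored, e.g. something along the lines of:

  rsync -aHx -F --delete /home/ lentil:/backups/home/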

Waiting

  • That guy's computer has a BIOS checksum error. Flashing the BIOS to the newest version succeeds, but doesn't fix the problem. No obvious mobo damage either. What happened? Who was that guy, anyhow? (Silviu Covrig, probably.) The machine is gluon, according to him.
    • Waiting on ASUS tech support for warranty info. Aaron said it might be power-supply-related. Nope, definitely not: used a known-good PSU and still got the error, and reflashed the BIOS with it and still got the error.
    • Got an RMA, sent the board out on Wednesday; waiting on ASUS to send us a working one!
    • Called ASUS on 8/6; they said it's getting repaired right now. Woohoo! Got a notification that it shipped!
    • ...they didn't fix it. It still has the EXACT same error it had when we shipped it to them. What should we do about this? I'm going to call them up and have a talk, since the details on their shipment reveal that they sent us a different motherboard (different serial number and everything) but with the same problem.

Completed

  • Finish cleaning up.
  • Finish setting up Parity for the new admin.
  • Pepper had 108 MB free on /. I checked the usage and saw I could remove /var/spool/up2date to free nearly 400 MB.
  • Evaluate the hardware needs. Pumpkin, the new Einstein, Pepper, Taro, and Lentil all seem to be of sufficient quality and up to date. Do we need any other hardware? If so, what? Taro is now Tomato, and we have an awesome new Taro now. Everything now seems sufficient!
  • The new taro is set up, and has CUDA drivers installed.
  • Pumpkin can't run CUDA, because it has a xen kernel. The card in pumpkin will likely end up in the new taro.
  • The old taro is now accessible as tomato.
  • Updated SSH known hosts on all systems to reflect new taro/tomato.
  • Make sure lentil is properly backing up all systems. Check! Including new roentgen and the new marie!

Previous Months Completed

June 2007

July 2007

August 2007

September 2007

October 2007

NovDec 2007

January 2008

February 2008

March/April/May/June 2008 (I'm doing a great job keeping track of this, eh?)

July/Aug/Sep/Oct/Nov 2008 (It was the move, but still no excuse!)