Old Sysadmin Todo List
From Nuclear Physics Group Documentation Pages
This is an unordered set of tasks. Detailed information on any of the tasks typically goes in related topics' pages, although usually not until the task has been filed under Completed.
Daily Check-off List
Each day when you come in, check the following:
Monitoring
- Check the Cacti temperatures and other indicators: http://roentgen.unh.edu/cacti
- Check Splunk: [1] or localhost:8000 on pumpkin.
Verify the following:
- Einstein (script):
- Up and running?
- Disks are less than 90% full? (see the check sketch after this list)
- Mail system OK? (SpamAssassin, amavisd, ...)
- Temperature OK?
- Systems up: Taro, Pepper, Pumpkin?
- Backups:
- Did backup succeed?
- Does Lentil need a new disk?
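A minimal bash sketch of these morning checks is below. The hostnames come from the list above; the 90% threshold and the spamd/amavisd process names are assumptions that would need adjusting to the actual setup.

  #!/bin/bash
  # Rough morning-check sketch: ping the main hosts, flag nearly-full disks,
  # and confirm the mail daemons are running.

  for h in einstein taro pepper pumpkin; do
      ping -c 1 -W 2 "$h" >/dev/null 2>&1 || echo "WARNING: $h is not responding"
  done

  # Flag any filesystem at 90% or more (threshold is an assumption).
  df -hP | awk 'NR > 1 && int($5) >= 90 {print "WARNING: " $6 " is " $5 " full"}'

  # Mail daemons; process names may differ (spamd for SpamAssassin, amavisd).
  for p in spamd amavisd; do
      pgrep -x "$p" >/dev/null || echo "WARNING: $p is not running"
  done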
Important
Miscellaneous
Ongoing
Documentation
- Maintain the Documentation of all systems!
- Main function
- Hardware
- OS
- Network
- Continue homogenizing the configurations of the machines.
Maintenance
- Check e-mails to root every morning
- Check up on security [2]
On-the-Side
- Backup stuff: We need exclude filters on the backups, and we need to plan and run extensive tests before modifying the production backup program. Also, see if we can implement some sort of NFS user access. I've set up both filters and read-only snapshot access to backups at home, using what essentially amounts to a bash-script version of the fancy Perl thing we use now, only far less sophisticated. The filtering uses a standard rsync exclude file (syntax in the man page), and the user access is fairly obvious read-only NFS hosting. I am wondering if this is needed: the current scheme (i.e. the Perl script) handles excludes by having a .rsync-filter file in each of the directories whose contents you want excluded, and this has worked well. See ~maurik/tmp/.rsync-filter. The current script also takes care of some important issues, like incomplete backups. So we need to get users to keep that .rsync-filter file reasonably up to date, and to get them to store things under data, not home. Also, I wasn't suggesting we get rid of the Perl script; I was only saying that I've become familiar with a number of the things it does. (See the rsync filter sketch after this list.) [3]
- Continue purging NIS from ancient workstations, replacing it with local files. The following remain:
- pauli nodes -- Low priority!
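For reference, a minimal sketch of how the per-directory .rsync-filter scheme and read-only NFS access could look. The paths, hostnames, and exclude patterns here are placeholders, not the production configuration; the real backup invocation lives in the Perl script.

  # rsync's -F option merges a .rsync-filter file found in each directory
  # into the transfer rules (see FILTER RULES in the rsync man page).
  #
  # Example contents of a user's .rsync-filter (one rule per line):
  #   - scratch/
  #   - *.tmp
  #
  # Hypothetical manual invocation with placeholder paths:
  rsync -aHx -F --delete /home/ backuphost:/backup/home/

  # Read-only user access could be a plain NFS export on the backup server,
  # e.g. a line like this in /etc/exports:
  #   /backup/home  *.unh.edu(ro,root_squash)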
Waiting
- Move pauli8's drives into pauli5 so that Heisenberg can access his data. The RAID array on them has been rebuilt, but there seems to be some weird behaviour with LVM or something related.
- Get the rest of the paulis up. It looks like NIS is in the way on at least one of them, so an update to LDAP will be necessary. A workaround would be to make a local jhh user on each and point its home directory to /net/home/jhh (a rough sketch follows below). Not the most elegant solution, but the fact that NIS was around seems to have blocked LDAP from working properly. Do we even need the paulis anymore? Is silas giving some computational space to Jochen?
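A rough sketch of the local-user workaround mentioned above, assuming the UID/GID are made to match jhh's entries on the NFS server so file ownership lines up; the numbers here are placeholders.

  # On each pauli node (placeholder UID/GID; -M avoids creating a local home dir):
  useradd -u 1234 -g 100 -M -d /net/home/jhh -s /bin/bash jhh
  passwd jhh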
Completed
- Set up UPS monitoring for each UPS. It works! Einstein even cleanly shuts down when the power is critically low.
- Bohr is slow with PDFs. Maybe it's time to put a newer distro on bohr? It's running RHEL4, which is good enough for now, and the user hasn't mentioned it again.
- Gourd is giving smartd errors. Should we be concerned at all, given that it is hardly used anymore? People do still use it, just not all that often. Once Lorenzo has graduated and everyone straightens out their data, we can re-evaluate our data storage setup.
- Set up signal generator software on lab computer.
- Set up USB oscilloscope software on lab computer.
- Get familiar with denyhosts.
- Learn how to use Cacti via a VM appliance. Now unnecessary, since we run Cacti directly on roentgen.
- Backups weren't running because the RPM cron job hung, and run-parts runs its jobs in alphabetical order. RPM comes before rsync, so rsync-backup never got run. To prevent this in the future, rsync-backup is now called 0rsync-backup, so it will always run right after anacron and logwatch.
- The workstation Wilson can now print to wigner.
- Rebuilt the npg-admins mailing list. Because of Mailman's strange structure, it wasn't possible to carry over the old archives cleanly; there were apparently also version incompatibilities, so we had to start over.
- Monitoring: I would like to see the new temp-monitor integrated with Cacti, and to fix some of Cacti's capabilities, i.e. tie it in with the sensors output from pepper and taro (and tomato/einstein). Set up sensors on corn/pumpkin. Have an intelligent way of being warned when conditions are too hot, a drive has failed, or a system is down. I'm starting to get the hang of pulling this sort of data via SNMP: I wrote a Perl script that grabs the temperature data from the environmental monitor, as well as some nice info from einstein; it's in Matt's home directory, under code/npgmon/. We SHOULD be able to integrate a rudimentary script like this into Cacti or Splunk, getting a bit closer to an all-in-one monitoring solution. (A minimal SNMP query sketch follows this list.) - Mostly done
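A minimal sketch of the SNMP side, using the net-snmp command-line tools. The hostname, community string, OID, and alert threshold below are placeholders; the real values depend on the environmental monitor's MIB, and Matt's Perl script under code/npgmon/ remains the reference.

  # Query one temperature reading (placeholder host/community/OID):
  snmpget -v2c -c public envmon.unh.edu .1.3.6.1.4.1.XXXXX.1.1.0

  # For Cacti or a cron-driven alert, print just the value (-Oqv) and test it.
  # Assumes the value comes back as a bare integer in Celsius; threshold is an assumption.
  temp=$(snmpget -Oqv -v2c -c public envmon.unh.edu .1.3.6.1.4.1.XXXXX.1.1.0)
  [ "$temp" -gt 30 ] && echo "WARNING: machine room at ${temp} C" | mail -s "Temperature alert" root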
Previous Months Completed
March/April/May/June 2008 (I'm doing a great job keeping track of this, eh?)
July/Aug/Sep/Oct/Nov 2008 (It was the move, but still no excuse!)