Sysadmin Todo List
This is an unordered set of tasks. Detailed information on any of the tasks typically goes in related topics' pages, although usually not until the task has been filed under Completed.
Daily Check-off List
Each day when you come in, check the following:
- Einstein (script; a sketch of such a check appears after this list):
  - Up and running?
  - Disks at less than 90% full?
  - Mail system OK? (SpamAssassin, amavisd, ...)
- Temperature OK? No water blown into the room?
- Systems up: Taro, Pepper, Pumpkin/Corn?
- Backups:
  - Did the backup succeed?
  - Does Lentil need a new disk?
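A minimal sketch of what such a morning-check script could look like (hostnames, thresholds, and daemon names below are assumptions, not the actual einstein script):

#!/bin/bash
# Hypothetical daily-check sketch; adjust host and daemon names to our setup.

# Are the main systems up?
for h in taro pepper pumpkin corn; do
    ping -c 1 -W 2 "$h" > /dev/null 2>&1 && echo "$h: up" || echo "$h: DOWN"
done

# Any local filesystem at 90% or more?
df -hP | awk 'NR > 1 && int($5) >= 90 { print "Disk warning:", $6, "at", $5 }'

# Mail daemons still running? (process names are assumptions)
for svc in spamd amavisd; do
    pgrep "$svc" > /dev/null && echo "$svc: running" || echo "$svc: NOT RUNNING"
done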
Important
Towards a stable setup
Here are some thoughts, addressed especially to Steve, about getting an über-stable setup for the servers.
Some observations:
- When we get to DeMeritt at the end of next summer, we need a setup that easily ports to the new environment. We will also be limited to a total heat load of 10 kW (about 36,000 BTU/h, or 3 tons of cooling) because of the room's cooling capacity. That sounds like a lot, but Silas and Jiang-Ming will also put servers in this space. Our footprint should be no more than 3 to 4 kW of heat load.
- Virtual systems seem like the way to go. However, our experience with Xen is that it does not lead to highly portable VMs.
- VMware Server is now a free product; they make money on consulting and selling fancy add-ons. I have good experience with VMware Workstation on Macs and Linux. But it is possible that, like Red Hat (which was once free), they will start charging once they reach 90% or more market share.
Here are some options:
- We get rid of Tomato, Jalapeno, Gourd, and Okra, and perhaps also Roentgen. If we want, we can scavenge the parts from Tomato and Jalapeno (plus the old einstein) for a toy system, or we park these systems in the corner; I don't want to waste time on them. The only bumps I can think of here are that Xemed/Aaron use Gourd, and that moving/reconfiguring roentgen's web services could be a big pain since neither Matt nor I know much about the setup. But I guess we have to think about it anyhow, since its ancient OS will probably stop being supported at some point. Otherwise I think we're all in favor of cutting down on the number of physical machines we have running. Oh, and what about the paulis?
- Test VMware Server (see VMWare Progress); a rough sketch of moving a VM between hosts is at the end of this section. Specifically, I would like to know:
- How easy is it to move a VM from one machine to another? (Can you simply move the disks?)
- Specifically, if you need to service some hardware, can you move the VM to other hardware with little downtime? (Clearly not for large disk arrays, like pumpkin, but that is storage, not a host.)
- Do we need a Red Hat license for each VM, or only a license for the host, as with Xen?
- VMware allows for "virtual appliances", but how good are these really? Are these fast enough?
- Evaluate the hardware needs. Pumpkin, the new Einstein, Pepper, Taro, and Lentil all seem to be of sufficient quality and up to date. Do we need other hardware? If so, what?
- What I'd really like to see is a division of einstein's current labor. It seems like a good idea to have mail and LDAP on separate machines so that not every service is crippled when one machine has a problem. Or at least have some kind of backup LDAP server that can be swapped in in a pinch. Whether this would require another machine or just adding the service to an existing machine, I don't know.
I would like to have some answers to these questions by February 28th. The answers do not need to be complete (as in, fully testing VMware Server's capabilities), but it would be commendable if by then we can form a clear plan.
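As a first stab at the portability question, here is a rough, untested sketch of how a VMware Server guest might be moved by copying its directory and re-registering it; the VM name, paths, and hostnames here are made up:

# Hypothetical: move guest "rhel5-test" from this host to "newhost".
# 1. Shut the guest down cleanly.
vmware-cmd /vm/rhel5-test/rhel5-test.vmx stop trysoft

# 2. Copy the whole VM directory (vmx, vmdk, nvram) to the new host.
rsync -a /vm/rhel5-test/ newhost:/vm/rhel5-test/

# 3. Register and start it there.
ssh newhost 'vmware-cmd -s register /vm/rhel5-test/rhel5-test.vmx && \
             vmware-cmd /vm/rhel5-test/rhel5-test.vmx start'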
Pumpkin/Corn
Pumpkin is now stable. Read more on the configuration at Pumpkin and Xen.
Einstein Upgrade
Einstein upgrade project and status page: Einstein Status. Note: the current Einstein has a problem with / occasionally filling up; see Einstein#Special_Considerations_for_Einstein.
This is not moving forward quickly enough; I think we need a new strategy to get it accomplished. My new thought is to abandon the Tomato hardware, which may have been a source of the difficulties, and use what we learned from the "Einstein on RHEL5" setup to create a virtual-machine Tomato on which we test the upgrade to RHEL5.
Environmental Monitor
The big Maytag has been installed. When I came in this morning (2/15/2008), it was running at a pretty high temperature though (over 70), so I made sure to turn it down. Should the box fan be off? The A/C doesn't seem to work as well as before, probably because the big hole behind the fan isn't being blocked.
We have an environmental monitor running at http://10.0.0.98. It is capable of sending email and turning the fan on and off (this needs to be set up more intelligently). It responds to SNMP, so we can integrate it with Cacti (still to be done). Cacti doesn't support traps, since it is a polling tool; a possible workaround is to run another daemon that captures traps and writes them somewhere Cacti can pick them up, such as syslog (a sketch follows below). Or maybe we can just use Splunk instead.
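A sketch of that workaround using net-snmp's snmptrapd (the community string and file locations are assumptions about our setup):

# Let snmptrapd accept and log traps sent with community "public":
echo 'authCommunity log public' >> /etc/snmp/snmptrapd.conf

# Run snmptrapd so received traps go to syslog (daemon facility), where a
# Cacti script or Splunk can pick them up:
snmptrapd -Lsd

# Quick test from another machine (sends a v2c coldStart trap):
snmptrap -v 2c -c public trapreceiver.example.org '' 1.3.6.1.6.3.1.1.5.1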
Miscellaneous
- Lepton constantly has problems printing; at least once a month the queue locks up. This machine has Fedora Core 3 installed. I wonder if it would be worth just putting CentOS on it and being done with this recurring problem.
- Roentgen was plugged into one of the non-battery-backup slots of its UPS, so I shut it down and moved the plug. After starting back up, root got a couple of mysterious e-mails about /dev/md0 and /dev/md2:
Array /dev/md2 has experienced event "DeviceDisappeared"
However, mount seems to indicate that everything important is around:
/dev/vg_roentgen/rhel3 on / type ext3 (rw,acl)
none on /proc type proc (rw)
none on /dev/pts type devpts (rw,gid=5,mode=620)
usbdevfs on /proc/bus/usb type usbdevfs (rw)
/dev/md1 on /boot type ext3 (rw)
none on /dev/shm type tmpfs (rw)
/dev/vg_roentgen/rhel3_var on /var type ext3 (rw)
/dev/vg_roentgen/wheel on /wheel type ext3 (rw,acl)
/dev/vg_roentgen/srv on /srv type ext3 (rw,acl)
/dev/vg_roentgen/dropbox on /var/www/dropbox type ext3 (rw)
/usr/share/ssl on /etc/ssl type none (rw,bind)
/proc on /var/lib/bind/proc type none (rw,bind)
automount(pid1503) on /net type autofs (rw,fd=5,pgrp=1503,minproto=2,maxproto=4)
and all of the sites listed on Web Servers work. Were those just old arrays that aren't around anymore but are still listed in some config file? (A quick mdadm.conf check is sketched at the end of this list.)
- Clean out some users who left a while ago. (Maurik should do this.)
- Monitoring: I would like to see the new temperature monitor integrated with Cacti, and some of the Cacti capabilities fixed, e.g. tie it in with the sensors output from pepper and taro (and tomato/einstein), and set up sensors on corn/pumpkin. We need an intelligent way to be warned when conditions are too hot, a drive has failed, or a system is down.
- Check into smartd monitoring (and processing its output) on Pepper, Taro, Corn/Pumpkin, Einstein, and Tomato. (Sample smartd.conf entries are sketched at the end of this list.)
- Decommission Okra. This system is way too outdated to bother with. Move Cacti to another system; perhaps a VM, once we get that figured out?
- Decide whether we want to decommission Jalapeno. It is currently not a stable system, and perhaps not worth the effort of trying to make it stable. Its only service is Splunk, which can be moved to another system (which?). We could "rebuild" the hardware if there is a need.
- Gourd's been giving smartd errors, namely
Offline uncorrectable sectors detected:
/dev/sda [3ware_disk_00] - 48 Time(s)
1 offline uncorrectable sectors detected
Okra also has an offline uncorrectable sector!
- Continue purging NIS from ancient workstations, replacing it with files. The following remain:
  - pauli nodes -- Low priority!
- Learn how to use Cacti on okra. It seems like a nice tool, mostly set up for us already. Find out why lentil isn't being read by Cacti: install the net-snmp package, copy /etc/snmpd/snmpd.conf from a working machine to the new one, and start the snmpd service. Still not working, though. Lentil also won't start iptables-netgroups:
Net::SSLeay object version 1.30 does not match bootstrap parameter 1.25 at /usr/lib/perl5/5.8.8/i386-linux-thread-multi/DynaLoader.pm line 253, line 225.
Compilation failed in require at /usr/lib/perl5/vendor_perl/5.8.8/IO/Socket/SSL.pm line 17, line 225.
BEGIN failed--compilation aborted at /usr/lib/perl5/vendor_perl/5.8.8/IO/Socket/SSL.pm line 17, line 225.
Compilation failed in require at /usr/lib/perl5/vendor_perl/5.8.8/Net/LDAP.pm line 156, line 225.
Maybe that's why. Lentil and pumpkin have the same Perl packages installed, yet pumpkin doesn't fail at starting the script. (A quick check for a shadowed Net::SSLeay is sketched at the end of this list.)
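For the roentgen "DeviceDisappeared" question above, a quick way to compare what the kernel actually assembles with what mdadm is configured to watch (a sketch; paths assume a standard RHEL3-style install):

# Arrays the kernel currently has assembled:
cat /proc/mdstat

# Arrays the mdadm monitor is told to watch; stale /dev/md0 or /dev/md2
# entries here would explain the "DeviceDisappeared" mails:
grep ^ARRAY /etc/mdadm.conf

# What mdadm can actually find right now:
mdadm --detail --scan

# If the old entries are truly gone, prune them from /etc/mdadm.conf
# (review first!) and restart the monitor:
#   service mdmonitor restart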
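For the smartd item above, some example /etc/smartd.conf entries (a sketch; device names, 3ware port numbers, and temperature limits are assumptions that need checking per machine):

# Plain SATA/IDE disk: monitor all attributes, mail root, warn above 45 C.
/dev/sda -a -m root -W 4,40,45

# Disk behind a 3ware controller (gourd/okra style): address it by port.
/dev/twe0 -d 3ware,0 -a -m root

# Optional: short self-test nightly at 02:00, long test Saturdays at 03:00.
# /dev/sda -a -m root -s (S/../.././02|L/../../6/03)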
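And for the Net::SSLeay mismatch on lentil, a sketch of how to check whether a stale, locally installed copy of the module is shadowing the packaged one (package name assumed to be perl-Net-SSLeay):

# Every Net/SSLeay.pm visible to perl, in search order:
perl -e 'for (@INC) { print "$_/Net/SSLeay.pm\n" if -e "$_/Net/SSLeay.pm" }'

# Version perl actually tries to load vs. version the RPM provides:
perl -MNet::SSLeay -e 'print "$Net::SSLeay::VERSION\n"'
rpm -q perl-Net-SSLeay

# If an older copy (e.g. under site_perl) shadows the packaged module,
# removing or reinstalling it should let iptables-netgroups start.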
Ongoing
Documentation
- Maintain the Documentation of all systems!
  - Main function
  - Hardware
  - OS
  - Network
- Continue homogenizing the configurations of the machines.
- Improve documentation of mail software, specifically SpamAssassin, Cyrus, etc.
Maintenance
- Check e-mails to root every morning
- Resize/clean up partitions as necessary. It seems to be a running trend that a computer hits 0 free space and problems crop up. Symanzik and bohr seem imminent. Yup, bohr died; expanded his root by 2.5 GB (the usual LVM steps are sketched after this list). Still serious monitor problems though, temporarily bypassed with vesa... Bohr's problem seems tied to the nvidia drivers; let's wait until the next release and see how those work out.
- Check up on security [1]
- Clean up Room 202.
- Ask UNH if they are willing/able to recycle/reuse the three CRTs that we have sitting around.
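Since the full-partition problem keeps recurring, here is the usual LVM resize sketch (volume group and LV names are placeholders; check bohr's actual layout with vgdisplay/lvdisplay first):

# Confirm which filesystem is full and which LV backs it:
df -h /
lvdisplay

# Grow the logical volume by 2.5 GB, then grow the ext3 filesystem on it.
lvextend -L +2.5G /dev/vg_bohr/root
resize2fs /dev/vg_bohr/root   # online grow works on RHEL5; older releases may need ext2online or an unmount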
On-the-Side
- Learn how to use ssh-agent for task automation (the basic pattern is sketched after this list).
- Backup stuff: We need exclude filters on the backups, and we need to plan and execute extensive tests before modifying the production backup program. Also, see if we can implement some sort of NFS user access. I've set up both filters and read-only snapshot access to backups at home, using what essentially amounts to a bash-script version of the fancy perl thing we use now, only far less sophisticated. The filtering uses a standard rsync exclude file (syntax in the man page) and the user access is fairly obvious NFS read-only hosting. I am wondering if this is needed: the current scheme (i.e. the perl script) handles excludes by having a .rsync-filter in each of the directories whose contents you want excluded, and this has worked well (see ~maurik/tmp/.rsync-filter; a short rsync example is sketched after this list). The current script also takes care of some important issues, like incomplete backups. Ah. So we need to get users to somehow keep that .rsync-filter file fairly up to date, and to get them to use data to hold things, not home. Also, I wasn't suggesting we get rid of the perl script; I was saying that I've become familiar with a number of the things it does. [2] Put this on the back burner for now, since the current rate of backup disk consumption gives us about 10 months before the next empty disk is needed.
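The basic ssh-agent pattern for the automation item above (a sketch; the key path is an example):

# Start an agent and load a key once per session; the passphrase is asked once.
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_rsa

# Anything started from this shell inherits SSH_AUTH_SOCK and can ssh/rsync
# without prompting:
ssh lentil uptime

# Fully unattended cron jobs usually need either a long-lived agent (e.g.
# keychain) or a dedicated passphrase-less key restricted in authorized_keys.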
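And for the backup filters: the per-directory .rsync-filter mechanism the perl script uses maps onto rsync's dir-merge filter rules. A minimal sketch (hosts and paths are examples only):

# -F is shorthand for --filter='dir-merge /.rsync-filter', i.e. honor a
# .rsync-filter file found in each directory being copied:
rsync -aH -F /home/ lentil:/backups/home/

# Example ~user/.rsync-filter contents (exclude syntax; see FILTER RULES in man rsync):
# - scratch/
# - *.o
# - .cache/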
Waiting
- jalapeno hangups: Look at sensors on jalapeno so that Cacti can monitor the temperature. The crashing probably isn't the Splunk beta (no longer beta!), since it runs entirely in userspace. lm_sensors fails to detect anything readable; is there a way around this? Jalapeno's been on for two weeks with no problems, let's keep our fingers crossed…
- That guy's computer has a BIOS checksum error. Flashing the BIOS to the newest version succeeds, but doesn't fix the problem. No obvious mobo damage either. What happened? Who was that guy, anyhow? (Silviu Covrig, probably.) The machine is gluon, according to him. Waiting on ASUS tech support for warranty info. Aaron said it might be power-supply-related. Nope, definitely not: used a known-good PSU and still got the error, reflashed the BIOS with it and still got the error. Got the RMA, sending it out on Wednesday. Waiting on ASUS to send us a working one! Called ASUS on 8/6; they said it's getting repaired right now. Woohoo! Got a notification that it shipped! ...They didn't fix it... It still has the EXACT same error it had when we shipped it to them. What should we do about this? I'm going to call them up and have a talk, considering that the details of their shipment reveal they sent us a different motherboard (different serial number and everything) but with the same problem.
- Printer queue for the copier: Konica Minolta Bizhub 750, IP = pita.unh.edu. It seems we need info from the Konica guy to get it set up on Red Hat; the installation documentation for the driver doesn't mention things like the passcode, because those are machine-specific. Katie says that if he doesn't come on Monday, she'll make an inquiry. Mac OS X is now working; the IT guy should be here the week of June 26th. Did he ever come? No, he didn't, and he did not respond to a voice message left for him. Will call again.
- Sent an email to UNH Property Control asking what the procedure is to get rid of untagged equipment, namely the two old monitors in the corner. Apparently they want us to fill out lots of information on the scrapping form (whether it was paid for with government money, etc.), as well as give them serial numbers, model numbers, and everything else we can get ahold of. Then we get to hang onto them until the hazardous-equipment people come in and take them out, at their leisure. Waiting to figure out what we want to do with them.
Completed
- Pauli crashes nearly every day, though not when backups come around. We need to set up detailed system logging to find out why. Pauli2 and 4 don't give out their data via /net to the other paulis; this doesn't seem to be an autofs setting, since I see nothing about it in the working nodes' configs. Similarly, 2, 4, and 6 won't access the other paulis via /net. 2 and 4 were nodes we rebuilt this summer, so it makes sense that they don't have the right settings, but 6 is a mystery. Pauli2's hard drive may be dying: some files in /data are inaccessible, and smartctl shows a large number of errors (98, if I'm reading this right...). Time to get Heisenberg a new hard drive? Or maybe just wean him off of NPG… It may be done for; can't connect to pauli2 and rebooting didn't seem to work. Need to set up the monitor/keyboard for it and check things out. The pauli nodes are all off for now; they've been deemed to produce more heat than they're worth. We'll leave them off until Heisenberg complains. Heisenberg's complaining now. Fixed his pauli machine by walking into the room (I still don't know what he was talking about), and dirac had LDAP shut off. He wants the paulis up whenever possible, which I explained could be a while because of the heat issues. Pauli doesn't crash anymore, as far as I can tell; switching the power supply seems to have done it.