Difference between revisions of "Sysadmin Todo List"
Revision as of 21:05, 21 December 2007
This is an unordered set of tasks. Detailed information on any of the tasks typically goes in related topics' pages, although usually not until the task has been filed under Completed.
Important
Pumpkin/Corn
Our new system needs to be set up and tied in. Read more: Pumpkin
Einstein Upgrade
Massive amount of deployment documentation for RHEL 5
Remade the RAID5. Will redo the software, but if the RAID dies again we should think of a different machine to sub for einstein.
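A quick sanity check on the remade RAID5, sketched below. The device names and disk size are assumptions (the kernel panic note elsewhere on this page mentions an 8-device /dev/md1); the mdadm commands are left commented since they must never be run against disks holding data.

```shell
# Hypothetical sketch: recreating and verifying an 8-disk software RAID5.
# Device names (/dev/md1, /dev/sd[a-h]1) and the 250 GB disk size are
# assumptions, not taken from this page's actual hardware inventory.

NDISKS=8
DISK_GB=250

# RAID5 keeps one disk's worth of parity, so usable space is (N - 1) disks.
USABLE_GB=$(( (NDISKS - 1) * DISK_GB ))
echo "expected usable capacity: ${USABLE_GB} GB"

# mdadm --create /dev/md1 --level=5 --raid-devices=$NDISKS /dev/sd[a-h]1
# mdadm --detail /dev/md1    # confirm all 8 devices show as active/sync
# cat /proc/mdstat           # watch the initial resync finish before trusting it
```

If the array drops a disk again, `mdadm --detail` is the first place the failed member shows up.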
- Pick a date within the next week: Monday, 7/23/2007
- Send an e-mail to Aaron, warning him of the future takedown of tomato. Done
- Update Tomato to RHEL5. Installed w/ basic configuration (auth, autofs, etc.)
- Check all services einstein currently provides. Locate as many custom scripts, etc. as is reasonable and label/copy them.
- DNS: Installed, set up, working
- LDAP: Installed, set up, working. Changed config files on tomato and einstein to do replication, but their LDAP services need to be restarted. Need to schedule a time to do it on einstein. Double-check configs!
- Postfix: Installed, set up, working!
- AMaViS: Installed, set up
- ClamAV: Installed, set up
- SpamAssassin: Installed, set up, working? (need to test to make sure)
- IMAP: Let's try Dovecot instead. It's supposed to be simpler to maintain and is supported by Red Hat.
 - cyradm localhost gives "cannot connect to server". This all seems to be SASL-related. It'd probably be easy if there were a way to have Cyrus use PAM.
 - LDAP and SASL: Never mind, that has to do with using SASL to authenticate LDAP.
 - saslauthd -v lists pam and ldap as available authentication mechanisms, and /etc/sysconfig/saslauthd has an entry "MECH=pam"…! What am I missing?
 - Tried making a new "mail.physics.unh.edu.crt" for tomato, but couldn't, because that would have required revoking einstein's cert of the same name. Tried using "tomato.unh.edu.crt" and "tomato.unh.edu.key", but they give the same results as the "mail.physics.unh.edu.*" files copied from einstein. Tried using tomato's UNH address instead of the hostname: same result.
 - I'm able to log in using the imtest program, but the server doesn't send the same messages as shown here.
- /home: Installed, set up, working
- Samba: Installed, set up, working. If anyone needs Samba access, they need to find us and have us make them a Samba account. No LDAP integration.
- Web?
- Fortran compilers and things like that? (Also needs compat libs? Nope, tomato is 32-bit.)
- Clone those services to tomato
- Switch einstein <-> tomato, and then upgrade what was originally einstein
- Look into making an einstein, tomato failsafe setup.
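The SASL troubleshooting in the checklist above can be split in two: confirm which mechanism saslauthd is configured for, then test saslauthd directly with testsaslauthd, bypassing Cyrus entirely. A minimal sketch, using a simulated config file so nothing on a real box is touched; the username and outcomes in the comments are illustrative assumptions.

```shell
# Simulate the relevant line of /etc/sysconfig/saslauthd (content assumed
# from the note above that the file carries MECH=pam):
printf 'MECH=pam\n' > saslauthd.sample

MECH=$(sed -n 's/^MECH=//p' saslauthd.sample)
echo "saslauthd mechanism: $MECH"

# On the real machine, test authentication against saslauthd itself
# (requires the daemon to be running; user/password are placeholders):
#   testsaslauthd -u someuser -p secret -s imap
# "0: OK Success." -> the saslauthd/PAM side works; look at the Cyrus side
#                     (imapd.conf's sasl_pwcheck_method, and the certs) instead.
# "0: NO"          -> authentication itself fails; debug PAM/LDAP first.
```

This narrows "cannot connect to server" down to either the auth backend or the Cyrus/cert layer before touching any certificates.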
Miscellaneous
- When I was checking root's email, I waded through the logwatch reports and found that gourd's been giving smartd errors, namely
Offline uncorrectable sectors detected:
/dev/sda [3ware_disk_00] - 48 Time(s)
1 offline uncorrectable sectors detected
benfranklin is apparently up and running somewhere, because it's reporting drive issues too:
Currently unreadable (pending) sectors detected:
/dev/hda - 48 Time(s)
1 unreadable sectors detected
Interesting that both log the error 48 times every day. Gourd's drive needs looking at, and benfranklin's eventually, once we find out where it is. Benfranklin is Dan's workstation; it's in the room next to Maurik's office. BenFranklin is a Pentium III "Coppermine" at 800 MHz. I have ordered a replacement system already, so we can decommission the old BenFranklin.
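The two smartd complaints above map to two SMART attributes that can be read directly with smartctl. A short sketch; the device name is taken from the log line above, and the sample attribute row is illustrative, not real output from gourd.

```shell
# On gourd, check the raw counts behind the smartd warnings:
#   smartctl -A /dev/sda | egrep 'Offline_Uncorrectable|Current_Pending_Sector'
# A nonzero RAW_VALUE (last column) means real bad sectors, not log noise.

# Parse a sample attribute row the same way the real output would be parsed;
# the raw value is the last whitespace-separated field:
sample='198 Offline_Uncorrectable 0x0030 100 100 000 Old_age Offline - 1'
raw=$(echo "$sample" | awk '{print $NF}')
echo "offline uncorrectable sectors: $raw"
```

If the pending/uncorrectable counts keep climbing day over day, the drive is failing and should be replaced rather than watched.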
- Continue purging NIS from ancient workstations, replacing it with flat files. The following remain:
- pauli nodes
- Environmental monitoring. See The Friday Taro Event for more.
- For sensors, we need to get them working on just one reliable machine to get decent notification. blackbody has a CPU temp. sensor that works. lm_sensors FAQ - adding more sensors seems to work for bb, but some settings seem to be off. Sensors is working on both Taro and Pepper. We should now tie this into the Cacti readout scheme. (volunteers?)
- We should also look into dedicated temperature monitoring devices. Ordered a nice one; it should be here before Christmas. This can be read out by SNMP, so we can get Okra/Cacti to read and log it.
- RAID monitoring with smartd works on gourd. We should try to figure out why.
- To fix the RAID degradation, a disk has been transferred from lentil to taro. Now lentil is one short. Shouldn't really be a problem, considering how many other spare empty drives we have in lentil. Well, once we figure out why lentil gives us errors in its backup script...
- Booted tomato with a Xen kernel. I'm going to try to set up some VMs. Apparently VMs of RHEL are free with the host's license. It should be easier to experiment with IMAP and the other upgrade issues (such as the swap) with some VMs running on the network. Xen Docs Tomato isn't cooperating with the installation, as usual. We need to resize or rearrange the data partition to give some space for installations. Apparently it's not possible to simply resize an LVM partition.
- Idea for a future virtual machine: Set it up with vital services like LDAP and mail, and let it get the latest and greatest updates. Test that these updates don't break anything, then send the packages the npg-custom repository.
- Learn how to use cacti on okra. Seems like a nice tool, mostly set up for us already. Find out why lentil and okra (and tomato?) aren't being read by cacti. Could be related to the warnings that repeat in okra:/var/www/cacti/log/cacti.log. Not related to the warnings; those are for other machines that are otherwise being monitored. Try adding cacti to the exclude list in access.conf. Never mind, lentil doesn't have any restrictions. Need to find out the requirements for a machine to be monitored by cacti/rrdtool. The documentation makes it sound like only the cacti host needs any configuration, but I'm dubious. Ahh, it looks like every client has a file snmpd.conf, which affects what can be graphed. Tried configuring things on improv as in the Cacti HowTo, but no go. Must be some other settings as well. At some point on Friday, cacti stopped being able to monitor einstein. Update-related? There are no errors in cacti.log, but the status page for einstein just says "down".
- Install the right SNMP stuff on tomato so that it can be graphed
- jalapeno hangups: Look at sensors on jalapeno, so that cacti can monitor the temp. The crashing probably isn't the splunk beta (no longer beta!), since it runs entirely in userspace. lm_sensors fails to detect anything readable. Is there a way around this?
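On the cacti/SNMP items above: each monitored client needs a running snmpd that answers the cacti host's queries, so "down" in cacti can be tested on the client before blaming cacti itself. A minimal sketch of a client-side snmpd.conf; the community string, subnet, location, and contact are all made-up placeholders.

```shell
# Hypothetical minimal snmpd.conf for a cacti client; every value here is
# an assumed placeholder, not taken from the actual NPG configuration.
cat > snmpd.conf.sample <<'EOF'
# allow read-only queries from the monitoring subnet only
rocommunity npgread 10.0.0.0/24
syslocation server room
syscontact  root@localhost
EOF

# From the cacti host (okra), verify a client answers at all:
#   snmpwalk -v1 -c npgread tomato.unh.edu system
# If that times out, the problem is snmpd or a firewall on the client,
# not cacti's own graph configuration.

echo "rocommunity lines: $(grep -c '^rocommunity' snmpd.conf.sample)"
```

Running the snmpwalk check against einstein would quickly show whether its Friday "down" status is a dead snmpd or a cacti-side issue.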
Ongoing
Documentation
- Maintain the Documentation of all systems!
- Main function
- Hardware
- OS
- Network
- Continue homogenizing the configurations of the machines.
- Improve documentation of mail software, specifically SpamAssassin, Cyrus, etc.
Maintenance
- Check e-mails to root every morning
- Resize/clean up partitions as necessary. Seems to be a running trend that a computer gets 0 free space and problems crop up. Symanzik and bohr seem imminent. Yup, bohr died. Expanded his root by 2.5 gigs. Still serious monitor problems, though, temporarily bypassed with vesa... Bohr's problem seems tied to the nvidia drivers; let's wait until the next release and see how those work out.
- Check up on security [1]
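The "expanded his root by 2.5 gigs" fix above can be sketched for the next time a root fills up, assuming an LVM root with free extents in the volume group; the VG/LV names are invented for illustration.

```shell
# Hedged sketch of growing an LVM root online (VG/LV names are made up):
#   vgdisplay vg00                      # check Free PE / Size first
#   lvextend -L +2.5G /dev/vg00/root    # grow the logical volume
#   resize2fs /dev/vg00/root            # then grow the ext3 filesystem in it
# Shrinking is the hard direction: the filesystem must be shrunk *before*
# lvreduce, usually offline, which is why "simply resizing" a full data
# partition (as tried on tomato) doesn't just work.

# Sanity math: a 2.5 GiB extension in default 4 MiB LVM extents.
EXTENT_MB=4
GROW_MB=2560
NEED=$(( GROW_MB / EXTENT_MB ))
echo "extents needed: $NEED"
```

Checking `vgdisplay` for free extents first avoids finding out mid-resize that the VG has nothing left to give.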
On-the-Side
- See if we can get the busted printer in 322 to work down here.
- Certain settings are similar or identical for all machines, such as resolv.conf. It would be beneficial to write a program to do remote configuration; this would also simplify the process of adding/upgrading machines. Since resolv.conf was mentioned, I made a prototype that seems to work. Another idea that was tossed around was a program that periodically compares such files against master copies, to see if the settings somehow got changed. Learn how to use ssh-agent for most of these tasks.
- Backup stuff: We need exclude filters on the backups. We need to plan and execute extensive tests before modifying the production backup program. Also, see if we can implement some sort of NFS user access. I've set up both filters and read-only snapshot access to backups at home. It uses what essentially amounts to a bash script version of the fancy perl thing we use now, only far less sophisticated. However, the filtering uses a standard rsync exclude file (syntax in the man page) and the user access is fairly obvious NFS read-only hosting. I am wondering if this is needed. The current scheme (i.e. the perl script) does excludes by having a .rsync-filter in each of the directories where you want contents excluded. This has worked well. See ~maurik/tmp/.rsync-filter. The current script takes care of some important issues, like incomplete backups. Ah. So we need to get users to somehow keep that .rsync-filter file fairly updated, and to get them to use data to hold things, not home. Also, I wasn't suggesting we get rid of the perl script; I was saying that I've become familiar with a number of the things it does. [2] Put this on the backburner for now, since the current rate of backup disk consumption will give about 10 months before the next empty disk is needed.
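The per-directory .rsync-filter scheme described above uses rsync's standard dir-merge filter rules: the `-F` option makes rsync honor a `.rsync-filter` file found in each directory it descends into. A small sketch of what a user could drop into their home directory; the patterns and the commented rsync invocation are examples, not the production script's actual command line.

```shell
# Example .rsync-filter a user might keep in their home directory
# (patterns are illustrative; "-" lines are excludes, per man rsync):
cat > .rsync-filter <<'EOF'
- /scratch/
- *.o
- core.*
EOF

# The backup then picks these up via rsync's -F shorthand, roughly:
#   rsync -aF --delete /home/ lentil:/mnt/npg-daily-current/home/
# (-F == --filter='dir-merge /.rsync-filter', applied in every directory)

echo "exclude rules: $(grep -c '^-' .rsync-filter)"
```

Because the rules live with the data, users can maintain their own excludes without anyone touching the perl script.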
Waiting
- That guy's computer has a BIOS checksum error. Flashing the BIOS to the newest version succeeds, but doesn't fix the problem. No obvious mobo damage either. What happened? Who was that guy, anyhow? (Silviu Covrig, probably) The machine is gluon, according to him. Waiting on ASUS tech support for warranty info. Aaron said it might be power-supply-related. Nope. Definitely not. Used a known good PSU and still got the error; reflashed the BIOS with it and still got the error. Got an RMA, sending out on Wednesday. Waiting on ASUS to send us a working one! Called ASUS on 8/6; they said it's getting repaired right now. Woohoo! Got a notification that it shipped! ...they didn't fix it... Still has the EXACT same error it had when we shipped it to them. What should we do about this? I'm going to call them up and have a talk, considering that the details on their shipment reveal they sent us a different motherboard, different serial number and everything, but with the same problem.
- Printer queue for Copier: Konica Minolta Bizhub 750, IP=pita.unh.edu. Seems like we need info from the Konica guy to get it set up on Red Hat. The installation documentation for the driver doesn't mention things like the passcode, because those are machine-specific. Katie says that if he doesn't come on Monday, she'll make an inquiry. Mac OS X is now working; the IT guy should be here the week of June 26th. Did he ever come? No, he didn't, and he did not respond to a voice message left. Will call again.
- Try to pull as much data from Jim William's old drives as possible, if there's even anything on them. They seem dead. Maybe we can swap one board to the other drive and see if it works? What room is he in? His computer is working now (the ethernet devices will have to be changed to a non-farm setup once the machine is back in his office).
- Pauli crashes nearly every day, and not when backups come around. We need to set up detailed system logging to find out why. Pauli2 and 4 don't give out their data via /net to the other paulis. This doesn't seem to be an autofs setting, since I see nothing about it in the working nodes' configs. Similarly, 2, 4, and 6 won't access the other paulis via /net. 2 and 4 were nodes we rebuilt this summer, so it makes sense they don't have the right settings, but 6 is a mystery. Pauli2's hard drive may be dying. Some files in /data are inaccessible, and smartctl shows a large number of errors (98, if I'm reading this right...). Time to get Heisenberg a new hard drive? Or maybe just wean him off of NPG… It may be done for; can't connect to pauli2, and rebooting didn't seem to work. Need to set up the monitor/keyboard for it and check things out. The pauli nodes are all off for now. They've been deemed to produce more heat than they're worth. We'll leave them off until Heisenberg complains. Heisenberg's complaining now. Fixed his pauli machine by walking in the room (still don't know what he was talking about), and dirac had LDAP shut off. He wants the paulis up whenever possible, which I explained could be a while because of the heat issues.
Completed
- Lentil Backup issue resolved
- First of all: Do NOT use disks smaller than 350 GB for backup!! Those will not even fit one copy of what needs to be backed up.
- The link /mnt/npg-daily-current must exist and point to an actual drive.
- Old entry: Lentil's not doing backups. I tried manually running the script Friday afternoon, and the email log looks like it was backing up and stopped for no real reason. Checking the space on the drives (since the script seems unable to do so now), I found that npg-daily/28 is basically full, and npg-daily/29 is an untouched 250 GB. Maybe an update screwed around with how the script checks free space, preventing it from knowing how to move to the next drive. It's probably not any update; lentil was working fine until "The Friday Taro Event". I manually made the new symbolic link from /mnt/npg-daily-current -> /mnt/npg-daily/29. Maybe this'll fix it? That seems to be a no. Lots of "unable to make hard link" errors, "invalid cross-device link", and similar errors. It needs to know to copy the data it's backing up to the disk, since it's a new disk. I still think it's got something to do with that "unable to statfs" error.
- Taro and Pepper Iptable configurations: It seems that we have a problem in the iptable configuration, which caused connectivity issues on taro.
- Taro now has one ethernet port connected to the backend network. Outside connectivity is over the VLAN.
- Swap jalapeno power supply. No need to schedule downtime, considering it's always down. Has this been done? It's been up all week. The Epsilon power supply we bought for taro way back at the beginning of the summer is now in jalapeno.
- The weather might be getting too cold for running two air conditioners. The top one has been having some issues. Yesterday I came in and it had left the floor wet. Today, it had collected a major amount of ice and started to flash its lights and beep. I turned it off after that. The other day I came in and both were coated in ice and the room was stifling. I defrosted with the door wide open and fans on high. Any chance we can start leaving the window open without running environmental risks to the machines? We'd have to disconnect the top machine to do that, and that's a heavy-duty job, so probably not. See The Friday Taro Event Whoops. Looks like it happened by itself.
- Made a hard copy of roentgen's getent passwd output in /etc/passwd (ditto for group and shadow), and commented out the nis dependencies in roentgen's /etc/nsswitch.conf. Now roentgen should be free of the shackles of NIS. While doing the transfer, I noticed that my account didn't have a shadow entry! I copied the entry from einstein to roentgen, and voila, I could log into roentgen. Good riddance, NIS!
- Coincidentally, roentgen's UPS started flipping out right after I finished the above. It ran out of power, so I quickly replaced the old UPS with the new one that was being used by Matt's workstation.
- Temporarily mounted that random box fan that's been in the hall all summer into the old AC's spot. The maintenance guy's note implies that we'll be getting a fan to actually fit the hole, and Maurik said we might get a freestanding AC unit to pump cold air straight onto the machines.
- Sent a request to order a replacement battery for Roentgen's UPS to Michelle Walker.
- Something very wrong happened to tomato: I changed grub.conf to boot at runlevel 3 (normal, excludes X) and rebooted, but there was a kernel panic mentioning not being able to find vg_tomato. I rebooted again at the default runlevel and got the same error. I booted with the install disk, and it doesn't see any LVM anywhere. Ok, so it sees some LVM on sdd4. It also lists /dev/md1 as a "foreign" filesystem; presumably this is where the LVM should be residing. The kernel panic has a message saying that only 7/8 devices could be found for RAID5 device md1. What are the odds of taro and tomato losing a disk at the same time? Actually, it says 7/8 failed. This doesn't seem possible... Is the RAID card unplugged or something? Was able to make a new RAID; maybe it was a software error?
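The "invalid cross-device link" errors in the lentil backup entry above have a simple mechanical explanation: snapshot-style backups hard-link unchanged files against the previous snapshot, and hard links cannot cross filesystems. When /mnt/npg-daily-current flipped from disk 28 to disk 29, the first run on the new disk had nothing on the same device to link against. A tiny demonstration on one filesystem; the paths are throwaway names, not the real backup layout.

```shell
# Hard-link snapshot in miniature (paths are illustrative placeholders):
mkdir -p snap.0
printf 'unchanged file\n' > snap.0/file

cp -al snap.0 snap.1    # -l makes hard links instead of copies, as the
                        # backup script does for unchanged files

ino0=$(ls -i snap.0/file | awk '{print $1}')
ino1=$(ls -i snap.1/file | awk '{print $1}')
[ "$ino0" = "$ino1" ] && echo "same inode: hard link worked"

# The same ln/link across mount points, e.g. from /mnt/npg-daily/28 to
# /mnt/npg-daily/29, fails with EXDEV ("Invalid cross-device link"),
# so the first run on a fresh disk must do a full copy instead.
```

Once that first full copy exists on the new disk, subsequent runs can hard-link against it again and the errors stop.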