Sysadmin Todo List

From Nuclear Physics Group Documentation Pages

Revision as of 16:03, 12 November 2007

This is an unordered set of tasks. Detailed information on any of the tasks typically goes in related topics' pages, although usually not until the task has been filed under Completed.

Important

Einstein Upgrade

Massive amount of deployment documentation for RHEL 5

  1. Pick a date within the next week: Monday, 7/23/2007
  2. Send an e-mail to Aaron, warning him of the future takedown of tomato. Done
  3. Update tomato to RHEL5. Installed w/ basic configuration (auth, autofs, etc.)
  4. Check all services einstein currently provides. Locate as many custom scripts, etc. as is reasonable and label/copy them.
    1. DNS: installed, set up, working
    2. LDAP: installed, set up, working. Changed config files on tomato and einstein to do replication, but their LDAP services need to be restarted. Need to schedule a time to do it on einstein. Double-check the configs! (A replication sanity check is sketched after this list.)
    3. Postfix: installed, set up, working!
    4. AMaViS: installed, set up
    5. ClamAV: installed, set up
    6. SpamAssassin: installed, set up, working? (Need to test to make sure.)
    7. IMAP: cyradm localhost gives "cannot connect to server". This all seems to be SASL-related. It'd probably be easy if there were a way to have Cyrus use PAM (see LDAP and sasl). Nevermind, that has to do with using SASL to authenticate LDAP. saslauthd -v lists pam and ldap as available authentication mechanisms, and /etc/sysconfig/saslauthd has an entry "MECH=pam"…! What am I missing? Tried making a new "mail.physics.unh.edu.crt" for tomato, but couldn't, because that would have required revoking einstein's cert of the same name. Tried using "tomato.unh.edu.crt" and "tomato.unh.edu.key", but that gives the same results as the "mail.physics.unh.edu.*" pair copied from einstein. Tried using tomato's UNH address instead of the hostname: same result. I'm able to log in using the imtest program, but the server doesn't send the same messages as shown here. (A saslauthd check is sketched after this list.)
    8. /home: installed, set up, working
    9. Samba: installed, set up, working. If anyone needs Samba access, they need to find us and have us make them a Samba account. No LDAP integration.
    10. Web?
    11. Fortran compilers and things like that? (Also needs compat libs. Nope, tomato is 32-bit.)
  5. Clone those services to tomato
  6. Switch einstein <-> tomato, and then upgrade what was originally einstein
  7. Look into making an einstein, tomato failsafe setup.
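
A minimal sanity check for the LDAP replication item above, assuming the stock RHEL5 init script name (ldap); the base DN and uid are placeholders, adjust to the real directory layout:

  # restart the directory service on the replica first, then on the master
  service ldap restart
  # compare a known entry on master and replica; base DN and uid are placeholders
  ldapsearch -x -H ldap://einstein -b "dc=physics,dc=unh,dc=edu" "uid=someuser"
  ldapsearch -x -H ldap://tomato -b "dc=physics,dc=unh,dc=edu" "uid=someuser"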
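
For the IMAP/SASL item above, a way to check whether saslauthd itself accepts logins, separately from Cyrus; the username and password are placeholders:

  # ask saslauthd directly to authenticate for the imap service
  testsaslauthd -u someuser -p 'secret' -s imap
  # then talk to the Cyrus server itself; -m login forces plaintext auth through saslauthd
  imtest -m login -a someuser localhost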

Miscellaneous

  • Environmental monitoring. See The Friday Taro Event for more.
    • For sensors, we need to get them working on just one reliable machine to get decent notification. blackbody has a CPU temp sensor that works.
    • We should also look into dedicated temperature monitoring devices.
    • RAID monitoring with smartd works on gourd. We should try to figure out why. (A smartd/smartctl sketch follows this list.)
  • To fix the RAID degradation, a disk has been transferred from lentil to taro. Now lentil is one short.
  • Booted tomato with a Xen kernel. I'm going to try to set up some VMs; apparently VMs of RHEL are free with the host's license. It should be easier to experiment with IMAP and the other upgrade issues (such as the swap) with some VMs running on the network. Xen Docs. Tomato isn't cooperating with the installation, as usual. We need to resize or rearrange the data partition to give some space for installations. Apparently it's not possible to simply resize an LVM partition. Something very wrong happened to tomato: I changed grub.conf to boot at runlevel 3 (normal, excludes X) and rebooted, but there was a kernel panic mentioning not being able to find vg_tomato. I rebooted again at the default runlevel and got the same error. I booted with the install disk, and it doesn't see any LVM anywhere. Ok, so it sees some LVM on sdd4. It also lists /dev/md1 as a "foreign" filesystem; presumably this is where the LVM should be residing. The kernel panic has a message saying that only 7/8 devices could be found for the RAID5 device md1. What are the odds of taro and tomato losing a disk at the same time? Actually, it says 7/8 failed. This doesn't seem possible... Is the RAID card unplugged or something? (An md/LVM triage sketch follows this list.)
  • The weather might be getting too cold for running two air conditioners. The top one has been having some issues. Yesterday I came in and it had left the floor wet. Today, it had collected a major amount of ice and started to flash its lights and beep. I turned it off after that. The other day I came in and both were coated in ice and the room was stifling. I defrosted with the door wide open and fans on high. Any chance we can start leaving the window open without running environmental risks to the machines? We'd have to disconnect the top machine to do that, and that's a heavy-duty job, so probably not.
  • Idea for a future virtual machine: Set it up with vital services like LDAP and mail, and let it get the latest and greatest updates. Test that these updates don't break anything, then send the packages to the npg-custom repository.
  • Pauli crashes nearly every day, but not when backups come around. We need to set up detailed system logging to find out why. Pauli2 and 4 don't give out their data via /net to the other paulis. This doesn't seem to be an autofs setting, since I see nothing about it in the working nodes' configs. Similarly, 2, 4, and 6 won't access the other paulis via /net. 2 and 4 were nodes we rebuilt this summer, so it makes sense they don't have the right settings, but 6 is a mystery. Pauli2's hard drive may be dying: some files in /data are inaccessible, and smartctl shows a large number of errors (98, if I'm reading this right...). Time to get Heisenberg a new hard drive? Or maybe just wean him off of NPG… It may be done for; can't connect to pauli2, and rebooting didn't seem to work. Need to set up the monitor/keyboard for it and check things out. The pauli nodes are all off for now; they've been deemed to produce more heat than they're worth. We'll leave them off until Heisenberg complains. (See the smartctl sketch after this list.)
  • Learn how to use cacti on okra. Seems like a nice tool, mostly set up for us already. Find out why lentil and okra (and tomato?) aren't being read by cacti. Could be related to the warnings that repeat in okra:/var/www/cacti/log/cacti.log. Not related to the warnings; those are for other machines that are otherwise being monitored. Try adding cacti to the exclude list in access.conf. Nevermind, lentil doesn't have any restrictions. Need to find out the requirements for a machine to be monitored by cacti/rrdtools. The documentation makes it sound like only the cacti host needs any configuration, but I'm dubious. Ahh, it looks like every client has a file snmpd.conf, which affects what can be graphed. Tried configuring things on improv as in the Cacti HowTo, but no go. Must be some other settings as well. At some point on Friday, cacti stopped being able to monitor einstein. Update-related? There are no errors in cacti.log, but the status page for einstein just says "down". (A minimal snmpd.conf sketch follows this list.)
  • Install the right SNMP stuff on tomato so that it can be graphed
  • jalapeno hangups: Look at sensors on jalapeno, so that cacti can monitor the temp. The crashing probably isn't the splunk beta (no longer beta!), since it runs entirely in userspace. lm_sensors fails to detect anything readable. Is there a way around this? (See the lm_sensors sketch after this list.)
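
A quick triage sketch for the tomato md/LVM mess above; it uses the device names from the panic message (md1, vg_tomato), but everything else is an assumption:

  # is the kernel assembling the RAID5 at all, and what does it think of md1?
  cat /proc/mdstat
  mdadm --detail /dev/md1
  # does LVM see a physical volume on top of the array?
  pvscan
  vgscan
  # if vg_tomato turns up but isn't active, try activating it
  vgchange -ay vg_tomato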
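
For the smartd/smartctl items above, a minimal sketch; /dev/sda stands in for whatever device pauli2's drive actually is:

  # one-off health summary on a suspect drive
  smartctl -H -a /dev/sda
  # kick off a short self-test; results land in the SMART log a few minutes later
  smartctl -t short /dev/sda
  # example /etc/smartd.conf line: monitor all attributes, mail root on trouble
  /dev/sda -a -m root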
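
For the cacti/SNMP items above, a minimal net-snmp client setup; the community string and source network are placeholders:

  # /etc/snmp/snmpd.conf on the machine to be graphed (e.g. tomato):
  #   allow read-only queries from the monitoring network
  rocommunity npgread 10.0.0.0/24
  # then restart the daemon and test from okra
  service snmpd restart
  snmpwalk -v1 -c npgread tomato system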
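
And for the temperature items, the usual lm_sensors bootstrap, as a sketch; on hardware where the probe finds nothing (as on jalapeno), there may simply be no supported chip:

  # probe for supported sensor chips and generate the module list
  sensors-detect
  # load the suggested modules, then read the temperatures
  service lm_sensors start
  sensors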

Ongoing

Documentation

  • Maintain the Documentation of all systems!
    • Main function
    • Hardware
    • OS
    • Network
  • Continue homogenizing the configurations of the machines.
  • Improve documentation of mail software, specifically SpamAssassin, Cyrus, etc.

Maintenance

  • Check e-mails to root every morning
  • Resize/clean up partitions as necessary. Seems to be a running trend that a computer gets 0 free space and problems crop up. Symanzik and bohr seem imminent. Yup, bohr died. Expanded his root by 2.5 gigs. Still serious monitor problems, though, temporarily bypassed with vesa... Bohr's problem seems tied to the nvidia drivers; let's wait until the next release and see how those work out. (A disk-space triage sketch follows this list.)
  • Check up on security [1]
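
A minimal disk-space triage for the pattern above; vg_bohr/root is a hypothetical volume name, and the grow step only applies where root is on LVM with free extents in the volume group:

  # what's full, and what's eating it?
  df -h
  du -shx /var /home /tmp
  # if root is an LVM logical volume and the VG has room, grow it and the filesystem
  lvextend -L +2.5G /dev/vg_bohr/root
  resize2fs /dev/vg_bohr/root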

On-the-Side

  • See if we can get the busted printer in 322 to work down here.
  • Certain settings are similar or identical for all machines, such as resolv.conf. It would be beneficial to write a program to do remote configuration. This would also simplify the process of adding/upgrading machines. Since resolv.conf was mentioned, I made a prototype that seems to work. Another idea that was tossed around was a program that periodically compared such files against master copies, to see if the settings somehow got changed. Learn how to use ssh-agent for most of these tasks. (A push-loop sketch follows this list.)
  • Backup stuff: We need exclude filters on the backups. We need to plan and execute extensive tests before modifying the production backup program. Also, see if we can implement some sort of NFS user access. I've set up both filters and read-only snapshot access to backups at home. It uses what essentially amounts to a bash-script version of the fancy perl thing we use now, only far less sophisticated. However, the filtering and user access use a standard rsync exclude file (syntax in the man page), and the user access is fairly obvious NFS read-only hosting. I am wondering if this is needed. The current scheme (i.e. the perl script) uses excludes by having a .rsync-filter in each of the directories where you want excluded contents. This has worked well. See ~maurik/tmp/.rsync-filter. The current script takes care of some important issues, like incomplete backups. Ah. So we need to get users to somehow keep that .rsync-filter file fairly updated. And to get them to use data to hold things, not home. Also, I wasn't suggesting we get rid of the perl script, I was saying that I've become familiar with a number of the things it does. [2] Put this on the backburner for now, since the current rate of backup disk consumption will give about 10 months before the next empty disk is needed. (An rsync filter sketch also follows this list.)
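
A minimal sketch of the resolv.conf push idea above, using ssh-agent so the loop doesn't prompt per host; the key path and host list are placeholders:

  # load a key once so scp doesn't ask for a passphrase on every machine
  eval $(ssh-agent)
  ssh-add ~/.ssh/id_rsa
  # push the master copy to every host (host list is hypothetical)
  for h in einstein tomato taro gourd lentil okra; do
      scp /etc/resolv.conf root@$h:/etc/resolv.conf
  done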
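
And a sketch of the per-directory filter mechanism the perl script already uses: rsync's -F option is shorthand for --filter='dir-merge /.rsync-filter', so a .rsync-filter file in any directory excludes things beneath it. Paths here are assumptions:

  # contents of e.g. ~user/.rsync-filter, one pattern per line:
  #   - scratch/
  #   - *.tmp
  # backup run honoring the per-directory filters
  rsync -aF /home/ backuphost:/backups/home/
  # read-only user access to snapshots via NFS: an /etc/exports line like
  #   /backups *.unh.edu(ro)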

Waiting

  • It turns out that business services didn't actually order the new server.
  • That guy's computer has a BIOS checksum error. Flashing the BIOS to the newest version succeeds, but doesn't fix the problem. No obvious mobo damage either. What happened? Who was that guy, anyhow? (Silviu Covrig, probably.) The machine is gluon, according to him. Waiting on ASUS tech support for warranty info. Aaron said it might be power-supply-related. Nope, definitely not: used a known good PSU and still got the error, reflashed the BIOS with it and still got the error. Got an RMA, sending it out on Wed. Waiting on ASUS to send us a working one! Called ASUS on 8/6; they said it's getting repaired right now. Wohoo! Got a notification that it shipped! ...they didn't fix it... Still has the EXACT same error it had when we shipped it to them. What should we do about this?
  • Printer queue for the copier: Konica Minolta Bizhub 750, IP=pita.unh.edu. Seems like we need info from the Konica guy to get it set up on Red Hat. The installation documentation for the driver doesn't mention things like the passcode, because those are machine-specific. Katie says that if he doesn't come on Monday, she'll make an inquiry. Mac OS X is now working; the IT guy should be here the week of June 26th. Did he ever come? No, he didn't, and he did not respond to a voice message left. Will call again. (A CUPS queue sketch follows this list.)
  • Try to pull as much data from Jim William's old drives as possible, if there's even anything on them. They seem dead. Maybe we can swap one board to the other drive and see if it works? What room is he in? His computer is working now (the ethernet devices will have to be changed to a non-farm setup once the machine is back in his office).
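
A hedged sketch of setting up the copier queue with CUPS once the vendor PPD and passcode details are in hand; the queue name, port, and PPD path are assumptions:

  # create the queue pointing at the copier's print port (9100 is an assumption)
  lpadmin -p bizhub750 -E -v socket://pita.unh.edu:9100 -P /path/to/bizhub750.ppd
  # send any small text file as a test page
  lp -d bizhub750 /etc/hosts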

Completed

  • Swap jalapeno power supply. No need to schedule downtime, considering it's always down. Has this been done? It's been up all week. The Epsilon power supply we bought for taro way back at the beginning of the summer is now in jalapeno.

Previous Months Completed

June 2007

July 2007

August 2007

September 2007

October 2007