Sysadmin Todo List

From Nuclear Physics Group Documentation Pages
This is an unordered set of tasks. Detailed information on any of the tasks typically goes in related topics' pages, although usually not until the task has been filed under [[Sysadmin Todo List#Completed|Completed]].
This is the new Sysadmin Todo List as of 05/27/2010. The previous list was moved to [[Old Sysadmin Todo List]]. This list is incomplete and needs updating.
== Daily Check off list ==

Each day when you come in, check the following (a scripted version of these checks is sketched below the list):

# Einstein ([[Script Prototypes|script]]):
## Up and running?
## Disks are at less than 90% full?
## Mail system OK? (spamassassin, amavisd, ...)
# Temperature OK? No water blown into room?
# Systems up: Taro, Pepper, Pumpkin/Corn?
# Backups:
## Did the backup succeed?
## Does Lentil need a new disk?
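The checklist above is easy to script (the [[Script Prototypes|script]] linked from item 1 does essentially this). Below is a minimal sketch of the same checks, assuming passwordless SSH from the admin box to einstein and lentil; the backup mount point is illustrative, not the real path.

<pre>
#!/bin/bash
# Sketch of the daily checks above (illustrative; the maintained version lives under Script Prototypes).

# 1. Is einstein up?
ping -c 1 -W 2 einstein >/dev/null && echo "einstein: up" || echo "einstein: DOWN"

# 2. Any filesystem on einstein at 90% or more?
ssh einstein "df -hP | awk 'NR>1 && \$5+0 >= 90 {print \$6, \$5}'"

# 3. Mail chain processes present? (spamassassin, amavisd, postfix, dovecot)
for svc in spamd amavisd postfix dovecot; do
    ssh einstein "pgrep -f $svc >/dev/null" && echo "einstein: $svc ok" || echo "einstein: $svc NOT running"
done

# 4. Backups: how full are lentil's backup drives? (mount point is illustrative)
ssh lentil "df -hP /mnt/backup"
</pre>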
 
  
== Projects ==

*Convert physical machines and VMs to CentOS 6 for the compute servers ([[taro]], [[endeavour]]) and all others to either 6 or 7.
**VMs: Einstein
**Physical: [[endeavour]], [[taro]], and [[gourd]]
*Mailman: Clean up mailman and make sure all the groups and users are in order.
*CUPS: Look into having CUPS authenticate users through LDAP instead of using Samba (a configuration sketch follows this list).
*Printer: Get printtracker.py working and see if a driver can be made to report the actual page count, instead of always reporting 1, which corresponds to a job submission rather than the number of pages.
*Check the /etc/apcupsd/shutdown2 script on Gourd to make sure all the keys are correctly implemented so the machines shut down properly during a power outage (a sketch of such a script follows this list).
*Do a check on Lentil to see if there is any unnecessary data being backed up.
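For the CUPS item above: CUPS built with PAM support runs "Basic" authentication through the system PAM stack, so pointing the cups PAM service at LDAP (pam_ldap, or sssd on newer systems) should let LDAP users authenticate without Samba. A sketch under those assumptions; the exact module names and options depend on the distribution:

<pre>
# /etc/cups/cupsd.conf (fragment, sketch) -- require authentication for admin operations
DefaultAuthType Basic
<Location /admin>
  AuthType Basic
  Require user @SYSTEM
  Order allow,deny
  Allow localhost
</Location>

# /etc/pam.d/cups (sketch) -- try local accounts first, then LDAP; module names vary by distro
auth     sufficient   pam_unix.so
auth     required     pam_ldap.so use_first_pass
account  sufficient   pam_unix.so
account  required     pam_ldap.so
</pre>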
  
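For the apcupsd item above: the shutdown2 script on [[Gourd]] is site-specific, but the general idea is to use the SSH keys to power down the other servers before Gourd itself goes down. A minimal sketch; the host list and key path are illustrative and the real script may differ:

<pre>
#!/bin/bash
# Sketch of a shutdown2-style script called from apcupsd/apccontrol on power failure.
# Host list and key path are illustrative; the real script on Gourd may differ.

KEY=/root/.ssh/ups_shutdown_key
HOSTS="taro pepper pumpkin einstein"

for h in $HOSTS; do
    # BatchMode makes the script fail fast if a key is missing or wrong
    ssh -i "$KEY" -o BatchMode=yes -o ConnectTimeout=10 root@$h "/sbin/shutdown -h now" \
        || logger -t apcupsd "shutdown of $h failed"
done

# Finally take down the local machine
/sbin/shutdown -h now
</pre>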
== Important ==

=== Towards a stable setup ===
  
Here are some thoughts, especially to Steve, about getting an über-stable setup for the servers.  <br>
Some observations:

# When we get to DeMeritt at the end of next summer, we need a setup that easily ports to the new environment. We will also be limited to a total of 10 kW of heat load (36,000 BTU, or 3 tons of cooling), due to the cooling of the room. That sounds like a lot, but Silas and Jiang-Ming will also put servers in this space. Our footprint should be no more than 3 to 4 kW of heat load.
# Virtual systems seem like the way to go. However, our experience with Xen is that it does not lead to highly portable VMs.
# VMware Server is now a free product. They make money consulting and selling fancy add-ons. I have good experience with VMware Workstation on Macs and Linux. But it is possible (like RedHat, which was once free) that they will start charging when they reach 90% or more market share.
 
  
Here are some options: <br>
* We get rid of Tomato, Jalapeno, Gourd and Okra and perhaps also Roentgen. If we want we can scavenge the parts from Tomato & Jalapeno (plus old einstein) for a toy system, or we park these systems in the corner. I don't want to waste time on them. The only bumps that I can think of here would be that Xemed/Aaron use Gourd.  Otherwise I think we're all in favor of cutting down on the number of physical machines that we've got running. Oh, and what about the paulis? '''Since they're not under our "jurisdiction", they'll probably end up there anyhow.
* Test VMware server (See [[VMWare Progress]]). Specifically, I would like to know:
## How easy is it to move a VM from one hardware to another? (Can you simply move the disks?) '''Yes.'''
## Specifically, if you need to service some hardware, can you move the host to other hardware with little down time? (Clearly not for large disk arrays, like pumpkin, but that is storage, not hosts). '''Considering portability of disks/files, the downtime is the time it takes to move the image around and start up on another machine.'''
## Do we need a RedHat license for each VM, or do we only need a license for the host, as with Xen? '''It seems to consume a license per VM. Following [http://kbase.redhat.com/faq/FAQ_103_10754.shtm this] didn't work for the VMWare systems. The closest thing to an official word that I could find was [http://www.redhat.com/archives/taroon-list/2004-August/msg00292.html this].'''
## VMware allows for "virtual appliances", but how good are these really? Are these fast enough?
* Evaluate the hardware needs. Pumpkin, the new Einstein, Pepper, Taro and Lentil seem to be all sufficient quality and up to date. Do we need another HW? If so, what?
== Daily Tasks ==

These are things that should be done every day when you come into work.

#Do a physical walk-through/visual inspection of the server room.
#Verify that all systems are running and all necessary services are functioning properly (a quick service sweep is sketched below this list).
#*For a quick look at which systems are up you can use /usr/local/bin/[[serversup.py]].
#*[[Gourd]]: Make sure that home folders are accessible and all virtual machines are running.
#*[[Einstein]]: Make sure that [[LDAP]] and all [[e-mail]] services (dovecot, spamassassin, postfix, mailman) are running.
#*[[Roentgen]]: Make sure the website and MySQL are available.
#*[[Jalapeno]]: Make sure named and CUPS are running.
#*[[Lentil]]: Verify that backups ran successfully overnight. Check space on the backup drives, and add new drives as needed.
#Check [[Splunk]]: [https://pumpkin.farm.physics.unh.edu:8000 click here if you're in the server room], or open localhost:8000 (use https) from [[Pumpkin]].
#*Check the logs for errors, and keep an eye out for other irregularities.
#Check [[Cacti]]: [http://roentgen.unh.edu/cacti http://roentgen.unh.edu/cacti]
#*Verify that temperatures are acceptable.
#*Monitor other graphs/indicators for any unusual activity.
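For item 2 of the Daily Tasks list above, a quick per-host service sweep can be scripted. This is a sketch, not the existing tooling: /usr/local/bin/[[serversup.py]] is the real host-reachability check, and the process names below assume OpenLDAP (slapd), Mailman 2 (mailmanctl), and stock daemon names; adjust to match the actual installs.

<pre>
#!/bin/bash
# Sketch: verify the critical daemons named in the Daily Tasks list are running.

check() {   # usage: check <host> <process>
    if ssh "$1" "pgrep -f $2 >/dev/null"; then
        echo "$1: $2 ok"
    else
        echo "$1: $2 NOT RUNNING"
    fi
}

check einstein slapd        # LDAP
check einstein dovecot      # IMAP/POP
check einstein spamd        # spamassassin
check einstein postfix      # MTA
check einstein mailmanctl   # mailing lists
check roentgen mysqld       # database behind the websites
check roentgen httpd        # web server
check jalapeno named        # DNS
check jalapeno cupsd        # printing
</pre>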
  
== Einstein Upgrade ==
  
Einstein upgrade project and status page: [[Einstein Status]]
'''Note:''' Einstein (current one) has a problem with / getting full occasionally. See [[Einstein#Special_Considerations_for_Einstein]]
 
  
It seems this is not moving forward sufficiently. I think we need a new strategy to get this accomplished. My new thought is to abandon the Tomato hardware, which may have been a source of the difficulties, and use what we learned for the setup of "Einstein on RHEL5" to create a virtual machine Tomato, where we test the upgrade to RHEL5.  
== Weekly Tasks ==

These are things that should be done once every 7 days or so.

#Check physical interface connections.
#*Verify that all devices are connected appropriately, that cables are labeled properly, and that all devices (including RAID and IPMI cards) are accessible on the network.
#Check the Areca RAID interfaces.
#*The RAID interfaces on each machine are configured to send e-mail to the administrators if an error occurs. It is still a good idea to log in and check them manually on occasion, just for the sake of caution.
#Clean up the server room and sweep the floors.

== Monthly Tasks ==

#Perform [[Enviromental_Control_Info#Scheduled_Maintenance|scheduled maintenance]] on the server room air conditioning units.
#Check S.M.A.R.T. information on all server hard drives (a smartctl example follows this list).
#*Make a record of any drives which are reporting errors or nearing failure.
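For the S.M.A.R.T. item above: smartctl (from smartmontools) gives a health verdict plus the attributes worth recording. Disks behind a 3ware or Areca card have to be addressed through the controller with -d; the device names below are examples, not an inventory of our machines.

<pre>
# Health verdict plus the attributes worth recording each month
smartctl -H /dev/sda
smartctl -A /dev/sda | egrep 'Reallocated_Sector|Current_Pending|Offline_Uncorrectable'

# Disks behind a 3ware controller are addressed through the controller device, e.g.:
#   smartctl -H -d 3ware,0 /dev/twe0
# (device name and disk number depend on the card; Areca cards use -d areca,N)
</pre>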
  
== Miscellaneous ==
* '''MarieCurie''': Feynman's video card doesn't fit in any of mariecurie's slots (is it AGP or something?). I'm going to see if blackbody's Nvidia (which can do widescreen with the "nv" driver) fits, whenever Sarah isn't busy with her machine. If it does, then she can take the card, at least while I mess around with her ATI card in blackbody. '''blackbody's card doesn't fit in any of the slots, either. I have no clue what kind of connection it needs. The next step is to just look for and order a PCI/PCI-X NVidia card that's known to work at 1680x1050 on RedHat.''' Tried the 6200LE, had the same problems as the ATI cards!
 
* Lepton constantly has problems printing. It seems that at least once a month the queue locks up. This machine has Fedora Core 3 installed, I wonder if it would be more worth it to just put CentOS on it and be done with this recurring problem.
 
* Fermi has problems allowing me to log in. nsswitch.conf looks fine, getent passwd shows all the users like it's supposed to. There are no restrictions in ''/etc/security/access.conf'', either.
 
* Gourd won't let me (Matt) log in, saying no such file or directory when trying to chdir to my home, and then it boots me off. Trying to log in as root from einstein is successful just long enough for it to tell me when the last login was, then boots me. '''(Steve here) I was able to log in and do stuff, but programs were intermittently slow.'''
 
* Clean out some users who have left a while ago. (Maurik should do this.)
 
* '''Monitoring''': I would like to see the new temp-monitor integrated with Cacti, and fix some of the cacti capabilities, i.e. tie it in with the sensors output from pepper and taro (and tomato/einstein). Setup sensors on the corn/pumpkin. Have an intelligent way in which we are warned when conditions are too hot, a drive has failed, a system is down.  '''I'm starting to get the hang of getting this sort of data via snmp. I wrote a perl script that pulls the temperature data from the environmental monitor, as well as some nice info from einstein. We SHOULD be able to integrate a rudimentary script like this into cacti or splunk, getting a bit closer to an all-in-one monitoring solution. It's in Matt's home directory, under code/npgmon/'''
 
* Check into smartd monitoring (and processing its output) on Pepper, Taro, Corn/Pumpkin, Einstein, and Tomato (a smartd.conf sketch follows this list).
 
* Decommission Okra. - This system is way too outdated to bother with it. Move Cacti to another system. Perhaps a VM, once we get that figured out?
 
* Learn how to use [[cacti]]. We should consider using a VM appliance to do this, so it's minimal configuration, and since okra's only purpose is to run cacti.
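For the smartd item in the list above, a minimal /etc/smartd.conf sketch; the mail target and self-test schedule are illustrative, not what is currently deployed on any of these machines.

<pre>
# /etc/smartd.conf (sketch) -- monitor everything smartd can find, check health and attributes,
# mail root on trouble, and run a short self-test Sunday mornings at 03:00.
DEVICESCAN -a -m root -M diminishing -s (S/../../7/03)

# after editing (RHEL/CentOS):
#   service smartd restart && chkconfig smartd on
</pre>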
 
  
== Ongoing ==
=== Documentation ===
 
* '''<font color="red" size="+1">Maintain the Documentation of all systems!</font>'''
 
** Main function
 
** Hardware
 
** OS
 
** Network
 
* Continue homogenizing the configurations of the machines.
 
* Improve documentation of [[Software Issues#Mail Chain Dependencies|mail software]], specifically SpamAssassin, Cyrus, etc.
 
=== Maintenance ===
 
* Check e-mails to root every morning
 
* Check up on security [http://www.redhat.com/docs/manuals/enterprise/RHEL-5-manual/Deployment_Guide-en-US/ch-sec-network.html#ch-wstation]
 
* Clean up Room 202.
 
** Start reorganizing things back into boxes for the August move.
 
** Ask UNH if they are willing/able to recycle/reuse the three CRTs '''and old machines''' that we have sitting around. '''Give them away if we have to.'''
 
  
=== On-the-Side ===
* Learn how to use ssh-agent for task automation (a short example follows the Annual Tasks list below).
* Backup stuff: We need exclude filters on the backups. We need to plan and execute extensive tests before modifying the production backup program. Also, see if we can implement some sort of NFS user access. '''I've set up both filters and read-only snapshot access to backups at home. It uses what essentially amounts to a bash script version of the fancy perl thing we use now, only far less sophisticated. However, the filtering and user access use a standard rsync exclude file (syntax in the man page), and the user access is fairly obvious NFS read-only hosting.''' <font color="green"> I am wondering if this is needed. The current scheme (i.e. the perl script) uses excludes by having a .rsync-filter in each of the directories where you want excluded contents. This has worked well. See ~maurik/tmp/.rsync-filter . The current script takes care of some important issues, like incomplete backups.</font> Ah. So we need to get users to somehow keep that .rsync-filter file fairly updated, and to get them to use data to hold things, not home. Also, I wasn't suggesting we get rid of the perl script; I was saying that I've become familiar with a number of the things it does. (A filter example follows the Annual Tasks list below.) [http://www.redhat.com/docs/manuals/enterprise/RHEL-5-manual/Deployment_Guide-en-US/ch-acls.html#s2-acls-mounting-nfs]
* Continue purging NIS from ancient workstations, and replacing it with files. The following remain:
** pauli nodes -- Low priority!
== Annual Tasks ==

These are tasks that are necessary but not critical, or that might require some amount of downtime. These should be done during semester breaks (probably mostly in the summer), when we're likely to have more time and when downtime won't have as detrimental an impact on users.

#Server software upgrades
#*Kernel updates, or updates for any software related to critical services, should only be performed during breaks to minimize the inconvenience caused by reboots, or unexpected problems and downtime.
#Run fsck on data volumes
#Clean/dust out systems
#Rotate old disks out of RAID arrays
#Take an inventory of our server room / computing equipment

<!--{| cellpadding="5" cellspacing="0" border="1"
! Time of Year !! Things to Do !! Misc.
|-
| Summer Break || ||
|-
|  || Major Kernel Upgrades ||
|-
|  || Run FDisk ||
|-
|  || Clean (Dust-off/Filters) while Systems are Shut down ||
|-
| Thanksgiving Break || ||
|-
| Winter Break || ||
|-
| || Upgrade RAID disks || Upgrade only disks connected to a RAID card
|-
| Spring Break || ||
|-
|} -->
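For the ssh-agent item under On-the-Side above, the usual pattern for task automation is to start one agent, load the key, and let later jobs inherit the agent socket. A short sketch; the key and script paths are made up for illustration:

<pre>
# Start one agent and load the automation key (key path is illustrative)
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/automation_key

# Anything run later in this session inherits SSH_AUTH_SOCK and can ssh without a passphrase:
ssh taro uptime

# For cron jobs, stash the agent environment once and source it from the job:
echo "export SSH_AUTH_SOCK=$SSH_AUTH_SOCK" > ~/.ssh/agent.env
# crontab entry (illustrative):
#   0 3 * * *  . $HOME/.ssh/agent.env && /usr/local/bin/nightly-task.sh
</pre>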
  
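To make the .rsync-filter discussion under On-the-Side concrete: rsync's -F option picks up a per-directory .rsync-filter file (see ~maurik/tmp/.rsync-filter for a real one), which is how the current perl backup script handles excludes. This is only a sketch; the patterns and destination path are illustrative.

<pre>
# ~user/.rsync-filter (sketch) -- per-directory exclude rules picked up by rsync's -F option
- scratch/
- *.tmp
- .cache/

# Backup invocation (sketch; destination path is illustrative).
# -F is shorthand for --filter='dir-merge /.rsync-filter'
rsync -aH -F --delete /home/ lentil:/backup/npg-daily/home/
</pre>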
== Waiting ==
* That guy's computer has a BIOS checksum error. Flashing the BIOS to the newest version succeeds, but doesn't fix the problem. No obvious mobo damage either. What happen?  '''Who was that guy, anyhow?''' (Silviu Covrig, probably) The machine is gluon, according to him. '''Waiting on ASUS tech support for warranty info'''  Aaron said it might be power-supply-related. '''Nope. Definitely not. Used a known good PSU and still got error, reflashed bios with it and still got error. '''Got RMA, sending out on wed.''' Waiting on ASUS to send us a working one!''' Called ASUS on 8/6, they said it's getting repaired right now. '''Wohoo! Got a notification that it shipped!''' ...they didn't fix it... Still has the EXACT same error it had when we shipped it to them. '''What should we do about this?''' I'm going to call them up and have a talk, considering looking at the details on their shipment reveals that they sent us a different motherboard, different serial number and everything but with the same problem.
* Printer queue for Copier: Konica Minolta Bizhub 750. IP=pita.unh.edu  '''Seems like we need info from the Konica guy to get it set up on Red Hat.  The installation documentation for the driver doesn't mention things like the passcode, because those are machine-specific.  Katie says that if he doesn't come on Monday, she'll make an inquiry.''' <font color="green">Mac OS X now working,  IT guy should be here week of June 26th</font> '''Did he ever come?''' No, he didn't, and did not respond to a voice message left. Will call again.
* Sent an email to UNH Property Control asking what the procedure is to get rid of untagged equipment, namely, the two old monitors in the corner. Apparently they want us to fill out lots of information on the scrapping form like if it was paid for with government money, etc, as well as give them serial numbers, model numbers, and everything we can get ahold of. Then, we get to hang onto them until the hazardous equipment people come in and take it out, at their leisure. Waiting to figure out what we want to do with them.
 
== Completed ==
* '''pumpkin/lentil/mariecurie/einstein2/corn''': <font color="red">It was all thanks to nss_ldap</font>Here's a summary of the symptoms, none of which occur for root:
** bash has piping problems for Steve, but sometimes not for Matt. Something like <code>ls | wc</code> will print nothing and <code>echo $?</code> will print 141, aka SIGPIPE. Backticks will cause similar problems. Something like <code>echo `ls`</code> will print nothing, and <code>`ls`; echo $?</code> also prints 141. Since several system-provided startup scripts rely on strings returned from backticks, bash errors will print upon login.
** '''None''' of the bash problems seem to happen when logged onto the physical machine, rather than over SSH, but '''all''' of the tcsh problems still occur. '''Turned out that the newest version of bash on el5 systems is broken. Replacing it with an older version fixes the issue with bash. Tcsh still gives issues, but that appears to be unrelated.'''
** Everything else seems fine on these two machines: disk usage, other programs, network, etc.
** Since this is now not just a pumpkin issue, the problem probably isn't corrupt files, but maybe some update messed something up. Figuring out what update that is could be tough though since there was a huge chunk of updates at some point (although I suspect the problem might be bash since tcsh depends on it). However, einstein2 is pretty much a clean slate, so a simple if tedious method would be to apply updates to it gradually until the symptoms pop up.
** tcsh can only run built-in commands; anything else results in a "Broken pipe" and the program not running. "Broken pipe" appears even for non-existent programs (e.g. "hhhhhhhh" will make "Broken pipe" appear).
* '''Lentil''': Gotta reinstall a whole bunch of things and/or a new disk; looks like there was some damage from the power problem on Monday (the size 0 files have returned).
* '''jalapeno hangups:''' Look at sensors on jalapeno, so that cacti can monitor the temp. The crashing probably isn't the splunk beta (no longer beta!), since it runs entirely in userspace. '''lm_sensors fails to detect anything readable. Is there a way around this?''' Jalapeno's been on for two weeks with no problems, let's keep our fingers crossed&hellip; '''This system is too unstable to maintain, like tomato and old einstein.''' Got an e-mail today, saying it's got a degraded array. I just turned it off since it's just a crappy space heater at this point.
* Resize/clean up partitions as necessary. Seems to be a running trend that a computer gets 0 free space and problems crop up. '''This hasn't happened in half a year. I think it was a coincidence that a few computers had it happen at once.'''
* Put new drive in lentil, npg-daily-33. '''That's good, because 32 is almost full already. 81%!'''
* <b><font color="red">CLAMAV died and no one noticed!</font></b> The update of clamav (mail virus scanner on einstein) on April 23rd killed this mail subsystem because some of the options in /etc/clamd.conf were now obsolete. See http://www.sfr-fresh.com/unix/misc/clamav-0.93.tar.gz:a/clamav-0.93/NEWS. This seemed to have gone unnoticed for a while. Are we sleeping at the wheel? Edited /etc/clamd.conf to comment out these options.
* When I came in today (22nd), taro had kernel panicked and einstein was acting strangely. Checking root's email, I saw that all day the 21st and 22nd, there were SMTP errors, around 2 per minute. A quick glance at them gives me the impression that they're spam attempts, due to ridiculous FROM fields like <code>pedrofinancialcompany.inc.net@tiscali.dk</code>. I rebooted taro and einstein, everything seems fine now.
* Pauli crashes nearly every day, not when backups come around. We need to set up detailed system logging to find out why. Pauli2 and 4 don't give out their data via /net to the other paulis. This doesn't seem to be an autofs setting, since I see nothing about it in the working nodes' configs. Similarly, 2,4, and 6 won't access the other paulis via /net. 2,4 were nodes we rebuilt this summer, so it makes sense they don't have the right settings, but 6 is a mystery. Pauli2's hard drive may be dying. Some files in /data are inaccessible, and smartctl shows a large number of errors (98 if I'm reading this right...). Time to get Heisenberg a new hard drive? '''Or maybe just wean him off of NPG&hellip;''' It may be done for; can't connect to pauli2 and rebooting didn't seem to work. Need to set up the monitor/keyboard for it & check things out. '''The pauli nodes are all off for now. They've been deemed to produce more heat than they're worth. We'll leave them off until Heisenberg complains.''' Heisenberg's complaining now. Fixed his pauli machine by walking in the room (still don't know what he was talking about) and dirac had LDAP shut off. He wants the paulis up whenever possible, which I explained could be awhile because of the heat issues. ''' Pauli doesn't crash anymore, as far as I can tell. Switching the power supply seems to have done it.'''
* Pumpkin is now stable. Read more on the configuration at [[Pumpkin]] and [[Xen]].
* Roentgen was plugged into one of the non-battery-backup slots of its UPS, so I shut it down and moved the plug. After starting back up, root got a couple of mysterious e-mails about /dev/md0 and /dev/md2: <code>Array /dev/md2 has experienced event "DeviceDisappeared"</code>. However, <code>mount</code> seems to indicate that everything important is around:
 
<pre>
/dev/vg_roentgen/rhel3 on / type ext3 (rw,acl)
none on /proc type proc (rw)
none on /dev/pts type devpts (rw,gid=5,mode=620)
usbdevfs on /proc/bus/usb type usbdevfs (rw)
/dev/md1 on /boot type ext3 (rw)
none on /dev/shm type tmpfs (rw)
/dev/vg_roentgen/rhel3_var on /var type ext3 (rw)
/dev/vg_roentgen/wheel on /wheel type ext3 (rw,acl)
/dev/vg_roentgen/srv on /srv type ext3 (rw,acl)
/dev/vg_roentgen/dropbox on /var/www/dropbox type ext3 (rw)
/usr/share/ssl on /etc/ssl type none (rw,bind)
/proc on /var/lib/bind/proc type none (rw,bind)
automount(pid1503) on /net type autofs (rw,fd=5,pgrp=1503,minproto=2,maxproto=4)
</pre>and all of the sites listed on [[Web Servers]] work. Were those just old arrays that aren't around anymore but are still listed in some config file? '''We haven't seen any issues, and roentgen's going to be virtualized in the not-too-distant future, so this is fairly irrelevant.'''
 
* Gourd's been giving smartd errors, namely:
<pre>
Offline uncorrectable sectors detected:
       /dev/sda [3ware_disk_00] - 48 Time(s)
       1 offline uncorrectable sectors detected
</pre>
Okra also has an offline uncorrectable sector! '''No sign of problems since this was posted.'''
 
 
 
== Previous Months Completed ==
 
[[Completed in June 2007|June 2007]]

[[Completed in July 2007|July 2007]]

[[Completed in August 2007|August 2007]]

[[Completed in September 2007|September 2007]]

[[Completed in October 2007|October 2007]]

[[Completed in November/December 2007|NovDec 2007]]

[[Completed in January 2008|January 2008]]

[[Completed in February 2008|February 2008]]
 
