Difference between revisions of "Sysadmin Todo List"

From Nuclear Physics Group Documentation Pages
 
(737 intermediate revisions by 7 users not shown)
== General Info ==
This is the new Sysadmin Todo List as of 05/27/2010. The previous list was moved to [[Old Sysadmin Todo List]]. This list is incomplete and needs updating.
This is an unordered set of tasks.  Detailed information on any of the tasks typically goes in related topics' pages, although usually not until the task has been filed under [[Sysadmin Todo List#Completed|Completed]].
 
== Important ==
 
=== Einstein Upgrade ===
 
# Pick a date within the next week
# Send an e-mail to Aaron, warning him of the future takedown of tomato
# Update tomato to RHEL5
# Check all services einstein currently provides.  Locate as many custom scripts, etc. as is reasonable and label/copy them.
# Switch einstein <-> tomato, and then upgrade what was originally einstein
# Look into making an einstein/tomato failsafe setup.
 
=== Miscellaneous ===
 
* Backup stuff: We need exclude filters on the backups. We need to plan and execute extensive tests before modifying the production backup program. Also, see if we can implement some sort of NFS user access. '''I've set up both filters and read-only snapshot access to backups at home. Uses what essentially amounts to a bash script version of the fancy perl thing we use now, only far less sophisticated. However, the filtering and user access uses a standard rsync exclude file (syntax in man page) and the user access is fairly obvious NFS read-only hosting.'''
 
* Resize partitions on symanzik, bohr, tomato, roentgen, and other machines as necessary so that root has at least a gig of unused space. '''Can't do roentgen. Has a weird layout, don't want to mess anything up.'''
 
* Learn how to use [[cacti]] on okra. Seems like a nice tool, mostly set up for us already. '''Find out why lentil and okra (and tomato?) aren't being read by [[cacti]]. Could be related to the warnings that repeat in ''okra:/var/www/cacti/log/cacti.log''.''' Not related to the warnings; those are for other machines that are otherwise being monitored.  <font color="blue">Try adding cacti to the exclude list in access.conf</font>  Never mind, lentil doesn't have any restrictions.  Need to find out the requirements for a machine to be monitored by cacti/rrdtool.  The documentation makes it sound like only the cacti host needs any configuration, but I'm dubious.
 
* Learn how to set up [[evolution]] fully so we can support users. Need LDAP address book.  '''What schema does our LDAP setup support?  Evolution uses "evolutionPerson"; apparently it doesn't work without using that schema for describing people.  Schemas can be combined: [http://cweiske.de/tagebuch/LDAP%20addressbook.htm] Typing the name of someone evolution is aware of (that is, someone you've been in communication with before) allows address-book-like features. Close, but not quite what we're looking for.'''
 
  
== Projects ==

=== Documentation ===

* '''<font color="red" size="+1">Maintain the Documentation of all systems!</font>'''
** Main function
** Hardware
** OS
** Network
* Convert physical machines and VMs to CentOS 6 for the compute servers ([[taro]], [[endeavour]]) and all others to either 6 or 7.
** VMs: Einstein
** Physical: [[endeavour]], [[taro]], and [[gourd]]
* Mailman: Clean up mailman and make sure all the groups and users are in order.
* CUPS: Look into getting CUPS to authenticate users through LDAP instead of using Samba.
* Printer: Get printtracker.py working and see if you can get a driver to properly report the page count instead of always returning 1, which corresponds to a job submission rather than the number of pages.
* Check the /etc/apcupsd/shutdown2 script on Gourd to make sure all the keys are correctly implemented so the machines go down properly during a power outage.
* Do a check on Lentil to see if there is any unnecessary data being backed up.
* Send an e-mail to Aaron, warning him of the future takedown of tomato.
* Continue homogenizing the configurations of the machines.
* Improve documentation of [[Software Issues#Mail Chain Dependencies|mail software]], specifically SpamAssassin, Cyrus, etc.

== Ongoing ==
 
=== Maintenance ===
 
* Check e-mails to root every morning
 
* Keep an eye on jalapeno.  Make sure that the changes to the access rules haven't screwed anything up. '''jalapeno crashed at about 3AM on Sunday.  No peculiar logins or any weird events listed by splunk, just the typical tons of connections from okra every five minutes. Cacti doesn't show any unusual CPU, memory, or traffic near that time, either.'''  Nothing out-of-the-ordinary in the logs, etc.  Probably "just" a crash?
 
  
=== Cleaning ===

* Test unknown equipment:
** UPS

* Printer in 323 is '''not''' hooked up to a dead network port. Actually managed to ping it. One person reportedly got it to print, nobody else has, and even that user hasn't been able to since. Is this printer dead? We need to find out. '''Matt votes it's dead.'''
 
=== On-the-Side ===
 
* Certain settings are similar or identical across all machines.  It would be beneficial to write a program to do remote configuration.  This would also simplify the process of adding/upgrading machines.
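A hypothetical sketch of such a remote-configuration helper; the machine list and the use of root@ are assumptions, not our actual setup:

```shell
# Hypothetical config-push helper; MACHINES and root@ are assumptions.
MACHINES="gourd taro endeavour"

push_config() {
    local file=$1 host
    for host in $MACHINES; do
        # -p preserves mode/timestamps; destination path mirrors the source
        scp -p "$file" "root@$host:$file" || echo "push to $host failed" >&2
    done
}

# Example: push_config /etc/security/access.conf
```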
 
* Matt's learning a bit of Perl so we can figure out exactly how the backup works, as well as create more programs in the future, specifically thinking of monitoring. '''Look into the CPAN modules under ''Net::'', etc.''' I just found out that it's actually very easy to use ssh to log onto a machine and execute a command rather than a login shell: <code>ssh who@a.b.c cmd</code>.  For example, I made a bash function called "whoson" that tells me who's on the machine that I pass to it:<code>whoson roentgen</code> will prompt me for my password, then log on, run ''w'', and display the output.
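The "whoson" function described above amounts to a one-liner; this minimal version assumes your local username is valid on the remote machine:

```shell
# Minimal "whoson": run `w` on a remote host instead of opening a login shell.
whoson() {
    ssh "$USER@$1" w
}

# Usage: whoson roentgen
```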
 
* Eventually one day come up with a plan to deal with pauli2's kernel issue.  Probably going to do a reinstall soon, just won't touch the data partition.  Need to figure out tomato tunnel setup for 2 and 4
 
* Our newly-made loaner machine Hobo has FC7 installed now. Needs LDAP configuration; I can't seem to get it to download from einstein as usual. Steve, you're going to need to show me how it's done. Seems to be fairly responsive, but it could use some work to make it run faster. At least in the process we'll learn more about the structure of redhat-esque systems.  '''Probably can't do much since it's probably not registered.'''
 
  
== Waiting ==
* Sent registration request for hobo.unh.edu. Waiting on confirmation.
 
* That guy's computer has a BIOS checksum error. Flashing the BIOS to the newest version succeeds, but doesn't fix the problem. No obvious mobo damage either. What happened?  '''Who was that guy, anyhow?'''  The machine is gluon, according to him. '''Waiting on ASUS tech support for warranty info.'''  Aaron said it might be power-supply-related. '''Nope. Definitely not. Used a known good PSU and still got the error; reflashed the BIOS with it and still got the error.'''
 
* Steve's pay.  '''Supposedly going to be remedied in the next paycheck.'''
 
* Printer queue for Copier: Konica Minolta Bizhub 750. IP=pita.unh.edu  '''Seems like we need info from the Konica guy to get it set up on Red Hat and OS X.  The installation documentation for the driver doesn't mention things like the passcode, because those are machine-specific.  Katie says that if he doesn't come on Monday, she'll make an inquiry.''' <font color="green">Mac OS X now working,  IT guy should be here week of June 26th</font> '''Did he ever come?'''
 
  
== Completed ==
* I wonder if we can go to the demerrit remains and get some drop ceiling panels or similar sound-absorbing material and put it in the server room.... The hum isn't unbearable, but I'm sick of saying "what?" all the time. '''Maurik is considering setting up a workstation room, separate from the farm room.''' Maybe we can find some egg cartons to duct tape to the walls as a short-term fix. '''Duct-taped cardboard to the walls. Seems to have cut some of the high frequency noise. Could be placebo though.'''

* <font size="+1" color="red">OOPS, crashed tomato. Please reboot by button. </font> (Maurik: 7/12/07 @ 10pm)  '''Rebooted. How'd that happen?'''

* Investigate printing from lepton (in 305) to myriad. Got word from John Calarco that it doesn't work. '''Apparently his root partition ran out of space. How many machines does that make now? Fixed by deleting his FC2 logical drive and expanding his FC3 to take its place.'''

* Test LDAP authentication on farm and general machines. We should create a number of users, each with different group settings, in order to narrow down what groups are required to access what. Seems less error-prone than using one user and modifying the settings over and over. <font color="green">See the [[LDAP]] doc, write answers there.</font> '''Made a user named "Joe Delete" that is only in his own group, and he can log into einstein, okra, lentil, tomato, gourd, ennui, and blackbody.'''

* Finally completed backup consolidation! No more amanda backups are left. Lentil presently has npg-daily-28 in use, 29 ready, the 500GB drive waiting until the jalapeno problems subside (in case we need to rebuild jalapeno), and an empty slot.

* The log level for nmbd on einstein was set to 7. WOA, that is a lot of junk only useful for expert debugging. Please, please, pretty please, don't leave log levels so high and then leave. <font color="red">How do you even set log levels?</font>  See the [[samba]] page.

* Add a "favicon" to some areas of the web site, so that we log fewer errors, for one.

* The snmpd daemon on einstein was very verbose, generating 600 messages per hour, all access from Okra. I changed the default options in /etc/sysconfig/snmpd.options from # OPTIONS="-Lsd -Lf /dev/null -p /var/run/snmpd.pid -a" to OPTIONS="-LS 0-4 d -Lf /dev/null -p /var/run/snmpd.pid -a". Now snmpd is QUIET! We could consider making it slightly more verbose? (This was discovered with splunk.)

* '''SPLUNK''' is now set up on Jalapeno. It is combing the logs of einstein and roentgen and its own logs. See [[Splunk]].

* Checked that the backups are actually happening and working - they are.

* Renewed XML books for Amrita. They're due at the end of the month.

* Fixed the amandabackup.sh script for consolidating amanda-style backups.

* Investigate the change in ennui's host key. '''Almost certainly caused by one of the updates. Just remembered that I was using ennui for a few minutes and I saw the "updates ready!" icon in the corner and habitually clicked it. Darn ubuntu habits. Doesn't explain WHY it changed, only how.''' It probably wasn't an individual update, but almost certainly was the transition from Fedora 5 to 7.  ennui isn't a very popular machine to SSH into, so the change probably just went unnoticed for the two-or-so weeks since the upgrade.  I had earlier thought that it couldn't have been the OS change, since it had been a while since the change, but upon further thought, it makes perfect sense.
 
* Look into getting computers to pull scheduled updates from rhn when they check in. '''See [[RHN|updates through RHN]]'''
 
* Look into monitoring RAID, disk usage, etc. '''[[SMARTD]], etc.'''
 
* Removed Aaron from "who should get root's mail?" part of einstein's /etc/aliases file. Now he won't get einstein's email all the time. Replaced him with "root@einstein.unh.edu", since both of us check that now.
 
* Added Matt and Steve to the ACL for the backups shared mail folder. It was pretty simple with ''cyradm''.
 
* Karpiusp remembered his password, no change necessary.
 
* Need to get onto the "backups" shared folder, as well as be added as members to the lists.  '''"backups" wasn't even a mailing list, according to the Mailman interface.'''  Added Steve and Matt to the CC list for <i>/etc/cron.daily/rsync_backup</i>'s <code>mail</code> command. If the message gets sent to us, then we'll know something's wrong with the list. If we don't get it, then the problem is probably in the script. '''E-mails were received, so there's something up with the mailing list.''' Yup. Checking the mailing list archives shows no messages on the list. '''Figured out how to do shared folders with ''cyradm''.'''
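The exact cyradm commands weren't recorded; a plausible session (the mailbox name and rights string are assumptions, not our actual ACL) would look something like:

```text
$ cyradm --user cyrus localhost
localhost> lam user.backups               # list the current ACL
localhost> sam user.backups matt lrs      # grant lookup/read/seen rights
localhost> sam user.backups steve lrs
```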
 
* Nobody is currently reading the mail that is sent to "root". Einstein had 3000+ unread messages. I deleted almost all. There are some useful messages that are sent to root with diagnostics in them; we should find a solution for this. '''Temporarily, both Matt and Steve have email clients set up to access root's account.  The largest chunk of the e-mails concern updating ClamAV.  Maybe we should just do that?''' <font color="red">Doing this caused some ''major'' mail problems.  It's a punishment for 1) Typing a command in the wrong terminal, 2) Not thoroughly understanding the configuration and importance of a component before updating it, and 3) Not restarting the program after updating it.</font>
 
* Updated SpamAssassin and ran <code>sa-update</code> to get new rules.  '''The SA documentation seems to indicate that having ''procmail'' send mail is the typical scenario.  However, ''procmail'' isn't mentioned in the appropriate Postfix configuration file[http://wiki.apache.org/spamassassin/UsedViaProcmail].  ''procmail'' and ''postfix'' are installed, though.  Do we have a special mail setup?'''  It seems like ''[[postfix]]'' is what does it. '''File this confusion under "improve mail chain documentation" rather than clutter up the list'''
 
* okra was the only machine that jdelete could log into that also had restrictions in its ''/etc/security/access.conf'', so I commented out the old setting, then copied the content of another machine's file to okra's.
 
* jalapeno was mysteriously unreachable when I came in this morning (7/9/2007).  The cacti graphs show it going down sometime mid-Saturday.  The logs show several authentication failures beforehand... <font color="green">On Saturday, that looks like a failed breakin attempt. A repeat on Monday around 2pm. It is not clear why Jalapeno is being targeted. Check /var/log/secure. Note, I was on 7/9/2007, between 9:30am and 11:30am. I was trying to figure out how farm access is controlled. Specifically, I wanted to deny access to jalapeno to non-admin users.</font> Seems this worked ('''<font color="red">How did you do it?</font>''' '''<font color="blue">Looks like it's access.conf</font>'''), but perhaps it had an unintended side effect. We need more documentation on the [[Cacti]] page. I've not used it, so I do not know what access it needs to jalapeno. Perhaps it does not need monitoring by cacti. '''Cacti is still monitoring jalapeno.'''
 
* Figure out what to do about the mass Samba login attempts.  Since Maurik turned it off, does that mean that we don't really use it for anything important? <font color="green">Samba is still running on einstein. It is more important there. Roentgen samba access was for web stuff and now no longer needed.</font> The system causing access problems was supposedly rebooted. Also samba (einstein and roentgen) is set to be non-verbose into syslog.
 
* Test pauli4's network card by booting with a livecd. onboard works, e1000 doesn't.  Still isn't working even after specifying the onboard port in ifcfg-eth0. Installing Fedora 7 got the onboard fixed, but I don't know how to interface it with the tomato tunnel
 
* blackbody's behaving oddly.  It was hung when I came in, and a couple of services failed to start up when I restarted it.  Had to restart again, because the "greeter application" (graphical login screen) crashed instantly over and over, and when logging in via terminal, my home directory didn't mount.  Now it's mostly working, but a bunch of GNOME desktop apps crashed upon my logging in. '''Getting segmentation faults with random programs, including yum and pirut.''' Nothing jumped out at me in the logs for updates, etc.  Going to do a memtest on blackbody, just in case.  '''Neither the Fedora 7 nor Backtrack discs have memtest on them, and we don't have any blank CDs, so never mind for now.  After further investigation, it probably isn't memory anyhow, since it's consistently the same set of programs that are segfaulting/crashing.''' I do think that the ubuntu disc that was in gluon when we got it has memtest. I put it on the stack of discs where we keep the rest of them. I agree though, it probably isn't the memory.  '''I think I've figured it out.  The "installonlyn" plugin is a Python script, and the two programs that fail on startup are written in Python.  Conclusion: something's wrong with the Python installation.''' Yum relies on some python stuff, which makes it quite difficult to reinstall python. Also, yum doesn't actually HAVE a reinstall function. Forcing install seems to do nothing. Reinstallation of fedora should do it... '''Reinstalled, and am not running Python updates for a while.'''
 
* Figure out what network devices on tomato are doing what '''Guess we're waiting for Aaron on this one.  He needs to do something soon, because while I'm sure most of these extra devices aren't important to NPG, Xemed and the Paulis probably use them somehow, and we need to know what the deal is before installing RHEL5.'''  It's do or die for Xemed.
 
  
== Previous Months Completed ==
[[Completed in June 2007|June 2007]]

<!--{| cellpadding="5" cellspacing="0" border="1"
! Time of Year !! Things to Do !! Misc.
|-
| Summer Break || ||
|-
|  || Major Kernel Upgrades ||
|-
|  || Run FDisk ||
|-
|  || Clean (Dust-off/Filters) while Systems are Shut down ||
|-
| Thanksgiving Break || ||
|-
| Winter Break || ||
|-
|  || Upgrade RAID disks || Upgrade only disks connected to a RAID card
|-
| Spring Break || ||
|-
|} -->

Latest revision as of 16:42, 15 February 2015

This is the new Sysadmin Todo List as of 05/27/2010. The previous list was moved to Old Sysadmin Todo List. This list is incomplete and needs updating.

Projects

  • Convert physical and VMs to CentOS 6 for compute servers (taro,endeavour) and all others to either 6 or 7.
  • Mailman: Clean up mailman and make sure all the groups and users are in order.
  • CUPS: Look into getting CUPS to authenticate users through LDAP instead of using Samba.
  • Printer: Get printtracker.py working and see if you can get a driver to properly report the page count instead of always returning 1, which corresponds to a job submission rather than the number of pages.
  • Check the /etc/apcupsd/shutdown2 script on Gourd to make sure all the keys are correctly implemented so the machines go down properly during a power outage.
  • Do a check on Lentil to see if there is any unnecessary data being backed up.

Daily Tasks

These are things that should be done every day when you come into work.

  1. Do a physical walk-through/visual inspection of the server room
  2. Verify that all systems are running and all necessary services are functioning properly
    • For a quick look at which systems are up you can use /usr/local/bin/serversup.py
    • Gourd: Make sure that home folders are accessible, all virtual machines are running
    • Einstein: Make sure that LDAP and all e-mail services (dovecot, spamassassin, postfix, mailman) are running
    • Roentgen: Make sure website/MySQL are available
    • Jalapeno: Named and Cups
    • Lentil: Verify that backups ran successfully overnight. Check space on backup drives, and add new drives as needed.
  3. Check Splunk: open https://pumpkin.farm.physics.unh.edu:8000 if you're in the server room, or open localhost:8000 (use https) from Pumpkin
    • Check logs for errors, keep an eye out for other irregularities.
  4. Check Cacti: http://roentgen.unh.edu/cacti
    • Verify that temperatures are acceptable.
    • Monitor other graphs/indicators for any unusual activity.
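serversup.py itself isn't documented here; a rough stand-in for the "which systems are up" check (the host list and the ssh-port probe are assumptions) might look like:

```shell
# Hypothetical "servers up" check in the spirit of serversup.py:
# succeed if a TCP connection to the host's ssh port (22) opens.
is_up() {
    local host=$1 port=${2:-22}
    # bash's /dev/tcp pseudo-device attempts a TCP connect; cap it at 3s
    timeout 3 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null
}

# Host list is illustrative, not our real inventory
for host in gourd einstein roentgen jalapeno lentil; do
    if is_up "$host"; then echo "$host UP"; else echo "$host DOWN"; fi
done
```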

Weekly Tasks

These are things that should be done once every 7 days or so.

  1. Check physical interface connections
    • Verify that all devices are connected appropriately, that cables are labeled properly, and that all devices (including RAID and IPMI cards) are accessible on the network.
  2. Check Areca RAID interfaces
    • The RAID interfaces on each machine are configured to send e-mail to the administrators if an error occurs. It may still be a good idea to log in and check them manually on occasion as well, just for the sake of caution.
  3. Clean up the server room, sweep the floors.

Monthly Tasks

  1. Perform scheduled maintenance on the server room air conditioning units.
  2. Check S.M.A.R.T. information on all server hard drives
    • Make a record of any drives which are reporting errors or nearing failure.
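The S.M.A.R.T. check itself can be done with smartmontools; device names vary per machine, so the paths below are examples only:

```text
smartctl -H /dev/sda    # overall health self-assessment (PASSED/FAILED)
smartctl -A /dev/sda    # attribute table: watch Reallocated_Sector_Ct,
                        # Current_Pending_Sector, and the drive temperature
```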

Annual Tasks

These are tasks that are necessary but not critical, or that might require some amount of downtime. These should be done during semester breaks (probably mostly in the summer) when we're likely to have more time, and when downtime won't have as detrimental an impact on users.

  1. Server software upgrades
    • Kernel updates, or updates for any software related to critical services, should only be performed during breaks to minimize the inconvenience caused by reboots, or unexpected problems and downtime.
  2. Run fsck on data volumes
  3. Clean/Dust out systems
  4. Rotate old disks out of RAID arrays
  5. Take an inventory of our server room / computing equipment
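For the fsck task above, remember that the volume must be unmounted (or the machine booted into rescue mode) before checking; the device and mount point below are examples, not our actual layout:

```text
umount /data
fsck -f /dev/mapper/vg0-data
mount /data
```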