Sysadmin Todo List

From Nuclear Physics Group Documentation Pages
== General Info ==
This is the new Sysadmin Todo List as of 05/27/2010. The previous list was moved to [[Old Sysadmin Todo List]]. This list is incomplete and needs updating.
This is an unordered set of tasks.  Detailed information on any of the tasks typically goes in related topics' pages, although usually not until the task has been filed under [[Sysadmin Todo List#Completed|Completed]].
 
== Important ==
 
* Investigate printing from lepton (in 305) to myriad. Got word from John Calarco that it doesn't work.
 
* Test pauli4's network card by booting with a livecd.  '''onboard works, e1000 doesn't.'''
 
* Backup stuff: We need exclude filters on the backups. We need to plan and execute extensive tests before modifying the production backup program. Also, see if we can implement some sort of NFS user access. '''I've set up both filters and read-only snapshot access to backups at home. Uses what essentially amounts to a bash script version of the fancy perl thing we use now, only far less sophisticated. However, the filtering and user access uses a standard rsync exclude file (syntax in man page) and the user access is fairly obvious NFS read-only hosting.'''
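A minimal sketch of the filtered-backup-plus-read-only-export idea described above (the exclude file path, the patterns, and the export line are illustrative, not our actual configuration):
<pre>
# /root/backup-excludes.txt -- one rsync exclude pattern per line (syntax: "FILTER RULES" in man rsync)
#   *.tmp
#   /scratch/
#   lost+found/

# Pull a filtered copy of /home into the backup area
rsync -a --delete --exclude-from=/root/backup-excludes.txt /home/ /backups/home/

# /etc/exports on the backup host -- read-only NFS so users can browse their own snapshots
#   /backups/home   *.unh.edu(ro,root_squash)
</pre>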
 
* Improve documentation of [[Software Issues#Mail Chain Dependencies|mail software]], specifically SpamAssassin, Cyrus, etc.
 
* Resize partitions on symanzik, bohr, tomato, roentgen, and other machines as necessary so that root has at least a gig of unused space. '''Can't do roentgen. Has a weird layout, don't want to mess anything up.'''
 
* Learn how to use [[cacti]] on okra. Seems like a nice tool, mostly set up for us already. '''Find out why lentil and okra (and tomato?) aren't being read by [[cacti]]. Could be related to the warnings that repeat in ''okra:/var/www/cacti/log/cacti.log''.''' Not related to the warnings; those are for other machines that are otherwise being monitored.  <font color="blue">Try adding cacti to the exclude list in access.conf.</font>  Never mind, lentil doesn't have any restrictions.
 
* Learn how to set up [[evolution]] fully so we can support users. Need LDAP address book.  '''What schema does our LDAP setup support?  Evolution uses "evolutionPerson"; apparently it doesn't work without using that schema for describing people.  Schemas can be combined: [http://cweiske.de/tagebuch/LDAP%20addressbook.htm] Typing the name of someone evolution is aware of (that is, someone you've been in communication with before) enables address-book-like features. Close, but not quite what we're looking for.'''
 
* Figure out what to do about the mass Samba login attempts.  Since Maurik turned it off, does that mean that we don't really use it for anything important? <font color="green">Samba is still running on einstein. It is more important there. Roentgen samba access was for web stuff and is now no longer needed.</font> The system causing the access problems was supposedly rebooted. Also, samba (einstein and roentgen) is now set to log to syslog non-verbosely; a sketch of the relevant smb.conf settings is below.
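For reference, quieting Samba's logging comes down to a couple of <code>[global]</code> settings in smb.conf (the values here are illustrative; see the [[samba]] page for what is actually deployed):
<pre>
[global]
   # keep almost everything out of syslog; keep a modest level in Samba's own logs
   syslog = 0
   log level = 1
</pre>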
 
  
== Projects ==

*Convert physical machines and VMs to CentOS 6 for the compute servers ([[taro]], [[endeavour]]) and all others to either 6 or 7.
**VMs: Einstein
**Physical: [[endeavour]], [[taro]], and [[gourd]]
*Mailman: Clean up mailman and make sure all the groups and users are in order.
*CUPS: Look into getting CUPS to authenticate users through LDAP instead of using Samba.
*Printer: Get printtracker.py working, and see if you can get a driver that properly reports the page count instead of always reporting 1, which counts job submissions rather than pages.
*Check the /etc/apcupsd/shutdown2 script on Gourd to make sure all the keys are correctly implemented so the machines go down properly during a power outage.
*Do a check on Lentil to see if there is any unnecessary data being backed up.

== Ongoing ==

* '''<font color="red" size="+1">Maintain the Documentation of all systems!</font>'''
** Main function
** Hardware
** OS
** Network

* Check e-mails to root every morning

* Clean up 202
** Figure out what's worth keeping
** Figure out what doesn't belong here
 
* Take a look at spamassassin - improve its performance if possible.
 
* Test unknown equipment:
 
** UPS
 
* Printer in 323 is '''not''' hooked up to a dead network port. Actually managed to ping it. One person reportedly got it to print, nobody else has, and even that one user hasn't been able to since. Is this printer dead? We need to find out.
 
* Look into making a centralized interface to monitor/maintain all the machines at once. '''Along the same lines: Continue homogenizing the configurations of the machines.'''
 
* Figure out why jalapeno doesn't have the 3dm software running.  If we find that there's no good reason, maybe we should install it?
 
* Certain settings are similar or identical across all machines.  It would be beneficial to write a program to do remote configuration.  This would also simplify the process of adding/upgrading machines.
 
* Update Tomato to RHEL5 and check all services einstein currently provides. Then switch einstein <-> tomato, and then upgrade what was originally einstein. Look into making an einstein, tomato failsafe setup.  '''A good preliminary step would be to find all of the custom scripts on einstein.  If they don't have "npg" in their filenames already, it should be added if possible, so that they can all be easily <code>locate</code>d.''' Maybe something other than just "npg", because there seems to be a lot of cruft with that label.
 
* Matt's learning a bit of Perl so we can figure out exactly how the backup works, as well as create more programs in the future, specifically thinking of monitoring. '''Look into the CPAN modules under ''Net::'', etc.''' I just found out that it's actually very easy to use ssh to log onto a machine and execute a command rather than a login shell: <code>ssh who@a.b.c cmd</code>.  For example, I made a bash function called "whoson" that tells me who's on the machine that I pass to it:<code>whoson roentgen</code> will prompt me for my password, then log on, run ''w'', and display the output.
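For the record, the ''whoson'' function amounts to something like this (a from-memory sketch, not a copy of the real one):
<pre>
# e.g. in ~/.bashrc -- run "w" on a remote host instead of opening a login shell
whoson () {
    ssh "${USER}@${1}" w
}
# usage: whoson roentgen
</pre>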
 
* Keep an eye on jalapeno.  Make sure that the changes to the access rules haven't screwed anything up.
 
  
== Waiting ==
* That guy's computer has a BIOS checksum error. Flashing the BIOS to the newest version succeeds, but doesn't fix the problem. No obvious mobo damage either. What happened?  '''Who was that guy, anyhow?'''  The machine is gluon, according to him. '''Waiting on ASUS tech support for warranty info.'''  Aaron said it might be power-supply-related. '''Nope. Definitely not. Used a known good PSU and still got the error; reflashed the BIOS with it and still got the error.'''
 
* Steve's pay.  '''Supposedly going to be remedied in the next paycheck.'''
 
* Printer queue for Copier: Konica Minolta Bizhub 750. IP=pita.unh.edu  '''Seems like we need info from the Konica guy to get it set up on Red Hat and OS X.  The installation documentation for the driver doesn't mention things like the passcode, because those are machine-specific.  Katie says that if he doesn't come on Monday, she'll make an inquiry.''' <font color="green">Mac OS X now working,  IT guy should be here week of June 26th</font> '''Did he ever come?'''
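Once the driver details arrive, adding the queue on a Red Hat box should look roughly like this (the queue name, device URI, and PPD path are guesses; the machine-specific passcode still has to come from the Konica tech):
<pre>
# Create a CUPS queue pointing at the copier, then verify it shows up
lpadmin -p bizhub750 -E -v socket://pita.unh.edu:9100 -P /path/to/KonicaMinolta750.ppd
lpstat -p bizhub750
</pre>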
 
* Figure out what network devices on tomato are doing what. '''Guess we're waiting for Aaron on this one.  He needs to do something soon, because while I'm sure most of these extra devices aren't important to NPG, Xemed and the Paulis probably use them somehow, and we need to know what the deal is before installing RHEL5.'''

* Eventually one day come up with a plan to deal with pauli2's kernel issue. '''Waiting on heisenberg to let us know about the setup of these machines before reinstalling.'''  Apparently they have no special software, just data.
 
  
== Completed ==
* From roentgen's logwatch, Wed Jul 11 04:02:19 2007:
<pre> --------------------- Kernel Begin ------------------------

 WARNING:  Kernel Errors Present
    10.10.88.111 sent an invalid ICMP type 3, code 3 error to a broadcast:  ...:  1473 Time(s)

---------------------- Kernel End ------------------------- </pre>
That's ICMP for destination unreachable, port unreachable [http://www.iana.org/assignments/icmp-parameters].  Related to the samba stuff, hopefully?  '''Still getting the message, only with more Time(s).'''  Found more details through splunk: "roentgen kernel: 10.10.88.111 sent an invalid ICMP type 3, code 3 error to a broadcast: 132.177.91.255 on eth0".  <code>whois</code> says that 132.177.91.255 is a NIC in Kingsbury.  Looking at the splunk graph for this error, it seems to have happened almost entirely over the course of Wednesday; Thursday only gets 1-5 per hour, while late Tuesday and all of Wednesday got ~200 per hour. <br>
'''Partially Solved''': The node is 10.10.88.111, which would be a system on the Xemed end. I simply block this system from getting through the tunnel on tomato, using iptables (/etc/sysconfig/iptables_npg) with the entry "-A INPUT -s 10.10.88.111 -j REJECT". This rejects that node silently. We may need to add such a line on einstein or roentgen, but I think it should now be stopped at the tunnel level.

* Test LDAP authentication on farm and general machines. We should create a number of users, each with different group settings, in order to narrow down which groups are required to access what. Seems less error-prone than using one user and modifying the settings over and over. <font color="green">See the [[LDAP]] doc, write answers there.</font> '''Made a user named "Joe Delete" that is only in his own group, and he can log into einstein, okra, lentil, tomato, gourd, ennui, and blackbody.'''

* Finally completed backup consolidation! No more amanda backups are left. Lentil presently has npg-daily-28 in use, 29 ready, the 500 GB drive waiting until the jalapeno problems subside (in case we need to rebuild jalapeno), and an empty slot.

* The log level for nmbd on einstein was set to 7. WHOA, that is a lot of junk only useful for expert debugging. Please, please, pretty please, don't leave log levels so high and then leave. <font color="red">How do you even set log levels?</font>  See the [[samba]] page.

* Add a "favicon" to some areas of the web, so that we log fewer errors, for one.

* The snmpd daemon on einstein was very verbose, generating 600 messages per hour, all access from okra. I changed the default options in /etc/sysconfig/snmpd.options from <code># OPTIONS="-Lsd -Lf /dev/null -p /var/run/snmpd.pid -a"</code> to <code>OPTIONS="-LS 0-4 d -Lf /dev/null -p /var/run/snmpd.pid -a"</code>. Now snmpd is QUIET! We could consider something slightly more verbose. (This was discovered with splunk.)

*'''SPLUNK''' is now set up on Jalapeno. It is combing the logs of einstein and roentgen and its own logs. See [[Splunk]].

* Checked if the backups are actually happening and working - they are.

* Renewed XML books for Amrita. They're due at the end of the month.
 
* Fixed the amandabackup.sh script for consolidating amanda-style backups.
 
* Investigate the change in ennui's host key. '''Almost certainly caused by one of the updates. Just remembered that I was using ennui for a few minutes and I saw the "updates ready!" icon in the corner and habitually clicked it. Darn ubuntu habits. Doesn't explain WHY it changed, only how.''' It probably wasn't an individual update, but almost certainly was the transition from Fedora 5 to 7.  ennui isn't a very popular machine to SSH into, so the change probably just went unnoticed for the two-or-so weeks since the upgrade.  I had earlier thought that it couldn't have been the OS change, since it had been a while since the change, but upon further thought, it makes perfect sense.
 
* Look into getting computers to pull scheduled updates from rhn when they check in. '''See [[RHN|updates through RHN]]'''
 
* Look into monitoring RAID, disk usage, etc. '''[[SMARTD]], etc.'''
 
* Removed Aaron from "who should get root's mail?" part of einstein's /etc/aliases file. Now he won't get einstein's email all the time. Replaced him with "root@einstein.unh.edu", since both of us check that now.
 
* Added Matt and Steve to the ACL for the backups shared mail folder.  It was pretty simple with ''cyradm''.
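From memory, the ''cyradm'' commands were along these lines (the folder name and rights string are shown for illustration):
<pre>
cyradm --user cyrus localhost
localhost> setaclmailbox backups matt lrs
localhost> setaclmailbox backups steve lrs
localhost> listaclmailbox backups
</pre>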
 
* Karpiusp remembered his password, no change necessary.
 
* Need to get onto the "backups" shared folder, as well as be added as members to the lists.  '''"backups" wasn't even a mailing list, according to the Mailman interface.'''  Added Steve and Matt to the CC list for <i>/etc/cron.daily/rsync_backup</i>'s <code>mail</code> command.  If the message gets sent to us, then we'll know something's wrong with the list.  If we don't get it, then the problem is probably in the script. '''E-mails were received, so there's something up with the mailing list.''' Yup. Checking the mailing list archives shows no messages on the list. '''Figured out how to do shared folders with ''cyradm''.'''
 
* Nobody is currently reading the mail that is sent to "root". Einstein had 3000+ unread messages. I deleted almost all of them. There are some useful messages with diagnostics in them that are sent to root; we should find a solution for this. '''Temporarily, both Matt and Steve have email clients set up to access root's account.  The largest chunk of the e-mails concern updating ClamAV.  Maybe we should just do that?''' <font color="red">Doing this caused some ''major'' mail problems. It's a punishment for 1) typing a command in the wrong terminal, 2) not thoroughly understanding the configuration and importance of a component before updating it, and 3) not restarting the program after updating it.</font>
 
* Updated SpamAssassin and ran <code>sa-update</code> to get new rules.  '''The SA documentation seems to indicate that having ''procmail'' send mail is the typical scenario.  However, ''procmail'' isn't mentioned in the appropriate Postfix configuration file[http://wiki.apache.org/spamassassin/UsedViaProcmail]. ''procmail'' and ''postfix'' are installed, though. Do we have a special mail setup?'''  It seems like ''[[postfix]]'' is what does it. '''File this confusion under "improve mail chain documentation" rather than clutter up the list'''
 
* okra was the only machine that jdelete could log into that also had restrictions in its ''/etc/security/access.conf'', so I commented out the old setting, then copied the contents of another machine's file to okra's.
 
* jalapeno was mysteriously unreachable when I came in this morning (7/9/2007).  The cacti graphs show it going down sometime mid-Saturday.  The logs show several authentication failures beforehand... <font color="green">On Saturday, that looks like a failed break-in attempt. A repeat on Monday around 2pm. It is not clear why Jalapeno is being targeted. Check /var/log/secure. Note, I was on 7/9/2007, between 9:30am and 11:30am. I was trying to figure out how farm access is controlled. Specifically, I wanted to deny access to jalapeno to non-admin users.</font> Seems this worked ('''<font color="red">How did you do it?</font>''' '''<font color="blue">Looks like it's access.conf</font>'''), but perhaps it had an unintended side effect. We need more documentation on the [[Cacti]] page. I've not used it, so I do not know what access it needs to jalapeno. Perhaps it does not need monitoring by cacti. '''Cacti is still monitoring jalapeno.''' An example of the sort of access.conf rules involved is given below.
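For reference, pam_access rules in ''/etc/security/access.conf'' look roughly like this; the group name is made up, and this is not a copy of jalapeno's actual file:
<pre>
# /etc/security/access.conf -- format is:  permission : users/groups : origins
# allow root and the admin group from anywhere, deny everyone else
+ : root (npg-admins) : ALL
- : ALL : ALL
</pre>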
 
  
== Previous Months Completed ==
[[Completed in June 2007|June 2007]]

==Daily Tasks==

These are things that should be done every day when you come into work.

#Do a physical walk-through/visual inspection of the server room
#Verify that all systems are running and all necessary services are functioning properly
#*For a quick look at which systems are up you can use /usr/local/bin/[[serversup.py]], or the quick loop sketched just after this list
#*[[Gourd]]: Make sure that home folders are accessible and that all virtual machines are running
#*[[Einstein]]: Make sure that [[LDAP]] and all [[e-mail]] services (dovecot, spamassassin, postfix, mailman) are running
#*[[Roentgen]]: Make sure the website and MySQL are available
#*[[Jalapeno]]: Make sure named and CUPS are running
#*[[Lentil]]: Verify that backups ran successfully overnight. Check space on the backup drives, and add new drives as needed.
#Check [[Splunk]]: [https://pumpkin.farm.physics.unh.edu:8000 click here if you're in the server room], or open localhost:8000 (use https) from [[Pumpkin]]
#*Check the logs for errors, and keep an eye out for other irregularities.
#Check [[Cacti]]: [http://roentgen.unh.edu/cacti http://roentgen.unh.edu/cacti]
#*Verify that temperatures are acceptable.
#*Monitor the other graphs/indicators for any unusual activity.
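If [[serversup.py]] isn't handy, a throwaway loop like this gives a rough picture of which machines are up (the host list here is an assumption; adjust it to the real machine list):
<pre>
for h in gourd einstein roentgen jalapeno lentil pumpkin taro endeavour; do
    ping -c 1 -W 2 "$h" > /dev/null 2>&1 && echo "$h is up" || echo "$h is DOWN"
done
</pre>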

==Weekly Tasks==

These are things that should be done once every 7 days or so.

#Check physical interface connections
#*Verify that all devices are connected appropriately, that cables are labeled properly, and that all devices (including RAID and IPMI cards) are accessible on the network.
#Check Areca RAID interfaces
#*The RAID interfaces on each machine are configured to send e-mail to the administrators if an error occurs. It may still be a good idea to log in and check them manually on occasion as well, just for the sake of caution.
#Clean up the server room, sweep the floors.
 
==Monthly Tasks==

#Perform [[Enviromental_Control_Info#Scheduled_Maintenance|scheduled maintenance]] on the server room air conditioning units.
#Check S.M.A.R.T. information on all server hard drives (e.g. with smartctl, as sketched just after this list)
#*Make a record of any drives which are reporting errors or nearing failure.
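A quick way to check a drive is <code>smartctl</code> from smartmontools (device names vary per machine, and drives behind a RAID controller need the controller-specific <code>-d</code> syntax; the 3ware example below is illustrative):
<pre>
# One-line health verdict, then the full attribute table
smartctl -H /dev/sda
smartctl -a /dev/sda

# Example for a disk behind a 3ware controller (port 0)
smartctl -a -d 3ware,0 /dev/twe0
</pre>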

==Annual Tasks==

These are tasks that are necessary but not critical, or that might require some amount of downtime. These should be done during semester breaks (probably mostly in the summer), when we're likely to have more time and when downtime won't have as detrimental an impact on users.

#Server software upgrades
#*Kernel updates, or updates for any software related to critical services, should only be performed during breaks to minimize the inconvenience caused by reboots or unexpected problems and downtime.
#Run fsck on data volumes (see the sketch just after this list)
#Clean/Dust out systems
#Rotate old disks out of RAID arrays
#Take an inventory of our server room / computing equipment
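For the fsck item, the usual procedure is along these lines (the device and mount point are illustrative; the root filesystem has to be checked from single-user mode or a rescue image instead):
<pre>
# Unmount the data volume, force a full check, then remount it
umount /data1
fsck -f /dev/mapper/vg_data-data1
mount /data1
</pre>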

<!--{| cellpadding="5" cellspacing="0" border="1"
! Time of Year !! Things to Do !! Misc.
|-
| Summer Break || ||
|-
|  || Major Kernel Upgrades ||
|-
|  || Run FDisk ||
|-
|  || Clean (Dust-off/Filters) while Systems are Shut down ||
|-
| Thanksgiving Break || ||
|-
| Winter Break || ||
|-
|  || Upgrade RAID disks || Upgrade only disks connected to a RAID card
|-
| Spring Break || ||
|-
|} -->
