Difference between revisions of "Sysadmin Todo List"

From Nuclear Physics Group Documentation Pages
 
== General Info ==
This is the new Sysadmin Todo List as of 05/27/2010. The previous list was moved to [[Old Sysadmin Todo List]]. This list is incomplete and needs updating.
This is an unordered set of tasks.  Detailed information on any of the tasks typically goes in related topics' pages, although usually not until the task has been filed under [[Sysadmin Todo List#Completed|Completed]].
 
== Important ==
 
* Find out why Steve isn't being paid what he's supposed to be. '''May be getting fixed.'''
 
* Nobody is currently reading the mail that is sent to "root". Einstein had 3000+ unread messages; I deleted almost all of them. Some useful diagnostic messages are sent to root, so we should find a solution for this. '''Temporarily, both Matt and Steve have email clients set up to access root's account.'''
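
Until root's mail is handled more permanently (for example by forwarding it to the admins' own mailboxes), a small script can at least report how large the backlog is getting. This is a minimal sketch; the spool path is the usual Linux default and is an assumption about our setup.

<pre>
#!/usr/bin/env python3
# rootmail_check.py -- report how many messages are sitting in root's local mailbox.
# Assumption: mail for root is delivered to the default mbox spool at /var/spool/mail/root.
import mailbox

SPOOL = "/var/spool/mail/root"

def summarize(path=SPOOL, show=5):
    box = mailbox.mbox(path)
    try:
        messages = list(box)
        print("%d messages in %s" % (len(messages), path))
        # Print the most recent few dates/subjects so obvious problems stand out.
        for msg in messages[-show:]:
            print("  %s | %s" % (msg.get("Date", "?"), msg.get("Subject", "(no subject)")))
    finally:
        box.close()

if __name__ == "__main__":
    summarize()
</pre>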
 
* Enable SMP on lentil.  '''Probably easily doable after the backup is done.  That might not be for a while, though.'''
 
* Printer queue for the copier: Konica Minolta Bizhub 750, IP=pita.unh.edu. '''Seems like we need info from the Konica guy to get it set up on Red Hat and OS X. The installation documentation for the driver doesn't mention things like the passcode, because those are machine-specific. Katie says that if he doesn't come on Monday, she'll make an inquiry.''' <font color="green">Mac OS X is now working; the IT guy should be here the week of June 26th.</font>
 
* Figure out which network devices on tomato are doing what.
 
* Look into monitoring RAID, disk usage, etc.
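
For the disk-usage half of this, here is a minimal sketch that flags any watched filesystem above a threshold; the mount points listed are placeholders, not a list of our actual volumes. RAID health itself would still come from the controller tools (3dm, Areca web interface).

<pre>
#!/usr/bin/env python3
# disk_check.py -- warn when any watched filesystem climbs above a usage threshold.
# The mount points below are placeholders; adjust them to the real data/backup volumes.
import shutil

MOUNTS = ["/", "/home", "/data", "/mnt/backup"]   # hypothetical examples
THRESHOLD = 0.90                                  # warn at 90% full

def check(mounts=MOUNTS, threshold=THRESHOLD):
    for mount in mounts:
        try:
            usage = shutil.disk_usage(mount)
        except OSError:
            print("%-12s not mounted / not found" % mount)
            continue
        frac = usage.used / usage.total
        flag = "WARN" if frac >= threshold else "ok"
        print("%-12s %5.1f%% used (%s)" % (mount, 100 * frac, flag))

if __name__ == "__main__":
    check()
</pre>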
 
* Look into getting computers to pull scheduled updates from rhn when they check in.
 
* Need to get onto the "backups" shared folder, as well as be added as members to the lists.  '''"backups" wasn't even a mailing list, according to the Mailman interface.'''
 
* Figure out exactly what our backups are doing, and see if we can implement some sort of NFS user access. [[NPG_backup_on_Lentil]].
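
To help see what is actually being backed up (and to spot anything unnecessary, which is also a Projects item in the latest revision below), here is a sketch that totals the size of each top-level directory under the backup area. The backup root used here is a placeholder path, not the real location on lentil.

<pre>
#!/usr/bin/env python3
# backup_sizes.py -- list the largest top-level directories under the backup area.
# BACKUP_ROOT is a placeholder; point it at wherever lentil keeps the rsync copies.
import os

BACKUP_ROOT = "/mnt/backup"   # hypothetical path

def dir_size(path):
    total = 0
    for root, dirs, files in os.walk(path, onerror=lambda err: None):
        for name in files:
            try:
                total += os.lstat(os.path.join(root, name)).st_size
            except OSError:
                pass
    return total

def report(root=BACKUP_ROOT):
    tops = [d for d in os.listdir(root) if os.path.isdir(os.path.join(root, d))]
    sizes = sorted(((dir_size(os.path.join(root, d)), d) for d in tops), reverse=True)
    for size, name in sizes:
        print("%8.1f GB  %s" % (size / 1e9, name))

if __name__ == "__main__":
    report()
</pre>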
 
* I set up "splunk" on einstein (production 2.2.3 version) and taro (beta 3 v2). I like the beta's functionality better, but it has a memory leak. Look for update to beta that fixes this and install update. (See: [http://www.splunk.com/base/forum:SplunkGeneral www.splunk.com/base/forum:SplunkGeneral] '''While this sounds like it could only be indirectly related to our issue, it does sound close enough and is the only official word on splunk's memory usage that I could find:'''[http://www.splunk.com/doc/latest/releasenotes/KnownIssues] <pre>When forwarding cooked data you may see the memory usage spike and kill the splunkd process. This should be fixed for beta 3.</pre>'''So, waiting for the next beta or later sounds like the best bet. I'm wary of running beta software on einstein, anyhow.'''
 
* Learn how to use [[cacti]] on okra. Seems like a nice tool, mostly set up for us already.
 
* Find out why lentil and okra (and tomato?) aren't being read by [[cacti]]. Could be related to the warnings that repeat in ''okra:/var/www/cacti/log/cacti.log''
 
* Learn how to set up [[evolution]] fully so we can support users. Need LDAP address book.
 
* Matt's learning a bit of Perl so we can figure out exactly how the backup works, as well as create more programs in the future, specifically thinking of monitoring. '''Look into the CPAN modules under ''Net::'', etc.'''
 
* Figure out what happened to lentil's Perl binary. System logs don't show any obviously malicious logins, etc. My current suspicion is that a shell redirection typo in some other script overwrote it (e.g., <code>></code> where <code>|</code> was meant).
 
  
== Ongoing ==

* '''<font color="red" size="+1">Maintain the Documentation of all systems!</font>'''
** Main function
** Hardware
** OS
** Network

* Clean up 202
** Figure out what's worth keeping
** Figure out what doesn't belong here

* Take a look at spamassassin and improve its performance if possible. '''See if our setup jibes with [http://evillair.netdojo.com/howto/spamassassin.html this]'''

* Test unknown equipment:
** UPS

* The printer in 323 is '''not''' hooked up to a dead network port; we actually managed to ping it. One person reportedly got it to print, nobody else has, and that person hasn't been able to since. Is this printer dead? We need to find out.

* Eventually come up with a plan to deal with pauli2's kernel issue.

* Look into making a centralized interface to monitor/maintain all the machines at once (a minimal fan-out sketch appears at the end of this section). '''Along the same lines: continue homogenizing the configurations of the machines.'''

* Figure out why jalapeno doesn't have the 3dm software running. If there's no good reason, maybe we should install it?

* Certain settings are similar or identical on all machines. It would be beneficial to write a program to do remote configuration; this would also simplify adding and upgrading machines.

* Update tomato to RHEL5 and check all the services einstein currently provides. Then swap einstein and tomato, and upgrade what was originally einstein. Look into making an einstein/tomato failsafe setup.
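
For the centralized monitoring/remote-configuration ideas above, a minimal fan-out sketch: run one command on every farm machine over ssh and summarize the results. The host list is illustrative and passwordless ssh keys for root are an assumption.

<pre>
#!/usr/bin/env python3
# fanout.py -- run a single command on every farm host over ssh and summarize the results.
# Assumptions: the host list is illustrative, and root has passwordless ssh keys on each machine.
import subprocess
import sys

HOSTS = ["einstein", "gourd", "taro", "lentil", "okra", "tomato", "jalapeno"]  # hypothetical list

def run_everywhere(command):
    results = {}
    for host in HOSTS:
        try:
            proc = subprocess.run(
                ["ssh", "-o", "ConnectTimeout=5", "root@" + host, command],
                capture_output=True, text=True, timeout=30,
            )
            results[host] = (proc.returncode, proc.stdout.strip() or proc.stderr.strip())
        except subprocess.TimeoutExpired:
            results[host] = (None, "timed out")
    return results

if __name__ == "__main__":
    command = " ".join(sys.argv[1:]) or "uptime"
    for host, (code, output) in run_everywhere(command).items():
        print("%-10s %-4s %s" % (host, "ok" if code == 0 else "FAIL", output))
</pre>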
 
  
== Completed ==

* Get hold of a multimeter so we can test supplies and cables. '''Got a tiny portable one.'''

* Order a new power supply for Taro. '''Ordered from Newegg 5/24/2007.'''

* Weed out unnecessary cables; we don't need a full box of ATA33 and another of ATA66. '''Consolidated to one box.'''

* Installed the new power supply for Taro.

* Get printer (Myriad) working.

* Set up Skype.

* Fix sound on improv so we can use Skype and music. '''Sound was set to ALSA; setting it to OSS fixed it. It should have worked either way, though.'''

* Check out the 250 GB SATA drive found in the old Maxtor box. '''Clicks/buzzes when powered up.'''

* Look into upgrades/patches for all our systems. '''Scheduled critical updates to be applied to all machines but einstein. If they don't break the other machines, like they didn't break the first few, I'll apply them to einstein too.'''

* Started the download of Fedora 7 for the black computer. 520 KB/s downstream? Wow.

* Get the Pauli computers up. '''I think they're up. They're plugged in and networked.'''

* Find out what the black computer's name is! '''We called it blackbody.'''

* "blackbody" is currently online. For some reason, GRUB wasn't properly installed in the MBR by the installer.

* Consolidate backups to free up a drive for gourd. '''Wrote a little script to get the Amanda backup stuff into a usable state, taking up WAY less space.'''

* Submitted the DNS request for blackbody. '''blackbody.unh.edu is now online.'''

* Label hard disks. '''Labeled all the ones we could identify!'''

* Labeled the machines' farm Ethernet ports.

* Made RHEL5 server discs.

* Replaced the failed drive in Gourd with a 251 GB Maxtor SATA. Apparently the WD drives are 250 GB and the Maxtors are 251 GB.

* Repair the local network connection on Gourd.

* Repair LDAP on Gourd (probably caused by the network connection). '''Replacing the drive fixed every gourd problem!! Seems to have been related to the lack of space on the RAID when a disk was missing. If not that, IIAM (it is a mystery).'''

* New combo set on the door for 202.

* Tested old PSUs, network cables, and fans.

* All machines now report to RHN. None of them pull scheduled updates, though. Client-side issue?

* Documentation for networking: we had no idea which config files were actually doing anything on some machines. '''Pretty much figured out, but improv and ennui still aren't reachable via farm IPs. Considering the success of getting blackbody set up from scratch, though, it seems that we know enough to maintain the machines that are actually part of the farm.'''

* Make the network devices eth0 and eth0.2 on all machines self-documenting by giving them the aliases "farm" and "unh". '''jalapeno and tomato remain.'''

* Figure out why the pauli nodes don't like us. (Low priority!) '''Aside from pauli2's kernel issue, this is taken care of.'''

* Learned how to use the netboot stuff on tomato.

* Set up the nuclear.unh.edu web pages to serve the very old but working pages instead of the new but broken XML ones.

* Scheduled downtime to install taro's power supply.

* Successfully installed Fedora 7 on ennui. Getting this and blackbody set up leads me to believe that we have a good grasp on the configuration of the network, authentication, VLAN, etc.

* Installed the power supply in taro. A bit crooked since the mounting is non-standard, but it's secure.

* Where is the backup script? <font color="green">Look in /usr/local/bin.</font> '''Matt found it: ''rsync_backup.pl''.'''

* Set up the 3dm RAID manager on gourd with remote access. The other machines don't have this tool installed.

* Added the NPG-Daily-28 drive to lentil.

* Enabled SMP on jalapeno.

* With the new drive added and Perl repaired, lentil is now making a backup with rsync.

Latest revision as of 16:42, 15 February 2015

This is the new Sysadmin Todo List as of 05/27/2010. The previous list was moved to Old Sysadmin Todo List. This list is incomplete and needs updating.

Projects

  • Convert physical machines and VMs to CentOS 6 for the compute servers (taro, endeavour) and all others to either 6 or 7.
    • VMs: Einstein
    • Physical: endeavour, taro, and gourd
  • Mailman: Clean up mailman and make sure all the groups and users are in order.
  • CUPS: Look into getting CUPS authenticating users through LDAP instead of using Samba.
  • Printer: Get printtracker.py working and see if a driver can be made to report the actual page count instead of always giving a value of 1, which corresponds to a job submission rather than the number of pages (see the page_log sketch after this list).
  • Check /etc/apcupsd/shutdown2 script on Gourd to make sure all the keys are correctly implemented so the machines go down properly during a power outage.
  • Do a check on Lentil to see if there is any unnecessary data being backed up.
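
For the printer accounting item above, one way around the driver's bogus job count is to read CUPS's own accounting instead: with the classic PageLogFormat, page_log gets one line per printed page. This is a minimal sketch that assumes that classic format and the default log location; both are assumptions about our CUPS setup.

<pre>
#!/usr/bin/env python3
# pagelog_totals.py -- total printed pages per user from the CUPS page_log.
# Assumes the classic page_log format: printer user job-id [date] page-number num-copies ...
import re
from collections import defaultdict

PAGE_LOG = "/var/log/cups/page_log"
LINE = re.compile(r"^(\S+) (\S+) (\d+) \[([^\]]+)\] (\S+) (\d+)")

def totals(path=PAGE_LOG):
    pages = defaultdict(int)
    with open(path) as log:
        for line in log:
            match = LINE.match(line)
            if not match:
                continue
            printer, user, job, date, page, copies = match.groups()
            if not page.isdigit():
                # Newer CUPS releases can log a single "total <pages>" line per job instead;
                # that variant would need its own handling.
                continue
            pages[user] += int(copies)   # one line per page, times the number of copies
    return dict(pages)

if __name__ == "__main__":
    for user, count in sorted(totals().items(), key=lambda kv: -kv[1]):
        print("%-12s %6d pages" % (user, count))
</pre>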

Daily Tasks

These are things that should be done every day when you come into work.

  1. Do a physical walk-through/visual inspection of the server room
  2. Verify that all systems are running and all necessary services are functioning properly
    • For a quick look at which systems are up you can use /usr/local/bin/serversup.py
    • Gourd: Make sure that home folders are accessible, all virtual machines are running
    • Einstein: Make sure that LDAP and all e-mail services (dovecot, spamassassin, postfix, mailman) are running (a quick service-probe sketch follows this list)
    • Roentgen: Make sure website/MySQL are available
    • Jalapeno: Named and Cups
    • Lentil: Verify that backups ran successfully overnight. Check space on backup drives, and add new drives as needed.
  3. Check Splunk: https://pumpkin.farm.physics.unh.edu:8000 if you're in the server room, or open localhost:8000 (use https) from Pumpkin
    • Check logs for errors, keep an eye out for other irregularities.
  4. Check Cacti: http://roentgen.unh.edu/cacti
    • Verify that temperatures are acceptable.
    • Monitor other graphs/indicators for any unusual activity.
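
For the einstein check in step 2 above, a quick service probe; the daemon process names here are assumptions and may need adjusting to match what actually runs on einstein.

<pre>
#!/usr/bin/env python3
# mail_services_check.py -- quick "is it running?" probe for einstein's LDAP and mail daemons.
# The process names below are assumptions; adjust them to the daemons actually installed.
import subprocess

SERVICES = {
    "LDAP":         "slapd",
    "dovecot":      "dovecot",
    "spamassassin": "spamd",
    "postfix":      "master",
    "mailman":      "mailmanctl",
}

def check(services=SERVICES):
    for label, process in services.items():
        running = subprocess.run(["pgrep", "-x", process],
                                 stdout=subprocess.DEVNULL).returncode == 0
        print("%-13s %s" % (label, "running" if running else "NOT RUNNING"))

if __name__ == "__main__":
    check()
</pre>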

Weekly Tasks

These are things that should be done once every 7 days or so.

  1. Check physical interface connections
    • Verify that all devices are connected appropriately, that cables are labeled properly, and that all devices (including RAID and IPMI cards) are accessible on the network (a reachability-sweep sketch follows this list).
  2. Check Areca RAID interfaces
    • The RAID interfaces on each machine are configured to send e-mail to the administrators if an error occurs. It may still be a good idea to log in and check them manually on occasion, just for the sake of caution.
  3. Clean up the server room, sweep the floors.
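
For step 1 above, a small reachability sweep can catch unplugged cables or dead management cards early. The device list is illustrative only; the real list should include the RAID and IPMI interfaces as well.

<pre>
#!/usr/bin/env python3
# reachability_sweep.py -- ping each known device once and flag anything that does not answer.
# The device list is illustrative; fill in the real hostnames/IPs, including IPMI and RAID cards.
import subprocess

DEVICES = [
    "einstein", "gourd", "taro", "lentil", "roentgen", "jalapeno", "pumpkin",
    # management interfaces (IPMI cards, Areca RAID cards) belong here too
]

def sweep(devices=DEVICES):
    down = []
    for host in devices:
        ok = subprocess.run(["ping", "-c", "1", "-W", "2", host],
                            stdout=subprocess.DEVNULL,
                            stderr=subprocess.DEVNULL).returncode == 0
        print("%-14s %s" % (host, "reachable" if ok else "NO RESPONSE"))
        if not ok:
            down.append(host)
    return down

if __name__ == "__main__":
    unreachable = sweep()
    if unreachable:
        print("Check cabling/power for: " + ", ".join(unreachable))
</pre>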

Monthly Tasks

  1. Perform scheduled maintenance on the server room air conditioning units (see Enviromental_Control_Info#Scheduled_Maintenance).
  2. Check S.M.A.R.T. information on all server hard drives (a smartctl sketch follows this list)
    • Make a record of any drives which are reporting errors or nearing failure.
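
For the S.M.A.R.T. step above, a minimal wrapper around smartctl (from smartmontools). The device list is a placeholder and differs per machine; smartctl must be installed and run as root.

<pre>
#!/usr/bin/env python3
# smart_health.py -- run "smartctl -H" on each listed drive and flag anything not healthy.
# Requires smartmontools and root privileges; the device list is a per-machine placeholder.
import subprocess

DRIVES = ["/dev/sda", "/dev/sdb", "/dev/sdc"]   # adjust per machine

def health(drives=DRIVES):
    failing = []
    for dev in drives:
        result = subprocess.run(["smartctl", "-H", dev], capture_output=True, text=True)
        healthy = ("PASSED" in result.stdout) or ("OK" in result.stdout)
        print("%-10s %s" % (dev, "PASSED" if healthy else "CHECK: see full smartctl output"))
        if not healthy:
            failing.append(dev)
    return failing

if __name__ == "__main__":
    bad = health()
    if bad:
        print("Record these drives as failing or near failure: " + ", ".join(bad))
</pre>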

Annual Tasks

These are tasks that are necessary but not critical, or that might require some amount of downtime. These should be done during semester breaks (probably mostly in the summer) when we're likely to have more time, and when downtime won't have as detrimental an impact on users. (The old draft schedule put kernel upgrades, fsck runs, and system cleaning in the summer break, and RAID disk rotation, for disks attached to a RAID card only, in the winter break.)

  1. Server software upgrades
    • Kernel updates, or updates for any software related to critical services, should only be performed during breaks to minimize the inconvenience caused by reboots, or unexpected problems and downtime.
  2. Run fsck on data volumes
  3. Clean/Dust out systems
  4. Rotate old disks out of RAID arrays
  5. Take an inventory of our server room / computing equipment