Sysadmin Todo List

== Important ==
This is the new Sysadmin Todo List as of 05/27/2010. The previous list was moved to [[Old Sysadmin Todo List]]. This list is incomplete and needs updating.
* Lentil has a Pentium D, but only one processor shows up; it needs an SMP kernel. Same problem with Gourd (I thought this system had 2 CPUs) and with Jalapeno.  '''There is some warning about Red Hat Desktop release 4 not supporting more than one processor when we boot into the SMP kernel, but /proc/cpuinfo shows two now. Both being used, maybe?'''
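A quick way to confirm what a machine is actually doing (nothing assumed beyond a stock RHEL 4-era install): check whether the booted kernel is the SMP variant and how many logical CPUs the kernel reports.
<pre>
# On RHEL 4 the SMP kernel's version string ends in "smp"
uname -r

# Count the logical processors the kernel has brought up
grep -c '^processor' /proc/cpuinfo
</pre>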
 
* Printer queue for Copier: Konica Minolta Bizhub 750, IP=pita.unh.edu.  '''Seems like we need info from the Konica guy to get it set up on Red Hat and OS X. The installation documentation for the driver doesn't mention things like the passcode, because those are machine-specific. Katie says that if he doesn't come on Monday, she'll make an inquiry.''' <font color="green">Mac OS X now working; the IT guy should be here the week of June 26th.</font>
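Once the driver details arrive, creating the queue on the Red Hat side should be a one-liner with CUPS. This is only a sketch: the queue name, the socket/JetDirect URI, and the PPD path are assumptions, not values from the Konica documentation.
<pre>
# Hypothetical queue name, device URI, and PPD location - substitute the real ones
lpadmin -p bizhub750 -E \
    -v socket://pita.unh.edu:9100 \
    -P /usr/share/cups/model/KonicaMinolta/bizhub750.ppd
</pre>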
 
* The printer in 323 is '''not''' hooked up to a dead network port after all; we actually managed to ping it. Still can't print to it, though.
 
* Make and install power supply mount for taro '''Need to schedule downtime, since this machine is actually used.'''
 
* Figure out what each network device on jalapeno and tomato is doing.
 
* Look into the monitoring that is already installed: RAID controllers, etc.
 
* Look into getting computers to pull scheduled updates from rhn when they check in.
 
* Need to get onto the "backups" shared folder. Where is it hiding?
 
* Where is the backup script?
 
* Figure out exactly what our backups are doing, and see if we can implement some sort of NFS user access. [[NPG_backup_on_Lentil]].
 
* I set up "splunk" on einstein (production version 2.2.3) and taro (version 3, beta 2). I like the beta's functionality better, but it has a memory leak. Look for an update to the beta that fixes this and install it. (See [http://www.splunk.com/base/forum:SplunkGeneral www.splunk.com/base/forum:SplunkGeneral].) '''While this sounds like it could be only indirectly related to our issue, it is close enough and is the only official word on splunk's memory usage that I could find:''' [http://www.splunk.com/doc/latest/releasenotes/KnownIssues] <pre>When forwarding cooked data you may see the memory usage spike and kill the splunkd process. This should be fixed for beta 3.</pre> '''So, waiting for the next beta or later sounds like the best bet. I'm wary of running beta software on einstein anyhow.'''
 
* Learn how to use [[cacti]] on okra. Seems like a nice tool, mostly set up for us already.
 
* Find out why lentil and okra (and tomato?) aren't being read by [[cacti]]. Could be related to the warnings that repeat in ''okra:/var/www/cacti/log/cacti.log''
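If Cacti is silently failing to poll those hosts, the quickest checks are the Cacti log on okra and a manual SNMP query from okra to an affected machine. This assumes Cacti is polling over SNMP; "public" below is only a placeholder community string.
<pre>
# Recent warnings/errors from the Cacti poller
tail -n 100 /var/www/cacti/log/cacti.log | grep -iE 'warn|error'

# Can okra reach lentil's SNMP agent at all? ("public" is a placeholder community string)
snmpwalk -v 2c -c public lentil system
</pre>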
 
  
== Ongoing ==

== Projects ==
* '''<font color="red" size="+1">Maintain the Documentation of all systems!</font>'''
* Clean up 202
** Figure out what's worth keeping
** Figure out what doesn't belong here
* Take a look at spamassassin - improve its performance if possible.
* Test unknown equipment:
** UPS
* Eventually, come up with a plan to deal with pauli2's kernel issue.
* Look into making a centralized interface to monitor/maintain all the machines at once.
* Update Tomato to RHEL5 and check all the services einstein currently provides. Then switch einstein <-> tomato, and upgrade what was originally einstein. Look into making an einstein/tomato failsafe setup.

== Completed ==
* Get ahold of a multimeter so we can test supplies and cables. '''Got a tiny portable one.'''
* Order new power supply for Taro. '''Ordered from Newegg 5/24/2007.'''
* Weed out unnecessary cables; we don't need a full box of ATA33 and another of ATA66. '''Consolidated to one box.'''
* Installed new power supply for Taro.
* Get printer (Myriad) working.
* Set up skype.et on NPG mailing list.
* Fix sound on improv so we can use skype and music. '''Sound was set to ALSA; setting it to OSS fixed it. It should have worked either way, though.'''
* Check out the 250 GB SATA drive found in the old Maxtor box. '''Clicks/buzzes when powered up.'''
* Look into upgrades/patches for all our systems. '''Scheduled critical updates to be applied to all machines but einstein. If they don't break the other machines, like they didn't break the first few, I'll apply them to einstein too.'''
* Started download of Fedora 7 for the black computer. 520 KB/s downstream? Wow.
* Get Pauli computers up. '''I think they're up. They're plugged in and networked.'''
* Find out what the black computer's name is! '''We called it blackbody.'''
* "blackbody" is currently online. For some reason, grub wasn't properly installed in the MBR by the installer.
* Consolidate backups to get a drive for gourd. '''Wrote a little script to get the amanda backup stuff into a usable state, taking up WAY less space.'''
* Submitted DNS request for blackbody. '''blackbody.unh.edu is now online.'''
* Label hard disks. '''Labeled all that we knew what they were!'''
* Labeled machines' farm Ethernet ports.
* Made RHEL5 server discs.
* Replaced failed drive in Gourd with a 251 GB Maxtor SATA drive. Apparently the WD drives are 250 GB and the Maxtors are 251 GB.
* Repair local network connection on Gourd.
* Repair LDAP on Gourd (probably caused by the network connection). '''Replacing the drive fixed every gourd problem!!  Seems to have been related to the lack of space on the RAID when a disk was missing. If not that, IIAM (It is a mystery).'''
* New combo set on door for 202.
* Tested old PSUs, network cables, and fans.
* All machines now report to RHN. None of them pull scheduled updates, though. Client-side issue?
* Documentation for networking - we have no idea which config files are actually doing anything on some machines. '''Pretty much figured out, but improv and ennui still aren't reachable via farm IPs. Considering the success of getting blackbody set up from scratch, though, it seems that we know enough to maintain the machines that are actually part of the farm.'''
* Make network devices eth0 and eth0.2 on all machines self-documenting by giving them the aliases "farm" and "unh". '''jalapeno and tomato remain.'''
* Figure out why the pauli nodes don't like us. (Low priority!) '''Aside from pauli2's kernel issue, this is taken care of.'''
* Learned how to use the netboot stuff on tomato.
* Set up the nuclear.unh.edu web pages to serve up the very old but working pages instead of the new but broken XML ones.

Latest revision as of 16:42, 15 February 2015

This is the new Sysadmin Todo List as of 05/27/2010. The previous list was moved to [[Old Sysadmin Todo List]]. This list is incomplete and needs updating.

== Projects ==

* Convert physical machines and VMs to CentOS 6 for the compute servers ([[taro]], [[endeavour]]) and all others to either 6 or 7.
** VMs: Einstein
** Physical: [[endeavour]], [[taro]], and [[gourd]]
* Mailman: Clean up mailman and make sure all the groups and users are in order.
* CUPS: Look into getting CUPS to authenticate users through LDAP instead of using Samba.
* Printer: Get printtracker.py working, and see if a driver can be made to report the actual page count of each job instead of always reporting 1, which corresponds to a job submission rather than the number of pages.
* Check the /etc/apcupsd/shutdown2 script on Gourd to make sure all the keys are correctly implemented so the machines go down properly during a power outage (see the check sketched after this list).
* Do a check on Lentil to see whether any unnecessary data is being backed up.
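A minimal check for the apcupsd item above, assuming shutdown2 powers the other machines off over SSH using key-based root logins (that is an assumption - confirm it against the script itself; the host list below is also just an example):
<pre>
# Verify that each remote shutdown target accepts the key non-interactively
for h in taro endeavour einstein; do
    ssh -o BatchMode=yes root@$h /bin/true \
        && echo "$h: key login OK" \
        || echo "$h: key login FAILED"
done
</pre>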

== Daily Tasks ==

These are things that should be done every day when you come into work.

# Do a physical walk-through/visual inspection of the server room.
# Verify that all systems are running and all necessary services are functioning properly (a quick service check is sketched after this list).
#* For a quick look at which systems are up, you can use /usr/local/bin/[[serversup.py]].
#* [[Gourd]]: Make sure that home folders are accessible and that all virtual machines are running.
#* [[Einstein]]: Make sure that [[LDAP]] and all [[e-mail]] services (dovecot, spamassassin, postfix, mailman) are running.
#* [[Roentgen]]: Make sure the website and MySQL are available.
#* [[Jalapeno]]: Make sure named and CUPS are running.
#* [[Lentil]]: Verify that backups ran successfully overnight. Check the space on the backup drives, and add new drives as needed.
# Check [[Splunk]]: [https://pumpkin.farm.physics.unh.edu:8000 click here if you're in the server room], or open localhost:8000 (use https) from [[Pumpkin]].
#* Check the logs for errors, and keep an eye out for other irregularities.
# Check [[Cacti]]: [http://roentgen.unh.edu/cacti http://roentgen.unh.edu/cacti]
#* Verify that temperatures are acceptable.
#* Monitor the other graphs/indicators for any unusual activity.
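A rough sketch of the daily service check on einstein and the backup-space check on lentil, assuming stock RHEL/CentOS init scripts (service names vary between releases, so adjust the list as needed):
<pre>
# On einstein: confirm the mail/LDAP stack is up
for svc in ldap dovecot spamassassin postfix mailman; do
    service $svc status > /dev/null 2>&1 || echo "WARNING: $svc is not running"
done

# On lentil: watch free space on the backup drives
df -h
</pre>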

== Weekly Tasks ==

These are things that should be done once every 7 days or so.

# Check physical interface connections.
#* Verify that all devices are connected appropriately, that cables are labeled properly, and that all devices (including RAID and IPMI cards) are accessible on the network (a quick reachability sweep is sketched after this list).
# Check the Areca RAID interfaces.
#* The RAID interfaces on each machine are configured to e-mail the administrators if an error occurs. It is still a good idea to log in and check them manually on occasion, just for the sake of caution.
# Clean up the server room and sweep the floors.
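A minimal reachability sweep for the management interfaces, assuming you keep a list of their hostnames or IP addresses (the names below are placeholders, not the real interface names):
<pre>
# Ping each RAID/IPMI management interface once, with a 2-second timeout
for host in gourd-ipmi taro-ipmi endeavour-raid; do
    ping -c 1 -W 2 $host > /dev/null && echo "$host: reachable" || echo "$host: NOT reachable"
done
</pre>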

== Monthly Tasks ==

# Perform [[Enviromental_Control_Info#Scheduled_Maintenance|scheduled maintenance]] on the server room air conditioning units.
# Check the S.M.A.R.T. information on all server hard drives (see the smartctl example after this list).
#* Make a record of any drives which are reporting errors or nearing failure.
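For the S.M.A.R.T. check, smartmontools covers the directly attached drives; drives behind an Areca controller usually need the extra -d flag. /dev/sda and /dev/sg0 below are just example device names:
<pre>
# Overall health self-assessment and the attribute table for one drive
smartctl -H /dev/sda
smartctl -A /dev/sda   # watch Reallocated_Sector_Ct and Current_Pending_Sector

# Drives behind an Areca RAID card are addressed through the SCSI generic device
smartctl -H -d areca,1 /dev/sg0
</pre>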

== Annual Tasks ==

These are tasks that are necessary but not critical, or that might require some amount of downtime. These should be done during semester breaks (probably mostly in the summer), when we're likely to have more time and when downtime won't have as detrimental an impact on users.

# Server software upgrades.
#* Kernel updates, or updates for any software related to critical services, should only be performed during breaks to minimize the inconvenience caused by reboots or by unexpected problems and downtime.
# Run fsck on data volumes (see the example after this section).
# Clean/dust out systems.
# Rotate old disks out of RAID arrays.
# Take an inventory of our server room / computing equipment.

<!--{| cellpadding="5" cellspacing="0" border="1"
! Time of Year !! Things to Do !! Misc.
|-
| Summer Break || ||
|-
|  || Major Kernel Upgrades ||
|-
|  || Run FDisk ||
|-
|  || Clean (Dust-off/Filters) while Systems are Shut down ||
|-
| Thanksgiving Break || ||
|-
| Winter Break || ||
|-
|  || Upgrade RAID disks || Upgrade only disks connected to a RAID card
|-
| Spring Break || ||
|-
|} -->
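For the fsck item above, a minimal example assuming an ext3/ext4 data volume; the mount point and device name are placeholders. The filesystem must be unmounted first, and fsck should never be run on a mounted data volume.
<pre>
# Check one data volume; -f forces a full check even if the filesystem looks clean
umount /data
e2fsck -f /dev/sdb1
mount /data
</pre>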