Difference between revisions of "Sysadmin Todo List"

From Nuclear Physics Group Documentation Pages
 
(991 intermediate revisions by 7 users not shown)
== Important ==
This is the new Sysadmin Todo List as of 05/27/2010. The previous list was moved to [[Old Sysadmin Todo List]]. This list is incomplete and needs updating.
* Printer queue for Copier: Konica Minolta Bizhub 750. IP=pita.unh.edu  '''Seems like we need info from the Konica guy to get it set up on Red Hat and OS X. The installation documentation for the driver doesn't mention things like the passcode, because those are machine-specific.  Katie says that if he doesn't come on Monday, she'll make an inquiry.'''
 
* The printer in 323 is '''not''' hooked up to a dead network port - we actually managed to ping it. Can't print to it, though.
 
* Make and install power supply mount for taro
 
* Figure out what network devices on jalapeno and tomato are doing what
 
* Look into monitoring that is already installed: RAIDs etc.
 
* Look into getting computers to pull scheduled updates from rhn when they check in.
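
A rough sketch of one way to poke at this by hand: force an immediate check-in on each client so that any update actions already scheduled on the RHN side are picked up. It assumes passwordless root SSH and that the rhn_check client tool is installed on every host; the host list is only an example, not the full farm.
<pre>
#!/usr/bin/env python
# Force an immediate RHN check-in on each farm host so that update actions
# scheduled through the RHN web interface are picked up right away.
# Assumes passwordless root SSH and the rhn_check client tool on every host;
# the host list below is only an example.
import subprocess

HOSTS = ["taro", "tomato", "jalapeno", "okra"]  # example list, not complete

for host in HOSTS:
    print("== %s ==" % host)
    # -vv makes rhn_check report which scheduled actions (if any) it ran
    status = subprocess.call(["ssh", "root@" + host, "rhn_check", "-vv"])
    if status != 0:
        print("rhn_check failed on %s (exit code %d)" % (host, status))
</pre>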
 
* Update Tomato to RHEL5 and check all the services einstein currently provides. Then switch einstein <-> tomato, and then upgrade what was originally einstein. Look into making an einstein/tomato failsafe setup.
 
* Get on NPG mailing list.
 
* I set up "splunk" on einstein (production 2.2.3 version) and taro (beta 3 v2). I like the beta's functionality better, but it has a memory leak. Look for update to beta that fixes this and install update. (See: [http://www.splunk.com/base/forum:SplunkGeneral www.splunk.com/base/forum:SplunkGeneral] '''While this sounds like it could only be indirectly related to our issue, it does sound close enough and is the only official word on splunk's memory usage that I could find:'''[http://www.splunk.com/doc/latest/releasenotes/KnownIssues] <pre>When forwarding cooked data you may see the memory usage spike and kill the splunkd process. This should be fixed for beta 3.</pre>'''So, waiting for the next beta or later sounds like the best bet. I'm wary of running beta software on einstein, anyhow.'''
 
* Learn how to use cacti on okra. Seems like a nice tool, mostly set up for us already.
 
* Find out why lentil and okra (and tomato?) aren't being read by cacti.
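
Cacti normally polls these boxes over SNMP, so one quick thing to rule out is whether snmpd is answering at all. A minimal sketch, assuming the "public" community string (swap in whatever the Cacti data sources are actually configured to use):
<pre>
#!/usr/bin/env python
# Quick check that snmpd answers on the hosts Cacti is failing to graph.
# Assumes net-snmp's snmpget is installed locally and that the devices use the
# "public" community string -- replace COMMUNITY with whatever the Cacti data
# sources are actually configured to use.
import subprocess

HOSTS = ["lentil", "okra", "tomato"]
COMMUNITY = "public"  # assumption; check the device settings in Cacti

for host in HOSTS:
    # 1.3.6.1.2.1.1.1.0 is sysDescr.0 -- any answer at all means SNMP is alive
    cmd = ["snmpget", "-v", "2c", "-c", COMMUNITY, host, "1.3.6.1.2.1.1.1.0"]
    if subprocess.call(cmd) == 0:
        print("%s: SNMP responded" % host)
    else:
        print("%s: no SNMP response -- check snmpd and its ACLs" % host)
</pre>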
 
  
== Ongoing ==

== Projects ==
* Clean up 202
** Figure out what's worth keeping
** Figure out what doesn't belong here
* Take a look at spamassassin - improve its performance if possible.
* Test unknown equipment:
** UPS
* Eventually one day come up with a plan to deal with pauli2's kernel issue
* Maybe we should netboot all the machines. As Steve pointed out, we've already got roaming profiles; the next step from that is full netbooting. Something to think about.

== Completed ==
* Get ahold of a multimeter so we can test supplies and cables. '''Got a tiny portable one.'''
* Order new power supply for Taro. '''Ordered from Newegg 5/24/2007'''
* Weed out unnecessary cables; we don't need a full box of ata33 and another of ata66. '''Consolidated to one box'''
* Installed new power supply for Taro.
* Get printer (Myriad) working.
* Set up Skype.
* Fix sound on improv so we can use Skype and music. '''Sound was set to ALSA; setting it to OSS fixed it. It should have worked either way, though.'''
* Check out the 250 GB SATA drive found in the old Maxtor box. '''Clicks/buzzes when powered up.'''
* Look into upgrades/patches for all our systems. '''Scheduled critical updates to be applied to all machines but einstein. If they don't break the other machines, like they didn't break the first few, I'll apply them to einstein too.'''
* Started download of Fedora 7 for the black computer. 520 KB/s downstream? Wow.
* Get Pauli computers up. '''I think they're up. They're plugged in and networked.'''
* Find out what the black computer's name is! '''We called it blackbody.'''
* "blackbody" is currently online. For some reason, grub wasn't properly installed in the MBR by the installer.
* Consolidate backups to get a drive for gourd. '''Wrote a little script to get the amanda backup stuff into a usable state, taking up WAY less space.'''
* Submitted DNS request for blackbody. '''blackbody.unh.edu is now online'''
* Label hard disks. '''Labeled all that we knew what they were!'''
* Labeled machines' farm Ethernet ports.
* Made RHEL5 server discs.
* Replaced failed drive in Gourd - 251 GB Maxtor SATA. Apparently the WD drives are 250 GB, the Maxtors are 251 GB.
* Repair local network connection on Gourd.
* Repair LDAP on Gourd (probably caused by the network connection). '''Replacing the drive fixed every gourd problem!! Seems to have been related to the lack of space on the RAID when a disk was missing. If not that, IIAM (It is a mystery).'''
* New combo set on door for 202.
* Tested old PSUs, network cables, and fans.
* All machines now report to rhn. None of them pull scheduled updates, though. Client-side issue?
* Documentation for networking - we have no idea which config files are actually doing anything on some machines. '''Pretty much figured out, but improv and ennui still aren't reachable via farm IPs. Considering the success of getting blackbody set up from scratch, it seems we know enough to maintain the machines that are actually part of the farm.'''
* Make network devices eth0 and eth0.2 on all machines self-documenting by giving them the aliases "farm" and "unh". '''jalapeno and tomato remain'''
* Figure out why pauli nodes don't like us. (Low priority!) '''Aside from pauli2's kernel issue, this is taken care of.'''
* Learned how to use the netboot stuff on tomato. Why aren't we using this for the paulis anymore?
* Set up the nuclear.unh.edu web pages to serve up the very old but working pages instead of the new but broken XML ones.

Latest revision as of 16:42, 15 February 2015

This is the new Sysadmin Todo List as of 05/27/2010. The previous list was moved to Old Sysadmin Todo List. This list is incomplete and needs updating.

Projects

  • Convert physical and VMs to CentOS 6 for compute servers (taro, endeavour) and all others to either 6 or 7.
    • VMs: Einstein
    • Physical: endeavour, taro, and gourd
  • Mailman: Clean up mailman and make sure all the groups and users are in order.
  • CUPS: Look into having CUPS authenticate users through LDAP instead of using Samba.
  • Printer: Get printtracker.py working, and see if you can get a driver that reports the real page count instead of always reporting 1, which counts job submissions rather than pages (see the page_log sketch after this list).
  • Check /etc/apcupsd/shutdown2 script on Gourd to make sure all the keys are correctly implemented so the machines go down properly during a power outage.
  • Do a check on Lentil to see if there is any unnecessary data being backed up (see the size-summary sketch after this list).
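
printtracker.py itself isn't reproduced on this page. As a starting point, though, CUPS already records per-page accounting lines in its page_log, so a quick way to see whether a driver is reporting real page numbers or just logging a single "1" per submission is to tally that file. A minimal sketch, assuming the default log location:

<pre>
#!/usr/bin/env python
# Rough accounting pass over the CUPS page_log (one line per printed page in
# the classic format used by the CUPS shipped with CentOS 6).  It totals pages
# per printer/user and flags jobs that only ever logged "page 1", which is the
# symptom of a driver that reports one line per submission instead of a real
# page count.  Assumes the default log path; adjust if cupsd.conf moves it.
import re
from collections import defaultdict

PAGE_LOG = "/var/log/cups/page_log"   # default location (assumption)

# printer user job-id [timestamp] page-number num-copies ...
LINE = re.compile(r'^(\S+) (\S+) (\d+) \[([^\]]+)\] (\d+) (\d+)')

pages = defaultdict(int)              # (printer, user) -> pages printed
job_pages = defaultdict(list)         # (printer, job-id) -> page numbers seen

for line in open(PAGE_LOG):
    m = LINE.match(line)
    if not m:
        continue                      # skip "total" lines or malformed entries
    printer, user, jobid, _, page, copies = m.groups()
    pages[(printer, user)] += int(copies)
    job_pages[(printer, jobid)].append(int(page))

for (printer, user) in sorted(pages):
    print("%-15s %-12s %6d pages" % (printer, user, pages[(printer, user)]))

suspect = [job for job, seen in job_pages.items() if seen == [1]]
print("%d job(s) logged only 'page 1' -- possibly drivers without page counting"
      % len(suspect))
</pre>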

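For the Lentil item, a crude first pass is just to see where the space in the backup area is going. A minimal sketch, assuming the backups live under a single directory tree on Lentil (the path below is a placeholder, not the real location):

<pre>
#!/usr/bin/env python
# Summarize how much space each top-level directory under the backup area is
# using, as a first pass at spotting data that doesn't need to be backed up.
# BACKUP_ROOT is a placeholder -- point it at wherever Lentil keeps backups.
import os

BACKUP_ROOT = "/path/to/backups"   # placeholder, not the real location

def tree_size(path):
    total = 0
    for dirpath, dirnames, filenames in os.walk(path):
        for name in filenames:
            full = os.path.join(dirpath, name)
            if os.path.isfile(full):
                total += os.path.getsize(full)
    return total

sizes = []
for entry in sorted(os.listdir(BACKUP_ROOT)):
    full = os.path.join(BACKUP_ROOT, entry)
    if os.path.isdir(full):
        sizes.append((tree_size(full), entry))

for size, entry in sorted(sizes, reverse=True):
    print("%10.1f MB  %s" % (size / 1048576.0, entry))
</pre>
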
Daily Tasks

These are things that should be done every day when you come into work.

  1. Do a physical walk-through/visual inspection of the server room
  2. Verify that all systems are running and all necessary services are functioning properly
    • For a quick look at which systems are up you can use /usr/local/bin/serversup.py (a rough stand-in sketch appears after this list)
    • Gourd: Make sure that home folders are accessible, all virtual machines are running
    • Einstein: Make sure that LDAP and all e-mail services (dovecot, spamassassin, postfix, mailman) are running
    • Roentgen: Make sure website/MySQL are available
    • Jalapeno: Make sure that named (DNS) and CUPS are running
    • Lentil: Verify that backups ran successfully overnight. Check space on backup drives, and add new drives as needed.
  3. Check Splunk: open https://pumpkin.farm.physics.unh.edu:8000 if you're in the server room, or open localhost:8000 (use https) from Pumpkin
    • Check logs for errors, keep an eye out for other irregularities.
  4. Check Cacti: http://roentgen.unh.edu/cacti
    • Verify that temperatures are acceptable.
    • Monitor other graphs/indicators for any unusual activity.
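
The real check is /usr/local/bin/serversup.py, which isn't reproduced here. As a rough stand-in, the sketch below pings each host and then tries the service ports called out in the list above (LDAP and mail on einstein, the web server on roentgen, DNS and CUPS on jalapeno). The host names and port choices are assumptions, not a copy of the real script:

<pre>
#!/usr/bin/env python
# Rough stand-in for the daily "is everything up" pass: ping each host, then
# try a TCP connection to the service ports that matter on it.  The host/port
# map is an assumption based on the checklist above, not a copy of the real
# serversup.py.
import os
import socket
import subprocess

CHECKS = {
    "gourd":    [],               # home directories / VMs: check by hand
    "einstein": [389, 25, 143],   # LDAP, SMTP (postfix), IMAP (dovecot)
    "roentgen": [80],             # website (MySQL may only listen locally)
    "jalapeno": [53, 631],        # named, CUPS
    "lentil":   [],               # backups: check the overnight run instead
}

DEVNULL = open(os.devnull, "w")

def pingable(host):
    # One echo request with a two-second timeout (Linux ping options)
    return subprocess.call(["ping", "-c", "1", "-W", "2", host],
                           stdout=DEVNULL, stderr=DEVNULL) == 0

def port_open(host, port):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(3)
    try:
        s.connect((host, port))
        return True
    except socket.error:
        return False
    finally:
        s.close()

for host in sorted(CHECKS):
    if not pingable(host):
        print("%-10s DOWN (no ping reply)" % host)
        continue
    closed = [port for port in CHECKS[host] if not port_open(host, port)]
    if closed:
        print("%-10s up, but not answering on port(s) %s" % (host, closed))
    else:
        print("%-10s up" % host)
</pre>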

Weekly Tasks

These are things that should be done once every 7 days or so.

  1. Check physical interface connections
    • Verify that all devices are connected appropriately, that cables are labeled properly, and that all devices (including RAID and IPMI cards) are accessible on the network (a reachability sketch follows this list).
  2. Check Areca RAID interfaces
    • The RAID interfaces on each machine are configured to send e-mail to the administrators if an error occurs. It may still be a good idea to log in and check them manually on occasion as well, just for the sake of caution.
  3. Clean up the server room, sweep the floors.
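
For the "accessible on the network" part of item 1, here is a small sketch that just confirms each management interface (Areca RAID web GUI, IPMI card) still answers on its web port; the names and addresses are placeholders to be filled in from the real network documentation:

<pre>
#!/usr/bin/env python
# Confirm that each out-of-band management interface (Areca RAID web GUIs,
# IPMI cards) still answers on its web port.  The names and addresses below
# are placeholders -- fill them in from the real network documentation.
import socket

INTERFACES = {
    "gourd-areca":   ("192.168.1.10", 80),   # placeholder address
    "taro-areca":    ("192.168.1.11", 80),   # placeholder address
    "einstein-ipmi": ("192.168.1.20", 80),   # placeholder address
}

for name in sorted(INTERFACES):
    addr, port = INTERFACES[name]
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(5)
    try:
        s.connect((addr, port))
        print("%-15s reachable at %s:%d" % (name, addr, port))
    except socket.error as exc:
        print("%-15s NOT reachable at %s:%d (%s)" % (name, addr, port, exc))
    s.close()
</pre>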

Monthly Tasks

  1. Perform scheduled maintenance on the server room air conditioning units (see Enviromental_Control_Info#Scheduled_Maintenance).
  2. Check S.M.A.R.T. information on all server hard drives (see the smartctl sketch after this list)
    • Make a record of any drives which are reporting errors or nearing failure.
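
A minimal sketch for the S.M.A.R.T. check, assuming smartmontools is installed and the drives appear as ordinary /dev/sd? devices (drives sitting behind an Areca controller need smartctl's -d areca,N device type instead):

<pre>
#!/usr/bin/env python
# Monthly SMART health pass: run "smartctl -H" against each local drive and
# note anything that is not reporting healthy.  Assumes smartmontools is
# installed and the drives show up as plain /dev/sd? devices; disks sitting
# behind an Areca controller need smartctl's "-d areca,N" device type instead.
import glob
import subprocess

summary = []
for dev in sorted(glob.glob("/dev/sd?")):
    proc = subprocess.Popen(["smartctl", "-H", dev],
                            stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    output = proc.communicate()[0].decode("utf-8", "replace")
    healthy = "PASSED" in output or "OK" in output
    summary.append((dev, healthy))
    if not healthy:
        print(output)   # keep the full report for the monthly record

for dev, healthy in summary:
    print("%s: %s" % (dev, "healthy" if healthy else "CHECK THIS DRIVE"))
</pre>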

Annual Tasks

These are tasks that are necessary but not critical, or that might require some amount of downtime. These should be done during semester breaks (probably mostly in the summer), when we're likely to have more time and when downtime won't have as detrimental an impact on users.

  1. Server software upgrades
    • Kernel updates, or updates for any software related to critical services, should only be performed during breaks to minimize the inconvenience caused by reboots, or unexpected problems and downtime.
  2. Run fsck on data volumes (see the sketch after this list)
  3. Clean/Dust out systems
  4. Rotate old disks out of RAID arrays
  5. Take an inventory of our server room / computing equipment
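
For the fsck item, note that a full check should only be run on an unmounted volume. The sketch below therefore only does a read-only pass (fsck -n) and skips anything that is still mounted; the device list is a placeholder and should come from the actual data volumes in /etc/fstab:

<pre>
#!/usr/bin/env python
# Read-only fsck pass over the data volumes during a break.  Only runs
# "fsck -n" (no repairs) and skips anything that is still mounted; unmount the
# volume and drop the -n if a real repair is needed.  The device list is a
# placeholder -- use the actual data volumes from /etc/fstab.
import subprocess

DATA_VOLUMES = ["/dev/mapper/data-home", "/dev/sdb1"]   # placeholders

def mounted_devices():
    devices = set()
    for line in open("/proc/mounts"):
        devices.add(line.split()[0])
    return devices

mounted = mounted_devices()
for dev in DATA_VOLUMES:
    if dev in mounted:
        print("%s is mounted -- unmount it before checking, skipping" % dev)
        continue
    print("Checking %s (read-only)..." % dev)
    ret = subprocess.call(["fsck", "-n", dev])
    if ret != 0:
        print("fsck reported problems on %s (exit %d)" % (dev, ret))
</pre>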