Difference between revisions of "Sysadmin Todo List"

From Nuclear Physics Group Documentation Pages
This is an unordered set of tasks. Detailed information on any of the tasks typically goes in the related topics' pages, although usually not until the task has been filed under [[Sysadmin Todo List#Completed|Completed]].

This is the new Sysadmin Todo List as of 05/27/2010. The previous list was moved to [[Old Sysadmin Todo List]]. This list is incomplete and needs updating.
== Daily Check off list ==

Each day when you come in, check the following:

# <font color="red">Do not update nss_ldap on RHEL5 machines until they fix it</font>
# Einstein ([[Script Prototypes|script]]):
## Up and running?
## Disks less than 90% full?
## Mail system OK? (spamassassin, amavisd, ...)
# Temperature OK? No water blown into the room?
# Systems up: Taro, Pepper, Pumpkin/Corn?
# Backups:
## Did the backup succeed?
## Does Lentil need a new disk?
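The "disks less than 90% full" item above can be scripted rather than checked by hand. A minimal sketch using Python's standard library; the mount points listed are illustrative, not a statement of einstein's actual layout:

```python
import shutil

def disk_usage_percent(path):
    """Return the percentage of the filesystem at `path` that is in use."""
    usage = shutil.disk_usage(path)
    return usage.used / usage.total * 100

def check_disks(paths, threshold=90.0):
    """Return (path, percent) pairs for filesystems at or above the threshold."""
    return [(p, disk_usage_percent(p)) for p in paths
            if disk_usage_percent(p) >= threshold]

if __name__ == "__main__":
    # Mount points to watch are examples; substitute the real ones.
    for path, pct in check_disks(["/"]):
        print(f"WARNING: {path} is {pct:.1f}% full")
```

A script like this could run from cron and mail root only when something crosses the threshold, which fits the "check e-mails to root" routine below.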
 
  
== Projects ==

*Convert physical machines and VMs to CentOS 6 for the compute servers ([[taro]], [[endeavour]]) and all others to either 6 or 7.
**VMs: Einstein
**Physical: [[endeavour]], [[taro]], and [[gourd]]
*Mailman: Clean up mailman and make sure all the groups and users are in order.
*CUPS: Look into getting CUPS to authenticate users through LDAP instead of using Samba.
*Printer: Get printtracker.py working, and see if you can get a driver to properly report the page count. Currently every job reports a count of 1, which corresponds to a job submission rather than the number of pages.
*Check the /etc/apcupsd/shutdown2 script on Gourd to make sure all the keys are correctly implemented, so the machines go down properly during a power outage.
*Check Lentil to see whether any unnecessary data is being backed up.

== Important ==

=== Towards a stable setup ===
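Related to the apcupsd item: when verifying the shutdown behavior it helps to sanity-check UPS state from a script. A sketch that parses `apcaccess status` style output, assuming apcupsd's usual `KEY : value` line format; the UPS name and values in the sample are made up:

```python
def parse_apcaccess(text):
    """Parse `apcaccess status` style output into a dict of field -> value."""
    status = {}
    for line in text.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            status[key.strip()] = value.strip()
    return status

# Hypothetical sample output; real output comes from running `apcaccess status`.
SAMPLE = """\
UPSNAME  : gourd-ups
STATUS   : ONLINE
BCHARGE  : 100.0 Percent
TIMELEFT : 42.0 Minutes
"""

if __name__ == "__main__":
    info = parse_apcaccess(SAMPLE)
    if info.get("STATUS") != "ONLINE":
        print("UPS is not on line power -- check it!")
```

Feeding this the live `apcaccess` output (e.g. via subprocess) would let a cron job warn when STATUS leaves ONLINE or TIMELEFT gets short.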
  
Here are some thoughts, especially for Steve, about getting an über-stable setup for the servers.

Some observations:

# When we get to DeMeritt at the end of next summer, we need a setup that easily ports to the new environment. We will also be limited to a total of 10 kW of heat load (36000 BTU/hr, or 3 tons of cooling), due to the cooling capacity of the room. That sounds like a lot, but Silas and Jiang-Ming will also put servers in this space. Our footprint should be no more than 3 to 4 kW of heat load.
# Virtual systems seem like the way to go. However, our experience with Xen is that it does not lead to highly portable VMs.
# VMware Server is now a free product. They make money consulting and selling fancy add-ons. I have good experience with VMware Workstation on Macs and Linux. But it is possible (like Red Hat, which was once free) that they will start charging when they reach 90% or more market share.

Here are some options:

* We get rid of Tomato, Jalapeno, Gourd and Okra, and perhaps also Roentgen. If we want, we can scavenge the parts from Tomato and Jalapeno (plus the old einstein) for a toy system, or we park these systems in the corner. I don't want to waste time on them. The only bump I can think of here is that Xemed/Aaron use Gourd. Otherwise I think we're all in favor of cutting down on the number of physical machines we've got running. Oh, and what about the paulis? '''Since they're not under our "jurisdiction", they'll probably end up there anyhow.'''
* Test VMware Server (see [[VMWare Progress]]). Specifically, I would like to know:
## How easy is it to move a VM from one piece of hardware to another? (Can you simply move the disks?) '''Yes.'''
## Specifically, if you need to service some hardware, can you move the guest to other hardware with little downtime? (Clearly not for large disk arrays, like pumpkin, but that is storage, not hosts.) '''Considering the portability of the disks/files, the downtime is the time it takes to move the image around and start it up on another machine.'''
## Do we need a RedHat license for each VM, or do we only need a license for the host, as with Xen? '''It seems to consume a license per VM. Following [http://kbase.redhat.com/faq/FAQ_103_10754.shtm this] didn't work for the VMWare systems. The closest thing to an official word that I could find was [http://www.redhat.com/archives/taroon-list/2004-August/msg00292.html this].'''
## VMware allows for "virtual appliances", but how good are these really? Are they fast enough?
* Evaluate the hardware needs. Pumpkin, the new Einstein, Pepper, Taro and Lentil all seem to be of sufficient quality and up to date. Do we need other hardware? If so, what?

=== Miscellaneous ===

* Learn how to use sieve a little better, and then send an email out to all users to let them know about it, including an example to get rid of spam. [http://wiki.fastmail.fm/index.php?title=SieveRecipes] would be a helpful link to include, as well as sieveexample in minuti's home directory. That file should probably be pushed to all users right before the email is sent, so that they can just open the already-existing file and tweak and play if they so choose.
* Set up elog (looks like the firewall is all that's left).
* Set up the new benfranklin. '''ATI strikes again. Maybe the proprietary drivers work better, but these are so bad that FC9 dies from them.'''
* Bohr is slow with PDFs. Maybe it is time to put a newer distro on bohr?
* Fix some of the older workstations (parity, hobo, ennui, etc.).
* Set up a working Java plugin on mariecurie. Also, figure out if there's a consistent way to get it working easily on 64-bit machines. For Sarah's case, it's probably easier to just install 32-bit Firefox and 32-bit Java plugins.
* '''Mariecurie''': The new NVIDIA video card seems to have the same problems as the ATI ones.
* '''Monitoring''': I would like to see the new temp-monitor integrated with Cacti, and to fix some of the Cacti capabilities, i.e. tie it in with the sensors output from pepper and taro (and tomato/einstein). Set up sensors on corn/pumpkin. Have an intelligent way in which we are warned when conditions are too hot, a drive has failed, or a system is down. '''I'm starting to get the hang of getting this sort of data via SNMP. I wrote a perl script that pulls the temperature data from the environmental monitor, as well as some nice info from einstein. We SHOULD be able to integrate a rudimentary script like this into Cacti or Splunk, getting a bit closer to an all-in-one monitoring solution. It's in Matt's home directory, under code/npgmon/'''
* Check into smartd monitoring (and processing its output) on Pepper, Taro, Corn/Pumpkin, Einstein, and Tomato.
* Learn how to use [[cacti]]. We should consider using a VM appliance to do this, since it needs minimal configuration and okra's only purpose is to run Cacti.

==Daily Tasks==
These are things that should be done every day when you come into work.

#Do a physical walk-through/visual inspection of the server room.
#Verify that all systems are running and all necessary services are functioning properly.
#*For a quick look at which systems are up you can use /usr/local/bin/[[serversup.py]]
#*[[Gourd]]: Make sure that home folders are accessible and all virtual machines are running.
#*[[Einstein]]: Make sure that [[LDAP]] and all [[e-mail]] services (dovecot, spamassassin, postfix, mailman) are running.
#*[[Roentgen]]: Make sure the website/MySQL are available.
#*[[Jalapeno]]: named and CUPS.
#*[[Lentil]]: Verify that backups ran successfully overnight. Check space on the backup drives, and add new drives as needed.
#Check [[Splunk]]: [https://pumpkin.farm.physics.unh.edu:8000 click here if you're in the server room], or open localhost:8000 (use https) from [[Pumpkin]].
#*Check logs for errors, and keep an eye out for other irregularities.
#Check [[Cacti]]: [http://roentgen.unh.edu/cacti http://roentgen.unh.edu/cacti]
#*Verify that temperatures are acceptable.
#*Monitor other graphs/indicators for any unusual activity.
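The "which systems are up" check is what serversup.py does; that script isn't reproduced here, but the idea can be sketched in a few lines, assuming a host counts as up when its SSH port accepts a TCP connection. Hostnames are the servers named above; adjust as needed:

```python
import socket

def host_is_up(host, port=22, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Unresolvable or unreachable hosts simply report as DOWN.
    for host in ["gourd", "einstein", "roentgen", "jalapeno", "lentil"]:
        state = "up" if host_is_up(host) else "DOWN"
        print(f"{host}: {state}")
```

A TCP probe of the SSH port is a stronger signal than ping (it shows the machine booted far enough to start sshd), but it will report false DOWNs if a firewall drops the connection, so it complements rather than replaces the Splunk/Cacti checks.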
  
== Ongoing ==

=== Documentation ===

* '''<font color="red" size="+1">Maintain the documentation of all systems!</font>'''
** Main function
** Hardware
** OS
** Network
* Continue homogenizing the configurations of the machines.

=== Maintenance ===

* Check the e-mails to root every morning.
* Check up on security. [http://www.redhat.com/docs/manuals/enterprise/RHEL-5-manual/Deployment_Guide-en-US/ch-sec-network.html#ch-wstation]

=== On-the-Side ===

* Learn how to use ssh-agent for task automation.
* Backup stuff: We need exclude filters on the backups. We need to plan and execute extensive tests before modifying the production backup program. Also, see if we can implement some sort of NFS user access. '''I've set up both filters and read-only snapshot access to backups at home. It uses what essentially amounts to a bash script version of the fancy perl thing we use now, only far less sophisticated. However, the filtering and user access use a standard rsync exclude file (syntax in the man page), and the user access is fairly obvious NFS read-only hosting.''' <font color="green">I am wondering if this is needed. The current scheme (i.e. the perl script) uses excludes by having a .rsync-filter in each of the directories where you want contents excluded. This has worked well. See ~maurik/tmp/.rsync-filter. The current script takes care of some important issues, like incomplete backups.</font> Ah. So we need to get users to somehow keep that .rsync-filter file fairly updated, and to get them to use data to hold things, not home. Also, I wasn't suggesting we get rid of the perl script; I was saying that I've become familiar with a number of the things it does. [http://www.redhat.com/docs/manuals/enterprise/RHEL-5-manual/Deployment_Guide-en-US/ch-acls.html#s2-acls-mounting-nfs]
* Continue purging NIS from ancient workstations and replacing it with files. The following remain:
** pauli nodes -- Low priority!

==Weekly Tasks==
These are things that should be done once every 7 days or so.

#Check physical interface connections.
#*Verify that all devices are connected appropriately, that cables are labeled properly, and that all devices (including RAID and IPMI cards) are accessible on the network.
#Check the Areca RAID interfaces.
#*The RAID interfaces on each machine are configured to send e-mail to the administrators if an error occurs. It may still be a good idea to log in and check them manually on occasion as well, just for the sake of caution.
#Clean up the server room, sweep the floors.
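The per-directory .rsync-filter scheme discussed under On-the-Side uses rsync's standard filter-rule syntax (see the FILTER RULES section of the rsync man page). A small example of what such a file might contain; the patterns are only illustrations, not what ~maurik/tmp/.rsync-filter actually holds:

```
# Example .rsync-filter -- place it in the directory whose contents it governs.
# A "- pattern" line excludes matching files/directories from the backup.
- *.tmp
- *.o
- core
- scratch/
```

Because the backup script picks these up per directory, each user can prune their own junk without anyone touching the production backup configuration.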
  
== Waiting ==

* That guy's computer has a BIOS checksum error. Flashing the BIOS to the newest version succeeds, but doesn't fix the problem. No obvious mobo damage either. What happened? '''Who was that guy, anyhow?''' (Silviu Covrig, probably) The machine is gluon, according to him. '''Waiting on ASUS tech support for warranty info.''' Aaron said it might be power-supply-related. '''Nope. Definitely not. Used a known good PSU and still got the error; reflashed the BIOS with it and still got the error.''' '''Got the RMA, sending it out on Wednesday. Waiting on ASUS to send us a working one!''' Called ASUS on 8/6; they said it's getting repaired right now. '''Woohoo! Got a notification that it shipped!''' ...they didn't fix it... It still has the EXACT same error it had when we shipped it to them. '''What should we do about this?''' I'm going to call them up and have a talk, considering that the details of their shipment reveal they sent us a different motherboard, different serial number and everything, but with the same problem.

== Completed ==

* Subscribe all users to npg-users.
* Come up with a list of who's who in NPG. While Maurik's gone, it's going to be important to know who's directly in NPG, who's affiliated, and who just plain uses our stuff.
* Einstein upgrade project and status page: [[Einstein Status]]
* The switchover is complete, and has been working well. There are sure to be a few little things left to do, like little mailing list tweaks and software people want, but that's all our normal day-to-day things anyway.
* Improve documentation of mail software, specifically SpamAssassin, Cyrus, etc. '''We're now on an as-standard-as-you-can-get dovecot installation. The dovecot wiki has nearly all the documentation we need for our setup, as long as a couple of things are known:'''
## The specific tools involved are dovecot, postfix, and spamassassin. We've set up the dovecot-sieve plugin to allow sieve-style filtering, which should eventually be implemented for every user individually so that we can't get yelled at for marking good mail as spam, etc.
## At present, no antivirus is installed, although we should plan to add clamav or something similar. Not a HUGE concern, since most of our users aren't on Windows, but certainly something to be aware of and fix.
## Users with a low UID cannot log in. This is a security feature, going by the reasoning that system tools and daemons run as low UIDs and should never need to log into dovecot. As such, emails sent to root need to be sent somewhere that's actually read. A quick hack is in place at present which sends root's mail to minuti, implemented in the aliases file. Using the NPG-admins mailing list might be a better choice, or maybe even a root-mail mailing list.
* Decommission Okra. This system is way too outdated to bother with. Move Cacti to another system, perhaps a VM, once we get that figured out? '''Nobody has noticed okra being down for just about forever. Nobody used it, except Steve when he felt bored. Splunk should ideally be more than capable of doing everything okra did and then some.'''
* Gourd won't let me (Matt) log in, saying "no such file or directory" when trying to chdir to my home, and then it boots me off. Trying to log in as root from einstein is successful just long enough for it to tell me when the last login was, then it boots me. '''(Steve here) I was able to log in and do stuff, but programs were intermittently slow.''' This hasn't happened again. Just a fluke?
* Fermi has problems allowing me to log in. nsswitch.conf looks fine, and getent passwd shows all the users like it's supposed to. There are no restrictions in ''/etc/security/access.conf'', either. '''I don't care about fermi. Nobody has ever used it, and just about everything is higher priority. If people have legacy code to run, most of the workstations are outdated enough. If they really need power/speed, it's probably worth updating the code or dependencies to be runnable on pumpkin/corn. If there are actual requests for it, I'll get it working.'''
* Fixed backup on einstein.
* Roentgen moved to a Pumpkin virtual machine.
* Restored the MySQL database on Roentgen: [[Mysql database restore]].

==Monthly Tasks==

#Perform [[Enviromental_Control_Info#Scheduled_Maintenance|scheduled maintenance]] on the server room air conditioning units.
#Check S.M.A.R.T. information on all server hard drives.
#*Make a record of any drives which are reporting errors or nearing failure.

==Annual Tasks==
These are tasks that are necessary but not critical, or that might require some amount of downtime. These should be done during semester breaks (probably mostly in the summer), when we're likely to have more time and when downtime won't have as detrimental an impact on users.

#Server software upgrades
#*Kernel updates, or updates for any software related to critical services, should only be performed during breaks to minimize the inconvenience caused by reboots or by unexpected problems and downtime.
#Run fsck on data volumes.
#Clean/dust out systems.
#Rotate old disks out of RAID arrays.
#Take an inventory of our server room / computing equipment.
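The monthly S.M.A.R.T. check can be partly automated. A sketch that inspects `smartctl -H` output, assuming smartctl's usual ATA health-report wording; in practice the text would come from running smartctl against each drive:

```python
def smart_health_ok(smartctl_output):
    """Return True if `smartctl -H` output reports a passing health check."""
    for line in smartctl_output.splitlines():
        if "overall-health self-assessment test result" in line:
            return line.rstrip().endswith("PASSED")
    # No health line found: treat as not OK so someone looks at it.
    return False

# Sample of the line smartctl -H prints for a healthy ATA drive.
SAMPLE = "SMART overall-health self-assessment test result: PASSED\n"

if __name__ == "__main__":
    # Real use: subprocess.run(["smartctl", "-H", "/dev/sda"], ...).stdout
    print("healthy" if smart_health_ok(SAMPLE) else "check this drive")
```

Running this over every drive and logging the failures would produce exactly the "record of drives reporting errors or nearing failure" the task asks for; it complements, rather than replaces, the smartd monitoring mentioned under Miscellaneous.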
  
== Previous Months Completed ==

[[Completed in June 2007|June 2007]]

[[Completed in July 2007|July 2007]]

[[Completed in August 2007|August 2007]]

[[Completed in September 2007|September 2007]]

[[Completed in October 2007|October 2007]]

[[Completed in November/December 2007|NovDec 2007]]

[[Completed in January 2008|January 2008]]

[[Completed in February 2008|February 2008]]

[[Completed in March/April/May/June 2008|March/April/May/June 2008]] (I'm doing a great job keeping track of this, eh?)

<!--{| cellpadding="5" cellspacing="0" border="1"
! Time of Year !! Things to Do !! Misc.
|-
| Summer Break || ||
|-
|  || Major Kernel Upgrades ||
|-
|  || Run FDisk ||
|-
|  || Clean (Dust-off/Filters) while Systems are Shut down ||
|-
| Thanksgiving Break || ||
|-
| Winter Break || ||
|-
|  || Upgrade RAID disks || Upgrade only disks connected to a RAID card
|-
| Spring Break || ||
|-
|} -->

Latest revision as of 16:42, 15 February 2015
