Difference between revisions of "Sysadmin Todo List"

From Nuclear Physics Group Documentation Pages
This is an unordered set of tasks.  Detailed information on any of the tasks typically goes in related topics' pages, although usually not until the task has been filed under [[Sysadmin Todo List#Completed|Completed]].
== Important ==
 
=== Einstein Upgrade ===
 
[http://www.redhat.com/docs/manuals/enterprise/RHEL-5-manual/Deployment_Guide-en-US/index.html Massive amount of deployment documentation for RHEL 5]
 
# Pick a date within the next week '''Monday, 7/23/2007'''
 
# Send an e-mail to Aaron, warning him of the future takedown of tomato '''Done'''
 
# Update Tomato to RHEL5 '''Installed w/ basic configuration (auth, autofs, etc)'''
 
# Check all services einstein currently provides. Locate as many custom scripts, etc. as is reasonable and label/copy them.
 
## [[DNS]] ''Installed, set up, working''
 
## [[LDAP]] ''Installed, set up, working.'' Changed config files on tomato and einstein to do replication, but their LDAP services need to be restarted. Need to schedule a time to do it on einstein. Double-check the configs!
 
## [[Postfix]] ''Installed, set up, working!''
 
## [[AMaViS]] ''Installed, set up''
 
## [[ClamAV]] ''Installed, set up''
 
## [[SpamAssassin]] ''Installed, set up, working? (need to test to make sure)''
 
## [[Cyrus Imap|IMAP]] <code>cyradm localhost</code> gives "cannot connect to server". This all seems to be SASL-related. It'd probably be easy if there were a way to have Cyrus use PAM. <del>[http://www.openldap.org/doc/admin23/sasl.html LDAP and sasl]</del> <ins>Never mind, that has to do with using SASL to authenticate LDAP.</ins> <code>saslauthd -v</code> lists pam and ldap as available authentication mechanisms, and /etc/sysconfig/saslauthd has an entry "MECH=pam"&hellip;! What am I missing? '''Tried making a new "mail.physics.unh.edu.crt" for tomato, but couldn't, because that would have required revoking einstein's cert of the same name. Tried using "tomato.unh.edu.crt" and "tomato.unh.edu.key", but they give the same results as the "mail.physics.unh.edu.*" pair copied from einstein.''' Tried using tomato's UNH address instead of its hostname: same result. '''I'm able to log in using the <code>imtest</code> program, but the server doesn't send the same messages as shown [http://cyrusimap.web.cmu.edu/twiki/bin/view/Cyrus/ImtestByHand here].'''
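
One way to narrow down where the SASL chain is breaking (a sketch; the test account and password are placeholders, and it assumes the usual cyrus-sasl/cyrus-imapd test utilities are installed):

 # Does saslauthd itself accept a known-good login via PAM?
 testsaslauthd -u testuser -p 'testpass' -s imap
 # Is Cyrus actually told to use saslauthd? Check /etc/imapd.conf for:
 #   sasl_pwcheck_method: saslauthd
 #   sasl_mech_list: PLAIN LOGIN
 # Then test the IMAP login path end to end:
 imtest -m login -a testuser localhost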
 
## [[automount|/home]] ''Installed, set up, working''
 
## [[Samba]] ''Installed, set up, working.'' If anyone needs samba access, they need to find us and have us make them a samba account. No LDAP integration.
 
## [[Web Servers|Web]]?
 
## Fortran compilers and things like that? (Also needs compat libs--'''Nope, tomato is 32-bit.''')
 
# Clone those services to tomato
 
# Switch einstein <-> tomato, and then upgrade what was originally einstein
 
# Look into making a failover setup between einstein and tomato.
 
  
=== Miscellaneous ===
* Booted tomato with a Xen kernel. I'm going to try to set up some VMs. Apparently VMs of RHEL are free with the host's license. It should be easier to experiment with IMAP and the other upgrade issues (such as the swap) with some VMs running on the network. [http://www.cl.cam.ac.uk/research/srg/netos/xen/readmes/user/user.html#SECTION01110000000000000000 Xen Docs] Tomato isn't cooperating with the installation, as usual. We need to resize or rearrange the data partition to give some space for installations. Apparently it's not possible to simply resize an LVM partition. '''Something very wrong happened to tomato: I changed grub.conf to boot at runlevel 3 (normal, excludes X) and rebooted, but there was a kernel panic mentioning not being able to find vg_tomato. I rebooted again at the default runlevel and got the same error. I booted with the install disk, and it doesn't see any LVM anywhere.'''
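
For what it's worth, LVM logical volumes (as opposed to the physical partitions underneath them) can usually be shrunk to make room for guest volumes; a rough sketch, assuming an ext3 filesystem on /dev/vg_tomato/data, with all sizes purely illustrative:

 umount /data
 e2fsck -f /dev/vg_tomato/data            # required before shrinking ext3
 resize2fs /dev/vg_tomato/data 340G       # shrink the filesystem below the new LV size
 lvreduce -L 350G /dev/vg_tomato/data     # then shrink the LV itself
 resize2fs /dev/vg_tomato/data            # grow the fs back to fill the LV exactly
 lvcreate -L 20G -n xen_guest1 vg_tomato  # freed extents become a guest volume
 mount /data
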
* The weather might be getting too cold for running two air conditioners. The top one has been having some issues. Yesterday I came in and it had left the floor wet. Today, it had collected a major amount of ice and started to flash its lights and beep. I turned it off after that. '''The other day I came in and both were coated in ice and the room was stifling. I defrosted with the door wide open and fans on high. Any chance we can start leaving the window open without running environmental risks to the machines?'''
* Swap jalapeno power supply. No need to schedule downtime, considering it's always down. '''Has this been done? It's been up all week.'''
* Idea for a future virtual machine: set it up with vital services like LDAP and mail, and let it get the latest and greatest updates. Test that these updates don't break anything, then push the packages to the npg-custom repository.
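
If we go that route, publishing the vetted packages is mostly a matter of copying the RPMs into the repository directory and regenerating the metadata; a minimal sketch, with the package names and both paths being guesses:

 # fetch the packages that passed testing (yumdownloader is in yum-utils)
 yumdownloader --destdir /tmp/vetted openssl postfix
 scp /tmp/vetted/*.rpm einstein:/var/www/html/npg-custom/rhel-5/i386/
 # rebuild the repo metadata so clients pick up the new packages
 createrepo --update /var/www/html/npg-custom/rhel-5/i386/
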
* Pauli crashes nearly every day, and the crashes don't line up with the backup runs. We need to set up detailed system logging to find out why.
* Pauli2 and 4 don't give out their data via /net to the other paulis. This doesn't seem to be an autofs setting, since I see nothing about it in the working nodes' configs. Similarly, 2, 4, and 6 won't access the other paulis via /net. 2 and 4 were nodes we rebuilt this summer, so it makes sense that they don't have the right settings, but 6 is a mystery.
* Pauli2's hard drive may be dying. Some files in /data are inaccessible, and smartctl shows a large number of errors (98, if I'm reading this right...). Time to get Heisenberg a new hard drive? '''Or maybe just wean him off of NPG&hellip;''' It may be done for: we can't connect to pauli2, and rebooting didn't seem to help. Need to set up the monitor/keyboard for it and check things out.
* Learn how to use [[cacti]] on okra. Seems like a nice tool, mostly set up for us already. '''Find out why lentil and okra (and tomato?) aren't being read by [[cacti]]. Could be related to the warnings that repeat in ''okra:/var/www/cacti/log/cacti.log''.''' Not related to the warnings; those are for other machines that are otherwise being monitored. <font color="blue">Try adding cacti to the exclude list in access.conf</font> Never mind, lentil doesn't have any restrictions. Need to find out the requirements for a machine to be monitored by cacti/rrdtool. The documentation makes it sound like only the cacti host needs any configuration, but I'm dubious. '''Ahh, it looks like every client has a file snmpd.conf, which affects what can be graphed.''' Tried configuring things on improv as in the Cacti HowTo, but no go. Must be some other settings as well. '''At some point on Friday, cacti stopped being able to monitor einstein. Update-related? There are no errors in cacti.log, but the status page for einstein just says "down".'''
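
For reference, the per-client piece is usually just a read-only community in /etc/snmp/snmpd.conf that lets the cacti host walk the standard MIBs; a sketch, with the community string as an assumption:

 # /etc/snmp/snmpd.conf on the machine to be graphed
 rocommunity npgcacti okra.unh.edu
 # restart snmpd, then verify from okra:
 service snmpd restart
 snmpwalk -v 2c -c npgcacti improv.unh.edu system
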
* Install the right SNMP stuff on tomato so that it can be graphed
 
* '''jalapeno hangups:''' Look at sensors on jalapeno, so that cacti can monitor the temp. The crashing probably isn't the splunk beta (no longer beta!), since it runs entirely in userspace. '''lm_sensors fails to detect anything readable. Is there a way around this?'''
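
If lm_sensors really can't see a hardware monitoring chip, IPMI may be a way around it (a sketch; whether jalapeno actually has a BMC is an assumption):

 yum install OpenIPMI ipmitool
 service ipmi start
 ipmitool sdr type Temperature   # lists temperature sensors exposed by the BMC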
 
* Try to pull as much data from Jim Williams's old drives as possible, if there's even anything on them. '''They seem dead. Maybe we can swap the controller board from one drive onto the other and see if it works?''' What room is he in? His computer is working now (the ethernet devices will have to be changed to a non-farm setup once the machine is back in his office).
 
  
== Ongoing ==
=== Documentation ===
 
* '''<font color="red" size="+1">Maintain the Documentation of all systems!</font>'''
 
** Main function
 
** Hardware
 
** OS
 
** Network
 
* Continue homogenizing the configurations of the machines.
 
* Improve documentation of [[Software Issues#Mail Chain Dependencies|mail software]], specifically SpamAssassin, Cyrus, etc.
 
=== Maintenance ===
 
* Check e-mails to root every morning
 
* Resize/clean up partitions as necessary. It seems to be a running trend that a machine hits 0 free space and problems crop up. Symanzik and bohr seem imminent. '''Yup, bohr died. Expanded his root by 2.5 gigs. Still serious monitor problems though, temporarily bypassed with vesa...''' Bohr's problem seems tied to the nvidia drivers; let's wait until the next release and see how those work out.
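
When a root volume does fill up, the usual sequence on the LVM-backed machines is to find the space hog and, if the volume group has free extents, grow the logical volume online; a sketch, with the VG/LV names as guesses:

 df -h /
 du -xsk /var/* /usr/* | sort -n | tail   # find what is eating the space
 lvextend -L +2.5G /dev/VolGroup00/LogVol00
 resize2fs /dev/VolGroup00/LogVol00       # ext3 can be grown while mounted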
 
* Check up on security [http://www.redhat.com/docs/manuals/enterprise/RHEL-5-manual/Deployment_Guide-en-US/ch-sec-network.html#ch-wstation]
 
  
=== On-the-Side ===
* See if we can get the busted printer in 322 to work down here.
 
* Certain settings are similar or identical for all machines, such as resolv.conf.  It would be beneficial to write a program to do remote configuration.  This would also simplify the process of adding/upgrading machines.  '''Since resolv.conf was mentioned, I made a [[Script Prototypes#setres|prototype]] that seems to work.''' Another idea that was tossed around was a program that periodically compared such files against master copies, to see if the settings somehow got changed. '''Learn how to use ssh-agent for most of these tasks'''
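
A throwaway version of that push script could look like the following (the host list and master-copy path are illustrative); ssh-agent keeps it from prompting for the key passphrase on every host:

 #!/bin/bash
 eval $(ssh-agent)
 ssh-add ~/.ssh/id_rsa
 for host in pauli1 pauli3 pauli5 pepper taro; do
     scp /root/masters/resolv.conf root@$host:/etc/resolv.conf
     ssh root@$host 'md5sum /etc/resolv.conf'   # quick sanity check
 done
 ssh-agent -k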
 
* Backup stuff: We need exclude filters on the backups. We need to plan and execute extensive tests before modifying the production backup program. Also, see if we can implement some sort of NFS user access. '''I've set up both filters and read-only snapshot access to backups at home. It uses what essentially amounts to a bash-script version of the fancy perl thing we use now, only far less sophisticated. However, the filtering uses a standard rsync exclude file (syntax in the man page), and the user access is fairly obvious NFS read-only hosting.''' <font color="green"> I am wondering if this is needed. The current scheme (i.e. the perl script) handles excludes by having a .rsync-filter in each of the directories where you want contents excluded. This has worked well. See ~maurik/tmp/.rsync-filter . The current script takes care of some important issues, like incomplete backups.</font> Ah. So we need to get users to somehow keep that .rsync-filter file fairly up to date, and to get them to use data to hold things, not home. Also, I wasn't suggesting we get rid of the perl script; I was saying that I've become familiar with a number of the things it does. [http://www.redhat.com/docs/manuals/enterprise/RHEL-5-manual/Deployment_Guide-en-US/ch-acls.html#s2-acls-mounting-nfs] '''Put this on the back burner for now, since the current rate of backup disk consumption will give about 10 months before the next empty disk is needed.'''
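
For concreteness, the two pieces being discussed look roughly like this (paths and the network range are placeholders, not our production config):

 # rsync's -F option picks up a .rsync-filter file in each directory it visits,
 # so users can maintain their own excludes:
 rsync -aF --delete /home/ lentil:/backups/home/
 # an example ~/.rsync-filter a user might keep:
 #   - scratch/
 #   - *.tmp
 # read-only NFS access to the snapshots, in lentil:/etc/exports:
 #   /backups  10.0.0.0/24(ro,root_squash,no_subtree_check)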
 
  
== Waiting ==
* It turns out that business services didn't actually order the new server.
* That guy's computer has a BIOS checksum error. Flashing the BIOS to the newest version succeeds, but doesn't fix the problem. No obvious mobo damage either. What happened? '''Who was that guy, anyhow?''' (Silviu Covrig, probably) The machine is gluon, according to him. '''Waiting on ASUS tech support for warranty info.''' Aaron said it might be power-supply-related. '''Nope, definitely not: used a known-good PSU and still got the error, and reflashed the BIOS with it and still got the error. Got the RMA, sending it out on Wednesday. Waiting on ASUS to send us a working one!''' Called ASUS on 8/6; they said it's getting repaired right now. '''Woohoo! Got a notification that it shipped!''' ...they didn't fix it... It still has the EXACT same error it had when we shipped it to them. '''What should we do about this?'''
* Printer queue for Copier: Konica Minolta Bizhub 750. IP=pita.unh.edu '''Seems like we need info from the Konica guy to get it set up on Red Hat. The installation documentation for the driver doesn't mention things like the passcode, because those are machine-specific. Katie says that if he doesn't come on Monday, she'll make an inquiry.''' <font color="green">Mac OS X now working; IT guy should be here the week of June 26th</font> '''Did he ever come?''' No, he didn't, and he did not respond to the voice message we left. Will call again.
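
Once we get the PPD and the machine-specific passcode from the Konica tech, creating the queue on the Red Hat side should amount to something like this (the device URI and PPD path are assumptions):

 lpadmin -p copier -E -v socket://pita.unh.edu:9100 -P /usr/share/cups/model/KM750.ppd
 lpstat -p copier          # confirm the queue is up
 lp -d copier /etc/motd    # throwaway test page
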
  
== Completed ==
* The computer that was once improv, then quark, is now feynman. Jim Williams had the name quark first.
 
* Removed the ancient kernels from symanzik's boot partition, because they were taking up space needed by kernels from up2date
 
* Figure out how to password-protect a webpage for Silas. He hosts from his personal space on nuclear.unh.edu '''http://httpd.apache.org/docs/2.0/howto/auth.html#gettingitworking'''
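
The short version of what that Apache document describes, assuming his pages live under ~silas/public_html and that AllowOverride permits AuthConfig there:

 # create the password file outside the web tree
 htpasswd -c /home/silas/.htpasswd silas
 # ~silas/public_html/private/.htaccess
 #   AuthType Basic
 #   AuthName "Restricted"
 #   AuthUserFile /home/silas/.htpasswd
 #   Require valid-user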
 
* pepper, taro got kernel updates, but we have to wait on some users to finish some long jobs. <del>lzana says that he'll be finished around Tuesday (9/25/2007), so we should send out an e-mail on Monday to inform others. Tentatively schedule the reboots for that Friday.</del> He's still running his jobs. New estimate for completion is over the weekend. '''Done on 9/28'''
 
* Jalapeno was hung when I came in, so I took the opportunity to boot it with the latest uniprocessor kernel.  Let's see how long it can last with this.  If it hangs again soon, then the issue probably isn't the SMP kernel. '''"Found" a newer SMP kernel, <del>but it panics on boot.</del>''' Tried the SMP kernel again, and this time it made it through startup. Let's see how long it goes this time. It could be a power issue (as with taro). '''Hung up again this morning (8/22). Let's look into the power angle.''' Hung up last night with the single-processor kernel (8/24). '''Hung sometime between the mornings of 8/26 and 8/27. Restarted this morning (8/27) with the default kernel.''' Was hung on 9/03 when the backup script came by, judging by the email records. '''Stopped by Wednesday night and noticed jalapeno had panicked. Copied down everything visible on-screen. Maybe we can use this to narrow down what's happening here... (9/12, 7:40)''' Unsolved, just tidying; the todo list has another jalapeno entry.
 
* Removed the "delete" set of fake users from the database
 
* Updated clamav on roentgen <del>so that the logwatch e-mails aren't filled with warnings about it being out-of-date. Had to create a symbolic link <code>ln -s libclamav.so.2.0.8 libclamav.so.1</code> and slightly modify the /etc/clamd.conf file to get service clamd to start</del> Turns out that clamd wanted libclamav.so.1 because it was an old version. Updating the clamd package fixed everything, and let me revert clamd.conf to its proper form. I wonder what led them to decide to have two separate packages if one depends on the other&hellip;
 
* The latest kernel on pepper doesn't have an SMP version? Had to go back to the second-most-recent one. '''Using the most recent version now; it just wasn't automatically added to the lilo menu.'''
 
* Fixed some things warned about by UNH's "Nessus":
 
** snmpd (mainly used for monitoring by cacti) on pepper was accepting the default community for v1 and v2c. ''/etc/snmp/snmpd.conf'' was changed to be more like einstein's and to not accept the public community. <code>snmpwalk -c public -v 1 localhost</code> and <code>snmpwalk -c public -v 2c localhost</code> return with "No response" so the changes seem to have worked.
 
** Issue with named on tomato is mysterious. Removed some Aaron chaff, but it's otherwise identical to einstein's ''/etc/named.conf''
 
* Wilson can't download ''http://physics.unh.edu/wheel/custom/rhel-4/i386/repodata/repomd.xml'' and get updates from the npg-custom repository. <code>wget</code> says that a connection is made, but gets a 404 File Not Found error. I'm not sure that it's a settings issue, because blackbody is able to successfully download the file with wget and Opera. '''Bigger trouble: Parikshit rebooted wilson while I was trying to solve some interdependency problems, and so "e2fsprogs" was uninstalled. This package contains programs dealing with ext2 filesystems, including fsck. So, when wilson tries to boot, it wants to do a check, but can't find fsck, and just gives up booting. I tried using the backtrack CD, mounting the LVM, chrooting into it, and using yum, but it's not finding e2fsprogs in the two repositories that I added. This could probably be fixed if the http issue is figured out.''' Fortunately for Parry, wilson dual-boots Windows, so he can run that in the meantime. '''Where is wilson, anyway?''' It's the computer nearest the window in 303. '''Set Parry up with ennui for now. Wilson is currently in 202. My plan is to just install CentOS 5 on it.'''
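
For the record, the rescue-mode route that should work once the repository problem is sorted out (the VG/LV names are guesses):

 # boot the install disk with: linux rescue
 lvm vgchange -ay
 mount /dev/VolGroup00/LogVol00 /mnt/sysimage
 mount --bind /dev  /mnt/sysimage/dev
 mount --bind /proc /mnt/sysimage/proc
 chroot /mnt/sysimage
 yum install e2fsprogs      # or: rpm -Uvh e2fsprogs-*.rpm fetched by hand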
 
* Set up Jim Williams's dual-boot. I might need to come in after midnight to catch him, considering his scheduling... '''Installed CentOS 5; it takes forever on the system message bus at boot. Leaving it overnight to see if it fixes itself.''' Nope. Odds are it's LDAP. '''It had some major errors in the ldap.conf files (BASEDN is dc=physics,dc=unh,dc=edu, not dc=einstein,dc=edu,dc=edu). Still hanging on the message bus, though.''' It seems to be unable to connect to NPG stuff, but only in certain ways. It can ping einstein and ssh into it, but can't scp or be ssh'd into. It can reach any website that I've tried except for physics.unh.edu. '''It was having ennui's MTU problem. Maybe this issue is caused by that port on the switch?'''
 
* '''Got sick of this, so I created a new account for myself (uid "steve").''' Steve can't log into roentgen. I don't appear in <code>getent passwd | grep mccoyst</code>, but that's the case on several other machines that I can log into, such as einstein. However, <code>getent passwd mccoyst</code> ''does'' return my info on einstein, but not on roentgen. <code>ldapsearch -x '(uid=mccoyst)'</code> returns me on roentgen. Fake user "fdelete" had my uid as its gid, so I finally deleted it and the other deletes "just in case". They still show up in roentgen's <code>getent passwd</code>, but not in ldap searches, and I was able to log in directly as jdelete and remotely as ndelete&hellip; Bwah??:
 
  root@roentgen:root>luserdel fdelete
 
  User fdelete does not exist.
 
  root@roentgen:root>userdel fdelete
 
  userdel: error deleting password entry
 
  userdel: error deleting shadow password entry
 
  userdel: error removing group entry
 
  userdel: error removing shadow group entry
 
  root@roentgen:root>id fdelete
 
  uid=4239(fdelete) gid=4235(fdelete) groups=4235(fdelete),4199(farm)
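
A guess at what's going on (the DNs below are assumptions about our tree layout): getent on roentgen may be serving stale entries out of nscd's cache, and/or the leftovers are local accounts on that box rather than LDAP entries. Things to try:

 service nscd restart            # or: nscd -i passwd && nscd -i group
 getent passwd fdelete           # still there?
 grep fdelete /etc/passwd        # local account?
 ldapsearch -x '(uid=fdelete)'   # LDAP entry?
 # if it really is still in LDAP, remove it there:
 ldapdelete -x -D 'cn=Manager,dc=physics,dc=unh,dc=edu' -W 'uid=fdelete,ou=People,dc=physics,dc=unh,dc=edu'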
 
  
== Previous Months Completed ==
[[Completed in June 2007|June 2007]]
 
  
[[Completed in July 2007|July 2007]]
  
[[Completed in August 2007|August 2007]]
  
[[Completed in September 2007|September 2007]]

Latest revision as of 16:42, 15 February 2015

This is the new Sysadmin Todo List as of 05/27/2010. The previous list was moved to Old Sysadmin Todo List. This list is incomplete and needs updating.

Projects

  • Convert physical machines and VMs to CentOS 6 for the compute servers (taro, endeavour) and all others to either 6 or 7.
    • VMs: Einstein
    • Physical: endeavour, taro, and gourd
  • Mailman: Clean up mailman and make sure all the groups and users are in order.
  • CUPS: Look into getting CUPS authenticating users through LDAP instead of using Samba.
  • Printer: Get printtracker.py working, and see if a driver can be made to report the real page count; right now it always reports a value of 1, which corresponds to a job submission rather than the number of pages printed.
  • Check /etc/apcupsd/shutdown2 script on Gourd to make sure all the keys are correctly implemented so the machines go down properly during a power outage.
  • Do a check on Lentil to see if there is any unnecessary data being backed up.
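
For the CUPS item above: CUPS authenticates through PAM (service name "cups"), so pointing that PAM service at the same LDAP-aware system stack is probably the cleanest route; a sketch, assuming nss_ldap/pam_ldap is already configured on the print server:

    # /etc/pam.d/cups -- lean on the system-wide stack, which already knows about LDAP
    auth     include   system-auth
    account  include   system-auth

    # /etc/cups/cupsd.conf -- then require authentication where it matters
    DefaultAuthType Basic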

Daily Tasks

These are things that should be done every day when you come into work.

  1. Do a physical walk-through/visual inspection of the server room
  2. Verify that all systems are running and all necessary services are functioning properly
    • For a quick look at which systems are up you can use /usr/local/bin/serversup.py
    • Gourd: Make sure that home folders are accessible, all virtual machines are running
    • Einstein: Make sure that LDAP and all e-mail services (dovecot, spamassassin, postfix, mailman) are running
    • Roentgen: Make sure website/MySQL are available
    • Jalapeno: Make sure that named and CUPS are running
    • Lentil: Verify that backups ran successfully overnight. Check space on backup drives, and add new drives as needed.
  3. Check Splunk: https://pumpkin.farm.physics.unh.edu:8000 if you're in the server room, or open localhost:8000 (use https) from Pumpkin
    • Check logs for errors, keep an eye out for other irregularities.
  4. Check Cacti: http://roentgen.unh.edu/cacti
    • Verify that temperatures are acceptable.
    • Monitor other graphs/indicators for any unusual activity.
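
A small wrapper along these lines could back up serversup.py for the service checks (the host/service pairs are illustrative, and it assumes the SysV-init machines; CentOS 7 boxes would use systemctl instead):

    #!/bin/bash
    check() {
        ssh root@$1 "service $2 status >/dev/null 2>&1" \
            && echo "$1: $2 ok" || echo "$1: $2 NOT RUNNING"
    }
    check einstein ldap      # slapd
    check einstein postfix
    check einstein dovecot
    check roentgen httpd
    check roentgen mysqld
    check jalapeno named
    check jalapeno cups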

Weekly Tasks

These are things that should be done once every 7 days or so.

  1. Check physical interface connections
    • Verify that all devices are connected appropriately, that cables are labeled properly, and that all devices (including RAID and IPMI cards) are accessible on the network.
  2. Check Areca RAID interfaces
    • The RAID interfaces on each machine are configured to send e-mail to the administrators if an error occurs. It may still be a good idea to log in and check them manually on occasion as well, just for the sake of caution.
  3. Clean up the server room, sweep the floors.

Monthly Tasks

  1. Perform scheduled maintenance on the server room air conditioning units.
  2. Check S.M.A.R.T. information on all server hard drives
    • Make a record of any drives which are reporting errors or nearing failure.
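
For the S.M.A.R.T. check, something like the following per drive (device names vary from machine to machine, and drives behind the Areca controllers need smartctl's vendor passthrough syntax):

    smartctl -H /dev/sda       # overall health verdict
    smartctl -A /dev/sda | egrep -i 'reallocat|pending|uncorrect'
    # behind an Areca card, address disks through the SCSI generic device, e.g.:
    # smartctl -a -d areca,1 /dev/sg0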

Annual Tasks

These are tasks that are necessary but not critical, or that might require some amount of downtime. These should be done during semester breaks (probably mostly in the summer), when we're likely to have more time and when downtime won't have as detrimental an impact on users.

  1. Server software upgrades
    • Kernel updates, or updates for any software related to critical services, should only be performed during breaks to minimize the inconvenience caused by reboots, or unexpected problems and downtime.
  2. Run fsck on data volumes
  3. Clean/Dust out systems
  4. Rotate old disks out of RAID arrays
  5. Take an inventory of our server room / computing equipment

A draft seasonal schedule, kept commented out in the page source, assigns some of these tasks to specific breaks:

  • Summer Break: major kernel upgrades; run fsck; clean (dust off/filters) while systems are shut down
  • Thanksgiving Break: nothing assigned yet
  • Winter Break: upgrade RAID disks (upgrade only disks connected to a RAID card)
  • Spring Break: nothing assigned yet