Difference between revisions of "Sysadmin Todo List"

From Nuclear Physics Group Documentation Pages
This is an unordered set of tasks.  Detailed information on any of the tasks typically goes in related topics' pages, although usually not until the task has been filed under [[Sysadmin Todo List#Completed|Completed]].
== Important ==
 
=== Einstein Upgrade ===
 
[http://www.redhat.com/docs/manuals/enterprise/RHEL-5-manual/Deployment_Guide-en-US/index.html Massive amount of deployment documentation for RHEL 5]
 
# Pick a date within the next week '''Monday, 7/23/2007'''
 
# Send an e-mail to Aaron, warning him of the future takedown of tomato '''Done'''
 
# Update Tomato to RHEL5 '''Installed w/ basic configuration (auth, autofs, etc)'''
 
# Check all services einstein currently provides. Locate as many custom scripts, etc. as is reasonable and label/copy them.
 
## [[DNS]] ''Installed, set up, working''
 
## [[LDAP]] ''Installed, set up, working.'' Changed config files on tomato and einstein to do replication, but their LDAP services need to be restarted. Need to schedule a time to do it on einstein. Double-check configs! '''<code>sudo</code> is hanging for me. <code>groups</code> shows my groups, ''/etc/sudoers'' has the "domain_admins" line.''' It eventually returned, saying that my user number doesn't exist in the passwd file. Something missing from nsswitch.conf? '''Nope, it's just like blackbody's, which works.''' <code>getent passwd</code> has everybody listed, as well.
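A quick checklist for this kind of lookup problem (the uid is just an example; the base DN is a guess, so check ''/etc/ldap.conf''):
<pre>
# Is the account visible through NSS at all?
getent passwd mccoyst
id mccoyst

# Query LDAP directly, bypassing NSS (base DN is a guess -- see /etc/ldap.conf)
ldapsearch -x -b "dc=unh,dc=edu" "(uid=mccoyst)"

# Confirm which sources nsswitch actually consults
grep -E '^(passwd|group|shadow):' /etc/nsswitch.conf
</pre>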
 
## [[Postfix]] ''Installed, set up, working!''
 
## [[AMaViS]] ''Installed, set up''
 
## [[ClamAV]] ''Installed, set up''
 
## [[SpamAssassin]] ''Installed, set up, working? (need to test to make sure)''
 
## [[Cyrus Imap|IMAP]] <code>cyradm localhost</code> gives "cannot connect to server". This all seems to be SASL-related. It'd probably be easy if there were a way to have Cyrus use PAM. [http://www.openldap.org/doc/admin23/sasl.html LDAP and SASL]
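There is a way, for the record: Cyrus can hand plaintext logins to <code>saslauthd</code>, which can authenticate through PAM. A minimal sketch with RHEL-style paths (untested on our setup):
<pre>
# /etc/sysconfig/saslauthd -- have saslauthd authenticate via PAM
MECH=pam

# /etc/imapd.conf -- make Cyrus pass plaintext logins to saslauthd
sasl_pwcheck_method: saslauthd

# restart and test
service saslauthd restart
service cyrus-imapd restart
testsaslauthd -u someuser -p somepassword
</pre>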
 
## [[automount|/home]] ''Installed, set up, working''
 
## [[Samba]] ''Installed, set up, working.'' If anyone needs samba access, they need to find us and have us make them a samba account. No LDAP integration.
 
## [[Web Servers|Web]]?
 
## Fortran compilers and things like that? (Also needs compat libs--'''Nope, tomato is 32-bit.''')
 
# Clone those services to tomato
 
# Switch einstein <-> tomato, and then upgrade what was originally einstein
 
# Look into a failover setup between einstein and tomato.
 
  
=== Miscellaneous ===
* Myriad is only printing 70 pages at a time?
* At some point on Friday, cacti stopped being able to monitor einstein. Update-related? There are no errors in cacti.log, but the status page for einstein just says "down".
* Steve can't log into roentgen. He doesn't appear in <code>getent passwd | grep mccoyst</code> there, but the same is true on several other machines that he ''can'' log into.
* Jalapeno was hung when I came in, so I took the opportunity to boot it with the latest uniprocessor kernel.  Let's see how long it lasts.  If it hangs again soon, then the issue probably isn't the SMP kernel. '''"Found" a newer SMP kernel, <del>but it panics on boot.</del>''' Tried the SMP kernel again, and this time it made it through startup. Let's see how long it goes. It could be a power issue (as with Taro). '''Hung again this morning (8/22). Let's look into the power angle.''' Hung last night with the single-processor kernel (8/24). '''Hung sometime between the mornings of 8/26 and 8/27. Restarted this morning (8/27) with the default kernel.'''
* Learn how to use [[cacti]] on okra. Seems like a nice tool, mostly set up for us already. '''Find out why lentil and okra (and tomato?) aren't being read by [[cacti]]. Could be related to the warnings that repeat in ''okra:/var/www/cacti/log/cacti.log''.''' Not related to the warnings; those are for other machines that are otherwise being monitored. <font color="blue">Try adding cacti to the exclude list in access.conf</font>  Nevermind, lentil doesn't have any restrictions. Need to find out the requirements for a machine to be monitored by cacti/rrdtool.  The documentation makes it sound like only the cacti host needs any configuration, but I'm dubious. '''Ahh, it looks like every client has a file snmpd.conf, which affects what can be graphed.''' Tried configuring things on improv as in the Cacti HowTo, but no go.  Must be some other settings as well.
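The client-side piece turns out to be small; a minimal sketch (the community string and subnet are examples and must match whatever cacti on okra is configured to use):
<pre>
# /etc/snmp/snmpd.conf on the machine to be graphed
rocommunity npgpublic 10.0.0.0/24

service snmpd restart

# then, from okra, verify the host answers:
snmpwalk -v1 -c npgpublic improv system
</pre>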
* Set up a few VMs to play with for settings, scripts, etc. Either xen or qemu should work fine. <font color="green">Good idea! We will also need a VM on the new server which allows someone to log into the system with a 32-bit environment. This will be needed for legacy software.</font>
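A throwaway qemu guest is probably the quickest way to test (disk and ISO paths are examples; the VNC display keeps it headless-friendly):
<pre>
qemu-img create -f qcow2 /var/tmp/testvm.qcow2 8G
qemu-kvm -m 512 -hda /var/tmp/testvm.qcow2 \
         -cdrom /var/tmp/CentOS-5-i386.iso -boot d -vnc :1
</pre>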
* Install the right SNMP stuff on tomato so that it can be graphed
* Look at sensors on jalapeno, so that cacti can monitor the temp. The crashing probably isn't the splunk beta (no longer beta!), since it runs entirely in userspace. '''lm_sensors fails to detect anything readable. Is there a way around this?'''
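Things to try, in order; the ACPI fallback only works if the board exposes a thermal zone at all:
<pre>
sensors-detect
service lm_sensors start
sensors

# fallback: some boards expose a thermal zone through ACPI
cat /proc/acpi/thermal_zone/*/temperature
</pre>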
* Figure out proper monitor refresh rates in an effort to fix bohr's strange graphics setup.
 
* Set up 32-bit compatibility libraries on pepper and taro.
 
  
== Ongoing ==
=== Documentation ===
 
* '''<font color="red" size="+1">Maintain the Documentation of all systems!</font>'''
 
** Main function
 
** Hardware
 
** OS
 
** Network
 
* Continue homogenizing the configurations of the machines.
 
* Improve documentation of [[Software Issues#Mail Chain Dependencies|mail software]], specifically SpamAssassin, Cyrus, etc.
 
=== Maintenance ===
 
* Check e-mails to root every morning
 
* Resize/clean up partitions as necessary. Seems to be a running trend that a computer hits 0 free space and problems crop up. Symanzik and bohr look imminent. '''Yup, bohr died. Expanded its root by 2.5 gigs. Still serious monitor problems though, temporarily bypassed with vesa...'''
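For next time: if the root filesystem is on LVM with ext3 (the RHEL default; the device name below is the stock one, so check with <code>df -h /</code> and <code>lvs</code> first), growing it is two commands:
<pre>
lvextend -L +2.5G /dev/VolGroup00/LogVol00
resize2fs /dev/VolGroup00/LogVol00   # ext3 can grow online; no reboot needed
</pre>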
 
* Check up on security [http://www.redhat.com/docs/manuals/enterprise/RHEL-5-manual/Deployment_Guide-en-US/ch-sec-network.html#ch-wstation]
 
  
=== Cleaning ===
* Test unknown equipment:
 
** UPS '''I need a known good battery to play with. I'll probably get a surplus one cheap and bring it in. Seems like both UPSes I've looked at so far had bad batteries, since they were swollen and misshapen.''' The APC Smart-UPS 620 is good, just needs a new battery. The Belkin is dead. Is this the one the movers dropped? '''Applied for an RMA for the Belkin. Need to ship it out.'''
 
  
=== On-the-Side ===
* See if we can get the busted printer in 322 to work down here.
* Certain settings are similar or identical for all machines, such as resolv.conf.  It would be beneficial to write a program to do remote configuration.  This would also simplify the process of adding/upgrading machines.  '''Since resolv.conf was mentioned, I made a [[Script Prototypes#setres|prototype]] that seems to work.''' Another idea that was tossed around was a program that periodically compared such files against master copies, to see if the settings somehow got changed. '''Learn how to use ssh-agent for most of these tasks'''
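A sketch of the idea; the host list and the ''.master'' filename are made up for illustration, and ssh-agent keeps the loops from prompting for a passphrase each time:
<pre>
eval $(ssh-agent)
ssh-add ~/.ssh/id_rsa

# compare each machine's copy against a master file
for h in bohr improv pepper taro; do
    ssh root@$h "cat /etc/resolv.conf" | diff -q - /etc/resolv.conf.master \
        || echo "$h differs"
done

# push the master copy where needed
scp /etc/resolv.conf.master root@bohr:/etc/resolv.conf
</pre>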
* Backup stuff: We need exclude filters on the backups. We need to plan and execute extensive tests before modifying the production backup program. Also, see if we can implement some sort of NFS user access. '''I've set up both filters and read-only snapshot access to backups at home. Uses what essentially amounts to a bash-script version of the fancy perl thing we use now, only far less sophisticated. However, the filtering uses a standard rsync exclude file (syntax in the man page), and the user access is fairly obvious NFS read-only hosting.''' <font color="green"> I am wondering if this is needed. The current scheme (i.e. the perl script) uses excludes by having a .rsync-filter in each of the directories where you want contents excluded. This has worked well. See ~maurik/tmp/.rsync-filter . The current script takes care of some important issues, like incomplete backups.</font> Ah. So we need to get users to somehow keep that .rsync-filter file fairly updated, and to get them to use data to hold things, not home. Also, I wasn't suggesting we get rid of the perl script; I was saying that I've become familiar with a number of the things it does. [http://www.redhat.com/docs/manuals/enterprise/RHEL-5-manual/Deployment_Guide-en-US/ch-acls.html#s2-acls-mounting-nfs] '''Put this on the back burner for now, since the current rate of backup disk consumption will give about 10 months before the next empty disk is needed.'''
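For comparison, both halves of the home-grown version are short (paths here are examples, not our production layout):
<pre>
# -F honors a .rsync-filter file in every directory it visits
rsync -avF /net/home/ /mnt/backup/home/

# or keep a single central exclude list instead:
rsync -av --exclude-from=/etc/backup.excludes /net/home/ /mnt/backup/home/

# read-only user access via NFS, in /etc/exports:
/mnt/backup  *.unh.edu(ro,root_squash,sync)
</pre>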
  
== Waiting ==
* That guy's computer has a BIOS checksum error. Flashing the BIOS to the newest version succeeds, but doesn't fix the problem. No obvious mobo damage either. What happened?  '''Who was that guy, anyhow?''' (Silviu Covrig, probably) The machine is gluon, according to him. '''Waiting on ASUS tech support for warranty info.'''  Aaron said it might be power-supply-related. '''Nope. Definitely not. Used a known-good PSU and still got the error; reflashed the BIOS with it and still got the error.''' Got an RMA, sending it out on Wednesday. '''Waiting on ASUS to send us a working one!''' Called ASUS on 8/6; they said it's getting repaired right now. '''Woohoo! Got a notification that it shipped!''' ...they didn't fix it... Still has the EXACT same error it had when we shipped it to them.
 
* Printer queue for the copier: Konica Minolta Bizhub 750, IP=pita.unh.edu.  '''Seems like we need info from the Konica guy to get it set up on Red Hat.  The installation documentation for the driver doesn't mention things like the passcode, because those are machine-specific.  Katie says that if he doesn't come on Monday, she'll make an inquiry.''' <font color="green">Mac OS X now working; IT guy should be here the week of June 26th.</font> '''Did he ever come?''' No, he didn't, and he did not respond to a voice message. Will call again.
 
  
== Completed ==
* '''Can add users with luseradd and lgroupadd, and those programs refuse duplicate entries, but doing an ldapsearch does not return added users/groups.'''  Groups and users added via something like Luma or JXplorer show up in ldapsearch, but not groups/users added by l*add!  Are they using different databases?  ''' ''/etc/libuser.conf'' controls how libuser (the 'l' in luseradd, etc.) does its thing.'''
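Presumably the fix is pointing libuser at LDAP in ''/etc/libuser.conf'' and re-testing; a sketch (the config values should be double-checked against the libuser docs):
<pre>
# /etc/libuser.conf, [defaults] section:
#   modules = ldap
#   create_modules = ldap

luseradd testuser
ldapsearch -x "(uid=testuser)"    # should now return the new entry
</pre>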
 
* The LDAP password was stored in a plain text file on einstein, so I deleted it.  Only root had permissions, but still&hellip;  '''This "might" be needed by something, so let's not forget about it having existed''' Yes, it was needed by <font color=red>PAM</font>, at least
 
* Get the old public keys for pauli2 and pauli4 from einstein (or wherever else they're stored) and replace the new keys on pauli2 and pauli4 with the old ones.
 
* Got rid of the "wheel" entry in ''/etc/sudoers'' since we don't have a wheel group.
 
* Standardized the <code>/etc/ssh/[[ssh_known_hosts]]</code> file. Made one host per line, with all possible ways to get to the host (lone hostname, unh address, unh ip, farm ip, farm address, etc.). WAY cleaner and easier to read and blocked into obvious sections.
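For reference, the per-line format looks like this (key truncated; the farm IP is einstein's, from elsewhere on this page, and the hostname variants are illustrative):
<pre>
einstein,einstein.unh.edu,einstein.farm.physics.unh.edu,10.0.0.248 ssh-rsa AAAAB3Nza...
</pre>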
 
* Pepper's been running two instances of cortex for a REALLY long time with 100% cpu usage. Sent a message to Covrig to ask if he's actually doing something or if it's out of control. Turns out they were runaways; he killed them.
 
* tomato: trying to figure out how to configure and set up some fake users for testing [http://www.openldap.org/doc/admin23/quickstart.html] before copying einstein's setup. Need to figure out TLS certificates, too. '''User passwords are stored encrypted in the LDAP database. We need the key that einstein uses or it won't work for authentication when we transfer the db to tomato.'''  There are several keys and certificates in ''/usr/share/ssl/certs'', but what each of them is for is not totally obvious. '''The LDAP-related ones are referenced in slapd.conf/ldap.conf.'''  We need to figure out how to deal with transferring the LDAP database to tomato and have the passwords work.  '''Turns out to be not a big deal with salted hashing.  The client requests the salt from the server, hashes the password, and then sends the hashed version to the server for authentication. So no external keys are used for that.''' Certs and stuff are still needed for encryption over the network.
 
* rsync didn't backup tomato last night. Did we change a key? Maybe not, since tomato was in some weird state where root couldn't log in&hellip; '''Tomato was having serious ssl-related ldap issues, preventing it from starting ldap and authenticating anything far enough to log in, including root, strangely enough.'''
 
* Put jalapeno back on an SMP kernel. GRUB doesn't have it as the default, so the last time jalapeno froze up, the single got loaded.  However, the most recent SMP kernel led to a panic on initialization, so we're using a slightly older one.
 
* Price server hardware and see if we can beat the microway quote. '''Not worth our time to build it from scratch, especially considering the fact that the school year is approaching.'''
 
* slapcat'd einstein, slapadd'ed tomato, works! A benefit of BerkeleyDB is that slapcat can be safely run while slapd is running. [http://research.imb.uq.edu.au/~l.rathbone/ldap/tls.shtml Handy SSL stuff] [http://www.openldap.org/doc/admin21/tls.html Specifics for OpenLDAP] Setting up with TLS. '''Roentgen is NPG's Certificate Authority.''' Remember to concatenate einstein's and tomato's unh_physics_ca.crt files and distribute that to clients before the switch.
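The mechanics, roughly (init script name per RHEL's openldap-servers package; the chown is a precaution for files slapadd creates as root):
<pre>
# on einstein -- safe while slapd is running, thanks to BerkeleyDB
slapcat -l einstein.ldif

# on tomato
service ldap stop
slapadd -l einstein.ldif
chown -R ldap:ldap /var/lib/ldap
service ldap start
</pre>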
 
* Updated up2date on pepper, taro, symanzik, and parity  via the old up2date. Worked without errors, and used the new up2date to fetch updates.
 
* Figure out how to change log levels for snmpd on jalapeno.  It's logging every time okra makes a connection. '''/etc/sysconfig/snmpd.options?'''  Changing it to be like einstein's didn't work. <font color="green">Looks to me like it did work. Check with ps auwwx; it will show the options are set properly.</font> Those options don't seem to be all there is to it, because splunk still shows ~550 logs per hour. '''Well, the issue seems to have gone away as of Thursday.'''
 
* Bohr, improv, and who knows how many others lost connection to /net/home at the same time. Turns out that the DNS servers running on both einstein and tomato died. Restarting named and rebooting affected workstations solved the problem. Why did this happen in the first place??
 
* After a reinstall, the errors went away. Copied all of einstein's /etc/postfix over to tomato, edited to say tomato instead of einstein. Postfix still won't start, maillog shows <code>fatal: bind 10.0.0.248 port 587: Cannot assign requested address</code> '''10.0.0.248 is einstein's farm address.''' <del>'''/var/log/maillog is showing einstein as a relay. What files other than the stuff in /etc/postfix were copied?'''</del> Nevermind, had the wrong value for $myorigin. '''Starting from scratch seemed to be the best solution.'''
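Handy when cloning a config onto another host; these are the usual identity/binding suspects:
<pre>
postconf -n                 # everything that differs from built-in defaults
postconf inet_interfaces myorigin mydestination mynetworks
</pre>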
 
* ([http://linuxgazette.net/124/pfeiffer.html Possibly helpful site]) '''All mail needs to be replicated<del>/master-slaved</del>.''' Reinstalled postfix, cleaned out the old config files. We should try to first get it set up to send mail locally, at least, then go for the whole Internet. '''Local mail is now working!''' Sending/<del>receiving</del> <ins>(need to modify DNS so that it doesn't get grabbed by einstein)</ins> mail across the subnet now works! Sending mail outside of the subnet works, too, but receiving foreign mail is disabled <ins>(same as for receiving over the subnet?)</ins>. Mail from my gmail account doesn't even result in any sort of entry in maillog&hellip; '''It all works just fine and dandy!'''
 
* tomato: Maps are synched in the LDAP database, script is referenced by adduser-npg. '''Replicated/mounted-elsewhere?''' rsync'ed einstein's home to a folder on tomato's data drive, we can just keep this copied version up to date via rsync until the switchover, to minimize effort during the switch.
 
* tomato: [http://www.scalix.com/wiki/index.php?title=Scalix/Sendmail_%26_Amavisd-New_HOWTO Um, yes????] '''Actually, the amavis/postfix/clamav/spamassassin guide on the spamassassin website is better. Set everything up until amavis' config file, because I can't seem to get amavisd-new installed, thanks to dependencies.''' Nevermind, I played with repositories a bit and got it all installed.
 
  
== Previous Months Completed ==
[[Completed in June 2007|June 2007]]
  
[[Completed in July 2007|July 2007]]

Latest revision as of 16:42, 15 February 2015

This is the new Sysadmin Todo List as of 05/27/2010. The previous list was moved to Old Sysadmin Todo List. This list is incomplete and needs updating.

Projects

  • Convert physical machines and VMs to CentOS 6 for the compute servers (taro, endeavour) and all others to either 6 or 7.
    • VMs: Einstein
    • Physical: endeavour, taro, and gourd
  • Mailman: Clean up mailman and make sure all the groups and users are in order.
  • CUPS: Look into getting CUPS authenticating users through LDAP instead of using Samba.
  • Printer: Get printtracker.py working, and see if a driver can be made to report the actual page count instead of always reporting 1 (i.e., counting job submissions rather than pages).
  • Check /etc/apcupsd/shutdown2 script on Gourd to make sure all the keys are correctly implemented so the machines go down properly during a power outage.
  • Do a check on Lentil to see if there is any unnecessary data being backed up.
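A rough starting point for the Lentil check (the mount point is an example; use wherever the backup disks actually live):

    du -sh /mnt/backup/*/ | sort -h
    # big, rebuildable trees (caches, scratch data) are candidates
    # for a .rsync-filter exclude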

Daily Tasks

These are things that should be done every day when you come into work.

  1. Do a physical walk-through/visual inspection of the server room
  2. Verify that all systems are running and all necessary services are functioning properly (a quick fallback sketch follows this list)
    • For a quick look at which systems are up you can use /usr/local/bin/serversup.py
    • Gourd: Make sure that home folders are accessible, all virtual machines are running
    • Einstein: Make sure that LDAP and all e-mail services (dovecot, spamassassin, postfix, mailman) are running
    • Roentgen: Make sure website/MySQL are available
    • Jalapeno: Make sure that named (DNS) and CUPS are running
    • Lentil: Verify that backups ran successfully overnight. Check space on backup drives, and add new drives as needed.
  3. Check Splunk: open https://pumpkin.farm.physics.unh.edu:8000 if you're in the server room, or localhost:8000 (use https) from Pumpkin itself
    • Check logs for errors, keep an eye out for other irregularities.
  4. Check Cacti: http://roentgen.unh.edu/cacti
    • Verify that temperatures are acceptable.
    • Monitor other graphs/indicators for any unusual activity.
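A quick fallback sketch for item 2 when serversup.py isn't handy (host and service names are taken from the list above):

    for h in gourd einstein roentgen jalapeno lentil pumpkin; do
        ping -c1 -W2 $h >/dev/null 2>&1 && echo "$h up" || echo "$h DOWN"
    done
    for s in dovecot spamassassin postfix mailman; do
        ssh root@einstein "service $s status"
    done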

Weekly Tasks

These are things that should be done once every 7 days or so.

  1. Check physical interface connections
    • Verify that all devices are connected appropriately, that cables are labeled properly, and that all devices (including RAID and IPMI cards) are accessible on the network.
  2. Check Areca RAID interfaces
    • The RAID interfaces on each machine are configured to send e-mail to the administrators if an error occurs. It may still be a good idea to log in and check them manually on occasion as well, just for the sake of caution.
  3. Clean up the server room, sweep the floors.

Monthly Tasks

  1. Perform scheduled maintenance on the server room air conditioning units.
  2. Check S.M.A.R.T. information on all server hard drives (see the sketch after this list)
    • Make a record of any drives which are reporting errors or nearing failure.
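A sketch for item 2 (device names are examples; drives behind an Areca card are queried with smartctl's -d areca,N option against the /dev/sg device instead):

    for d in /dev/sd[a-d]; do
        echo "== $d"
        smartctl -H $d
    done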

Annual Tasks

These are tasks that are necessary but not critical, or that might require some amount of downtime. These should be done during semester breaks (probably mostly in the summer), when we're likely to have more time and when downtime won't have as detrimental an impact on users.

  1. Server software upgrades
    • Kernel updates, or updates for any software related to critical services, should only be performed during breaks to minimize the inconvenience caused by reboots, or unexpected problems and downtime.
  2. Run fsck on data volumes (see the sketch after this list)
  3. Clean/Dust out systems
  4. Rotate old disks out of RAID arrays
  5. Take an inventory of our server room / computing equipment
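A sketch for item 2; fsck should only run on unmounted (or read-only) volumes, and the device name is an example:

    umount /data
    fsck -f /dev/mapper/vg0-data
    mount /data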
Draft seasonal schedule (kept commented out in the page source):

  Time of Year         Things to Do
  Summer Break         Major Kernel Upgrades; Run FDisk; Clean (dust-off/filters) while systems are shut down
  Thanksgiving Break   (no tasks listed yet)
  Winter Break         Upgrade RAID disks (upgrade only disks connected to a RAID card)
  Spring Break         (no tasks listed yet)