This is the new Sysadmin Todo List as of 05/27/2010. The previous list was moved to [[Old Sysadmin Todo List]]. This list is incomplete and needs updating.
== Projects ==
*Convert physical machines and VMs: the compute servers ([[taro]], [[endeavour]]) to CentOS 6, and all others to either CentOS 6 or 7.
**VMs: Einstein
**Physical: [[endeavour]], [[taro]], and [[gourd]]
*Mailman: Clean up mailman and make sure all the groups and users are in order.
*CUPS: Look into getting CUPS to authenticate users through LDAP instead of using Samba.
*Printer: Get printtracker.py working, and see if a driver can be made to report the actual page count of each job instead of always reporting 1, which counts job submissions rather than pages.
*Check the /etc/apcupsd/shutdown2 script on [[Gourd]] to make sure all the keys are correctly implemented so the machines shut down properly during a power outage.
*Check Lentil to see if any unnecessary data is being backed up.
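The Lentil check above is mostly a matter of finding which directories dominate the backup volume. A minimal sketch of such a scan (this is not an existing NPG script; the backup root path is whatever Lentil actually uses, passed on the command line):

```python
#!/usr/bin/env python
"""Report disk usage of each top-level directory under a backup root,
largest first, to help spot data that shouldn't be backed up."""
import os
import sys

def dir_size(path):
    """Total size in bytes of all regular files under path."""
    total = 0
    for root, dirs, files in os.walk(path, onerror=lambda err: None):
        for name in files:
            try:
                total += os.lstat(os.path.join(root, name)).st_size
            except OSError:
                pass  # file vanished or unreadable; skip it
    return total

def usage_report(backup_root):
    """Return [(subdir_name, bytes), ...] sorted largest-first."""
    entries = []
    for name in sorted(os.listdir(backup_root)):
        full = os.path.join(backup_root, name)
        if os.path.isdir(full):
            entries.append((name, dir_size(full)))
    return sorted(entries, key=lambda entry: entry[1], reverse=True)

if __name__ == "__main__":
    # Default to the current directory; point this at the real backup
    # root on Lentil when running it for real.
    root = sys.argv[1] if len(sys.argv) > 1 else "."
    for name, size in usage_report(root):
        print("%10.1f MB  %s" % (size / 1e6, name))
```

Eyeballing the top few entries should make it obvious whether something large (scratch data, caches, old ISOs) is being backed up that could be excluded.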
==Daily Tasks==
These are things that should be done every day when you come into work.
#Do a physical walk-through/visual inspection of the server room
#Verify that all systems are running and all necessary services are functioning properly
#*For a quick look at which systems are up, you can use /usr/local/bin/[[serversup.py]]
#*[[Gourd]]: Make sure that home folders are accessible and that all virtual machines are running
#*[[Einstein]]: Make sure that [[LDAP]] and all [[e-mail]] services (dovecot, spamassassin, postfix, mailman) are running
#*[[Roentgen]]: Make sure the website and MySQL are available
#*[[Jalapeno]]: Make sure that named (DNS) and CUPS are running
#*[[Lentil]]: Verify that backups ran successfully overnight. Check space on backup drives, and add new drives as needed.
#Check [[Splunk]]: [https://pumpkin.farm.physics.unh.edu:8000 click here if you're in the server room], or open https://localhost:8000 from [[Pumpkin]]
#*Check logs for errors and keep an eye out for other irregularities.
#Check [[Cacti]]: [http://roentgen.unh.edu/cacti http://roentgen.unh.edu/cacti]
#*Verify that temperatures are acceptable.
#*Monitor other graphs/indicators for any unusual activity.
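The "which systems are up" check in the daily list boils down to probing each machine over the network. This sketch is not the actual serversup.py (which we have not reproduced here); it just illustrates the idea with a TCP probe against each host's SSH port, and the host list is illustrative:

```python
#!/usr/bin/env python
"""Quick reachability check: try to open a TCP connection to port 22
on each host. A minimal stand-in for serversup.py, not the real thing."""
import socket

# Illustrative host list -- the real script maintains its own.
HOSTS = ["gourd", "einstein", "roentgen", "jalapeno", "lentil", "pumpkin"]

def is_up(host, port=22, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds in time."""
    try:
        sock = socket.create_connection((host, port), timeout)
        sock.close()
        return True
    except (socket.error, socket.timeout):
        return False

if __name__ == "__main__":
    for host in HOSTS:
        print("%-10s %s" % (host, "up" if is_up(host) else "DOWN"))
```

A TCP connect to the SSH port is a stronger signal than ping (it proves sshd is answering), but it says nothing about individual services; the per-machine checks above are still needed.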
==Weekly Tasks==
These are things that should be done once every 7 days or so.
#Check physical interface connections
#*Verify that all devices are connected appropriately, that cables are labeled properly, and that all devices (including RAID and IPMI cards) are accessible on the network.
#Check Areca RAID interfaces
#*The RAID interfaces on each machine are configured to send e-mail to the administrators if an error occurs. It may still be a good idea to log in and check them manually on occasion, just for the sake of caution.
#Clean up the server room and sweep the floors.
==Monthly Tasks==

#Perform [[Enviromental_Control_Info#Scheduled_Maintenance|scheduled maintenance]] on the server room air conditioning units.
#Check S.M.A.R.T. information on all server hard drives
#*Make a record of any drives which are reporting errors or nearing failure.
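The monthly S.M.A.R.T. check is easy to script around smartmontools' smartctl. This is a sketch, not an existing NPG tool: the device list is illustrative, and the parsed health line matches the usual ATA output of smartctl -H ("SMART overall-health self-assessment test result: PASSED"), which should be verified against the actual drives:

```python
#!/usr/bin/env python
"""Summarize drive health from smartctl -H output. Healthy ATA drives
print a line like:
  SMART overall-health self-assessment test result: PASSED
Anything other than PASSED deserves a closer look."""
import subprocess

def health_from_output(text):
    """Return the verdict ('PASSED', 'FAILED!', ...) from smartctl -H
    output, or None if no health line was found."""
    for line in text.splitlines():
        if "overall-health self-assessment test result:" in line:
            return line.split(":", 1)[1].strip()
    return None

def check_drive(device):
    """Run smartctl -H on a device and return its health verdict.
    Requires smartmontools and (usually) root privileges."""
    out = subprocess.check_output(["smartctl", "-H", device])
    return health_from_output(out.decode("utf-8", "replace"))

if __name__ == "__main__":
    # Illustrative device list -- enumerate the real drives per server.
    for dev in ["/dev/sda", "/dev/sdb"]:
        try:
            print(dev, check_drive(dev))
        except (OSError, subprocess.CalledProcessError) as err:
            # smartctl missing, device absent, or nonzero exit status
            # (smartctl exits nonzero when it detects problems).
            print(dev, "smartctl failed:", err)
```

The per-drive verdicts could be appended to a log file to build the monthly record the task asks for.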
==Annual Tasks==

These are tasks that are necessary but not critical, or that might require some amount of downtime. These should be done during semester breaks (probably mostly in the summer), when we're likely to have more time and when downtime won't have as detrimental an impact on users.

#Server software upgrades
#*Kernel updates, or updates for any software related to critical services, should only be performed during breaks to minimize the inconvenience caused by reboots or unexpected problems and downtime.
#Run fsck on data volumes
#Clean/dust out systems
#Rotate old disks out of RAID arrays
#Take an inventory of our server room / computing equipment
<!--{| cellpadding="5" cellspacing="0" border="1"
! Time of Year !! Things to Do !! Misc.
|-
| Summer Break || ||
|-
| || Major Kernel Upgrades ||
|-
| || Run FDisk ||
|-
| || Clean (Dust-off/Filters) while Systems are Shut down ||
|-
| Thanksgiving Break || ||
|-
| Winter Break || ||
|-
| || Upgrade RAID disks || Upgrade only disks connected to a RAID card
|-
| Spring Break || ||
|-
|} -->