Difference between revisions of "Sysadmin Todo List"

From Nuclear Physics Group Documentation Pages
 
(249 intermediate revisions by 7 users not shown)
This is an unordered set of tasks. Detailed information on any of the tasks typically goes in related topics' pages, although usually not until the task has been filed under [[Sysadmin Todo List#Completed|Completed]].
== Daily Check-off List ==

Each day when you come in, check the following (a minimal automation sketch follows the list):

# Einstein ([[Script Prototypes|script]]):
## Up and running?
## Disks at less than 90% full?
## Mail system OK? (spamassassin, amavisd, ...)
# Temperature OK? No water blown into the room?
# Systems up: Taro, Pepper, Pumpkin/Corn?
# Backups:
## Did the backup succeed?
## Does Lentil need a new disk?
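A minimal sketch of how these checks could be automated, assuming it runs on einstein itself (or is wrapped in ssh) and that a plain ping and df are good enough. The hostnames and the 90% threshold come from the list above; everything else is illustrative, not an existing script:

<pre>
#!/usr/bin/env python
"""Minimal daily-check sketch: host reachability and disk usage."""
import subprocess

HOSTS = ["einstein", "taro", "pepper", "pumpkin", "corn"]  # from the checklist above
DISK_LIMIT = 90  # percent, per the "less than 90% full" rule


def host_up(host):
    """Return True if the host answers a single ping."""
    return subprocess.call(["ping", "-c", "1", "-W", "2", host],
                           stdout=subprocess.PIPE, stderr=subprocess.PIPE) == 0


def full_filesystems(limit=DISK_LIMIT):
    """Parse `df -P` output and return (mountpoint, percent) pairs at or above the limit."""
    proc = subprocess.Popen(["df", "-P"], stdout=subprocess.PIPE)
    out = proc.communicate()[0].decode()
    full = []
    for line in out.splitlines()[1:]:
        fields = line.split()
        pct = int(fields[4].rstrip("%"))
        if pct >= limit:
            full.append((fields[5], pct))
    return full


if __name__ == "__main__":
    for h in HOSTS:
        print("%-10s %s" % (h, "up" if host_up(h) else "DOWN"))
    for mountpoint, pct in full_filesystems():
        print("WARNING: %s is %d%% full" % (mountpoint, pct))
</pre>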
 
  
== Important ==
== Projects ==
=== Weather ===
Judging by the post-it notes on the wall, the fan was blowing some sort of weather in. We need to figure out a way to prevent the outside from coming inside. We're lucky roentgen seems okay. '''What about using screen material in front of the fan, oriented in such a way that any water will run down the screen and collect at the water sensor?'''
  
Now that we're experiencing a mini heat wave, the fan and line air conditioner aren't quite enough to keep the temperature below 70°F. The standard operating procedure has been to leave the door open during the day.
  
=== Pumpkin/Corn ===
Our new system needs to be set up and integrated/tied in. Read more: [[Pumpkin]]
 
<font color="red">'''New problem:'''</font> I set up virtual hosts corn (32-bit RHEL5) and Fermi (64-bit RHEL4), both went fine. I tried installing compton with 32-bit RHEL4, but the installer keeps crashing someway into the install. Most annoying. Instead, I then installed compton "directly" from a backup. This worked (hurray!) EXCEPT, the system seems to occasionally, well is stops responding to the ssh session. No clue what is going on here.
 
  
<font color="red">'''New problem Continued:'''</font> OK, this now downright scares me about virtual machines. At least, an RHEL4 32-bit one, where I see the following REALLY BAD behavior regarding disks:
# A disk label set on the host, pumpkin, is not recognized when the guest boots. Setting the disk label to something else in the guest results in different labels being seen by the guest and the host.
# A file edited while the disk is mounted on the host (and the guest is not booted) does not appear changed when the file is viewed from the booted guest (with the disk unmounted from the host).
'''Conclusion:''' While these things work transparently in a para-virtualized RHEL5 (and I think RHEL4 64 bit), they are seriously messed up with RHEL4 32-bit. Should we ask RedHat for comments?
  
<font color="red">''More 'New problems:'''</font> It seems the internet connection to virtual hosts "goes to sleep". The system will be fine, but trying to ping it will result in a no reply. This only happens  with fermi and compton. I suspect we need to add some config stuff....
  
=== Einstein Upgrade ===
  
Einstein upgrade project and status page: [[Einstein Status]]
'''Note:''' Einstein (current one) has a problem with / getting full occasionally. See [[Einstein#Special_Considerations_for_Einstein]]
  
=== Environmental Monitor ===
  
We have an environmental monitor running at http://10.0.0.98 which is capable of sending email and turning the fan on and off (this needs to be set up more intelligently). It responds to SNMP, so we can integrate it with Cacti (still to be done). '''Cacti doesn't support traps, as it's a polling tool. A possible workaround is to have another daemon capture traps and write them somewhere Cacti can pick them up, such as syslog. Or maybe we can just use Splunk instead.'''
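Since Cacti polls rather than listens for traps, one simple approach is to poll the monitor over SNMP and log each reading to syslog, where Splunk (or a Cacti input script) can pick it up. A minimal sketch, assuming the net-snmp command-line tools are installed; the community string and OID below are placeholders, not the monitor's real values:

<pre>
#!/usr/bin/env python
"""Poll the environmental monitor over SNMP and log the reading to syslog (sketch)."""
import subprocess
import syslog

MONITOR = "10.0.0.98"                 # from this page
COMMUNITY = "public"                  # assumption: adjust to the monitor's community string
TEMP_OID = ".1.3.6.1.4.1.99999.1.1"   # placeholder OID; take the real one from the monitor's MIB


def snmp_get(host, community, oid):
    """Fetch one OID value using the net-snmp snmpget tool."""
    proc = subprocess.Popen(
        ["snmpget", "-v1", "-c", community, "-Ovq", host, oid],
        stdout=subprocess.PIPE)
    out, _ = proc.communicate()
    return out.strip()


if __name__ == "__main__":
    value = snmp_get(MONITOR, COMMUNITY, TEMP_OID)
    syslog.openlog("envmonitor")
    syslog.syslog(syslog.LOG_INFO, "server room temperature reading: %s" % value)
</pre>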
  
=== Miscellaneous ===
* Roentgen was plugged into one of the non-battery-backup slots of its UPS, so I shut it down and moved the plug. After starting back up, root got a couple of mysterious e-mails about /dev/md0 and /dev/md2: <code>Array /dev/md2 has experienced event "DeviceDisappeared"</code>. However, <code>mount</code> seems to indicate that everything important is around:
 
<pre>
/dev/vg_roentgen/rhel3 on / type ext3 (rw,acl)
none on /proc type proc (rw)
none on /dev/pts type devpts (rw,gid=5,mode=620)
usbdevfs on /proc/bus/usb type usbdevfs (rw)
/dev/md1 on /boot type ext3 (rw)
none on /dev/shm type tmpfs (rw)
/dev/vg_roentgen/rhel3_var on /var type ext3 (rw)
/dev/vg_roentgen/wheel on /wheel type ext3 (rw,acl)
/dev/vg_roentgen/srv on /srv type ext3 (rw,acl)
/dev/vg_roentgen/dropbox on /var/www/dropbox type ext3 (rw)
/usr/share/ssl on /etc/ssl type none (rw,bind)
/proc on /var/lib/bind/proc type none (rw,bind)
automount(pid1503) on /net type autofs (rw,fd=5,pgrp=1503,minproto=2,maxproto=4)
</pre>
and all of the sites listed on [[Web Servers]] work. Were those just old arrays that aren't around anymore but are still listed in some config file?
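One way to answer that question is to compare the arrays declared in the mdadm configuration with the arrays the kernel actually assembled. A minimal sketch, assuming a conventional /etc/mdadm.conf with ARRAY lines (the file location may differ on Roentgen's older install):

<pre>
#!/usr/bin/env python
"""List md arrays named in mdadm.conf but not currently assembled per /proc/mdstat."""
import re

# Arrays declared in the config file (lines like: ARRAY /dev/mdN ...)
declared = set()
for line in open("/etc/mdadm.conf"):
    m = re.match(r"\s*ARRAY\s+(\S+)", line)
    if m:
        declared.add(m.group(1))

# Arrays the kernel has actually assembled (lines like: mdN : active ...)
active = set()
for line in open("/proc/mdstat"):
    m = re.match(r"(md\d+)\s*:", line)
    if m:
        active.add("/dev/" + m.group(1))

for array in sorted(declared - active):
    print("declared but not assembled: %s" % array)
</pre>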
 
* Clean out some users who left a while ago. (Maurik should do this.)
 
* '''Monitoring''': I would like to see the new temp monitor integrated with Cacti, and to fix some of the Cacti capabilities, i.e. tie it in with the sensors output from pepper and taro (and tomato/einstein). Set up sensors on corn/pumpkin. Have an intelligent way of being warned when conditions are too hot, a drive has failed, or a system is down.
 
* Check into smartd monitoring (and processing its output) on Pepper, Taro, Corn/Pumpkin, Einstein, Tomato.
 
* Decommission Okra: this system is way too outdated to bother with. Move Cacti to another system, perhaps a VM, once we get that figured out?
 
* Decide whether we want to decommission Jalapeno. It is currently not a stable system, and perhaps not worth the effort trying to make it stable. Its only service is Splunk, which can be moved to another system (which?). We could "rebuild" the HW if there is a need.
 
* Gourd's been giving smartd errors, namely:
<pre>
Offline uncorrectable sectors detected:
        /dev/sda [3ware_disk_00] - 48 Time(s)
        1 offline uncorrectable sectors detected
</pre>
Okra also has an offline uncorrectable sector!
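The complaining disk sits behind a 3ware controller (hence the "3ware_disk_00" tag), so a manual smartctl check needs the 3ware pass-through rather than plain /dev/sda. A sketch only, assuming the 3w-9xxx driver's /dev/twa0 control device and port 0 (older 3w-xxxx controllers use /dev/twe0; adjust the port to the disk smartd complains about):

<pre>
#!/usr/bin/env python
"""Ask smartctl for the health verdict and bad-sector counters of a disk
behind a 3ware RAID controller (sketch)."""
import subprocess

CONTROL_DEVICE = "/dev/twa0"   # assumption: 3w-9xxx driver; use /dev/twe0 for 3w-xxxx
PORT = 0                       # assumption: the port reported as 3ware_disk_00

proc = subprocess.Popen(
    ["smartctl", "-a", "-d", "3ware,%d" % PORT, CONTROL_DEVICE],
    stdout=subprocess.PIPE)
out = proc.communicate()[0].decode()
for line in out.splitlines():
    if ("overall-health" in line
            or "Reallocated_Sector_Ct" in line
            or "Offline_Uncorrectable" in line):
        print(line.strip())
</pre>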
 
* Continue purging NIS from ancient workstations and replacing it with files. The following remain:
 
** pauli nodes -- Low priority!
 
* Learn how to use [[cacti]] on okra. Seems like a nice tool, mostly set up for us already. '''Find out why lentil isn't being read by cacti.''' Install the net-snmp package, copy the ''/etc/snmpd/snmpd.conf'' from a working machine to the new one, start the snmpd service. Still not working though. Lentil won't start iptables-netgroups <br/>("Net::SSLeay object version 1.30 does not match bootstrap parameter 1.25 at /usr/lib/perl5/5.8.8/i386-linux-thread-multi/DynaLoader.pm line 253, <DATA> line 225. <br/>Compilation failed in require at /usr/lib/perl5/vendor_perl/5.8.8/IO/Socket/SSL.pm line 17, <DATA> line 225. <br/>BEGIN failed--compilation aborted at /usr/lib/perl5/vendor_perl/5.8.8/IO/Socket/SSL.pm line 17, <DATA> line 225. <br/>Compilation failed in require at /usr/lib/perl5/vendor_perl/5.8.8/Net/LDAP.pm line 156, <DATA> line 225."),<br/> maybe that's why. Lentil and pumpkin have the same Perl packages installed, yet pumpkin doesn't fail at starting the script.
 
  
== Ongoing ==
=== Documentation ===
 
* '''<font color="red" size="+1">Maintain the Documentation of all systems!</font>'''
 
** Main function
 
** Hardware
 
** OS
 
** Network
 
* Continue homogenizing the configurations of the machines.
 
* Improve documentation of [[Software Issues#Mail Chain Dependencies|mail software]], specifically SpamAssassin, Cyrus, etc.
 
=== Maintenance ===
 
* Check e-mails to root every morning
 
* Resize/clean up partitions as necessary. It seems to be a running trend that a machine hits 0 free space and problems crop up; symanzik and bohr look imminent. '''Yup, bohr died. Expanded his root by 2.5 gigs (see the sketch after this list). Still serious monitor problems though, temporarily bypassed with vesa...''' Bohr's problem seems tied to the nvidia drivers; let's wait until the next release and see how those work out.
 
* Check up on security [http://www.redhat.com/docs/manuals/enterprise/RHEL-5-manual/Deployment_Guide-en-US/ch-sec-network.html#ch-wstation]
 
* Clean up Room 202.
 
** Ask UNH if they are willing/able to recycle/reuse the three CRTs that we have sitting around.
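For the record, expanding a nearly full LVM root (as was done for bohr) usually amounts to growing the logical volume and then the filesystem. A hedged sketch, assuming an LVM root with free extents in the volume group and an ext3 filesystem that can be resized online; the volume path is illustrative, check lvdisplay for the real one:

<pre>
#!/usr/bin/env python
"""Grow a logical volume and its ext3 filesystem by 2.5 GB (sketch)."""
import subprocess

LV = "/dev/VolGroup00/LogVol00"  # illustrative; check `lvdisplay` for the real path

# Grow the logical volume (requires free extents in the volume group)...
subprocess.check_call(["lvextend", "-L", "+2.5G", LV])
# ...then grow the ext3 filesystem to fill the new LV size.
subprocess.check_call(["resize2fs", LV])
</pre>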
 
  
=== On-the-Side ===
* See if we can get the busted printer in 322 to work down here.
* Learn how to use ssh-agent for task automation.
* Backup stuff: We need exclude filters on the backups. We need to plan and execute extensive tests before modifying the production backup program. Also, see if we can implement some sort of NFS user access (see the rsync sketch below). '''I've set up both filters and read-only snapshot access to backups at home. It uses what essentially amounts to a bash-script version of the fancy perl thing we use now, only far less sophisticated. However, the filtering and user access use a standard rsync exclude file (syntax in the man page), and the user access is fairly obvious NFS read-only hosting.''' <font color="green"> I am wondering if this is needed. The current scheme (i.e. the perl script) uses excludes by having a .rsync-filter in each of the directories whose contents you want excluded. This has worked well. See ~maurik/tmp/.rsync-filter . The current script takes care of some important issues, like incomplete backups.</font> Ah. So we need to get users to keep that .rsync-filter file fairly up to date, and to get them to use data to hold things, not home. Also, I wasn't suggesting we get rid of the perl script; I was saying that I've become familiar with a number of the things it does. [http://www.redhat.com/docs/manuals/enterprise/RHEL-5-manual/Deployment_Guide-en-US/ch-acls.html#s2-acls-mounting-nfs] '''Put this on the back burner for now, since the current rate of backup disk consumption gives about 10 months before the next empty disk is needed.'''
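For reference, the per-directory .rsync-filter mechanism the perl backup script relies on maps onto plain rsync options. A minimal sketch of an equivalent invocation, assuming rsync 2.6 or newer on both ends; the paths are made up and this is not the production backup command:

<pre>
#!/usr/bin/env python
"""Illustrative rsync call using per-directory .rsync-filter files, the same
mechanism the backup script uses; source and destination are made up."""
import subprocess

SRC = "/home/"                                # illustrative
DEST = "lentil:/mnt/npg-daily-current/home/"  # illustrative

subprocess.check_call([
    "rsync", "-a", "--delete",
    # Read exclude rules from a .rsync-filter file in every directory,
    # and don't copy the filter files themselves (this is what -FF expands to).
    "--filter=dir-merge /.rsync-filter",
    "--filter=exclude .rsync-filter",
    SRC, DEST,
])
</pre>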
  
== Waiting ==
* '''jalapeno hangups:''' Look at sensors on jalapeno, so that cacti can monitor the temp. The crashing probably isn't the splunk beta (no longer beta!), since it runs entirely in userspace. '''lm_sensors fails to detect anything readable. Is there a way around this?''' Jalapeno's been on for two weeks with no problems, let's keep our fingers crossed&hellip;
* That guy's computer has a BIOS checksum error. Flashing the BIOS to the newest version succeeds, but doesn't fix the problem. No obvious mobo damage either. What happened? '''Who was that guy, anyhow?''' (Silviu Covrig, probably) The machine is gluon, according to him. '''Waiting on ASUS tech support for warranty info.''' Aaron said it might be power-supply-related. '''Nope, definitely not: used a known-good PSU and still got the error; reflashed the BIOS with it and still got the error. Got an RMA, sending it out on Wednesday. Waiting on ASUS to send us a working one!''' Called ASUS on 8/6; they said it's getting repaired right now. '''Woohoo! Got a notification that it shipped!''' ...they didn't fix it... It still has the EXACT same error it had when we shipped it to them. '''What should we do about this?''' I'm going to call them up and have a talk, since looking at the details of their shipment reveals that they sent us a different motherboard, different serial number and everything, but with the same problem.
* Printer queue for Copier: Konica Minolta Bizhub 750. IP=pita.unh.edu  '''Seems like we need info from the Konica guy to get it set up on Red Hat.  The installation documentation for the driver doesn't mention things like the passcode, because those are machine-specific.  Katie says that if he doesn't come on Monday, she'll make an inquiry.''' <font color="green">Mac OS X now working,  IT guy should be here week of June 26th</font> '''Did he ever come?''' No, he didn't, and did not respond to a voice message left. Will call again.
* Pauli crashes nearly every day, not when backups come around. We need to set up detailed system logging to find out why. Pauli2 and 4 don't give out their data via /net to the other paulis. This doesn't seem to be an autofs setting, since I see nothing about it in the working nodes' configs. Similarly, 2,4, and 6 won't access the other paulis via /net. 2,4 were nodes we rebuilt this summer, so it makes sense they don't have the right settings, but 6 is a mystery. Pauli2's hard drive may be dying. Some files in /data are inaccessible, and smartctl shows a large number of errors (98 if I'm reading this right...). Time to get Heisenberg a new hard drive? '''Or maybe just wean him off of NPG&hellip;''' It may be done for; can't connect to pauli2 and rebooting didn't seem to work. Need to set up the monitor/keyboard for it & check things out. '''The pauli nodes are all off for now. They've been deemed to produce more heat than they're worth. We'll leave them off until Heisenberg complains.''' Heisenberg's complaining now. Fixed his pauli machine by walking in the room (still don't know what he was talking about) and dirac had LDAP shut off. He wants the paulis up whenever possible, which I explained could be awhile because of the heat issues.
* Sent an email to UNH Property Control asking what the procedure is to get rid of untagged equipment, namely the two old monitors in the corner. Apparently they want us to fill out lots of information on the scrapping form, like whether it was paid for with government money, as well as give them serial numbers, model numbers, and everything we can get ahold of. Then we get to hang onto them until the hazardous equipment people come in and take them out, at their leisure. Waiting to figure out what we want to do with them.
 
== Completed ==
* <font color="green">'''Corn Virtualization issues resolved!'''</font>
** Subscription is now a "virtual subscription"
** Corn now has 2 ethernets, one to ''farm'' one to ''unh'', and resolves "einstein" to 10.0.0.248
** All disks are now mountable.
* Properly configure iptables on corn and <font color="green">'''Pumpkin'''</font>
** Copy /usr/local/bin/netgroup2iptables.pl from taro (requires the perl-LDAP RPM). Copy /etc/init.d/iptables-netgroups, and make sure it starts for run levels 3 and 5.
** On pumpkin, make sure the guest OSes don't get blocked by pumpkin's iptables.
** I edited the iptables-npg on pumpkin to be '''far more rational'''. The version on Roentgen has '''far too many ports open for a standard server'''
* Properly configure access restrictions to "farm" and root login only from einstein.
* <font color="green">'''Lentil Backup issue resolved!'''</font>
** <del>The cron job is mailing this message: "archive disk '/mnt/npg-daily-current' does not exist or is not a symlink at /usr/local/bin/rsync_backup.pl line 44, <DATA> line 1." That link exists, though. I can see it and its contents as a regular user; why can't the script when run by cron? The e-mail may have been outdated &hellip; today's was a successful listing. It is fixed, hooray. We just need to fix the ssh keys for tomato and corn so they can be connected to. </del> Lentil now knows all the ssh keys of everyone, made as human-readable as possible.
** First of all '''Do NOT use disks smaller than 350 GB for backup!!''', since those will not even fit one copy of what needs to be backed up.
** The link /mnt/npg-daily-current must exist and point to an actual drive.
 
** Old entry: Lentil's not doing backups. I tried manually running the script Friday afternoon, and the email log looks like it was backing up and then stopped for no real reason. Checking the space on the drives (since the script seems unable to do so now), I found that npg-daily/28 is basically full, and npg-daily/29 is an untouched 250 GB. Maybe an update screwed around with how the script checks free space, preventing it from knowing how to move to the next drive. '''It's probably not any update - lentil was working fine until "The Friday Taro Event".''' I manually made the new symbolic link from /mnt/npg-daily-current -> /mnt/npg-daily/29 . Maybe this'll fix it? '''That seems to be a no. Lots of unable-to-make-hard-link errors, invalid cross-device link, and similar errors. It needs to know to copy the data it's backing up to the disk, since it's a new disk. I still think it's got something to do with that unable-to-statfs error.'''
 
* New pumpkin network problems: It's possible to reach the farm subnet if pumpkin is booted without starting iptables. Double-check the configs. '''The problem was that iptables was getting its config from both /etc/iptables and /etc/iptables-npg. Since pepper doesn't have /etc/iptables, I just moved it to /etc/iptables.bak and voilà: everything works.'''
 
* benfranklin is apparently up and running somewhere, because it's reporting drive issues too: '''Benfranklin is Dan's workstation, it's in the room next to Maurik's office.''' <font color="green"><b>BenFranklin is a Pentium III "Coppermine" at 800MHz. I have ordered a replacement system already, so we can decommission the old BenFranklin.</b></font> The new BF has arrived.
 
* Try to pull as much data from Jim William's old drives as possible, if there's even anything on them. '''They seem dead. Maybe we can swap one board to the other drive and see if it works?''' What room is he in? His computer is working now (the ethernet devices will have to be changed to a non-farm setup once the machine is back in his office). '''The computer is delivered, and he says everything's back. Leads me to believe that all his data wasn't on his drives, but on his home directory. Those drives can be junked now.'''
 
* At some point, cacti stopped being able to monitor einstein. Update-related? There are no errors in cacti.log, but the status page for einstein just says "down". '''Cacti was set to use the wrong version of rrdtool.'''
 
* Added Steve and Matt to the environmental mailing list. There seems to be a problem with more than 1 cc recipient, so to get around this the monitor sends to npg-admins@physics.unh.edu. Tested and it works.
 
* Install the right SNMP stuff on tomato so that it can be graphed
 
* Service snmpd won't start on okra ("Starting snmpd: /usr/sbin/snmpd: error while loading shared libraries: libbeecrypt.so.6: cannot enable executable stack as shared object requires: Permission denied", '''supposedly SELinux-related'''). It was SELinux. Fixed!
 
* Taro has become unstable again when running multi-processor. Try another Power supply. If that is not it, give up? '''Put the new one in, passed memtest, and booted smp. Let's see how it handles itself!''' -- <font color="red">'''Note: The /data disk on Taro is still read only!''' </font>
 
* Lentil has a dead disk ("hde0", probably IDE) in its RAID1. It needs to be replaced. '''Had a spare Seagate sitting around of just the right size.'''
 
* Heisenberg dropped pauli off today. Says it's his power supply. Very low priority. '''Gave it jalapeno's old power supply and got rid of its broken fans, and it seems to work fine.'''
 
 
 
== Previous Months Completed ==
 
[[Completed in June 2007|June 2007]]

[[Completed in July 2007|July 2007]]

[[Completed in August 2007|August 2007]]

[[Completed in September 2007|September 2007]]

[[Completed in October 2007|October 2007]]

[[Completed in November/December 2007|NovDec 2007]]
 

Latest revision as of 16:42, 15 February 2015

This is the new Sysadmin Todo List as of 05/27/2010. The previous list was moved to Old Sysadmin Todo List. This list is incomplete and needs updating.

Projects

  • Convert physical machines and VMs to CentOS 6 for the compute servers (taro, endeavour) and all others to either 6 or 7.
    • VMs: Einstein
    • Physical: endeavour, taro, and gourd
  • Mailman: Clean up mailman and make sure all the groups and users are in order.
  • CUPS: Look into getting CUPS authenticating users through LDAP instead of using Samba.
  • Printer: Get printtracker.py working and see if you can get a driver that properly reports the page count, instead of always reporting a value of 1, which corresponds to a job submission rather than the number of pages.
  • Check /etc/apcupsd/shutdown2 script on Gourd to make sure all the keys are correctly implemented so the machines go down properly during a power outage.
  • Do a check on Lentil to see if there is any unnecessary data being backed up (see the sketch below).
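A small helper for that last item could walk the current backup tree and report the largest top-level directories, so obviously unnecessary data stands out. This is only a sketch; the mount point is borrowed from elsewhere on this page and may not match Lentil's current layout:

<pre>
#!/usr/bin/env python
"""Report the largest top-level directories under the current backup tree (sketch)."""
import os

BACKUP_ROOT = "/mnt/npg-daily-current"  # path used elsewhere on this page; adjust as needed


def tree_size(path):
    """Total size in bytes of the regular files below path (symlinks skipped)."""
    total = 0
    for dirpath, dirnames, filenames in os.walk(path):
        for name in filenames:
            full = os.path.join(dirpath, name)
            if not os.path.islink(full):
                total += os.path.getsize(full)
    return total


if __name__ == "__main__":
    entries = [os.path.join(BACKUP_ROOT, d) for d in os.listdir(BACKUP_ROOT)]
    sizes = [(tree_size(d), d) for d in entries if os.path.isdir(d)]
    for size, d in sorted(sizes, reverse=True)[:15]:
        print("%8.1f GB  %s" % (size / 1e9, d))
</pre>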

Daily Tasks

These are things that should be done every day when you come into work.

  1. Do a physical walk-through/visual inspection of the server room
  2. Verify that all systems are running and all necessary services are functioning properly
    • For a quick look at which systems are up you can use /usr/local/bin/serversup.py (a simple service-port check is also sketched after this list)
    • Gourd: Make sure that home folders are accessible, all virtual machines are running
    • Einstein: Make sure that LDAP and all e-mail services (dovecot, spamassassin, postfix, mailman) are running
    • Roentgen: Make sure website/MySQL are available
    • Jalapeno: Named and Cups
    • Lentil: Verify that backups ran successfully overnight. Check space on backup drives, and add new drives as needed.
  3. Check Splunk: open https://pumpkin.farm.physics.unh.edu:8000 if you're in the server room, or open localhost:8000 (use https) from Pumpkin
    • Check logs for errors, keep an eye out for other irregularities.
  4. Check Cacti: http://roentgen.unh.edu/cacti
    • Verify that temperatures are acceptable.
    • Monitor other graphs/indicators for any unusual activity.
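Part of step 2 can be automated with a simple TCP check against each service port. This is only a sketch and is not the existing serversup.py; the host/port pairs are taken from the services named above, the ports are standard defaults, and some services may only listen locally:

<pre>
#!/usr/bin/env python
"""Quick TCP reachability check for the services named in the daily task list (sketch)."""
import socket

# (host, port, service) triples; standard default ports are assumed.
CHECKS = [
    ("einstein", 389, "LDAP"),
    ("einstein", 25, "postfix (SMTP)"),
    ("einstein", 993, "dovecot (IMAPS)"),
    ("roentgen", 80, "web server"),
    ("roentgen", 3306, "MySQL"),
    ("jalapeno", 53, "named (DNS)"),
    ("jalapeno", 631, "CUPS"),
]


def port_open(host, port, timeout=3):
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        sock = socket.create_connection((host, port), timeout)
        sock.close()
        return True
    except (socket.error, socket.timeout):
        return False


if __name__ == "__main__":
    for host, port, service in CHECKS:
        status = "OK" if port_open(host, port) else "NOT RESPONDING"
        print("%-10s %-18s %s" % (host, service, status))
</pre>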

Weekly Tasks

These are things that should be done once every 7 days or so.

  1. Check physical interface connections
    • Verify that all devices are connected appropriately, that cables are labeled properly, and that all devices (including RAID and IPMI cards) are accessible on the network.
  2. Check Areca RAID interfaces
    • The RAID interfaces on each machine are configured to send e-mail to the administrators if an error occurs. It may still be a good idea to log in and check them manually on occasion as well, just for the sake of caution.
  3. Clean up the server room, sweep the floors.

Monthly Tasks

  1. Perform scheduled maintenance on the server room air conditioning units.
  2. Check S.M.A.R.T. information on all server hard drives
    • Make a record of any drives which are reporting errors or nearing failure (see the sketch below).
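A sketch of how the monthly S.M.A.R.T. sweep could be scripted on a single machine, assuming smartmontools is installed and the drives are plain /dev/sd* devices (disks behind a 3ware controller need the -d 3ware,N pass-through instead):

<pre>
#!/usr/bin/env python
"""Print the SMART health verdict for every /dev/sd? device on this host (sketch)."""
import glob
import subprocess

for device in sorted(glob.glob("/dev/sd?")):
    proc = subprocess.Popen(["smartctl", "-H", device], stdout=subprocess.PIPE)
    out = proc.communicate()[0].decode()
    verdict = "unknown"
    for line in out.splitlines():
        if "overall-health self-assessment" in line:
            verdict = line.split(":")[-1].strip()
    print("%-10s %s" % (device, verdict))
</pre>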

Annual Tasks

These are tasks that are necessary but not critical, or that might require some amount of downtime. These should be done during semester breaks (probably mostly in the summer), when we're likely to have more time and when downtime won't have as detrimental an impact on users.

  1. Server software upgrades
    • Kernel updates, or updates for any software related to critical services, should only be performed during breaks to minimize the inconvenience caused by reboots, or unexpected problems and downtime.
  2. Run fsck on data volumes (see the sketch at the end of this section)
  3. Clean/Dust out systems
  4. Rotate old disks out of RAID arrays
  5. Take an inventory of our server room / computing equipment

The page source also carries a commented-out draft of a seasonal schedule for this heavier work:

  Time of Year         Things to Do                                            Misc.
  Summer Break         Major Kernel Upgrades
                       Run FDisk
                       Clean (Dust-off/Filters) while Systems are Shut down
  Thanksgiving Break
  Winter Break         Upgrade RAID disks                                      Upgrade only disks connected to a RAID card
  Spring Break
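For item 2 (fsck on data volumes), it helps to know which filesystems are overdue before scheduling downtime. A sketch that reads the last-check date and mount counts from tune2fs, assuming ext2/ext3 data volumes; the device list is illustrative:

<pre>
#!/usr/bin/env python
"""Show when each ext3 data volume was last checked, per tune2fs (sketch)."""
import subprocess

VOLUMES = ["/dev/sdb1", "/dev/sdc1"]  # illustrative; list the real data volumes here

for device in VOLUMES:
    proc = subprocess.Popen(["tune2fs", "-l", device], stdout=subprocess.PIPE)
    out = proc.communicate()[0].decode()
    for line in out.splitlines():
        if line.startswith(("Last checked", "Mount count", "Maximum mount count")):
            print("%s: %s" % (device, line))
</pre>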