Sysadmin Todo List
Revision as of 15:05, 27 July 2007

This is an unordered set of tasks. Detailed information on any of the tasks typically goes in related topics' pages, although usually not until the task has been filed under Completed.

Important

Einstein Upgrade

Massive amount of deployment documentation for RHEL 5

  1. Pick a date within the next week. Picked Monday, 7/23/2007.
  2. Send an e-mail to Aaron, warning him of the future takedown of tomato. Done.
  3. Update tomato to RHEL5. Installed w/ basic configuration (auth, autofs, etc.).
  4. Check all services einstein currently provides. Locate as many custom scripts, etc. as is reasonable and label/copy them (a quick way to enumerate them is sketched after this list).
    1. DNS
    2. LDAP
    3. Postfix
    4. AMaViS
    5. ClamAV
    6. SpamAssassin
    7. IMAP
    8. /home
    9. Samba
    10. Web?
    11. Fortran compilers and things like that?
  5. Clone those services to tomato
  6. Switch einstein <-> tomato, and then upgrade what was originally einstein
  7. Look into making an einstein/tomato failsafe setup.
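
A hedged sketch for step 4, enumerating what einstein actually runs before cloning it to tomato. The commands are standard RHEL tools; the grep pattern and output filename are illustrative choices, not something we already use.

    # List services enabled in the current runlevels (RHEL-style init scripts).
    chkconfig --list | grep ':on'

    # List daemons actually listening on TCP/UDP ports, with the owning process.
    netstat -tulnp

    # Dump the installed package list so it can be diffed against tomato later.
    rpm -qa | sort > /root/einstein-packages.txt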

Miscellaneous

  • Backup stuff: We need exclude filters on the backups, and we need to plan and execute extensive tests before modifying the production backup program. Also, see if we can implement some sort of NFS user access. I've set up both filters and read-only snapshot access to backups at home, using what essentially amounts to a bash-script version of the fancy Perl thing we use now, only far less sophisticated. However, the filtering uses a standard rsync exclude file (syntax in the rsync man page) and the user access is fairly obvious NFS read-only hosting. I am wondering if this is needed. The current scheme (i.e. the Perl script) handles excludes by having a .rsync-filter file in each of the directories whose contents you want excluded, and this has worked well; see ~maurik/tmp/.rsync-filter. The current script also takes care of some important issues, like incomplete backups. Ah. So we need to get users to keep that .rsync-filter file reasonably up to date, and to get them to use data to hold things, not home. Also, I wasn't suggesting we get rid of the Perl script; I was saying that I've become familiar with a number of the things it does. (A minimal filter example is sketched after this list.)
  • Learn how to use cacti on okra. Seems like a nice tool, mostly set up for us already. Find out why lentil and okra (and tomato?) aren't being read by cacti. Could be related to the warnings that repeat in okra:/var/www/cacti/log/cacti.log. Not related to the warnings; those are for other machines that are otherwise being monitored. Try adding cacti to the exclude list in access.conf. Never mind, lentil doesn't have any restrictions. Need to find out the requirements for a machine to be monitored by cacti/rrdtool. The documentation makes it sound like only the cacti host needs any configuration, but I'm dubious. Ahh, it looks like every client has an snmpd.conf file, which affects what can be graphed (a sketch follows this list). Tried configuring things on improv as in the Cacti HowTo, but no go. Must be some other settings as well.
  • Set up a few VMs to play with for settings, scripts, etc. Either Xen or QEMU should work fine (a QEMU sketch follows this list).
  • Figure out how to change log levels for snmpd on jalapeno. It's logging every time okra makes a connection. /etc/sysconfig/snmpd.options ? Changing it to be like einstein's didn't work.
  • Install the right SNMP stuff on tomato so that it can be graphed
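
For the backup item above, a minimal sketch of a per-directory .rsync-filter file and an rsync invocation that honors it. The patterns and paths are made up for illustration; the real backups are driven by the existing Perl script.

    # Example .rsync-filter in a directory to be backed up
    # (one pattern per line; syntax is in the FILTER RULES section of man rsync).
    - scratch/
    - *.tmp
    - core.*

    # Hypothetical backup command: -F tells rsync to obey per-directory
    # .rsync-filter files; a second -F would also keep the filter files
    # themselves out of the copy.
    rsync -aF /home/ lentil:/backups/npg-daily/home/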
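
For the cacti item above, a guess at what a minimal client-side snmpd.conf could look like so okra can poll the machine; the community string and source network are placeholders, not our actual values.

    # /etc/snmp/snmpd.conf (sketch): allow read-only SNMP queries
    # from the monitoring network so cacti on okra can graph this host.
    rocommunity  public  132.177.91.0/24
    syslocation  "NPG server room"
    syscontact   root@einstein.unh.edu

    # afterwards: service snmpd restart; chkconfig snmpd on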
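
For the VM item above, a rough QEMU sketch; disk size, memory, and ISO name are arbitrary.

    # Create a disk image and boot the RHEL5 installer in it.
    qemu-img create -f qcow2 /scratch/testvm.img 8G
    qemu -m 512 -hda /scratch/testvm.img -cdrom RHEL5-disc1.iso -boot d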

Ongoing

Documentation

  • Maintain the Documentation of all systems!
    • Main function
    • Hardware
    • OS
    • Network
  • Continue homogenizing the configurations of the machines.
  • Improve documentation of mail software, specifically SpamAssassin, Cyrus, etc.

Maintenance

  • Check e-mails to root every morning
  • Resize/clean up partitions as necessary. Seems to be a running trend that a computer hits 0 free space and problems crop up; symanzik and bohr seem imminent. (A quick disk-usage check is sketched after this list.)
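
A hypothetical one-liner for the partition check above; the 90% threshold is an arbitrary choice.

    # Print any mounted filesystem that is 90% full or worse.
    df -hP | awk 'NR > 1 && $5+0 >= 90'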

Cleaning

  • Test unknown equipment:
    • UPS: I need a known good battery to play with. I'll probably get a surplus one cheap and bring it in. Seems like both UPSes I've looked at so far had bad batteries, since they were swollen and misshapen.
  • Printer in 323: it is not hooked up to a dead network port after all; actually managed to ping it. One person reportedly got it to print, nobody else has, and that user has been unable to ever since. Is this printer dead? We need to find out. Matt votes it's dead.

On-the-Side

  • Certain settings are similar or identical for all machines, such as resolv.conf. It would be beneficial to write a program to do remote configuration; this would also simplify the process of adding/upgrading machines. Since resolv.conf was mentioned, I made a prototype that seems to work. Another idea that was tossed around was a program that periodically compared such files against master copies, to see if the settings somehow got changed (a rough sketch follows this list).
  • Learn how to use ssh-agent for most of these tasks
  • Price server hardware and see if we can beat the Microway quote.
    • Figure out whether we should build Opteron dual-core or Xeon quad-core.
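
A rough sketch of the push-and-compare idea from the first item above. The host list, master-copy location, and use of root logins are all assumptions for illustration; the working prototype is the setres script on the Script Prototypes page.

    #!/bin/bash
    # Compare resolv.conf on each machine against a master copy and
    # push the master out wherever it differs.  Purely illustrative.
    MASTER=/root/masters/resolv.conf
    HOSTS="okra lentil tomato gourd ennui blackbody"

    for h in $HOSTS; do
        if ! ssh "root@$h" cat /etc/resolv.conf | diff -q - "$MASTER" >/dev/null; then
            echo "$h: resolv.conf differs from master, updating"
            scp "$MASTER" "root@$h:/etc/resolv.conf"
        fi
    done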

Waiting

  • That guy's computer has a BIOS checksum error. Flashing the BIOS to the newest version succeeds, but doesn't fix the problem. No obvious mobo damage either. What happened? Who was that guy, anyhow? (Silviu Covrig, probably.) The machine is gluon, according to him. Waiting on ASUS tech support for warranty info. Aaron said it might be power-supply-related. Nope, definitely not: used a known good PSU and still got the error, reflashed the BIOS with it and still got the error. Got an RMA, sending it out on Wednesday. Waiting on ASUS to send us a working one!
  • Printer queue for Copier: Konica Minolta Bizhub 750, IP = pita.unh.edu. Seems like we need info from the Konica guy to get it set up on Red Hat; the installation documentation for the driver doesn't mention things like the passcode, because those are machine-specific. Katie says that if he doesn't come on Monday, she'll make an inquiry. Mac OS X is now working; the IT guy should be here the week of June 26th. Did he ever come? No, he didn't, and he did not respond to a voice message left for him. Will call again.

Completed

  • Jalapeno was dead when I came in this morning. Rebooted, seems back to normal. Is splunk beta doing this?
  • Steve's pay. Supposedly going to be remedied in the next paycheck. Fixed and backpaid
  • Figure out tomato tunnel setup for paulis 2 and 4. So simple. Add PEERDNS=no to the needed ifcfg-* files.
  • Learn how to set up Evolution fully so we can support users. Need an LDAP address book. What schema does our LDAP setup support? Evolution uses "evolutionPerson"; apparently it doesn't work without using that schema for describing people. Schemas can be combined: [1] Typing the name of someone Evolution is aware of (that is, someone you've been in communication with before) allows address-book-like features. Close, but not quite what we're looking for. Messing with all the users' LDAP settings is the sort of MAJOR thing that could make a lot of things blow up if we do something wrong. It would be nice to have, but maybe after we get everything else out of the way; too high-risk now.
  • THIS HOUSE IS CLEAN!
  • I wonder if we can go to the DeMeritt remains and get some drop-ceiling panels or similar sound-absorbing material and put it in the server room... The hum isn't unbearable, but I'm sick of saying "what?" all the time. Maurik is considering setting up a workstation room, separate from the farm room. Maybe we can find some egg cartons to duct-tape to the walls as a short-term fix. Duct-taped cardboard to the walls; seems to have cut some of the high-frequency noise. Could be placebo, though.
  • OOPS, crashed tomato. Please reboot by button. (Maurik: 7/12/07 @ 10pm) Rebooted. How'd that happen?
  • Investigate printing from lepton (in 305) to myriad. Got word from John Calarco that it doesn't work. Apparently his root partition ran out of space. How many machines does that make now? Fixed by deleting his FC2 logical drive and expanding his FC3 to take its place.
  • Test LDAP authentication on farm and general machines. We should create a number of users, each with different group settings, in order to narrow down what groups are required to access what. Seems less error-prone than using one user and modifying the settings over and over. See the LDAP doc, write answers there. Made a user named "Joe Delete" that is only in his own group, and he can log into einstein, okra, lentil, tomato, gourd, ennui, and blackbody.
  • Finally completed backup consolidation! No more amanda backups are left. Lentil presently has npg-daily-28 in use, 29 ready, the 500gb waiting until the jalapeno problems subside (in case we need to rebuild jalapeno), and an empty slot.
  • The Log Level for nmbd on einstein was set to 7. WOA, that is a lot of junk only useful for expert debugging. Please, please, pretty please, don't leave log levels so high and then leave. How do you even set log levels? See the samba page
  • Add a "favicon" to some areas of the web, so that we log fewer errors, for one.
  • The snmpd daemon on einstein was very verbose, generating 600 messages per hour, all access from okra. I changed the default options in /etc/sysconfig/snmpd.options from # OPTIONS="-Lsd -Lf /dev/null -p /var/run/snmpd.pid -a" to OPTIONS="-LS 0-4 d -Lf /dev/null -p /var/run/snmpd.pid -a". Now snmpd is QUIET! We could consider making it slightly more verbose? (This was discovered with splunk.)
  • SPLUNK is now set up on Jalapeno. It is combing the logs of einstein and roentgen and its own logs. See Splunk.
  • Checked if the backups are actually happening and working - they are.
  • Renewed XML books for Amrita. They're due at the end of the month.
  • Fixed the amandabackup.sh script for consolidating amanda-style backups.
  • Investigate the change in ennui's host key. Almost certainly caused by one of the updates. Just remembered that I was using ennui for a few minutes and I saw the "updates ready!" icon in the corner and habitually clicked it. Darn Ubuntu habits. Doesn't explain WHY it changed, only how. It probably wasn't an individual update, but almost certainly was the transition from Fedora 5 to 7. ennui isn't a very popular machine to SSH into, so the change probably just went unnoticed for the two-or-so weeks since the upgrade. I had earlier thought that it couldn't have been the OS change, since it had been a while since the change, but upon further thought, it makes perfect sense.
  • Look into getting computers to pull scheduled updates from rhn when they check in. See updates through RHN
  • Look into monitoring RAID, disk usage, etc. SMARTD, etc.
  • Removed Aaron from the "who should get root's mail?" part of einstein's /etc/aliases file. Now he won't get einstein's email all the time. Replaced him with "root@einstein.unh.edu", since both of us check that now. (A sketch of the change is at the end of this list.)
  • Added Matt and Steve to the ACL for the backups shared mail folder. It was pretty simple with cyradm.
  • Karpiusp remembered his password, no change necessary.
  • Need to get onto the "backups" shared folder, as well as be added as members to the lists. "backups" wasn't even a mailing list, according to the Mailman interface. Added Steve and Matt to the CC list for /etc/cron.daily/rsync_backup's mail command; if the message gets sent to us, then we'll know something's wrong with the list, and if we don't get it, then the problem is probably in the script. E-mails were received, so there's something up with the mailing list. Yup, checking the mailing list archives shows no messages on the list. Figured out how to do shared folders with cyradm (a sketch is at the end of this list).
  • Nobody is currently reading the mail that is sent to "root". Einstein had 3000+ unread messages; I deleted almost all of them. There are some useful messages with diagnostics in them that are sent to root, so we should find a solution for this. Temporarily, both Matt and Steve have email clients set up to access root's account. The largest chunk of the e-mails concern updating ClamAV. Maybe we should just do that? Doing this caused some major mail problems. It's a punishment for 1) typing a command in the wrong terminal, 2) not thoroughly understanding the configuration and importance of a component before updating it, and 3) not restarting the program after updating it.
  • Updated SpamAssassin and ran sa-update to get new rules. The SA documentation seems to indicate that having procmail send mail is the typical scenario. However, procmail isn't mentioned in the appropriate Postfix configuration file[2]. procmail and postfix are installed, though. Do we have a special mail setup? It seems like postfix is what does it. File this confusion under "improve mail chain documentation" rather than clutter up the list
  • okra was the only machine that jdelete could log into that also had restrictions in its /etc/security/access.conf, so I commented out the old setting, then copied the contents of another machine's file to okra's.
  • jalapeno was mysteriously unreachable when I came in this morning (7/9/2007). The cacti graphs show it going down sometime mid-Saturday. The logs show several authentication failures beforehand... On Saturday, that looks like a failed break-in attempt. A repeat on Monday around 2pm. It is not clear why jalapeno is being targeted. Check /var/log/secure. Note, I was on it on 7/9/2007, between 9:30am and 11:30am; I was trying to figure out how farm access is controlled. Specifically, I wanted to deny access to jalapeno to non-admin users. Seems this worked (How did you do it? Looks like it's access.conf), but perhaps it had an unintended side effect. We need more documentation on the Cacti page. I've not used it, so I do not know what access it needs to jalapeno. Perhaps it does not need monitoring by cacti. Cacti is still monitoring jalapeno.
  • Figure out what to do about the mass Samba login attempts. Since Maurik turned it off, does that mean that we don't really use it for anything important? Samba is still running on einstein. It is more important there. Roentgen samba access was for web stuff and now no longer needed. The system causing access problems was supposedly rebooted. Also samba (einstein and roentgen) is set to be non-verbose into syslog.
  • Test pauli4's network card by booting with a livecd. onboard works, e1000 doesn't. Still isn't working even after specifying the onboard port in ifcfg-eth0. Installing Fedora 7 got the onboard fixed, but I don't know how to interface it with the tomato tunnel
  • blackbody's behaving oddly. It was hung when I came in, and a couple of services failed to start up when I restarted it. Had to restart again, because the "greeter application" (graphical login screen) crashed instantly over and over, and when logging in via terminal, my home directory didn't mount. Now it's mostly working, but a bunch of GNOME desktop apps crashed upon my logging in. Getting segmentation faults with random programs, including yum and pirut. Nothing jumped out at me in the logs for updates, etc. Going to do a memtest on blackbody, just in case. Neither the Fedora 7 nor the Backtrack discs have memtest on them, and we don't have any blank CDs, so never mind for now. After further investigation, it probably isn't memory anyhow, since it's consistently the same set of programs that are segfaulting/crashing. I do think that the Ubuntu disc that was in gluon when we got it has memtest. I put it on the stack of discs where we keep the rest of them. I agree though, it probably isn't the memory. I think I've figured it out. The "installonlyn" plugin is a Python script, and the two programs that fail on startup are written in Python. Conclusion: something's wrong with the Python installation. Yum relies on some Python stuff, which makes it quite difficult to reinstall Python. Also, yum doesn't actually HAVE a reinstall function, and forcing an install seems to do nothing. Reinstallation of Fedora should do it... Reinstalled, and am not running Python updates for a while.
  • Figure out what network devices on tomato are doing what. Guess we're waiting for Aaron on this one. He needs to do something soon, because while I'm sure most of these extra devices aren't important to NPG, Xemed and the paulis probably use them somehow, and we need to know what the deal is before installing RHEL5. It's do or die for Xemed.
  • "roentgen kernel: 10.10.88.111 sent an invalid ICMP type 3, code 3 error to a broadcast: 132.177.91.255 on eth0". whois says that 132.177.91.255 is a NIC in Kingsbury. Looking at the splunk graph for this error, it seems to have happened almost entirely over the course of Wednesday; Thursday only gets 1-5 per hour, while late Tuesday and all of Wednesday got ~200 per hour. Partially solved: the node is 10.10.88.111, which would be a system on the Xemed end. I simply block this system from getting through the tunnel on tomato, using iptables (/etc/sysconfig/iptables-npg) with an entry: "-A INPUT -s 10.10.88.111 -j REJECT". This rejects that node. We may need to add such a line on einstein or roentgen, but I think it should now be stopped at the tunnel level. Still getting a lot of the same error on roentgen overnight, even with that line in roentgen's iptables-npg. Haven't seen the error on logwatch or splunk in a while; Xemed must have fixed the rogue machine.
  • Our newly-made loaner machine Hobo has FC7 installed now. Needs LDAP configuration; I can't seem to get it to download from einstein as usual. Steve, you're gonna need to show me how it's done. Seems to be fairly responsive, but it could use some work to make it run faster. At least in the process we'll learn more about the structure of redhat-esque systems. Probably can't do much since it's probably not registered. Registered, set up, sitting around waiting for somebody to use it.
  • Reinstalled pauli2 and got it working
  • Keep an eye on jalapeno. Make sure that the changes to the access rules haven't screwed anything up. jalapeno crashed at about 3AM on Sunday. No peculiar logins or any weird events listed by splunk, just the typical tons of connections from okra every five minutes. Cacti doesn't show any unusual CPU, memory, or traffic near that time, either. Nothing out-of-the-ordinary in the logs, etc. Probably "just" a crash? I'm of the opinion it's splunk beta. Let's see if it happens again. That would be the third weekend in a row. No crash, no unusual behavior over weekend
  • Matt's learning a bit of Perl so we can figure out exactly how the backup works, as well as create more programs in the future, specifically thinking of monitoring. Look into the CPAN modules under Net::, etc. I just found out that it's actually very easy to use ssh to log onto a machine and execute a command rather than a login shell: ssh who@a.b.c cmd. For example, I made a bash function called "whoson" that tells me who's on the machine that I pass to it (written out at the end of this list): whoson roentgen will prompt me for my password, then log on, run w, and display the output. We can set up the einstein root account so that you can issue such commands without needing to supply your password. This is already set up by Aaron, but nobody but he knows the passphrase to use. I'll ask Aaron. Hopefully it checks SOME sort of authentication, otherwise that's a biiig security risk... Hopefully Aaron isn't mad at us for the tomato thing, then. Looking into ssh-agent stuff.
  • Splunk Enterprise trial license expires on August 9th, 2007. Won't be renewed.
  • Tomato installed, but getting GRUB error 15. There doesn't seem to be anything on the boot partition. After all of the trouble with the disks yesterday, I'm going to make all new CDs with the blanks that I brought in today, and start all over again. Installed! It inconsistently hangs when loading X on startup, though... Okay, I'm starting to become quite sour toward RHEL5. There's no info on this black-screen crap, and I found this while looking at the long list of "known issues": "Boot-time logging to /var/log/boot.log is not available in this release of Red Hat Enterprise Linux 5. An equivalent functionality will be added in a future update of Red Hat Enterprise Linux 5." Started working once Matt came in. Mysteriously. Everything works now. Or not: logging out caused the black-screen crap! Reinstalling without the Virtualization packages, since that's all that differed from lentil's install. That didn't work either, but setting grub to boot into runlevel 3 did get things loaded, so we're keeping it that way. Set up all of the basic stuff (auth, automount, etc.), so it's a good enough solution for now.
  • tomato just BSC'd when I did a startx as root. Guess the problem is with X and/or the video driver after all. Running updates, some of which involve X. Maybe these will have a fix... Nope! Seems to be using an ATI card; maybe we should try different ATI drivers? There are quite a few out there. Matt's idea to change drivers seems to have worked, based on several tests. The standard ATI driver doesn't support that card, so we're using VESA. We don't need 3D on tomato, so it'll be fine.
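
The /etc/aliases change described above (removing Aaron as a recipient of root's mail) comes down to one line plus rebuilding the alias database; a sketch, with the alias value taken from that item:

    # /etc/aliases on einstein (relevant line only)
    root:   root@einstein.unh.edu

    # rebuild the alias database after editing the file
    newaliases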
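
For the "backups" shared folder ACLs mentioned above, a hedged sketch of a cyradm session; the mailbox name and rights string are assumptions rather than values copied from our server.

    # connect to the Cyrus IMAP server as the admin user
    cyradm --user cyrus localhost

    # inside cyradm: grant Matt and Steve lookup/read/seen rights
    # on the shared folder
    setaclmailbox backups matt lrs
    setaclmailbox backups steve lrs
    listaclmailbox backups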
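
The "whoson" helper described in the Perl/ssh item above, written out as a sketch; the remote username is whatever account you normally use on the farm machines.

    # Run "w" on a remote host and show who is logged in there.
    # Usage: whoson roentgen
    whoson () {
        ssh "$1" w
    }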

Previous Months Completed

June 2007