Completed in July 2007

From Nuclear Physics Group Documentation Pages
Revision as of 19:57, 1 August 2007 by Minuti (talk | contribs)
  • Jalapeno was dead when I came in this morning. Rebooted, seems back to normal. Is splunk beta doing this?
  • Steve's pay. Supposedly going to be remedied in the next paycheck. Fixed and backpaid
  • Figure out tomato tunnel setup for paulis 2 and 4. So simple. Add PEERDNS=no to the needed ifcfg-* files.
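A minimal sketch of the relevant interface config, assuming DHCP and a placeholder device name (not necessarily what the paulis actually use):

```shell
# /etc/sysconfig/network-scripts/ifcfg-eth1  (sketch; device name is a placeholder)
DEVICE=eth1
BOOTPROTO=dhcp
ONBOOT=yes
# Keep dhclient from overwriting /etc/resolv.conf with the peer's DNS servers
PEERDNS=no
```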
  • Learn how to set up Evolution fully so we can support users. Need LDAP address book. What schema does our LDAP setup support? Evolution uses "evolutionPerson"; apparently it doesn't work without using that schema for describing people. Schemas can be combined: [1] Typing the name of someone Evolution is aware of (that is, someone you've been in communication with before) enables address-book-like features. Close, but not quite what we're looking for. Messing with all the users' LDAP settings is the sort of MAJOR thing that could make a lot of things blow up if we do something wrong. It would be nice to have, but maybe after we get everything else out of the way; too high-risk now.
  • THIS HOUSE IS CLEAN!
  • I wonder if we can go to the DeMeritt remains and get some drop-ceiling panels or similar sound-absorbing material and put it in the server room... The hum isn't unbearable, but I'm sick of saying "what?" all the time. Maurik is considering setting up a workstation room, separate from the farm room. Maybe we can find some egg cartons to duct-tape to the walls as a short-term fix. Duct-taped cardboard to the walls. Seems to have cut some of the high-frequency noise. Could be placebo though.
  • OOPS, crashed tomato. Please reboot by button. (Maurik: 7/12/07 @ 10pm) Rebooted. How'd that happen?
  • Investigate printing from lepton (in 305) to myriad. Got word from John Calarco that it doesn't work. Apparently his root partition ran out of space. How many machines does that make now? Fixed by deleting his FC2 logical drive and expanding his FC3 to take its place.
  • Test LDAP authentication on farm and general machines. We should create a number of users, each with different group settings, in order to narrow down what groups are required to access what. Seems less error-prone than using one user and modifying the settings over and over. See the LDAP doc, write answers there. Made a user named "Joe Delete" that is only in his own group, and he can log into einstein, okra, lentil, tomato, gourd, ennui, and blackbody.
  • Finally completed backup consolidation! No more amanda backups are left. Lentil presently has npg-daily-28 in use, 29 ready, the 500GB drive waiting until the jalapeno problems subside (in case we need to rebuild jalapeno), and an empty slot.
  • The log level for nmbd on einstein was set to 7. WHOA, that is a lot of junk, only useful for expert debugging. Please, please, pretty please, don't leave log levels so high and then leave. How do you even set log levels? See the samba page.
  • Add a "favicon" to some areas of the web site, so that we log fewer 404 errors, for one.
  • The snmpd daemon on einstein was very verbose, generating 600 messages per hour, all access from okra. I changed the default options in /etc/sysconfig/snmpd.options from the commented-out default (# OPTIONS="-Lsd -Lf /dev/null -p /var/run/snmpd.pid -a") to OPTIONS="-LS 0-4 d -Lf /dev/null -p /var/run/snmpd.pid -a". Now snmpd is QUIET! We could consider making it slightly more verbose? (This was discovered with splunk.)
  • SPLUNK is now set up on Jalapeno. It is combing the logs of einstein and roentgen and its own logs. See Splunk.
  • Checked if the backups are actually happening and working - they are.
  • Renewed XML books for Amrita. They're due at the end of the month.
  • Fixed the amandabackup.sh script for consolidating amanda-style backups.
  • Investigate the change in ennui's host key. Almost certainly caused by one of the updates. Just remembered that I was using ennui for a few minutes, saw the "updates ready!" icon in the corner, and habitually clicked it. Darn Ubuntu habits. That doesn't explain WHY it changed, only how. It probably wasn't an individual update, but almost certainly was the transition from Fedora 5 to 7. ennui isn't a very popular machine to SSH into, so the change probably just went unnoticed for the two-or-so weeks since the upgrade. I had earlier thought that it couldn't have been the OS change, since it had been a while since the change, but upon further thought, it makes perfect sense.
  • Look into getting computers to pull scheduled updates from rhn when they check in. See updates through RHN
  • Look into monitoring RAID, disk usage, etc. SMARTD, etc.
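As a starting point for the SMART side, a hedged sketch of what an /etc/smartd.conf entry could look like (device name and mail address are assumptions, not our actual config):

```shell
# /etc/smartd.conf (sketch)
# -a: monitor all SMART attributes; -m: mail this address on failure;
# -M test: send one test mail at smartd startup to confirm delivery works
/dev/sda -a -m root@einstein.unh.edu -M test
```

Then chkconfig smartd on and service smartd start, and watch root's mail, which is yet another reason to sort out who actually reads root's mail.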
  • Removed Aaron from "who should get root's mail?" part of einstein's /etc/aliases file. Now he won't get einstein's email all the time. Replaced him with "root@einstein.unh.edu", since both of us check that now.
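The change amounts to one line in /etc/aliases (a sketch; the previous value is a guess and the rest of the file is unchanged):

```shell
# /etc/aliases on einstein (sketch)
# was something like: root: aaron
root: root@einstein.unh.edu
```

Remember to run newaliases afterwards so the MTA picks up the change.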
  • Added Matt and Steve to the ACL for the backups shared mail folder. It was pretty simple with cyradm.
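For the record, a sketch of the cyradm session (mailbox name, usernames, and rights string are assumptions):

```shell
# connect to the Cyrus server as the admin user
cyradm --user cyrus localhost
# inside cyradm: "sam" = setaclmailbox, "lam" = listaclmailbox
sam shared.backups matt lrsp
sam shared.backups steve lrsp
# check the result
lam shared.backups
```

"lrsp" grants lookup, read, keep-seen-state, and post; add more rights (e.g. "i" for insert) if we want people to be able to file mail into the folder.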
  • Karpiusp remembered his password, no change necessary.
  • Need to get onto the "backups" shared folder, as well as be added as members to the lists. "backups" wasn't even a mailing list, according to the Mailman interface. Added Steve and Matt to the CC list for /etc/cron.daily/rsync_backup's mail command. If the message gets sent to us, then we'll know something's wrong with the list. If we don't get it, then the problem is probably in the script. E-mails were received, so there's something up with the mailing list. Yup. Checking the mailing list archives shows no messages on the list. Figured out how to do shared folders with cyradm.
  • Nobody is currently reading the mail that is sent to "root". Einstein had 3000+ unread messages; I deleted almost all of them. There are some useful messages with diagnostics in them that are sent to root, so we should find a solution for this. Temporarily, both Matt and Steve have email clients set up to access root's account. The largest chunk of the e-mails concern updating ClamAV. Maybe we should just do that? Doing this caused some major mail problems. It's a punishment for 1) typing a command in the wrong terminal, 2) not thoroughly understanding the configuration and importance of a component before updating it, and 3) not restarting the program after updating it.
  • Updated SpamAssassin and ran sa-update to get new rules. The SA documentation seems to indicate that having procmail send mail is the typical scenario. However, procmail isn't mentioned in the appropriate Postfix configuration file[2]. procmail and postfix are installed, though. Do we have a special mail setup? It seems like postfix is what does it. File this confusion under "improve mail chain documentation" rather than clutter up the list
  • okra was the only machine that jdelete could log into that also had restrictions in its /etc/security/access.conf, so I commented out the old setting, then copied the contents of another machine's file to okra's.
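For reference, the access.conf format we're copying around looks roughly like this (the group name is an assumption, and pam_access must be enabled in the PAM stack for the file to have any effect):

```shell
# /etc/security/access.conf (sketch)
# format is  permission : users/groups : origins
+ : root : ALL
+ : npg : ALL
# deny everyone else from everywhere
- : ALL : ALL
```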
  • jalapeno was mysteriously unreachable when I came in this morning (7/9/2007). The cacti graphs show it going down sometime mid-Saturday. The logs show several authentication failures beforehand on Saturday; that looks like a failed break-in attempt. A repeat on Monday around 2pm. It is not clear why jalapeno is being targeted. Check /var/log/secure. Note: I was on it on 7/9/2007, between 9:30am and 11:30am, trying to figure out how farm access is controlled. Specifically, I wanted to deny non-admin users access to jalapeno. Seems this worked (How did you do it? Looks like it's access.conf), but perhaps it had unintended side effects. We need more documentation on the Cacti page. I've not used it, so I do not know what access it needs to jalapeno. Perhaps it does not need monitoring by cacti. Cacti is still monitoring jalapeno.
  • Figure out what to do about the mass Samba login attempts. Since Maurik turned it off, does that mean that we don't really use it for anything important? Samba is still running on einstein. It is more important there. Roentgen samba access was for web stuff and now no longer needed. The system causing access problems was supposedly rebooted. Also samba (einstein and roentgen) is set to be non-verbose into syslog.
  • Test pauli4's network card by booting with a livecd. onboard works, e1000 doesn't. Still isn't working even after specifying the onboard port in ifcfg-eth0. Installing Fedora 7 got the onboard fixed, but I don't know how to interface it with the tomato tunnel
  • blackbody's behaving oddly. It was hung when I came in, and a couple of services failed to start up when I restarted it. Had to restart again, because the "greeter application" (graphical login screen) crashed instantly over and over, and when logging in via terminal, my home directory didn't mount. Now it's mostly working, but a bunch of GNOME desktop apps crashed upon my logging in. Getting segmentation faults with random programs, including yum and pirut. Nothing jumped out at me in the logs for updates, etc. Going to do a memtest on blackbody, just in case. Neither the Fedora 7 nor the Backtrack discs have memtest on them, and we don't have any blank CDs, so never mind for now. After further investigation, it probably isn't memory anyhow, since it's consistently the same set of programs that are segfaulting/crashing. I do think that the Ubuntu disc that was in gluon when we got it has memtest. I put it on the stack of discs where we keep the rest of them. I agree though, it probably isn't the memory. I think I've figured it out: the "installonlyn" plugin is a Python script, and the two programs that fail on startup are written in Python. Conclusion: something's wrong with the Python installation. Yum relies on some Python stuff, which makes it quite difficult to reinstall Python. Also, yum doesn't actually HAVE a reinstall function. Forcing install seems to do nothing. Reinstallation of Fedora should do it... Reinstalled, and am holding off on Python updates for a while.
  • Figure out what network devices on tomato are doing what Guess we're waiting for Aaron on this one. He needs to do something soon, because while I'm sure most of these extra devices aren't important to NPG, Xemed and the Paulis probably use them somehow, and we need to know what the deal is before installing RHEL5. It's do or die for Xemed.
  • "roentgen kernel: 10.10.88.111 sent an invalid ICMP type 3, code 3 error to a broadcast: 132.177.91.255 on eth0". whois says that 132.177.91.255 is a NIC in Kingsbury. Looking at the splunk graph for this error, it seems to have happened almost entirely over the course of Wednesday; Thursday only gets 1-5 per hour, while late Tuesday and all of Wednesday got ~200 per hour.Partially Solved: This has been solved: The node is 10.10.88.111, which would be a system on the Xemed end. I simply block this system from getting through the tunnel on tomato, using the iptables (/etc/sysconfig/iptables-npg) with an entry: "-A INPUT -s 10.10.88.111 -j REJECT". This rejects that node silently. We may need to add such a line on einstein or roentgen, but I think it should now be stopped at the tunnel level. Still getting a lot of the same error on roentgen overnight, even with that line in roentgen's iptables-npg Haven't seen the error on logwatch or splunk in a while; Xemed must have fixed the rogue machine.
  • Our newly-made loaner machine Hobo has FC7 installed now. Needs LDAP configuration; I can't seem to get it to download from einstein as usual. Steve, you're gonna need to show me how it's done. Seems to be fairly responsive, but it could use some work to make it run faster. At least in the process we'll learn more about the structure of redhat-esque systems. Probably can't do much since it's probably not registered. Registered, set up, sitting around waiting for somebody to use it.
  • Reinstalled pauli2 and got it working
  • Keep an eye on jalapeno. Make sure that the changes to the access rules haven't screwed anything up. jalapeno crashed at about 3AM on Sunday. No peculiar logins or any weird events listed by splunk, just the typical tons of connections from okra every five minutes. Cacti doesn't show any unusual CPU, memory, or traffic near that time, either. Nothing out-of-the-ordinary in the logs, etc. Probably "just" a crash? I'm of the opinion it's splunk beta. Let's see if it happens again. That would be the third weekend in a row. No crash, no unusual behavior over weekend
  • Matt's learning a bit of Perl so we can figure out exactly how the backup works, as well as create more programs in the future, specifically thinking of monitoring. Look into the CPAN modules under Net::, etc. I just found out that it's actually very easy to use ssh to log onto a machine and execute a command rather than a login shell: ssh who@a.b.c cmd. For example, I made a bash function called "whoson" that tells me who's on the machine that I pass to it: whoson roentgen will prompt me for my password, then log on, run w, and display the output. We can set up the einstein root so that you can issue such commands without needing to supply your password. This is already set up by Aaron, but nobody but he knows the passphrase to use. I'll ask Aaron. Hopefully it checks SOME sort of authentication, otherwise that's a biiig security risk... Hopefully Aaron isn't mad at us for the tomato thing, then. Looking into ssh-agent stuff.
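A sketch of the whoson helper described above (assumes the same username exists on the remote machine):

```shell
# whoson: show who's logged into a remote host, without opening a login shell
whoson() {
    # $1 = hostname; ssh runs `w` remotely and prints its output locally
    ssh "${USER}@$1" w
}

# usage: whoson roentgen
# (prompts for a password unless ssh keys / ssh-agent are set up)
```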
  • Splunk Enterprise trial license expires on August 9th, 2007. Won't be renewed.
  • Tomato installed, but getting GRUB error 15. There doesn't seem to be anything on the boot partition. After all of the trouble with the disks yesterday, I'm going to make all new CDs with the blanks that I brought in today, and start all over again. Installed! It inconsistently hangs when loading X on startup, though... Okay, I'm starting to become quite sour toward RHEL5. There's no info on this black-screen crap, and I found this while looking at the long list of "known issues": "Boot-time logging to /var/log/boot.log is not available in this release of Red Hat Enterprise Linux 5. An equivalent functionality will be added in a future update of Red Hat Enterprise Linux 5." Started working once Matt came in. Mysteriously. Everything works now. Or not: logging out caused the black-screen crap! Reinstalling without Virtualisation packages, since that's all that differed from lentil's install. That didn't work either, but setting grub to boot into runlevel 3 did get things loaded. So we're keeping it that way. Set up all of the basic stuff (auth, automount, etc.), so it's a good enough solution for now.
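For reference, the runlevel-3 setup is just a trailing "3" on the kernel line in grub.conf (kernel version and LVM paths below are placeholders, not necessarily tomato's):

```shell
# /boot/grub/grub.conf (sketch)
title Red Hat Enterprise Linux 5
        root (hd0,0)
        # trailing "3" boots to runlevel 3 (multi-user, no X)
        kernel /vmlinuz-2.6.18-8.el5 ro root=/dev/VolGroup00/LogVol00 3
        initrd /initrd-2.6.18-8.el5.img
```

X can still be brought up manually with startx when someone needs it at the console.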
  • tomato just BSC'd when I did a startx as root. Guess the problem is with X and/or video driver after all. Running updates, some of which involve X. Maybe these will have a fix... Nope! Seems to be using an ati card, maybe we should try different ati drivers? There's quite a few out there. Matt's idea to change drivers seems to have worked, based on several tests. The standard ati driver doesn't support that card, so we're using VESA. We don't need 3d on tomato, so it'll be fine.
  • tomato DNS: Tried to use einstein's iptables configuration files so that clients could actually use the DNS, but trying that resulted in them being unable to ping, ssh, etc. via farm IP addresses. Had to comment out line 44 of /etc/sysconfig/iptables-npg to get it to work. What's different between einstein and tomato to cause this? Is it important?