Completed in March/April/May/June 2008

  • npg-daily-32 has been filled. npg-daily-29 was removed from lentil and replaced with npg-daily-34. The current rate of consumption is around 4-5 months per 750 GB disk.
  • Printer queue for the copier: Konica Minolta Bizhub 750, IP = pita.unh.edu. It seems we need information from the Konica technician to get it set up on Red Hat; the driver's installation documentation doesn't cover things like the passcode, because those are machine-specific. Katie says that if he doesn't come on Monday, she'll make an inquiry. Mac OS X is now working, and the IT guy should be here the week of June 26th. He never came and did not respond to the voice message we left. Will call again. It's been a YEAR now. I think we can ignore this.
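For whenever the machine-specific details do arrive, a rough sketch of adding a plain network queue on the Red Hat side; the queue name and port 9100 are guesses, and the Bizhub-specific PPD/passcode settings are exactly the part we're still missing:
lpadmin -p bizhub750 -E -v socket://pita.unh.edu:9100 -m raw   # hypothetical raw queue pointed at the copier
lpstat -p bizhub750                                            # confirm the queue exists and is enabled
lp -d bizhub750 /etc/motd                                      # throwaway test print
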
  • Lepton constantly has problems printing; it seems that at least once a month the queue locks up. This machine has Fedora Core 3 installed, and I wonder if it would be worth just putting CentOS on it to be done with this recurring problem. This hasn't happened again in months. Problem solved?
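For the next time lepton's queue wedges, a minimal unstick sequence, assuming CUPS; on FC3's older CUPS 1.1 the last command is /usr/bin/enable rather than cupsenable, and "copier" stands in for whatever the queue is actually called:
lpstat -p -o           # show printer state and any stuck jobs
cancel -a copier       # flush everything queued on it
cupsenable copier      # re-enable the printer (CUPS 1.1: /usr/bin/enable copier)
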
  • Clean out some users who left a while ago. (Maurik should do this.) Since amrita's account is gone, it seems this has been done.
  • pumpkin/lentil/mariecurie/einstein2/corn: It was all thanks to nss_ldap. Here's a summary of the symptoms, none of which occur for root:
    • bash has piping problems for Steve, but sometimes not for Matt. Something like ls | wc will print nothing, and echo $? will print 141, i.e. 128 + 13, meaning the command was killed by SIGPIPE. Backticks cause similar problems: something like echo `ls` will print nothing, and `ls`; echo $? also prints 141. Since several system-provided startup scripts rely on strings returned from backticks, bash errors print upon login.
    • None of the bash problems seem to happen when logged onto the physical machine rather than over SSH, but all of the tcsh problems still occur. It turned out that the newest version of bash on el5 systems is broken; replacing it with an older version fixes the bash issue. Tcsh still gives issues, but that appears to be unrelated.
    • Everything else seems fine on the affected machines: disk usage, other programs, network, etc.
    • Since this is now not just a pumpkin issue, the problem probably isn't corrupt files; maybe some update messed something up. Figuring out which update could be tough, though, since there was a huge chunk of updates at some point (although I suspect the problem might be bash, since tcsh depends on it). However, einstein2 is pretty much a clean slate, so a simple if tedious method would be to apply updates to it gradually until the symptoms pop up.
    • tcsh can only run built-in commands; anything else results in a "Broken pipe" and the program not running. "Broken pipe" appears even for non-existent programs (e.g. "hhhhhhhh" will make "Broken pipe" appear).
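A quick way to check whether a machine is affected, run over SSH as a regular LDAP user (the hostname and username here are just examples; none of this misbehaves for root or on the console):
ssh someuser@pumpkin.unh.edu
ls | wc        # prints nothing on a broken box
echo $?        # 141 instead of 0
echo `ls`      # backticks show the same failure
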
  • Lentil: Need to reinstall a whole bunch of things and/or put in a new disk; it looks like there was some damage from the power problem on Monday (the size-0 files have returned).
  • Jalapeno hangups: look at sensors on jalapeno so that cacti can monitor the temperature. The crashing probably isn't caused by the splunk beta (no longer a beta!), since it runs entirely in userspace. lm_sensors fails to detect anything readable; is there a way around this? Jalapeno's been on for two weeks with no problems, so let's keep our fingers crossed… This system is too unstable to maintain, like tomato and old einstein. Got an e-mail today saying it has a degraded array. I just turned it off, since it's just a crappy space heater at this point.
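For reference, the usual lm_sensors procedure, in case someone wants another crack at jalapeno; the chip driver named below is only an example of what sensors-detect might suggest, not what jalapeno actually needs:
sensors-detect      # interactive probe; reports which kernel modules to load
modprobe w83627hf   # example chip driver -- substitute whatever sensors-detect recommends
sensors             # on jalapeno this comes back with nothing readable
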
  • Resize/clean up partitions as necessary. It seems to be a running trend that a computer hits 0 free space and problems crop up. This hasn't happened in half a year; I think it was a coincidence that a few computers had it happen at once.
  • Put a new drive in lentil, npg-daily-33. That's good, because npg-daily-32 is almost full already: 81%!
  • CLAMAV died and no one noticed! The April 23rd update of clamav (the mail virus scanner on einstein) killed this mail subsystem because some of the options in /etc/clamd.conf were now obsolete. See http://www.sfr-fresh.com/unix/misc/clamav-0.93.tar.gz:a/clamav-0.93/NEWS. This seems to have gone unnoticed for a while. Are we asleep at the wheel? Edited /etc/clamd.conf to comment out these options.
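The edit itself was just commenting directives out of /etc/clamd.conf until clamd would start again; roughly like this, where the directive names are illustrative examples rather than the definitive list (that list is in the NEWS file linked above):
# excerpt of /etc/clamd.conf
#ArchiveMaxFileSize 10M    # obsolete as of 0.93, commented out
#ArchiveMaxRecursion 5     # likewise
service clamd restart      # service name may differ depending on how clamav was packaged
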
  • When I came in today (22nd), taro had kernel panicked and einstein was acting strangely. Checking root's email, I saw that all day the 21st and 22nd, there were SMTP errors, around 2 per minute. A quick glance at them gives me the impression that they're spam attempts, due to ridiculous FROM fields like pedrofinancialcompany.inc.net@tiscali.dk. I rebooted taro and einstein, everything seems fine now.
  • Pauli crashes nearly every day, and not when backups come around; we need to set up detailed system logging to find out why. Follow-ups:
    • Pauli2 and 4 don't give out their data via /net to the other paulis. This doesn't seem to be an autofs setting, since I see nothing about it in the working nodes' configs. Similarly, 2, 4, and 6 won't access the other paulis via /net. 2 and 4 were nodes we rebuilt this summer, so it makes sense that they don't have the right settings, but 6 is a mystery. (A quick /net check is sketched after this list.)
    • Pauli2's hard drive may be dying: some files in /data are inaccessible, and smartctl shows a large number of errors (98, if I'm reading this right...). Time to get Heisenberg a new hard drive? Or maybe just wean him off of NPG… It may be done for; we can't connect to pauli2, and rebooting didn't seem to work. Need to set up the monitor/keyboard for it and check things out.
    • The pauli nodes are all off for now; they've been deemed to produce more heat than they're worth. We'll leave them off until Heisenberg complains. Heisenberg's complaining now. Fixed his pauli machine by walking into the room (still don't know what he was talking about), and dirac had LDAP shut off. He wants the paulis up whenever possible, which I explained could be a while because of the heat issues.
    • Pauli doesn't crash anymore, as far as I can tell; switching the power supply seems to have done it.
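For the /net weirdness, a quick comparison between a working and a broken pauli might be worth writing down. Normally /net comes from the -hosts map in the automounter config, so the things to eyeball are roughly these (pauli1 is just an example target):
grep -v '^#' /etc/auto.master   # a working node should list something like: /net  -hosts
service autofs status           # make sure the automounter is actually running
showmount -e pauli1             # see what the other paulis are exporting
ls /net/pauli1                  # try to trigger the automount by hand
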
  • Pumpkin is now stable. Read more on the configuration at Pumpkin and Xen.
  • Roentgen was plugged into one of the non-battery-backed outlets of its UPS, so I shut it down and moved the plug. After it started back up, root got a couple of mysterious e-mails about /dev/md0 and /dev/md2: Array /dev/md2 has experienced event "DeviceDisappeared". However, mount seems to indicate that everything important is around:
/dev/vg_roentgen/rhel3 on / type ext3 (rw,acl)
none on /proc type proc (rw)
none on /dev/pts type devpts (rw,gid=5,mode=620)
usbdevfs on /proc/bus/usb type usbdevfs (rw)
/dev/md1 on /boot type ext3 (rw)
none on /dev/shm type tmpfs (rw)
/dev/vg_roentgen/rhel3_var on /var type ext3 (rw)
/dev/vg_roentgen/wheel on /wheel type ext3 (rw,acl)
/dev/vg_roentgen/srv on /srv type ext3 (rw,acl)
/dev/vg_roentgen/dropbox on /var/www/dropbox type ext3 (rw)
/usr/share/ssl on /etc/ssl type none (rw,bind)
/proc on /var/lib/bind/proc type none (rw,bind)
automount(pid1503) on /net type autofs (rw,fd=5,pgrp=1503,minproto=2,maxproto=4)

and all of the sites listed on Web Servers work. Were those just old arrays that aren't around anymore but are still listed in some config file? We haven't seen any issues, and roentgen's going to be virtualized in the not-too-distant future, so this is fairly irrelevant.
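If anyone wants to confirm the stale-array theory before roentgen gets virtualized, a minimal check, assuming the e-mails come from mdadm --monitor reading /etc/mdadm.conf:
cat /proc/mdstat              # arrays the kernel actually has assembled (md1 shows up in the mount output above)
mdadm --detail --scan         # arrays mdadm can see right now
grep ^ARRAY /etc/mdadm.conf   # if md0/md2 are listed here but not above, they're just leftover entries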

  • Gourd's been giving smartd errors, namely
Offline uncorrectable sectors detected:
       /dev/sda [3ware_disk_00] - 48 Time(s)
       1 offline uncorrectable sectors detected

Okra also has an offline uncorrectable sector! No sign of problems since this was posted.
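To keep an eye on gourd's (and okra's) flaky sectors by hand, something along these lines; the -d 3ware,0 port number is a guess at which disk sits behind gourd's controller, and okra is assumed to be a directly attached disk:
smartctl -H -d 3ware,0 /dev/sda                       # overall health of the disk behind gourd's 3ware card
smartctl -a -d 3ware,0 /dev/sda | grep -i uncorrect   # watch the Offline_Uncorrectable count
smartctl -a /dev/sda | grep -i uncorrect              # same check on okra, without the 3ware option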