Tbow's Log

From Nuclear Physics Group Documentation Pages

This is a log of everything Josh (Systems Administrator) has done over the years.

Projects, Scripts, and Daemons

This section includes things like:

  • Scripts I have written
  • Daemons I have set up
  • Projects I have attempted or completed

Upgrades

This is a list of my notes on the system upgrades I have performed in the past.

System Upgrade 2013-12-30

The order we will be updating in is: jalapeno, pumpkin, gourd, einstein, taro, roentgen, and endeavour. I picked this order because we need a physical machine to test this update on, and pumpkin is the lowest-priority physical machine to do this with. Taro needs to stay after gourd and einstein because I will want to be able to recover the VMs on a working virtualized server (the backup will come from the pulled drive on gourd, described below). If pumpkin goes well, then it should follow that gourd will go smoothly. Jalapeno goes first because it is the lowest-priority VM and it will help us get our feet wet with updating CentOS 5 to 6, which will also help with pumpkin's update from RHEL 5 to 6.

This will require (for the physical machines) that we get in touch with UNH IT and make sure we can get the proper keys to update with the official RHEL 6 repos. Gourd could be problematic; that is why we will update her and make sure she runs properly (including the VMs), then detach one of the software RAID drives (for backup) and rebuild the RAID with a new drive, and only then proceed to upgrading to RHEL 6.
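
A minimal sketch of the drive-detach step, assuming gourd's software RAID is an md mirror at /dev/md0 with /dev/sdb1 as the member we pull (the actual device names must be checked on gourd first):

mdadm --manage /dev/md0 --fail /dev/sdb1
mdadm --manage /dev/md0 --remove /dev/sdb1
# Pull the drive and store it as the backup, insert the new drive,
# then add the new drive's partition and let md rebuild:
mdadm --manage /dev/md0 --add /dev/sdb1
cat /proc/mdstat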

There are a few problems I foresee: the upgrade from 5 to 6 itself; endeavour's yum and cluster software; making sure the latest version of GCC (and any other software crucial to the physicists' projects) is backwards compatible with the older versions (in other words, how many problems will they have); the video cards in pumpkin and taro; and finally einstein's mail and LDAP (will they be compatible with CentOS 6).

RAID and Areca

Drive Life 2012-06-24

This is a list of expected drive life from the manufacturers. All of these drives are in our RAIDs.

Pumpkin

ST3750640NS (p.23)
 8,760 power-on-hours per year.
 250 average motor start/stop cycles per year.
ST3750640AS (p.37)
 2400 power-on-hours per year.
 10,000 average motor start/stop cycles per year.
WDC WD7500AAKS-00RBA0
 Start/stop cycles 50,000

Endeavour

ST31000340NS
ST31000524AS
ST31000526SV
 MTBF 1,000,000 hours
 Start / Stop Cycles 50,000
 Non-Recoverable Errors 1 per 10^14

Areca 1680 2010-01-10

4.3 Driver Installation for Linux

This chapter describes how to install the SAS RAID controller driver on Red Hat Linux, SuSE and other versions of Linux. Before installing the SAS RAID driver on Linux, complete the following actions:

  1. Install and configure the controller and hard disk drives according to the instructions in Chapter 2, Hardware Installation.
  2. Start the system and then press Tab+F6 to enter the McBIOS RAID manager configuration utility. Use the McBIOS RAID manager to create the RAID set and volume set. For details, see Chapter 3, McBIOS RAID Manager.

If you are using a Linux distribution for which there is not a compiled driver available from Areca, you can copy the source from the SAS software CD or download the source from the Areca website and compile a new driver.

Compiled and tested drivers for Red Hat and SuSE Linux are included on the shipped CD. You can download updated versions of compiled and tested drivers for RedHat or SuSE Linux from the Areca web site at http://www.areca.com.tw. Included in these downloads is the Linux driver source, which can be used to compile the updated version driver for RedHat, SuSE and other versions of Linux. Please refer to the "readme.txt" file on the included Areca CD or website to make a driver diskette and to install the driver on the system.
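
The authoritative build steps are in readme.txt, but compiling the Areca arcmsr driver from source typically looks something like this sketch (the archive name and install path are placeholders):

# Unpack the driver source from the CD or the Areca website.
tar -xzf arcmsr-x.x.x.tar.gz
cd arcmsr-x.x.x
# Build the module against the headers of the running kernel.
make -C /lib/modules/$(uname -r)/build M=$(pwd) modules
# Install it and load it.
cp arcmsr.ko /lib/modules/$(uname -r)/kernel/drivers/scsi/
depmod -a
modprobe arcmsr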

Areca Scripts

This is a collection of the Areca scripts I have attempted to build.

grep_areca_info.sh 2012-10-09

#!/bin/bash
# Pull one drive's line from the areca_info dump: $1 selects the report block
# (a date_hostname stamp), $2 the field (Start/Power-on/Temperature), $3 the drive number.
grep -A 52 "$1" /net/data/taro/areca/areca_info | grep "#$3" | grep "$2"
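
Presumably invoked with a report stamp, a field, and a drive number, e.g. (arguments here are hypothetical):

./grep_areca_info.sh 2014-01-14 Temperature 5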

areca_info.sh 2014-01-14

#!/bin/bash
# Append a SMART/temperature report for the first $1 drives on the Areca
# controller to the areca_info file.
info=areca_info
echo "++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++" >> $info
echo "$(date +%Y-%m-%d_%T)_${HOSTNAME}" >> $info
echo "------------------------------------------------------------------" >> $info
# Header row: the SMART attribute names, taken from drive 1.
echo -e "Drv#\t$(areca_cli64 disk smart info drv=1 | grep Attribute)" >> $info
echo "======================================================================================" >> $info
# Start/stop cycle counts for each drive.
for i in $(seq 1 $1)
do
 echo -e "#$i\t$(areca_cli64 disk smart info drv=$i | grep Start)" >> $info
done
# Power-on hours for each drive.
for i in $(seq 1 $1)
do
 echo -e "#$i\t$(areca_cli64 disk smart info drv=$i | grep Power-on)" >> $info
done
# Drive temperatures.
for i in $(seq 1 $1)
do
 echo -e "#$i\t$(areca_cli64 disk info drv=$i | grep Temperature)" >> $info
done
echo "------------------------------------------------------------------" >> $info
# Controller/enclosure temperature sensors.
areca_cli64 hw info | grep Temp >> $info
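
Run it with the number of drives on the controller; for example, for a 24-drive chassis:

./areca_info.sh 24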

mydata.py 2012-06-19

#!/usr/bin/python
# Read records from the flat file "mydata" (records separated by "+++")
# and experiment with writing rows to an SQLite database.
import sqlite3

data = open("mydata", "r")
all_data = data.read()
data.close()
all_data_split = all_data.split("+++")
for i in all_data_split:
 print i
# Make connection to database mydata.db, which is in the current directory.
conn = sqlite3.connect('mydata.db')
c = conn.cursor()
# Create the table on first run so the INSERT below cannot fail.
c.execute("""CREATE TABLE IF NOT EXISTS stocks
             (date text, trans text, symbol text, qty real, price real)""")
# Insert a row of data
c.execute("INSERT INTO stocks VALUES ('2006-01-05','BUY','RHAT',100,35.14)")
# Save (commit) the changes
conn.commit()
# Close the cursor and the connection when done
c.close()
conn.close()

LDAP

Elog

Elog notes 2009-05-20

Info from the site https://midas.psi.ch/elog/adminguide.html

Download: http://midas.psi.ch/elog/download/

RPM Install Notes

Since version 2.0, ELOG comes as an RPM file which eases the installation. Get the file elog-x.x.x-x.i386.rpm from the download section and execute as root "rpm -i elog-x.x.x-x.i386.rpm". This will install the elogd daemon in /usr/local/sbin and the elog and elconv programs in /usr/local/bin. The sample configuration file elogd.cfg together with the sample logbook will be installed under /usr/local/elog, and the documentation goes to /usr/share/doc. The elogd startup script will be installed at /etc/rc.d/init.d/elogd. To start the daemon, enter

/etc/rc.d/init.d/elogd start

It will listen on the port specified in /usr/local/elog/elogd.cfg, which is 8080 by default, so one can connect using any browser with the URL:

http://localhost:8080

To start the daemon automatically, enter:

chkconfig --add elogd
chkconfig --level 345 elogd on 

which will start the daemon on run levels 3,4 and 5 after the next reboot.

Note that the RPM installation creates a user and group elog, under which the daemon runs.

Notes on running elog under Apache

For cases where elogd should run under port 80 in parallel to an Apache server, Apache can be configured to serve Elog in a subdirectory. Start elogd normally under port 8080 (or similar) as noted above and make sure it's working there. Then put the following redirection into the Apache configuration file:

Redirect permanent /elog http://your.host.domain/elog/
ProxyPass /elog/ http://your.host.domain:8080/

Make sure that the Apache modules mod_proxy.c and mod_alias.c are activated. Justin Dieters <enderak@yahoo.com> reports that mod_proxy_http.c is also required. The Redirect statement is necessary to automatically append a "/" to a request like http://your.host.domain/elog. Apache then works as a proxy and forwards all requests starting with /elog to the elogd daemon.
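
On a stock Red Hat Apache, that typically means having lines like these in the configuration (module paths can differ by distribution):

LoadModule alias_module modules/mod_alias.so
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_http_module modules/mod_proxy_http.so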

Note: Do not put "ProxyRequests On" into your configuration file. This option is not necessary and can be misused for spamming and proxy forwarding of otherwise blocked sites.

Because elogd uses links to itself (for example in the email notification and the redirection after a submit), it has to know under which URL it is running. If you run it under a proxy, you have to add the line:

     URL = http://your.proxy.host/subdir/

into elogd.cfg.

Notes on Apache:

Another possibility is to use the Apache web server as a proxy server allowing secure connections. To do so, Apache has to be configured accordingly and a certificate has to be generated. See the instructions on how to create a certificate, and see the notes on running elogd under Apache earlier on this page. Once configured correctly, elogd can be accessed via http://your.host and via https://your.host simultaneously.

The redirection statement has to be changed to

     Redirect permanent /elog https://your.host.domain/elog/
     ProxyPass /elog/ http://your.host.domain:8080/
and the following has to be added to the "VirtualHost ...:443" section in /etc/httpd/conf.d/ssl.conf:
     # Proxy setup for Elog
     <Proxy *>
     Order deny,allow
     Allow from all
     </Proxy>
     ProxyPass /elog/ http://host.where.elogd.is.running:8080/
     ProxyPassReverse /elog/ http://host.where.elogd.is.running:8080/
Then, the following URL statement has to be written to elogd.cfg:
     URL = https://your.host.domain/elog

There are more detailed step-by-step instructions in the contributions section.

Using ssh: elogd can be accessed through an SSH tunnel. To do so, open an SSH tunnel like:

ssh -L 1234:your.server.name:8080 your.server.name

This opens a secure tunnel from your local host, port 1234, to the server host where the elogd daemon is running on port 8080. Now you can access http://localhost:1234 from your browser and reach elogd in a secure way.

Notes on Server Configuration

The ELOG daemon elogd can be executed with the following options:

elogd [-p port] [-h hostname/IP] [-C] [-m] [-M] [-D] [-c file] [-s dir] [-d dir] [-v] [-k] [-f file] [-x]

with:

   * -p <port>  TCP port number to use for the http server (if other than 80)
   * -h <hostname or IP address> in the case of a "multihomed" server, host name or IP address of the interface ELOG should run on
   * -C <url>  clone remote elogd configuration 
   * -m  synchronize logbook(s) with remote server
   * -M  synchronize with removing deleted entries
   * -l <logbook>  optionally specify logbook for -m and -M commands
   * -D   become a daemon (Unix only)
   * -c <file>  specify the configuration file (full path mandatory if -D is used)
   * -s <dir> specify resource directory (themes, icons, ...)
   * -d <dir> specify logbook root directory
   * -v  verbose output for debugging
   * -k  do not use TCP keep-alive
   * -f <file> specify PID file where elogd process ID is written when server is started
   * -x  enables execution of shell commands

It may also be used to generate passwords:

     elogd [-r pwd] [-w pwd] [-a pwd] [-l logbook]

with:

   * -r <pwd> create/overwrite read password in config file
   * -w <pwd> create/overwrite write password in config file
   * -a <pwd> create/overwrite administrative password in config file
   * -l <logbook> specify logbook for -r and -w commands

The appearance, functionality and behaviour of the various logbooks on an ELOG server are determined by the single elogd.cfg file in the ELOG installation directory.

This file may be edited directly from the file system, or from a form in the ELOG Web interface (when the Config menu item is available). In the latter case, changes are applied dynamically without having to restart the server. Instead of restarting the server, under Unix one can also send a HUP signal like "killall -HUP elogd" to tell the server to re-read its configuration.

The many options of this unique but very important file are documented on the separate elogd.cfg syntax page.

To better control appearance and layout of the logbooks, elogd.cfg may optionally specify the use of additional files containing HTML code, and/or custom "themes" configurations. These need to be edited directly from the file system right now.

The meaning of the directory flags -s and -d is explained in the section covering the configuration options Resource dir and Logbook dir in the elogd.cfg description.

Notes on tarball install

Make sure you have the libssl-dev package installed. Consult your distribution for details.

Expand the compressed TAR file with tar -xzvf elog-x.x.x.tar.gz. This creates a subdirectory elog-x.x.x where x.x.x is the version number. In that directory execute make, which creates the executables elogd, elog and elconv. These executables can then be copied to a convenient place like /usr/local/bin or ~/bin. Alternatively, a "make install" will copy the daemon elogd to SDESTDIR (by default /usr/local/sbin) and the other files to DESTDIR (by default /usr/local/bin). These directories can be changed in the Makefile. The elogd executable can be started manually for testing with:

elogd -p 8080

where the -p flag specifies the port. Without the -p flag, the server uses the standard WWW port 80. Note that ports below 1024 can only be used if elogd is started under root, or the "sticky bit" is set on the executable.

When elogd is started under root, it attaches to the specified port and then tries to fall back to a non-root account. This is necessary to avoid security problems. It looks in the configuration file for the statements Usr and Grp. If found, elogd uses that user and group name to run under. The names must of course be present on the system (usually in /etc/passwd and /etc/group). If the statements Usr and Grp are not present, elogd tries user and group elog, then the default user and group (normally nobody and nogroup). Care has to be taken that elogd, when running under the specific user and group account, has read and write access to the configuration file and logbook directories. Note that the RPM installation automatically creates a user and group elog.
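
For example, to pin the daemon explicitly to the elog account, the global section of elogd.cfg would carry something like this (assuming the elog user and group exist on the system):

[global]
Usr = elog
Grp = elog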

If the program complains with something like "cannot bind to port...", it could be that the network is not started on the Linux box. This can be checked with the /sbin/ifconfig program, which must show that eth0 is up and running.

The distribution contains a sample configuration file elogd.cfg and a demo logbook in the demo subdirectory. If the elogd server is started in the elogd-x.x.x directory, the demo logbook can be directly accessed with a browser by specifying the URL http://localhost:8080 (or whatever port you started the elog daemon on). If the elogd server is started in some other directory, you must specify the full path of the elogd.cfg file with the "-c" flag and change the Data dir = option in the configuration file to a full path like /usr/local/elog.

Once testing is complete, elogd will typically be started with the -D flag to run as a daemon in the background, like this:

elogd -p 8080 -c /usr/local/elog/elogd.cfg -D

Note that it is mandatory to specify the full path for the configuration file when elogd is started as a daemon. To test the daemon, connect to your host via:

http://your.host:8080/

If port 80 is used, the port can be omitted in the URL. If several logbooks are defined on a host, they can be specified in the URL:

http://your.host/<logbook>

where <logbook> is the name of the logbook.

The many options of the all-important configuration file elogd.cfg are described on the elogd.cfg syntax page. My own elog-related notes files:

[Tbow@gluon documentation-notes]$ ll elog*
-rw-r--r-- 1 Tbow npg 9.4K May 20  2009 elog
-rw-r--r-- 1 Tbow npg  623 Jan 26  2010 elog.roentgen.messages.problem
-rw-r--r-- 1 Tbow npg 1.2K Feb 11 19:12 elog_users_setup

elog_users_setup 2010-02-11

You can find some instructions/information here:

http://pbpl.physics.ucla.edu/old_stuff/elogold/current/doc/config.html#access

The thing you have to remember is that you want the new users to end up being users of just the logbook they will be using, not a global user. So, if you look at where my name is in the elogd.cfg file, I am designated as an admin user, and am a global user that can log into any logbook to fix things. If you look through the file for a user like Daniel, he can only log into the nuclear group logbooks, not my private one, or Karl's, or Maurik's. So, if you want to add someone to the nuclear group's logbooks, for example, add that new person's user name to where you find people like Daniel and Ethan, and set the thing to allow self-registering at the top. Restart, and then go ahead and use the self-register to register the new person's password and account. Then go back into the elogd.cfg file and comment out the self register, so other people cannot do that, and restart. That should be the easiest way to do it, but you can read the info and decide about that. How does that sound? Does this make sense?
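
A rough sketch of what that looks like in elogd.cfg (the user and logbook names here are illustrative; check the exact option spellings against the elogd.cfg syntax page):

[global]
Admin user = tbow
; enable self-registration only while the new account is created
Self register = 1

[Nuclear Logbook]
Login user = daniel, ethan, newperson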

elog_roentgen_messages_problems 2010-01-26

Jan 26 09:48:00 roentgen elogd[15215]: elogd 2.7.8 built Dec  2 2009, 11:54:27 
Jan 26 09:48:00 roentgen elogd[15215]: revision 2278
Jan 26 09:48:00 roentgen elogd[15215]: Falling back to default group "elog"
Jan 26 09:48:01 roentgen elogd[15215]: Falling back to default user "elog"
Jan 26 09:48:01 roentgen elogd[15215]: FCKedit detected
Jan 26 09:48:01 roentgen elogd[15217]: Falling back to default group "elog"
Jan 26 09:48:01 roentgen elogd[15217]: Falling back to default user "elog"
Jan 26 09:48:01 roentgen elogd[15215]: ImageMagick detected
Jan 26 09:48:02 roentgen elogd[15215]: SSLServer listening on port 8080

CUPS

CUPS quota accounting 2009-06-10

3.3 Print quotas and accounting

CUPS also has basic page accounting and quota capabilities.

Every printed page is logged in the file /var/log/cups/page_log, so one can read out this file at any time and determine who printed how many pages. The system is based on the CUPS filters: they simply analyse the PostScript data stream to determine the number of pages, and therefore it depends on the quality of the PostScript generated by the applications whether the pages get counted correctly. If there is a paper jam, pages are already counted but do not get printed. Also, jobs which get rendered printer-ready on the client (Windows) will not get accounted correctly, as CUPS does not understand the proprietary language of the printer.
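
As a quick sketch, pages per user can be tallied from page_log with awk. This assumes the default log format (printer user job-id date-time page-number num-copies ...), where the bracketed date-time splits into two awk fields, so the copy count lands in field 7:

awk '{pages[$2] += $7} END {for (u in pages) print u, pages[u]}' /var/log/cups/page_log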

In addition, one can restrict the amount of pages (or kBytes) which a user is allowed to print in a certain time frame. Such restrictions can be applied to the print queues with the "lpadmin" command.

lpadmin -p printer1 -o job-quota-period=604800 -o job-k-limit=1024
lpadmin -p printer2 -o job-quota-period=604800 -o job-page-limit=100

The first command means that within the "job-quota-period" (time always given in seconds, in this example we have one week) users can only print a maximum of 1024 kBytes (= 1 MByte) of data on the printer "printer1". The second command restricts printing on "printer2" to 100 pages per week. One can also give both "job-k-limit" and "job-page-limit" to one queue. Then both limits apply so the printer rejects jobs when the user already reaches one of the limits, either the 1 MByte or the 100 pages.

This is a very simple quota system: quotas cannot be given per-user, so a certain user's quota cannot be raised independently of the other users, for example if the user pays for his pages or gets a more printing-intensive job. Also, the counting of pages is not very sophisticated, as was already shown above.

So for more sophisticated accounting it is recommended to use add-on software specialized for this job. Such software can limit printing per-user, create bills for the users, use the hardware page-counting methods of laser printers, and even estimate the actual amount of toner or ink needed for a page sent to the printer by counting the pixels.

The most well-known and complete free software package for print accounting and quotas is PyKota:

http://www.librelogiciel.com/software/PyKota/

A simple system based on reading out the hardware counters of network printers via SNMP is accsnmp:

http://fritz.potsdam.edu/projects/cupsapps/

CUPS Basic Info 2009-06-11

This file contains some basic cups commands and info:

The device can be a parallel port, a network interface, and so forth. Devices within CUPS use Uniform Resource Identifiers ("URIs") which are a more general form of Uniform Resource Locators ("URLs") that are used in your web browser. For example, the first parallel port in Linux usually uses a device URI of parallel:/dev/lp1

Lookup printer info:

lpinfo -v
 network socket
 network http
 network ipp
 network lpd
 direct parallel:/dev/lp1
 serial serial:/dev/ttyS1?baud=115200
 serial serial:/dev/ttyS2?baud=115200
 direct usb:/dev/usb/lp0
 network smb

File devices have device URIs of the form file:/directory/filename while network devices use the more familiar method://server or method://server/path format. Printer queues usually have a PostScript Printer Description ("PPD") file associated with them. PPD files describe the capabilities of each printer, the page sizes supported, etc.

Adding a printer:

/usr/sbin/lpadmin -p printer -E -v device -m ppd
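
For instance, a networked printer with a JetDirect-style interface might be added like this (queue name, address, and PPD are hypothetical):

/usr/sbin/lpadmin -p hp4200 -E -v socket://192.168.1.50:9100 -m laserjet.ppd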

Managing printers:

/usr/sbin/lpadmin -p printer options

Starting and Stopping printer queues:

/usr/bin/enable printer
/usr/bin/disable printer

Accepting and Rejecting Print jobs:

/usr/sbin/accept printer
/usr/sbin/reject printer

Restrict Access:

/usr/sbin/lpadmin -p printer -u allow:all
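
To limit a queue to specific users instead (user names hypothetical):

/usr/sbin/lpadmin -p printer -u allow:tbow,daniel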

Virtualization

Misc

denyhosts-undeny.py 2013-05-31

#!/usr/bin/env python
import os
import sys
#The only argument should be the host to undeny
try:
 goodhost = sys.argv[1]
except IndexError:
 print "Please specify a host to undeny!"
 sys.exit(1)
#These commands start/stop denyhosts. Set these as appropriate for your system.
stopcommand = '/etc/init.d/denyhosts stop'
startcommand = '/etc/init.d/denyhosts start'
#Check which distribution we're on (first word of /etc/redhat-release).
distrocheckcommand = "awk '{print $1}' /etc/redhat-release"
d = os.popen(distrocheckcommand)
distro = d.read()
distro = distro.rstrip('\n')
#Check to see what user we're being run as.
usercheckcommand = "whoami"
u = os.popen(usercheckcommand)
user = u.read()
user = user.rstrip('\n')
if user != 'root':
 print "Sorry, this script requires root privileges."
 sys.exit(1)
#The files we should be purging faulty denials from.
if distro == 'Red':
 filestoclean = ['/etc/hosts.deny','/var/lib/denyhosts/hosts-restricted','/var/lib/denyhosts/sync-hosts','/var/lib/denyhosts/suspicious-logins']
elif distro == 'CentOS':
 filestoclean = ['/etc/hosts.deny','/usr/share/denyhosts/data/hosts-restricted','/usr/share/denyhosts/data/sync-hosts','/usr/share/denyhosts/data/suspicious-logins']
elif distro == 'Fedora':
 print "This script not yet supported on Fedora systems!"
 sys.exit(1)
else:
 print "This script is not yet supported on your distribution, or I can't properly detect it."
 sys.exit(1)
#Stop denyhosts so that we don't get any confusion.
os.system(stopcommand)
#Let's now remove the faulty denials.
for targetfile in filestoclean:
 purgecommand = "sed -i '/" + goodhost + "/ d' " + targetfile
 os.system(purgecommand)
#Now that the faulty denials have been removed, it's safe to restart denyhosts.
os.system(startcommand)
sys.exit(0)
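
Typical use, after a legitimate host has been locked out (address hypothetical):

./denyhosts-undeny.py 132.177.88.75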

Hosts

These are hosts that I have worked on. They may no longer carry the same services; this is a log, not a reflection of the current state of things.

Gourd

Taro

Lentil

Pumpkin

Endeavour

Einstein

Corn

Jalapeno

Roentgen