Tbow's Log

From Nuclear Physics Group Documentation Pages
 
There are a few problems I foresee: upgrading from CentOS 5 to 6; endeavour's yum and cluster software; making sure that the latest version of GCC (and any other software crucial to the physicists' projects) is backwards compatible with the older version (in other words, how many problems will they have); the video cards in pumpkin and taro; and finally einstein's mail and LDAP (will they be compatible with CentOS 6?).
===Startup Procedure 2012-11-01===

====How to start Gourd and the virtual machines====

#Start Gourd. NOTE: Make sure you boot gourd with the kernel that has the correct modules (such as kvm-intel). Use this command to check for the kvm module: modprobe kvm-intel
#Log in as root.
#To start a virtual machine use: virsh start <domain>. Example: virsh start einstein.unh.edu
#Once einstein (LDAP and mail) and jalapeno (DNS) have been started, start the netgroups2iptables script: service iptables-netgroups start (or /etc/init.d/iptables-netgroups start). NOTE: Gourd's netgroup iptables needs LDAP, so einstein must be running first. If you do not start iptables-netgroups, clients will not be able to properly automount their home folders.
#Once you have finished the above, you can proceed to start all the remaining servers (virtual and physical).
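The kernel-module check in step 1 can be scripted. This is a small sketch of mine (not part of the original procedure); the function reads lsmod-style output on stdin so it can be tried against canned text:

```shell
# has_module: check lsmod-style output (on stdin) for a loaded module.
has_module() {
    # lsmod lines begin with the module name followed by whitespace
    grep -q "^$1[[:space:]]"
}

# Try it against canned lsmod output:
sample='kvm_intel 55432 0
kvm 346644 1 kvm_intel'
printf '%s\n' "$sample" | has_module kvm_intel && echo "kvm_intel loaded"
```

On the real host you would pipe lsmod itself: lsmod | has_module kvm_intel || modprobe kvm-intel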
====General administration of virtual machines====

Once you've got your virtual machine installed, you'll need to know the various commands for everyday administration of KVM virtual machines. In these examples, change the name of the VM from 'vm' to whatever yours is called.

To show general info about virtual machines, including names and current state:

 virsh list --all

To see a top-style monitor window for all VMs:

 virt-top

To show info about a specific virtual machine:

 virsh dominfo vm

To start a virtual machine:

 virsh start vm

To pause a virtual machine:

 virsh suspend vm

To resume a virtual machine:

 virsh resume vm

To shut down a virtual machine (the 'acpid' service must be running on the guest for this to work):

 virsh shutdown vm

To force a hard shutdown of a virtual machine:

 virsh destroy vm

To remove a domain (don't do this unless you're sure you really don't want this virtual machine any more):

 virsh undefine vm
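When scripting around these commands, the state column of virsh list --all can be pulled out with awk. A sketch of mine (not from the log); the function reads the listing on stdin so it can be tried against canned output:

```shell
# vm_state: print the state of one domain from 'virsh list --all' output.
vm_state() {
    # $1 = domain name; table rows look like " Id  Name  State"
    awk -v vm="$1" '$2 == vm { $1=""; $2=""; sub(/^ +/, ""); print; exit }'
}

# Canned listing for illustration:
sample=' Id    Name                 State
----------------------------------
 1     einstein.unh.edu     running
 -     jalapeno.unh.edu     shut off'
printf '%s\n' "$sample" | vm_state jalapeno.unh.edu
```

On a host you would use it as: virsh list --all | vm_state vm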
  
====Initial host setup====

Firstly it's necessary to make sure you have all the necessary software installed:

 yum -y groupinstall Virtualization "Virtualization Client" "Virtualization Platform" "Virtualization Tools"
 yum -y install libguestfs-tools

Next check that libvirtd is running:

 service libvirtd status

If not, make sure that messagebus and avahi-daemon are running, then start libvirtd:

 service messagebus start
 service avahi-daemon start
 service libvirtd start

Use chkconfig to ensure that all three of these services start automatically on system boot.

Next it's necessary to set up the network bridge so that the virtual machines can function on the network in the same way as physical servers. To do this, copy /etc/sysconfig/network-scripts/ifcfg-eth0 (or whichever is the file for the active network interface) to /etc/sysconfig/network-scripts/ifcfg-br0.

In /etc/sysconfig/network-scripts/ifcfg-eth0, comment out all the lines for 'BOOTPROTO', 'DNS1', 'GATEWAY', 'IPADDR' and 'NETMASK', then add this line:

 BRIDGE="br0"

Then edit /etc/sysconfig/network-scripts/ifcfg-br0: comment out the 'HWADDR' line, change the 'DEVICE' to "br0", and change the 'TYPE' to "Bridge".

Then restart the network:

 service network restart

The bridge should now be up and running. You can check its status with:

 ifconfig
 brctl show
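The bridge check can also be scripted. A hedged sketch (mine, not from the page) that parses brctl show-style output to confirm the interface is enslaved to the bridge:

```shell
# bridge_has_if: check 'brctl show'-style output (on stdin) for a bridge
# with the given interface in its interfaces column.
bridge_has_if() {
    # $1 = bridge name, $2 = interface;
    # columns: bridge name, bridge id, STP enabled, interfaces
    awk -v br="$1" -v ifc="$2" '
        $1 == br { found = ($4 == ifc) }
        END { exit !found }'
}

# Canned output for illustration (bridge id is made up):
sample='bridge name	bridge id		STP enabled	interfaces
br0		8000.001e4f9b26d5	no		eth0'
printf '%s\n' "$sample" | bridge_has_if br0 eth0 && echo "br0 bridges eth0"
```

On the host itself: brctl show | bridge_has_if br0 eth0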
  
====Creating the disk volumes for a new virtual machine====

We need to create new LVM volumes for the root and swap partitions in the new virtual machine. I'm assuming LVM is already being used, that the volume group is called 'sysvg', and that there is sufficient free space available in the sysvg group for the new volumes. If your volume group has a different name then just modify the instructions below accordingly. Change the volume sizes to suit your requirements:

 lvcreate -L 20G -n vm-root sysvg
 lvcreate -L 4096M -n vm-swap sysvg

====Installing the operating system on the new virtual machine====

Here I'm installing CentOS 6 on the guest machine using Kickstart, although I will also explain how to perform a normal non-automated installation. You'll need to modify the instructions accordingly to install different operating systems.

To make CentOS easily available for the installation, firstly make sure you have Apache installed and running. If necessary, install it with:

 yum -y install httpd

Then start it with:

 service httpd start

Then create the directory /var/www/html/CentOS and copy the contents of the CentOS DVDs into it.

If you're using Kickstart then you'll need these lines in your Kickstart config file to make sure that it can find the files from the CentOS DVDs. The IP address of the host in this example is 192.168.1.1, so change that as needed:

 install
 url --url http://192.168.1.1/CentOS

These lines are also required to make sure that Kickstart can find and use the LVM volumes created earlier:

 zerombr
 clearpart --all --initlabel
 bootloader --location=mbr
 part / --fstype ext4 --size 1 --grow --ondrive=vda
 part swap --size 1 --grow --ondrive=vdb

Once the Kickstart file is ready, call it ks.cfg and copy it to /var/www/html.

This command installs CentOS on the guest using a Kickstart automated installation. The guest is called 'vm', and it has a dedicated physical CPU core (core number 2) and 1 GB of RAM allocated to it. Again, the IP address of the host is 192.168.1.1, so change that as needed:

 virt-install --name=vm --cpuset=2 --ram=1024 \
   --network bridge=br0 --disk=/dev/mapper/sysvg-vm--root \
   --disk=/dev/mapper/sysvg-vm--swap --vnc --vnclisten=0.0.0.0 \
   --noautoconsole --location /var/www/html/CentOS \
   --extra-args "ks=http://192.168.1.1/ks.cfg"

The installation screen can be seen by connecting to the host via VNC. This isn't necessary for a Kickstart installation (unless something goes wrong). If you want to do a normal install rather than a Kickstart install then you will need to use VNC to get to the installation screen, and in that case you'll want to use the virt-install command above but just leave off everything from '--extra-args' onwards.

Also, you may want to install directly from a CD-ROM image, in which case replace the '--location' option with '--cdrom=' and the path to the CD image, e.g. to install Ubuntu in your VM you might put '--cdrom=/tmp/ubuntu-12.04.1-server-i386.iso'.

(If virtual servers are already using VNC on the host then you will need to add the appropriate number to the VNC port number to connect to, e.g. the standard VNC port is 5900, and if there are already two virtual servers using VNC on the host then you will need to connect VNC to port 5902 for this install.)
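The VNC port arithmetic in that note can be captured in a one-line helper (a sketch of mine; it assumes displays are allocated sequentially from the standard port 5900):

```shell
# next_vnc_port: port for the next install, given how many guests
# already use VNC on the host (standard VNC port is 5900).
next_vnc_port() {
    echo $((5900 + $1))
}

next_vnc_port 2   # two guests already on VNC -> 5902
```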
 
  
====Cloning virtual machines====

To clone a guest VM, firstly it's necessary to create new disk volumes for the clone, then we use the virt-clone command to clone the existing VM:

 lvcreate -L 20G -n newvm-root sysvg
 lvcreate -L 4096M -n newvm-swap sysvg
 virt-clone -o vm -n newvm -f /dev/mapper/sysvg-newvm--root \
   -f /dev/mapper/sysvg-newvm--swap

Then dump the XML for the new VM:

 virsh dumpxml newvm > /tmp/newvm.xml

Edit /tmp/newvm.xml. Look for the 'vcpu' line and change the 'cpuset' number to the CPU core you want to dedicate to this VM. Then make this change effective:

 virsh define /tmp/newvm.xml

You'll also need to grab the MAC address from the XML. Keep this available as we'll need it in a minute:

 grep "mac address" /tmp/newvm.xml | awk -F"'" '{print $2}'

Start up the new VM and connect to it via VNC as per the instructions in the Installation section above. Edit /etc/sysconfig/network and change the hostname to whatever you want to use for this new machine. Then edit /etc/sysconfig/network-scripts/ifcfg-eth0 and change the 'HOSTNAME' and 'IPADDR' to the settings you want for this new machine. Change the 'HWADDR' to the MAC address you obtained a moment ago, making sure that the letters are capitalised.

Then reboot and the new VM should be ready.
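The two MAC-address steps (extract from the dumped XML, then capitalise the letters for ifcfg-eth0) can be done in one pipeline. A sketch against a sample XML line (the address value here is made up):

```shell
# Domain XML contains a line like this (sample value, not a real guest):
xml_line="<mac address='52:54:00:ab:cd:ef'/>"

# Split on single quotes to get the address, then upper-case the hex
# letters, since ifcfg-eth0 wants the HWADDR capitalised.
printf '%s\n' "$xml_line" | awk -F"'" '/mac address/ {print $2}' | tr 'a-f' 'A-F'
```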
  
====Backing up and migrating virtual machines====

In order to take backups and to be able to move disk volumes from virtual machines to other hosts, we basically need to create disk image files from the LVM volumes. We'll first snapshot the LVM volume and take the disk image from the snapshot, as this significantly reduces the amount of time that the VM needs to remain paused (i.e. effectively offline). We remove the snapshot at the end of the process so that the VM's IO is not negatively affected.

This disk image, once created, can then be stored in a separate location as a backup, and/or transferred to another host server in order to copy or move the VM there.

So, make sure that the VM is paused or shut down, then create an LVM snapshot, then resume the VM, then create the image from the snapshot, then remove the snapshot:

 virsh suspend vm
 lvcreate -L 100M -n vm-root-snapshot -s /dev/sysvg/vm-root
 virsh resume vm
 dd if=/dev/mapper/sysvg-vm--root--snapshot of=/tmp/vm-root.img bs=1M
 lvremove /dev/mapper/sysvg-vm--root--snapshot

You can then do what you like with /tmp/vm-root.img: store it as a backup, move it to another server, and so forth.

In order to restore from it or create a VM from it on a new server, firstly use 'lvcreate' to create the LVM volume for the restore if it isn't already there, then copy the disk image to the LVM volume:

 dd if=/tmp/vm-root.img of=/dev/mapper/sysvg-vm--root bs=1M

You may also need to perform this procedure for the swap partition depending on what you are trying to achieve.

You'll also want to back up the current domain configuration for the virtual machine:

 virsh dumpxml vm > /tmp/vm.xml

Then just store the XML file alongside the disk image(s) you've taken.

If you're moving the virtual machine to a new server then once you've got the root and swap LVM volumes in place, you'll need to create the domain for the virtual machine on the new server. Firstly edit the XML file and change the locations of the disk volumes to the layout on the new server if it's different to the old server, then define the new domain:

 virsh define /tmp/vm.xml

You should then be able to start up the 'vm' virtual machine on the new server.
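When keeping several of these images around it helps to date-stamp them. A trivial helper (the naming convention is my own, not something the log prescribes):

```shell
# backup_name: build a date-stamped image name for a VM's root volume.
backup_name() {
    # $1 = VM name, $2 = date stamp (YYYY-MM-DD)
    echo "${1}-root-${2}.img"
}

# e.g. dd if=/dev/mapper/sysvg-vm--root--snapshot \
#        of=/tmp/$(backup_name vm 2012-11-01) bs=1M
backup_name vm 2012-11-01
```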
====Resizing partitions on a guest====

Let's say we want to expand the root partition on our VM from 20G to 25G. Firstly make sure the VM is shut down, then use virt-filesystems to get the information we need for the resize procedure:

 virsh shutdown vm
 virt-filesystems -lh -a /dev/mapper/sysvg-vm--root

This will probably tell you that the available filesystem on that volume is /dev/sda1, which is how these tools see the virtual machine's /dev/vda1 partition. We'll proceed on the basis that this is the case, but if the filesystem device name is different then alter the commands below accordingly.

Next we create a new volume, then we perform the virt-resize command from the old volume to the new volume, then we set the new volume as the active root partition for our domain:

 lvcreate -L 25G -n vm-rootnew sysvg
 virt-resize --expand /dev/sda1 /dev/mapper/sysvg-vm--root \
   /dev/mapper/sysvg-vm--rootnew
 lvrename /dev/sysvg/vm-root /dev/sysvg/vm-rootold
 lvrename /dev/sysvg/vm-rootnew /dev/sysvg/vm-root
 virsh start vm

Then, when you're sure the guest is running OK with the new resized partition, remove the old root partition volume:

 lvremove /dev/mapper/sysvg-vm--rootold
==Personal bash Scripts==

This is a collection of bash scripts I have written over the years.
  
===bash_denyhost.sh===

 #!/bin/bash
 #For Gourd
 DIR_HOST=/usr/share/denyhosts/data
 #For Endeavour
 #DIR_HOST=/var/lib/denyhosts
 
 echo "Enter IP or HOST"
 read IP_HOST
 
 echo "/etc/hosts.deny"
 cat /etc/hosts.deny | grep $IP_HOST
 echo "hosts"
 cat $DIR_HOST/hosts | grep $IP_HOST
 echo "hosts-restricted"
 cat $DIR_HOST/hosts-restricted | grep $IP_HOST
 echo "hosts-root"
 cat $DIR_HOST/hosts-root | grep $IP_HOST
 echo "hosts-valid"
 cat $DIR_HOST/hosts-valid | grep $IP_HOST
 echo "user-hosts"
 cat $DIR_HOST/user-hosts | grep $IP_HOST
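The repeated per-file greps in this script could be collapsed into a loop. A sketch (the function name and structure are mine):

```shell
# check_host: grep one IP/hostname through a list of denyhosts data files,
# printing a header line before each file's matches.
check_host() {
    needle=$1; shift
    for f in "$@"; do
        echo "== $f =="
        grep "$needle" "$f" 2>/dev/null
    done
    return 0
}

# e.g. check_host 192.0.2.7 /etc/hosts.deny \
#        "$DIR_HOST/hosts" "$DIR_HOST/hosts-root" "$DIR_HOST/user-hosts"
```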
===bash_profile===

 # .bash_profile
 
 # Get the aliases and functions
 if [ -f ~/.bashrc ]; then
 	. ~/.bashrc
 fi
 
 # User specific environment and startup programs
 PATH=$PATH:$HOME/bin:/sbin
 export PATH
 unset USERNAME
===bashrc===
 +
## .bashrc
 
  #
 
  #
  # Copyright (C) 2011 Adam Duston
+
  #PATH=$PATH:$HOME/bin:/sbin
 +
## Source global definitions
 +
#if [ -f /etc/bashrc ]; then
 +
# . /etc/bashrc
 +
#fi
 +
#export PATH="/opt/mono-1.2.4/bin:$PATH"
 +
#export PKG_CONFIG_PATH="/opt/mono-1.2.4/lib/pkgconfig:$PKG_CONFIG_PATH"
 +
#export MANPATH="/opt/mono-1.2.4/share/man:$MANPATH"
 +
#export LD_LIBRARY_PATH="/opt/mono-1.2.4/lib:$LD_LIBRARY_PATH"
 +
#export CLASSPATH=~/Download/BarabasiAlbertGenerator/jung-1.7.6.jar
 
  #
 
  #
  # This program is free software: you can redistribute it and/or modify
+
  ## Keep 10000 lines in .bash_history (default is 500)
# it under the terms of the GNU General Public License as published by
+
  #export HISTSIZE=100000
  # the Free Software Foundation, either version 3 of the License, or
+
  #export HISTFILESIZE=100000
  # (at your option) any later version.
 
 
  #
 
  #
  # This program is distributed in the hope that it will be useful,
+
  ## User specific aliases and functions
  # but WITHOUT ANY WARRANTY; without even the implied warranty of
+
  #alias ll='ls -lh --color=no'
  # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+
  #alias ssh='ssh -X'
  # GNU General Public License for more details.
+
  #alias doc='cd /net/home/Tbow/Desktop/documentation-notes'
 
  #
 
  #
  # You should have received a copy of the GNU General Public License
+
  ##Work Servers and PC's
  # along with this programIf not, see <http://www.gnu.org/licenses/>.
+
  #alias sshpsplunk='ssh -l Tbow -L 8080:localhost:8000 pumpkin.unh.edu'
 +
  #alias sshptemp='ssh -l Tbow -L 8081:10.0.0.98:80 pumpkin.unh.edu'
 +
#alias sshpipmitomato='ssh -l Tbow -L 8082:10.0.0.148:80 pumpkin.unh.edu'
 +
#alias sshvncokra='ssh -L 5900:localhost:5900 admin@okra'
 +
#alias ssht='ssh -l Tbow taro.unh.edu'
 +
#alias sshg='ssh -l Tbow gourd.unh.edu'
 +
#alias sshe='ssh -l Tbow einstein.unh.edu'
 +
#alias sshl='ssh -l Tbow lentil.unh.edu'
 +
#alias sshp='ssh -l Tbow pumpkin.unh.edu'
 +
#alias sshj='ssh -l Tbow jalapeno.unh.edu'
 
  #
 
  #
  import os,sys,getopt,random,ldif,ldap,subprocess
+
  ##Reading notes script
  import ldap.modlist as modlist
+
#alias reading_notes='python ~/.bash_reading_notes.py'
  from string import letters,digits
+
#
  from getpass import getpass
+
##aliases that link to bash scripts
  from crypt import crypt
+
#alias denyhostscheck='sh ~/.bash_denyhosts'
  from grp import getgrnam
+
#
  from time import sleep
+
## Wake on LAN commands
  from shutil import copytree
+
#alias wollen='sudo ether-wake 00:23:54:BC:70:F1'
 +
#alias wollis='sudo ether-wake 00:1e:4f:9b:26:d5'
 +
#alias wolben='sudo ether-wake 00:1e:4f:9b:13:90'
 +
#alias wolnode2='sudo ether-wake 00:30:48:C6:F6:80'
 +
#
 +
##alias for grep command
 +
#alias grepconfig="grep '^[^ #]'"
 +
#
 +
##Command to search log files with date
 +
##cat /var/log/secure* | grep "`date --date="2013-05-14" +%b\ %e`">> temp_secure_log.txt
 +
 +
===bash_reading_notes.py===
 +
##!/usr/bin/python
 +
#
 +
#import sys
 +
  #import re
 +
#
 +
#def printusage():
 +
# print "Usage:"
 +
# print "  reading_notes input_file"
 +
# print "  reading_notes input_file btag etag"
 +
# print "  reading_notes input_file btag etag output_file"
 +
#
 +
#try:
 +
# sys.argv[1]
 +
  #except IndexError:
 +
# print "Need input_file"
 +
# printusage()
 +
# sys.exit()
 +
#
 +
#if sys.argv[1] == "--help":
 +
# printusage()
 +
# sys.exit()
 +
#elif len(sys.argv) == 4:
 +
# ifile = sys.argv[1]
 +
# btag = sys.argv[2]
 +
# etag = sys.argv[3]
 +
# ofile = "output_notes"
 +
#elif len(sys.argv) == 5:
 +
# ifile = sys.argv[1]
 +
# btag = sys.argv[2]
 +
# etag = sys.argv[3]
 +
# ofile = sys.argv[4]
 +
  #else:
 +
  # ifile = sys.argv[1]
 +
  # btag = 'Xx'
 +
  # etag = 'Yy'
 +
  # ofile = "output_notes"
 
  #
 
  #
  ldap_server = "ldaps://einstein.farm.physics.unh.edu:636"
+
  ##Open file for read only access
  basedn      = "dc=physics,dc=unh,dc=edu"
+
  #input = open(ifile,"r")
domain      = "physics.unh.edu"
+
  ##Open file with write access
  homedir    = "/home"
+
  #output = open(ofile,"w")
  maildir    = "/mail"
 
admin_dn    = "cn=root,dc=physics,dc=unh,dc=edu"
 
users_ou    = "ou=People"
 
skel_dir    = "/etc/skel/"
 
 
  #
 
  #
  def usage():
+
  ##Organize initial data into a raw array.
    """  
+
#def splitin(infile):
        Print usage information
+
# #Export data from the file into variable i
    """
+
# i = infile.read()
    print "Usage: usergen.py [options] USERNAME"
+
# #Split o into an array based on "--- Page "
    print "Creates a new NPG user account and adds to the LDAP database."
+
# i_split = i.split("--- Page ")
    print "Will prompt for necessary values if not provided."
+
# #Write data to output array
    print "The--ldif and --disable options effect existing accounts,"
+
# a = []
    print "and will not attempt to add new users to the LDAP database."
+
# for z in i_split:
    print " "
+
#  if z.find(btag) >= 0:
    print "Options:"
+
#  a.append(z)
    print "-d, --create-dirs"
+
# return a
    print "    Create home and mail directories for the new account. "
+
#
    print "-f, --firstname NAME"
+
##Sifts through data array and outputs to file.
    print "    The user's first name."
+
#def processdata(t, outfile):
    print "-l, --lastname NAME"
+
# for m in t:
    print "   The user's last name."
+
#  c = cleanup(m)
    print "-m, --mail ADDRESS"  
+
#  for u in c:
    print "    The user's e-mail address."  
+
#  #output is a global variable
    print "-u, --uid UID"
+
#  outfile.write(u)
    print "   The user's numerical UID value."
+
#
    print "-g, --gid GID"
+
##Process the array based on delimiters
    print "    The numerical value of the user's primary group."
+
#def cleanup(v):
    print "-s, --shell SHELL"
+
# q = []
    print "    The user's login shell."
+
# q.append('--- ' + v[0:v.find('\n')])
    print "-h, --help"
+
# s = v.split(btag)
    print "    Display this help message and exit."
+
# for p in s:
    print "--disable"
+
if p.find(etag) >= 0:
    print "    Disables logins by changing user's login shell to /bin/false."
+
#  q.append(p[0:p.find(etag)])
    print "--ldif"
+
#  #Detects end of array and doesn't append
    print "    Save user details to an LDIF file, but do not add the user to LDAP."
+
#  if p != s[-1]:
  #   
+
#    q.append('---')
  def makeuser( login, firstname, lastname, mail, \
+
# q.append('\n')
              uidnum, gidnum, shell, password ):
+
# return q
    """
+
#
        Returns a tuple containing full dn and a dictionary of
+
##This is the main function
        attributes for the user information given. Output intended
+
#indata = splitin(input)
        to be used for adding new user to LDAP database or generating
+
#processdata(indata, output)
        an LDIF file for that user.
+
    """
+
===bash_useful_commands===
#
+
##!/bin/bash
    dn = "uid=%s,%s,%s" % (login,users_ou,basedn)
+
##sites for grep:
    attrs = {}
+
##http://www.cyberciti.biz/faq/howto-use-grep-command-in-linux-unix/
    attrs['uid'] = [login]
+
##sites for find:
    attrs['objectClass'] = ['top', 'posixAccount', 'shadowAccount',
+
##http://www.codecoffee.com/tipsforlinux/articles/21.html
                            'inetOrgPerson', 'organizationalPerson',
+
##http://www.cyberciti.biz/faq/find-large-files-linux/
                            'person']
+
##Site for using find with dates
    attrs['loginShell'] = [shell]
+
##http://www.cyberciti.biz/faq/howto-finding-files-by-date/
    attrs['uidNumber'] = [uidnum]
+
##
    attrs['gidNumber'] = [gidnum]
+
##Command to search through log with specific date
    attrs['mail'] = [mail]
+
#cat /var/log/secure* | grep "`date --date="2013-05-14" +%b\ %e`">> temp_secure_log.txt
    attrs['homeDirectory'] = ['%s/%s' % (homedir, login)]
+
##will look through files for specific date
    attrs['cn'] = ['%s %s' % (firstname, lastname)]
+
#grep -HIRn "`date --date="2013-05-14" +%b\ %e`" /var/log/*
    attrs['sn'] = [lastname]
+
##Output all lines except commented ones
    attrs['gecos'] = ['%s %s' % (firstname, lastname)]
+
#grep -HIRn '^[^ #]' /etc/nsswitch.conf
    attrs['userPassword'] = [password]
+
##grep recursively with line number and filename
  #
+
#grep -HIRn "ldap" /var/log/*
    return (dn, attrs)
+
##find a file in a specific directory with particular string
  #
+
#find /etc/ -name 'string'
  def getsalt():
+
##grep files that have been output by find
    """
+
#grep -HIRn 'einstein' `find /etc/ -name 'mailman'`
        Return a two-character salt to use for hashing passwords.
+
##find a files above a certain size
    """
+
#find . -type f -size +10000k -exec ls -lh {} \; | awk '{ print $9 ": " $5 }'
    chars = letters + digits
+
##Insert file1 into file at line 3
    return random.choice(chars) + random.choice(chars)
+
#sed -i '3r file1' file
  #
+
##Insert text at line 3
  def user_exists(username):
+
#sed -i "3i this tesxt" file
    """
+
##grep for lines with no comments and contains /www or /log
        Search LDAP database to verify whether username already exists.
+
#grep -EHIRn '^[^#].*(\/www|\/log).*' file
        Return a boolean value.
+
##grep for email addresses
    """
+
#grep -EHIRni '^[^#].*[a-z0-9]{1,}@.*(com|net|org|uk|mil|gov|edu).*' file
  #
+
##Commenting multiple lines in vim
    search_base = "%s,%s" % (users_ou,basedn)
+
## go to first char of a line and use blockwise visual mode (CTRL-V)
    search_string = "(&(uid=%s)(objectClass=posixAccount))" % username
+
## go down/up until first char of all lines I want to comment out are selected
  #  
+
## use SHIFT-I and then type my comment character (# for ruby)
    try:
+
## use ESC to insert the comment character for all lines
        # Open LDAP Connection
+
#
        ld = ldap.initialize(ldap_server)
+
##In Vim command mode use this to add a comment to the beginning of a range (N-M) of lines.
  #       
+
#:N,Ms/^/#/
        # Bind anonymously to the server
+
#To take that comment away use:
        ld.simple_bind_s("","")  
+
#:N,Ms/^//
  #
+
 
        # Search for username
+
==RAID and Areca==
        result = ld.search_s(search_base, ldap.SCOPE_SUBTREE, search_string, \
+
 
                            ['distinguisedName'])
+
===Drive Life 2012-06-24===
  #
+
This is a list of expected drive life from manufacturer.  All of these drives are in are RAIDs.
        # Close connection
+
 
        ld.unbind_s()                    
+
Pumpkin
  #   
+
ST3750640NS (p.23)
    except ldap.LDAPError, err:
+
  8,760 power-on-hours per year.
        print "Error searching LDAP database: %s" % err
+
  250 average motor start/stop cycles per year.
        sys.exit(1)
+
ST3750640AS (p.37)
 +
  2400 power-on-hours per year.
 +
  10,000 average motor start/stop cycles per year.
 +
  WDC WD7500AAKS-00RBA0
 +
  Start/stop cycles 50,000
 +
 
 +
Endeavour
 +
ST31000340NS
 +
ST31000524AS
 +
  ST31000526SV
 +
  MTBF 1,000,000 hours
 +
  Start / Stop Cycles 50,000
 +
  Non-Recoverable Errors 1 per 10^14
 +
 
 +
===Areca 1680 2010-01-10===
 +
4.3 Driver Installation for Linux
 +
 
 +
This chapter describes how to install the SAS RAID controller driver to Red Hat Linux, SuSE and other versions of Linux. Before installing the SAS RAID driver to the Linux, complete the following actions:
 +
 
 +
# Install and configure the controller and hard disk drives according to the instructions in Chapter 2 Hardware Installation.
 +
# Start the system and then press Tab+F6 to enter the McBIOS RAID manager configuration utility. Using the McBIOS RAID manager to create the RAID set and volume set. For details, see Chapter 3, McBIOS RAID Manager.
 +
 
 +
If you are using a Linux distribution for which there is not a compiled driver available from Areca, you can copy the source from the SAS software CD or download the source from the Areca website and compile a new driver.
 +
 
 +
Compiled and tested drivers for Red Hat and SuSE Linux are included on the shipped CD. You can download updated versions of com- piled and tested drivers for RedHat or SuSE Linux from the Areca web site at http://www.areca.com.tw. Included in these downloads is the Linux driver source, which can be used to compile the updat- ed version driver for RedHat, SuSE and other versions of Linux. Please refer to the “readme.txt” file on the included Areca CD or website to make driver diskette and to install driver to the system.
 +
 
 +
===Areca Scripts===
 +
This is a collection of the Areca Scripts I have attempted to build.
 +
 
 +
====grep_areca_info.sh 2012-10-09====
 +
#!/bin/bash
 +
cat /net/data/taro/areca/areca_info | grep -A 52 $1 | grep \#$3 | grep $2
 +
 
====areca_info.sh 2014-01-14====
#!/bin/bash
# Usage: areca_info.sh <number-of-drives>
# Appends a dated SMART/temperature summary for each drive to $info.
info=areca_info
echo "++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++" >> $info
echo "`date +%Y-%m-%d_%T`_`echo $HOSTNAME`" >> $info
echo "------------------------------------------------------------------" >> $info
echo -e "Drv#\t`areca_cli64 disk smart info drv=1 | grep Attribute`" >> $info
echo "======================================================================================" >> $info
for i in `seq 1 $1`
do
    areca_cli64 disk smart info drv=$i > .areca_temp
    echo -e "`echo \#$i`\t`cat .areca_temp | grep Start`" >> $info
done
for i in `seq 1 $1`
do
    areca_cli64 disk smart info drv=$i > .areca_temp
    echo -e "`echo \#$i`\t`cat .areca_temp | grep Power-on`" >> $info
done
for i in `seq 1 $1`
do
    areca_cli64 disk info drv=$i > .areca_temp
    echo -e "`echo \#$i`\t`cat .areca_temp | grep Temperature`" >> $info
done
rm .areca_temp
echo "------------------------------------------------------------------" >> $info
areca_cli64 hw info | grep Temp >> $info
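As a rough illustration of the per-drive table the loops assemble, here is a Python 3 sketch over invented SMART values (drive numbers and fields are placeholders; the real data comes from `areca_cli64`):

```python
# Sketch of the per-drive table areca_info.sh builds, using invented
# SMART values in place of `areca_cli64 disk smart info drv=N` output.
smart = {
    1: {"Start/Stop Count": 42, "Power-on Hours": 1234, "Temperature": 31},
    2: {"Start/Stop Count": 17, "Power-on Hours": 2345, "Temperature": 33},
}

def table(field):
    # One "#<drv>\t<value>" row per drive, like the script's echo -e lines.
    return ["#%d\t%s" % (drv, smart[drv][field]) for drv in sorted(smart)]

for row in table("Power-on Hours"):
    print(row)
```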
==== mydata.py 2012-06-19 ====
#!/usr/bin/python
import sqlite3
import re

data = open("mydata","r")
all_data = data.read()
all_data_split = all_data.split("+++")
for i in all_data_split:
    print i

# Make connection to database mydata.db,
# which is in the current directory.
conn = sqlite3.connect('mydata.db')
c = conn.cursor()

# Create table (only needed on the first run)
#c.execute('''CREATE TABLE stocks
#             (date text, trans text, symbol text, qty real, price real)''')

# Insert a row of data
c.execute("INSERT INTO stocks VALUES ('2006-01-05','BUY','RHAT',100,35.14)")
# Save (commit) the changes
conn.commit()
# We can also close the cursor if we are done with it
c.close()
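The fragment above inserts with a literal SQL string; a minimal Python 3 sqlite3 sketch of the same create/insert/commit cycle, using an in-memory database and ? placeholders (both assumptions beyond the original script):

```python
# Minimal sqlite3 sketch (Python 3): create, insert, commit, query.
# Uses an in-memory database so it is self-contained.
import sqlite3

conn = sqlite3.connect(":memory:")
c = conn.cursor()
c.execute("CREATE TABLE stocks "
          "(date text, trans text, symbol text, qty real, price real)")
# Placeholders keep values out of the SQL string itself.
c.execute("INSERT INTO stocks VALUES (?,?,?,?,?)",
          ('2006-01-05', 'BUY', 'RHAT', 100, 35.14))
conn.commit()
row = c.execute("SELECT symbol, qty FROM stocks").fetchone()
print(row)
conn.close()
```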
==LDAP and Email==

===LDAP setup 2009-05-20===
Setting up a new user through the command line:
 sudo -s (to become root)
 env HOME=/root /usr/local/bin/adduser-npg
Make sure that the location for luseradd in the adduser-npg script is set to /usr/sbin/.
Add the user to the farm, npg, and domain-admins groups.
Something is still wrong with lgroupmod.
===LDAP_output.py===
#!/usr/bin/env python
#
# Copyright (C) 2011 Adam Duston
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
import os,sys,getopt,random,ldif,ldap,subprocess
import ldap.modlist as modlist
from string import letters,digits
from getpass import getpass
from crypt import crypt
from grp import getgrnam
from time import sleep
from shutil import copytree

ldap_server = "ldaps://einstein.farm.physics.unh.edu:636"
basedn      = "dc=physics,dc=unh,dc=edu"
domain      = "physics.unh.edu"
homedir     = "/home"
maildir     = "/mail"
admin_dn    = "cn=root,dc=physics,dc=unh,dc=edu"
users_ou    = "ou=People"
skel_dir    = "/etc/skel/"

def usage():
    """
        Print usage information
    """
    print "Usage: usergen.py [options] USERNAME"
    print "Creates a new NPG user account and adds to the LDAP database."
    print "Will prompt for necessary values if not provided."
    print "The --ldif and --disable options affect existing accounts,"
    print "and will not attempt to add new users to the LDAP database."
    print " "
    print "Options:"
    print "-d, --create-dirs"
    print "    Create home and mail directories for the new account."
    print "-f, --firstname NAME"
    print "    The user's first name."
    print "-l, --lastname NAME"
    print "    The user's last name."
    print "-m, --mail ADDRESS"
    print "    The user's e-mail address."
    print "-u, --uid UID"
    print "    The user's numerical UID value."
    print "-g, --gid GID"
    print "    The numerical value of the user's primary group."
    print "-s, --shell SHELL"
    print "    The user's login shell."
    print "-h, --help"
    print "    Display this help message and exit."
    print "--disable"
    print "    Disables logins by changing user's login shell to /bin/false."
    print "--ldif"
    print "    Save user details to an LDIF file, but do not add the user to LDAP."

def makeuser( login, firstname, lastname, mail, \
              uidnum, gidnum, shell, password ):
    """
        Returns a tuple containing full dn and a dictionary of
        attributes for the user information given. Output intended
        to be used for adding new user to LDAP database or generating
        an LDIF file for that user.
    """
    dn = "uid=%s,%s,%s" % (login,users_ou,basedn)
    attrs = {}
    attrs['uid'] = [login]
    attrs['objectClass'] = ['top', 'posixAccount', 'shadowAccount',
                            'inetOrgPerson', 'organizationalPerson',
                            'person']
    attrs['loginShell'] = [shell]
    attrs['uidNumber'] = [uidnum]
    attrs['gidNumber'] = [gidnum]
    attrs['mail'] = [mail]
    attrs['homeDirectory'] = ['%s/%s' % (homedir, login)]
    attrs['cn'] = ['%s %s' % (firstname, lastname)]
    attrs['sn'] = [lastname]
    attrs['gecos'] = ['%s %s' % (firstname, lastname)]
    attrs['userPassword'] = [password]

    return (dn, attrs)

def getsalt():
    """
        Return a two-character salt to use for hashing passwords.
    """
    chars = letters + digits
    return random.choice(chars) + random.choice(chars)

def user_exists(username):
    """
        Search LDAP database to verify whether username already exists.
        Return a boolean value.
    """
    search_base = "%s,%s" % (users_ou,basedn)
    search_string = "(&(uid=%s)(objectClass=posixAccount))" % username

    try:
        # Open LDAP Connection
        ld = ldap.initialize(ldap_server)

        # Bind anonymously to the server
        ld.simple_bind_s("","")

        # Search for username
        result = ld.search_s(search_base, ldap.SCOPE_SUBTREE, search_string, \
                             ['distinguishedName'])

        # Close connection
        ld.unbind_s()
    except ldap.LDAPError, err:
        print "Error searching LDAP database: %s" % err
        sys.exit(1)

    # If user is not found, result should be an empty list.
    if len(result) != 0:
        return True
    else:
        return False

def get_uids():
    """
        Return a list of UID numbers currently in use in the LDAP database.
    """
    search_base = "%s,%s" % (users_ou, basedn)
    search_string = "(objectClass=posixAccount)"

    try:
        # Bind anonymously
        ld = ldap.initialize(ldap_server)

        ld.simple_bind_s("","")

        # Get UIDS from all posixAccount objects.
        result = ld.search_s(search_base, ldap.SCOPE_SUBTREE, search_string, \
                             ['uidNumber'])

        ld.unbind_s()
    except ldap.LDAPError, err:
        print "Error connecting to LDAP server: %s" % err
        sys.exit(1)

    # Pull the list of UIDs out of the results.
    uids = [result[i][1]['uidNumber'][0] for i in range(len(result))]

    # Sort UIDS and return
    return sorted(uids)

def create_ldif(dn, attrs):
    """
        Output an LDIF file to the current directory.
    """
    try:
        file = open(str(attrs['uid'][0]) + ".ldif", "w")

        writer = ldif.LDIFWriter(file)
        writer.unparse(dn, attrs)

        file.close()
    except EnvironmentError, err:
        print "Unable to open file: %s" % err
        sys.exit(1)

def ldap_add(dn, attrs):
    """
        Add a user account with the given dn and attributes to the LDAP
        database. Requires authentication as LDAP admin. If user added
        successfully return True, else return False.
    """
    try:
        # Open a connection to the ldap server
        ld = ldap.initialize(ldap_server)

        print "\nAdding new user record. Authentication required."

        # Bind to the server as administrator
        ld.simple_bind_s(admin_dn,getpass("LDAP Admin Password: "))

        # Convert attrs to correct syntax for ldap add_s function
        ldif = modlist.addModlist(attrs)

        # Add the entry to the LDAP server
        ld.add_s(dn, ldif)

        # Close connection to the server
        ld.unbind_s()

        print "User account added successfully."
        return True
    except ldap.LDAPError, err:
        print "Error adding new user: %s" % err
        return False

def ldap_disable(username):
    """
        Disable logins on a user account by setting the login shell to
        /bin/false.
    """
    try:
        # Open a connection to the ldap server
        ld = ldap.initialize(ldap_server)

        print "\nModifying user record. Authentication required."

        ld.simple_bind_s(admin_dn,getpass("LDAP Admin Password: "))

        # Set the dn to modify and the search parameters
        mod_dn = "uid=%s,%s,%s" % (username,users_ou,basedn)
        search_base = "%s,%s" % (users_ou,basedn)
        search_string = "(&(uid=%s)(objectClass=posixAccount))" % username

        # Get the current value of loginShell from the user LDAP entry.
        result = ld.search_s(search_base, ldap.SCOPE_SUBTREE, search_string, \
                             ['loginShell'])

        oldshell = result[0][1]
        newshell = {'loginShell':['/bin/false']}

        # Use modlist to configure changes
        diff = modlist.modifyModlist(oldshell,newshell)

        # Modify the LDAP entry.
        ld.modify_s(mod_dn,diff)

        # Unbind from the LDAP server
        ld.unbind_s()

        # Return True if successful
        return True
    except ldap.LDAPError, err:
        print "Error connecting to LDAP server: %s" % err
        return False

def chown_recursive(path, uid, gid):
    """
        Recursively set ownership for the files in the given
        directory to the given uid and gid.
    """
    command = "chown -R %i:%i %s" % (uid,gid,path)

    subprocess.Popen(command, shell=True)

def create_directories(username, uid, gid):
    """
        Create user home and mail directories.
    """
    # Create home directory
    try:
        user_homedir = "%s/%s" % (homedir,username)

        # Copying skel dir to user's home dir makes the directory and
        # adds the skeleton files.
        copytree(skel_dir,user_homedir)

        chown_recursive(user_homedir,uid,gid)
    except OSError, err:
        print "Unable to create home directory: %s" % err
        sys.exit(1)

    # Create mail directory
    try:
        # Get GID for the mail group
        mailgid = getgrnam('mail')[2]

        user_maildir = "%s/%s" % (maildir,username)

        os.mkdir(user_maildir)
        # There also needs to be a "cur" subdirectory or IMAP will cry.
        os.mkdir(user_maildir + "/cur")

        chown_recursive(user_maildir, uid, mailgid)
    except OSError, err:
        print "Unable to create mail directory: %s" % err
        sys.exit(1)

def main(argv):
    """
        Parse command line arguments, prompt the user for any missing
        values that might be needed to create a new user.
    """
    # Parse command line args using getopt
    try:
        opts, args = getopt.getopt(argv, "hf:l:m:u:g:s:d", \
                                   ["help", "ldif", "create-dirs", "disable", \
                                    "firstname=", "lastname=", "mail=", \
                                    "uid=", "gid=", "shell="])
    except getopt.GetoptError:
        # An exception should mean misuse of command line options, so print
        # help and quit.
        usage()
        sys.exit(2)

    # Defining variables ahead of time should help later on when I want to
    # check whether they were set by command line arguments or not.
    firstname = ""
    lastname  = ""
    mail      = ""
    uid       = ""
    gid       = ""
    shell     = ""

    # Booleans for run options
    run_add     = True
    run_ldif    = False
    run_disable = False
    create_dirs = False

    # Parse command line options
    for opt, arg in opts:
        if opt in ("-h", "--help"):
            usage()
            sys.exit()
        elif opt in "--ldif":
            # If creating LDIF don't add a new user.
            run_ldif = True
            run_add = False
        elif opt in "--disable":
            # If disabling a user, turn off adding new user
            run_disable = True
            run_add = False
        elif opt in ("-d","--create-dirs"):
            create_dirs = True
        elif opt in ("-f", "--firstname"):
            firstname = arg
        elif opt in ("-l", "--lastname"):
            lastname = arg
        elif opt in ("-m", "--mail"):
            mail = arg
        elif opt in ("-u", "--uid"):
            uid = arg
        elif opt in ("-g", "--gid"):
            gid = arg
        elif opt in ("-s", "--shell"):
            shell = arg

    # Whatever was left over after parsing arguments should be the login name
    username = "".join(args)

    # Make sure the user entered a username.
    while not username:
        username = raw_input("Enter a username: ")

    if run_disable:
        # Make sure the user exists before trying to disable it.
        if user_exists(username):
            print "Warning: This will disable logins for user %s. Proceed?" \
                  % username
            answer = raw_input("y/N: ")

            if answer in ("y","yes","Y"):
                # If user is disabled print success message and quit.
                # If an error occurs here quit anyway.
                if ldap_disable(username):
                    print "Logins for user %s disabled." % username
                    sys.exit(1)
                else:
                    print "An error occurred. Exiting."
                    sys.exit(1)
            else:
                print "User account not modified."
                sys.exit(1)
        else:
            print "User %s does not exist in LDAP database. Exiting." % username
            sys.exit(1)

    # Don't continue if this account already exists.
    if run_add and user_exists(username):
        print "Error: account with username %s already exists." % username
        sys.exit(1)

    # Prompt user for any values that were not defined as a command line option
    while not firstname:
        firstname = raw_input("First Name: ")
    while not lastname:
        lastname = raw_input("Last Name: ")
    while not mail:
        addr_default = "%s@%s" % (username,domain)
        mail = raw_input("E-mail address [%s]: " % addr_default)
        if not mail:
            mail = addr_default

    # Get the uid. Make sure it's not already in use.
    while not uid:
        # Get a list of in-use UID numbers
        existing_uids = get_uids()

        # Get one plus the highest used uid
        next_uid = int(existing_uids[-1]) + 1

        uid = raw_input("UID [%i]: " % next_uid)

        if not uid:
            uid = str(next_uid)
        elif uid in existing_uids:
            print "UID " + uid + " is already in use."
            uid = ""

    # Get the user's default group. Use 5012 (npg) if none other specified.
    while not gid:
        gid = raw_input("GID [5012]: ")

        if not gid:
            gid = "5012"

    # Prompt for a shell, if user doesn't enter anything just use the default.
    # Make sure the shell exists before accepting it.
    while not shell:
        shell = raw_input("Shell [/bin/bash]: ")
        if not shell:
            shell = "/bin/bash"
        elif not os.path.exists(shell):
            print shell + " is not a valid shell."
            shell = ""

    # Get the password from the user. Make sure it's correct.
    pwCorrect = False
    while not pwCorrect:
        salt = getsalt()
        password1 = crypt(getpass(),salt)
        password2 = crypt(getpass('Retype password: '),salt)
        if password1 == password2:
            ldap_password = "{CRYPT}" + password1
            pwCorrect = True
        else:
            print "Passwords do not match. Try again."

    # Build the account info
    account = makeuser(username, firstname, lastname, mail, \
                       uid, gid, shell, ldap_password)

    # Decide what to do with it. Only one of these should run at a time.
    if run_add:
        if ldap_add(account[0],account[1]):
            if create_dirs:
                create_directories(username, int(uid), int(gid))
                print "User directories created successfully."
            else:
                print "Create home and mail directories for %s?" % username
                answer = raw_input("y/N")

                if answer in ("y","Y","yes"):
                    create_directories(username, int(uid), int(gid))
        else:
            print "Create user failed."
            sys.exit(1)

    if run_ldif:
        create_ldif(account[0],account[1])

if __name__ == "__main__":
    if os.geteuid() != 0:
        print "This program must be run as an administrator."
    else:
        main(sys.argv[1:])
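One thing to watch in get_uids(): the uidNumber values come back from LDAP as strings, and sorted() on strings is lexicographic. A small Python 3 sketch with made-up UIDs showing why the next-UID computation wants a numeric sort:

```python
# Made-up uidNumber strings, as LDAP would return them.
uids = ['5012', '10001', '999']

lexical = sorted(uids)            # string sort: '999' sorts after '10001'
numeric = sorted(uids, key=int)   # numeric sort finds the true maximum

print(lexical[-1])                # '999' -- not the highest UID
print(int(numeric[-1]) + 1)       # 10002 -- the next free UID
```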
===Mailman Notes 2009-05-20===
In /etc/mailman/ there is a python script pointing to /usr/lib/mailman/ with a sym link.

==Elog==
===Elog notes 2009-05-20===
Info from the site https://midas.psi.ch/elog/adminguide.html

Download: http://midas.psi.ch/elog/download/

RPM Install Notes

Since version 2.0, ELOG contains an RPM file which eases the installation. Get the file elog-x.x.x-x.i386.rpm from the download section and execute as root "rpm -i elog-x.x.x-x.i386.rpm". This will install the elogd daemon in /usr/local/sbin and the elog and elconv programs in /usr/local/bin. The sample configuration file elogd.cfg together with the sample logbook will be installed under /usr/local/elog and the documentation goes to /usr/share/doc. The elogd startup script will be installed at /etc/rc.d/init.d/elogd. To start the daemon, enter:
 /etc/rc.d/init.d/elogd start
It will listen on the port specified in /usr/local/elog/elogd.cfg, which is 8080 by default, so one can connect using any browser with the URL:
 http://localhost:8080
To start the daemon automatically, enter:
 chkconfig --add elogd
 chkconfig --level 345 elogd on
which will start the daemon on run levels 3, 4 and 5 after the next reboot.

Note that the RPM installation creates a user and group elog, under which the daemon runs.

Notes on running elog under apache

For cases where elogd should run under port 80 in parallel to an Apache server, Apache can be configured to run Elog in a subdirectory of Apache. Start elogd normally under port 8080 (or similar) as noted above and make sure it's working there. Then put the following redirection into the Apache configuration file:
 Redirect permanent /elog http://your.host.domain/elog/
 ProxyPass /elog/ http://your.host.domain:8080/
Make sure that the Apache modules mod_proxy.c and mod_alias.c are activated. Justin Dieters <enderak@yahoo.com> reports that mod_proxy_http.c is also required. The Redirect statement is necessary to automatically append a "/" to a request like http://your.host.domain/elog. Apache then works as a proxy and forwards all requests starting with /elog to the elogd daemon.

Note: Do not put "ProxyRequests On" into your configuration file. This option is not necessary and can be misused for spamming and proxy forwarding of otherwise blocked sites.

Because elogd uses links to itself (for example in the email notification and the redirection after a submit), it has to know under which URL it is running. If you run it under a proxy, you have to add the line:
 URL = http://your.proxy.host/subdir/
into elogd.cfg.

Notes on Apache:

Another possibility is to use the Apache web server as a proxy server allowing secure connections. To do so, Apache has to be configured accordingly and a certificate has to be generated. See the instructions on how to create a certificate, and the notes above on running elogd under Apache. Once configured correctly, elogd can be accessed via http://your.host and via https://your.host simultaneously.

The redirection statement has to be changed to:
 Redirect permanent /elog https://your.host.domain/elog/
 ProxyPass /elog/ http://your.host.domain:8080/
and the following has to be added to the "VirtualHost ...:443" section in /etc/httpd/conf.d/ssl.conf:
 # Proxy setup for Elog
 <Proxy *>
 Order deny,allow
 Allow from all
 </Proxy>
 ProxyPass /elog/ http://host.where.elogd.is.running:8080/
 ProxyPassReverse /elog/ http://host.where.elogd.is.running:8080/
Then the following URL statement has to be written to elogd.cfg:
 URL = https://your.host.domain/elog
There are more detailed step-by-step instructions in the contributions section.

Using ssh:

elogd can be accessed through an SSH tunnel. To do so, open an SSH tunnel like:
 ssh -L 1234:your.server.name:8080 your.server.name
This opens a secure tunnel from your local host, port 1234, to the server host where the elogd daemon is running on port 8080. Now you can access http://localhost:1234 from your browser and reach elogd in a secure way.

Notes on Server Configuration

The ELOG daemon elogd can be executed with the following options:
 elogd [-p port] [-h hostname/IP] [-C] [-m] [-M] [-D] [-c file] [-s dir] [-d dir] [-v] [-k] [-f file] [-x]
with:
* -p <port>  TCP port number to use for the http server (if other than 80)
* -h <hostname or IP address>  in the case of a "multihomed" server, host name or IP address of the interface ELOG should run on
* -C <url>  clone remote elogd configuration
* -m  synchronize logbook(s) with remote server
* -M  synchronize with removing deleted entries
* -l <logbook>  optionally specify logbook for -m and -M commands
* -D  become a daemon (Unix only)
* -c <file>  specify the configuration file (full path mandatory if -D is used)
* -s <dir>  specify resource directory (themes, icons, ...)
* -d <dir>  specify logbook root directory
* -v  verbose output for debugging
* -k  do not use TCP keep-alive
* -f <file>  specify PID file where elogd process ID is written when server is started
* -x  enable execution of shell commands
It may also be used to generate passwords:
 elogd [-r pwd] [-w pwd] [-a pwd] [-l logbook]
with:
* -r <pwd>  create/overwrite read password in config file
* -w <pwd>  create/overwrite write password in config file
* -a <pwd>  create/overwrite administrative password in config file
* -l <logbook>  specify logbook for -r and -w commands

The appearance, functionality and behaviour of the various logbooks on an ELOG server are determined by the single elogd.cfg file in the ELOG installation directory.

This file may be edited directly from the file system, or from a form in the ELOG Web interface (when the Config menu item is available). In this case, changes are applied dynamically without having to restart the server. Instead of restarting the server, under Unix one can send a HUP signal like "killall -HUP elogd" to tell the server to re-read its configuration.

The many options of this unique but very important file are documented on the separate elogd.cfg syntax page.

To better control appearance and layout of the logbooks, elogd.cfg may optionally specify the use of additional files containing HTML code, and/or custom "themes" configurations. These need to be edited directly from the file system right now.

The meaning of the directory flags -s and -d is explained in the section covering the configuration options Resource dir and Logbook dir in the elogd.cfg description.

Notes on tarball install

Make sure you have the libssl-dev package installed. Consult your distribution for details.

Expand the compressed TAR file with tar -xzvf elog-x.x.x.tar.gz. This creates a subdirectory elog-x.x.x where x.x.x is the version number. In that directory execute make, which creates the executables elogd, elog and elconv. These executables can then be copied to a convenient place like /usr/local/bin or ~/bin. Alternatively, a "make install" will copy the daemon elogd to SDESTDIR (by default /usr/local/sbin) and the other files to DESTDIR (by default /usr/local/bin). These directories can be changed in the Makefile. The elogd executable can be started manually for testing with:
+
            print "User %s does not exist in LDAP database. Exiting." % username
  elogd -p 8080
+
            sys.exit(1)
where the -p flag specifies the port. Without the -p flag, the server uses the standard WWW port 80. Note that ports below 1024 can only be used if elogd is started under root, or the "sticky bit" is set on the executable.
+
#   
 
+
     # Don't continue if this account already exists.
When elogd is started under root, it attaches to the specified port and tries to fall-back to a non-root account. This is necessary to avoid security problems. It looks in the configuration file for the statements Usr and Grp.. If found, elogd uses that user and goupe name to run under. The names must of course be present on the system (usually /etc/passwd and /etc/group). If the statements Usr and Grp. are not present, elogd tries user and group elog, then the default user and group (normally nogroup and nobody). Care has to be taken that elogd, when running under the specific user and group account, has read and write access to the configuration file and logbook directories. Note that the RPM installation automatically creates a user and group elog.
+
    if run_add and user_exists(username):
 
+
        print "Error: account with username %s already exists." % username
If the program complains with something like "cannot bind to port...", it could be that the network is not started on the Linux box. This can be checked with the /sbin/ifconfig program, which must show that eth0 is up and running.
+
        sys.exit(1)
 
+
  #
The distribution contains a sample configuration file elogd.cfg and a demo logbook in the demo subdirectory. If the elogd server is started in the elogd-x.x.x directory, the demo logbook can be directly accessed with a browser by specifying the URL http://localhost:8080 (or whatever port you started the elog daemon on). If the elogd server is started in some other directory, you must specify the full path of the elogd file with the "-c" flag and change the Data dir = option in the configuration file to a full path like /usr/local/elog.
+
  #   
 
+
     # Prompt user for any values that were not defined as a command line option
Once testing is complete, elogd will typically be started with the -D flag to run as a daemon in the background, like this :
+
     while not firstname:
elogd -p 8080 -c /usr/local/elog/elogd.cfg -D
+
        firstname = raw_input("First Name: ")
Note that it is mandatory to specify the full path for the elogd file when started as a daemon.
+
     while not lastname:
To test the daemon, connect to your host via :
+
        lastname = raw_input("Last Name: ")
http://your.host:8080/
+
     while not mail:
If port 80 is used, the port can be omitted in the URL. If several logbooks are defined on a host, they can be specified in the URL :
+
        addr_default = "%s@%s" % (username,domain)
  http://your.host/<logbook>
+
        mail = raw_input("E-mail address [%s]: " % addr_default)
where <logbook> is the name of the logbook.
+
        if not mail:
 
+
            mail = addr_default
The contents of the all-important configuration file elogd.cfg are described below:
+
#   
[Tbow@gluon documentation-notes]$ ll elog*
+
     # Get the uid. Make sure it's not already in use.
-rw-r--r-- 1 Tbow npg 9.4K May 20  2009 elog
+
     while not uid:
-rw-r--r-- 1 Tbow npg  623 Jan 26  2010 elog.roentgen.messages.problem
+
        # Get a list of in-use UID numbers
-rw-r--r-- 1 Tbow npg 1.2K Feb 11 19:12 elog_users_setup
+
        existing_uids = get_uids()
[Tbow@gluon documentation-notes]$ text
+
#     
text2pcap  text2wave  textools
+
        # Get one plus the highest used uid       
 
+
        next_uid = int(existing_uids[-1]) + 1
===elog_users_setup 2010-02-11===
+
#
You can find some instructions/information here:
+
        uid = raw_input("UID [%i]: " % next_uid)
http://pbpl.physics.ucla.edu/old_stuff/elogold/current/doc/config.html#access
+
#
The thing you have to remember is that you want the new users to end up being users of just the logbook they will be using, not a global user. So, if you look at where my name is in the elogd.cfg file, I am designated as an admin user, and am a global user that can log into any logbook to fix things.  If you look through the file for a user like Daniel, he can only log into the nuclear group logbooks,  not my private one, or Karl's, or Maurik's.  So, if you want to add someone to the nuclear group's logbooks, for example, add that new person's user name to where you find people like Daniel and Ethan, and set the thing to allow self-registering at the top.  Restart, and then go ahead and use the self-register to register the new person's password and account.  Then go back into the elogd.cfg file and comment out the self register, so other people cannot do that, and restart.  That should be the easiest way to do it, but you can read the info and decide about that.  How does that sound?  Does this make sense?
+
        if not uid:
 
+
            uid = str(next_uid)
===elog_roentgen_messages_problems 2010-01-26===
+
        elif uid in existing_uids:
Jan 26 09:48:00 roentgen elogd[15215]: elogd 2.7.8 built Dec  2 2009, 11:54:27
+
            print "UID " + uid + " is already in use."  
Jan 26 09:48:00 roentgen elogd[15215]: revision 2278
+
            uid = ""
Jan 26 09:48:00 roentgen elogd[15215]: Falling back to default group "elog"
+
#   
Jan 26 09:48:01 roentgen elogd[15215]: Falling back to default user "elog"
+
    # Get the user's default group. Use 5012 (npg) if none other specified.  
Jan 26 09:48:01 roentgen elogd[15215]: FCKedit detected
+
    while not gid:
Jan 26 09:48:01 roentgen elogd[15217]: Falling back to default group "elog"
+
        gid = raw_input("GID [5012]: ")
Jan 26 09:48:01 roentgen elogd[15217]: Falling back to default user "elog"
+
#   
Jan 26 09:48:01 roentgen elogd[15215]: ImageMagick detected
+
        if not gid:
Jan 26 09:48:02 roentgen elogd[15215]: SSLServer listening on port 8080
+
            gid = "5012"
 
+
#   
==CUPS==
+
    # Prompt for a shell, if user doesn't enter anything just use the default
===CUPS quota accounting 2009-06-10===
+
    # Make sure the shell exists before accepting it.
3. 3. Print quotas and accounting
+
    while not shell:
 
+
        shell = raw_input("Shell [/bin/bash]: ")
CUPS has also basic page accounting and quota capabilities.
+
        if not shell:
 
+
            shell = "/bin/bash"
Every printed page is logged in the file /var/log/cups/page_log So one can everytime read out this file and determine who printed how many pages. The system is based on the CUPS filters. They simply analyse the PostScript data stream to determine the number of pages. And there fore it depends on the quality of the PostScript generated by the applications whether the pages get correctly counted. And if there is a paper jam, pages are already counted and do not get printed. Also Jobs which get rendered printer-ready on the client (Windows) will not get accounted correctly, as CUPS does not understand the proprietary language of the printer.
+
        elif not os.path.exists(shell):
 
+
            print shell + " is not a valid shell."
In addition, one can restrict the amount of pages (or kBytes) which a user is allowed to print in a certain time frame. Such restrictions can be applied to the print queues with the "lpadmin" command.
+
            shell = ""
 
+
  #   
lpadmin -p printer1 -o job-quota-period=604800 -o job-k-limit=1024
+
    # Get the password from the user. Make sure it's correct.  
lpadmin -p printer2 -o job-quota-period=604800 -o job-page-limit=100
+
    pwCorrect = False
 
+
    while not pwCorrect:
The first command means that within the "job-quota-period" (time always given in seconds, in this example we have one week) users can only print a maximum of 1024 kBytes (= 1 MByte) of data on the printer "printer1". The second command restricts printing on "printer2" to 100 pages per week. One can also give both "job-k-limit" and "job-page-limit" to one queue. Then both limits apply so the printer rejects jobs when the user already reaches one of the limits, either the 1 MByte or the 100 pages.
+
        salt = getsalt()
 
+
        password1 = crypt(getpass(),salt)
This is a very simple quota system: Quotas cannot be given per-user, so a certain user's quota cannot be raised independent of the other users, for example if the user pays his pages or gets a more printing-intensive job. Also counting of the pages is not very sophisticated as it was already shown above.
+
        password2 = crypt(getpass('Retype password: '),salt)
 
+
        if password1 == password2:
So for more sophisticated accounting it is recommended to use add-on software which is specialized for this job. This software can limit printing per-user, can create bills for the users, use hardware page counting methods of laser printers, and even estimate the actual amount of toner or ink needed for a page sent to the printer by counting the pixels.
+
            ldap_password = "{CRYPT}" + password1
 
+
            pwCorrect = True
The most well-known and complete free software package for print accounting and quotas id PyKota:
+
        else:
 
+
            print "Passwords do not match. Try again."
http://www.librelogiciel.com/software/PyKota/
+
#   
 
+
    # Build the account info       
A simple system based on reading out the hardware counter of network printers via SNMP is accsnmp:
+
    account = makeuser(username, firstname, lastname, mail, \
 
+
                      uid, gid, shell, ldap_password)
http://fritz.potsdam.edu/projects/cupsapps/
+
#   
 +
    # Decide what to do with it. Only one of these should run at a time.  
 +
    if run_add:
 +
        if ldap_add(account[0],account[1]):
 +
            if create_dirs:
 +
                create_directories(username, int(uid), int(gid))
 +
                print "User directories created successfully."
 +
            else:
 +
                print "Create home and mail directories for %s?" % username
 +
                answer = raw_input("y/N")
 +
#
 +
                if answer in ("y","Y","yes"):
 +
                    create_directories(username, int(uid), int(gid))
 +
        else:
 +
            print "Create user failed."
 +
            sys.exit(1)
 +
#
 +
    if run_ldif:
 +
        create_ldif(account[0],account[1])
 +
  #
 +
if __name__ == "__main__":
 +
    if os.geteuid() != 0:  
 +
        print "This program must be run as an administrator."
 +
    else:
 +
        main(sys.argv[1:])
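The per-user accounting described above boils down to tallying page_log lines. A minimal sketch (Python 3, not part of the original notes) of such a tally; it assumes the classic page_log layout `printer user job-id date-time page-number num-copies job-billing hostname`, and the actual fields depend on the CUPS version and PageLogFormat setting:

```python
from collections import defaultdict

def tally_pages(lines):
    """Sum printed copies per user from classic CUPS page_log lines."""
    totals = defaultdict(int)
    for line in lines:
        parts = line.split()
        if len(parts) < 7:
            continue  # skip malformed lines
        user = parts[1]
        # parts[6] is num-copies; the bracketed date splits into two tokens
        copies = int(parts[6]) if parts[6].isdigit() else 1
        totals[user] += copies
    return dict(totals)

sample = [
    'printer1 alice 101 [10/Jun/2009:12:00:00 +0000] 1 1 - gluon.unh.edu',
    'printer1 alice 101 [10/Jun/2009:12:00:01 +0000] 2 1 - gluon.unh.edu',
    'printer1 bob   102 [10/Jun/2009:12:05:00 +0000] 1 2 - taro.unh.edu',
]
print(tally_pages(sample))
```

The user names and hosts in the sample are illustrative only.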
  
===Mailman Notes 2009-05-20===
In /etc/mailman/ there is a Python script that points to /usr/lib/mailman/ via a symlink.

===SSSD Setup Files 2013-07-16===
====SSSD Notes====
*yum install sssd libsss_sudo
*authconfig --enablesssd --enablesssdauth --enablelocauthorize --update
*/etc/sssd/sssd.conf:
 [sssd]
 config_file_version = 2
 services = nss, pam
 domains = default
 [nss]
 filter_users = root,ldap,named,avahi,haldaemon,dbus,radiusd,news,nscd
 [domain/default]
 ldap_tls_reqcert = never
 auth_provider = ldap
 ldap_schema = rfc2307bis
 krb5_realm = EXAMPLE.COM
 ldap_search_base = dc=physics,dc=unh,dc=edu
 id_provider = ldap
 ldap_id_use_start_tls = False
 chpass_provider = ldap
 ldap_uri = ldaps://einstein.unh.edu
 krb5_kdcip = kerberos.example.com
 cache_credentials = True
 ldap_tls_cacertdir = /etc/openldap/cacerts
 entry_cache_timeout = 600
 ldap_network_timeout = 3
 ldap_access_filter = (&(objectclass=shadowaccount)(objectclass=posixaccount))
*/etc/nsswitch.conf:
 passwd     files sss
 shadow     files sss
 group      files sss
 sudoers    files sss
*service sssd restart
*Test settings: id (username)
Note: If you are not able to get proper information back with the 'id' command, try removing the CA certs from the /etc/openldap/cacerts/ directory. Always back that directory up before removing its contents.

====sssd.conf====
 [sssd]
 config_file_version = 2
 # Number of times services should attempt to reconnect in the
 # event of a crash or restart before they give up
 reconnection_retries = 3
 # If a back end is particularly slow you can raise this timeout here
 sbus_timeout = 30
 services = nss, pam, sudo
 # SSSD will not start if you do not configure any domains.
 # Add new domain configurations as [domain/<NAME>] sections, and
 # then add the list of domains (in the order you want them to be
 # queried) to the "domains" attribute below and uncomment it.
 # domains = LOCAL,LDAP
 domains = default
 
 [nss]
 # The following prevents SSSD from searching for the root user/group in
 # all domains (you can add here a comma-separated list of system accounts that
 # are always going to be /etc/passwd users, or that you want to filter out).
 filter_groups = root
 #filter_users = root
 filter_users = root,ldap,named,avahi,haldaemon,dbus,radiusd,news,nscd
 reconnection_retries = 3
 # The entry_cache_timeout indicates the number of seconds to retain an
 # entry in cache before it is considered stale and must block to refresh.
 # The entry_cache_nowait_timeout indicates the number of seconds to
 # wait before updating the cache out-of-band. (NSS requests will still
 # be returned from cache until the full entry_cache_timeout.) Setting this
 # value to 0 turns this feature off (default).
 # entry_cache_timeout = 600
 # entry_cache_nowait_timeout = 300
 
 [pam]
 reconnection_retries = 3
 
 [sudo]
 
 # Example domain configurations
 # Note that enabling enumeration in the following configurations will have a
 # moderate performance impact while enumerations are actually running, and
 # may increase the time necessary to detect network disconnection.
 # Consequently, the default value for enumeration is FALSE.
 # Refer to the sssd.conf man page for full details.
 #
 # Example LOCAL domain that stores all users natively in the SSSD internal
 # directory. These local users and groups are not visible in /etc/passwd; it
 # now contains only root and system accounts.
 # [domain/LOCAL]
 # description = LOCAL Users domain
 # id_provider = local
 # enumerate = true
 # min_id = 500
 # max_id = 999
 #
 # Example native LDAP domain
 # ldap_schema can be set to "rfc2307", which uses the "memberuid" attribute
 # for group membership, or to "rfc2307bis", which uses the "member" attribute
 # to denote group membership. Changes to this setting affect only how we
 # determine the groups a user belongs to and will have no negative effect on
 # data about the user itself. If you do not know this value, ask an
 # administrator.
 # [domain/LDAP]
 # id_provider = ldap
 # auth_provider = ldap
 # ldap_schema = rfc2307
 # ldap_uri = ldap://ldap.mydomain.org
 # ldap_search_base = dc=mydomain,dc=org
 # ldap_tls_reqcert = demand
 # cache_credentials = true
 # enumerate = False
 #
 # Example LDAP domain where the LDAP server is an Active Directory server.
 # [domain/AD]
 # description = LDAP domain with AD server
 # enumerate = false
 # min_id = 1000
 # id_provider = ldap
 # auth_provider = ldap
 # ldap_uri = ldap://your.ad.server.com
 # ldap_schema = rfc2307bis
 # ldap_user_search_base = cn=users,dc=example,dc=com
 # ldap_group_search_base = cn=users,dc=example,dc=com
 # ldap_default_bind_dn = cn=Administrator,cn=Users,dc=example,dc=com
 # ldap_default_authtok_type = password
 # ldap_default_authtok = YOUR_PASSWORD
 # ldap_user_object_class = person
 # ldap_user_name = msSFU30Name
 # ldap_user_uid_number = msSFU30UidNumber
 # ldap_user_gid_number = msSFU30GidNumber
 # ldap_user_home_directory = msSFU30HomeDirectory
 # ldap_user_shell = msSFU30LoginShell
 # ldap_user_principal = userPrincipalName
 # ldap_group_object_class = group
 # ldap_group_name = msSFU30Name
 # ldap_group_gid_number = msSFU30GidNumber
 # ldap_force_upper_case_realm = True
 
 [domain/default]
 enumerate = True
 ldap_tls_reqcert = never
 auth_provider = ldap
 krb5_realm = EXAMPLE.COM
 ldap_search_base = dc=physics,dc=unh,dc=edu
 id_provider = ldap
 ldap_id_use_start_tls = False
 chpass_provider = ldap
 ldap_uri = ldaps://einstein.unh.edu
 krb5_kdcip = kerberos.example.com
 cache_credentials = True
 ldap_tls_cacertdir = /etc/openldap/cacerts
 entry_cache_timeout = 600
 ldap_network_timeout = 3
 ldap_access_filter = (&(objectclass=shadowaccount)(objectclass=posixaccount))
 #ldap_schema = rfc2307bis
 ldap_schema = rfc2307
 #ldap_group_member = memberUid
 #ldap_group_search_base = ou=groups,dc=physics,dc=unh,dc=edu
 ldap_rfc2307_fallback_to_local_users = True
 sudo_provider = ldap
 ldap_sudo_search_base = ou=groups,dc=physics,dc=unh,dc=edu
 ldap_sudo_full_refresh_interval=86400
 ldap_sudo_smart_refresh_interval=3600

===CUPS Basic Info 2009-06-11===
This file contains some basic CUPS commands and info:

The device can be a parallel port, a network interface, and so forth. Devices within CUPS use Uniform Resource Identifiers ("URIs"), which are a more general form of the Uniform Resource Locators ("URLs") used in your web browser. For example, the first parallel port in Linux usually uses a device URI of parallel:/dev/lp1.

Lookup printer info:
 lpinfo -v ENTER
 network socket
 network http
 network ipp
 network lpd
 direct parallel:/dev/lp1
 serial serial:/dev/ttyS1?baud=115200
 serial serial:/dev/ttyS2?baud=115200
 direct usb:/dev/usb/lp0
 network smb
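The lpinfo -v listing above is two columns, backend class and device URI. A small sketch (Python 3, hypothetical helper, not from the original notes) of grouping such output by URI scheme:

```python
def group_backends(lpinfo_lines):
    """Group 'lpinfo -v' output lines by device URI scheme."""
    groups = {}
    for line in lpinfo_lines:
        parts = line.split()
        if len(parts) != 2:
            continue  # expect "<class> <uri>"
        _cls, uri = parts
        scheme = uri.split(':', 1)[0]  # e.g. 'parallel' from 'parallel:/dev/lp1'
        groups.setdefault(scheme, []).append(uri)
    return groups

sample = [
    "direct parallel:/dev/lp1",
    "serial serial:/dev/ttyS1?baud=115200",
    "serial serial:/dev/ttyS2?baud=115200",
    "direct usb:/dev/usb/lp0",
]
print(group_backends(sample))
```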
File devices have device URIs of the form file:/directory/filename, while network devices use the more familiar method://server or method://server/path format. Printer queues usually have a PostScript Printer Description ("PPD") file associated with them. PPD files describe the capabilities of each printer, the supported page sizes, etc.

Adding a printer:
 /usr/sbin/lpadmin -p printer -E -v device -m ppd
Managing printers:
 /usr/sbin/lpadmin -p printer options
Starting and stopping printer queues:
 /usr/bin/enable printer ENTER
 /usr/bin/disable printer ENTER
Accepting and rejecting print jobs:
 /usr/sbin/accept printer ENTER
 /usr/sbin/reject printer ENTER
Restrict access:
 /usr/sbin/lpadmin -p printer -u allow:all

==Elog==
===Elog notes 2009-05-20===
Info from the site https://midas.psi.ch/elog/adminguide.html

Download: http://midas.psi.ch/elog/download/

RPM Install Notes

Since version 2.0, ELOG is distributed as an RPM file, which eases the installation. Get the file elog-x.x.x-x.i386.rpm from the download section and execute, as root, "rpm -i elog-x.x.x-x.i386.rpm". This installs the elogd daemon in /usr/local/sbin and the elog and elconv programs in /usr/local/bin. The sample configuration file elogd.cfg together with the sample logbook will be installed under /usr/local/elog, and the documentation goes to /usr/share/doc. The elogd startup script will be installed at /etc/rc.d/init.d/elogd. To start the daemon, enter:
 /etc/rc.d/init.d/elogd start
It will listen on the port specified in /usr/local/elog/elogd.cfg, which is 8080 by default. One can then connect using any browser with the URL:
 http://localhost:8080
To start the daemon automatically, enter:
 chkconfig --add elogd
 chkconfig --level 345 elogd on
which will start the daemon on run levels 3, 4 and 5 after the next reboot.

Note that the RPM installation creates a user and group elog, under which the daemon runs.

Notes on running elog under apache

For cases where elogd should run under port 80 in parallel with an Apache server, Apache can be configured to serve ELOG in a subdirectory. Start elogd normally under port 8080 (or similar) as noted above and make sure it's working there. Then put the following redirection into the Apache configuration file:
 Redirect permanent /elog http://your.host.domain/elog/
 ProxyPass /elog/ http://your.host.domain:8080/
Make sure that the Apache modules mod_proxy.c and mod_alias.c are activated. Justin Dieters <enderak@yahoo.com> reports that mod_proxy_http.c is also required. The Redirect statement is necessary to automatically append a "/" to a request like http://your.host.domain/elog. Apache then works as a proxy and forwards all requests starting with /elog to the elogd daemon.

Note: Do not put "ProxyRequests On" into your configuration file. This option is not necessary and can be misused for spamming and proxy forwarding of otherwise blocked sites.

Because elogd uses links to itself (for example in the email notification and the redirection after a submit), it has to know under which URL it is running. If you run it behind a proxy, you have to add the line:
 URL = http://your.proxy.host/subdir/
into elogd.cfg.

Notes on Apache:

Another possibility is to use the Apache web server as a proxy server allowing secure connections. To do so, Apache has to be configured accordingly and a certificate has to be generated. See the instructions on how to create a certificate, and see "Running elog under apache" above for how to run elogd under Apache. Once configured correctly, elogd can be accessed via http://your.host and via https://your.host simultaneously.

The redirection statement has to be changed to:
 Redirect permanent /elog https://your.host.domain/elog/
 ProxyPass /elog/ http://your.host.domain:8080/
and the following has to be added to the "VirtualHost ...:443" section in /etc/httpd/conf.d/ssl.conf:
 # Proxy setup for Elog
 <Proxy *>
 Order deny,allow
 Allow from all
 </Proxy>
 ProxyPass /elog/ http://host.where.elogd.is.running:8080/
 ProxyPassReverse /elog/ http://host.where.elogd.is.running:8080/
Then the following URL statement has to be written to elogd.cfg:
 URL = https://your.host.domain/elog
There are more detailed step-by-step instructions in the contributions section.

Using ssh:

elogd can be accessed through an SSH tunnel. To do so, open the tunnel like:
 ssh -L 1234:your.server.name:8080 your.server.name
This opens a secure tunnel from port 1234 on your local host to the server host where the elogd daemon is running on port 8080. You can then access http://localhost:1234 from your browser and reach elogd in a secure way.

==Virtualization==
===Xen Basic Commands 2009-06-04===
Basic management options

The following are basic and commonly used xm commands:
 xm help [--long]: view available options and help text.
Use the xm list command to list active domains:
 $ xm list
 Name                                ID  Mem(MiB)  VCPUs      State      Time(s)
 Domain-0                            0    520      2        r-----    1275.5
 r5b2-mySQL01                      13    500      1        -b----      16.1
xm create [-c] DomainName/ID: start a virtual machine. If the -c option is used, the start-up process will attach to the guest's console.

xm console DomainName/ID: attach to a virtual machine's console.

xm destroy DomainName/ID: terminate a virtual machine, similar to a power off.

xm reboot DomainName/ID: reboot a virtual machine; runs through the normal system shut down and start up process.

xm shutdown DomainName/ID: shut down a virtual machine; runs a normal system shut down procedure.

xm pause

xm unpause

xm save

xm restore

xm migrate

===Research 2011-08-24===
This is a collection of notes I took on virtualization over the summer.

====KVM Commands====
Installing KVM:
 yum groupinstall KVM
Adding storage pools:
 virsh pool-dumpxml default > pool.xml
 edit pool.xml # with new name and path
 virsh pool-create pool.xml
 virsh pool-refresh name
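The pool.xml used in the KVM storage-pool steps is a libvirt pool definition. A minimal sketch (Python 3, not from the original notes) of generating one; a real pool-dumpxml also carries uuid, capacity and permissions elements, and the name/path here are borrowed from the yendi SR example elsewhere in these notes:

```python
import xml.etree.ElementTree as ET

def make_pool_xml(name, path):
    """Build a minimal dir-type libvirt storage pool definition."""
    pool = ET.Element('pool', type='dir')
    ET.SubElement(pool, 'name').text = name
    target = ET.SubElement(pool, 'target')
    ET.SubElement(target, 'path').text = path
    return ET.tostring(pool, encoding='unicode')

# e.g. save this to pool.xml, then: virsh pool-create pool.xml
print(make_pool_xml('yendi', '/data1/Xen/VMs'))
```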
  
====XCP XE Commands====
*SR Creation
 xe sr-create content-type=user type=nfs name-label=yendi shared=true device-config:server=10.0.0.237 device-config:serverpath=/data1/Xen/VMs/
 xe pool-list
 xe pool-param-set uuid=<pool-uuid> default-SR=<newly_created_SR_uuid>
 xe sr-list
*VM Creation from CD
 xe vm-install template="Other install media" new-name-label=<vm-name>
 xe vbd-list vm-uuid=<vm_uuid> userdevice=0 params=uuid --minimal
*Using the UUID returned from vbd-list, set the root disk to not be bootable:
 xe vbd-param-set uuid=<root_disk_uuid> bootable=false
*CD Creation
 xe cd-list
 xe vm-cd-add vm=<vm-uuid> cd-name="<cd-name>" device=3
 xe vbd-param-set uuid=<cd_uuid> bootable=true
 xe vm-param-set uuid=<vm_uuid> other-config:install-repository=cdrom
*Network Installation
 xe sr-list
 xe vm-install template="Other install media" new-name-label=<name_for_vm> sr-uuid=<storage_repository_uuid>
 xe network-list bridge=xenbr0 --minimal
 xe vif-create vm-uuid=<vm-uuid> network-uuid=<network-uuid> mac=random device=0
 xe vm-param-set uuid=<vm_uuid> other-config:install-repository=<http://server/redhat/5.0>
*Lookup dom-id for VNC connections
 xe vm-list uuid=<vm-uuid> params=dom-id
*Use this command to port forward to the local system
 ssh -l root -L 5901:127.0.0.1:5901 <xcp_server>
*Or, after adding this line to the iptables file on the XCP host server
 -A RH-Firewall-1-INPUT -p tcp -m tcp --dport 5901 -j ACCEPT
*you can use this ssh command
 ssh -l root -L 5901:tomato:5901 gourd.unh.edu
*or you can ssh to gourd locally and then to tomato using
 ssh -l root -L 5901:127.0.0.1:5901 gourd.unh.edu
*then on gourd run
 ssh -l root -L 5901:127.0.0.1:5901 tomato
*Virtual Disk Creation
 xe vm-disk-add disk-size=10000000 device=4
 xe vm-disk-list

====VMware ESXi Notes====
VMware ESXi

Key areas of interest for us:
*vMotion
*SAN
*hypervisor
*Pricing for gourd would be $2600
  
==Yum==
+
The many options of this unique but very important file are documented on the separate elogd.cfg syntax page.
  
===RHEL to CentOS 2010-01-12===
+
To better control appearance and layout of the logbooks, elogd.cfg may optionally specify the use of additional files containing HTML code, and/or custom "themes" configurations. These need to be edited directly from the file system right now.
Display priority scores for all repositories
 
  
You can list all repositories set up on your system by a yum repolist all. However, this does not show priority scores. Here's a one liner for that. If no number is defined, the default is the lowest priority (99).
+
The meaning of the directory flags -s and -d is explained in the section covering the configuration options Resource dir and Logbook dir in the elogd.cfg description.
  
cat /etc/yum.repos.d/*.repo | sed -n -e "/^\[/h; /priority *=/{ G; s/\n/ /; s/ity=/ity = /; p }"  | sort -k3n
+
Notes on tarball install
 +
Make sure you have the libssl-dev package installed. Consult your distribution for details.
  
Installing yum
+
Expand the compressed TAR file with tar -xzvf elog-x.x.x.tar.gz. This creates a subdirectory elog-x.x.x where x.x.x is the version number. In that directory execute make, which creates the executables elogd, elog and elconv. These executables can then be copied to a convenient place like /usr/local/bin or ~/bin. Alternatively, a "make install" will copy the daemon elogd to SDESTDIR (by default /usr/local/sbin) and the other files to DESTDIR (by default /usr/local/bin). These directories can be changed in the Makefile. The elogd executable can be started manually for testing with :
 +
elogd -p 8080
 +
where the -p flag specifies the port. Without the -p flag, the server uses the standard WWW port 80. Note that ports below 1024 can only be used if elogd is started under root, or the "sticky bit" is set on the executable.
  
Okay, okay -- I get it -- it is not CentOS. But, I still want yum, or to try to remove and repair a crippled set of yum configurations.
<!> First, take full backups and make sure they can be read. This procedure may not work.

Then you need the following packages to get a working yum, all of which can be downloaded from any CentOS mirror:
*centos-release

You should already have this package installed. You can check that with
  rpm -q centos-release
  centos-release-4-4.3.i386

When elogd is started under root, it attaches to the specified port and then tries to fall back to a non-root account; this is necessary to avoid security problems. It looks in the configuration file for the statements Usr and Grp. If found, elogd uses that user and group name to run under. The names must of course be present on the system (usually in /etc/passwd and /etc/group). If the statements Usr and Grp are not present, elogd tries the user and group elog, then the default user and group (normally nobody and nogroup). Care has to be taken that elogd, when running under the specific user and group account, has read and write access to the configuration file and the logbook directories. Note that the RPM installation automatically creates a user and group elog.

If the program complains with something like "cannot bind to port...", it could be that the network is not started on the Linux box. This can be checked with the /sbin/ifconfig program, which must show that eth0 is up and running.

The distribution contains a sample configuration file elogd.cfg and a demo logbook in the demo subdirectory. If the elogd server is started in the elog-x.x.x directory, the demo logbook can be accessed directly by pointing a browser at http://localhost:8080 (or whatever port the elog daemon was started on). If the elogd server is started in some other directory, you must specify the full path of the configuration file with the "-c" flag and change the Data dir = option in the configuration file to a full path like /usr/local/elog.

Once testing is complete, elogd will typically be started with the -D flag to run as a daemon in the background, like this:
 elogd -p 8080 -c /usr/local/elog/elogd.cfg -D

Note that it is mandatory to specify the full path of the elogd.cfg file when elogd is started as a daemon.

To test the daemon, connect to your host via:
  http://your.host:8080/

If port 80 is used, the port can be omitted in the URL. If several logbooks are defined on a host, they can be specified in the URL:
  http://your.host/<logbook>

where <logbook> is the name of the logbook.
  
If centos-release is already on your system, check that the yum configuration hasn't been removed and is still present:
  ls -l /etc/yum.repos.d/

This directory should contain only the files CentOS-Base.repo and CentOS-Media.repo. If those aren't there, make a directory 'attic' there and 'mv' a backup of the current contents into it, to prepare for the reinstall of the centos-release package:
  rpm -Uvh --replacepkgs centos-release.*.rpm

The contents of the all-important configuration file elogd.cfg are described below:

 [Tbow@gluon documentation-notes]$ ll elog*
 -rw-r--r-- 1 Tbow npg 9.4K May 20  2009 elog
 -rw-r--r-- 1 Tbow npg  623 Jan 26  2010 elog.roentgen.messages.problem
 -rw-r--r-- 1 Tbow npg 1.2K Feb 11 19:12 elog_users_setup
 [Tbow@gluon documentation-notes]$ text
 text2pcap  text2wave  textools

===elog_users_setup 2010-02-11===
You can find some instructions/information here:

http://pbpl.physics.ucla.edu/old_stuff/elogold/current/doc/config.html#access

The thing to remember is that you want new users to end up as users of just the logbook they will be using, not global users. If you look at where my name is in the elogd.cfg file, I am designated as an admin user and a global user that can log into any logbook to fix things. A user like Daniel, on the other hand, can only log into the nuclear group logbooks, not my private one, or Karl's, or Maurik's. So, to add someone to the nuclear group's logbooks, for example, add the new person's user name where you find people like Daniel and Ethan, and enable self-registering at the top. Restart, then use self-registration to register the new person's password and account. Then go back into elogd.cfg, comment out the self-register option so other people cannot do that, and restart. That should be the easiest way to do it, but you can read the info linked above and decide.
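The workflow above can be sketched as a fragment of elogd.cfg. This is a hypothetical sketch, not the actual roentgen configuration: the option names (Password file, Admin user, Self register, Login user) come from the elogd documentation, while the user and logbook names are invented.

```
[global]
; hypothetical global section
Password file = passwd
Admin user = tbow
Self register = 1   ; set back to 0 and restart once the new account exists

[Nuclear Group]
; per-logbook access list: add the new user here
Login user = daniel, ethan, newuser
```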
  
If centos-release isn't installed on your machine, you can drop the --replacepkgs from the command above. Make a backup directory ./attic/ and move any other files present into it, so that you can back out of this process later if you decide you are in over your head.
===elog_roentgen_messages_problems 2010-01-26===

 Jan 26 09:48:00 roentgen elogd[15215]: elogd 2.7.8 built Dec  2 2009, 11:54:27
 Jan 26 09:48:00 roentgen elogd[15215]: revision 2278
 Jan 26 09:48:00 roentgen elogd[15215]: Falling back to default group "elog"
 Jan 26 09:48:01 roentgen elogd[15215]: Falling back to default user "elog"
 Jan 26 09:48:01 roentgen elogd[15215]: FCKedit detected
 Jan 26 09:48:01 roentgen elogd[15217]: Falling back to default group "elog"
 Jan 26 09:48:01 roentgen elogd[15217]: Falling back to default user "elog"
 Jan 26 09:48:01 roentgen elogd[15215]: ImageMagick detected
 Jan 26 09:48:02 roentgen elogd[15215]: SSLServer listening on port 8080
  
Then you need the following packages:

CentOS 4

(available from where you also got the centos-release package):
    * yum
    * sqlite
    * python-sqlite
    * python-elementtree
    * python-urlgrabber

CentOS 5

(available from where you also got the centos-release package):
    * m2crypto
    * python-elementtree
    * python-sqlite
    * python-urlgrabber
    * rpm-python
    * yum

Download those into a separate directory and install them with
 rpm -Uvh *.rpm
from that directory. As before, take a backup of /etc/yum.conf so that you can back out any changes.

==CUPS==

===CUPS quota accounting 2009-06-10===

Print quotas and accounting

CUPS also has basic page accounting and quota capabilities.

Every printed page is logged in the file /var/log/cups/page_log, so one can read this file at any time and determine who printed how many pages. The system is based on the CUPS filters: they simply analyse the PostScript data stream to determine the number of pages, so it depends on the quality of the PostScript generated by the applications whether the pages are counted correctly. If there is a paper jam, pages that were already counted do not get printed. Jobs which are rendered printer-ready on the client (Windows) will also not be accounted correctly, as CUPS does not understand the proprietary language of the printer.

In addition, one can restrict the number of pages (or kBytes) which a user is allowed to print in a certain time frame. Such restrictions can be applied to the print queues with the "lpadmin" command.

 lpadmin -p printer1 -o job-quota-period=604800 -o job-k-limit=1024
 lpadmin -p printer2 -o job-quota-period=604800 -o job-page-limit=100

The first command means that within the "job-quota-period" (always given in seconds; in this example one week) users can only print a maximum of 1024 kBytes (= 1 MByte) of data on the printer "printer1". The second command restricts printing on "printer2" to 100 pages per week. One can also give both "job-k-limit" and "job-page-limit" for one queue; then both limits apply, and the printer rejects jobs as soon as the user reaches either limit, the 1 MByte or the 100 pages.

This is a very simple quota system: quotas cannot be set per user, so one user's quota cannot be raised independently of the others, for example if that user pays for his pages or has a more printing-intensive job. The counting of pages is also not very sophisticated, as shown above.

So for more sophisticated accounting it is recommended to use add-on software specialized for this job. Such software can limit printing per user, create bills for the users, use the hardware page-counting methods of laser printers, and even estimate the actual amount of toner or ink needed for a page sent to the printer by counting the pixels.
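Before reaching for add-on packages, the page_log accounting described above already supports simple per-user tallies. A sketch with an invented log excerpt (the field layout follows the standard page_log format: printer, user, job-id, date-time, page-number, num-copies, job-billing, host):

```shell
# Made-up page_log excerpt for illustration
cat > /tmp/page_log.sample <<'EOF'
printer1 alice 1 [10/Jun/2009:10:00:00 +0000] 1 2 - localhost
printer1 alice 1 [10/Jun/2009:10:00:01 +0000] 2 2 - localhost
printer1 bob 2 [10/Jun/2009:10:05:00 +0000] 1 1 - localhost
EOF

# Sum the num-copies column (field 7) per user (field 2);
# the date field contains a space, so num-copies lands in $7
awk '{ pages[$2] += $7 } END { for (u in pages) print u, pages[u] }' /tmp/page_log.sample | sort

# job-quota-period is expressed in seconds; the 604800 in the
# lpadmin examples above is one week:
echo $(( 7 * 24 * 3600 ))
```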
 
  
The most well-known and complete free software package for print accounting and quotas is PyKota:

http://www.librelogiciel.com/software/PyKota/

A simpler system, based on reading out the hardware counters of network printers via SNMP, is accsnmp:

http://fritz.potsdam.edu/projects/cupsapps/

==Transana==

This is for Dawn's research and graduate students. It is transcription software for videos.

===Notes 2010-03-16===

So far this is all the info I have from Bo.

Transana should work now. The following information is what you may need during the client setup.

Username: dawn

password: dawnpass (This is your mysql username and password)

MySQL Host: roentgen.unh.edu or 132.177.88.61

port 3306

Database: test or in the mysql server you can create your own database

Transana Message Server: pumpkin.unh.edu

port 17595

Setup Instructions for Client Computers

Once you've got the network server up and running, you need the following information to run Transana 2.3-MU and connect to the servers:
* A username and password
 
* The DSN or IP address of the computer running MySQL
 
* The name(s) of the database(s) for your project(s).
 
* The DSN or IP address of the computer running the Transana Message Server.
 
* The path to the common Video Storage folder.
 
Please note that all computers accessing the same database must enter the MySQL computer information in the same way, and must use the same Transana Message Server. They do NOT need to connect to the same common video storage folder, but subfolders within each separate video storage folder must be named and arranged identically.
 
#Install Transana 2.3-MU.
 
#Start Transana 2.3-MU. You will see the following screen:
 
#*Enter your Username, password, the DSN or IP address of your MySQL Server, and the name of your project database.
 
#If this is the first time you've used Transana 2.3-MU, you will see this message next:
 
#*Click the "Yes" button to specify your Transana Message Server. You will see this screen:
 
#*Enter the DSN or IP address of your Transana Message Server.
 
#You need to configure your Video Root Directory before you will be able to connect to the project videos. If you haven't yet closed the Transana Settings dialog box, click the "Directories" tab. If you already closed it, go to the "Options" menu and select "Program Settings". You will see the following screen:
 
  
Under "Video Root Directory", browse to the common Video Storage folder.

We recommend also setting the "Waveform Directory" to a common waveforms folder so that each video only needs to go through waveform extraction once for everyone on the team to share.

Also, on Windows we recommend mapping a network drive to the common Video folder if it is on another server, rather than using machine-qualified path names. We find that mapping drive V: to "\\VideoServer\ProjectDirectory" produces faster connections to videos than specifying "\\VideoServer\ProjectDirectory" in the Video Root directory.

If you have any questions about this, please feel free to tell me.

===Transana Survival Guide 2013-08-24===

====Setup Instructions for Mac OS X Network Servers====

The first step is to install MySQL. Transana 2.4-MU requires MySQL 4.1.x or later. We have tested Transana 2.4-MU with a variety of MySQL versions on a variety of operating systems without difficulty, but we are unable to test all possible combinations. Please note that MySQL 4.0.x does not support the UTF-8 character set, so it should not be used with Transana 2.4-MU.

====Install MySQL====

Follow these directions to set up MySQL.

#Download the "Max" version of MySQL for Mac OS X, not the "Standard" version. It is available at http://www.mysql.com.
NOTE: The extensive MySQL documentation available on the MySQL Web Site can help you make sense of the rest of these instructions. We strongly recommend you familiarize yourself with the MySQL Manual, as it can answer many of your questions.
#You probably want to download and install the MySQL GUI Tools as well. The MySQL Administrator is the easiest way to create and manage user accounts, in my opinion.
#Install MySQL from the Disk Image file. Follow the on-screen instructions. Be sure to assign a password to the root user account. (This prevents unauthorized access to your MySQL database by anyone who knows about this potential security hole.)
#You need to set the value of the "max_allowed_packet" variable to at least 8,388,608.

===CUPS Basic Info 2009-06-11===

This file contains some basic cups commands and info:

The device can be a parallel port, a network interface, and so forth. Devices within CUPS use Uniform Resource Identifiers ("URIs"), which are a more general form of the Uniform Resource Locators ("URLs") used in your web browser. For example, the first parallel port in Linux usually uses a device URI of parallel:/dev/lp1

Look up printer info:

 lpinfo -v

  network socket
  network http
  network ipp
  network lpd
  direct parallel:/dev/lp1
  serial serial:/dev/ttyS1?baud=115200
  serial serial:/dev/ttyS2?baud=115200
  direct usb:/dev/usb/lp0
  network smb
 
  
On OS X 10.5.8, using MySQL 4.1.22 on one computer and MySQL 5.0.83 on another, I edited the file /etc/my.cnf so that it included the following lines:
  [mysqld]
     lower_case_table_names=1
     max_allowed_packet=8500000

This should work for MySQL 5.1 as well.

Using MySQL 4.1.14-max on OS X 10.3, I edited the "my.cnf" file in /etc, adding the following line to the [mysqld] section:
  set-variable=max_allowed_packet=8500000

Exactly what you do may differ, of course.

====Setup MySQL User Accounts====

Here's what I do. It's the easiest way I've found to manage databases and accounts while maintaining database security. You are, of course, free to manage MySQL however you choose.

File devices have device URIs of the form file:/directory/filename, while network devices use the more familiar method://server or method://server/path format. Printer queues usually have a PostScript Printer Description ("PPD") file associated with them. PPD files describe the capabilities of each printer, the page sizes supported, etc.

Adding a printer:
 /usr/sbin/lpadmin -p printer -E -v device -m ppd

Managing printers:
 /usr/sbin/lpadmin -p printer options

Starting and stopping printer queues:
 /usr/bin/enable printer
 /usr/bin/disable printer

Accepting and rejecting print jobs:
 /usr/sbin/accept printer
 /usr/sbin/reject printer

Restricting access:
 /usr/sbin/lpadmin -p printer -u allow:all
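The method part of a CUPS device URI (everything before the first colon) can be peeled off with plain shell parameter expansion; the server and queue names here are invented:

```shell
# Print each URI's backend ("method"), i.e. the text before the first colon
for uri in parallel:/dev/lp1 usb:/dev/usb/lp0 socket://printserver:9100 ipp://printserver/printers/lab; do
  printf '%s -> %s\n' "$uri" "${uri%%:*}"
done
```

`${uri%%:*}` removes the longest suffix starting at a colon, leaving just the method name (parallel, usb, socket, ipp).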
  
I have downloaded and installed the MySQL GUI Tools from the MySQL Web Site. These tools work nicely to manage databases and user accounts, as well as to manipulate data in MySQL tables. The tools have minor differences on different platforms, so the following directions are necessarily a bit vague on the details.

First, I use the MySQL Administrator tool to create databases (called "catalogs" and "schemas" in the tool). Go to the "Catalogs" page and choose to create a new "schema."

Second, still within the MySQL Administrator tool, I go to the Accounts page. I create a new user account, filling in (at least) the User Name and Password fields on the General tab. I then go to the Schema Privileges tab, select a user account (in some versions you select a host, usually "%", under the user account; in others you select the user account itself) and a specific database (schema), then assign specific privileges. I generally assign all privileges except "Grant", but you may choose to try a smaller subset if you wish. The "Select," "Insert," "Update," "Delete," "Create," and "Alter" privileges are all required. You may assign privileges to multiple databases for a single user account if you wish. Once I'm done setting privileges, I save or apply the settings and move on to the next user account.

==Virtualization==

===Xen Basic Commands 2009-06-04===

Basic management options

The following are basic and commonly used xm commands:

xm help [--long]: view available options and help text.

Use the xm list command to list active domains:

 $ xm list
  Name                               ID  Mem(MiB)   VCPUs      State      Time(s)
  Domain-0                            0    520      2        r-----    1275.5
  r5b2-mySQL01                       13    500      1        -b----      16.1
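Output in this format is easy to post-process; as a sketch, the sample table above (saved to a file) can be totalled with awk:

```shell
# Saved "xm list" output, matching the sample above
cat > /tmp/xm_list.sample <<'EOF'
Name                               ID  Mem(MiB)   VCPUs      State      Time(s)
Domain-0                            0    520      2        r-----    1275.5
r5b2-mySQL01                       13    500      1        -b----      16.1
EOF

# Skip the header line and total the Mem(MiB) column
awk 'NR > 1 { mem += $3 } END { print mem " MiB allocated to domains" }' /tmp/xm_list.sample
```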
  
xm create [-c] DomainName/ID: start a virtual machine. If the -c option is used, the start-up process will attach to the guest's console.

xm console DomainName/ID: attach to a virtual machine's console.

xm destroy DomainName/ID: terminate a virtual machine, similar to a power off.

xm reboot DomainName/ID: reboot a virtual machine; runs through the normal system shut-down and start-up process.

xm shutdown DomainName/ID: shut down a virtual machine; runs a normal system shut-down procedure.

xm pause
xm unpause
xm save
xm restore
xm migrate

I have chosen to give my own user account "God-like" privileges within MySQL so that I can look at and manipulate all data in all databases without having to assign myself specific privileges. This also allows me to create new Transana databases from within Transana-MU rather than having to run the MySQL Administrator. To accomplish this, I used the MySQL Query tool to go into MySQL's "mysql" database and edit my user account's entry in the "users" table to give my account global privileges. Please note that this is NOT a "best practice" or a recommendation, and is not even a good idea for most users. I mention it here, however, as I know some users will want to do this.

These instructions are not meant to be detailed or comprehensive. They are intended only to help people get started with Transana-MU. Please see the documentation on the MySQL site for more information on manipulating databases, user accounts, and privileges.

====Set up the Transana Message Server====

Once you've set up MySQL user accounts, you should set up version 2.40 of the Transana Message Server. It does not need to be on the same server as MySQL, though it may be.
  
Follow these directions to set up the Message Server.

#If your server is running an earlier version of the Transana Message Server, you need to remove the old Message Server before installing the new one. See the Transana Message Server 2.40 Upgrade guide.
#Download TransanaMessageServer240Mac.dmg from the Transana web site. The link to the download page is in your Transana-MU purchase receipt e-mail.
#Install it on the server.
#If you want the Transana Message Server to start automatically when the server starts up, follow these instructions:
#*#Open a Terminal window. Type su to work as a superuser with the necessary privileges.
#*#In your /Library/StartupItems folder (NOT your /HOME/Library, but the root /Library folder), create a subfolder called TransanaMessageServer.
#*#Enter the following (single line) command:
#*#*cp /Applications/TransanaMessageServer/StartupItems/* /Library/StartupItems/TransanaMessageServer
#*#*This will copy the files necessary for the Transana Message Server to auto-start.
#*#Reboot your computer now so that the Transana Message Server will start. Alternately, you can start the Transana Message Server manually, just this once, to avoid rebooting.
#If you want to start the Message Server manually for testing purposes, type the following (single line) command into a Terminal window:
#*sudo python /Applications/TransanaMessageServer/MessageServer.py

===Research 2011-08-24===

This is a collection of notes I took on virtualization over the summer.
 
  
====Configure the Firewall====

If you will have Transana-MU users connecting to the MySQL and Transana Message Server instances you just set up from outside the network, you need to make sure port 3306 for MySQL and port 17595 for the Transana Message Server are accessible from outside the network. This will probably require explicitly configuring your firewall software to allow traffic through to these ports. Consult your firewall software's documentation to learn how to do this.

====Creating a Shared Network Volume for Video Storage====

Finally, you must create a shared network volume where users can store any video that will be shared with all Transana-MU users. Be sure to allocate sufficient disk space for all necessary video files. This volume may be on your Mac server or on another computer, but it must be accessible to all Transana-MU users on your network.

====KVM Commands====

Installing KVM:
 yum groupinstall KVM

Adding storage pools:
 virsh pool-dumpxml default > pool.xml
 edit pool.xml # with new name and path
 virsh pool-create pool.xml
 virsh pool-refresh name
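The pool.xml edit in the middle step might look like this minimal sketch for a directory-backed pool; the pool name and path are invented, and the layout follows the general shape of what virsh pool-dumpxml emits for a dir-type pool:

```xml
<pool type='dir'>
  <!-- hypothetical name and path; substitute your own -->
  <name>vmpool2</name>
  <target>
    <path>/data1/VMs/vmpool2</path>
  </target>
</pool>
```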
  
If you will have Transana-MU users connecting to the MySQL and Transana Message Server instances you just set up from outside the network, they will need to set up their own parallel Video Storage volumes.

====Now configure the client computers====

Each user will need the following information to connect to the server programs you have just set up:

Username and password. (Don't create a single user account for users to share. The analytic process flows more smoothly when users can tell who else is interacting with the data, who has locked a record, and so on.)

The DSN or IP address of the MySQL Server computer.

The name of the database set up for the project.

The DSN or IP address of the Transana Message Server computer, if different from the MySQL Server computer.

Instructions on how to connect to the local network's common video storage folder.

Once you have this information, you are ready to start setting up client computers for the members of the project.

====XCP XE Commands====

*SR Creation
 xe sr-create content-type=user type=nfs name-label=yendi shared=true device-config:server=10.0.0.237 device-config:serverpath=/data1/Xen/VMs/
 xe pool-list
 xe pool-param-set uuid=<pool-uuid> default-SR=<newly_created_SR_uuid>
 xe sr-list

*VM Creation from CD
 xe vm-install template="Other install media" new-name-label=<vm-name>
 xe vbd-list vm-uuid=<vm_uuid> userdevice=0 params=uuid --minimal

*Using the UUID returned from vbd-list, set the root disk to not be bootable:
 xe vbd-param-set uuid=<root_disk_uuid> bootable=false

*CD Creation
 xe cd-list
 xe vm-cd-add vm=<vm-uuid> cd-name="<cd-name>" device=3
 xe vbd-param-set uuid=<cd_uuid> bootable=true
 xe vm-param-set uuid=<vm_uuid> other-config:install-repository=cdrom

*Network Installation
 xe sr-list
 xe vm-install template="Other install media" new-name-label=<name_for_vm> sr-uuid=<storage_repository_uuid>
 xe network-list bridge=xenbr0 --minimal
 xe vif-create vm-uuid=<vm-uuid> network-uuid=<network-uuid> mac=random device=0
 xe vm-param-set uuid=<vm_uuid> other-config:install-repository=<http://server/redhat/5.0>

*Lookup dom-id for VNC connections
 xe vm-list uuid=<vm-uuid> params=dom-id

*Use this command to port forward to the local system
 ssh -l root -L 5901:127.0.0.1:5901 <xcp_server>

*Or, after adding this line to the iptables file on the XCP host server
 -A RH-Firewall-1-INPUT -p tcp -m tcp --dport 5901 -j ACCEPT

*you can use this ssh command
 ssh -l root -L 5901:tomato:5901 gourd.unh.edu

*or you can ssh to gourd locally and then to tomato using
 ssh -l root -L 5901:127.0.0.1:5901 gourd.unh.edu

*then on gourd run
 ssh -l root -L 5901:127.0.0.1:5901 tomato

*Virtual Disk Creation
 xe vm-disk-add disk-size=10000000 device=4
 xe vm-disk-list
====VMware ESXi Notes====

VMware ESXi

Key areas of interest for us:
*vMotion
*SAN
*hypervisor
*Pricing for gourd would be $2600

===Xen Removal on Pumpkin 2009-08-26===

When removing kernel-xen, use these commands:
 yum groupremove Virtualization
 yum remove kernel-xenU
 yum update

====Setup with MySQL Workbench====

Here's what I do. I don't know if it's the optimum way, but it works for me. You are, of course, free to manage MySQL however you choose.

I have downloaded and installed the MySQL Workbench tool from the MySQL Web Site. This tool works nicely to manage databases (called schemas in Workbench) and user accounts, as well as to manipulate data in MySQL tables. The tool has minor differences on different platforms, so the following directions are necessarily a bit vague on the details. These directions assume you have already defined your Server Instance in MySQL Workbench.

If I need to create a new database for Transana, I start at the MySQL Workbench Home screen. On the left side, under "Open Connection to Start Querying", I double-click the connection for the server I want to use and enter my password. This takes me to the SQL Editor. In the top toolbar, I select the "Create a new schema in the connected server" icon, which for me is the 3rd one. I give my schema (my Transana database) a name (avoiding spaces in the name), and press the Apply button. The "Apply SQL Script to Database" window pops up, and I again press the Apply button, followed by the "Finish" button when the script is done executing. My Transana database now exists, so I close the SQL Editor tab and move on to adding or authorizing user accounts.

Each person using Transana-MU needs a unique user account with permission to access the Transana database. To create user accounts for an existing database, I start at the MySQL Workbench Home screen. On the right side, under "Server Administration", I double-click the server instance I want to work with and enter my password. Under "Security", I select "Users and Privileges". At the bottom of the screen, I press the "Add Account" button, then provide a Login Name and password. I then press the Apply button to create the user account. After that, I go to the Schema Privileges tab, select the user account I just created, and click the "Add Entry..." button on the right-hand side of the screen. This pops up the New Schema Privilege Definition window. I select Any Host under hosts, and Selected Schema followed by the name of the database I want to provide access to. Once this is done, the bottom section of the screen, where privileges are managed, will be enabled. Click the "Select ALL" button. Make sure that the "GRANT" or "GRANT OPTION" right is NOT checked, then press the "Save Changes" button.

These instructions are not meant to be detailed or comprehensive. They are intended only to help people get started with Transana-MU. Please see the documentation on the MySQL site for more information on manipulating databases, user accounts, and privileges.

===Xen and NVidia 2011-06-07===

Running binary nVidia drivers under a Xen host

Sun, 06/22/2008 - 00:50 — jbreland
In my last post I mentioned that I recently had a hardware failure that took down my server. I needed to get it back up and running again ASAP, but due to a large number of complications I was unable to get the original hardware up and running again, nor could I get any of the three other systems I had at my disposal to work properly. Seriously, it was like Murphy himself had taken up residence here. In the end, rather desperate and out of options, I turned to Xen (for those unfamiliar with it, it's similar to VMware or VirtualBox, but highly geared towards servers). I'd recently had quite a bit of experience getting Xen running on another system, so I felt it'd be a workable, albeit temporary, solution to my problem.

Unfortunately, the only working system I had suitable for this was my desktop, and while the process of installing and migrating the server to a Xen guest host was successful (this site is currently on that Xen instance), it was not without its drawbacks. For one thing, there's an obvious performance hit on my desktop while running under Xen concurrently with my server guest, though fortunately my desktop is powerful enough that this mostly isn't an issue (except when the guest accesses my external USB drive to back up files; for some reason that consumes all available CPU for about 2 minutes and kills performance on the host). There were a few other minor issues, but by far the biggest problem was that the binary nVidia drivers would not install under Xen. Yes, the open source 'nv' driver would work, but that had a number of problems/limitations:

 1. dramatically reduced video performance, both in video playback and normal 2d desktop usage
 2. no 3d acceleration whatsoever (remember, this is my desktop system, so I sometimes use it for gaming)
 3. no (working) support for multiple monitors
 4. significantly different xorg.conf configuration

====Appendix A: Resetting MySQL Root Password====

http://dev.mysql.com/doc/refman/5.0/en/resetting-permissions.html

On Unix, use the following procedure to reset the password for all MySQL root accounts. The instructions assume that you will start the server so that it runs using the Unix login account that you normally use for running the server. For example, if you run the server using the mysql login account, you should log in as mysql before using the instructions. Alternatively, you can log in as root, but in this case you must start mysqld with the --user=mysql option. If you start the server as root without using --user=mysql, the server may create root-owned files in the data directory, such as log files, and these may cause permission-related problems for future server startups. If that happens, you will need to either change the ownership of the files to mysql or remove them.

#Log on to your system as the Unix user that the mysqld server runs as (for example, mysql).
#Locate the .pid file that contains the server's process ID. The exact location and name of this file depend on your distribution, host name, and configuration. Common locations are /var/lib/mysql/, /var/run/mysqld/, and /usr/local/mysql/data/. Generally, the file name has an extension of .pid and begins with either mysqld or your system's host name. You can stop the MySQL server by sending a normal kill (not kill -9) to the mysqld process, using the path name of the .pid file in the following command:
 shell> kill `cat /mysql-data-directory/host_name.pid`
Use backticks (not forward quotation marks) with the cat command. These cause the output of cat to be substituted into the kill command.
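A quick way to convince yourself of the backtick behaviour, using a scratch file instead of a real .pid file:

```shell
# Scratch stand-in for the real .pid file
echo 12345 > /tmp/host_name.pid.sample
# Backticks substitute the file's contents into the command line,
# so this is equivalent to running: echo 12345
echo `cat /tmp/host_name.pid.sample`
```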
+
#
#Create a text file containing the following statements. Replace the password with the password that you want to use. 
UPDATE mysql.user SET Password=PASSWORD('MyNewPass') WHERE User='root';
+
 FLUSH PRIVILEGES;
Write the UPDATE and FLUSH statements each on a single line. The UPDATE statement resets the password for all root accounts, and the FLUSH statement tells the server to reload the grant tables into memory so that it notices the password change.
#Save the file. For this example, the file will be named /home/me/mysql-init. The file contains the password, so it should not be saved where it can be read by other users. If you are not logged in as mysql (the user the server runs as), make sure that the file has permissions that permit mysql to read it.
#Start the MySQL server with the special --init-file option:
 shell> mysqld_safe --init-file=/home/me/mysql-init &
The server executes the contents of the file named by the --init-file option at startup, changing each root account password.
#After the server has started successfully, delete /home/me/mysql-init.

You should now be able to connect to the MySQL server as root using the new password. Stop the server and restart it normally.

==Misc==
===Running Binary nVidia Drivers under Xen Host===
In fairness, issues 1 and 2 are a direct result of nVidia not providing adequate specifications for proper driver development. Nonetheless, I want my hardware to actually work, so the performance was not acceptable. Issue 3 was a major problem as well, as I have two monitors and use both heavily while working. I can only assume that this is due to a bug in the nv driver for the video card I'm using (a GeForce 8800 GTS), as dual monitors should be supported by this driver. It simply wouldn't work, though. Issue 4 wasn't that significant, but it did require quite a bit of time to rework it, which was ultimately pointless anyway due to issue 3.

So, with all that said, I began my quest to get the binary nVidia drivers working under Xen. Some basic searches showed that this was possible, but in every case the referenced material was written for much older versions of Xen, the Linux kernel, and/or the nVidia driver. I tried several different suggestions and patches, but none would work. I actually gave up, but then a few days later I got so fed up with the performance that I started looking into it again, trying various combinations of suggestions. It took a while, but I finally managed to hit on the particular sequence of commands necessary to get the driver to compile AND load AND run under X. The end result is actually quite easy once you know what needs to be done, but figuring it out was a real pain. So, I wanted to post the details here to hopefully save some other people a lot of time should they be in a similar situation.

This guide was written with the following system specs in mind:
* Xen 3.2.1
* Gentoo dom0 host using the xen-sources-2.6.21 kernel package
** a non-Xen kernel must also be installed, such as gentoo-sources-2.6.24-r8
* GeForce 5xxx series or newer video card using the nvidia-drivers-173.14.09 driver package
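Before following along, it's worth confirming what your own machine is running; a minimal check (the /usr/src layout is the Gentoo convention assumed by this guide, so the listing may be empty elsewhere):

```shell
# Print the running kernel version and any kernel source trees present.
echo "running kernel: $(uname -r)"
ls /usr/src/ 2>/dev/null || echo "(no /usr/src on this machine)"
```

The kernel reported here is the one the module must be built against in step 8 below's SYSSRC path.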
Version differences shouldn't be too much of an issue; however, a lot of this is Gentoo-specific. If you're running a different distribution, you may be able to modify this technique to suit your needs, but I haven't tested it myself (if you do try and have any success, please leave a comment to let others know what you did). The non-Xen kernel should typically be left over from before you installed Xen on your host; if you don't have anything else installed, however, you can do a simple emerge gentoo-sources to install it. You don't need to run it, just build against it.

Once everything is in place, and you're running the Xen-enabled (xen-sources) kernel, I suggest uninstalling any existing binary nVidia drivers with emerge -C nvidia-drivers. I had a version conflict when trying to start X at one point as the result of some old libraries not being properly updated, so this is just to make sure that the system is in a clean state. Also, while you can do most of this from within X using the nv driver, I suggest logging out of X entirely before the modprobe step.

Here's the step-by-step guide:
#Run uname -r to verify the version of your currently running Xen-enabled kernel; e.g., mine's 2.6.21-xen.
#Verify that you have both Xen and non-Xen kernels installed: cd /usr/src/ && ls -l
#*e.g., I have both linux-2.6.21-xen and linux-2.6.24-gentoo-r8
#Create a symlink to the non-Xen kernel: ln -sfn linux-2.6.24-gentoo-r8 linux
#Install the nvidia-drivers package, which includes the necessary X libraries: emerge -av nvidia-drivers
#*this will also install the actual driver, but it'll be built and installed for the non-Xen kernel, not your current Xen-enabled kernel
#Determine the specific name and version of the nVidia driver package that was just installed; this can be found by examining the output of emerge -f nvidia-drivers (look for the NVIDIA-Linux-* line).
#Extract the contents of the nVidia driver package: bash /usr/portage/distfiles/NVIDIA-Linux-x86_64-173.14.09-pkg2.run -a -x
#Change to the driver source code directory: cd NVIDIA-Linux-x86_64-173.14.09-pkg2/usr/src/nv/
#Build the driver for the currently running Xen-enabled kernel: IGNORE_XEN_PRESENCE=y make SYSSRC=/lib/modules/`uname -r`/build module
#Assuming there are no build errors (nvidia.ko should exist), install the driver:
#*mkdir /lib/modules/`uname -r`/video
#*cp -i nvidia.ko /lib/modules/`uname -r`/video/
#*depmod -a
#If necessary, log out of X, then load the driver: modprobe nvidia
#If necessary, reconfigure xorg.conf to use the nvidia binary driver rather than the nv driver.
#Test that X will now load properly with startx.
#If appropriate, start (or restart) the display manager with /etc/init.d/xdm start.

Assuming all went well, you should now have a fully functional and accelerated desktop environment, even under a Xen dom0 host. W00t. If not, feel free to post a comment and I'll try to help if I can. You should also hit up the Gentoo Forums, where you can get help from people far smarter than I.

I really hope this helps some people out. It was a royal pain in the rear to get this working, but believe me, it makes a world of difference when using the system.

===denyhosts-undeny.py 2013-05-31===
 #!/usr/bin/env python
 
 import os
 import sys
 
 #The only argument should be the host to undeny.
 try:
     goodhost = sys.argv[1]
 except IndexError:
     print "Please specify a host to undeny!"
     sys.exit(1)
 
 #These commands start/stop denyhosts. Set these as appropriate for your system.
 stopcommand = '/etc/init.d/denyhosts stop'
 startcommand = '/etc/init.d/denyhosts start'
 
 #Check to see what distribution we're using.
 distrocheckcommand = "awk '// {print $1}' /etc/redhat-release"
 d = os.popen(distrocheckcommand)
 distro = d.read()
 distro = distro.rstrip('\n')
 
 #Check to see what user we're being run as.
 usercheckcommand = "whoami"
 u = os.popen(usercheckcommand)
 user = u.read()
 user = user.rstrip('\n')
 if user != 'root':
     print "Sorry, this script requires root privileges."
     sys.exit(1)
 
 #The files we should be purging faulty denials from.
 if distro == 'Red':
     filestoclean = ['/etc/hosts.deny','/var/lib/denyhosts/hosts-restricted','/var/lib/denyhosts/sync-hosts','/var/lib/denyhosts/suspicious-logins']
 elif distro == 'CentOS':
     filestoclean = ['/etc/hosts.deny','/usr/share/denyhosts/data/hosts-restricted','/usr/share/denyhosts/data/sync-hosts','/usr/share/denyhosts/data/suspicious-logins']
 elif distro == 'Fedora':
     print "This script is not yet supported on Fedora systems!"
     sys.exit(1)
 else:
     print "This script is not yet supported on your distribution, or I can't properly detect it."
     sys.exit(1)
 
 #Stop denyhosts so that we don't get any confusion.
 os.system(stopcommand)
 
 #Let's now remove the faulty denials.
 for targetfile in filestoclean:
     purgecommand = "sed -i '/" + goodhost + "/ d' " + targetfile
     os.system(purgecommand)
 
 #Now that the faulty denials have been removed, it's safe to restart denyhosts.
 os.system(startcommand)
 sys.exit(0)

Sat, 07/12/2008 - 16:37 — Simon (not verified)
Re: Running Binary nVidia Drivers under Xen Host

Hi,

A question: Why do I need to install the non-Xen kernel? Is this only to be able to properly install the nvidia driver using its setup script?

I'm using openSUSE 10 x64 with an almost recent kernel (2.6.25.4), and currently without Xen.

According to your write-up, the nvidia driver and Xen support each other (and it compiles fine under Xen). My last understanding was that this setup was only possible with an old patched nvidia driver (with several performance and stability problems).

Thanks ahead!

PS: sorry for my bad English

- Simon
Thu, 07/17/2008 - 17:28 — jbreland
Re: Running Binary nVidia Drivers under Xen Host

There are two parts to the binary driver package:
* the driver itself (the kernel module - nvidia.ko)
* the various libraries needed to make things work

While the kernel module will indeed build against the Xen kernel (provided the appropriate CLI options are used, as discussed above), I was unable to get the necessary libraries installed using the Xen kernel. It might be possible to do this, but I don't know how. For me, it was easier to let my package manager (Portage, for Gentoo) install the package. This would only install when I'm using the non-Xen kernel. After that was installed, I could then switch back to the Xen kernel and manually build/install the kernel module.

Of course, as I mentioned above, this was done on a Gentoo system. Other distributions behave differently, and I'm not sure what may be involved in getting the binary drivers set up correctly on them. If you have any luck, though, please consider posting your results here for the benefit of others.

Good luck.

--
http://www.legroom.net/

Wed, 10/08/2008 - 14:47 — Jonathan (not verified)
Re: Running Binary nVidia Drivers under Xen Host

I have it working on CentOS 5.2 with a Xen kernel as well; thanks to this I have TwinView available again:

 [root@mythtv ~]# dmesg | grep NVRM
 NVRM: loading NVIDIA UNIX x86_64 Kernel Module 100.14.19 Wed Sep 12 14:08:38 PDT 2007
 NVRM: builtin PAT support disabled, falling back to MTRRs.
 NVRM: bad caching on address 0xffff880053898000: actual 0x77 != expected 0x73
 NVRM: please see the README section on Cache Aliasing for more information
 NVRM: bad caching on address 0xffff880053899000: actual 0x77 != expected 0x73
 NVRM: bad caching on address 0xffff88005389a000: actual 0x77 != expected 0x73
 NVRM: bad caching on address 0xffff88005389b000: actual 0x77 != expected 0x73
 NVRM: bad caching on address 0xffff88005389c000: actual 0x77 != expected 0x73
 NVRM: bad caching on address 0xffff88005389d000: actual 0x77 != expected 0x73
 NVRM: bad caching on address 0xffff88005389e000: actual 0x77 != expected 0x73
 NVRM: bad caching on address 0xffff88005389f000: actual 0x77 != expected 0x73
 NVRM: bad caching on address 0xffff8800472f4000: actual 0x67 != expected 0x63
 NVRM: bad caching on address 0xffff880045125000: actual 0x67 != expected 0x63
 [root@mythtv ~]# uname -r
 2.6.18-92.1.13.el5xen
 [root@mythtv ~]#

Now to see if I can fix the bad caching errors... and see if I can run a dom host.

Thanks heaps!

Thu, 11/06/2008 - 21:18 — CentOS N00b (not verified)
Re: Running Binary nVidia Drivers under Xen Host

Can you explain how you got that to work? I'm still getting an error on the modprobe step.

 [root@localhost ~]# modprobe nvidia
 nvidia: disagrees about version of symbol struct_module
 FATAL: Error inserting nvidia (/lib/modules/2.6.18-92.el5xen/kernel/drivers/video/nvidia.ko): Invalid module format

Any ideas, anyone?

Tue, 11/18/2008 - 15:32 — Jonathan (not verified)
Re: Running Binary nVidia Drivers under Xen Host

I have the following kernel-related packages installed and am compiling some older drivers (100.14.19), as they work for my card in non-Xen kernels as well:

 [root@mythtv ~]# rpm -qa kernel* | grep $(uname -r | sed -e 's/xen//') | sort
 kernel-2.6.18-92.1.18.el5
 kernel-devel-2.6.18-92.1.18.el5
 kernel-headers-2.6.18-92.1.18.el5
 kernel-xen-2.6.18-92.1.18.el5
 kernel-xen-devel-2.6.18-92.1.18.el5
 [root@mythtv ~]#

I am booted into the xen kernel:

 [root@mythtv ~]# uname -r
 2.6.18-92.1.18.el5xen
 [root@mythtv ~]#

I already have my source extracted as explained in the article, and have navigated to it. Inside the ./usr/src/nv folder of the source tree I issue the following command (from the article as well), which starts compiling:

 [root@mythtv nv]# IGNORE_XEN_PRESENCE=y make SYSSRC=/lib/modules/`uname -r`/build module

After compilation I copy the driver to my lib tree:

 [root@mythtv nv]# mkdir -p /lib/modules/`uname -r`/kernel/drivers/video/nvidia/
 [root@mythtv nv]# cp -i nvidia.ko /lib/modules/`uname -r`/kernel/drivers/video/nvidia/

Then to load the driver:

 [root@mythtv ~]# depmod -a
 [root@mythtv ~]# modprobe nvidia

To see if it was loaded, I check dmesg, which in my case outputs this:

 [root@mythtv ~]# dmesg | grep NVIDIA
 nvidia: module license 'NVIDIA' taints kernel.
 NVRM: loading NVIDIA UNIX x86_64 Kernel Module 100.14.19 Wed Sep 12 14:08:38 PDT 2007
 [root@mythtv ~]#

I do not worry about the tainting of the kernel, as it seems to work pretty well for me, nor about this error:

 [root@mythtv nv]# dmesg | grep NVRM
 NVRM: loading NVIDIA UNIX x86_64 Kernel Module 100.14.19 Wed Sep 12 14:08:38 PDT 2007
 NVRM: builtin PAT support disabled, falling back to MTRRs.
 [root@mythtv nv]#

Sat, 07/12/2008 - 18:18 — Anonymous (not verified)
Re: Running Binary nVidia Drivers under Xen Host

A really interesting article you created here -- if there were not (I hope) a typo that destroys everything:

The last paragraph reads: "Assuming all went well, you should not have a fully functional ..."

The word "not" is disturbing me, and I have some hope that it should be a "now", as that would make sense with all your efforts.

Can you please comment on this issue?

Thanks

Thu, 07/17/2008 - 17:30 — jbreland
Re: Running Binary nVidia Drivers under Xen Host

Oops. Yeah, that was a typo. I guess it would pretty much defeat the purpose of going through this exercise, considering it's already not working at the start. :-)

Corrected now - thanks for pointing it out.

--
http://www.legroom.net/

Fri, 07/18/2008 - 22:04 — SCAP (not verified)
Re: Running Binary nVidia Drivers under Xen Host

Xcellent! Works great for me... Now I have to choose between VMware, VirtualBox and Xen...

Thu, 07/31/2008 - 14:53 — Anonymous (not verified)
Re: Running Binary nVidia Drivers under Xen Host

Works with the 173.14.12 drivers too! ;)

Wed, 09/24/2008 - 12:13 — VladSTar (not verified)
Re: Running Binary nVidia Drivers under Xen Host

Thanks for the solution - it works like a charm.
Fedora Core 8, kernel 2.6.21.7-5, Xen 3.1.2-5 and the latest nvidia driver (173.14.12).

Thu, 10/23/2008 - 21:17 — Jamesttgrays (not verified)
Re: Running Binary nVidia Drivers under Xen Host

Hm... strange - I wasn't able to get this to work with the newest version of the nVidia drivers. It says something along the lines of "will not install to Xen-enabled kernel." Darned nVidia - serves me right... I ought to have gotten an ATI card!

Wed, 11/05/2008 - 05:54 — kdvr (not verified)
Re: Running Binary nVidia Dr...==> it won't build (HELP!)

openSUSE 11.0 with linux-2.6.27 (it's a fresh install and I don't remember the exact kernel version; I'm under Windows), Leadtek WinFast 9600GT.

For me it doesn't work. It won't build with: IGNORE_XEN_PRESENCE=y make SYSSRC=/lib/modules/`uname -r`/build module. It says something like "this kernel is not supported", the same error as the setup, though nothing about Xen.

I need Xen for studying purposes on my desktop PC, and running it without drivers is not an option, as the cooler is blowing at full speed.

Sun, 11/09/2008 - 00:55 — jbreland
Re: Running Binary nVidia Dr...==> it won't build (HELP!)

I think you need to have kernel headers installed in order to build the module. I'm not sure what you'd need to install in openSUSE, but query your package repository for kernel-headers, linux-headers, etc. and see if you can find something that matches your specific kernel.

--
http://www.legroom.net/

Sat, 11/15/2008 - 19:42 — Anonymous (not verified)
Re: Running Binary nVidia Dr...==> it won't build (HELP!)

Same here: with 2.6.27 (x86/64) and 177.80, step 8 fails ("Unable to determine kernel version").

Tue, 11/18/2008 - 15:29 — Anonymous (not verified)
Re: Running Binary nVidia Dr...==> it won't build (HELP!)

For openSUSE 11 I got it working by doing this:

 cd /usr/src/linux
 make oldconfig && make scripts && make prepare
 
 # Extract the source code from the nvidia installer
 sh NVIDIA-Linux-whateverversion-pkg2.run -a -x
 
 cd NVIDIA-Linux-whateverversion-pkg2/usr/src/nv/
 # build
 IGNORE_XEN_PRESENCE=y make SYSSRC=/usr/src/linux module
 # should have built a kernel module
 cp nvidia.ko /lib/modules/`uname -r`/kernel/drivers/video/
 cd /lib/modules/`uname -r`/kernel/drivers/video/
 depmod -a
 modprobe nvidia

glxinfo is showing "direct rendering: yes", so it seems to be working.

Tue, 12/23/2008 - 00:42 — Andy (not verified)
No luck with RHEL 5.1

I tried different combinations of Red Hat kernels (2.6.18-92.1.22.el5xen-x86_64, 2.6.18-120.el5-xen-x86_64, 2.6.18-92.el5-xen-x86_64) and NVIDIA drivers (177.82, 173.08), but I couldn't get it to run. I succeeded in compiling the kernel module, but once I start the X server (either with startx or init 5) the screen just turns black and a hard reset is needed.

/var/log/messages contains the lines:

 Dec 23 14:23:56 jt8qm1sa kernel: BUG: soft lockup - CPU#0 stuck for 10s! [X:8177]
 Dec 23 14:23:56 jt8qm1sa kernel: CPU 0:
 Dec 23 14:23:56 jt8qm1sa kernel: Modules linked in: nvidia(PU) ...

I'm giving up now. Any hint, anyone?

Andy

Thu, 01/22/2009 - 21:43 — Hannes (not verified)
Error on starting X

Juche jbreland,
first of all thank you for your article; it gave me confidence that it will work some time. But this time is still to come.
I did everything as you said (except that I unmerged the old nvidia driver at the beginning), and every time I want to start X I get this error:

 (II) Module already built-in
 NVIDIA: could not open the device file /dev/nvidia0 (Input/output error).
 (EE) NVIDIA(0): Failed to initialize the NVIDIA graphics device PCI:1:0:0.
 (EE) NVIDIA(0): Please see the COMMON PROBLEMS section in the README for
 (EE) NVIDIA(0): additional information.
 (EE) NVIDIA(0): Failed to initialize the NVIDIA graphics device!
 (EE) Screen(s) found, but none have a usable configuration.

I am using different versions than you did:

* sys-kernel/xen-sources: 2.6.21
* app-emulation/xen: 3.3.0
* x11-drivers/nvidia-drivers: 177.82
* sys-kernel/gentoo-sources: 2.6.27-r7

It looks like everything works out fine: lsmod has the nvidia module listed, the file /dev/nvidia0 exists, and there is no problem in accessing it. I have the same problem if I try to start X as the root user.

Do you have any idea?

Hannes

PS: could you please post your obsolete xorg.conf configuration for the open-source driver? That would help me too.

Fri, 01/23/2009 - 22:17 — jbreland
Re: Error on starting X

Off the top of my head, no. You installed the new nvidia-drivers package first, right? All of those libraries are needed when loading the module. Did you get any build errors, or errors when inserting the new module? Did you try rebooting just to be certain that an older version of the nvidia or nv module was not already loaded or somehow lingering in memory?

As for the xorg.conf file using the nv driver, you can grab it from the link below, but keep in mind that it's not fully functional. It provided me with a basic, unaccelerated, single-monitor desktop that was usable, but rather miserable.
xorg.conf.nv

--
http://www.legroom.net/

Sun, 01/25/2009 - 17:49 — Anonymous (not verified)
Re: Error on starting X

Thanks for your reply.
I am still trying. Here are my new enlightenments:
I found out that when I compile the nvidia driver the regular way, it uses this make command:

 make -j3 HOSTCC=i686-pc-linux-gnu-gcc CROSS_COMPILE=i686-pc-linux-gnu- LDFLAGS= IGNORE_CC_MISMATCH=yes V=1 SYSSRC=/usr/src/linux SYSOUT=/lib/modules/2.6.21-xen/build HOST_CC=i686-pc-linux-gnu-gcc clean module

This is very different from your suggestion of only running

 IGNORE_XEN_PRESENCE=y make SYSSRC=/lib/modules/`uname -r`/build module

So I tried my long make command with the prefix IGNORE_XEN_PRESENCE=y, but this led to a build error (some error other than the "This is a XEN kernel" error); see below. Then I tried (why did you not take this approach):

 IGNORE_XEN_PRESENCE=y emerge -av nvidia-drivers

which was an easier way to produce the same error:

 include/asm/fixmap.h:110: error: expected declaration specifiers or '...' before 'maddr_t'

Very strange: if I just run your short command, compilation runs without any problem.

Another thing I found out was that the log in dmesg is different when loading the nvidia module under Xen:

 nvidia: module license 'NVIDIA' taints kernel.
 NVRM: loading NVIDIA UNIX x86 Kernel Module 177.82 Tue Nov 4 13:35:57 PST 2008

or under a regular kernel:

 nvidia: module license 'NVIDIA' taints kernel.
 nvidia 0000:01:00.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16
 nvidia 0000:01:00.0: setting latency timer to 64
 NVRM: loading NVIDIA UNIX x86 Kernel Module 177.82 Tue Nov 4 13:35:57 PST 2008

There are two lines missing, maybe important - maybe the reason why the /dev/nvidia0 device was not created.

I will continue trying; if I find some solution or new enlightenments I will keep you and your fan club informed. Thanks for your comments,

Hannes

Wed, 01/28/2009 - 08:14 — Hannes (not verified)
Re: Error on starting X

Juche jbreland,

I give up; I am one of those who could not make it.
I will now use VirtualBox. That is sad, but more efficient.

Hannes

Sat, 01/31/2009 - 01:56 — jbreland
Re: Error on starting X

Sorry to hear you couldn't get it working. Nothing wrong with VirtualBox, though. I use it myself every time I need to run Windows. :-)
--
http://www.legroom.net/

===Temperature Shutdown Procedure 2009-06-06===
Room temp greater than 25C
*If the outdoor temp is lower than indoor, open the windows.
*Shut down any unused workstations.
*Shut down any workstations being built or configured.
Room temp greater than 28C
*Follow procedure for >25C.
*Write an email to npg-users@physics.unh.edu:
Subject: Systems being shut down due to temperature
 Body: Due to high temperatures in the server room, we will be performing an emergency shutdown on the following servers: gourd, pepper, and tomato. These systems will be going offline in the next 10 minutes, so please save your work immediately. We apologize for any inconvenience.
 Thank you,
 (Your name)
*Wait 10 minutes. If the temperature is still too high, shut down gourd, pepper, and tomato.
Room temp greater than 30C
*Follow procedure for >25C, then >28C.
*Wait 10 minutes after shutting down gourd, pepper, and tomato.
*If temperatures are still greater than 30C, write an email to npg-users@physics.unh.edu:
Subject: Systems being shut down due to temperature
 Body: Due to high temperatures in the server room, we will be performing an emergency shutdown on the following servers: endeavour. These systems will be going offline in the next 10 minutes, so please save your work immediately. We apologize for any inconvenience.
*Wait 10 minutes. If the temperature is still too high, shut down endeavour.
*Wait 5 minutes. If the temperature is still too high, write an email to npg-users@physics.unh.edu:
Subject: Systems being shut down due to temperature
 Body: Due to high temperatures in the server room, we will be performing an emergency shutdown on the following servers: taro and pumpkin. These systems will be going offline in the next 10 minutes, so please save your work immediately. We apologize for any inconvenience.
*Wait 10 minutes. If the temperature is still too high, shut down taro and pumpkin.
*Wait 5 minutes. If the temperature is still too high, shut down lentil.
Room temp greater than 35C
*Immediately shut down all workstations, gourd, pepper, tomato, lentil, endeavour, taro, and pumpkin, in that order. If the temperature drops to another category, follow the instructions for that category.
*Wait 5 minutes. If the temperature is still too high, send an email to npg-users@physics.unh.edu:
Subject: Critical temperatures in server room
 Body: The server room temperatures are now critical. In order to avoid hardware damage, we have performed an emergency shutdown of all servers, and the mail server will be shut down shortly. We apologize for any inconvenience.
*Wait 15 minutes so that people can get your notification.
*If the temperature has not dropped below 35C, shut down einstein.

==Yum==
===RHEL to CentOS 2010-01-12===
Display priority scores for all repositories:

You can list all repositories set up on your system with yum repolist all. However, this does not show priority scores. Here's a one-liner for that. If no number is defined, the default is the lowest priority (99).

 cat /etc/yum.repos.d/*.repo | sed -n -e "/^\[/h; /priority *=/{ G; s/\n/ /; s/ity=/ity = /; p }" | sort -k3n

===Installing yum===
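Before rebuilding anything, it's worth checking whether the basic packaging tools are even present on the box; a minimal sketch (plain shell, safe to run anywhere):

```shell
# Report whether the basic packaging tools are present at all.
# A "MISSING" line means that component needs to be (re)installed.
for tool in rpm yum; do
    if command -v "$tool" >/dev/null 2>&1; then
        echo "$tool: $(command -v "$tool")"
    else
        echo "$tool: MISSING"
    fi
done
```

This prints one line per tool either way, so it never aborts on a crippled system.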
Okay, okay -- I get it -- it is not CentOS. But I still want yum, or want to try to remove and repair a crippled set of yum configurations.

<!> First, take full backups and make sure they can be read. This may not work.

Then you need the following package to get a working yum - all of which can be downloaded from any CentOS mirror:
*centos-release

You should already have this package installed. You can check that with:
 rpm -q centos-release
 centos-release-4-4.3.i386

If it is already on your system, please check that the yum configuration hasn't been pulled and is available on your system:
 ls -l /etc/yum.repos.d/

This directory should contain only the files CentOS-Base.repo and CentOS-Media.repo. If those aren't there, you should make a directory 'attic' there, and 'mv' a backup of the current content into that attic, to prepare for the reinstall of the centos-release package:
 rpm -Uvh --replacepkgs centos-release.*.rpm

If centos-release isn't installed on your machine, you can drop the --replacepkgs from the command above. Make a backup directory ./attic/ and move any other files present into it, so that you can back out of this process later if you decide you are in over your head.

Then you need the following packages:

CentOS 4 (available from where you also got the centos-release package):
* yum
* sqlite
* python-sqlite
* python-elementtree
* python-urlgrabber

CentOS 5 (available from where you also got the centos-release package):
* m2crypto
* python-elementtree
* python-sqlite
* python-urlgrabber
* rpm-python
* yum

Download those into a separate directory and install them with
 rpm -Uvh *.rpm
from that directory. As before, take a backup of /etc/yum.conf so that you can back out any changes.

=Hosts=
These are hosts that I have worked on. The sections below may not reflect the services each host currently runs; this is a log, not a statement of what is.

==Gourd==
===Network Config 2012-11-05===
====ifcfg-farm====
 DEVICE=eth0
 ONBOOT=yes
 HWADDR=00:30:48:ce:e2:38
 BRIDGE=farmbr

====ifcfg-farmbr====
 ONBOOT=yes
 TYPE=bridge
 DEVICE=farmbr
 BOOTPROTO=static
 IPADDR=10.0.0.252
 NETMASK=255.255.0.0
 GATEWAY=10.0.0.1
 NM_CONTROLLED=no
 DELAY=0

====ifcfg-farmbr:1====
 ONBOOT=yes
 TYPE=Ethernet
 DEVICE=farmbr:1
 BOOTPROTO=static
 IPADDR=10.0.0.240
 NETMASK=255.255.0.0
 GATEWAY=10.0.0.1
 NM_CONTROLLED=no
 ONPARENT=yes

====ifcfg-unh====
 DEVICE=eth1
 ONBOOT=yes
 HWADDR=00:30:48:ce:e2:39
 BRIDGE=unhbr

====ifcfg-unhbr====
 ONBOOT=yes
 TYPE=bridge
 DEVICE=unhbr
 BOOTPROTO=static
 IPADDR=132.177.88.75
 NETMASK=255.255.252.0
 GATEWAY=132.177.88.1
 NM_CONTROLLED=no
 DELAY=0

====ifcfg-unhbr:1====
 ONBOOT=yes
 TYPE=Ethernet
 DEVICE=unhbr:1
 BOOTPROTO=static
 IPADDR=132.177.91.210
 NETMASK=255.255.252.0
 GATEWAY=132.177.88.1
 NM_CONTROLLED=no
 ONPARENT=yes

===rc.local 2009-05-20===
 #!/bin/sh
 #
 # This script will be executed *after* all the other init scripts.
 # You can put your own initialization stuff in here if you don't
 # want to do the full Sys V style init stuff.
 
 touch /var/lock/subsys/local
 
 # This will send an email to the npg-admins at startup with the hostname and the boot.log file
 mail -s "$HOSTNAME Started, Here is the boot.log" npg-admins@physics.unh.edu < /var/log/boot.log

===Yum 2009-05-21===
Fixing yum on gourd.
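Before changing anything, it helps to snapshot the current yum state; a small sketch (the repo path is the standard /etc/yum.repos.d, guarded so the commands also run cleanly on machines without it):

```shell
# Snapshot the yum configuration state before touching it.
echo "installed yum-related packages (if rpm is available):"
rpm -qa 2>/dev/null | grep -i yum || echo "  (rpm not available or no yum packages)"
echo "repo files:"
ls /etc/yum.repos.d/ 2>/dev/null || echo "  (no /etc/yum.repos.d on this machine)"
```

Saving this output (e.g. redirected to a dated file) gives a baseline to compare against after reinstalling packages.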
In order to get RHN support (repo files) you must download and install off the rhn network
+
==Transana==
yum-rhn-plugin
+
This is for Dawn's research and graduate studentsIt is transcription software for videos.
and then these errors
 
[Tbow@gourd ~]$ sudo rpm -i Desktop/documentation-notes/downloads/yum-rhn-plugin-0.5.3-30.el5.noarch.rpm
 
Password:
 
warning: Desktop/documentation-notes/downloads/yum-rhn-plugin-0.5.3-30.el5.noarch.rpm: V3 DSA signature: NOKEY, key ID 37017186
 
error: Failed dependencies:
 
rhn-client-tools >= 0.4.19-9 is needed by yum-rhn-plugin-0.5.3-30.el5.noarch
 
rhn-setup is needed by yum-rhn-plugin-0.5.3-30.el5.noarch
 
  yum >= 3.2.19-15 is needed by yum-rhn-plugin-0.5.3-30.el5.noarch
 
[Tbow@gourd nut-2.4.1]$ less /proc/version|grep Linux
 
Linux version 2.6.9-67.0.15.EL (brewbuilder@hs20-bc2-2.build.redhat.com) (gcc version 3.4.6  20060404 (Red Hat 3.4.6-9)) #1 Tue Apr 22 13:42:17 EDT 2008
 
When I tried installing the package for el3 this came up
 
[Tbow@gourd nut-2.4.1]$ sudo rpm -Uvh /yum-2.0.8-0.1.el3.rf.noarch.rpm
 
Preparing...                ########################################### [100%]
 
package yum-2.4.2-0.4.el4.rf (which is newer than yum-2.0.8-0.1.el3.rf) is already installed
 
  
Tried using the --replacefiles, but didn't work with this command, look into it
+
===Notes 2010-03-16===
[Tbow@gourd nut-2.4.1]$ sudo rpm -U --replacefiles /yum-2.4.2-0.4.el4.rf.noarch.rpm
+
So far this is all the info I have form Bo
  package yum-2.4.2-0.4.el4.rf is already installed
+
The Transana should work now. The following information is maybe what you need during the client setup.
Tried updating then go this
+
  Username: dawn
  [Tbow@gourd nut-2.4.1]$ sudo yum update
+
password: dawnpass (This is your mysql username and password)
  Setting up Update Process
+
MySQL Host: roentgen.unh.edu or 132.177.88.61
Setting up repositories
+
  port 3306
No Repositories Available to Set Up
+
  Database: test or in the mysql server you can create your own database
Reading repository metadata in from local files
+
Transana Message Server: pumpkin.unh.edu
No Packages marked for Update/Obsoletion
+
  port 17595
Either go to the red hat network website to find the repos.d/ files or run rhn_check
 
/usr/sbin/rhn_check
 
/usr/sbin/rhn_register
 
  Upgrade yum for rhel 3
 
  
Old repository files are still on this system so I will reinstall yum on the is system
+
Setup Instructions for Client Computers
  
Once you've got the network server up and running, you need the following information to run Transana 2.3-MU and connect to the servers:
* A username and password
* The DSN or IP address of the computer running MySQL
* The name(s) of the database(s) for your project(s)
* The DSN or IP address of the computer running the Transana Message Server
* The path to the common Video Storage folder

Please note that all computers accessing the same database must enter the MySQL computer information in the same way, and must use the same Transana Message Server. They do NOT need to connect to the same common video storage folder, but subfolders within each separate video storage folder must be named and arranged identically.

#Install Transana 2.3-MU.
#Start Transana 2.3-MU.
#*Enter your username, password, the DSN or IP address of your MySQL server, and the name of your project database.
#If this is the first time you've used Transana 2.3-MU, you will be asked about your Transana Message Server next:
#*Click the "Yes" button to specify your Transana Message Server.
#*Enter the DSN or IP address of your Transana Message Server.
#You need to configure your Video Root Directory before you will be able to connect to the project videos. If you haven't yet closed the Transana Settings dialog box, click the "Directories" tab. If you already closed it, go to the "Options" menu and select "Program Settings".

Under "Video Root Directory", browse to the common Video Storage folder.

We recommend also setting the "Waveform Directory" to a common waveforms folder so that each video only needs to go through waveform extraction once for everyone on the team to share.

Also, on Windows we recommend mapping a network drive to the common Video folder if it is on another server, rather than using machine-qualified path names. We find that mapping drive V: to "\\VideoServer\ProjectDirectory" produces faster connections to videos than specifying "\\VideoServer\ProjectDirectory" in the Video Root directory.

If you have any questions about this, please feel free to ask.

===smartd.conf 2009-05-20===
 # Home page is: http://smartmontools.sourceforge.net
 # $Id: smartd.conf,v 1.38 2004/09/07 12:46:33 ballen4705 Exp $
 # smartd will re-read the configuration file if it receives a HUP
 # signal
 # The file gives a list of devices to monitor using smartd, with one
 # device per line. Text after a hash (#) is ignored, and you may use
 # spaces and tabs for white space. You may use '\' to continue lines.
 # You can usually identify which hard disks are on your system by
 # looking in /proc/ide and in /proc/scsi.
 # The word DEVICESCAN will cause any remaining lines in this
 # configuration file to be ignored: it tells smartd to scan for all
 # ATA and SCSI devices. DEVICESCAN may be followed by any of the
 # Directives listed below, which will be applied to all devices that
 # are found. Most users should comment out DEVICESCAN and explicitly
 # list the devices that they wish to monitor.
 #DEVICESCAN
 # First (primary) ATA/IDE hard disk. Monitor all attributes, enable
 # automatic online data collection, automatic Attribute autosave, and
 # start a short self-test every day between 2-3am, and a long self test
 # Saturdays between 3-4am.
 #/dev/hda -a -o on -S on -s (S/../.././02|L/../../6/03)
 # Monitor SMART status, ATA Error Log, Self-test log, and track
 # changes in all attributes except for attribute 194
 #/dev/hda -H -l error -l selftest -t -I 194
 # A very silent check. Only report SMART health status if it fails,
 # but send an email in this case
 #/dev/hda -H -m npg-admins@physics.unh.edu
 # First two SCSI disks. This will monitor everything that smartd can
 # monitor. Start extended self-tests Wednesdays between 6-7pm and
 # Sundays between 1-2 am
 #/dev/sda -d scsi -s L/../../3/18
 #/dev/sdb -d scsi -s L/../../7/01
 # Monitor 4 ATA disks connected to a 3ware 6/7/8000 controller which uses
 # the 3w-xxxx driver. Start long self-tests Sundays between 1-2, 2-3, 3-4,
 # and 4-5 am.
 # Note: one can also use the /dev/twe0 character device interface.
 #/dev/sdc -d 3ware,0 -a -s L/../../7/01
 #/dev/sdc -d 3ware,1 -a -s L/../../7/02
 #/dev/sdc -d 3ware,2 -a -s L/../../7/03
 #/dev/sdc -d 3ware,3 -a -s L/../../7/04
 # Monitor 2 ATA disks connected to a 3ware 9000 controller which uses
 # the 3w-9xxx driver. Start long self-tests Tuesdays between 1-2 and 3-4 am
 #/dev/sda -d 3ware,0 -a -s L/../../2/01
 #/dev/sda -d 3ware,1 -a -s L/../../2/03
 # Send quick test email at smartd startup
 #/dev/sda -d 3ware,0 -a -m npg-admins@physics.unh.edu -M test
 #/dev/sda -d 3ware,1 -a -m npg-admins@physics.unh.edu -M test
 #/dev/sda -d 3ware,2 -a -m npg-admins@physics.unh.edu -M test
 #/dev/sda -d 3ware,3 -a -m npg-admins@physics.unh.edu -M test
 #/dev/sda -d 3ware,4 -a -m npg-admins@physics.unh.edu -M test
 #/dev/sda -d 3ware,5 -a -m npg-admins@physics.unh.edu -M test
 #/dev/sda -d 3ware,6 -a -m npg-admins@physics.unh.edu -M test
 #/dev/sda -d 3ware,7 -a -m npg-admins@physics.unh.edu -M test
 # Email all (-a) the information gathered for each drive
 /dev/sda -d 3ware,0 -a -m npg-admins@physics.unh.edu
 /dev/sda -d 3ware,1 -a -m npg-admins@physics.unh.edu
 /dev/sda -d 3ware,2 -a -m npg-admins@physics.unh.edu
 /dev/sda -d 3ware,3 -a -m npg-admins@physics.unh.edu
 /dev/sda -d 3ware,4 -a -m npg-admins@physics.unh.edu
 /dev/sda -d 3ware,5 -a -m npg-admins@physics.unh.edu
 /dev/sda -d 3ware,6 -a -m npg-admins@physics.unh.edu
 /dev/sda -d 3ware,7 -a -m npg-admins@physics.unh.edu
 # Does a long test on the drives on the 3ware card,
 # scheduled on Saturday at the specified (military) times.
 /dev/sda -d 3ware,0 -a -s L/../../7/01
 /dev/sda -d 3ware,1 -a -s L/../../7/03
 /dev/sda -d 3ware,2 -a -s L/../../7/05
 /dev/sda -d 3ware,3 -a -s L/../../7/07
 /dev/sda -d 3ware,4 -a -s L/../../7/09
 /dev/sda -d 3ware,5 -a -s L/../../7/11
 /dev/sda -d 3ware,6 -a -s L/../../7/13
 /dev/sda -d 3ware,7 -a -s L/../../7/15
 # HERE IS A LIST OF DIRECTIVES FOR THIS CONFIGURATION FILE.
 # PLEASE SEE THE smartd.conf MAN PAGE FOR DETAILS
 #
 #   -d TYPE Set the device type: ata, scsi, removable, 3ware,N
 #   -T TYPE Set the tolerance to one of: normal, permissive
 #   -o VAL  Enable/disable automatic offline tests (on/off)
 #   -S VAL  Enable/disable attribute autosave (on/off)
 #   -n MODE No check. MODE is one of: never, sleep, standby, idle
 #   -H      Monitor SMART Health Status, report if failed
 #   -l TYPE Monitor SMART log. Type is one of: error, selftest
 #   -f      Monitor for failure of any 'Usage' Attributes
 #   -m ADD  Send warning email to ADD for -H, -l error, -l selftest, and -f
 #   -M TYPE Modify email warning behavior (see man page)
 #   -s REGE Start self-test when type/date matches regular expression (see man page)
 #   -p      Report changes in 'Prefailure' Normalized Attributes
 #   -u      Report changes in 'Usage' Normalized Attributes
 #   -t      Equivalent to -p and -u Directives
 #   -r ID   Also report Raw values of Attribute ID with -p, -u or -t
 #   -R ID   Track changes in Attribute ID Raw value with -p, -u or -t
 #   -i ID   Ignore Attribute ID for -f Directive
 #   -I ID   Ignore Attribute ID for -p, -u or -t Directive
 #   -C ID   Report if Current Pending Sector count non-zero
 #   -U ID   Report if Offline Uncorrectable count non-zero
 #   -v N,ST Modifies labeling of Attribute N (see man page)
 #   -a      Default: equivalent to -H -f -t -l error -l selftest -C 197 -U 198
 #   -F TYPE Use firmware bug workaround. Type is one of: none, samsung
 #   -P TYPE Drive-specific presets: use, ignore, show, showall
 #   #       Comment: text after a hash sign is ignored
 #   \       Line continuation character
 # Attribute ID is a decimal integer 1 <= ID <= 255
 # except for -C and -U, where ID = 0 turns them off.
 # All but -d, -m and -M Directives are only implemented for ATA devices
 #
 # If the test string DEVICESCAN is the first uncommented text
 # then smartd will scan for devices /dev/hd[a-l] and /dev/sd[a-z]
 # DEVICESCAN may be followed by any desired Directives.

===Transana Survival Guide 2013-08-24===
====Setup Instructions for Mac OS X Network Servers====
The first step is to install MySQL. Transana 2.4-MU requires MySQL 4.1.x or later. We have tested Transana 2.4-MU with a variety of MySQL versions on a variety of operating systems without difficulty, but we are unable to test all possible combinations. Please note that MySQL 4.0.x does not support the UTF-8 character set, so it should not be used with Transana 2.4-MU.

====Install MySQL====
Follow these directions to set up MySQL.
#Download the "Max" version of MySQL for Mac OS X, not the "Standard" version. It is available at http://www.mysql.com.
#You probably want to download and install the MySQL GUI Tools as well. The MySQL Administrator is the easiest way to create and manage user accounts, in my opinion.
#Install MySQL from the Disk Image file. Follow the on-screen instructions. Be sure to assign a password to the root user account. (This prevents unauthorized access to your MySQL database by anyone who knows about this potential security hole.)
#You need to set the value of the "max_allowed_packet" variable to at least 8,388,608.

NOTE: The extensive MySQL documentation available on the MySQL web site can help you make sense of the rest of these instructions. We strongly recommend you familiarize yourself with the MySQL Manual, as it can answer many of your questions.

On OS X 10.5.8, using MySQL 4.1.22 on one computer and MySQL 5.0.83 on another, I edited the file /etc/my.cnf so that it included the following lines:
 [mysqld]
 lower_case_table_names=1
 max_allowed_packet=8500000
This should work for MySQL 5.1 as well.

Using MySQL 4.1.14-max on OS X 10.3, I edited the "my.cnf" file in /etc, adding the following line to the [mysqld] section:
 set-variable=max_allowed_packet=8500000

Exactly what you do may differ, of course.

====Setup MySQL User Accounts====
Here's what I do. It's the easiest way I've found to manage databases and accounts while maintaining database security. You are, of course, free to manage MySQL however you choose.

I have downloaded and installed the MySQL GUI Tools from the MySQL web site. These tools work nicely to manage databases and user accounts, as well as to manipulate data in MySQL tables. The tools have minor differences on different platforms, so the following directions are necessarily a bit vague on the details.

First I use the MySQL Administrator tool to create databases (called "catalogs" and "schemas" in the tool). Go to the "Catalogs" page and choose to create a new "schema."

Second, still within the MySQL Administrator tool, I go to the Accounts page. I create a new user account, filling in (at least) the User Name and Password fields on the General tab. I then go to the Schema Privileges tab, select a user account (in some versions, you select a host, usually "%", under the user account; in others you select the user account itself) and a specific database (schema), then assign specific privileges. I generally assign all privileges except "Grant", but you may choose to try a smaller subset if you wish. The "Select," "Insert," "Update," "Delete," "Create," and "Alter" privileges are all required. You may assign privileges to multiple databases for a single user account if you wish. Once I'm done setting privileges, I save or apply the settings and move on to the next user account.

I have chosen to give my own user account "God-like" privileges within MySQL so that I can look at and manipulate all data in all databases without having to assign myself specific privileges. This also allows me to create new Transana databases from within Transana-MU rather than having to run the MySQL Administrator. To accomplish this, I used the MySQL Query tool to go into MySQL's "mysql" database and edit my user account's entry in the "users" table to give my account global privileges. Please note that this is NOT a "best practice" or a recommendation, and is not even a good idea for most users. I mention it here, however, as I know some users will want to do this.

These instructions are not meant to be detailed or comprehensive. They are intended only to help people get started with Transana-MU. Please see the documentation on the MySQL site for more information on manipulating databases, user accounts, and privileges.

====Set up the Transana Message Server====
Once you've set up MySQL user accounts, you should set up version 2.40 of the Transana Message Server. It does not need to be on the same server as MySQL, though it may be.

Follow these directions to set up the Message Server.
#If your server is running an earlier version of the Transana Message Server, you need to remove the old Message Server before installing the new one. See the Transana Message Server 2.40 Upgrade guide.
#Download TransanaMessageServer240Mac.dmg from the Transana web site. The link to the download page is in your Transana-MU Purchase Receipt e-mail.
#Install it on the server.
#If you want the Transana Message Server to start automatically when the server starts up, follow these instructions:
#*#Open a Terminal window. Type su to work as a superuser with the necessary privileges.
#*#In your /Library/StartupItems folder (NOT your /HOME/Library, but the root /Library folder), create a subfolder called TransanaMessageServer.
#*#Enter the following (single line) command:
#*#*cp /Applications/TransanaMessageServer/StartupItems/* /Library/StartupItems/TransanaMessageServer
#*#*This will copy the files necessary for the Transana Message Server to auto-start.
#*#Reboot your computer now so that the Transana Message Server will start. Alternately, you can start the Transana Message Server manually, just this once, to avoid rebooting.
#If you want to start the Message Server manually for testing purposes, you will need to type the following (single line) command into a Terminal window:
#*sudo python /Applications/TransanaMessageServer/MessageServer.py

====Configure the Firewall====
If you will have Transana-MU users connecting to the MySQL and Transana Message Server instances you just set up from outside the network, you need to make sure port 3306 for MySQL and port 17595 for the Transana Message Server are accessible from outside the network. This will probably require explicitly configuring your firewall software to allow traffic through to these ports. Consult your firewall software's documentation to learn how to do this.

====Creating a Shared Network Volume for Video Storage====
Finally, you must create a shared network volume where users can store any video that will be shared with all Transana-MU users. Be sure to allocate sufficient disk space for all necessary video files. This volume may be on your Mac server or on another computer, but it must be accessible to all Transana-MU users on your network.

If you will have Transana-MU users connecting to the MySQL and Transana Message Server instances you just set up from outside the network, they will need to set up their own parallel Video Storage volumes.

====Now configure the client computers====
Each user will need the following information to connect to the server programs you have just set up:
*A username and password. (Don't create a single user account for users to share. The analytic process flows more smoothly when users can tell who else is interacting with the data, who has locked a record, and so on.)
*The DSN or IP address of the MySQL Server computer.
*The name of the database set up for the project.
*The DSN or IP address of the Transana Message Server computer, if different from the MySQL Server computer.
*Instructions on how to connect to the local network's common video storage folder.

Once you have this information, you are ready to start setting up client computers for the members of the project.

====Setup with MySQL Workbench====
Here's what I do. I don't know if it's the optimum way, but it works for me. You are, of course, free to manage MySQL however you choose.
I have downloaded and installed the MySQL Workbench tool from the MySQL web site. This tool works nicely to manage databases (called schemas in Workbench) and user accounts, as well as to manipulate data in MySQL tables. The tool has minor differences on different platforms, so the following directions are necessarily a bit vague on the details. These directions assume you have already defined your Server Instance in MySQL Workbench.

If I need to create a new database for Transana, I start at the MySQL Workbench Home screen. On the left side, under "Open Connection to Start Querying", I double-click the connection for the server I want to use and enter my password. This takes me to the SQL Editor. In the top toolbar, I select the "Create a new schema in the connected server" icon, which for me is the 3rd one. I give my schema (my Transana database) a name (avoiding spaces in the name), and press the Apply button. The "Apply SQL Script to Database" window pops up, and I again press the Apply button, followed by pressing the "Finish" button when the script is done executing. My Transana database now exists, so I close the SQL Editor tab and move on to adding or authorizing user accounts.

Each person using Transana-MU needs a unique user account with permission to access the Transana database. To create user accounts for an existing database, I start at the MySQL Workbench Home screen. On the right side, under "Server Administration", I double-click the server instance I want to work with and enter my password. Under "Security", I select "Users and Privileges". At the bottom of the screen, I press the "Add Account" button, then provide a Login Name and password. I then press the Apply button to create the user account. After that, I go to the Schema Privileges tab, select the user account I just created, and click the "Add Entry..." button on the right-hand side of the screen. This pops up the New Schema Privilege Definition window. I select Any Host under hosts, and Selected Schema, followed by the name of the database I want to provide access to. Once this is done, the bottom section of the screen, where privileges are managed, will be enabled. Click the "Select ALL" button. Make sure that the "GRANT" or "GRANT OPTION" right is NOT checked, then press the "Save Changes" button.

These instructions are not meant to be detailed or comprehensive. They are intended only to help people get started with Transana-MU. Please see the documentation on the MySQL site for more information on manipulating databases, user accounts, and privileges.

====Appendix A: Resetting MySQL Root Password====
http://dev.mysql.com/doc/refman/5.0/en/resetting-permissions.html

On Unix, use the following procedure to reset the password for all MySQL root accounts. The instructions assume that you will start the server so that it runs using the Unix login account that you normally use for running the server. For example, if you run the server using the mysql login account, you should log in as mysql before using the instructions. Alternatively, you can log in as root, but in this case you must start mysqld with the --user=mysql option. If you start the server as root without using --user=mysql, the server may create root-owned files in the data directory, such as log files, and these may cause permission-related problems for future server startups. If that happens, you will need to either change the ownership of the files to mysql or remove them.
#Log on to your system as the Unix user that the mysqld server runs as (for example, mysql).
#Locate the .pid file that contains the server's process ID. The exact location and name of this file depend on your distribution, host name, and configuration. Common locations are /var/lib/mysql/, /var/run/mysqld/, and /usr/local/mysql/data/. Generally, the file name has an extension of .pid and begins with either mysqld or your system's host name.
#Stop the MySQL server by sending a normal kill (not kill -9) to the mysqld process, using the path name of the .pid file: kill `cat /mysql-data-directory/host_name.pid` — use backticks (not forward quotation marks) with the cat command; these cause the output of cat to be substituted into the kill command.
#Create a text file containing the following statements, each on a single line, replacing the password with the password that you want to use: UPDATE mysql.user SET Password=PASSWORD('MyNewPass') WHERE User='root'; FLUSH PRIVILEGES; The UPDATE statement resets the password for all root accounts, and the FLUSH statement tells the server to reload the grant tables into memory so that it notices the password change.
#Save the file. For this example, the file will be named /home/me/mysql-init. The file contains the password, so it should not be saved where it can be read by other users. If you are not logged in as mysql (the user the server runs as), make sure that the file has permissions that permit mysql to read it.
#Start the MySQL server with the special --init-file option: mysqld_safe --init-file=/home/me/mysql-init & The server executes the contents of the file named by the --init-file option at startup, changing each root account password.
#After the server has started successfully, delete /home/me/mysql-init.

You should now be able to connect to the MySQL server as root using the new password. Stop the server and restart it normally.

===rc3.d 2010-01-16===
 K00ipmievd
 K01dnsmasq
 K02avahi-dnsconfd
 K02NetworkManager
 K05conman
 K05saslauthd
 K05wdaemon
 K10dc_server
 K10psacct
 K12dc_client
 K15httpd
 K24irda
 K25squid
 K30spamassassin
 K34yppasswdd
 K35dhcpd
 K35dhcrelay
 K35dovecot
 K35vncserver
 K35winbind
 K36lisa
 K50netconsole
 K50tux
 K69rpcsvcgssd
 K73ypbind
 K74ipmi
 K74nscd
 K74ntpd
 K74ypserv
 K74ypxfrd
 K80kdump
 K85mdmpd
 K87multipathd
 K88wpa_supplicant
 K89dund
 K89hidd
 K89netplugd
 K89pand
 K89rdisc
 K90bluetooth
 K91capi
 K91isdn
 K99readahead_later
 S00microcode_ctl
 S02lvm2-monitor
 S04readahead_early
 S05kudzu
 S06cpuspeed
 S08ip6tables
 S08iptables
 S08mcstrans
 S10network
 S11auditd
 S12restorecond
 S12syslog
 S13irqbalance
 S13portmap
 S14nfslock
 S15mdmonitor
 S18rpcidmapd
 S19nfs
 S19rpcgssd
 S20vmware
 S22messagebus
 S23setroubleshoot
 S25netfs
 S25pcscd
 S26acpid
 S26lm_sensors
 S28autofs
 S29iptables-netgroups
 S50hplip
 S55sshd
 S56cups
 S56rawdevices
 S56xinetd
 S60apcupsd
 S80sendmail
 S85arecaweb
 S85gpm
 S90crond
 S90splunk
 S90xfs
 S95anacron
 S95atd
 S97rhnsd
 S97yum-updatesd
 S98avahi-daemon
 S98haldaemon
 S99denyhosts
 S99firstboot
 S99local
 S99smartd

==Network==

===Network Manager: Fedora 17 Static IP 2013-03-12===
First disable the GNOME NetworkManager from starting up:
 systemctl stop NetworkManager.service
 systemctl disable NetworkManager.service
Now start the network service and set it to run on boot:
 systemctl restart network.service
 systemctl enable network.service
Check which interface(s) you want to set to static:
 [root@server ~]# ifconfig
 em1: flags=4163  mtu 1500
         inet 192.168.1.148  netmask 255.255.255.0  broadcast 192.168.1.255
         inet6 fe80::dad3:85ff:feae:dd4c  prefixlen 64  scopeid 0x20
         ether d8:d3:85:ae:dd:4c  txqueuelen 1000  (Ethernet)
         RX packets 929  bytes 90374 (88.2 KiB)
         RX errors 0  dropped 0  overruns 0  frame 0
         TX packets 1010  bytes 130252 (127.1 KiB)
         TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
         device interrupt 19
 
 lo: flags=73  mtu 16436
         inet 127.0.0.1  netmask 255.0.0.0
         inet6 ::1  prefixlen 128  scopeid 0x10
         loop  txqueuelen 0  (Local Loopback)
         RX packets 32  bytes 3210 (3.1 KiB)
         RX errors 0  dropped 0  overruns 0  frame 0
         TX packets 32  bytes 3210 (3.1 KiB)
         TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
Now you will need to edit the config file for that interface:
 vi /etc/sysconfig/network-scripts/ifcfg-em1
Edit the config to look like so. You will need to change BOOTPROTO from dhcp to static and add the IPADDR, NETMASK, BROADCAST and NETWORK variables. Also make sure ONBOOT is set to yes.
 UUID="e88f1292-1f87-4576-97aa-bb8b2be34bd3"
 NM_CONTROLLED="yes"
 HWADDR="D8:D3:85:AE:DD:4C"
 BOOTPROTO="static"
 DEVICE="em1"
 ONBOOT="yes"
 IPADDR=192.168.1.2
 NETMASK=255.255.255.0
 BROADCAST=192.168.1.255
 NETWORK=192.168.1.0
 GATEWAY=192.168.1.1
Now, to apply the settings, restart the network service:
 systemctl restart network.service

===Network Monitoring Tools 2009-05-29===
The tools best used for traffic monitoring (these are in the CentOS repo):
 wireshark
 iptraf
 ntop
 tcpdump
Other products found:
 vnStat
 bwm-ng

==Misc==

===denyhosts-undeny.py 2013-05-31===
 #!/usr/bin/env python
 import os
 import sys
 
 #The only argument should be the host to undeny
 try:
     goodhost = sys.argv[1]
 except:
     print "Please specify a host to undeny!"
     sys.exit(1)
 #These commands start/stop denyhosts. Set these as appropriate for your system.
 stopcommand = '/etc/init.d/denyhosts stop'
 startcommand = '/etc/init.d/denyhosts start'
 
 #Check to see what distribution we're using.
 distrocheckcommand = "awk '// {print $1}' /etc/redhat-release"
 d = os.popen(distrocheckcommand)
 distro = d.read()
 distro = distro.rstrip('\n')
 
 #Check to see what user we're being run as.
 usercheckcommand = "whoami"
 u = os.popen(usercheckcommand)
 user = u.read()
 user = user.rstrip('\n')
 if user != 'root':
     print "Sorry, this script requires root privileges."
     sys.exit(1)
 
 #The files we should be purging faulty denials from.
 if distro == 'Red':
     filestoclean = ['/etc/hosts.deny','/var/lib/denyhosts/hosts-restricted','/var/lib/denyhosts/sync-hosts','/var/lib/denyhosts/suspicious-logins']
 elif distro == 'CentOS':
     filestoclean = ['/etc/hosts.deny','/usr/share/denyhosts/data/hosts-restricted','/usr/share/denyhosts/data/sync-hosts','/usr/share/denyhosts/data/suspicious-logins']
 elif distro == 'Fedora':
     print "This script is not yet supported on Fedora systems!"
     sys.exit(1)
 else:
     print "This script is not yet supported on your distribution, or I can't properly detect it."
     sys.exit(1)
 
 #Stop denyhosts so that we don't get any confusion.
 os.system(stopcommand)
 
 #Let's now remove the faulty denials.
 for targetfile in filestoclean:
     purgecommand = "sed -i '/" + goodhost + "/ d' " + targetfile
     os.system(purgecommand)
 
 #Now that the faulty denials have been removed, it's safe to restart denyhosts.
 os.system(startcommand)
 sys.exit(0)
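The script above shells out to sed to drop every line mentioning the host. The same purge step can be written in pure Python; this is only a sketch (Python 3, and the `purge_host` name is mine, not part of denyhosts):

```python
# Sketch of the purge step from denyhosts-undeny.py without shelling
# out to sed: drop every line that mentions the host, then rewrite the
# file in place. The file path is whatever you pass in.
def purge_host(path, goodhost):
    """Remove all lines containing `goodhost` from the file at `path`."""
    with open(path) as f:
        lines = f.readlines()
    kept = [line for line in lines if goodhost not in line]
    with open(path, "w") as f:
        f.writelines(kept)
    # Report how many denial lines were dropped.
    return len(lines) - len(kept)

if __name__ == "__main__":
    import os
    import tempfile
    # Tiny demonstration with a throwaway hosts.deny-style file.
    tmp = tempfile.NamedTemporaryFile("w", delete=False, suffix=".deny")
    tmp.write("sshd: 1.2.3.4\nsshd: 5.6.7.8\nsshd: 1.2.3.4\n")
    tmp.close()
    removed = purge_host(tmp.name, "1.2.3.4")
    print(removed)  # number of lines purged
    os.unlink(tmp.name)
```

Note that, unlike the sed version, a substring match like this also removes lines where the host appears only as part of a longer address, so it inherits the same caveat.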
  
===Temperature Shutdown Procedure 2009-06-06===
Room temp greater than 25C:
*If the outdoor temp is lower than indoor, open the windows.
*Shut down any unused workstations.
*Shut down any workstations being built or configured.

Room temp greater than 28C:
*Follow the procedure for >25C.
*Write an email to npg-users@physics.unh.edu:
 Subject: Systems being shut down due to temperature
 
 Body: Due to high temperatures in the server room, we will be performing an emergency shutdown on the following servers: gourd, pepper, and tomato. These systems will be going offline in the next 10 minutes, so please save your work immediately. We apologize for any inconvenience.
 
 Thank you,
 (Your name)
*Wait 10 minutes. If the temperature is still too high, shut down gourd, pepper, and tomato.

Room temp greater than 30C:
*Follow the procedures for >25C, then >28C.
*Wait 10 minutes after shutting down gourd, pepper, and tomato.
*If temperatures are still greater than 30C, write an email to npg-users@physics.unh.edu:
 Subject: Systems being shut down due to temperature
 
 Body: Due to high temperatures in the server room, we will be performing an emergency shutdown on the following server: endeavour. This system will be going offline in the next 10 minutes, so please save your work immediately. We apologize for any inconvenience.
*Wait 10 minutes. If the temperature is still too high, shut down endeavour.
*Wait 5 minutes. If the temperature is still too high, write an email to npg-users@physics.unh.edu:
 Subject: Systems being shut down due to temperature
 
 Body: Due to high temperatures in the server room, we will be performing an emergency shutdown on the following servers: taro and pumpkin. These systems will be going offline in the next 10 minutes, so please save your work immediately. We apologize for any inconvenience.
*Wait 10 minutes. If the temperature is still too high, shut down taro and pumpkin.
*Wait 5 minutes. If the temperature is still too high, shut down lentil.

Room temp greater than 35C:
*Immediately shut down all workstations, gourd, pepper, tomato, lentil, endeavour, taro, and pumpkin, in that order. If the temperature drops to another category, follow the instructions for that category.
*Wait 5 minutes. If the temperature is still too high, send an email to npg-users@physics.unh.edu:
 Subject: Critical temperatures in server room
 
 Body: The server room temperatures are now critical. In order to avoid hardware damage, we have performed an emergency shutdown of all servers, and the mail server will be shut down shortly. We apologize for any inconvenience.
*Wait 15 minutes so that people can get your notification.
*If the temperature has not dropped below 35C, shut down einstein.

==Taro==
==Lentil==
==Pumpkin==
==Endeavour==
===Yum Problems 2012-10-11===
 libsdp.x86_64
 libsdp-devel.x86_64

Journal of process: install both libsdp (i386 and x86_64) and libxml2 from RPM.

There is still a seg fault when yum tries to read the primary.xml; this is seen when I run strace yum check-update.
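The tiers in the Temperature Shutdown Procedure above can be summarized as a small lookup. This is only an illustration of the thresholds; the function name and the wording of the labels are mine:

```python
# Sketch: map a server-room temperature reading to the shutdown tier
# from the Temperature Shutdown Procedure. The thresholds come from
# the procedure; the function itself is just an illustration.
def shutdown_tier(temp_c):
    """Return which shutdown tier applies at temp_c (degrees C)."""
    if temp_c > 35:
        return "shut down everything, einstein last"
    if temp_c > 30:
        return "shut down endeavour, then taro/pumpkin, then lentil"
    if temp_c > 28:
        return "shut down gourd, pepper, and tomato"
    if temp_c > 25:
        return "open windows, shut down idle workstations"
    return "no action"

print(shutdown_tier(26))  # open windows, shut down idle workstations
print(shutdown_tier(36))  # shut down everything, einstein last
```

Each tier also implies the waiting and email steps listed in the procedure; only the escalation thresholds are captured here.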
===NUT UPS 2009-05-22===
I am trying to get NUT (Network UPS Tools) working on gourd.

Initial install: create the group nut and the user ups, then build:
 ./configure --with-user=ups --with-group=nut --with-usb
 make
 sudo -s
 make install
If you want to know the build path info, use these commands:
 make DESTDIR=/tmp/package install
 make DESTDIR=/tmp/package install-conf
Create the directory to be used by the ups user:
 mkdir -p /var/state/ups
 chmod 0770 /var/state/ups
 chown root:nut /var/state/ups
To set up the correct permissions for the USB device, you may need to set up (operating system dependent) hotplugging scripts. Sample scripts and information are provided in the scripts/hotplug and scripts/udev directories. For most users, the hotplugging scripts will be installed automatically by "make install".

Go to /usr/local/ups/etc/ups.conf and add the following lines (with the usbhid-ups driver the port field is ignored):
 [myupsname]
 driver = mydriver
 port = /dev/ttyS1
 desc = "Workstation"
Start the drivers for the hardware:
 /usr/local/ups/bin/upsdrvctl -u root start
or you can use:
 /usr/local/ups/bin/upsdrvctl -u ups start
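Once the driver and upsd are running, NUT's upsc client prints the UPS state as plain "key: value" lines (e.g. from `upsc myupsname@localhost`). A sketch of parsing that output in Python — the sample text below is illustrative, not captured from gourd:

```python
# Sketch: parse the "key: value" lines that NUT's upsc tool prints
# into a dict for easy lookup in monitoring scripts.
def parse_upsc(output):
    """Turn upsc's 'key: value' lines into a dict of strings."""
    status = {}
    for line in output.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            status[key.strip()] = value.strip()
    return status

# Illustrative sample of upsc-style output.
sample = """battery.charge: 100
ups.status: OL
ups.load: 23"""
info = parse_upsc(sample)
print(info["ups.status"])  # OL
```

In a real check you would feed in the output of subprocess.check_output(["upsc", "myupsname@localhost"]) instead of the sample string.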
===Fedora 11 Root Login 2009-08-11===
This is how to log in as root in Fedora 11; GUI login as root is disabled by default, allowing only terminal access.

First:
 cd /etc/pam.d/
Then find the
 gdm
 gdm-password
 gdm-fingerprint
files and comment out or remove the line that says:
 auth required pam_succeed_if.so user != root quiet
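The PAM edit above can also be scripted. A sketch (Python 3; the `allow_root` name is mine) that comments out the blocking rule instead of deleting it, so the change is easy to revert:

```python
# Sketch: comment out the pam_succeed_if rule that blocks root logins
# in a gdm PAM file. Run against /etc/pam.d/gdm, gdm-password, and
# gdm-fingerprint, as root, on the machine itself.
def allow_root(path):
    """Comment out the 'user != root' pam_succeed_if line in `path`."""
    with open(path) as f:
        lines = f.readlines()
    out = []
    for line in lines:
        if ("pam_succeed_if.so" in line and "user != root" in line
                and not line.lstrip().startswith("#")):
            out.append("#" + line)  # disable the rule, keep it visible
        else:
            out.append(line)
    with open(path, "w") as f:
        f.writelines(out)

# Usage (on a Fedora 11 box, as root):
# for f in ("gdm", "gdm-password", "gdm-fingerprint"):
#     allow_root("/etc/pam.d/" + f)
```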
  
===hardinfo.sh 2009-09-15===
 #!/bin/bash
 #
 hardinfo -r -m devices.so -m network.so -m computer.so -f text > hardinfo.`echo $HOSTNAME`
 mail -s "$HOSTNAME hardinfo file" Tbow@physics.unh.edu < hardinfo.`echo $HOSTNAME`

===DNS Setup 2009-10-15===
Things to think about when setting up DNS in a small VM:
#IP address, and changing our FQDN address on the root server
#Change resolv.conf on all clients to point to the new DNS address
#Give the VM a domain name (probably compton)

===Automounting for Macs 2013-06-02===
I have attached the relevant files for automounting the NFS directories on a Mac. Drop these in /etc/ and then reload the automount maps with:
 automount -vc
Please note that with these automount maps we break some of the (unused) automatic mounting functionality that the Mac tries to have, since we overwrite the /net entry.

OLD WAY OF DOING THIS UNDER 10.4

For the 10.4 folks, here are the brief instructions on doing automount on 10.4. This also works with 10.5 and 10.6 but is cumbersome. Please note that a laptop does not have a static IP and will thus never be allowed past the firewall!

The Mac OS X automounter is configured with NetInfo.

Create a new entry under the "mounts" subdirectory.

Name the entry "servername:/dir"

Add properties:
 "dir"     "/mnt/einstein"  (Directory where to mount)
 "opts"    "resvport"       (Mount options)
 "vfstype" "nfs"            (Type of mount)

Notify the automounter:
 kill -1 `cat /var/run/automount.pid`

Note: The new directory is a special link to /automount/static/mnt/einstein
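The DNS checklist above includes repointing every client's resolv.conf. A small sketch of checking which nameservers a client currently uses (the sample file contents here are invented for illustration):

```python
# Sketch: pull the nameserver entries out of a resolv.conf-style file.
# The sample text below is made up for illustration.
def nameservers(text):
    out = []
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()   # drop comments
        if line.startswith("nameserver"):
            out.append(line.split()[1])
    return out

sample = """\
# generated by hand
search unh.edu
nameserver 10.0.0.5
nameserver 132.177.88.75  # fallback
"""
print(nameservers(sample))
```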
  
==Einstein==

====auto_data====
 pumpkin1   pumpkin.unh.edu:/data1
 pumpkin2   pumpkin.unh.edu:/data2
 pumpkin    pumpkin.unh.edu:/data1
 gourd      gourd.unh.edu:/data
 pepper     pepper.unh.edu:/data
 taro       taro.unh.edu:/data
 tomato     tomato.unh.edu:/data
 endeavour  endeavour.unh.edu:/data1
 endeavour1 endeavour.unh.edu:/data1
 endeavour2 endeavour.unh.edu:/data2
 einsteinVM einstein.unh.edu:/dataVM
 VM         einstein.unh.edu:/dataVM

====auto_home====
 #
 # Automounter map for /home
 #
 +auto_home # Use directory service

====auto_home_nfs====
 #
 # Automatic NFS home directories from Einstein.
 #
 *          einstein.unh.edu:/home/&
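The `*`/`&` wildcard map above expands each requested key into a server path: the key matched by `*` is substituted for every `&` in the location. A rough sketch of that substitution (a simplification of what autofs actually does):

```python
# Sketch of autofs wildcard-map expansion: the key matched by "*" is
# substituted for every "&" in the map location. Illustration only.
def expand(key, location="einstein.unh.edu:/home/&"):
    return location.replace("&", key)

print(expand("tbow"))  # einstein.unh.edu:/home/tbow
```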

====auto_master====
 #
 # Automounter master map
 #
 +auto_master # Use directory service
 #/net -hosts -nobrowse,hidefromfinder,nosuid
 #/home auto_home -nobrowse,hidefromfinder
 /Network/Servers -fstab
 /- -static
 #
 # UNH:
 #
 /net/home auto_home_nfs -nobrowse,resvport,intr,soft
 #/net/data auto_data -nobrowse,resvport,intr,soft,locallocks,rsize=32768,wsize=32768
 /net/data auto_data -nobrowse,resvport,intr,soft,rsize=32768,wsize=32768
 /net/www auto_www -nobrowse,resvport,intr,soft

====auto_www====
 nuclear         roentgen.unh.edu:/var/www/nuclear
 physics         roentgen.unh.edu:/var/www/physics
 theory          roentgen.unh.edu:/var/www/theory
 einstein        einstein.unh.edu:/var/www/einstein
 personal_pages  roentgen.unh.edu:/var/www/personal_pages

=Hosts=
These are hosts that I have worked on. They may no longer carry the same services; this is a log, not a reflection of the current state.

==Gourd==

===Network Config 2012-11-05===

====ifcfg-farm====
 DEVICE=eth0
 ONBOOT=yes
 HWADDR=00:30:48:ce:e2:38
 BRIDGE=farmbr

====ifcfg-farmbr====
 ONBOOT=yes
 TYPE=bridge
 DEVICE=farmbr
 BOOTPROTO=static
 IPADDR=10.0.0.252
 NETMASK=255.255.0.0
 GATEWAY=10.0.0.1
 NM_CONTROLLED=no
 DELAY=0

====ifcfg-farmbr:1====
 ONBOOT=yes
 TYPE=Ethernet
 DEVICE=farmbr:1
 BOOTPROTO=static
 IPADDR=10.0.0.240
 NETMASK=255.255.0.0
 GATEWAY=10.0.0.1
 NM_CONTROLLED=no
 ONPARENT=yes

====ifcfg-unh====
 DEVICE=eth1
 ONBOOT=yes
 HWADDR=00:30:48:ce:e2:39
 BRIDGE=unhbr

====ifcfg-unhbr====
 ONBOOT=yes
 TYPE=bridge
 DEVICE=unhbr
 BOOTPROTO=static
 IPADDR=132.177.88.75
 NETMASK=255.255.252.0
 GATEWAY=132.177.88.1
 NM_CONTROLLED=no
 DELAY=0

====ifcfg-unhbr:1====
 ONBOOT=yes
 TYPE=Ethernet
 DEVICE=unhbr:1
 BOOTPROTO=static
 IPADDR=132.177.91.210
 NETMASK=255.255.252.0
 GATEWAY=132.177.88.1
 NM_CONTROLLED=no
 ONPARENT=yes

===rc.local 2009-05-20===
 #!/bin/sh
 #
 # This script will be executed *after* all the other init scripts.
 # You can put your own initialization stuff in here if you don't
 # want to do the full Sys V style init stuff.
 touch /var/lock/subsys/local
 # This will send an email to the npg-admins at startup with the hostname and the boot.log file
 mail -s "$HOSTNAME Started, Here is the boot.log" npg-admins@physics.unh.edu < /var/log/boot.log
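Typos in these netmask lines are easy to miss in ifcfg files. A small sanity check using the Python standard library, with the addresses taken from the configs above, derives the network from an IP/netmask pair:

```python
# Sanity-check an IP/netmask pair from an ifcfg file using the stdlib.
import ipaddress

# ifcfg-unhbr: IPADDR=132.177.88.75, NETMASK=255.255.252.0
iface = ipaddress.ip_interface("132.177.88.75/255.255.252.0")
print(iface.network)  # 132.177.88.0/22
print(iface.network.broadcast_address)
```

Note that 132.177.91.210 (unhbr:1) falls inside the same /22, which is consistent with both aliases sharing the 132.177.88.1 gateway.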
 
===Yum 2009-05-21===
Fixing yum on gourd.

In order to get RHN support (repo files) you must download and install, off the RHN network,
 yum-rhn-plugin
and then got these errors:
 [Tbow@gourd ~]$ sudo rpm -i Desktop/documentation-notes/downloads/yum-rhn-plugin-0.5.3-30.el5.noarch.rpm
 Password:
 warning: Desktop/documentation-notes/downloads/yum-rhn-plugin-0.5.3-30.el5.noarch.rpm: V3 DSA signature: NOKEY, key ID 37017186
 error: Failed dependencies:
         rhn-client-tools >= 0.4.19-9 is needed by yum-rhn-plugin-0.5.3-30.el5.noarch
         rhn-setup is needed by yum-rhn-plugin-0.5.3-30.el5.noarch
         yum >= 3.2.19-15 is needed by yum-rhn-plugin-0.5.3-30.el5.noarch
 [Tbow@gourd nut-2.4.1]$ less /proc/version|grep Linux
 Linux version 2.6.9-67.0.15.EL (brewbuilder@hs20-bc2-2.build.redhat.com) (gcc version 3.4.6  20060404 (Red Hat 3.4.6-9)) #1 Tue Apr 22 13:42:17 EDT 2008
When I tried installing the package for el3 this came up:
 [Tbow@gourd nut-2.4.1]$ sudo rpm -Uvh /yum-2.0.8-0.1.el3.rf.noarch.rpm
 Preparing...                ########################################### [100%]
 package yum-2.4.2-0.4.el4.rf (which is newer than yum-2.0.8-0.1.el3.rf) is already installed

Tried using --replacefiles, but it didn't work with this command; look into it:
 [Tbow@gourd nut-2.4.1]$ sudo rpm -U --replacefiles /yum-2.4.2-0.4.el4.rf.noarch.rpm
 package yum-2.4.2-0.4.el4.rf is already installed
Tried updating, then got this:
 [Tbow@gourd nut-2.4.1]$ sudo yum update
 Setting up Update Process
 Setting up repositories
 No Repositories Available to Set Up
 Reading repository metadata in from local files
 No Packages marked for Update/Obsoletion
Either go to the Red Hat Network website to find the repos.d/ files, or run:
 /usr/sbin/rhn_check
 /usr/sbin/rhn_register
Upgrade yum for RHEL 3. Old repository files are still on this system, so I will reinstall yum on this system.

===Xen to VMware Conversion 2009-06-23===
The transfer process:
 
#Shutdown the xen virtual machine and make a backup of the .img file.
 
#Make a tarball of roentgen's filesystem
 
#*This must be done as root
 
#*tar -cvf machine.tar /lib /lib64 /etc /usr /bin /sbin /var /root 
 
#Set up an identical OS (CentOS 5.3) on VMware Server.
 
#Mount the location of the tarball and extract it to /
 
#*Make sure to back up the original OS's /etc/ to /etc.bak/
 
#*tar -xvf machine.tar
 
  
Files to copy back over from /etc.bak/:
 /etc/sysconfig/network-scripts/ifcfg-*
 /etc/inittab
 /etc/fstab
 /etc/yum*
 /etc/X11*

Turn roentgen on to prepare for the rsync transfer.

Make sure to shut down all important services (httpd, mysqld, etc.).

Log on to roentgen as root and run the following command for each folder archived above:
 rsync -av --delete /src/(lib) newserver.unh.edu:/dest/(lib)>>rsync.(lib).log

Rsync options:
 --delete                   delete extraneous files from dest dirs
 -a, --archive              archive mode; equals -rlptgoD (no -H,-A,-X)
 --no-OPTION                turn off an implied OPTION (e.g. --no-D)

This tells us how to convert Xen to VMware:
#Download the current kernel for the xen virtual machine (not the xen kernel) and install it on the virtual machine.  This is done so that when the virtual machine is transitioned into a fully virtualized setup, it can boot a normal kernel, not the xen kernel.
#Shutdown roentgen to copy the image file to a backup for exporting
#Install qemu-img
#Run the following command:
#*qemu-img convert <source_xen_machine> -O vmdk <destination_vmware.vmdk>
#Now it boots, but it also kernel panics.

This was scratched; instead I made a tarball of roentgen's filesystem.

===smartd.conf 2009-05-20===
 # Home page is: http://smartmontools.sourceforge.net
 # $Id: smartd.conf,v 1.38 2004/09/07 12:46:33 ballen4705 Exp $
 # smartd will re-read the configuration file if it receives a HUP
 # signal
 # The file gives a list of devices to monitor using smartd, with one
 # device per line. Text after a hash (#) is ignored, and you may use
 # spaces and tabs for white space. You may use '\' to continue lines.
 # You can usually identify which hard disks are on your system by
 # looking in /proc/ide and in /proc/scsi.
 # The word DEVICESCAN will cause any remaining lines in this
 # configuration file to be ignored: it tells smartd to scan for all
 # ATA and SCSI devices.  DEVICESCAN may be followed by any of the
 # Directives listed below, which will be applied to all devices that
 # are found.  Most users should comment out DEVICESCAN and explicitly
 # list the devices that they wish to monitor.
 #DEVICESCAN
 # First (primary) ATA/IDE hard disk.  Monitor all attributes, enable
 # automatic online data collection, automatic Attribute autosave, and
 # start a short self-test every day between 2-3am, and a long self test
 # Saturdays between 3-4am.
 #/dev/hda -a -o on -S on -s (S/../.././02|L/../../6/03)
 # Monitor SMART status, ATA Error Log, Self-test log, and track
 # changes in all attributes except for attribute 194
 #/dev/hda -H -l error -l selftest -t -I 194
 # A very silent check.  Only report SMART health status if it fails
 # But send an email in this case
 #/dev/hda -H -m npg-admins@physics.unh.edu
 # First two SCSI disks.  This will monitor everything that smartd can
 # monitor.  Start extended self-tests Wednesdays between 6-7pm and
 # Sundays between 1-2 am
 #/dev/sda -d scsi -s L/../../3/18
 #/dev/sdb -d scsi -s L/../../7/01
 # Monitor 4 ATA disks connected to a 3ware 6/7/8000 controller which uses
 # the 3w-xxxx driver. Start long self-tests Sundays between 1-2, 2-3, 3-4,
 # and 4-5 am.
 # Note: one can also use the /dev/twe0 character device interface.
 #/dev/sdc -d 3ware,0 -a -s L/../../7/01
 #/dev/sdc -d 3ware,1 -a -s L/../../7/02
 #/dev/sdc -d 3ware,2 -a -s L/../../7/03
 #/dev/sdc -d 3ware,3 -a -s L/../../7/04
 # Monitor 2 ATA disks connected to a 3ware 9000 controller which uses
 # the 3w-9xxx driver. Start long self-tests Tuesdays between 1-2 and 3-4 am
 #/dev/sda -d 3ware,0 -a -s L/../../2/01
 #/dev/sda -d 3ware,1 -a -s L/../../2/03
 # Send quick test email at smartd startup
 #/dev/sda -d 3ware,0 -a -m npg-admins@physics.unh.edu -M test
 #/dev/sda -d 3ware,1 -a -m npg-admins@physics.unh.edu -M test
 #/dev/sda -d 3ware,2 -a -m npg-admins@physics.unh.edu -M test
 #/dev/sda -d 3ware,3 -a -m npg-admins@physics.unh.edu -M test
 #/dev/sda -d 3ware,4 -a -m npg-admins@physics.unh.edu -M test
 #/dev/sda -d 3ware,5 -a -m npg-admins@physics.unh.edu -M test
 #/dev/sda -d 3ware,6 -a -m npg-admins@physics.unh.edu -M test
 #/dev/sda -d 3ware,7 -a -m npg-admins@physics.unh.edu -M test
 # Email all (-a) the information gathered for each drive
 /dev/sda -d 3ware,0 -a -m npg-admins@physics.unh.edu
 /dev/sda -d 3ware,1 -a -m npg-admins@physics.unh.edu
 /dev/sda -d 3ware,2 -a -m npg-admins@physics.unh.edu
 /dev/sda -d 3ware,3 -a -m npg-admins@physics.unh.edu
 /dev/sda -d 3ware,4 -a -m npg-admins@physics.unh.edu
 /dev/sda -d 3ware,5 -a -m npg-admins@physics.unh.edu
 /dev/sda -d 3ware,6 -a -m npg-admins@physics.unh.edu
 /dev/sda -d 3ware,7 -a -m npg-admins@physics.unh.edu
 # Does a long test on all 12 drives on the 3ware card,
 # scheduled on Sunday at the specified (military) time.
 /dev/sda -d 3ware,0 -a -s L/../../7/01
 /dev/sda -d 3ware,1 -a -s L/../../7/03
 /dev/sda -d 3ware,2 -a -s L/../../7/05
 /dev/sda -d 3ware,3 -a -s L/../../7/07
 /dev/sda -d 3ware,4 -a -s L/../../7/09
 /dev/sda -d 3ware,5 -a -s L/../../7/11
 /dev/sda -d 3ware,6 -a -s L/../../7/13
 /dev/sda -d 3ware,7 -a -s L/../../7/15
 # HERE IS A LIST OF DIRECTIVES FOR THIS CONFIGURATION FILE.
 # PLEASE SEE THE smartd.conf MAN PAGE FOR DETAILS
 #
 #  -d TYPE Set the device type: ata, scsi, removable, 3ware,N
 #  -T TYPE set the tolerance to one of: normal, permissive
 #  -o VAL  Enable/disable automatic offline tests (on/off)
 #  -S VAL  Enable/disable attribute autosave (on/off)
 #  -n MODE No check. MODE is one of: never, sleep, standby, idle
 #  -H      Monitor SMART Health Status, report if failed
 #  -l TYPE Monitor SMART log.  Type is one of: error, selftest
 #  -f      Monitor for failure of any 'Usage' Attributes
 #  -m ADD  Send warning email to ADD for -H, -l error, -l selftest, and -f
 #  -M TYPE Modify email warning behavior (see man page)
 #  -s REGE Start self-test when type/date matches regular expression (see man page)
 #  -p      Report changes in 'Prefailure' Normalized Attributes
 #  -u      Report changes in 'Usage' Normalized Attributes
 #  -t      Equivalent to -p and -u Directives
 #  -r ID   Also report Raw values of Attribute ID with -p, -u or -t
 #  -R ID   Track changes in Attribute ID Raw value with -p, -u or -t
 #  -i ID   Ignore Attribute ID for -f Directive
 #  -I ID   Ignore Attribute ID for -p, -u or -t Directive
 #  -C ID   Report if Current Pending Sector count non-zero
 #  -U ID   Report if Offline Uncorrectable count non-zero
 #  -v N,ST Modifies labeling of Attribute N (see man page)
 #  -a      Default: equivalent to -H -f -t -l error -l selftest -C 197 -U 198
 #  -F TYPE Use firmware bug workaround. Type is one of: none, samsung
 #  -P TYPE Drive-specific presets: use, ignore, show, showall
 #    #     Comment: text after a hash sign is ignored
 #    \     Line continuation character
 # Attribute ID is a decimal integer 1 <= ID <= 255
 # except for -C and -U, where ID = 0 turns them off.
 # All but -d, -m and -M Directives are only implemented for ATA devices
 #
 # If the test string DEVICESCAN is the first uncommented text
 # then smartd will scan for devices /dev/hd[a-l] and /dev/sd[a-z]
 # DEVICESCAN may be followed by any desired Directives.
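The `-s` schedules above are regular expressions that smartd matches against a date string of the form T/MM/DD/d/HH (test type, month, day of month, day of week with 7 = Sunday, hour). A simplified sketch of how a directive like `L/../../7/01` selects Sundays at 1am (this mimics smartd's documented matching, not its actual code):

```python
# Decode a smartd -s schedule: smartd builds "T/MM/DD/d/HH" for the
# current time and starts the test whose regex matches it.
import re

def would_run(directive, ttype, month, dom, dow, hour):
    probe = "%s/%02d/%02d/%d/%02d" % (ttype, month, dom, dow, hour)
    return re.fullmatch(directive, probe) is not None

# L/../../7/01 -> long test, any month/day, day-of-week 7 (Sunday), 1am
print(would_run("L/../../7/01", "L", 12, 25, 7, 1))  # True
print(would_run("L/../../7/01", "L", 12, 25, 6, 1))  # False
```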
 
===rc3.d 2010-01-16===
 K00ipmievd
 K01dnsmasq
 K02avahi-dnsconfd
 K02NetworkManager
 K05conman
 K05saslauthd
 K05wdaemon
 K10dc_server
 K10psacct
 K12dc_client
 K15httpd
 K24irda
 K25squid
 K30spamassassin
 K34yppasswdd
 K35dhcpd
 K35dhcrelay
 K35dovecot
 K35vncserver
 K35winbind
 K36lisa
 K50netconsole
 K50tux
 K69rpcsvcgssd
 K73ypbind
 K74ipmi
 K74nscd
 K74ntpd
 K74ypserv
 K74ypxfrd
 K80kdump
 K85mdmpd
 K87multipathd
 K88wpa_supplicant
 K89dund
 K89hidd
 K89netplugd
 K89pand
 K89rdisc
 K90bluetooth
 K91capi
 K91isdn
 K99readahead_later
 S00microcode_ctl
 S02lvm2-monitor
 S04readahead_early
 S05kudzu
 S06cpuspeed
 S08ip6tables
 S08iptables
 S08mcstrans
 S10network
 S11auditd
 S12restorecond
 S12syslog
 S13irqbalance
 S13portmap
 S14nfslock
 S15mdmonitor
 S18rpcidmapd
 S19nfs
 S19rpcgssd
 S20vmware
 S22messagebus
 S23setroubleshoot
 S25netfs
 S25pcscd
 S26acpid
 S26lm_sensors
 S28autofs
 S29iptables-netgroups
 S50hplip
 S55sshd
 S56cups
 S56rawdevices
 S56xinetd
 S60apcupsd
 S80sendmail
 S85arecaweb
 S85gpm
 S90crond
 S90splunk
 S90xfs
 S95anacron
 S95atd
 S97rhnsd
 S97yum-updatesd
 S98avahi-daemon
 S98haldaemon
 S99denyhosts
 S99firstboot
 S99local
 S99smartd

==Taro==
==Lentil==
==Pumpkin==
==Endeavour==

===Yum Problems 2012-10-11===
 libsdp.x86_64
 libsdp-devel.x86_64

Journal of process:

Install both libsdp (i386 and x86_64) and libxml2 from rpm.

There is still a seg fault when yum tries to read the primary.xml; this is seen when I run strace yum check-update.

===Wake-On LAN 2013-08-20===
First run this command on the node:
 ethtool -s eth0 wol g
Then add this line to /etc/sysconfig/network-scripts/ifcfg-eth0:
 ETHTOOL_OPTS="wol g"

List of the nodes and their MACs:
 node2  (10.0.0.2)  at 00:30:48:C6:F6:80
 node3  (10.0.0.3)  at 00:30:48:C7:03:FE
 node4  (10.0.0.4)  at 00:30:48:C7:2A:0E
 node5  (10.0.0.5)  at 00:30:48:C7:2A:0C
 node6  (10.0.0.6)  at 00:30:48:C7:04:54
 node7  (10.0.0.7)  at 00:30:48:C7:04:A8
 node8  (10.0.0.8)  at 00:30:48:C7:04:98
 node9  (10.0.0.9)  at 00:30:48:C7:04:F4
 node16 (10.0.0.16) at 00:30:48:C7:04:A4
 node17 (10.0.0.17) at 00:30:48:C7:04:A6
 node18 (10.0.0.18) at 00:30:48:C7:04:4A
 node19 (10.0.0.19) at 00:30:48:C7:04:62
 node20 (10.0.0.20) at 00:30:48:C6:F6:14
 node21 (10.0.0.21) at 00:30:48:C6:F6:12
 node22 (10.0.0.22) at 00:30:48:C6:EF:A6
 node23 (10.0.0.23) at 00:30:48:C6:EB:CC
 node24 (10.0.0.24) at 00:30:48:C7:04:5A
 node25 (10.0.0.25) at 00:30:48:C7:04:5C
 node26 (10.0.0.26) at 00:30:48:C7:04:4C
 node27 (10.0.0.27) at 00:30:48:C7:04:40
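Waking a node configured this way means sending a WOL "magic packet": 6 bytes of 0xFF followed by the target MAC repeated 16 times (102 bytes total). A minimal sketch that builds and broadcasts one, using node2's MAC from the list above (the broadcast address and port 9 are conventional choices here, not something this setup prescribes):

```python
# Build and send a Wake-on-LAN magic packet:
# 6 x 0xFF, then the target MAC repeated 16 times (102 bytes total).
import socket

def magic_packet(mac):
    raw = bytes.fromhex(mac.replace(":", ""))
    return b"\xff" * 6 + raw * 16

def wake(mac, broadcast="10.0.255.255", port=9):
    # broadcast/port are assumed conventional values for the 10.0.0.0/16 farm
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    s.sendto(magic_packet(mac), (broadcast, port))
    s.close()

pkt = magic_packet("00:30:48:C6:F6:80")  # node2
print(len(pkt))  # 102
```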

===Infiniband===

====tc Commands 2009-05-24====
I am using the following commands to "throttle" the bandwidth of the NIC at eth0:
 tc qdisc del dev eth0 root
 tc qdisc add dev eth0 root handle 10: cbq bandwidth 100mbit avpkt 1000
 tc qdisc add dev eth0 root handle 10: htb
 tc class add dev eth0 parent 10: classid 10:1 cbq bandwidth 100mbit rate 128kbit allot 1514 maxburst 20 avpkt 1000 bounded prio 3
 tc class add dev eth0 parent 10: classid 10:1 htb rate 128kbit

====tc Script Bandwidth Throttle 2009-05-28====
 #!/bin/bash
 # Set some variables
 EXT_IFACE="eth0"
 INT_IFACE="eth1"
 TC="tc"
 UNITS="kbit"
 LINE="10000"  # maximum ext link speed
 LIMIT="5000"  # maximum that we'll allow
 
 # Set some variables for individual "classes" that we'll use to shape internal upload speed, i.e. shaping eth0
 CLS1_RATE="200"   # High Priority traffic class has 200kbit
 CLS2_RATE="300"   # Medium Priority class has 300kbit
 CLS3_RATE="4500"  # Bulk class has 4500kbit
 # (We'll set which ones can borrow from which later)
 
 # Set some variables for individual "classes" that we'll use to shape internal download speed, i.e. shaping eth1
 INT_CLS1_RATE="1000"  # Priority
 INT_CLS2_RATE="4000"  # Bulk
 
 # Delete current qdiscs, i.e. clean up
 ${TC} qdisc del dev ${INT_IFACE} root
 ${TC} qdisc del dev ${EXT_IFACE} root
 
 # Attach root qdiscs. We are using HTB here, and attaching this qdisc to both interfaces. We'll label it "1:0"
 ${TC} qdisc add dev ${INT_IFACE} root handle 1:0 htb
 ${TC} qdisc add dev ${EXT_IFACE} root handle 1:0 htb
 
 # Create root classes, with the maximum limits defined
 # One for eth1
 ${TC} class add dev ${INT_IFACE} parent 1:0 classid 1:1 htb rate ${LIMIT}${UNITS} ceil ${LIMIT}${UNITS}
 # One for eth0
 ${TC} class add dev ${EXT_IFACE} parent 1:0 classid 1:1 htb rate ${LIMIT}${UNITS} ceil ${LIMIT}${UNITS}
 
 # Create child classes
 # These are for our internal interface eth1
 # Create a class labelled "1:2" and give it the limit defined above
 ${TC} class add dev ${INT_IFACE} parent 1:1 classid 1:2 htb rate ${INT_CLS1_RATE}${UNITS} ceil ${LIMIT}${UNITS}
 # Create a class labelled "1:3" and give it the limit defined above
 ${TC} class add dev ${INT_IFACE} parent 1:1 classid 1:3 htb rate ${INT_CLS2_RATE}${UNITS} ceil ${INT_CLS2_RATE}${UNITS}
 
 # EXT_IF (upload) now. We also set which classes can borrow and lend.
 # This class is guaranteed 200kbit and can burst up to 5000kbit if available
 ${TC} class add dev ${EXT_IFACE} parent 1:1 classid 1:2 htb rate ${CLS1_RATE}${UNITS} ceil ${LIMIT}${UNITS}
 # This class is guaranteed 300kbit and can burst up to 5000kbit-200kbit=4800kbit if available
 ${TC} class add dev ${EXT_IFACE} parent 1:1 classid 1:3 htb rate ${CLS2_RATE}${UNITS} ceil `echo ${LIMIT}-${CLS1_RATE}|bc`${UNITS}
 # This class is guaranteed 4500kbit and can't burst past it (5000kbit-200kbit-300kbit=4500kbit).
 # I.e. even if our bulk traffic goes crazy, the two classes above are still guaranteed availability.
 ${TC} class add dev ${EXT_IFACE} parent 1:1 classid 1:4 htb rate ${CLS3_RATE}${UNITS} ceil `echo ${LIMIT}-${CLS1_RATE}-${CLS2_RATE}|bc`${UNITS}
 
 # Add pfifo. Read more about pfifo elsewhere, it's outside the scope of this howto.
 ${TC} qdisc add dev ${INT_IFACE} parent 1:2 handle 12: pfifo limit 10
 ${TC} qdisc add dev ${INT_IFACE} parent 1:3 handle 13: pfifo limit 10
 ${TC} qdisc add dev ${EXT_IFACE} parent 1:2 handle 12: pfifo limit 10
 ${TC} qdisc add dev ${EXT_IFACE} parent 1:3 handle 13: pfifo limit 10
 ${TC} qdisc add dev ${EXT_IFACE} parent 1:4 handle 14: pfifo limit 10
 
 ### Done adding all the classes, now set up some rules! ###
 # INT_IFACE
 # Note the 'dst' direction. Traffic that goes OUT of our internal interface and to our servers is our server's download speed, so SOME_IMPORTANT_IP is allocated to the 1:2 class for download.
 ${TC} filter add dev ${INT_IFACE} parent 1:0 protocol ip prio 1 u32 match ip dst SOME_IMPORTANT_IP/32 flowid 1:2
 ${TC} filter add dev ${INT_IFACE} parent 1:0 protocol ip prio 1 u32 match ip dst SOME_OTHER_IMPORTANT_IP/32 flowid 1:2
 # All other servers' download speed goes to 1:3 - not as important as the above two
 ${TC} filter add dev ${INT_IFACE} parent 1:0 protocol ip prio 1 u32 match ip dst 0.0.0.0/0 flowid 1:3
 
 # EXT_IFACE
 # Prioritize DNS requests
 ${TC} filter add dev ${EXT_IFACE} parent 1:0 protocol ip prio 1 u32 match ip src IMPORTANT_IP/32 match ip sport 53 0xffff flowid 1:2
 # SSH is important
 ${TC} filter add dev ${EXT_IFACE} parent 1:0 protocol ip prio 1 u32 match ip src IMPORTANT_IP/32 match ip sport 22 0xffff flowid 1:2
 # Our exim SMTP server is important too
 ${TC} filter add dev ${EXT_IFACE} parent 1:0 protocol ip prio 1 u32 match ip src 217.10.156.197/32 match ip sport 25 0xffff flowid 1:3
 # The bulk
 ${TC} filter add dev ${EXT_IFACE} parent 1:0 protocol ip prio 1 u32 match ip src 0.0.0.0/0 flowid 1:4
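The ceilings in the upload classes are derived by subtracting the guaranteed rates from the 5000kbit cap, so the high-priority classes always keep their share even when bulk traffic saturates the link. A quick sketch of that arithmetic, with values from the script above:

```python
# HTB ceiling arithmetic from the shaping script: each lower-priority
# class may only burst into what the guaranteed classes leave over.
LIMIT = 5000                  # kbit, total allowed
CLS1_RATE, CLS2_RATE = 200, 300

ceil_cls2 = LIMIT - CLS1_RATE              # 4800 kbit
ceil_cls3 = LIMIT - CLS1_RATE - CLS2_RATE  # 4500 kbit
print(ceil_cls2, ceil_cls3)  # 4800 4500
```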

====tc BASH Script Traffic Shaper 2009-05-28====
 #!/bin/bash
 #
 #  tc uses the following units when passed as a parameter.
 #  kbps: Kilobytes per second
 #  mbps: Megabytes per second
 #  kbit: Kilobits per second
 #  mbit: Megabits per second
 #  bps: Bytes per second
 #      Amounts of data can be specified in:
 #      kb or k: Kilobytes
 #      mb or m: Megabytes
 #      mbit: Megabits
 #      kbit: Kilobits
 #  To get the byte figure from bits, divide the number by 8
 #
 TC=/sbin/tc
 IF=eth0             # Interface
 DNLD=1mbit          # DOWNLOAD Limit
 UPLD=1mbit          # UPLOAD Limit
 IP=216.3.128.12     # Host IP
 U32="$TC filter add dev $IF protocol ip parent 1:0 prio 1 u32"
 
 start() {
     $TC qdisc add dev $IF root handle 1: htb default 30
     $TC class add dev $IF parent 1: classid 1:1 htb rate $DNLD
     $TC class add dev $IF parent 1: classid 1:2 htb rate $UPLD
     $U32 match ip dst $IP/32 flowid 1:1
     $U32 match ip src $IP/32 flowid 1:2
 }
 
 stop() {
     $TC qdisc del dev $IF root
 }
 
 restart() {
     stop
     sleep 1
     start
 }
 
 show() {
     $TC -s qdisc ls dev $IF
 }
 
 case "$1" in
   start)
     echo -n "Starting bandwidth shaping: "
     start
     echo "done"
     ;;
   stop)
     echo -n "Stopping bandwidth shaping: "
     stop
     echo "done"
     ;;
   restart)
     echo -n "Restarting bandwidth shaping: "
     restart
     echo "done"
     ;;
   show)
     echo "Bandwidth shaping status for $IF:\n"
     show
     echo ""
     ;;
   *)
     pwd=$(pwd)
     echo "Usage: $(/usr/bin/dirname $pwd)/tc.bash {start|stop|restart|show}"
     ;;
 esac
 
 exit 0

====Testing Scripts 2012-07-17====

=====calculate.c=====
 #include<iostream>
 #include<fstream>
 #include<string>
 using namespace std;
 int main()
 {
         ofstream outFile;
         ifstream inFile;
         string pktNumber, latency, jitter;
         int    pktN, ltcy, jtr;
         int numOfTest = 0;
 
         //open the Sample.out file
         inFile.open("Sample.out");
         if(!inFile)
         {
                 cout<<"Can not open Sample.out, please check it!"<<endl;
         }
 
         //open the result.out file
         outFile.open("result.out");
         if(!outFile)
         {
                 cout<<"Can not create output file"<<endl;
         }
 
         //scan the data and calculate the averages
         while(!inFile.eof())
         {
                 double avgJitter = 0;
                 double avgLatency = 0;
                 double jitSum = 0;
                 double latencySum = 0;
                 int numOfValidItem = 0;
                 int numOfP = 0;
                 numOfTest++;
 
                 inFile>>pktNumber>>latency>>jitter>>numOfP;
 
                 for(int i = 0; i < numOfP; i++)
                 {
                         cout<<"Reading the "<<i<<"th line."<<endl;
                         inFile>>pktN>>ltcy>>jtr;
                         cout<<pktN<<" "<<ltcy<<" "<<jtr<<endl;
                         if(ltcy != 99999)
                         {
                                 jitSum += jtr;
                                 latencySum += ltcy;
                                 numOfValidItem++;
                         }
                 }
                 if(numOfValidItem != 0)
                 {
                         avgJitter = jitSum / numOfValidItem;
                         avgLatency = latencySum / numOfValidItem;
                         cout<<"Average latency :"<<avgLatency;
                         cout<<"  Average jitter:"<<avgJitter<<endl;
                         outFile<<"The "<<numOfTest<<" test, average latency is "
                                <<avgLatency<<" average jitter: "<<avgJitter
                                <<endl;
                 }
         }
 
         inFile.close();
         outFile.close();
         return 0;
 }

=====UDPPktReceiver.java=====
 import javax.swing.*;
 import java.awt.*;
 import java.io.IOException;
 import java.net.*;
 
 public class UDPPktReceiver extends JFrame {
 
     private JLabel sendingInfoLabel;
     private JLabel rcvInfoLabel;
     private JPanel panel;
 
     private JTextArea rcvTextArea;
     private JTextArea resendTextArea;
     private JScrollPane rcvScrollPane;
     private JScrollPane resendScrollPane;
     private Container con;
     public static int clearTextTag = 0;
 
     DatagramSocket ds;
 
     public UDPPktReceiver() {
         this.setBounds(new Rectangle(500, 0, 480, 480));
         con = this.getContentPane();
         con.setLayout(new FlowLayout());
 
         panel = new JPanel();
         panel.setLayout(new FlowLayout());
         sendingInfoLabel = new JLabel("Received information:                  ");
         rcvInfoLabel = new JLabel("Resended information: ");
 
         rcvTextArea = new JTextArea(20, 20);
         resendTextArea = new JTextArea(20, 20);
         rcvTextArea.setEditable(false);
         resendTextArea.setEditable(false);
         rcvScrollPane = new JScrollPane(rcvTextArea);
         resendScrollPane = new JScrollPane(resendTextArea);
 
         con.add(panel);
         panel.add(this.sendingInfoLabel);
         panel.add(this.rcvInfoLabel);
         con.add(rcvScrollPane);
         con.add(resendScrollPane);
 
         this.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
         this.setVisible(true);
         this.setResizable(false);
         this.setTitle("Infiniband.Test.Receive Client");
 
         try {
             ds = new DatagramSocket(7000);
         } catch (SocketException e) {
             e.printStackTrace();
         }
     }
 
     public DatagramPacket rcvPacket() throws IOException {
         byte[] buf = new byte[100];
         DatagramPacket dp = new DatagramPacket(buf, 100);
         ds.receive(dp);
         this.rcvTextArea.append(new String(buf, 0, dp.getLength()) + "\n");
         return dp;
     }
 
     public int resendPkt(DatagramPacket dp) {
         DatagramSocket ds;
         String originalData = new String(dp.getData());
         String newData = "Original pkt: " + originalData.trim();
         try {
             ds = new DatagramSocket();
             DatagramPacket newDp = new DatagramPacket(newData.getBytes(),
                     newData.length(), dp.getAddress(), 6500);
             ds.send(newDp);
             this.resendTextArea.append(new String(dp.getData()).trim() + "\n");
         } catch (Exception e) {
             e.printStackTrace();
         }
         return 1;
     }
 
     public void clearPreviousInfo() {
         this.rcvTextArea.setText("");
         this.resendTextArea.setText("");
     }
 
     public static void main(String[] args) throws IOException {
         UDPPktReceiver udpRcver = new UDPPktReceiver();
         DatagramPacket dp;
         while (true) {
             if (udpRcver.clearTextTag == 1) {
                 udpRcver.clearPreviousInfo();
             }
             dp = udpRcver.rcvPacket();
             udpRcver.resendPkt(dp);
         }
     }
 }

 +
=====UDPPktSender.java=====
 import javax.swing.*;
 import java.awt.*;
 import java.awt.event.*;
 import java.io.BufferedWriter;
 import java.io.FileWriter;
 import java.io.IOException;
 import java.net.*;
 
 public class UDPPktSender extends JFrame {
     private JButton btn1;
     private JTextField text;
     private JLabel label;
     private JLabel sendingInfoLabel;
     private JLabel rcvInfoLabel;
     private JPanel panel;
     private JPanel ipPanel;
     private JLabel ipHintLabel;
     private JTextField ipTextField;
 
     private JTextArea sendTextArea;
     private JTextArea rcvTextArea;
     private JScrollPane sendScrollPane;
     private JScrollPane rcvScrollPane;
     private Container c;
     private int pktId = 1;
     private int pktNum = 1;
     private DatagramSocket dsRecv;
     private long startTime = 0;
     private long endTime = 0;
     private long internal = 0;          // round-trip time of the current packet
     private long totalTime = 0;
     private long rcvPktNums = 0;
     private long prevLatency = 0;
     private long jitter = 0;
     private long jitterSum = 0;
     private long validJitterNumber = 0;
     private boolean isLost = false;
     private String data;
     BufferedWriter out;
 
     // Constructor
     public UDPPktSender() {
         setBounds(new Rectangle(0, 0, 520, 480));
         c = getContentPane();
         c.setLayout(new FlowLayout());
         btn1 = new JButton("Send");
         panel = new JPanel();
         ipPanel = new JPanel();
         ipPanel.setLayout(new FlowLayout());
         ipHintLabel = new JLabel("Enter Remote IP Address:");
         ipTextField = new JTextField(27);
         ipTextField.setText("localhost");
         panel.setLayout(new FlowLayout());
         label = new JLabel("Enter Number of Packet to Send:");
         sendingInfoLabel = new JLabel("Sending information:                          ");
         rcvInfoLabel = new JLabel("Receiving information:");
 
         sendTextArea = new JTextArea(20, 20);
         rcvTextArea = new JTextArea(20, 20);
         sendTextArea.setEditable(false);
         rcvTextArea.setEditable(false);
         sendScrollPane = new JScrollPane(sendTextArea);
         rcvScrollPane = new JScrollPane(rcvTextArea);
         rcvScrollPane.setAutoscrolls(true);
 
         text = new JTextField(13);
         text.setText("10");
         text.setSelectionStart(0);
         text.setSelectionEnd(10);
 
         btn1.addActionListener(new ActionListener() {
             int currPktId = 1;
             int returnPktId = -1;
 
             public void actionPerformed(ActionEvent e) {
                 sendTextArea.setText("");
                 rcvTextArea.setText("");
                 UDPPktReceiver.clearTextTag = 1;
                 pktId = 1;
                 totalTime = 0;
                 rcvPktNums = 0;
                 try {
                     pktNum = Integer.parseInt(text.getText());
                 } catch (Exception ex) {
                     JOptionPane.showMessageDialog(JFrame.getFrames()[0],
                             "Input Number Is Invalid Please Check It");
                     text.setFocusable(true);
                     return;
                 }
                 if (pktNum <= 0) {
                     JOptionPane.showMessageDialog(JFrame.getFrames()[0],
                             "Packet Number must be more than 0");
                     return;
                 }
                 if (pktNum >= 100) {
                     JOptionPane.showMessageDialog(JFrame.getFrames()[0],
                             "You should not send more than 100 packets, enter number again");
                     return;
                 }
                 for (int i = 0; i < pktNum; i++) {
                     isLost = false;    // reset per packet (was never reset originally)
                     startTime = System.currentTimeMillis();
                     currPktId = pktId;
                     sendPacket(currPktId);
                     pktId++;
 
                     returnPktId = rcvPkt();
                     endTime = System.currentTimeMillis();
                     internal = endTime - startTime;
                     totalTime += internal;
 
                     if (currPktId == returnPktId) {
                         rcvPktNums++;
                         appendToTextArea("round-trip latency :" + internal + " ms");
                     } else {
                         appendToTextArea("packet " + currPktId + "  has lost");
                         isLost = true;
                     }
 
                     if (i == 0) {
                         prevLatency = internal;
                         jitter = 0;
                     } else {
                         if (!isLost) {
                             jitter = internal - prevLatency;
                             prevLatency = internal;
                         } else {
                             jitter = 0;
                             prevLatency = 0;
                         }
                     }
                     try {
                         out = new BufferedWriter(new FileWriter("Sample.out", true));
                         if (i == 0)
                             out.write("PacketNumber    Latency    Jitter " + pktNum + "\n");
                         if (!isLost) {
                             out.write(currPktId + "                ");
                             out.write(internal + "            ");
                             out.write("  " + Math.abs(jitter) + "\n");
                             jitterSum += Math.abs(jitter);
                             validJitterNumber++;
                         } else {
                             out.write(currPktId + "                ");
                             out.write("99999        ");
                             out.write("  " + Math.abs(jitter) + "\n");
                         }
                         out.close();
                     } catch (IOException e3) {
                         e3.printStackTrace();
                     }
                 }
 
                 appendToTextArea("Total Time :" + totalTime + " ms");
                 appendToTextArea("Average Time :" + totalTime / pktNum + " ms");
                 // cast to double, otherwise integer division always yields 0% or 100%
                 appendToTextArea("loss rate :" + (1 - (double) rcvPktNums / pktNum) * 100 + "%");
                 UDPPktReceiver.clearTextTag = 0;
             }
         });
 
         c.add(label);
         c.add(text);
         c.add(btn1);
         c.add(ipPanel);
         ipPanel.add(this.ipHintLabel);
         ipPanel.add(this.ipTextField);
         c.add(sendScrollPane);
         c.add(rcvScrollPane);
         c.add(panel);
         panel.add(this.sendingInfoLabel);
         panel.add(this.rcvInfoLabel);
 
         this.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
         setVisible(true);
         this.setResizable(false);
         this.setTitle("Infiniband.Test.Sending Client");
         try {
             dsRecv = new DatagramSocket(6500);
         } catch (SocketException e1) {
             e1.printStackTrace();
         }
     }
 
     // Send the pkt according to the packet Id
     public int sendPacket(int pktId) {
         try {
             DatagramSocket ds = new DatagramSocket();
             String str = "packet number:" + pktId;
             String ip = ipTextField.getText();
             DatagramPacket dp = new DatagramPacket(str.getBytes(),
                     str.length(), InetAddress.getByName(ip), 7000);
             ds.send(dp);
             this.sendTextArea.append("sending packet: " + pktId + "\n");
         } catch (Exception ex) {
             ex.printStackTrace();
             return 0;
         }
         return 1;
     }
 
     // Receive the packet
     public int rcvPkt() {
         try {
             byte[] buf = new byte[100];
             DatagramPacket dpRecv = new DatagramPacket(buf, 100);
             dsRecv.setSoTimeout(100);
             dsRecv.receive(dpRecv);
             data = new String(buf);
             this.rcvTextArea.append(new String(buf, 0, dpRecv.getLength()) + "\n");
         } catch (Exception ex) {
             ex.printStackTrace();
             return -1;
         }
         int pktId = this.getPacketId(data);
         return pktId;
     }
 
     public int getPacketId(String s) {
         s = s.trim();
         int index = s.lastIndexOf(':');
         int pktId = -1;
         try {
             pktId = Integer.parseInt(s.substring(index + 1));
         } catch (Exception ex) {
             JOptionPane.showMessageDialog(null, s);
             ex.printStackTrace();
         }
         return pktId;
     }
 
     public void closeSocket() {
         this.dsRecv.close();
     }
 
     public void appendToTextArea(String s) {
         this.rcvTextArea.append(s);
         this.rcvTextArea.append("\n");
     }
 
     public static void main(String[] args) {
         UDPPktSender udpSender = new UDPPktSender();
     }
 }
 
 +
==Einstein==

===rc3.d 2010-01-16===
 K01dnsmasq
 K02avahi-dnsconfd
 K02dhcdbd
 K02NetworkManager
 K05conman
 K05saslauthd
 K05wdaemon
 K10dc_server
 K10psacct
 K12dc_client
 K12mailman
 K15httpd
 K19ntop
 K20nfs
 K24irda
 K25squid
 K30spamassassin
 K35dovecot
 K35smb
 K35vncserver
 K35winbind
 K50netconsole
 K50snmptrapd
 K50tux
 K69rpcsvcgssd
 K73ldap
 K73ypbind
 K74ipmi
 K74nscd
 K74ntpd
 K80kdump
 K85mdmpd
 K87multipathd
 K87named
 K88wpa_supplicant
 K89dund
 K89netplugd
 K89pand
 K89rdisc
 K91capi
 K92ip6tables
 K99readahead_later
 S02lvm2-monitor
 S04readahead_early
 S05kudzu
 S06cpuspeed
 S07iscsid
 S08ip6tables
 S08iptables
 S08mcstrans
 S09isdn
 S10network
 S11auditd
 S12restorecond
 S12syslog
 S13irqbalance
 S13iscsi
 S13mcstrans
 S13named
 S13portmap
 S14nfslock
 S15mdmonitor
 S18rpcidmapd
 S19rpcgssd
 S22messagebus
 S23setroubleshoot
 S25bluetooth
 S25netfs
 S25pcscd
 S26acpid
 S26hidd
 S26lm_sensors
 S27ldap
 S28autofs
 S29iptables-npg
 S50denyhosts
 S50hplip
 S50snmpd
 S55sshd
 S56cups
 S56rawdevices
 S56xinetd
 S58ntpd
 S60apcupsd
 S65dovecot
 S78spamassassin
 S80postfix
 S85gpm
 S85httpd
 S90crond
 S90elogd
 S90splunk
 S90xfs
 S95anacron
 S95atd
 S95saslauthd
 S97libvirtd
 S97rhnsd
 S97yum-updatesd
 S98avahi-daemon
 S98haldaemon
 S98mailman
 S99firstboot
 S99local
 S99smartd
 
 +
==Corn==
==Jalapeno==
==Roentgen==

===Xen to VMware Conversion 2009-06-23===
The transfer process:
#Shutdown the xen virtual machine and make a backup of the .img file.
#Make a tarball of roentgen's filesystem.
#*This must be done as root.
#*tar -cvf machine.tar /lib /lib64 /etc /usr /bin /sbin /var /root
#Set up an identical OS (CentOS 5.3) on VMware Server.
#Mount the location of the tarball and extract it to /.
#*Make sure to back up the original OS's /etc/ to /etc.bak/.
#*tar -xvf machine.tar

Files to copy back over from /etc.bak/:
  /etc/sysconfig/network-scripts/ifcfg-*
  /etc/inittab
  /etc/fstab
  /etc/yum*
  /etc/X11*

Turn roentgen on to prepare for the rsync transfer. Make sure to shut down all important services (httpd, mysqld, etc.).

Log on to roentgen as root and run the following command for each folder archived above:
  rsync -av --delete /src/(lib) newserver.unh.edu:/dest/(lib) >> rsync.(lib).log

Rsync options used:
  --delete                   delete extraneous files from dest dirs
  -a, --archive              archive mode; equals -rlptgoD (no -H,-A,-X)
  --no-OPTION                turn off an implied OPTION (e.g. --no-D)
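Running that rsync line once per archived directory can be scripted. This is a hypothetical helper (not from the original log) that only prints the commands, with newserver.unh.edu standing in for the real destination; review the output before pasting or piping the lines to sh:

```shell
# Print one rsync invocation per directory from the tar line above.
gen_rsync_cmds() {
    for d in lib lib64 etc usr bin sbin var root; do
        echo "rsync -av --delete /$d/ newserver.unh.edu:/$d/ >> rsync.$d.log"
    done
}
gen_rsync_cmds
```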

How to convert xen to vmware (see the link below):
#Download the current kernel for the xen virtual machine (not the xen kernel) and install it on the virtual machine. This is done so that when the virtual machine is transitioned into a fully virtualized setup, it can boot a normal kernel instead of the xen kernel.
#Shutdown roentgen and copy the image file to a backup for exporting.
#Install qemu-img.
#Run the following command:
#*qemu-img convert <source_xen_machine> -O vmdk <destination_vmware.vmdk>
#Now it boots, but it also kernel panics.

This approach was scrapped; instead we made a tarball of roentgen's filesystem as described above.

http://www.howtoforge.com/how-to-convert-a-xen-virtual-machine-to-vmware

===Cacti===

====Notes 2009-05-21====
UN: admin or npglinux

Go to roentgen.unh.edu/cacti for the login page.

Adding a device: to manage devices within Cacti, click on the Devices menu item. Clicking Add will bring up a new device form.

====Cacti Integration with Groundwork Monitor 2009-07-16====
The following step-by-step for integrating Cacti and GWOS from scratch is taken from a Groundwork forum post by "dudemanxtreme" (the instructions are specific to his SuSE 10.1 install, so some modification may be needed for our distribution; he also notes he has not been able to get php-snmp working):
 http://www.groundworkopensource.com/community/forums/viewtopic.php?f=3&t=356

Install the following packages:
 smart install mysql
This installs mysql, mysql-client, mysql-shared, perl-DBD-mysql, and perl-data-showtable. (If using the smart installer, you may want to remove the download.opensuse.org channel and add mirrors.kernel.org/suse/... or add the local mirrors from suse.cs.utah.edu.)

Configure MySQL:
 sudo mysql_install_db
 chown -R mysql:mysql /var/lib/mysql/

Scary part: as root, "export MYSQL_ROOT=password" (mysqlpass)

Edit /etc/hosts to look like:
 127.0.0.1 localhost localhost.localdomain
 xxx.xxx.xxx.xxx gwservername gwservername.domain.end

Install the GWOS rpm:
 wget http://groundworkopensource.com/downloa ... 1.i586.rpm
Fix the libexpat problem before installing the rpm:
 ln -s /usr/lib/libexpat.so.1.5.0 /usr/lib/libexpat.so.0
 rpm -Uvh groundwork-monitor-os-xxx
Set the firewall appropriately.

Time to install Cacti. Download the latest Cacti from http://www.cacti.net:
 wget http://www.cacti.net/downloads/cacti-0.8.6i.tar.gz
Untar cacti, rename it, and move the directory to GWOS:
 tar xvfz cacti-0.8.6i.tar.gz
 mv cacti-0.8.6i/ cacti
 mv cacti/ /usr/local/groundwork/cacti
 cd /usr/local/groundwork/cacti

(Should we install net-snmp?)

Time to create the cacti user and group. Create a new user via yast or useradd named cactiuser (or whatever you want your cacti user name to be). Make sure to add cactiuser to your nagios, nagioscmd, mysql, and nobody groups. You will probably want to disable user login. Set the cactiuser password and make sure to remember it; my default is "cacti" for configuration purposes.

Time to own directories. Inside your cacti directory:
 sudo chown -R cactiuser rra/ log/
 cd include
Now edit config.php with your preferred editor (emacs config.php in my case). Edit the $database variables to fit your preferred installation. If you've followed my default configuration, you will only need to change the $database_password, which will be the same as your cactiuser password. Save and exit your editor.

Now to build our DB, from /cacti:
 sudo mysqladmin -u root -p create cacti
 mysql -u root -p cacti
 > grant all on cacti.* to cactiuser@localhost identified by 'yercactiuserpassword';
 > grant all on cacti.* to cactiuser;
 > grant all on cacti.* to root@localhost;
 > grant all on cacti.* to root;
 > flush privileges;
 > exit

Time to cron the poller. There are several ways to do this; switch to cactiuser:
 su cactiuser
 crontab -e
Insert this line to poll every 5 minutes. Make sure you use proper paths; we want to use GWOS' php binary:
 */5 * * * * /usr/local/groundwork/bin/php /usr/local/groundwork/cacti/poller.php > /dev/null 2>&1
(esc shift-ZZ to exit)

Now to build Cacti into the GWOS tabs. Instructions for adding a tab to GWOS can be found at https://wiki.chpc.utah.edu/index.php/Ad ... _Interface

Specific instructions:
 mkdir /usr/local/groundwork/guava/packages/cacti
 mkdir /usr/local/groundwork/guava/packages/cacti/support
 mkdir /usr/local/groundwork/guava/packages/cacti/sysmodules
 mkdir /usr/local/groundwork/guava/packages/cacti/views
Now we create the package definition:
 cd /usr/local/groundwork/guava/packages/cacti
 emacs package.pkg
My package.pkg is as follows:

 /* Cacti Package File */
 define package {
 name = Cacti
 shortname = Cacti
 description = cacti graphing utility
 version_major = 1
 version_minor = 0
 }
 
 define view {
 name = Cacti
 description = Cacti Graph Viewer
 classname = CactiMainView
 file = views/cacti.inc.php
 }

Now to create the view file:
 cd /usr/local/groundwork/guava/packages/cacti/views
 emacs cacti.inc.php
Example contents:
 <php>

Now you must set permissions for cacti:
 chown -R nagios.nagios /usr/local/groundwork/guava/packages/cacti/
 chmod -R a+r /usr/local/groundwork/guava/packages/cacti/

Installing cacti into GW:
#Login to GWOS.
#Select the Administration tab.
#Select packages from below the Administration tab.
#Select Cacti from the main menu.
#Select "Install this package now".

Now select the Users link below the Administration tab. Select Administrators from the Roles submenu. At the bottom of the page, click the drop menu "Add View to This Role", select Cacti, and click the "Add View" button. Now log out of GWOS, then log back in. The Cacti tab should now be available.

For my apache config (below) to work, you must symlink cacti into the apache2 dir:
 ln -s /usr/local/groundwork/cacti /usr/local/groundwork/apache2/htdocs/cacti

Now for apache configuration... I am not an apache guru, so this is a hackaround. Edit your httpd.conf in /usr/local/groundwork/apache2/conf/httpd.conf, add index.php to your DirectoryIndex list, and below the monitor/ alias directory add:
 Alias /cacti/ "/usr/local/groundwork/apache2/htdocs/cacti/"
 
 <Directory>
 Options Indexes
 AllowOverride None
 Order allow,deny
 Allow from all
 </Directory>
(A follow-up post notes the forum stripped the Directory path: after the Alias /cacti/ line, add "/usr/local/groundwork/apache2/cacti" inside the <Directory > tag.)

Restart apache now:
 /usr/local/groundwork/apache2/bin/apachectl restart

Now go back to GWOS and click the Cacti tab. You should see an index of the cacti directory. Click the install dir link and then the index.php link. You will be prompted through a new installation; make sure to set your binary paths to those in the GWOS directory, i.e. /usr/local/groundwork/bin/rrdtool

At this point I had a permission problem, probably due to the symlink above. If you followed my instructions, you will need to do this to fix the problem:
 chown -R nagios.nagios /usr/local/groundwork/cacti
 cd /usr/local/groundwork/cacti
 chown -R cactiuser.users rra/ log/
Restart apache, reload the GW page, and presto, the Cacti tab should bring you directly to the Cacti login screen.

A later post in the thread notes that these instructions worked fine for cacti-0.8.6h, but with cacti-0.8.6i graphs would not show up even though RRDs were created and updated properly. The fix: edit /usr/local/groundwork/cacti/lib/rrd.php and delete or comment out the following segment:
 /* rrdtool 1.2.x does not provide smooth lines, let's force it */
 if (read_config_option("rrdtool_version") == "rrd-1.2.x") {
 $graph_opts .= "--slope-mode" . RRD_NL;
 }
Save, exit, and voila.
  
 
==Wigner==

Latest revision as of 17:52, 29 June 2014

This is a log of everything Josh (Systems Administrator) has done over the years.

Projects, Scripts, and Daemons

This section includes things like:

  • Scripts I have written
  • Daemons I have setup
  • Projects I have attempted or completed

Upgrades and Survival Guides

This is a list of my notes on the system upgrades I have performed in the past.

System Upgrade 2013-12-30

The order we will be updating in is: jalapeno, pumpkin, gourd, einstein, taro, roentgen, and endeavour. I picked this order because we need a physical machine to test this update on, and pumpkin is the lowest-priority physical machine to do this with. Taro needs to stay after gourd and einstein because I will want to be able to recover the VMs on a working virtualized server (the backup will come from the pulled drive on gourd, described below). If pumpkin goes well, then it should follow that gourd will go smoothly. Jalapeno goes first because it is the lowest-priority VM and it will help us get our feet wet with updating CentOS 5 to 6, which will also help with pumpkin's update from RHEL 5 to 6.

This will require us (for the physical machines) to get in touch with UNH IT and make sure we can get the proper keys to update with the official RHEL 6 repos. Gourd could be problematic, which is why we will update her and make sure she runs properly (including the VMs); then we will detach one of the software RAID drives (for backup) and rebuild the RAID with a new drive, and then we will proceed to upgrade to RHEL 6.

There are a few problems I foresee: upgrading from 5 to 6; endeavour's yum and cluster software; making sure the latest version of GCC (and any other software crucial to the physicists' projects) is backwards compatible with older versions (in other words, how many problems will they have); the video cards in pumpkin and taro; and finally einstein's mail and LDAP (will they be compatible with CentOS 6?).

Startup Procedure 2012-11-01

How to start Gourd and the virtual machines

  1. Start Gourd. NOTE: Make sure you boot gourd with the correct kernel, i.e. one with the correct kernel modules (like kvm-intel). Load the kvm module if needed with: modprobe kvm-intel
  2. Log in as root.
  3. Start the virtual machines with: virsh start <vm> (example: virsh start einstein.unh.edu)
  4. Once einstein (LDAP and Mail) and jalapeno (DNS) have been started, start the netgroups2iptables script: service iptables-netgroups start (or /etc/init.d/iptables-netgroups start). NOTE: Gourd's netgroup iptables needs LDAP, so you need to start einstein for LDAP. If you do not start iptables-netgroups, clients will not be able to properly automount their home folders.
  5. Once you have finished the above, you can proceed to start all the remaining servers (virtual and physical).
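The sequence above can be sketched as a script. This is a hypothetical helper, not something deployed on gourd; the VM names and the iptables-netgroups service are the ones from this page. DRY_RUN defaults to 1 so it only prints the plan; set DRY_RUN=0 to execute for real.

```shell
# Sketch of the gourd startup sequence; dry run by default.
DRY_RUN=${DRY_RUN:-1}

run() {
    if [ "$DRY_RUN" = 1 ]; then
        echo "would run: $*"   # print instead of executing
    else
        "$@"
    fi
}

startup_plan() {
    run modprobe kvm-intel                 # make sure the KVM module is loaded
    run virsh start einstein.unh.edu       # LDAP and mail first
    run virsh start jalapeno.unh.edu       # then DNS
    run service iptables-netgroups start   # needs LDAP, hence einstein first
}

PLAN=$(startup_plan)
printf '%s\n' "$PLAN"
```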

General administration of virtual machines

Once you’ve got your virtual machine installed, you’ll need to know the various commands for everyday administration of KVM virtual machines. In these examples, change the name of the VM from ‘vm’ to whatever yours is called.

To show general info about virtual machines, including names and current state:

virsh list --all

To see a top-style monitor window for all VMs:

virt-top

To show info about a specific virtual machine:

virsh dominfo vm

To start a virtual machine:

virsh start vm

To pause a virtual machine:

virsh suspend vm

To resume a virtual machine:

virsh resume vm

To shut down a virtual machine (the ‘acpid’ service must be running on the guest for this to work):

virsh shutdown vm

To force a hard shutdown of a virtual machine:

virsh destroy vm

To remove a domain (don’t do this unless you’re sure you really don’t want this virtual machine any more):

virsh undefine vm
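Those per-VM commands can be combined with `virsh list --all` to act on every domain at once. The helper below is a hypothetical sketch: it parses the list output and prints the names of shut-off domains. A sample of the list output is embedded here so the parsing can be checked without libvirt; on a real host you would feed it `virsh list --all` instead.

```shell
# Print the names of domains whose state is "shut off".
list_shut_off() {
    awk '$NF == "off" {print $2}'   # "shut off" is two fields; name is field 2
}

# Sample `virsh list --all` output (stand-in for the real command):
virsh_sample=' Id    Name                 State
----------------------------------
  1    einstein.unh.edu     running
  -    jalapeno.unh.edu     shut off
  -    roentgen.unh.edu     shut off'

echo "$virsh_sample" | list_shut_off
```

On the host itself you could then do: `for vm in $(virsh list --all | list_shut_off); do virsh start "$vm"; done`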

Initial host setup

Firstly it’s necessary to make sure you have all the necessary software installed:

yum -y groupinstall Virtualization "Virtualization Client" \
 "Virtualization Platform" "Virtualization Tools"
yum -y install libguestfs-tools

Next check that libvirtd is running:

service libvirtd status

If not, make sure that messagebus and avahi-daemon are running, then start libvirtd:

service messagebus start
service avahi-daemon start
service libvirtd start

Use chkconfig to ensure that all three of these services start automatically on system boot.
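The chkconfig step can be done in one loop over the three services. Echoed here as a dry run so the commands can be reviewed first; drop the echo to apply them:

```shell
# Print the chkconfig commands that enable the services at boot.
enable_cmds() {
    for svc in messagebus avahi-daemon libvirtd; do
        echo "chkconfig $svc on"
    done
}
enable_cmds
```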

Next it’s necessary to set up the network bridge so that the virtual machines can function on the network in the same way as physical servers. To do this, copy /etc/sysconfig/network-scripts/ifcfg-eth0 (or whichever is the file for the active network interface) to /etc/sysconfig/network-scripts/ifcfg-br0.

In /etc/sysconfig/network-scripts/ifcfg-eth0, comment out all the lines for ‘BOOTPROTO’, ‘DNS1’, ‘GATEWAY’, ‘IPADDR’ and ‘NETMASK’, then add this line:

BRIDGE="br0"

Then edit /etc/sysconfig/network-scripts/ifcfg-br0, comment out the ‘HWADDR’ line, change the ‘DEVICE’ to “br0”, and change the ‘TYPE’ to “Bridge”.
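The resulting pair of files would look roughly like this. The MAC address and the static IP settings are example placeholders, not values from any of our machines:

```shell
# /etc/sysconfig/network-scripts/ifcfg-eth0  (example values)
DEVICE="eth0"
HWADDR="00:11:22:33:44:55"
ONBOOT="yes"
BRIDGE="br0"                 # hand the IP configuration over to the bridge
#BOOTPROTO="static"          # commented out, moved to ifcfg-br0
#IPADDR="192.168.1.10"
#NETMASK="255.255.255.0"
#GATEWAY="192.168.1.1"
#DNS1="192.168.1.1"

# /etc/sysconfig/network-scripts/ifcfg-br0  (example values)
DEVICE="br0"
TYPE="Bridge"                # no HWADDR line here
ONBOOT="yes"
BOOTPROTO="static"
IPADDR="192.168.1.10"
NETMASK="255.255.255.0"
GATEWAY="192.168.1.1"
DNS1="192.168.1.1"
```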

Then restart the network:

service network restart

The bridge should now be up and running. You can check its status with:

ifconfig
brctl show

Creating the disk volumes for a new virtual machine

We need to create new LVM volumes for the root and swap partitions in the new virtual machine. I’m assuming LVM is already being used, that the volume group is called ‘sysvg’, and that there is sufficient free space available in the sysvg group for the new volumes. If your volume group has a different name then just modify the instructions below accordingly. Change the volume sizes to suit your requirements:

lvcreate -L 20G -n vm-root sysvg
lvcreate -L 4096M -n vm-swap sysvg

Installing the operating system on the new virtual machine

Here I’m installing CentOS 6 on the guest machine using Kickstart, although I will also explain how to perform a normal non-automated installation. You’ll need to modify the instructions accordingly to install different operating systems. To make CentOS easily available for the installation, firstly make sure you have Apache installed and running. If necessary, install it with:

yum -y install httpd

Then start it with:

service httpd start

Then create the directory /var/www/html/CentOS and copy the contents of the CentOS DVDs into it.

If you’re using Kickstart then you’ll need these lines in your Kickstart config file to make sure that it can find the files from the CentOS DVDs. The IP address of the host in this example is 192.168.1.1, so change that as needed:

install
url --url http://192.168.1.1/CentOS

These lines are also required to make sure that Kickstart can find and use the LVM volumes created earlier:

zerombr
clearpart --all --initlabel
bootloader --location=mbr
part / --fstype ext4 --size 1 --grow --ondrive=vda
part swap --size 1 --grow --ondrive=vdb

Once the Kickstart file is ready, call it ks.cfg and copy it to /var/www/html

This command installs CentOS on the guest using a Kickstart automated installation. The guest is called ‘vm’, it has a dedicated physical CPU core (core number 2) and 1 GB of RAM allocated to it. Again, the IP address of the host is 192.168.1.1, so change that as needed:

virt-install --name=vm --cpuset=2 --ram=1024 \
 --network bridge=br0 --disk=/dev/mapper/sysvg-vm--root \
 --disk=/dev/mapper/sysvg-vm--swap --vnc --vnclisten=0.0.0.0 \
 --noautoconsole --location /var/www/html/CentOS \
 --extra-args "ks=http://192.168.1.1/ks.cfg"

The installation screen can be seen by connecting to the host via VNC. This isn’t necessary for a Kickstart installation (unless something goes wrong). If you want to do a normal install rather than a Kickstart install then you will need to use VNC to get to the installation screen, and in that case you’ll want to use the virt-install command above but just leave off everything from ‘--extra-args’ onwards.

Also, you may want to install directly from a CDROM image, in which case replace the ‘--location’ bit with ‘--cdrom=’ and the path to the CD image, e.g. to install Ubuntu in your VM you might put ‘--cdrom=/tmp/ubuntu-12.04.1-server-i386.iso’. (If virtual servers are already using VNC on the host then you will need to add the appropriate number to the standard VNC port of 5900 to find the port to connect to, e.g. if there are already two virtual servers using VNC on the host then you will need to connect VNC to port 5902 for this install.)
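The port arithmetic in that parenthetical is just 5900 plus the number of VNC guests already on the host; as a tiny helper (hypothetical, for illustration):

```shell
# Display N (the Nth VNC guest already on the host) listens on 5900 + N.
vnc_port() { echo $((5900 + $1)); }
vnc_port 2   # with two existing VNC guests, the new install is on 5902
```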

Cloning virtual machines

To clone a guest VM, firstly it’s necessary to create new disk volumes for the clone, then we use the virt-clone command to clone the existing VM:

lvcreate -L 20G -n newvm-root sysvg
lvcreate -L 4096M -n newvm-swap sysvg
virt-clone -o vm -n newvm -f /dev/mapper/sysvg-newvm--root \
 -f /dev/mapper/sysvg-newvm--swap

Then dump the XML for the new VM:

virsh dumpxml newvm > /tmp/newvm.xml

Edit /tmp/newvm.xml. Look for the ‘vcpu’ line and change the ‘cpuset’ number to the CPU core you want to dedicate to this VM. Then make this change effective:

virsh define /tmp/newvm.xml

You’ll also need to grab the MAC address from the XML. Keep this available as we’ll need it in a minute:

grep "mac address" /tmp/newvm.xml | awk -F"'" '{print $2}'
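The same extraction run against a sample libvirt XML line (a here-doc stands in for /tmp/newvm.xml here; the MAC value is made up for illustration). Splitting on the single quotes makes field 2 the address itself:

```shell
# Pull the MAC address out of a libvirt <mac address='...'/> line.
mac=$(awk -F"'" '/mac address/{print $2}' <<'EOF'
      <mac address='52:54:00:aa:bb:cc'/>
EOF
)
echo "$mac"
```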

Start up the new VM and connect to it via VNC as per the instructions in the Installation section above. Edit /etc/sysconfig/network and change the hostname to whatever you want to use for this new machine. Then edit /etc/sysconfig/network-scripts/ifcfg-eth0 and change the ‘HOSTNAME’ and ‘IPADDR’ to the settings you want for this new machine. Change the ‘HWADDR’ to the MAC address you obtained a moment ago, making sure that the letters are capitalised.

Then reboot and the new VM should be ready.

Backing up and migrating virtual machines

In order to take backups and to be able to move disk volumes from virtual machines to other hosts, we basically need to create disk image files from the LVM volumes. We’ll first snapshot the LVM volume and take the disk image from the snapshot, as this significantly reduces the amount of time that the VM needs to remain paused (i.e. effectively offline) for. We remove the snapshot at the end of the process so that the VM’s IO is not negatively affected.

This disk image, once created, can then be stored in a separate location as a backup, and/or transferred to another host server in order to copy or move the VM there.

So, make sure that the VM is paused or shut down, then create a LVM snapshot, then resume the VM, then create the image from the snapshot, then remove the snapshot:

virsh suspend vm
lvcreate -L 100M -n vm-root-snapshot -s /dev/sysvg/vm-root
virsh resume vm
dd if=/dev/mapper/sysvg-vm--root--snapshot \
 of=/tmp/vm-root.img bs=1M
lvremove /dev/mapper/sysvg-vm--root--snapshot

You can then do what you like with /tmp/vm-root.img – store it as a backup, move it to another server, and so forth.

In order to restore from it or create a VM from it on a new server, firstly use ‘lvcreate’ to create the LVM volume for restore if it isn’t already there, then copy the disk image to the LVM volume:

dd if=/tmp/vm-root.img of=/dev/mapper/sysvg-vm--root bs=1M

You may also need to perform this procedure for the swap partition depending on what you are trying to achieve.

You’ll also want to back up the current domain configuration for the virtual machine:

virsh dumpxml vm > /tmp/vm.xml

Then just store the XML file alongside the disk image(s) you’ve taken.

If you’re moving the virtual machine to a new server then once you’ve got the root and swap LVM volumes in place, you’ll need to create the domain for the virtual machine on the new server. Firstly edit the XML file and change the locations of disk volumes to the layout on the new server if it’s different to the old server, then define the new domain:

virsh define /tmp/vm.xml

You should then be able to start up the ‘vm’ virtual machine on the new server.

Resizing partitions on a guest

Let’s say we want to expand the root partition on our VM from 20G to 25G. Firstly make sure the VM is shut down, then use virt-filesystems to get the information we need for the resize procedure:

virsh shutdown vm
virt-filesystems -lh -a /dev/mapper/sysvg-vm--root

This will probably tell you that the available filesystem on that volume is /dev/sda1, which is how these tools see the virtual machine’s /dev/vda1 partition. We’ll proceed on the basis that this is the case, but if the filesystem device name is different then alter the command below accordingly.

Next, create a new volume, run virt-resize from the old volume to the new one, and finally set the new volume as the active root volume for our domain:

lvcreate -L 25G -n vm-rootnew sysvg
virt-resize --expand /dev/sda1 /dev/mapper/sysvg-vm--root /dev/mapper/sysvg-vm--rootnew
lvrename /dev/sysvg/vm-root /dev/sysvg/vm-rootold
lvrename /dev/sysvg/vm-rootnew /dev/sysvg/vm-root
virsh start vm

Then, when you’re sure the guest is running OK with the new resized partition, remove the old root partition volume:

lvremove /dev/mapper/sysvg-vm--rootold

==Personal bash Scripts==

This is a collection of bash scripts I have written over the years.

===bash_denyhost.sh===

##!/bin/bash
##For Gourd
#DIR_HOST=/usr/share/denyhosts/data
##For Endeavour
##DIR_HOST=/var/lib/denyhosts
#
#echo "Enter IP or HOST"
#read IP_HOST
#
#echo "/etc/hosts.deny"
#cat /etc/hosts.deny | grep $IP_HOST
#echo "hosts"
#cat $DIR_HOST/hosts | grep $IP_HOST
#echo "hosts-restricted"
#cat $DIR_HOST/hosts-restricted | grep $IP_HOST
#echo "hosts-root"
#cat $DIR_HOST/hosts-root | grep $IP_HOST
#echo "hosts-valid"
#cat $DIR_HOST/hosts-valid | grep $IP_HOST
#echo "user-hosts"
#cat $DIR_HOST/user-hosts | grep $IP_HOST

===bash_profile===

## .bash_profile
#
## Get the aliases and functions
#if [ -f ~/.bashrc ]; then
#	. ~/.bashrc
#fi
#
## User specific environment and startup programs
#PATH=$PATH:$HOME/bin:/sbin
#export PATH
#unset USERNAME

===bashrc===

## .bashrc
#
#PATH=$PATH:$HOME/bin:/sbin
## Source global definitions
#if [ -f /etc/bashrc ]; then
#	. /etc/bashrc
#fi
#export PATH="/opt/mono-1.2.4/bin:$PATH"
#export PKG_CONFIG_PATH="/opt/mono-1.2.4/lib/pkgconfig:$PKG_CONFIG_PATH"
#export MANPATH="/opt/mono-1.2.4/share/man:$MANPATH"
#export LD_LIBRARY_PATH="/opt/mono-1.2.4/lib:$LD_LIBRARY_PATH"
#export CLASSPATH=~/Download/BarabasiAlbertGenerator/jung-1.7.6.jar
#
## Keep 10000 lines in .bash_history (default is 500)
#export HISTSIZE=100000
#export HISTFILESIZE=100000
#
## User specific aliases and functions
#alias ll='ls -lh --color=no'
#alias ssh='ssh -X'
#alias doc='cd /net/home/Tbow/Desktop/documentation-notes'
#
##Work Servers and PC's
#alias sshpsplunk='ssh -l Tbow -L 8080:localhost:8000 pumpkin.unh.edu'
#alias sshptemp='ssh -l Tbow -L 8081:10.0.0.98:80 pumpkin.unh.edu'
#alias sshpipmitomato='ssh -l Tbow -L 8082:10.0.0.148:80 pumpkin.unh.edu'
#alias sshvncokra='ssh -L 5900:localhost:5900 admin@okra'
#alias ssht='ssh -l Tbow taro.unh.edu'
#alias sshg='ssh -l Tbow gourd.unh.edu'
#alias sshe='ssh -l Tbow einstein.unh.edu'
#alias sshl='ssh -l Tbow lentil.unh.edu'
#alias sshp='ssh -l Tbow pumpkin.unh.edu'
#alias sshj='ssh -l Tbow jalapeno.unh.edu'
#
##Reading notes script
#alias reading_notes='python ~/.bash_reading_notes.py'
#
##aliases that link to bash scripts
#alias denyhostscheck='sh ~/.bash_denyhosts'
#
## Wake on LAN commands
#alias wollen='sudo ether-wake 00:23:54:BC:70:F1'
#alias wollis='sudo ether-wake 00:1e:4f:9b:26:d5'
#alias wolben='sudo ether-wake 00:1e:4f:9b:13:90'
#alias wolnode2='sudo ether-wake 00:30:48:C6:F6:80'
#
##alias for grep command
#alias grepconfig="grep '^[^ #]'"
#
##Command to search log files with date
##cat /var/log/secure* | grep "`date --date="2013-05-14" +%b\ %e`">> temp_secure_log.txt

===bash_reading_notes.py===

##!/usr/bin/python
#
#import sys
#import re
#
#def printusage():
# print "Usage:"
# print "   reading_notes input_file"
# print "   reading_notes input_file btag etag"
# print "   reading_notes input_file btag etag output_file"
#
#try:
# sys.argv[1]
#except IndexError:
# print "Need input_file"
# printusage()
# sys.exit()
#
#if sys.argv[1] == "--help":
# printusage()
# sys.exit()
#elif len(sys.argv) == 4:
# ifile = sys.argv[1]
# btag = sys.argv[2]
# etag = sys.argv[3]
# ofile = "output_notes"
#elif len(sys.argv) == 5:
# ifile = sys.argv[1]
# btag = sys.argv[2]
# etag = sys.argv[3]
# ofile = sys.argv[4]
#else:
# ifile = sys.argv[1]
# btag = 'Xx'
# etag = 'Yy'
# ofile = "output_notes"
#
##Open file for read only access
#input = open(ifile,"r")
##Open file with write access
#output = open(ofile,"w")
#
##Organize initial data into a raw array.
#def splitin(infile):
# #Export data from the file into variable i
# i = infile.read()
# #Split o into an array based on "--- Page "
# i_split = i.split("--- Page ")
# #Write data to output array
# a = []
# for z in i_split:
#  if z.find(btag) >= 0:
#   a.append(z)
# return a
#
##Sifts through data array and outputs to file.
#def processdata(t, outfile):
# for m in t:
#  c = cleanup(m)
#  for u in c:
#   #output is a global variable
#   outfile.write(u)
#
##Process the array based on delimiters
#def cleanup(v):
# q = []
# q.append('--- ' + v[0:v.find('\n')])
# s = v.split(btag)
# for p in s:
#  if p.find(etag) >= 0:
#   q.append(p[0:p.find(etag)])
#   #Detects end of array and doesn't append
#   if p != s[-1]:
#    q.append('---')
# q.append('\n')
# return q
#
##This is the main function
#indata = splitin(input)
#processdata(indata, output)

===bash_useful_commands===

##!/bin/bash
##sites for grep:
##http://www.cyberciti.biz/faq/howto-use-grep-command-in-linux-unix/
##sites for find:
##http://www.codecoffee.com/tipsforlinux/articles/21.html
##http://www.cyberciti.biz/faq/find-large-files-linux/
##Site for using find with dates
##http://www.cyberciti.biz/faq/howto-finding-files-by-date/
##
##Command to search through log with specific date
#cat /var/log/secure* | grep "`date --date="2013-05-14" +%b\ %e`">> temp_secure_log.txt
##will look through files for specific date
#grep -HIRn "`date --date="2013-05-14" +%b\ %e`" /var/log/*
##Output all lines except commented ones
#grep -HIRn '^[^ #]' /etc/nsswitch.conf
##grep recursively with line number and filename
#grep -HIRn "ldap" /var/log/*
##find a file in a specific directory with particular string
#find /etc/ -name 'string'
##grep files that have been output by find
#grep -HIRn 'einstein' `find /etc/ -name 'mailman'`
##find a files above a certain size
#find . -type f -size +10000k -exec ls -lh {} \; | awk '{ print $9 ": " $5 }'
##Insert file1 into file at line 3
#sed -i '3r file1' file
##Insert text at line 3
#sed -i "3i this text" file
##grep for lines with no comments and contains /www or /log
#grep -EHIRn '^[^#].*(\/www|\/log).*' file
##grep for email addresses
#grep -EHIRni '^[^#].*[a-z0-9]{1,}@.*(com|net|org|uk|mil|gov|edu).*' file
##Commenting multiple lines in vim
## go to first char of a line and use blockwise visual mode (CTRL-V)
## go down/up until first char of all lines I want to comment out are selected
## use SHIFT-I and then type my comment character (# for ruby)
## use ESC to insert the comment character for all lines
#
##In Vim command mode use this to add a comment to the beginning of a range (N-M) of lines.
#:N,Ms/^/#/
#To take that comment away use:
#:N,Ms/^//

==RAID and Areca==

===Drive Life 2012-06-24===

This is a list of expected drive life figures from the manufacturers. All of these drives are in our RAIDs.

====Pumpkin====

ST3750640NS (p.23)
 8,760 power-on-hours per year.
 250 average motor start/stop cycles per year.
ST3750640AS (p.37)
 2400 power-on-hours per year.
 10,000 average motor start/stop cycles per year.
WDC WD7500AAKS-00RBA0
 Start/stop cycles 50,000

====Endeavour====

ST31000340NS
ST31000524AS
ST31000526SV
 MTBF 1,000,000 hours
 Start / Stop Cycles 50,000
 Non-Recoverable Errors 1 per 10^14
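
The rated figures above can be compared with what a drive has actually done via its SMART counters; a sketch using smartmontools (the device path is a placeholder, and drives behind the Areca controller need `areca_cli64 disk smart info` instead, as in the scripts further down):

```shell
# Print a drive's actual power-on hours and start/stop count so they can
# be checked against the manufacturer ratings listed above.
# Requires the smartmontools package; the device argument is a placeholder.
smart_usage() {
    smartctl -A "$1" | grep -E 'Power_On_Hours|Start_Stop_Count'
}
# usage (as root): smart_usage /dev/sda
```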

===Areca 1680 2010-01-10===

4.3 Driver Installation for Linux

This chapter describes how to install the SAS RAID controller driver on Red Hat Linux, SuSE and other versions of Linux. Before installing the SAS RAID driver on Linux, complete the following actions:

  1. Install and configure the controller and hard disk drives according to the instructions in Chapter 2 Hardware Installation.
  2. Start the system and then press Tab+F6 to enter the McBIOS RAID manager configuration utility. Use the McBIOS RAID manager to create the RAID set and volume set. For details, see Chapter 3, McBIOS RAID Manager.

If you are using a Linux distribution for which there is not a compiled driver available from Areca, you can copy the source from the SAS software CD or download the source from the Areca website and compile a new driver.

Compiled and tested drivers for Red Hat and SuSE Linux are included on the shipped CD. You can download updated versions of compiled and tested drivers for RedHat or SuSE Linux from the Areca web site at http://www.areca.com.tw. Included in these downloads is the Linux driver source, which can be used to compile the updated version driver for RedHat, SuSE and other versions of Linux. Please refer to the "readme.txt" file on the included Areca CD or website to make a driver diskette and install the driver on the system.

===Areca Scripts===

This is a collection of the Areca Scripts I have attempted to build.

====grep_areca_info.sh 2012-10-09====

#!/bin/bash
# Pull selected drive lines out of the collected areca_info file.
# $1: header pattern (date/host), $2: field pattern, $3: drive number
cat /net/data/taro/areca/areca_info | grep -A 52 "$1" | grep "#$3" | grep "$2"

====areca_info.sh 2014-01-14====

#!/bin/bash
# Usage: areca_info.sh <number-of-drives>
# Appends a timestamped summary of start/stop counts, power-on hours and
# temperatures for drives 1..$1 to the areca_info file.
info=areca_info
echo "++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++" >> $info
echo "`date +%Y-%m-%d_%T`_$HOSTNAME" >> $info
echo "------------------------------------------------------------------" >> $info
echo -e "Drv#\t`areca_cli64 disk smart info drv=1 | grep Attribute`" >> $info
echo "======================================================================================" >> $info
for i in `seq 1 $1`
do
 areca_cli64 disk smart info drv=$i > .areca_temp
 echo -e "#$i\t`cat .areca_temp | grep Start`" >> $info
done
for i in `seq 1 $1`
do
 areca_cli64 disk smart info drv=$i > .areca_temp
 echo -e "#$i\t`cat .areca_temp | grep Power-on`" >> $info
done
for i in `seq 1 $1`
do
 areca_cli64 disk info drv=$i > .areca_temp
 echo -e "#$i\t`cat .areca_temp | grep Temperature`" >> $info
done
rm .areca_temp
echo "------------------------------------------------------------------" >> $info
areca_cli64 hw info | grep Temp >> $info

====mydata.py 2012-06-19====

#!/usr/bin/python
import sqlite3

# Read the input file and print its "+++"-delimited records.
data = open("mydata", "r")
all_data = data.read()
data.close()
for i in all_data.split("+++"):
    print i
# Make a connection to the database mydata.db,
#	which is in the current directory.
conn = sqlite3.connect('mydata.db')
c = conn.cursor()
# Create the table if it does not exist yet
c.execute("""CREATE TABLE IF NOT EXISTS stocks
             (date text, trans text, symbol text, qty real, price real)""")
# Insert a row of data
c.execute("INSERT INTO stocks VALUES ('2006-01-05','BUY','RHAT',100,35.14)")
# Save (commit) the changes
conn.commit()
# Close the cursor and connection when done
c.close()
conn.close()

==LDAP and Email==

===LDAP setup 2009-05-20===

====Setting up through command line====

sudo -s (to become root)
env HOME=/root /usr/local/bin/adduser-npg
Make sure that in the adduser-npg script the location for luseradd is set to /usr/sbin/.
Add the user to the farm, npg, and domain-admins groups.

Something is still wrong with the lgroupmod

===LDAP_output.py===

#!/usr/bin/env python
#
# Copyright (C) 2011 Adam Duston 
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program.  If not, see <http://www.gnu.org/licenses/>.
#
import os,sys,getopt,random,ldif,ldap,subprocess
import ldap.modlist as modlist 
from string import letters,digits
from getpass import getpass
from crypt import crypt
from grp import getgrnam
from time import sleep
from shutil import copytree
#
ldap_server = "ldaps://einstein.farm.physics.unh.edu:636"
basedn      = "dc=physics,dc=unh,dc=edu"
domain      = "physics.unh.edu"
homedir     = "/home"
maildir     = "/mail"
admin_dn    = "cn=root,dc=physics,dc=unh,dc=edu"
users_ou    = "ou=People"
skel_dir    = "/etc/skel/"
#
def usage():
   """ 
       Print usage information
   """
   print "Usage: usergen.py [options] USERNAME"
   print "Creates a new NPG user account and adds to the LDAP database."
   print "Will prompt for necessary values if not provided."
   print "The --ldif and --disable options affect existing accounts,"
   print "and will not attempt to add new users to the LDAP database." 
   print " " 
   print "Options:"
   print "-d, --create-dirs" 
   print "    Create home and mail directories for the new account. "
   print "-f, --firstname NAME"
   print "    The user's first name."
   print "-l, --lastname NAME"
   print "    The user's last name."
   print "-m, --mail ADDRESS" 
   print "    The user's e-mail address." 
   print "-u, --uid UID"
   print "    The user's numerical UID value."
   print "-g, --gid GID"
   print "    The numerical value of the user's primary group."
   print "-s, --shell SHELL"
   print "    The user's login shell."
   print "-h, --help"
   print "     Display this help message and exit."
   print "--disable"
   print "    Disables logins by changing user's login shell to /bin/false." 
   print "--ldif"
   print "    Save user details to an LDIF file, but do not add the user to LDAP."
#     
def makeuser( login, firstname, lastname, mail, \
             uidnum, gidnum, shell, password ):
   """
       Returns a tuple containing full dn and a dictionary of
       attributes for the user information given. Output intended
       to be used for adding new user to LDAP database or generating
       an LDIF file for that user.
   """
#
   dn = "uid=%s,%s,%s" % (login,users_ou,basedn)
   attrs = {} 
   attrs['uid'] = [login]
   attrs['objectClass'] = ['top', 'posixAccount', 'shadowAccount',
                           'inetOrgPerson', 'organizationalPerson',
                           'person']
   attrs['loginShell'] = [shell]
   attrs['uidNumber'] = [uidnum]
   attrs['gidNumber'] = [gidnum] 
   attrs['mail'] = [mail] 
   attrs['homeDirectory'] = ['%s/%s' % (homedir, login)]
   attrs['cn'] = ['%s %s' % (firstname, lastname)]
   attrs['sn'] = [lastname]
   attrs['gecos'] = ['%s %s' % (firstname, lastname)]
   attrs['userPassword'] = [password]
#
   return (dn, attrs) 
#
def getsalt():
   """
       Return a two-character salt to use for hashing passwords.
   """
   chars = letters + digits
   return random.choice(chars) + random.choice(chars)
#
def user_exists(username):
   """
       Search LDAP database to verify whether username already exists.
       Return a boolean value. 
   """
#
   search_base = "%s,%s" % (users_ou,basedn)
   search_string = "(&(uid=%s)(objectClass=posixAccount))" % username
#    
   try:
       # Open LDAP Connection
       ld = ldap.initialize(ldap_server)
#        
       # Bind anonymously to the server
       ld.simple_bind_s("","") 
#
       # Search for username
       result = ld.search_s(search_base, ldap.SCOPE_SUBTREE, search_string, \
                             ['distinguishedName'])
#
       # Close connection
       ld.unbind_s()                     
#    
   except ldap.LDAPError, err: 
       print "Error searching LDAP database: %s" % err
       sys.exit(1) 
#
   # If user is not found, result should be an empty list. 
   if len(result) != 0: 
       return True
   else: 
       return False 
#
def get_uids():
   """
       Return a list of UID numbers currently in use in the LDAP database. 
   """
#    
   search_base = "%s,%s" % (users_ou, basedn)
   search_string = "(objectClass=posixAccount)"
#    
   try: 
       # Bind anonymously
       ld = ldap.initialize(ldap_server) 
       
       ld.simple_bind_s("","")
       
       # Get UIDS from all posixAccount objects. 
       result = ld.search_s(search_base, ldap.SCOPE_SUBTREE, search_string, \
                            ['uidNumber'])
#    
       ld.unbind_s()
#
   except ldap.LDAPError, err: 
           print "Error connecting to LDAP server: %s" % err 
           sys.exit(1)
#
   # Pull the list of UIDs out of the results. 
   uids = [result[i][1]['uidNumber'][0] for i in range(len(result))]
#
   # Sort UIDS and return
   return sorted(uids)
#
def create_ldif(dn, attrs):
   """
       Output an LDIF file to the current directory. 
   """
#
   try:
       file = open(str(attrs['uid'][0]) + ".ldif", "w")
#        
       writer = ldif.LDIFWriter(file) 
       writer.unparse(dn, attrs)
#
       file.close() 
#    
   except EnvironmentError, err:
       print "Unable to open file: %s" % err
       sys.exit(1)
#
def ldap_add(dn, attrs):
   """
       Add a user account with the given dn and attributes to the LDAP 
       database. Requires authentication as LDAP admin. If user added
       successfully return true, else return False. 
   """ 
#    
   try:
       # Open a connection to the ldap server
       ld = ldap.initialize(ldap_server)
#        
       print "\nAdding new user record. Authentication required." 
#        
       # Bind to the server as administrator     
       ld.simple_bind_s(admin_dn,getpass("LDAP Admin Password: "))
#        
       # Convert attrs to correct syntax for ldap add_s function
       ldif = modlist.addModlist(attrs) 
#
       # Add the entry to the LDAP server
       ld.add_s(dn, ldif) 
#
       # Close connection to the server
       ld.unbind_s()
#      
       print "User account added successfully." 
       return True
#
   except ldap.LDAPError, err: 
       print "Error adding new user: %s" % err
       return False 
#
def ldap_disable(username):
   """
       Disable login on a user account by setting the login shell to
       /bin/false.
   """
   try:
       # Open a connection to the ldap server
       ld = ldap.initialize(ldap_server)
#
       print "\nModifying user record. Authentication required."
#
       ld.simple_bind_s(admin_dn,getpass("LDAP Admin Password: "))
#    
       # Set the dn to modify and the search parameters 
       mod_dn = "uid=%s,%s,%s" % (username,users_ou,basedn)
       search_base = "%s,%s" % (users_ou,basedn) 
       search_string = "(&(uid=%s)(objectClass=posixAccount))" % username
#        
       # Get the current value of loginShell from the user LDAP entry.
       result = ld.search_s(search_base, ldap.SCOPE_SUBTREE, search_string, \
                            ['loginShell'])
#     
       oldshell = result[0][1]
       newshell = {'loginShell':['/bin/false']}
#    
       # Use modlist to configure changes
       diff = modlist.modifyModlist(oldshell,newshell)
#        
       # Modify the LDAP entry. 
       ld.modify_s(mod_dn,diff)
#
       # Unbind from the LDAP server
       ld.unbind_s()
#        
       # Return True if successful
       return True
#
   except ldap.LDAPError, err:
       print "Error connecting to LDAP server: %s" % err
       return False 
#   
def chown_recursive(path, uid, gid):
   """
       Recursively set ownership for the files in the given
       directory to the given uid and gid. 
   """
   command = "chown -R %i:%i %s" % (uid,gid,path) 
#
   subprocess.Popen(command, shell=True) 
#
def create_directories(username, uid, gid):
   """
       Create user home and mail directories. 
   """  
   # Create home directory
   try:
#        
       user_homedir = "%s/%s" % (homedir,username)
#
       # Copying skel dir to user's home dir makes the directory and
       # adds the skeleton files.
       copytree(skel_dir,user_homedir)
#
       chown_recursive(user_homedir,uid,gid) 
#
   except OSError, err:
       print "Unable to create home directory: %s" % err
       sys.exit(1) 
#
   # Create mail directory 
   try:
       # Get GID for the mail group
       mailgid = getgrnam('mail')[2]
#        
       user_maildir = "%s/%s" % (maildir,username)
#        
       os.mkdir(user_maildir)
       # There also needs to be a "cur" subdirectory or IMAP will cry.
       os.mkdir(user_maildir + "/cur") 
#
       chown_recursive(user_maildir, uid, mailgid)
#
   except OSError, err:
       print "Unable to create mail directory: %s" % err 
       sys.exit(1)
#    
def main(argv):
   """
       Parse command line arguments, prompt the user for any missing
       values that might be needed to create a new user. 
   """
   # Parse command line args using getopt
   try:
       opts, args = getopt.getopt(argv, "hf:l:m:u:g:s:d", \
                                  ["help", "ldif", "create-dirs","disable", "firstname=", \
                                   "lastname=", "mail=", "uid=", "gid=", \
                                   "shell="])
   except getopt.GetoptError:
       # An exception should mean misuse of command line options, so print
       # help and quit. 
       usage()
       sys.exit(2)
#    
   # Defining variables ahead of time should help later on when I want to
   # check whether they were set by command line arguments or not. 
   firstname       = ""
   lastname        = ""
   mail            = "" 
   uid             = ""
   gid             = ""
   shell           = "" 
#
   # Booleans for run options
   run_add      = True
   run_ldif     = False
   run_disable  = False
   create_dirs  = False     
#
   # Parse command line options
   for opt, arg in opts:
#        
       if opt in ("-h", "--help"):
           usage()
           sys.exit()
       elif opt in "--ldif":
           # If creating LDIF don't add a new user. 
           run_ldif = True
           run_add = False 
       elif opt in "--disable": 
           # If disabling a user, turn off adding new user
           run_disable = True
           run_add = False
       elif opt in ("-d","--create-dirs"):
           create_dirs = True 
       elif opt in ("-f", "--firstname"):
           firstname = arg
       elif opt in ("-l", "--lastname"):
           lastname = arg
       elif opt in ("-m", "--mail"):
           mail = arg
       elif opt in ("-u", "--uid"):
           uid = arg
       elif opt in ("-g", "--gid"):
           gid = arg 
       elif opt in ("-s", "--shell"):
           shell = arg
#    
   # Whatever was left over after parsing arguments should be the login name
   username = "".join(args)
#    
   # Make sure the user entered a username.  
   while not username:
       username = raw_input("Enter a username: ")
#    
   if run_disable:
       # Make sure the user exists before trying to delete it. 
       if user_exists(username):
           print "Warning: This will disable logins for user %s. Proceed?" \
                  % username
           answer = raw_input("y/N: ")
#            
           if answer in ("y","yes","Y"):
               # If user is disabled print success message and quit.
               # If an error occurs here quit anyway. 
               if ldap_disable(username):
                   print "Logins for user %s disabled." % username
                    sys.exit(0)
               else:
                   print "An error occurred. Exiting." 
                   sys.exit(1) 
           else:
               print "User account not modified."
               sys.exit(1)
       else:
           print "User %s does not exist in LDAP database. Exiting." % username 
           sys.exit(1) 
#    
   # Don't continue if this account already exists. 
   if run_add and user_exists(username):
       print "Error: account with username %s already exists." % username
       sys.exit(1) 
#
#    
   # Prompt user for any values that were not defined as a command line option
   while not firstname:
       firstname = raw_input("First Name: ")
   while not lastname: 
       lastname = raw_input("Last Name: ")
   while not mail:
       addr_default = "%s@%s" % (username,domain) 
       mail = raw_input("E-mail address [%s]: " % addr_default)
       if not mail:
           mail = addr_default
#    
   # Get the uid. Make sure it's not already in use. 
   while not uid: 
       # Get a list of in-use UID numbers
       existing_uids = get_uids()
#       
       # Get one plus the highest used uid        
       next_uid = int(existing_uids[-1]) + 1
#
       uid = raw_input("UID [%i]: " % next_uid)
#
       if not uid: 
           uid = str(next_uid) 
       elif uid in existing_uids: 
           print "UID " + uid + " is already in use." 
           uid = ""
#    
   # Get the user's default group. Use 5012 (npg) if none other specified. 
   while not gid:
       gid = raw_input("GID [5012]: ")
#    
       if not gid:
           gid = "5012" 
#    
   # Prompt for a shell, if user doesn't enter anything just use the default
   # Make sure the shell exists before accepting it.
   while not shell:
       shell = raw_input("Shell [/bin/bash]: ")
       if not shell:
           shell = "/bin/bash"
       elif not os.path.exists(shell):
           print shell + " is not a valid shell."
           shell = ""
#    
   # Get the password from the user. Make sure it's correct. 
   pwCorrect = False 
   while not pwCorrect:
       salt = getsalt()
       password1 = crypt(getpass(),salt)
       password2 = crypt(getpass('Retype password: '),salt)
       if password1 == password2:
           ldap_password = "{CRYPT}" + password1
           pwCorrect = True
       else:
           print "Passwords do not match. Try again."
#    
   # Build the account info        
   account = makeuser(username, firstname, lastname, mail, \
                      uid, gid, shell, ldap_password)
#    
   # Decide what to do with it. Only one of these should run at a time. 
   if run_add:
       if ldap_add(account[0],account[1]):
           if create_dirs:
               create_directories(username, int(uid), int(gid))
               print "User directories created successfully."
           else: 
               print "Create home and mail directories for %s?" % username
                answer = raw_input("y/N: ")
#
               if answer in ("y","Y","yes"):
                   create_directories(username, int(uid), int(gid))
       else:
           print "Create user failed." 
           sys.exit(1)
#
   if run_ldif:
       create_ldif(account[0],account[1])
#
if __name__ == "__main__":
   if os.geteuid() != 0: 
       print "This program must be run as an administrator."
   else:
       main(sys.argv[1:])

===Mailman Notes 2009-05-20===

In /etc/mailman/ there is a Python script pointing to /usr/lib/mailman/ via a symlink.

===SSSD Setup Files 2013-07-16===

====SSSD Notes====

  • yum install sssd libsss_sudo
  • authconfig --enablesssd --enablesssdauth --enablelocauthorize --update
  • /etc/sssd/sssd.conf:
  [sssd]
  config_file_version = 2
  services = nss, pam
  domains = default
  [nss]
  filter_users = root,ldap,named,avahi,haldaemon,dbus,radiusd,news,nscd
  [domain/default]
  ldap_tls_reqcert = never
  auth_provider = ldap
  ldap_schema = rfc2307bis
  krb5_realm = EXAMPLE.COM
  ldap_search_base = dc=physics,dc=unh,dc=edu
  id_provider = ldap
  ldap_id_use_start_tls = False
  chpass_provider = ldap
  ldap_uri = ldaps://einstein.unh.edu
  krb5_kdcip = kerberos.example.com
  cache_credentials = True
  ldap_tls_cacertdir = /etc/openldap/cacerts
  entry_cache_timeout = 600
  ldap_network_timeout = 3
  ldap_access_filter = (&(objectclass=shadowaccount)(objectclass=posixaccount))
  • /etc/nsswitch.conf:
  passwd     files sss
  shadow     files sss
  group      files sss
  sudoers    files sss
  • service sssd restart
  • Test settings: id (username)

Note: If the 'id' command does not return the proper information, try removing the CA certs from the /etc/openldap/cacerts/ directory. Always back that directory up before removing its contents.
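
The backup-then-clear step from the note above can be sketched as a small function. The default path matches the ldap_tls_cacertdir value used in this setup; `clear_cacerts` is a hypothetical name:

```shell
# Back up the CA certs directory before emptying it, per the note above.
# clear_cacerts is a hypothetical helper; the default path is the
# ldap_tls_cacertdir used in this setup.
clear_cacerts() {
    dir=${1:-/etc/openldap/cacerts}
    cp -a "$dir" "$dir.bak" || return 1   # keep a full copy first
    rm -f "$dir"/*                        # then remove the certs
}
# usage (as root): clear_cacerts
# restore with:    cp -a /etc/openldap/cacerts.bak/. /etc/openldap/cacerts/
```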

====sssd.conf====

[sssd]
config_file_version = 2
#
# Number of times services should attempt to reconnect in the
# event of a crash or restart before they give up
reconnection_retries = 3
#
# If a back end is particularly slow you can raise this timeout here
sbus_timeout = 30
services = nss, pam, sudo
#
# SSSD will not start if you do not configure any domains.
# Add new domain configurations as [domain/<NAME>] sections, and
# then add the list of domains (in the order you want them to be
# queried) to the "domains" attribute below and uncomment it.
# domains = LOCAL,LDAP
#
domains = default
[nss]
# The following prevents SSSD from searching for the root user/group in
# all domains (you can add here a comma-separated list of system accounts that
# are always going to be /etc/passwd users, or that you want to filter out).
filter_groups = root
#filter_users = root
filter_users = root,ldap,named,avahi,haldaemon,dbus,radiusd,news,nscd
reconnection_retries = 3
#
# The entry_cache_timeout indicates the number of seconds to retain an
# entry in cache before it is considered stale and must block to refresh.
# The entry_cache_nowait_timeout indicates the number of seconds to
# wait before updating the cache out-of-band. (NSS requests will still
# be returned from cache until the full entry_cache_timeout). Setting this
# value to 0 turns this feature off (default).
# entry_cache_timeout = 600
# entry_cache_nowait_timeout = 300
#
[pam]
reconnection_retries = 3
#
[sudo]
#
# Example domain configurations
# Note that enabling enumeration in the following configurations will have a
# moderate performance impact while enumerations are actually running, and
# may increase the time necessary to detect network disconnection.
# Consequently, the default value for enumeration is FALSE.
# Refer to the sssd.conf man page for full details.
#
# Example LOCAL domain that stores all users natively in the SSSD internal
# directory. These local users and groups are not visible in /etc/passwd; it
# now contains only root and system accounts.
# [domain/LOCAL]
# description = LOCAL Users domain
# id_provider = local
# enumerate = true
# min_id = 500
# max_id = 999
#
# Example native LDAP domain
# ldap_schema can be set to "rfc2307", which uses the "memberuid" attribute
# for group membership, or to "rfc2307bis", which uses the "member" attribute
# to denote group membership. Changes to this setting affect only how we
# determine the groups a user belongs to and will have no negative effect on
# data about the user itself. If you do not know this value, ask an
# administrator.
# [domain/LDAP]
# id_provider = ldap
# auth_provider = ldap
# ldap_schema = rfc2307
# ldap_uri = ldap://ldap.mydomain.org
# ldap_search_base = dc=mydomain,dc=org
# ldap_tls_reqcert = demand
# cache_credentials = true
# enumerate = False
#
# Example LDAP domain where the LDAP server is an Active Directory server.
#
# [domain/AD]
# description = LDAP domain with AD server
# enumerate = false
# min_id = 1000
#
# id_provider = ldap
# auth_provider = ldap
# ldap_uri = ldap://your.ad.server.com
# ldap_schema = rfc2307bis
# ldap_user_search_base = cn=users,dc=example,dc=com
# ldap_group_search_base = cn=users,dc=example,dc=com
# ldap_default_bind_dn = cn=Administrator,cn=Users,dc=example,dc=com
# ldap_default_authtok_type = password
# ldap_default_authtok = YOUR_PASSWORD
# ldap_user_object_class = person
# ldap_user_name = msSFU30Name
# ldap_user_uid_number = msSFU30UidNumber
# ldap_user_gid_number = msSFU30GidNumber
# ldap_user_home_directory = msSFU30HomeDirectory
# ldap_user_shell = msSFU30LoginShell
# ldap_user_principal = userPrincipalName
# ldap_group_object_class = group
# ldap_group_name = msSFU30Name
# ldap_group_gid_number = msSFU30GidNumber
# ldap_force_upper_case_realm = True
#
[domain/default]
enumerate = True
#
ldap_tls_reqcert = never
auth_provider = ldap
krb5_realm = EXAMPLE.COM
ldap_search_base = dc=physics,dc=unh,dc=edu
id_provider = ldap
ldap_id_use_start_tls = False
chpass_provider = ldap
ldap_uri = ldaps://einstein.unh.edu
krb5_kdcip = kerberos.example.com
cache_credentials = True
ldap_tls_cacertdir = /etc/openldap/cacerts
entry_cache_timeout = 600
ldap_network_timeout = 3
ldap_access_filter = (&(objectclass=shadowaccount)(objectclass=posixaccount))
#
#ldap_schema = rfc2307bis
ldap_schema = rfc2307
#ldap_group_member = memberUid
#ldap_group_search_base = ou=groups,dc=physics,dc=unh,dc=edu
ldap_rfc2307_fallback_to_local_users = True
#
sudo_provider = ldap
ldap_sudo_search_base = ou=groups,dc=physics,dc=unh,dc=edu
ldap_sudo_full_refresh_interval=86400
ldap_sudo_smart_refresh_interval=3600
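Repeated keys in sssd.conf are easy to introduce by hand-editing, and SSSD will quietly keep only one of the values. A minimal sketch of a duplicate-key check for an ini-style config; the sample file below is invented for illustration — point the awk at /etc/sssd/sssd.conf in practice.

```shell
# Flag keys that appear more than once within the same section of an
# ini-style config file. The sample file is illustrative only.
cat > /tmp/sssd.sample <<'EOF'
[domain/default]
id_provider = ldap
chpass_provider = ldap
auth_provider = ldap
chpass_provider = ldap
EOF
awk -F' *= *' '
  /^\[/ { sect = $0; next }                              # track current [section]
  NF == 2 { if (++seen[sect, $1] == 2) print "duplicate:", $1 }
' /tmp/sssd.sample
```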

Elog

Elog notes 2009-05-20

Info from the site https://midas.psi.ch/elog/adminguide.html

Download: http://midas.psi.ch/elog/download/

RPM Install Notes

Since version 2.0, ELOG ships as an RPM file, which eases the installation. Get the file elog-x.x.x-x.i386.rpm from the download section and execute as root "rpm -i elog-x.x.x-x.i386.rpm". This installs the elogd daemon in /usr/local/sbin and the elog and elconv programs in /usr/local/bin. The sample configuration file elogd.cfg together with the sample logbook is installed under /usr/local/elog, and the documentation goes to /usr/share/doc. The elogd startup script is installed at /etc/rc.d/init.d/elogd. To start the daemon, enter

/etc/rc.d/init.d/elogd start

It will listen on the port specified in /usr/local/elog/elogd.cfg, which is 8080 by default, so one can connect using any browser with the URL:

http://localhost:8080

To start the daemon automatically, enter:

chkconfig --add elogd
chkconfig --level 345 elogd on 

which will start the daemon on run levels 3,4 and 5 after the next reboot.

Note that the RPM installation creates a user and group elog, under which the daemon runs.
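To confirm which port an installed elogd will actually use, the port setting can be read straight out of elogd.cfg. A sketch with a made-up sample file; per the RPM layout above, the real file lives at /usr/local/elog/elogd.cfg.

```shell
# Extract the HTTP port from an elogd.cfg-style file. The sample here is
# illustrative; the RPM installs the real file at /usr/local/elog/elogd.cfg.
cat > /tmp/elogd.cfg.sample <<'EOF'
[global]
port = 8080
EOF
awk -F' *= *' 'tolower($1) == "port" { print $2 }' /tmp/elogd.cfg.sample
```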

Notes on running elog under apache

For cases where elogd should run on port 80 in parallel with an Apache server, Apache can be configured to serve ELOG from a subdirectory. Start elogd normally on port 8080 (or similar) as noted above and make sure it's working there. Then put the following redirection into the Apache configuration file:

Redirect permanent /elog http://your.host.domain/elog/
ProxyPass /elog/ http://your.host.domain:8080/

Make sure that the Apache modules mod_proxy.c and mod_alias.c are activated. Justin Dieters <enderak@yahoo.com> reports that mod_proxy_http.c is also required. The Redirect statement is necessary to automatically append a "/" to a request like http://your.host.domain/elog. Apache then works as a proxy and forwards all requests starting with /elog to the elogd daemon.

Note: Do not put "ProxyRequests On" into your configuration file. This option is not necessary and can be misused for spamming and proxy forwarding of otherwise blocked sites.

Because elogd uses links to itself (for example in the email notification and the redirection after a submit), it has to know under which URL it is running. If you run it under a proxy, you have to add the line:

     URL = http://your.proxy.host/subdir/

into elogd.cfg.

Notes on Apache:

Another possibility is to use the Apache web server as a proxy server allowing secure connections. To do so, Apache has to be configured accordingly and a certificate has to be generated. See some instructions on how to create a certificate, and see the notes on running elogd under Apache earlier on this page. Once configured correctly, elogd can be accessed via http://your.host and via https://your.host simultaneously.

The redirection statement has to be changed to

     Redirect permanent /elog https://your.host.domain/elog/
     ProxyPass /elog/ http://your.host.domain:8080/
and the following has to be added to the <VirtualHost ...:443> section in /etc/httpd/conf.d/ssl.conf:
     # Proxy setup for Elog
     <Proxy *>
     Order deny,allow
     Allow from all
     </Proxy>
     ProxyPass /elog/ http://host.where.elogd.is.running:8080/
     ProxyPassReverse /elog/ http://host.where.elogd.is.running:8080/
Then the following URL statement has to be written to elogd.cfg:
     URL = https://your.host.domain/elog

There are more detailed step-by-step instructions in the contributions section.

Using ssh: elogd can be accessed through an SSH tunnel. To do so, open an SSH tunnel like:

ssh -L 1234:your.server.name:8080 your.server.name

This opens a secure tunnel from your local host, port 1234, to the server host where the elogd daemon is running on port 8080. Now you can access http://localhost:1234 from your browser and reach elogd in a secure way.

Notes on Server Configuration

The ELOG daemon elogd can be executed with the following options :

elogd [-p port] [-h hostname/IP] [-C] [-m] [-M] [-D] [-c file] [-s dir] [-d dir] [-v] [-k] [-f file] [-x]

with :

   * -p <port>  TCP port number to use for the http server (if other than 80)
   * -h <hostname or IP address> in the case of a "multihomed" server, host name or IP address of the interface ELOG should run on
   * -C <url>  clone remote elogd configuration 
   * -m  synchronize logbook(s) with remote server
   * -M  synchronize with removing deleted entries
   * -l <logbook>  optionally specify logbook for -m and -M commands
   * -D   become a daemon (Unix only)
   * -c <file>  specify the configuration file (full path mandatory if -D is used)
   * -s <dir> specify resource directory (themes, icons, ...)
   * -d <dir> specify logbook root directory
   * -v  verbose output for debugging
   * -k  do not use TCP keep-alive
   * -f <file> specify PID file where elogd process ID is written when server is started
   * -x  enables execution of shell commands

It may also be used to generate passwords :

     elogd [-r pwd] [-w pwd] [-a pwd] [-l logbook]

with :

   * -r <pwd> create/overwrite read password in config file
   * -w <pwd> create/overwrite write password in config file
   * -a <pwd> create/overwrite administrative password in config file
   * -l <logbook> specify logbook for -r and -w commands

The appearance, functionality and behaviour of the various logbooks on an ELOG server are determined by the single elogd.cfg file in the ELOG installation directory.

This file may be edited directly from the file system, or from a form in the ELOG Web interface (when the Config menu item is available). In this case, changes are applied dynamically without having to restart the server. Instead of restarting the server, under Unix one can send a HUP signal like "killall -HUP elogd" to tell the server to re-read its configuration.

The many options of this unique but very important file are documented on the separate elogd.cfg syntax page.

To better control appearance and layout of the logbooks, elogd.cfg may optionally specify the use of additional files containing HTML code, and/or custom "themes" configurations. These need to be edited directly from the file system right now.

The meaning of the directory flags -s and -d is explained in the section covering the configuration options Resource dir and Logbook dir in the elogd.cfg description.

Notes on tarball install Make sure you have the libssl-dev package installed. Consult your distribution for details.

Expand the compressed TAR file with tar -xzvf elog-x.x.x.tar.gz. This creates a subdirectory elog-x.x.x where x.x.x is the version number. In that directory execute make, which creates the executables elogd, elog and elconv. These executables can then be copied to a convenient place like /usr/local/bin or ~/bin. Alternatively, a "make install" will copy the daemon elogd to SDESTDIR (by default /usr/local/sbin) and the other files to DESTDIR (by default /usr/local/bin). These directories can be changed in the Makefile. The elogd executable can be started manually for testing with :

elogd -p 8080

where the -p flag specifies the port. Without the -p flag, the server uses the standard WWW port 80. Note that ports below 1024 can only be used if elogd is started under root, or the "sticky bit" is set on the executable.

When elogd is started under root, it attaches to the specified port and then falls back to a non-root account. This is necessary to avoid security problems. It looks in the configuration file for the statements Usr and Grp. If found, elogd uses that user and group name to run under. The names must of course be present on the system (usually /etc/passwd and /etc/group). If the statements Usr and Grp are not present, elogd tries user and group elog, then the default user and group (normally nobody and nogroup). Care has to be taken that elogd, when running under the specific user and group account, has read and write access to the configuration file and logbook directories. Note that the RPM installation automatically creates a user and group elog.

If the program complains with something like "cannot bind to port...", it could be that the network is not started on the Linux box. This can be checked with the /sbin/ifconfig program, which must show that eth0 is up and running.

The distribution contains a sample configuration file elogd.cfg and a demo logbook in the demo subdirectory. If the elogd server is started in the elog-x.x.x directory, the demo logbook can be directly accessed with a browser by specifying the URL http://localhost:8080 (or whatever port you started the elog daemon on). If the elogd server is started in some other directory, you must specify the full path of the elogd.cfg file with the "-c" flag and change the Data dir = option in the configuration file to a full path like /usr/local/elog.

Once testing is complete, elogd will typically be started with the -D flag to run as a daemon in the background, like this :

elogd -p 8080 -c /usr/local/elog/elogd.cfg -D

Note that it is mandatory to specify the full path for the elogd.cfg file when started as a daemon. To test the daemon, connect to your host via:

http://your.host:8080/

If port 80 is used, the port can be omitted in the URL. If several logbooks are defined on a host, they can be specified in the URL :

http://your.host/<logbook>

where <logbook> is the name of the logbook.

The contents of the all-important configuration file elogd.cfg are described below:

[Tbow@gluon documentation-notes]$ ll elog*
-rw-r--r-- 1 Tbow npg 9.4K May 20  2009 elog
-rw-r--r-- 1 Tbow npg  623 Jan 26  2010 elog.roentgen.messages.problem
-rw-r--r-- 1 Tbow npg 1.2K Feb 11 19:12 elog_users_setup

elog_users_setup 2010-02-11

You can find some instructions/information here:

http://pbpl.physics.ucla.edu/old_stuff/elogold/current/doc/config.html#access

The thing to remember is that new users should end up as users of just the logbook they will be using, not global users. In the elogd.cfg file, my name is designated as an admin user and a global user that can log into any logbook to fix things. A user like Daniel, by contrast, can only log into the nuclear group logbooks, not my private one, or Karl's, or Maurik's. So, to add someone to the nuclear group's logbooks, for example, add the new person's user name where you find people like Daniel and Ethan, and enable self-registering at the top. Restart, then use self-register to set up the new person's password and account. Then go back into elogd.cfg, comment out the self-register option so other people cannot do the same, and restart. That should be the easiest way to do it, but you can read the info and decide.
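A sketch of what that looks like in elogd.cfg; the option names follow the elogd.cfg syntax page (verify against your ELOG version), and the logbook and user names here are invented:

```ini
; Sketch only -- logbook and user names are invented.
[Nuclear Logbook]
Self register = 1                     ; enable only while the new user registers
Login user = daniel, ethan, newperson
; after registration, comment "Self register" back out and restart elogd
```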

elog_roentgen_messages_problems 2010-01-26

Jan 26 09:48:00 roentgen elogd[15215]: elogd 2.7.8 built Dec  2 2009, 11:54:27 
Jan 26 09:48:00 roentgen elogd[15215]: revision 2278
Jan 26 09:48:00 roentgen elogd[15215]: Falling back to default group "elog"
Jan 26 09:48:01 roentgen elogd[15215]: Falling back to default user "elog"
Jan 26 09:48:01 roentgen elogd[15215]: FCKedit detected
Jan 26 09:48:01 roentgen elogd[15217]: Falling back to default group "elog"
Jan 26 09:48:01 roentgen elogd[15217]: Falling back to default user "elog"
Jan 26 09:48:01 roentgen elogd[15215]: ImageMagick detected
Jan 26 09:48:02 roentgen elogd[15215]: SSLServer listening on port 8080

CUPS

CUPS quota accounting 2009-06-10

Print quotas and accounting

CUPS has also basic page accounting and quota capabilities.

Every printed page is logged in the file /var/log/cups/page_log, so one can read out this file at any time and determine who printed how many pages. The system is based on the CUPS filters: they simply analyse the PostScript data stream to determine the number of pages, and therefore it depends on the quality of the PostScript generated by the applications whether the pages get correctly counted. If there is a paper jam, pages are already counted even though they do not get printed. Also, jobs which get rendered printer-ready on the client (Windows) will not get accounted correctly, as CUPS does not understand the proprietary language of the printer.
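The per-user counts can be pulled out of page_log with a one-liner. A sketch, assuming the classic page_log layout (printer, user, job id, bracketed date, page number, copies, ...); the sample lines below are invented:

```shell
# Sum copies per user from a CUPS page_log. The bracketed timestamp spans two
# awk fields, so the user is $2 and the copy count is $7. Sample data invented.
cat > /tmp/page_log.sample <<'EOF'
printer1 alice 1 [10/Jun/2009:09:00:00 +0000] 1 1 - host1
printer1 alice 1 [10/Jun/2009:09:00:01 +0000] 2 1 - host1
printer1 bob 2 [10/Jun/2009:09:05:00 +0000] 1 2 - host2
EOF
awk '{ pages[$2] += $7 } END { for (u in pages) print u, pages[u] }' /tmp/page_log.sample | sort
```

On a live system, point the awk at /var/log/cups/page_log instead of the sample file.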

In addition, one can restrict the amount of pages (or kBytes) which a user is allowed to print in a certain time frame. Such restrictions can be applied to the print queues with the "lpadmin" command.

lpadmin -p printer1 -o job-quota-period=604800 -o job-k-limit=1024
lpadmin -p printer2 -o job-quota-period=604800 -o job-page-limit=100

The first command means that within the "job-quota-period" (time always given in seconds, in this example we have one week) users can only print a maximum of 1024 kBytes (= 1 MByte) of data on the printer "printer1". The second command restricts printing on "printer2" to 100 pages per week. One can also give both "job-k-limit" and "job-page-limit" to one queue. Then both limits apply so the printer rejects jobs when the user already reaches one of the limits, either the 1 MByte or the 100 pages.

This is a very simple quota system: quotas cannot be given per-user, so a certain user's quota cannot be raised independently of the other users, for example if the user pays for his pages or has a more printing-intensive job. Counting of the pages is also not very sophisticated, as shown above.

So for more sophisticated accounting it is recommended to use add-on software which is specialized for this job. This software can limit printing per-user, can create bills for the users, use hardware page counting methods of laser printers, and even estimate the actual amount of toner or ink needed for a page sent to the printer by counting the pixels.

The most well-known and complete free software package for print accounting and quotas is PyKota:

http://www.librelogiciel.com/software/PyKota/

A simple system based on reading out the hardware counter of network printers via SNMP is accsnmp:

http://fritz.potsdam.edu/projects/cupsapps/

CUPS Basic Info 2009-06-11

This file contains some basic cups commands and info:

The device can be a parallel port, a network interface, and so forth. Devices within CUPS use Uniform Resource Identifiers ("URIs") which are a more general form of Uniform Resource Locators ("URLs") that are used in your web browser. For example, the first parallel port in Linux usually uses a device URI of parallel:/dev/lp1

Lookup printer info:

lpinfo -v
 network socket
 network http
 network ipp
 network lpd
 direct parallel:/dev/lp1
 serial serial:/dev/ttyS1?baud=115200
 serial serial:/dev/ttyS2?baud=115200
 direct usb:/dev/usb/lp0
 network smb

File devices have device URIs of the form file:/directory/filename while network devices use the more familiar method://server or method://server/path format. Printer queues usually have a PostScript Printer Description ("PPD") file associated with them. PPD files describe the capabilities of each printer, the page sizes supported, etc.

Adding a printer:

/usr/sbin/lpadmin -p printer -E -v device -m ppd

Managing printers:

/usr/sbin/lpadmin -p printer options

Starting and Stopping printer queues:

/usr/bin/enable printer
/usr/bin/disable printer

Accepting and Rejecting Print jobs:

/usr/sbin/accept printer
/usr/sbin/reject printer

Restrict Access:

/usr/sbin/lpadmin -p printer -u allow:all

Virtualization

Xen Basic Commands 2009-06-04

Basic management options

The following are basic and commonly used xm commands:

xm help [--long]: view available options and help text.
 use the xm list command to list active domains:
$ xm list
 Name                                ID  Mem(MiB)   VCPUs      State      Time(s)
 Domain-0                            0     520       2         r-----     1275.5
 r5b2-mySQL01                       13     500       1         -b----       16.1

xm create [-c] DomainName/ID: start a virtual machine. If the -c option is used, the start up process will attach to the guest's console.

xm console DomainName/ID: attach to a virtual machine's console.
xm destroy DomainName/ID: terminate a virtual machine, similar to a power off.
xm reboot DomainName/ID: reboot a virtual machine, runs through the normal system shut down and start up process.
xm shutdown DomainName/ID: shut down a virtual machine, runs a normal system shut down procedure.
xm pause
xm unpause
xm save
xm restore
xm migrate
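For scripting an action across all guests (e.g. a clean shutdown before host maintenance), the domain names can be scraped from xm list output. A sketch using a saved copy of the listing above; the header line and Domain-0 are skipped. The commented loop at the end shows the intended real-world use and assumes each guest handles a clean shutdown.

```shell
# Extract guest names from `xm list` output (skip the header and Domain-0).
cat > /tmp/xm_list.sample <<'EOF'
Name                                ID  Mem(MiB)   VCPUs      State      Time(s)
Domain-0                            0     520       2         r-----     1275.5
r5b2-mySQL01                       13     500       1         -b----       16.1
EOF
awk 'NR > 1 && $1 != "Domain-0" { print $1 }' /tmp/xm_list.sample
# On a live host (assumption: every guest shuts down cleanly):
#   for d in $(xm list | awk 'NR>1 && $1!="Domain-0" {print $1}'); do xm shutdown "$d"; done
```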

Research 2011-08-24

This is a collection of notes I took on virtualization over the summer.

KVM Commands

  1. Installing KVM
yum groupinstall KVM

Adding storage pools

virsh pool-dumpxml default > pool.xml

edit pool.xml # with new name and path

virsh pool-create pool.xml
virsh pool-refresh name
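A sketch of what the edited pool.xml might look like after changing the name and path; the pool name "vmpool" and the path are assumptions, substitute your own before running virsh pool-create:

```xml
<!-- Minimal directory-backed libvirt storage pool; name and path are made up. -->
<pool type='dir'>
  <name>vmpool</name>
  <target>
    <path>/data/VMs</path>
  </target>
</pool>
```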

XCP XE Commands

  • SR Creation
xe sr-create content-type=user type=nfs name-label=yendi shared=true device-config:server=10.0.0.237 device-config:serverpath=/data1/Xen/VMs/
xe pool-list
xe pool-param-set uuid=<pool-uuid> default-SR=<newly_created_SR_uuid>
xe sr-list
  • VM Creation from CD
xe vm-install template="Other install media" new-name-label=<vm-name>
xe vbd-list vm-uuid=<vm_uuid> userdevice=0 params=uuid --minimal
  • Using the UUID returned from vbd-list, set the root disk to not be bootable:
xe vbd-param-set uuid=<root_disk_uuid> bootable=false
  • CD Creation
xe cd-list
xe vm-cd-add vm=<vm-uuid> cd-name="<cd-name>" device=3
xe vbd-param-set uuid=<cd_uuid> bootable=true
xe vm-param-set uuid=<vm_uuid> other-config:install-repository=cdrom
  • Network Installation
xe sr-list
xe vm-install template="Other install media" new-name-label=<name_for_vm> sr-uuid=<storage_repository_uuid>
xe network-list bridge=xenbr0 --minimal
xe vif-create vm-uuid=<vm-uuid> network-uuid=<network-uuid> mac=random device=0
xe vm-param-set uuid=<vm_uuid> other-config:install-repository=<http://server/redhat/5.0>
  • Lookup dom-id for VNC connections
xe vm-list uuid=<vm-uuid> params=dom-id
  • Use this command to port forward to the local system
ssh -l root -L 5901:127.0.0.1:5901 <xcp_server>
  • Or by adding this line to iptables file on XCP host server
-A RH-Firewall-1-INPUT -p tcp -m tcp --dport 5901 -j ACCEPT
  • you can use this ssh command
ssh -l root -L 5901:tomato:5901 gourd.unh.edu
  • or you can ssh to gourd locally then to tomato using
ssh -l root -L 5901:127.0.0.1:5901 gourd.unh.edu
  • then on gourd run
ssh -l root -L 5901:127.0.0.1:5901 tomato
  • Virtual Disk Creation
xe vm-disk-add disk-size=10000000 device=4
xe vm-disk-list
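The port forwarded in the examples above follows from the dom-id: the guest's VNC display normally sits at 5900 + dom-id (an assumption based on the 5901 forwards used above). A sketch computing the port from saved `xe vm-list ... params=dom-id` output; the sample line is invented.

```shell
# Compute the VNC port (5900 + dom-id) from saved `xe vm-list` output.
# Sample line is invented; on a real host pipe `xe vm-list` in instead.
cat > /tmp/domid.sample <<'EOF'
dom-id ( RO)    : 1
EOF
awk -F': ' '/dom-id/ { print 5900 + $2 }' /tmp/domid.sample
```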

VMware ESXi Notes

VMWare ESXi

Key areas of interest for us:

  • vMotion
  • SAN
  • hypervisor
  • Pricing for gourd would be $2600

Xen Removal on Pumpkin 2009-08-26

When removing kernel-xen use these commands:

yum groupremove Virtualization
yum remove kernel-xenU
yum update

Xen and NVidia 2011-06-07

#Running Binary nVidia Drivers under Xen Host
#Sun, 06/22/2008 - 00:50 — jbreland
#
#In my last post I mentioned that I recently had a hardware failure that took down my server. I needed to get it back up and running again ASAP, but due to a large number of complications I was unable to get the original hardware up and running again, nor could I get any of the three other systems I had at my disposal to work properly. Seriously, it was like Murphy himself had taken up residence here. In the end, rather desperate and out of options, I turned to Xen (for those unfamiliar with it, it's similar to VMware or VirtualBox, but highly geared towards servers). I'd recently had quite a bit of experience getting Xen running on another system, so I felt it'd be a workable, albeit temporary, solution to my problem.
#
#Unfortunately, the only working system I had suitable for this was my desktop, and while the process of installing and migrating the server to a Xen guest host was successful (this site is currently on that Xen instance) it was not without its drawbacks. For one thing, there's an obvious performance hit on my desktop while running under Xen concurrently with my server guest, though fortunately my desktop is powerful enough that this mostly isn't an issue (except when the guest accesses my external USB drive to backup files; for some reason that consumes all CPU available for about 2 minutes and kills performance on the host). There were a few other minor issues, but by far the biggest problem was that the binary nVidia drivers would not install under Xen. Yes, the open source 'nv' driver would work, but that had a number of problems/limitations:
#
#   1. dramatically reduced video performance, both in video playback and normal 2d desktop usage
#   2. no 3d acceleration whatsoever (remember, this is my desktop system, so I sometimes use it for gaming)
#   3. no (working) support for multiple monitors
#   4. significantly different xorg.conf configuration
#
#In fairness, issues 1 and 2 are a direct result of nVidia not providing adequate specifications for proper driver development. Nonetheless, I want my hardware to actually work, so the performance was not acceptable. Issue 3 was a major problem as well, as I have two monitors and use both heavily while working. I can only assume that this is due to a bug in the nv driver for the video card I'm using (a GeForce 8800 GTS), as dual monitors should be supported by this driver. It simply wouldn't work, though. Issue 4 wasn't that significant, but it did require quite a bit of time to rework it, which was ultimately pointless anyway due to issue 3.
#
#So, with all that said, I began my quest to get the binary nVidia drivers working under Xen. Some basic searches showed that this was possible, but in every case the referenced material was written for much older versions of Xen, the Linux kernel, and/or the nVidia driver. I tried several different suggestions and patches, but none would work. I actually gave up, but then a few days later I got so fed up with performance that I started looking into it again and trying various different combinations of suggestions. It took a while, but I finally managed to hit on the special sequence of commands necessary to get the driver to compile AND load AND run under X. Sadly, the end result is actually quite easy to do once you know what needs to be done, but figuring it out sure was a bitch. So, I wanted to post the details here to hopefully save some other people a lot of time and pain should they be in a similar situation.
#
#This guide was written with the following system specs in mind:
#
#    * Xen 3.2.1
#    * Gentoo dom0 host using xen-sources-2.6.21 kernel package
#          o a non-Xen kernel must also be installed, such as gentoo-sources-2.6.24-r8
#    * GeForce 5xxx series or newer video card using nvidia-drivers-173.14.09 driver package
#
#Version differences shouldn't be too much of an issue; however, a lot of this is Gentoo-specific. If you're running a different distribution, you may be able to modify this technique to suit your needs, but I haven't tested it myself (if you do try and have any success, please leave a comment to let others know what you did). The non-Xen kernel should typically be left over from before you installed Xen on your host; if you don't have anything else installed, however, you can do a simple emerge gentoo-sources to install it. You don't need to run it, just build against it.
#
#Once everything is in place, and you're running the Xen-enabled (xen-sources) kernel, I suggest uninstalling any existing binary nVidia drivers with emerge -C nvidia-drivers. I had a version conflict when trying to start X at one point as the result of some old libraries not being properly updated, so this is just to make sure that the system's in a clean state. Also, while you can do most of this while in X while using the nv driver, I suggest logging out of X entirely before the modprobe line.
#
#Here's the step-by-step guide:
#
#   1. Run uname -r to verify the version of your currently running Xen-enabled kernel; eg., mine's 2.6.21-xen
#   2. verify that you have both Xen and non-Xen kernels installed: cd /usr/src/ && ls -l
#          * eg., I have both linux-2.6.21-xen and linux-2.6.24-gentoo-r8
#   3. create a symlink to the non-Xen kernel: ln -sfn linux-2.6.24-gentoo-r8 linux
#   4. install the nVidia-drivers package, which includes the necessary X libraries: emerge -av nvidia-drivers
#          * this will also install the actual driver, but it'll be built and installed for the non-Xen kernel, not your current Xen-enabled kernel
#   5. determine the specific name and version of the nVidia driver package that was just installed; this can be found by examining the output of emerge -f nvidia-drivers (look for the NVIDIA-Linux-* line)
#   6. extract the contents of the nVidia driver package: bash /usr/portage/distfiles/NVIDIA-Linux-x86_64-173.14.09-pkg2.run -a -x
#   7. change to the driver source code directory: cd NVIDIA-Linux-x86_64-173.14.09-pkg2/usr/src/nv/
#   8. build the driver for the currently-running Xen-enabled kernel: IGNORE_XEN_PRESENCE=y make SYSSRC=/lib/modules/`uname -r`/build module
#   9. assuming there are no build errors (nvidia.ko should exist), install the driver:
#          * mkdir /lib/modules/`uname -r`/video
#          * cp -i nvidia.ko /lib/modules/`uname -r`/video/
#          * depmod -a
#  10. if necessary, log out of X, then load the driver: modprobe nvidia
#  11. if necessary, reconfigure xorg.conf to use the nvidia binary driver rather than the nv driver
#  12. test that X will now load properly with startx
#  13. if appropriate, start (or restart) the display manager with /etc/init.d/xdm start
#
#Assuming all went well, you should now have a fully functional and accelerated desktop environment, even under a Xen dom0 host. W00t. If not, feel free to post a comment and I'll try to help if I can. You should also hit up the Gentoo Forums, where you can get help from people far smarter than I.
#
#I really hope this helps some people out. It was a royal pain in the rear to get this working, but believe me, it makes a world of difference when using the system.
#
#Sat, 07/12/2008 - 16:37 — Simon (not verified)
#Re: Running Binary nVidia Drivers under Xen Host
#
#Hi,
#
#A question:
#Why do I need to install the non-Xen kernel? Is this only to be able to properly install the nvidia driver using it's setup-script?
#Im using openSuSE 10 x64 with a almost recent kernel (2.6.25.4) and currently without xen.
#
#According to you writing the nvidia-driver/xen support each other (and compile fine under xen). My last state was that this setup is only possible for an old patched nvidia driver (with several performance and stability problems).
#
#Thanks ahead!
#
#PS: sorry for my bad english
#
#- Simon
#Thu, 07/17/2008 - 17:28 — jbreland
#jbreland's picture
#Re: Running Binary nVidia Drivers under Xen Host
#
#There are two parts to the binary driver package:
#
#    * the driver itself (the kernel module - nvidia.ko)
#    * the various libraries needed to make things work
#
#While the kernel module will indeed build against the Xen kernel (provided the appropriate CLI options are used, as discussed above), I was unable to get the necessary libraries installed using the Xen kernel. It might be possible to do this, but I don't know how. For me, it was easier to let my package manager (Portage, for Gentoo) install the package. This would only install when I'm using the non-Xen kernel. After that was installed, I could then switch back to the Xen kernel and manually build/install the kernel module.
#
#Of course, as I mentioned above, this was done on a Gentoo system. Other distributions behave differently, and I'm not sure what may be involved in getting the binary drivers setup correctly on them. If you have any luck, though, please consider posting your results here for the benefits of others.
#
#Good luck.
#
#--
#http://www.legroom.net/
#Wed, 10/08/2008 - 14:47 — Jonathan (not verified)
#Re: Running Binary nVidia Drivers under Xen Host
#
#I have it working on CentOS 5.2 with a Xen kernel as well, thanks to this I have TwinView available again:
#
#[root@mythtv ~]# dmesg | grep NVRM
#NVRM: loading NVIDIA UNIX x86_64 Kernel Module 100.14.19 Wed Sep 12 14:08:38 PDT 2007
#NVRM: builtin PAT support disabled, falling back to MTRRs.
#NVRM: bad caching on address 0xffff880053898000: actual 0x77 != expected 0x73
#NVRM: please see the README section on Cache Aliasing for more information
#NVRM: bad caching on address 0xffff880053899000: actual 0x77 != expected 0x73
#NVRM: bad caching on address 0xffff88005389a000: actual 0x77 != expected 0x73
#NVRM: bad caching on address 0xffff88005389b000: actual 0x77 != expected 0x73
#NVRM: bad caching on address 0xffff88005389c000: actual 0x77 != expected 0x73
#NVRM: bad caching on address 0xffff88005389d000: actual 0x77 != expected 0x73
#NVRM: bad caching on address 0xffff88005389e000: actual 0x77 != expected 0x73
#NVRM: bad caching on address 0xffff88005389f000: actual 0x77 != expected 0x73
#NVRM: bad caching on address 0xffff8800472f4000: actual 0x67 != expected 0x63
#NVRM: bad caching on address 0xffff880045125000: actual 0x67 != expected 0x63
#[root@mythtv ~]# uname -r
#2.6.18-92.1.13.el5xen
#[root@mythtv ~]#
#
#Now see if I can fix the bad caching errors... and see if I can run a dom host.
#
#Thanks heaps!
#Thu, 11/06/2008 - 21:18 — CentOS N00b (not verified)
#Re: Running Binary nVidia Drivers under Xen Host
#
#Can you explain how you got that to work. I'm still getting a error on the modprobe step.
#
#[root@localhost ~]# modprobe nvidia
#nvidia: disagrees about version of symbol struct_module
#FATAL: Error inserting nvidia (/lib/modules/2.6.18-92.el5xen/kernel/drivers/video/nvidia.ko): Invalid module format
#
#Any ideas anyone?
#Tue, 11/18/2008 - 15:32 — Jonathan (not verified)
#Re: Running Binary nVidia Drivers under Xen Host
#
#I have the following kernel related packages installed and am compiling some older drivers (100.14.19) as they work for my card in non-xen kernels as well:
#
#[root@mythtv ~]# rpm -qa kernel* | grep $(uname -r | sed -e 's/xen//') | sort
#kernel-2.6.18-92.1.18.el5
#kernel-devel-2.6.18-92.1.18.el5
#kernel-headers-2.6.18-92.1.18.el5
#kernel-xen-2.6.18-92.1.18.el5
#kernel-xen-devel-2.6.18-92.1.18.el5
#[root@mythtv ~]#
#
#I am booted into the xen kernel:
#
#[root@mythtv ~]# uname -r
#2.6.18-92.1.18.el5xen
#[root@mythtv ~]#
#
#I already have my source extracted like explained in the article and navigated to it. Inside the ./usr/src/nv folder of the source tree I issue the following command (from the article as well) which starts compiling:
#
#[root@mythtv nv]# IGNORE_XEN_PRESENCE=y make SYSSRC=/lib/modules/`uname -r`/build module
#
#Above command should start compilation. After compilation I copy the driver to my lib tree:
#
#[root@mythtv nv]# mkdir -p /lib/modules/`uname -r`/kernel/drivers/video/nvidia/
#[root@mythtv nv]# cp -i nvidia.ko /lib/modules/`uname -r`/kernel/drivers/video/nvidia/
#
#Then to load the driver:
#[root@mythtv ~]# depmod -a
#[root@mythtv ~]# modprobe nvidia
#
#To see if it was loaded I issue this command:
#[root@mythtv ~]# dmesg | grep NVIDIA
#
#which in my case outputs this:
#[root@mythtv ~]# dmesg |grep NVIDIA
#nvidia: module license 'NVIDIA' taints kernel.
#NVRM: loading NVIDIA UNIX x86_64 Kernel Module 100.14.19 Wed Sep 12 14:08:38 PDT 2007
#[root@mythtv ~]#
#
#I do not worry about the tainting of the kernel as it seems to work pretty well for me as well as for this error:
#
#[root@mythtv nv]# dmesg |grep NVRM
#NVRM: loading NVIDIA UNIX x86_64 Kernel Module 100.14.19 Wed Sep 12 14:08:38 PDT 2007
#NVRM: builtin PAT support disabled, falling back to MTRRs.
#[root@mythtv nv]#
#Sat, 07/12/2008 - 18:18 — Anonymous (not verified)
#Re: Running Binary nVidia Drivers under Xen Host
#
#A really intersting article you created here -- if there were not (I hope) a typo that destroys everything:
#
#The last paragraph reads:
#"Assuming all went well, you should not have a fully functional ..."
#
#The word "not" is disturbing me, and I have some hope that it should be a "now", as that would make sense with all your efforts.
#
#Can you please comment on this issue?
#
#Thanks
#Thu, 07/17/2008 - 17:30 — jbreland
#Re: Running Binary nVidia Drivers under Xen Host
#
#Oops. Yeah, that was a typo. I guess it would pretty much defeat the purpose of going through this exercise, considering it's already not working at the start. :-)
#
#Corrected now - thanks for pointing it out.
#
#--
#http://www.legroom.net/
#Fri, 07/18/2008 - 22:04 — SCAP (not verified)
#Re: Running Binary nVidia Drivers under Xen Host
#
#Xcellent! Works great for me... Now I have to choose between VMWare, VirtualBox and Xen...
#Thu, 07/31/2008 - 14:53 — Anonymous (not verified)
#Re: Running Binary nVidia Drivers under Xen Host
#
#works with 173.14.12 drivers too! ;)
#Wed, 09/24/2008 - 12:13 — VladSTar (not verified)
#Re: Running Binary nVidia Drivers under Xen Host
#
#Thanks for the solution - it works like a charm.
#Fedora Core 8, kernel 2.6.21.7-5, XEN 3.1.2-5 and latest nvidia driver (173.14.12).
#Thu, 10/23/2008 - 21:17 — Jamesttgrays (not verified)
#Re: Running Binary nVidia Drivers under Xen Host
#
#Hm.. strange - I wasn't able to get this to work with the newest version of the Nvidia drivers. it says something along the lines of "will not install to Xen-enabled kernel." Darned Nvidia - serves me right.. I ought've gotten me an ATI card!
#Wed, 11/05/2008 - 05:54 — kdvr (not verified)
#Re: Running Binary nVidia Dr...==> it wont build (HELP!)
#
#openSUSE 11.0 with linux-2.6.27 (its a fresh install and dont remember exact version of kernel, im under windows), Leadtek Winfast 9600GT
#
#For me it doesn't work. It won't build with: IGNORE_XEN_PRESENCE=y make SYSSRC=/lib/modules/`uname -r`/build module. It says something like this kernel is not supported, the same error as the setup, nothing about xen though.
#
#I need xen for studying purposes on my desktop pc, and running it without drivers is not an option as the cooler is blowing at full speed.
#Sun, 11/09/2008 - 00:55 — jbreland
#Re: Running Binary nVidia Dr...==> it wont build (HELP!)
#
#I think you need to have kernel headers installed in order to build the module. I'm not sure what you'd need to install in OpenSUSE, but query your package repository for kernel-headers, linux-headers, etc. and see if you can find something that matches your specific kernel.
#
#--
#http://www.legroom.net/
#Sat, 11/15/2008 - 19:42 — Anonymous (not verified)
#Re: Running Binary nVidia Dr...==> it wont build (HELP!)
#
#Same here, with 2.6.27 (x86/64) and 177.80 step 8 fails ("Unable to determine kernel version").
#Tue, 11/18/2008 - 15:29 — Anonymous (not verified)
#Re: Running Binary nVidia Dr...==> it wont build (HELP!)
#
#For OpenSuse 11 I got it working doing this
#cd /usr/src/linux
#make oldconfig && make scripts && make prepare
#
## Extract the source code from nvidia installer
#sh NVIDIA-Linux-whateverversion-pkg2.run -a -x
#
#cd NVIDIA-Linux-whateverversion-pkg2/usr/src/nv/
##build
#IGNORE_XEN_PRESENCE=y make SYSSRC=/usr/src/linux module
##should have built a kernel module
#cp nvidia.ko /lib/modules/`uname -r`/kernel/drivers/video/
#cd /lib/modules/`uname -r`/kernel/drivers/video/
#depmod -a
#modprobe nvidia
#
#glxinfo is showing direct rendering: yes
#So it seems to be working.
#Tue, 12/23/2008 - 00:42 — Andy (not verified)
#No luck with RHEL 5.1
#
#I tried different combinations of Red Hat kernels (2.6.18-92.1.22.el5xen-x86_64, 2.6.18-120.el5-xen-x86_64, 2.6.18-92.el5-xen-x86_64) and NVIDIA drivers (177.82, 173.08) but I couldn't get it to run. I succeed in compiling the kernel module but once I start the X server (either with startx or init 5) the screen just turns black and a hard reset is needed.
#
#/var/log/messages contains the lines:
#
#
#Dec 23 14:23:56 jt8qm1sa kernel: BUG: soft lockup - CPU#0 stuck for 10s! [X:8177]
#Dec 23 14:23:56 jt8qm1sa kernel: CPU 0:
#Dec 23 14:23:56 jt8qm1sa kernel: Modules linked in: nvidia(PU) ...
#
#I'm giving up now. Any hint anyone?
#
#Andy
#Thu, 01/22/2009 - 21:43 — Hannes (not verified)
#Error on starting X
#
#Juche jbreland,
#first of all thank you for your article, it gave me confidence that it will work some time. But this time is still to come.
#I Did everything as you said (exept that I unmerged the old nvidia Driver at the beginning) and Every time I want to start X I get this Error:
#
#
#(II) Module already built-in
#NVIDIA: could not open the device file /dev/nvidia0 (Input/output error).
#(EE) NVIDIA(0): Failed to initialize the NVIDIA graphics device PCI:1:0:0.
#(EE) NVIDIA(0): Please see the COMMON PROBLEMS section in the README for
#(EE) NVIDIA(0): additional information.
#(EE) NVIDIA(0): Failed to initialize the NVIDIA graphics device!
#(EE) Screen(s) found, but none have a usable configuration.
#
#I am using different Versions than you did:
#
#* sys-kernel/xen-sources: 2.6.21
#* app-emulation/xen: 3.3.0
#* x11-drivers/nvidia-drivers: 177.82
#* sys-kernel/gentoo-sources: 2.6.27-r7
#
#It looks like everything works out fine, lsmod has the nvidia module listed, the File /dev/nvidia0 exists and there is no Problem in Accessing it, I have the same problem if I try to start X as root user.
#
#Do you have any Idea?
#
#Hannes
#
#PS.: could you please post your obsolet xorg.conf configuration of the open Source Driver, that would help me too.
#Fri, 01/23/2009 - 22:17 — jbreland
#Re: Error on starting X
#
#Off the top of my head, no. You installed the new nvidia-drivers package first, right? All of those libraries are needed when loading the module. Did you get any build errors or errors when inserting the new module? Did you try rebooting just to be certain that an older version of the nvidia or nv module was not already loaded or somehow lingering in memory?
#
#As for the xorg.conf file using the nv driver, you can grab it from the link below, but keep in mind that it's not fully functional. It provided me with a basic, unaccelerated, single monitor desktop that was usable, but rather miserable.
#xorg.conf.nv
#
#--
#http://www.legroom.net/
#Sun, 01/25/2009 - 17:49 — Anonymous (not verified)
#Re: Error on starting X
#
#THX for your reply.
#I am still trying. Here my new enlightenments:
#I found out, that when I compile the nvidia Driver the regular way it uses this make command:
#
#make -j3 HOSTCC=i686-pc-linux-gnu-gcc CROSS_COMPILE=i686-pc-linux-gnu- LDFLAGS= IGNORE_CC_MISMATCH=yes V=1 SYSSRC=/usr/src/linux SYSOUT=/lib/modules/2.6.21-xen/build HOST_CC=i686-pc-linux-gnu-gcc clean module
#
#This is verry different from your suggestion of only running
#
#IGNORE_XEN_PRESENCE=y make SYSSRC=/lib/modules/`uname -r`/build module
#
#So I tried my long make Command with the Prefix of IGNORE_XEN_PRESENCE=y but this lead to a Build error (some other error then the "This is a XEN Kernel" Error) see below. Then I tired (why did not you take this approach):
#
#IGNORE_XEN_PRESENCE=y emerge -av nvidia-drivers
#
#which was an easier way to produce the same error:
#
#include/asm/fixmap.h:110: error: expected declaration specifiers or '...' before 'maddr_t'
#
#Very Strange, If I just run your short command, compilation runs without any Problem.
#
#Another thing that I found out was that the Log in dmsg is different when loadning the nvidia module under XEN:
#
#nvidia: module license 'NVIDIA' taints kernel.
#NVRM: loading NVIDIA UNIX x86 Kernel Module 177.82 Tue Nov 4 13:35:57 PST 2008
#
#or under a regular Kernel:
#
#nvidia: module license 'NVIDIA' taints kernel.
#nvidia 0000:01:00.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16
#nvidia 0000:01:00.0: setting latency timer to 64
#NVRM: loading NVIDIA UNIX x86 Kernel Module 177.82 Tue Nov 4 13:35:57 PST 2008
#
#There are two lines missing maybe important, maybe the reason why the /dev/nvidia0 device was not created.
#
#I will continue trying, if I Find some solution or new enlightenments I will keep you and your fan-club informed. Thanks for your comments,
#
#Hannes
#Wed, 01/28/2009 - 08:14 — Hannes (not verified)
#Re: Error on starting X
#
#Juche Jbreland,
#
#I give up I am one of those who could not make it.
#I will now use Virtual box. That ist sad but more efficient.
#
#Hannes
#Sat, 01/31/2009 - 01:56 — jbreland
#Re: Error on starting X
#
#Sorry to hear you couldn't get it working. Nothing wrong with VirtualBox, though. I use it myself everytime I need to run Windows. :-)
#--
#http://www.legroom.net/

Yum

RHEL to CentOS 2010-01-12

Display priority scores for all repositories

You can list all repositories set up on your system with yum repolist all. However, this does not show priority scores. Here's a one-liner for that. If no number is defined, the default is the lowest priority (99).

cat /etc/yum.repos.d/*.repo | sed -n -e "/^\[/h; /priority *=/{ G; s/\n/ /; s/ity=/ity = /; p }" | sort -k3n
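The sed one-liner above is terse; here is a rough Python equivalent, mainly to make the logic explicit. The function name is mine, and it assumes the usual one-priority-per-section repo file layout with no leading whitespace oddities.

```python
import glob
import re

def repo_priorities(pattern="/etc/yum.repos.d/*.repo"):
    """List [repo id, priority] pairs; repos without a priority
    line get yum-priorities' default of 99 (lowest)."""
    pairs = []
    for path in sorted(glob.glob(pattern)):
        repo = None
        with open(path) as f:
            for raw in f:
                line = raw.strip()
                section = re.match(r"\[(.+)\]$", line)
                if section:
                    repo = section.group(1)
                    pairs.append([repo, 99])
                elif repo and re.match(r"priority\s*=\s*\d+$", line):
                    pairs[-1][1] = int(line.split("=", 1)[1])
    return sorted(pairs, key=lambda p: p[1])
```

Like the one-liner, it sorts by priority so the highest-priority (lowest number) repos come first.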

Installing yum

Okay, okay -- I get it -- it is not CentOS. But, I still want yum, or to try to remove and repair a crippled set of yum configurations.

<!> First, take full backups and make sure they can be read. This may not work.

Then, you need the following package to get a working yum, which can be downloaded from any CentOS mirror:

  • centos-release

You should already have this package installed. You can check that with

rpm -q centos-release
centos-release-4-4.3.i386

If it is already on your system, please check that the yum configuration hasn't been pulled and is available on your system:

ls -l /etc/yum.repos.d/

This directory should contain only the files CentOS-Base.repo and CentOS-Media.repo. If those aren't there, make an 'attic' directory there and 'mv' a backup of the current content into it, to prepare for the reinstall of the centos-release package:

rpm -Uvh --replacepkgs centos-release.*.rpm

If centos-release isn't installed on your machine, you can drop --replacepkgs from the command above. Make a backup directory ./attic/ and move any other files present into it, so that you can back out of this process later if you decide you are in over your head.

Then you need the following packages:

CentOS 4

(available from where you also got the centos-release package):

   * yum
   * sqlite
   * python-sqlite
   * python-elementtree
   * python-urlgrabber 

CentOS 5

(available from where you also got the centos-release package):

   * m2crypto
   * python-elementtree
   * python-sqlite
   * python-urlgrabber
   * rpm-python
   * yum 

Download those into a separate directory and install them with

rpm -Uvh *.rpm

from that directory. As before, take a backup of /etc/yum.conf so that you might back out any changes.

Transana

This is for Dawn's research and graduate students. It is transcription software for videos.

Notes 2010-03-16

So far this is all the info I have from Bo. Transana should work now; the following information is what you will likely need during client setup.

Username: dawn
password: dawnpass (This is your mysql username and password)

MySQL Host: roentgen.unh.edu or 132.177.88.61

port 3306
Database: test (or you can create your own database on the MySQL server)

Transana Message Server: pumpkin.unh.edu

port 17595

Setup Instructions for Client Computers

Once you've got the network server up and running, you need the following information to run Transana 2.3-MU and connect to the servers:

  • A username and password
  • The DSN or IP address of the computer running MySQL
  • The name(s) of the database(s) for your project(s).
  • The DSN or IP address of the computer running the Transana Message Server.
  • The path to the common Video Storage folder.

Please note that all computers accessing the same database must enter the MySQL computer information in the same way, and must use the same Transana Message Server. They do NOT need to connect to the same common video storage folder, but subfolders within each separate video storage folder must be named and arranged identically.
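Since subfolders within each client's video storage folder must be named and arranged identically, a quick sanity check can save debugging time. This is a sketch of my own (not part of Transana); the function names are mine.

```python
import os

def subfolder_layout(root):
    """Relative paths of every subdirectory beneath root."""
    layout = set()
    for dirpath, dirnames, _files in os.walk(root):
        for name in dirnames:
            layout.add(os.path.relpath(os.path.join(dirpath, name), root))
    return layout

def layouts_match(root_a, root_b):
    """True when two video storage folders have identically
    named and arranged subfolders (file contents may differ)."""
    return subfolder_layout(root_a) == subfolder_layout(root_b)
```

Run it against two clients' mounted video folders before blaming Transana for missing videos.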

  1. Install Transana 2.3-MU.
  2. Start Transana 2.3-MU. You will see the following screen:
    • Enter your Username, password, the DSN or IP address of your MySQL Server, and the name of your project database.
  3. If this is the first time you've used Transana 2.3-MU, you will see this message next:
    • Click the "Yes" button to specify your Transana Message Server. You will see this screen:
    • Enter the DSN or IP address of your Transana Message Server.
  4. You need to configure your Video Root Directory before you will be able to connect to the project videos. If you haven't yet closed the Transana Settings dialog box, click the "Directories" tab. If you already closed it, go to the "Options" menu and select "Program Settings". You will see the following screen:

Under "Video Root Directory", browse to the common Video Storage folder.

We recommend also setting the "Waveform Directory" to a common waveforms folder so that each video only needs to go through waveform extraction once for everyone on the team to share.

Also, on Windows we recommend mapping a network drive to the common Video folder if it is on another server, rather than using machine-qualified path names. We find that mapping drive V: to "\\VideoServer\ProjectDirectory" produces faster connections to videos than specifying "\\VideoServer\ProjectDirectory" in the Video Root directory. If you have any questions about this, please feel free to tell me.

Transana Survival Guide 2013-08-24

Setup Instructions for Mac OS X Network Servers

The first step is to install MySQL. Transana 2.4-MU requires MySQL 4.1.x or later. We have tested Transana 2.4-MU with a variety of MySQL versions on a variety of operating systems without difficulty, but we are unable to test all possible combinations. Please note that MySQL 4.0.x does not support the UTF-8 character set, so should not be used with Transana 2.4-MU.

Install MySQL

Follow these directions to set up MySQL.

  1. Download the "Max" version of MySQL for Mac OS X, not the "Standard" version. It is available at http://www.mysql.com. 

NOTE: The extensive MySQL documentation available on the MySQL Web Site can help you make sense of the rest of these instructions. We strongly recommend you familiarize yourself with the MySQL Manual, as it can answer many of your questions.
  2. You probably want to download and install the MySQL GUI Tools as well. The MySQL Administrator is the easiest way to create and manage user accounts, in my opinion.
  3. Install MySQL from the Disk Image file. Follow the on-screen instructions. Be sure to assign a password to the root user account. (This prevents unauthorized access to your MySQL database by anyone who knows about this potential security hole.)
  4. You need to set the value of the "max_allowed_packet" variable to at least 8,388,608.

On OS X 10.5.8, using MySQL 4.1.22 on one computer and MySQL 5.0.83 on another, I edited the file /etc/my.cnf so that it included the following lines:

[mysqld]
     lower_case_table_names=1
     max_allowed_packet=8500000

This should work for MySQL 5.1 as well.

Using MySQL 4.1.14-max on OS X 10.3, I edited the "my.cnf" file in /etc, adding the following line to the [mysqld] section:

set-variable=max_allowed_packet=8500000

Exactly what you do may differ, of course.
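Because the setting appears in two different forms depending on the MySQL version, a small check of the config text avoids silent misconfiguration. This helper is my own sketch; the function name is an assumption, and 8,388,608 is the minimum from the instructions above.

```python
import re

def max_allowed_packet_ok(cnf_text, minimum=8388608):
    """Check a my.cnf body for max_allowed_packet >= minimum.
    Handles both the bare 'max_allowed_packet=N' form and the
    older 'set-variable=max_allowed_packet=N' form shown above."""
    for line in cnf_text.splitlines():
        m = re.search(r"max_allowed_packet\s*=\s*(\d+)", line)
        if m:
            return int(m.group(1)) >= minimum
    return False
```

Feed it the contents of /etc/my.cnf after editing, before restarting mysqld.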

Setup MySQL User Accounts

Here's what I do. It's the easiest way I've found to manage databases and accounts while maintaining database security. You are, of course, free to manage MySQL however you choose.

I have downloaded and installed the MySQL GUI Tools from the MySQL Web Site. These tools work nicely to manage databases and user accounts, as well as to manipulate data in MySQL tables. The tools have minor differences on different platforms, so the following directions are necessarily a bit vague on the details.

First I use the MySQL Administrator tool to create databases (called "catalogs" and "schemas" in the tool). Go to the "Catalogs" page and choose to create a new "schema."

Second, still within the MySQL Administrator tool, I go to the Accounts page. I create a new user account, filling in (at least) the User Name and Password fields on the General tab. I then go to the Schema Privileges tab, select a user account (in some versions you select a host, usually "%", under the user account; in others you select the user account itself) and a specific database (schema), then assign specific privileges. I generally assign all privileges except "Grant", but you may choose to try a smaller subset if you wish. The "Select," "Insert," "Update," "Delete," "Create," and "Alter" privileges are all required. You may assign privileges to multiple databases for a single user account if you wish. Once I'm done setting privileges, I save or apply the settings and move on to the next user account.

I have chosen to give my own user account "God-like" privileges within MySQL so that I can look at and manipulate all data in all databases without having to assign myself specific privileges. This also allows me to create new Transana databases from within Transana-MU rather than having to run the MySQL Administrator. To accomplish this, I used the MySQL Query tool to go into MySQL's "mysql" database and edit my user account's entry in the "users" table to give my account global privileges. Please note that this is NOT a "best practice" or a recommendation, and is not even a good idea for most users. I mention it here, however, as I know some users will want to do this.

These instructions are not meant to be detailed or comprehensive. They are intended only to help people get started with Transana-MU. Please see the documentation on the MySQL site for more information on manipulating databases, user accounts, and privileges.

Set up the Transana Message Server

Once you've set up MySQL user accounts, you should set up version 2.40 of the Transana Message Server. It does not need to be on the same server as MySQL, though it may be.

Follow these directions to set up the Message Server.

  1. If your server is running an earlier version of the Transana Message Server, you need to remove the old Message Server before installing the new one. See the Transana Message Server 2.40 Upgrade guide.
  2. Download TransanaMessageServer240Mac.dmg from the Transana web site. The link to the download page is in your Transana-MU Purchase Receipt e-mail.
  3. Install it on the server.
  4. If you want the Transana Message Server to start automatically when the server starts up, follow these instructions:
      1. Open a Terminal Window. Type su to work as a superuser with the necessary privileges.
      2. In your /Library/StartupItems folder, (NOT your /HOME/Library, but the root /Library folder,) create a subfolder called TransanaMessageServer.
      3. Enter the following (single line) command:
        • cp /Applications/TransanaMessageServer/StartupItems/* /Library/StartupItems/TransanaMessageServer
        • This will copy the files necessary for the TransanaMessage Server to auto-start.
      4. Reboot your computer now so that the Transana Message Server will start. Alternately, you can start the Transana Message Server manually, just this once, to avoid rebooting.
  5. If you want to start the Message Server manually for testing purposes, you will need to type the following (single line) command into a Terminal window:
    • sudo python /Applications/TransanaMessageServer/MessageServer.py

Configure the Firewall

If you will have Transana-MU users connecting to the MySQL and Transana Message Server instances you just set up from outside the network, you need to make sure port 3306 for MySQL and port 17595 for the Transana Message Server are accessible from outside the network. This will probably require explicitly configuring your firewall software to allow traffic through to these ports. Consult your firewall software's documentation to learn how to do this.
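After opening the firewall, it is worth checking from a client machine that the two ports actually answer. This is a generic reachability sketch of mine (not Transana tooling); the hostnames in the docstring are placeholders.

```python
import socket

def port_open(host, port, timeout=3.0):
    """True if a TCP connection to host:port succeeds, e.g.
    port_open("mysql.example.org", 3306) for MySQL or port
    17595 for the Transana Message Server."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

A False result from outside the network but True from inside usually means the firewall rule, not the service, is the problem.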

Creating a Shared Network Volume for Video Storage

Finally, you must create a shared network volume where users can store any video that will be shared with all Transana-MU users. Be sure to allocate sufficient disk space for all necessary video files. This volume may be on your Mac Server or on another computer, but it must be accessible to all Transana-MU users on your network.

If you will have Transana-MU users connecting to the MySQL and Transana Message Server instances you just set up from outside the network, they will need to set up their own parallel Video Storage volumes.

Now configure the client computers

Each user will need the following information to connect to the server programs you have just set up:

  • Username and password. (Don't create a single user account for users to share. The analytic process flows more smoothly when users can tell who else is interacting with the data, who has locked a record, and so on.)
  • The DSN or IP address of the MySQL Server computer.
  • The name of the database set up for the project.
  • The DSN or IP address of the Transana Message Server computer, if different from the MySQL Server computer.
  • Instructions on how to connect to the local network's common video storage folder.

Once you have this information, you are ready to start setting up client computers for the members of the project.

Setup with MySQL Workbench

Here's what I do. I don't know if it's the optimum way, but it works for me. You are, of course, free to manage MySQL however you choose.

I have downloaded and installed the MySQL Workbench Tool from the MySQL Web Site. This tool works nicely to manage databases (called schemas in Workbench) and user accounts, as well as to manipulate data in MySQL tables. The tools have minor differences on different platforms, so the following directions are necessarily a bit vague on the details. These directions assume you have already defined your Server Instance in the MySQL Workbench.

If I need to create a new database for Transana, I start at the MySQL Workbench Home screen. On the left side, under "Open Connection to Start Querying", I double-click the connection for the server I want to use and enter my password. This takes me to the SQL Editor. In the top toolbar, I select the "Create a new schema in the connected server" icon, which for me is the 3rd one. I give my schema (my Transana database) a name (avoiding spaces in the name), and press the Apply button. The "Apply SQL Script to Database" window pops up, and I again press the Apply button, followed by pressing the "Finish" button when the script is done executing. My Transana Database now exists, so I close the SQL Editor tab and move on to adding or authorizing user accounts.

Each person using Transana-MU needs a unique user account with permission to access the Transana database. To create user accounts for an existing database, I start at the MySQL Workbench Home screen. On the right side, under "Server Administration", I double-click the server instance I want to work with and enter my password. Under "Security", I select "Users and Privileges". At the bottom of the screen, I press the "Add Account" button, then provide a Login Name and password. I then press the Apply button to create the user account. After that, I go to the Schema Privileges tab, select the user account I just created, and click the "Add Entry..." button on the right-hand side of the screen. This pops up the New Schema Privilege Definition window. I select Any Host under hosts, and Selected Schema, followed by the name of the database I want to provide access to. Once this is done, the bottom section of the screen, where privileges are managed, will be enabled. Click the "Select ALL" button. Make sure that the "GRANT" or "GRANT OPTION" right is NOT checked, then press the "Save Changes" button.

These instructions are not meant to be detailed or comprehensive. They are intended only to help people get started with Transana-MU. Please see the documentation on the MySQL site for more information on manipulating databases, user accounts, and privileges.

Appendix A: Resetting MySQL Root Password

http://dev.mysql.com/doc/refman/5.0/en/resetting-permissions.html

On Unix, use the following procedure to reset the password for all MySQL root accounts. The instructions assume that you will start the server so that it runs using the Unix login account that you normally use for running the server. For example, if you run the server using the mysql login account, you should log in as mysql before using the instructions. Alternatively, you can log in as root, but in this case you must start mysqld with the --user=mysql option. If you start the server as root without using --user=mysql, the server may create root-owned files in the data directory, such as log files, and these may cause permission-related problems for future server startups. If that happens, you will need to either change the ownership of the files to mysql or remove them.

  1. Log on to your system as the Unix user that the mysqld server runs as (for example, mysql).
  2. Locate the .pid file that contains the server's process ID. The exact location and name of this file depend on your distribution, host name, and configuration. Common locations are /var/lib/mysql/, /var/run/mysqld/, and /usr/local/mysql/data/. Generally, the file name has an extension of .pid and begins with either mysqld or your system's host name.
  3. Stop the MySQL server by sending a normal kill (not kill -9) to the mysqld process, using the path name of the .pid file in the following command:
shell> kill `cat /mysql-data-directory/host_name.pid`
Use backticks (not forward quotation marks) with the cat command. These cause the output of cat to be substituted into the kill command.
  4. Create a text file containing the following statements. Replace the password with the password that you want to use.
UPDATE mysql.user SET Password=PASSWORD('MyNewPass') WHERE User='root';
FLUSH PRIVILEGES;
Write the UPDATE and FLUSH statements each on a single line. The UPDATE statement resets the password for all root accounts, and the FLUSH statement tells the server to reload the grant tables into memory so that it notices the password change.
  5. Save the file. For this example, the file will be named /home/me/mysql-init. The file contains the password, so it should not be saved where it can be read by other users. If you are not logged in as mysql (the user the server runs as), make sure that the file has permissions that permit mysql to read it.
  6. Start the MySQL server with the special --init-file option:
shell> mysqld_safe --init-file=/home/me/mysql-init &
The server executes the contents of the file named by the --init-file option at startup, changing each root account password.
  7. After the server has started successfully, delete /home/me/mysql-init.

You should now be able to connect to the MySQL server as root using the new password. Stop the server and restart it normally.
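Since the init file briefly holds the new root password in the clear, it helps to create it with owner-only permissions from the start. This is a small sketch of my own (the function name is mine), matching the statements in the procedure above.

```python
import os

def write_mysql_init(path, new_password):
    """Write the two password-reset statements to `path` and
    restrict the file to owner-only access, since it contains
    the new password in the clear."""
    with open(path, "w") as f:
        f.write("UPDATE mysql.user SET Password=PASSWORD('%s') "
                "WHERE User='root';\n" % new_password)
        f.write("FLUSH PRIVILEGES;\n")
    os.chmod(path, 0o600)
```

Remember to delete the file once the server has restarted with the new password.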

Network

Network Manager: Fedora 17 Static IP 2013-03-12

First, stop the GNOME network manager and disable it from starting at boot

systemctl stop NetworkManager.service 
systemctl disable NetworkManager.service

Now start the network service and set it to run on boot

systemctl restart network.service 
systemctl enable network.service

Check which interface(s) you want to set to static

[root@server ~]# ifconfig
em1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.1.148  netmask 255.255.255.0  broadcast 192.168.1.255
        inet6 fe80::dad3:85ff:feae:dd4c  prefixlen 64  scopeid 0x20<link>
        ether d8:d3:85:ae:dd:4c  txqueuelen 1000  (Ethernet)
        RX packets 929  bytes 90374 (88.2 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1010  bytes 130252 (127.1 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
        device interrupt 19

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 16436
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 32  bytes 3210 (3.1 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 32  bytes 3210 (3.1 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

Now you will need to edit the config file for that interface

vi /etc/sysconfig/network-scripts/ifcfg-em1 

Edit the config to look like the following. You will need to change BOOTPROTO from dhcp to static and add the IPADDR, NETMASK, BROADCAST, and NETWORK variables. Also make sure ONBOOT is set to yes.

UUID="e88f1292-1f87-4576-97aa-bb8b2be34bd3"
NM_CONTROLLED="yes"
HWADDR="D8:D3:85:AE:DD:4C"
BOOTPROTO="static"
DEVICE="em1"
ONBOOT="yes"
IPADDR=192.168.1.2
NETMASK=255.255.255.0
BROADCAST=192.168.1.255
NETWORK=192.168.1.0
GATEWAY=192.168.1.1

Now to apply the settings restart the network service

systemctl restart network.service
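If you set up static addresses on several machines, templating the ifcfg body avoids typos. This is a sketch of mine (the function name is an assumption); the field set mirrors the example file above.

```python
def ifcfg_static(device, hwaddr, ipaddr, netmask, broadcast,
                 network, gateway, uuid=None):
    """Render the body of an ifcfg-<device> file for a static
    address, matching the example layout above."""
    lines = []
    if uuid:
        lines.append('UUID="%s"' % uuid)
    lines += [
        'NM_CONTROLLED="yes"',
        'HWADDR="%s"' % hwaddr,
        'BOOTPROTO="static"',
        'DEVICE="%s"' % device,
        'ONBOOT="yes"',
        'IPADDR=%s' % ipaddr,
        'NETMASK=%s' % netmask,
        'BROADCAST=%s' % broadcast,
        'NETWORK=%s' % network,
        'GATEWAY=%s' % gateway,
    ]
    return "\n".join(lines) + "\n"
```

Write the result to /etc/sysconfig/network-scripts/ifcfg-em1 (or the appropriate device name) and restart the network service as above.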

Network Monitoring Tools 2009-05-29

The best tools for traffic monitoring (these are in the CentOS repo)

wireshark
iptraf
ntop
tcpdump

Other products found

vnStat
bwm-ng

Misc

denyhosts-undeny.py 2013-05-31

#!/usr/bin/env python
import os
import sys
#The only argument should be the host to undeny.
try:
    goodhost = sys.argv[1]
except IndexError:
    print "Please specify a host to undeny!"
    sys.exit(1)
#These commands start/stop denyhosts. Set these as appropriate for your system.
stopcommand = '/etc/init.d/denyhosts stop'
startcommand = '/etc/init.d/denyhosts start'
#Check which distribution we're on; awk prints the first word of
#/etc/redhat-release ("Red" for Red Hat, "CentOS", or "Fedora").
distrocheckcommand = "awk '// {print $1}' /etc/redhat-release"
distro = os.popen(distrocheckcommand).read().rstrip('\n')
#Editing the denyhosts data files requires root privileges.
user = os.popen("whoami").read().rstrip('\n')
if user != 'root':
    print "Sorry, this script requires root privileges."
    sys.exit(1)
#The files we should be purging faulty denials from.
if distro == 'Red':
    filestoclean = ['/etc/hosts.deny',
                    '/var/lib/denyhosts/hosts-restricted',
                    '/var/lib/denyhosts/sync-hosts',
                    '/var/lib/denyhosts/suspicious-logins']
elif distro == 'CentOS':
    filestoclean = ['/etc/hosts.deny',
                    '/usr/share/denyhosts/data/hosts-restricted',
                    '/usr/share/denyhosts/data/sync-hosts',
                    '/usr/share/denyhosts/data/suspicious-logins']
elif distro == 'Fedora':
    print "This script is not yet supported on Fedora systems!"
    sys.exit(1)
else:
    print "This script is not yet supported on your distribution, or I can't properly detect it."
    sys.exit(1)
#Stop denyhosts so that it doesn't rewrite the files underneath us.
os.system(stopcommand)
#Remove any line mentioning the host from each file.
for targetfile in filestoclean:
    purgecommand = "sed -i '/" + goodhost + "/ d' " + targetfile
    os.system(purgecommand)
#Now that the faulty denials have been removed, it's safe to restart denyhosts.
os.system(startcommand)
sys.exit(0)

Temperature Shutdown Procedure 2009-06-06

Room temp greater than 25C

  • If the outdoor temp is lower than indoor, open the windows.
  • Shut down any unused workstations.
  • Shut down any workstations being built or configured.

Room temp greater than 28C

  • Follow procedure for >25C.
  • Write an email to npg-users@physics.unh.edu:

Subject: Systems being shut down due to temperature

Body: Due to high temperatures in the server room, we will be performing an emergency shutdown on the following servers: gourd, pepper, and tomato. These systems will be going offline in the next 10 minutes, so please save your work immediately. We apologize for any inconvenience. Thank you, (Your name)

  • Wait 10 minutes. If the temperature is still too high, shut down gourd, pepper, and tomato.

Room temp greater than 30C

  • Follow procedure for >25C, then >28C.
  • Wait 10 minutes after shutting down gourd, pepper, and tomato.
  • If temperatures are still greater than 30C, write an email to npg-users@physics.unh.edu:

Subject: Systems being shut down due to temperature

Body: Due to high temperatures in the server room, we will be performing an emergency shutdown on the following server: endeavour. This system will be going offline in the next 10 minutes, so please save your work immediately. We apologize for any inconvenience.

  • Wait 10 minutes. If the temperature is still too high, shut down endeavour.
  • Wait 5 minutes. If the temperature is still too high, write an email to npg-users@physics.unh.edu:

Subject: Systems being shut down due to temperature

Body: Due to high temperatures in the server room, we will be performing an emergency shutdown on the following servers: taro and pumpkin. These systems will be going offline in the next 10 minutes, so please save your work immediately. We apologize for any inconvenience.

  • Wait 10 minutes. If the temperature is still too high, shut down taro and pumpkin.
  • Wait 5 minutes. If the temperature is still too high, shut down lentil.

Room temp greater than 35C

  • Immediately shut down all workstations, gourd, pepper, tomato, lentil, endeavour, taro, and pumpkin, in that order. If the temperature drops to another category, follow the instructions for that category.
  • Wait 5 minutes. If the temperature is still too high, send an email to npg-users@physics.unh.edu:

Subject: Critical temperatures in server room

Body: The server room temperatures are now critical. In order to avoid hardware damage, we have performed an emergency shutdown of all servers, and the mail server will be shut down shortly. We apologize for any inconvenience.

  • Wait 15 minutes so that people can get your notification.
  • If the temperature has not dropped below 35C, shut down einstein.
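The escalation tiers above can be sketched as a simple threshold check. This is only an illustration of the ordering, not official policy:

```python
def shutdown_tier(temp_c):
    # Return which tier of the shutdown procedure applies at this room temp.
    if temp_c > 35:
        return "critical: all servers, einstein last"
    if temp_c > 30:
        return "endeavour, then taro and pumpkin, then lentil"
    if temp_c > 28:
        return "gourd, pepper, tomato"
    if temp_c > 25:
        return "unused workstations"
    return "no action"
```

Each tier also implies the actions of the tiers below it, as the procedure states.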

NUT UPS 2009-05-22

I am trying to get nut (network ups tool) working on gourd.

Initial install

Create group "nut" and user "ups", then run:
./configure --with-user=ups --with-group=nut --with-usb
make
sudo -s
make install
If you want to know the build path info, use these commands:
 make DESTDIR=/tmp/package install
 make DESTDIR=/tmp/package install-conf

Create dir to be used by user

mkdir -p /var/state/ups
chmod 0770 /var/state/ups
chown root:nut /var/state/ups

To set up the correct permissions for the USB device, you may need to set up (operating system dependent) hotplugging scripts. Sample scripts and information are provided in the scripts/hotplug and scripts/udev directories. For most users, the hotplugging scripts will be installed automatically by "make install".

Go to /usr/local/ups/etc/ups.conf and add the following lines

[myupsname]
driver = mydriver
port = /dev/ttyS1
desc = "Workstation"
With the usbhid-ups driver the port field is ignored.

Start drivers for hardware

/usr/local/ups/bin/upsdrvctl -u root start

or you can use

/usr/local/ups/bin/upsdrvctl -u ups start

Fedora 11 Root Login 2009-08-11

This is how to log in as root in Fedora 11; root GUI logins are disabled by default, allowing only terminal access.

First:

cd /etc/pam.d/

Then find the

gdm 
gdm-password 
gdm-fingerprint 

files and comment out or remove the line that says this:

auth required pam_succeed_if.so user != root quiet
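Rather than editing the three gdm files by hand, the change can be scripted. A sketch (the helper name is mine) that comments out the offending line in a file's text:

```python
def disable_root_block(pam_text):
    # Comment out the pam_succeed_if line that blocks root logins,
    # leaving already-commented lines untouched.
    out = []
    for line in pam_text.splitlines(True):
        if ("pam_succeed_if.so" in line and "user != root" in line
                and not line.lstrip().startswith("#")):
            out.append("#" + line)
        else:
            out.append(line)
    return "".join(out)
```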

hardinfo.sh 2009-09-15

#!/bin/bash
#
hardinfo -r -m devices.so -m network.so -m computer.so -f text > hardinfo.$HOSTNAME
mail -s "$HOSTNAME hardinfo file" Tbow@physics.unh.edu < hardinfo.$HOSTNAME

DNS Setup 2009-10-15

Things to think about when setting up DNS in a small VM

  1. IP address and changing our FQDN address on root server
  2. change all resolv.conf on all clients to point to new DNS address
  3. Give VM a domain name (probably compton)

Automounting for Macs 2013-06-02

I have attached the relevant files for automounting the NFS directories on a Mac. Drop these in /etc/ and then reload the automount maps with:

automount -vc

Please note that with these automount maps we break some of the (unused) automatic mounting functionality that Mac tries to have, since we overwrite the /net entry.

OLD WAY OF DOING THIS UNDER 10.4

For the 10.4 folks, here are the brief instructions on doing automount on 10.4. This also works with 10.5 and 10.6 but is cumbersome. Please note that a laptop does not have a static IP and will thus never be allowed past the firewall!

The Mac OS X automounter is configured with NetInfo

Create a new entry under the "mounts" subdirectory.

Name the entry "servername:/dir"

Add properties:

"dir"   "/mnt/einstein"  ( Directory where to mount)
"opts" "resvport"         (Mount options)
"vfstype" "nfs"             (Type of mount)

Notify the automounter: kill -1 `cat /var/run/automount.pid`. Note: the new directory is a special link to /automount/static/mnt/einstein

auto_data

pumpkin1	pumpkin.unh.edu:/data1
pumpkin2	pumpkin.unh.edu:/data2
pumpkin		pumpkin.unh.edu:/data1
gourd		gourd.unh.edu:/data
pepper		pepper.unh.edu:/data
taro		taro.unh.edu:/data
tomato		tomato.unh.edu:/data
endeavour	endeavour.unh.edu:/data1
endeavour1	endeavour.unh.edu:/data1
endeavour2	endeavour.unh.edu:/data2
einsteinVM	einstein.unh.edu:/dataVM
VM              einstein.unh.edu:/dataVM
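Map files of this form are simple whitespace-separated key/target pairs. A small illustrative parser (not something the automounter itself uses):

```python
def parse_automount_map(text):
    # Parse "key  server:/path" lines into a dict,
    # skipping comments and blank lines.
    entries = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()
        if not line:
            continue
        parts = line.split(None, 1)
        if len(parts) == 2:
            entries[parts[0]] = parts[1].strip()
    return entries
```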

auto_home

#
# Automounter map for /home
#
+auto_home	# Use directory service

auto_home_nfs

#
# Automatic NFS home directories from Einstein.
#
*          einstein.unh.edu:/home/&

auto_master

#
# Automounter master map
#
+auto_master		# Use directory service
#/net			-hosts		-nobrowse,hidefromfinder,nosuid
#/home			auto_home	-nobrowse,hidefromfinder
/Network/Servers	-fstab
/-			-static
#
# UNH:
#
/net/home		auto_home_nfs	-nobrowse,resvport,intr,soft
#/net/data		auto_data	-nobrowse,resvport,intr,soft,locallocks,rsize=32768,wsize=32768
/net/data		auto_data	-nobrowse,resvport,intr,soft,rsize=32768,wsize=32768
/net/www		auto_www	-nobrowse,resvport,intr,soft

auto_www

nuclear		roentgen.unh.edu:/var/www/nuclear
physics		roentgen.unh.edu:/var/www/physics
theory		roentgen.unh.edu:/var/www/theory
einstein	einstein.unh.edu:/var/www/einstein
personal_pages  roentgen.unh.edu:/var/www/personal_pages

Hosts

These are hosts that I have worked on. They may no longer run the same services; this is a log, not a reflection of the current state.

Gourd

Network Config 2012-11-05

ifcfg-farm

DEVICE=eth0
ONBOOT=yes
HWADDR=00:30:48:ce:e2:38
BRIDGE=farmbr

ifcfg-farmbr

ONBOOT=yes
TYPE=Bridge
DEVICE=farmbr
BOOTPROTO=static
IPADDR=10.0.0.252
NETMASK=255.255.0.0
GATEWAY=10.0.0.1
NM_CONTROLLED=no
DELAY=0

ifcfg-farmbr:1

ONBOOT=yes
TYPE=Ethernet
DEVICE=farmbr:1
BOOTPROTO=static
IPADDR=10.0.0.240
NETMASK=255.255.0.0
GATEWAY=10.0.0.1
NM_CONTROLLED=no
ONPARENT=yes

ifcfg-unh

DEVICE=eth1
ONBOOT=yes
HWADDR=00:30:48:ce:e2:39
BRIDGE=unhbr

ifcfg-unhbr

ONBOOT=yes
TYPE=Bridge
DEVICE=unhbr
BOOTPROTO=static
IPADDR=132.177.88.75
NETMASK=255.255.252.0
GATEWAY=132.177.88.1
NM_CONTROLLED=no
DELAY=0

ifcfg-unhbr:1

ONBOOT=yes
TYPE=Ethernet
DEVICE=unhbr:1
BOOTPROTO=static
IPADDR=132.177.91.210
NETMASK=255.255.252.0
GATEWAY=132.177.88.1
NM_CONTROLLED=no
ONPARENT=yes
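Typos in ifcfg keys (for example a misspelled NETMASK) fail silently at boot, so a quick sanity check helps. A sketch, with the expected-key set chosen by me from the files above:

```python
# Keys that appear in our ifcfg files; anything else is probably a typo.
EXPECTED_KEYS = {"DEVICE", "ONBOOT", "TYPE", "BOOTPROTO", "IPADDR",
                 "NETMASK", "GATEWAY", "NM_CONTROLLED", "DELAY",
                 "ONPARENT", "HWADDR", "BRIDGE"}

def check_ifcfg(text):
    # Return any KEY=value keys not in the expected set.
    unknown = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key = line.split("=", 1)[0]
        if key not in EXPECTED_KEYS:
            unknown.append(key)
    return unknown
```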

rc.local 2009-05-20

#!/bin/sh
#
# This script will be executed *after* all the other init scripts.
# You can put your own initialization stuff in here if you don't
# want to do the full Sys V style init stuff.
touch /var/lock/subsys/local
#This will send an email to the npg-admins at startup with the hostname and the boot.log file
mail -s "$HOSTNAME Started, Here is the boot.log" npg-admins@physics.unh.edu < /var/log/boot.log

Yum 2009-05-21

Fixing yum on gourd

In order to get RHN support (repo files), you must download and install from the RHN network

yum-rhn-plugin

and then I got these errors:

[Tbow@gourd ~]$ sudo rpm -i Desktop/documentation-notes/downloads/yum-rhn-plugin-0.5.3-30.el5.noarch.rpm 
Password:
warning: Desktop/documentation-notes/downloads/yum-rhn-plugin-0.5.3-30.el5.noarch.rpm: V3 DSA signature: NOKEY, key ID 37017186
error: Failed dependencies:
rhn-client-tools >= 0.4.19-9 is needed by yum-rhn-plugin-0.5.3-30.el5.noarch
rhn-setup is needed by yum-rhn-plugin-0.5.3-30.el5.noarch
yum >= 3.2.19-15 is needed by yum-rhn-plugin-0.5.3-30.el5.noarch
[Tbow@gourd nut-2.4.1]$ less /proc/version|grep Linux
Linux version 2.6.9-67.0.15.EL (brewbuilder@hs20-bc2-2.build.redhat.com) (gcc version 3.4.6   20060404 (Red Hat 3.4.6-9)) #1 Tue Apr 22 13:42:17 EDT 2008

When I tried installing the package for el3 this came up

[Tbow@gourd nut-2.4.1]$ sudo rpm -Uvh /yum-2.0.8-0.1.el3.rf.noarch.rpm 
Preparing...                ########################################### [100%]
package yum-2.4.2-0.4.el4.rf (which is newer than yum-2.0.8-0.1.el3.rf) is already installed

Tried using --replacefiles, but it didn't work with this command; look into it:

[Tbow@gourd nut-2.4.1]$ sudo rpm -U --replacefiles /yum-2.4.2-0.4.el4.rf.noarch.rpm 
package yum-2.4.2-0.4.el4.rf is already installed

Tried updating, then got this:

[Tbow@gourd nut-2.4.1]$ sudo yum update
Setting up Update Process
Setting up repositories
No Repositories Available to Set Up
Reading repository metadata in from local files
No Packages marked for Update/Obsoletion

Either go to the red hat network website to find the repos.d/ files or run rhn_check

/usr/sbin/rhn_check
/usr/sbin/rhn_register
Upgrade yum for rhel 3

Old repository files are still on this system, so I will reinstall yum on this system.

smartd.conf 2009-05-20

# Home page is: http://smartmontools.sourceforge.net
# $Id: smartd.conf,v 1.38 2004/09/07 12:46:33 ballen4705 Exp $
# smartd will re-read the configuration file if it receives a HUP
# signal
# The file gives a list of devices to monitor using smartd, with one
# device per line. Text after a hash (#) is ignored, and you may use
# spaces and tabs for white space. You may use '\' to continue lines. 
# You can usually identify which hard disks are on your system by
# looking in /proc/ide and in /proc/scsi.
# The word DEVICESCAN will cause any remaining lines in this
# configuration file to be ignored: it tells smartd to scan for all
# ATA and SCSI devices.  DEVICESCAN may be followed by any of the
# Directives listed below, which will be applied to all devices that
# are found.  Most users should comment out DEVICESCAN and explicitly
# list the devices that they wish to monitor.
#DEVICESCAN
# First (primary) ATA/IDE hard disk.  Monitor all attributes, enable
# automatic online data collection, automatic Attribute autosave, and
# start a short self-test every day between 2-3am, and a long self test
# Saturdays between 3-4am.
#/dev/hda -a -o on -S on -s (S/../.././02|L/../../6/03)
# Monitor SMART status, ATA Error Log, Self-test log, and track
# changes in all attributes except for attribute 194
#/dev/hda -H -l error -l selftest -t -I 194 
# A very silent check.  Only report SMART health status if it fails
# But send an email in this case
#/dev/hda -H -m npg-admins@physics.unh.edu
# First two SCSI disks.  This will monitor everything that smartd can
# monitor.  Start extended self-tests Wednesdays between 6-7pm and
# Sundays between 1-2 am
#/dev/sda -d scsi -s L/../../3/18
#/dev/sdb -d scsi -s L/../../7/01
# Monitor 4 ATA disks connected to a 3ware 6/7/8000 controller which uses
# the 3w-xxxx driver. Start long self-tests Sundays between 1-2, 2-3, 3-4, 
# and 4-5 am.
# Note: one can also use the /dev/twe0 character device interface.
#/dev/sdc -d 3ware,0 -a -s L/../../7/01
#/dev/sdc -d 3ware,1 -a -s L/../../7/02
#/dev/sdc -d 3ware,2 -a -s L/../../7/03
#/dev/sdc -d 3ware,3 -a -s L/../../7/04
# Monitor 2 ATA disks connected to a 3ware 9000 controller which uses
# the 3w-9xxx driver. Start long self-tests Tuesdays between 1-2 and 3-4 am
#/dev/sda -d 3ware,0 -a -s L/../../2/01
#/dev/sda -d 3ware,1 -a -s L/../../2/03
#Send quick test email at smartd startup
#/dev/sda -d 3ware,0 -a -m npg-admins@physics.unh.edu -M test
#/dev/sda -d 3ware,1 -a -m npg-admins@physics.unh.edu -M test
#/dev/sda -d 3ware,2 -a -m npg-admins@physics.unh.edu -M test
#/dev/sda -d 3ware,3 -a -m npg-admins@physics.unh.edu -M test
#/dev/sda -d 3ware,4 -a -m npg-admins@physics.unh.edu -M test
#/dev/sda -d 3ware,5 -a -m npg-admins@physics.unh.edu -M test
#/dev/sda -d 3ware,6 -a -m npg-admins@physics.unh.edu -M test
#/dev/sda -d 3ware,7 -a -m npg-admins@physics.unh.edu -M test
#Email all (-a) the information gathered for each drive
/dev/sda -d 3ware,0 -a -m npg-admins@physics.unh.edu
/dev/sda -d 3ware,1 -a -m npg-admins@physics.unh.edu
/dev/sda -d 3ware,2 -a -m npg-admins@physics.unh.edu
/dev/sda -d 3ware,3 -a -m npg-admins@physics.unh.edu
/dev/sda -d 3ware,4 -a -m npg-admins@physics.unh.edu
/dev/sda -d 3ware,5 -a -m npg-admins@physics.unh.edu
/dev/sda -d 3ware,6 -a -m npg-admins@physics.unh.edu
/dev/sda -d 3ware,7 -a -m npg-admins@physics.unh.edu
#Does a long test on all 8 drives on the 3ware card,
#scheduled on Sundays at the specified (24-hour) start times.
/dev/sda -d 3ware,0 -a -s L/../../7/01
/dev/sda -d 3ware,1 -a -s L/../../7/03
/dev/sda -d 3ware,2 -a -s L/../../7/05
/dev/sda -d 3ware,3 -a -s L/../../7/07
/dev/sda -d 3ware,4 -a -s L/../../7/09
/dev/sda -d 3ware,5 -a -s L/../../7/11
/dev/sda -d 3ware,6 -a -s L/../../7/13
/dev/sda -d 3ware,7 -a -s L/../../7/15
# HERE IS A LIST OF DIRECTIVES FOR THIS CONFIGURATION FILE.
# PLEASE SEE THE smartd.conf MAN PAGE FOR DETAILS
#
#   -d TYPE Set the device type: ata, scsi, removable, 3ware,N
#   -T TYPE set the tolerance to one of: normal, permissive
#   -o VAL  Enable/disable automatic offline tests (on/off)
#   -S VAL  Enable/disable attribute autosave (on/off)
#   -n MODE No check. MODE is one of: never, sleep, standby, idle
#   -H      Monitor SMART Health Status, report if failed
#   -l TYPE Monitor SMART log.  Type is one of: error, selftest
#   -f      Monitor for failure of any 'Usage' Attributes
#   -m ADD  Send warning email to ADD for -H, -l error, -l selftest, and -f
#   -M TYPE Modify email warning behavior (see man page)
#   -s REGE Start self-test when type/date matches regular expression (see man page)
#   -p      Report changes in 'Prefailure' Normalized Attributes
#   -u      Report changes in 'Usage' Normalized Attributes
#   -t      Equivalent to -p and -u Directives
#   -r ID   Also report Raw values of Attribute ID with -p, -u or -t
#   -R ID   Track changes in Attribute ID Raw value with -p, -u or -t
#   -i ID   Ignore Attribute ID for -f Directive
#   -I ID   Ignore Attribute ID for -p, -u or -t Directive
#   -C ID   Report if Current Pending Sector count non-zero
#   -U ID   Report if Offline Uncorrectable count non-zero
#   -v N,ST Modifies labeling of Attribute N (see man page)
#   -a      Default: equivalent to -H -f -t -l error -l selftest -C 197 -U 198
#   -F TYPE Use firmware bug workaround. Type is one of: none, samsung
#   -P TYPE Drive-specific presets: use, ignore, show, showall
#    #      Comment: text after a hash sign is ignored
#    \      Line continuation character
# Attribute ID is a decimal integer 1 <= ID <= 255
# except for -C and -U, where ID = 0 turns them off.
# All but -d, -m and -M Directives are only implemented for ATA devices
#
# If the test string DEVICESCAN is the first uncommented text
# then smartd will scan for devices /dev/hd[a-l] and /dev/sd[a-z]
# DEVICESCAN may be followed by any desired Directives.
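The staggered -s L/../../d/HH directives above can be generated rather than typed out. A sketch (helper name mine; smartd counts days-of-week Monday=1 through Sunday=7):

```python
def staggered_selftests(n_drives, day=7, start_hour=1, step=2):
    # Emit one smartd long-self-test directive per 3ware port,
    # spacing the start hours so the tests don't overlap.
    lines = []
    for i in range(n_drives):
        hour = start_hour + i * step
        lines.append("/dev/sda -d 3ware,%d -a -s L/../../%d/%02d"
                     % (i, day, hour))
    return lines
```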

rc3.d 2010-01-16

K00ipmievd
K01dnsmasq
K02avahi-dnsconfd
K02NetworkManager
K05conman
K05saslauthd
K05wdaemon
K10dc_server
K10psacct
K12dc_client
K15httpd
K24irda
K25squid
K30spamassassin
K34yppasswdd
K35dhcpd
K35dhcrelay
K35dovecot
K35vncserver
K35winbind
K36lisa
K50netconsole
K50tux
K69rpcsvcgssd
K73ypbind
K74ipmi
K74nscd
K74ntpd
K74ypserv
K74ypxfrd
K80kdump
K85mdmpd
K87multipathd
K88wpa_supplicant
K89dund
K89hidd
K89netplugd
K89pand
K89rdisc
K90bluetooth
K91capi
K91isdn
K99readahead_later
S00microcode_ctl
S02lvm2-monitor
S04readahead_early
S05kudzu
S06cpuspeed
S08ip6tables
S08iptables
S08mcstrans
S10network
S11auditd
S12restorecond
S12syslog
S13irqbalance
S13portmap
S14nfslock
S15mdmonitor
S18rpcidmapd
S19nfs
S19rpcgssd
S20vmware
S22messagebus
S23setroubleshoot
S25netfs
S25pcscd
S26acpid
S26lm_sensors
S28autofs
S29iptables-netgroups
S50hplip
S55sshd
S56cups
S56rawdevices
S56xinetd
S60apcupsd
S80sendmail
S85arecaweb
S85gpm
S90crond
S90splunk
S90xfs
S95anacron
S95atd
S97rhnsd
S97yum-updatesd
S98avahi-daemon
S98haldaemon
S99denyhosts
S99firstboot
S99local
S99smartd

Taro

Lentil

Pumpkin

Endeavour

Yum Problems 2012-10-11

libsdp.x86_64
libsdp-devel.x86_64

Journal of Process

Install both libsdp (i386 and x86_64) and libxml2 from rpm

There is still a seg fault when yum tries to read the primary.xml; this can be seen by running strace yum check-update.

Wake-On LAN 2013-08-20

First, run this command on the node:

ethtool -s eth0 wol g

Then add this line to /etc/sysconfig/network-scripts/ifcfg-eth0:

ETHTOOL_OPTS="wol g"

List of the nodes and their MACs:

Node2 (10.0.0.2) at 00:30:48:C6:F6:80
node3 (10.0.0.3) at 00:30:48:C7:03:FE
node4 (10.0.0.4) at 00:30:48:C7:2A:0E
node5 (10.0.0.5) at 00:30:48:C7:2A:0C
node6 (10.0.0.6) at 00:30:48:C7:04:54
node7 (10.0.0.7) at 00:30:48:C7:04:A8
node8 (10.0.0.8) at 00:30:48:C7:04:98
node9 (10.0.0.9) at 00:30:48:C7:04:F4
node16 (10.0.0.16) at 00:30:48:C7:04:A4
node17 (10.0.0.17) at 00:30:48:C7:04:A6
node18 (10.0.0.18) at 00:30:48:C7:04:4A
node19 (10.0.0.19) at 00:30:48:C7:04:62
node20 (10.0.0.20) at 00:30:48:C6:F6:14
node21 (10.0.0.21) at 00:30:48:C6:F6:12
node22 (10.0.0.22) at 00:30:48:C6:EF:A6
node23 (10.0.0.23) at 00:30:48:C6:EB:CC
node24 (10.0.0.24) at 00:30:48:C7:04:5A
node25 (10.0.0.25) at 00:30:48:C7:04:5C
node26 (10.0.0.26) at 00:30:48:C7:04:4C
node27 (10.0.0.27) at 00:30:48:C7:04:40
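Given the MAC list above, a node can be woken by sending the standard magic packet: 6 bytes of 0xFF followed by the MAC repeated 16 times. A sketch; the broadcast address 10.0.255.255 is my assumption for the 10.0.0.0/16 farm network:

```python
import socket

def magic_packet(mac):
    # Build a Wake-on-LAN magic packet from a colon-separated MAC string.
    mac_bytes = bytes(int(b, 16) for b in mac.split(":"))
    return b"\xff" * 6 + mac_bytes * 16

def wake(mac, broadcast="10.0.255.255", port=9):
    # Broadcast the magic packet as a UDP datagram.
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    s.sendto(magic_packet(mac), (broadcast, port))
    s.close()
```

Usage would be, e.g., wake("00:30:48:C6:F6:80") to wake Node2.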

Infiniband

tc Commands 2009-05-24

I am using the following commands to "throttle" the bandwidth of the NIC at eth0:

tc qdisc del dev eth0 root
tc qdisc add dev eth0 root handle 10: cbq bandwidth 100mbit avpkt 1000
tc qdisc add dev eth0 root handle 10: htb
tc class add dev eth0 parent 10: classid 10:1 cbq bandwidth 100mbit rate 128kbit allot 1514 maxburst 20 avpkt 1000 bounded prio 3
tc class add dev eth0 parent 10: classid 10:1 htb rate 128kbit
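Note that the sequence above adds two root qdiscs (cbq and htb) to the same interface; the kernel only allows one, so the second add fails with "File exists". A hypothetical helper that builds just the HTB variant:

```python
def htb_throttle_commands(iface="eth0", rate="128kbit"):
    # Build the tc commands for a simple HTB rate limit:
    # clear any existing root qdisc, attach HTB, add one limited class.
    return [
        "tc qdisc del dev %s root" % iface,
        "tc qdisc add dev %s root handle 10: htb default 1" % iface,
        "tc class add dev %s parent 10: classid 10:1 htb rate %s"
        % (iface, rate),
    ]
```

These could be run via subprocess, or printed and pasted into a shell.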

tc Script Bandwidth Throttle 2009-05-28

## Set some variables
##!/bin/bash
#EXT_IFACE="eth0"
#INT_IFACE="eth1"
#TC="tc"
#UNITS="kbit"
#LINE="10000" #maximum ext link speed
#LIMIT="5000" #maximum that we'll allow
#
#
## Set some variables for individual "classes" that we'll use to shape internal upload speed, i.e. shaping eth0
#CLS1_RATE="200" # High Priority traffic class has 200kbit
#CLS2_RATE="300" # Medium Priority class has 300kbit
#CLS3_RATE="4500" # Bulk class has 4500kbit
## (We'll set which ones can borrow from which later)
#
## Set some variables for individual "classes" that we'll use to shape internal download speed, i.e. shaping eth1
#INT_CLS1_RATE="1000" #Priority
#INT_CLS2_RATE="4000" #Bulk
#
## Delete current qdiscs. i.e. clean up
#${TC} qdisc del dev ${INT_IFACE} root
#${TC} qdisc del dev ${EXT_IFACE} root
#
## Attach root qdiscs. We are using HTB here, and attaching this qdisc to both interfaces. We'll label it "1:0"
#${TC} qdisc add dev ${INT_IFACE} root handle 1:0 htb
#${TC} qdisc add dev ${EXT_IFACE} root handle 1:0 htb
#
## Create root classes, with the maximum limits defined
## One for eth1
#${TC} class add dev ${INT_IFACE} parent 1:0 classid 1:1 htb rate ${LIMIT}${UNITS} ceil ${LIMIT}${UNITS}
## One for eth0
#${TC} class add dev ${EXT_IFACE} parent 1:0 classid 1:1 htb rate ${LIMIT}${UNITS} ceil ${LIMIT}${UNITS}
#
## Create child classes
## These are for our internal interface eth1
## Create a class labelled "1:2" and give it the limit defined above
#${TC} class add dev ${INT_IFACE} parent 1:1 classid 1:2 htb rate ${INT_CLS1_RATE}${UNITS} ceil ${LIMIT}${UNITS}
## Create a class labelled "1:3" and give it the limit defined above
#${TC} class add dev ${INT_IFACE} parent 1:1 classid 1:3 htb rate ${INT_CLS2_RATE}${UNITS} ceil ${INT_CLS2_RATE}${UNITS}
#
## EXT_IF (upload) now. We also set which classes can borrow and lend.
## This class is guaranteed 200kbit and can burst up to 5000kbit if available
#${TC} class add dev ${EXT_IFACE} parent 1:1 classid 1:2 htb rate ${CLS1_RATE}${UNITS} ceil ${LIMIT}${UNITS}
## This class is guaranteed 300kbit and can burst up to 5000kbit-200kbit=4800kbit if available
#${TC} class add dev ${EXT_IFACE} parent 1:1 classid 1:3 htb rate ${CLS2_RATE}${UNITS} ceil `echo ${LIMIT}-${CLS1_RATE}|bc`${UNITS}
## This class is guaranteed 4500kbit and can't burst past it (5000kbit-200kbit-300kbit=4500kbit).
## I.e. even if our bulk traffic goes crazy, the two classes above are still guaranteed availability.
#${TC} class add dev ${EXT_IFACE} parent 1:1 classid 1:4 htb rate ${CLS3_RATE}${UNITS} ceil `echo ${LIMIT}-${CLS1_RATE}-${CLS2_RATE}|bc`${UNITS}
#
## Add pfifo. Read more about pfifo elsewhere; it's outside the scope of this howto.
#${TC} qdisc add dev ${INT_IFACE} parent 1:2 handle 12: pfifo limit 10
#${TC} qdisc add dev ${INT_IFACE} parent 1:3 handle 13: pfifo limit 10
#${TC} qdisc add dev ${EXT_IFACE} parent 1:2 handle 12: pfifo limit 10
#${TC} qdisc add dev ${EXT_IFACE} parent 1:3 handle 13: pfifo limit 10
#${TC} qdisc add dev ${EXT_IFACE} parent 1:4 handle 14: pfifo limit 10
#
#### Done adding all the classes, now set up some rules! ###
## INT_IFACE
## Note the 'dst' direction. Traffic that goes OUT of our internal interface and to our servers is our server's download speed, so SOME_IMPORTANT_IP is allocated to 1:2 class for download.
#${TC} filter add dev ${INT_IFACE} parent 1:0 protocol ip prio 1 u32 match ip dst SOME_IMPORTANT_IP/32 flowid 1:2
#${TC} filter add dev ${INT_IFACE} parent 1:0 protocol ip prio 1 u32 match ip dst SOME_OTHER_IMPORTANT_IP/32 flowid 1:2
##All other servers download speed goes to 1:3 - not as important as the above two
#${TC} filter add dev ${INT_IFACE} parent 1:0 protocol ip prio 1 u32 match ip dst 0.0.0.0/0 flowid 1:3
#
## EXT_IFACE
## Prioritize DNS requests
#${TC} filter add dev ${EXT_IFACE} parent 1:0 protocol ip prio 1 u32 match ip src IMPORTANT_IP/32 match ip sport 53 0xffff flowid 1:2
## SSH is important
#${TC} filter add dev ${EXT_IFACE} parent 1:0 protocol ip prio 1 u32 match ip src IMPORTANT_IP/32 match ip sport 22 0xffff flowid 1:2
## Our exim SMTP server is important too
#${TC} filter add dev ${EXT_IFACE} parent 1:0 protocol ip prio 1 u32 match ip src 217.10.156.197/32 match ip sport 25 0xffff flowid 1:3
## The bulk
#${TC} filter add dev ${EXT_IFACE} parent 1:0 protocol ip prio 1 u32 match ip src 0.0.0.0/0 flowid 1:4 

tc BASH Script Traffic Shaper 2009-05-28

##!/bin/bash
##
##  tc uses the following units when passed as a parameter.
##  kbps: Kilobytes per second 
##  mbps: Megabytes per second
##  kbit: Kilobits per second
##  mbit: Megabits per second
##  bps: Bytes per second 
##       Amounts of data can be specified in:
##       kb or k: Kilobytes
##       mb or m: Megabytes
##       mbit: Megabits
##       kbit: Kilobits
##  To get the byte figure from bits, divide the number by 8 bit
##
#TC=/sbin/tc
#IF=eth0		    # Interface 
#DNLD=1mbit          # DOWNLOAD Limit
#UPLD=1mbit          # UPLOAD Limit 
#IP=216.3.128.12     # Host IP
#U32="$TC filter add dev $IF protocol ip parent 1:0 prio 1 u32"
# 
#start() {
#
#    $TC qdisc add dev $IF root handle 1: htb default 30
#    $TC class add dev $IF parent 1: classid 1:1 htb rate $DNLD
#    $TC class add dev $IF parent 1: classid 1:2 htb rate $UPLD
#    $U32 match ip dst $IP/32 flowid 1:1
#    $U32 match ip src $IP/32 flowid 1:2
#
#}
#
#stop() {
#
#    $TC qdisc del dev $IF root
#
#}
#
#restart() {
#
#    stop
#    sleep 1
#    start
#
#}
#
#show() {
#
#    $TC -s qdisc ls dev $IF
#
#}
#
#case "$1" in
#
#  start)
#
#    echo -n "Starting bandwidth shaping: "
#    start
#    echo "done"
#    ;;
#
#  stop)
#
#    echo -n "Stopping bandwidth shaping: "
#    stop
#    echo "done"
#    ;;
#
#  restart)
#
#    echo -n "Restarting bandwidth shaping: "
#    restart
#    echo "done"
#    ;;
#
#  show)
#    	    	    
#    echo "Bandwidth shaping status for $IF:\n"
#    show
#    echo ""
#    ;;
#
#  *)
#
#    pwd=$(pwd)
#    echo "Usage: $(/usr/bin/dirname $pwd)/tc.bash {start|stop|restart|show}"
#    ;;
#
#esac
#
#exit 0

Testing Scripts 2012-07-17

calculate.c
##include<iostream>
##include<fstream>
##include<string>
#using namespace std;
#int main()
#{
#        ofstream outFile;
#        ifstream inFile;
#        string pktNumber, latency, jitter;
#        int    pktN, ltcy, jtr;
#        int numOfTest = 0;
#
#        //open the Sample.out file
#        inFile.open("Sample.out");
#        if(!inFile)
#        {
#                cout<<"Can not open Sample.out. please check it!"<<endl;
#        }
#
#        //open the result.out file
#        outFile.open("result.out");
#        if(!outFile)
#        {
#                cout<<"Can not create output file"<<endl;
#        }
#
#        //scan the data and calculate the data.
#        while(!inFile.eof())
#        {
#                double avgJitter = 0;
#                double avgLatency = 0;
#                double jitSum = 0;
#                double latencySum = 0;
#                int numOfValidItem = 0;
#		int numOfP = 0;
#                numOfTest++;
#
#                inFile>>pktNumber>>latency>>jitter>>numOfP;
#
#                for(int i = 0; i < numOfP; i++)
#                {
#		
#			cout<<"Reading the "<<i<<"th of the line."<<endl;
#                        inFile>>pktN>>ltcy>>jtr;
#                        cout<<pktN<<" "<<ltcy<<" "<<jtr<<endl;
#                        if(ltcy != 99999)
#                        {
#                                jitSum += jtr;
#                                latencySum += ltcy;
#                                numOfValidItem++;
#                        }
#		
#                }
#		if(numOfValidItem != 0)
#		{
#                	avgJitter = jitSum / numOfValidItem;
#	                avgLatency = latencySum / numOfValidItem; 
#        	        cout<<"Average latency :"<<avgLatency;
#                	cout<<"  Average jitter:"<<avgJitter<<endl;
#	                outFile<<"The "<<numOfTest<<" test, average latency is "
#        	                <<avgLatency<<" average jitter: "<<avgJitter
#                	        <<endl;
#		}
#        }
#
#        inFile.close();
#        outFile.close();
#        return 0;
#}
UDPPktReceiver.java
#import javax.swing.*;
#import java.awt.*;
#import java.io.IOException;
#import java.net.*;
#
#public class UDPPktReceiver extends JFrame {
#
#	private JLabel sendingInfoLabel;
#	private JLabel rcvInfoLabel;
#	private JPanel panel;
#
#	private JTextArea rcvTextArea;
#	private JTextArea resendTextArea;
#	private JScrollPane rcvScrollPane;
#	private JScrollPane resendScrollPane;
#	private Container con;
#	public static int clearTextTag = 0;
#
#	DatagramSocket ds;
#
#	public UDPPktReceiver() {
#		this.setBounds(new Rectangle(500, 0, 480, 480));
#		con = this.getContentPane();
#		con.setLayout(new FlowLayout());
#
#		panel = new JPanel();
#		panel.setLayout(new FlowLayout());
#		sendingInfoLabel = new JLabel("Received information:                  ");
#		rcvInfoLabel = new JLabel("Resended information: ");
#
#		rcvTextArea = new JTextArea(20, 20);
#		resendTextArea = new JTextArea(20, 20);
#		rcvTextArea.setEditable(false);
#		resendTextArea.setEditable(false);
#		rcvScrollPane = new JScrollPane(rcvTextArea);
#		resendScrollPane = new JScrollPane(resendTextArea);
#
#		con.add(panel);
#		panel.add(this.sendingInfoLabel);
#		panel.add(this.rcvInfoLabel);
#		con.add(rcvScrollPane);
#		con.add(resendScrollPane);
#
#		this.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
#		this.setVisible(true);
#		this.setResizable(false);
#		this.setTitle("Infiniband.Test.Receive Client");
#
#		try {
#			ds = new DatagramSocket(7000);
#		} catch (SocketException e) {
#			e.printStackTrace();
#		}
#	}
#
#	public DatagramPacket rcvPacket() throws IOException {
#		byte[] buf = new byte[100];
#		DatagramPacket dp = new DatagramPacket(buf, 100);
#		ds.receive(dp);
#		this.rcvTextArea.append(new String(buf, 0, dp.getLength()) + "\n");
#		return dp;
#	}
#
#	public int resendPkt(DatagramPacket dp) {
#		DatagramSocket ds;
#		String originalData = new String(dp.getData());
#		String newData = "Original pkt: " + originalData.trim();
#		try {
#			ds = new DatagramSocket();
#			DatagramPacket newDp = new DatagramPacket(newData.getBytes(),
#					newData.length(), dp.getAddress(), 6500);
#			ds.send(newDp);
#			this.resendTextArea.append(new String(dp.getData()).trim() + "\n");
#		} catch (Exception e) {
#			e.printStackTrace();
#		}
#		return 1;
#	}
#
#	public void clearPreviousInfo() {
#		this.rcvTextArea.setText("");
#		this.resendTextArea.setText("");
#	}
#
#	public static void main(String[] args) throws IOException {
#		UDPPktReceiver udpRcver = new UDPPktReceiver();
#		DatagramPacket dp;
#		while (true) {
#			if (udpRcver.clearTextTag == 1) {
#				udpRcver.clearPreviousInfo();
#			}
#			dp = udpRcver.rcvPacket();
#			udpRcver.resendPkt(dp);
#		}
#	}
#}
UDPPktSender.java
#import javax.swing.*;
#import java.awt.*;
#import java.awt.event.*;
#import java.io.BufferedWriter;
#import java.io.FileWriter;
#import java.io.IOException;
#import java.net.*;
#
#public class UDPPktSender extends JFrame {
#	private JButton btn1;
#	private JTextField text;
#	private JLabel label;
#	private JLabel sendingInfoLabel;
#	private JLabel rcvInfoLabel;
#	private JPanel panel;
#	private JPanel ipPanel;
#	private JLabel ipHintLabel;
#	private JTextField ipTextField;
#
#	private JTextArea sendTextArea;
#	private JTextArea rcvTextArea;
#	private JScrollPane sendScrollPane;
#	private JScrollPane rcvScrollPane;
#	private Container c;
#	private int pktId = 1;
#	private int pktNum = 1;
#	private DatagramSocket dsRecv;
#	private long startTime = 0;
#	private long endTime = 0;
#	private long internal = 0;
#	private long totalTime = 0;
#	private long rcvPktNums = 0;
#	private long prevLatency = 0;
#	private long jitter = 0;
#	private long jitterSum = 0;
#	private long validJitterNumber = 0;
#	private boolean isLost = false;
#	private String data;
#	BufferedWriter out;
#
#	// Constructor
#	public UDPPktSender() {
#		setBounds(new Rectangle(0, 0, 520, 480));
#		c = getContentPane();
#		c.setLayout(new FlowLayout());
#		btn1 = new JButton("Send");
#		// new JButton("second Button");
#		panel = new JPanel();
#		ipPanel = new JPanel();
#		ipPanel.setLayout(new FlowLayout());
#		ipHintLabel = new JLabel("Enter Remote IP Address:");
#		ipTextField = new JTextField(27);
#		ipTextField.setText("localhost");
#		panel.setLayout(new FlowLayout());
#		label = new JLabel("Enter Number of Packet to Send:");
#		sendingInfoLabel = new JLabel("Sending information:                          ");
#		rcvInfoLabel = new JLabel("Receiving information:");
#
#		sendTextArea = new JTextArea(20, 20);
#		rcvTextArea = new JTextArea(20, 20);
#		sendTextArea.setEditable(false);
#		rcvTextArea.setEditable(false);
#		sendScrollPane = new JScrollPane(sendTextArea);
#		rcvScrollPane = new JScrollPane(rcvTextArea);
#		rcvScrollPane.setAutoscrolls(true);
#
#		text = new JTextField(13);
#		text.setText("10");
#		text.setSelectionStart(0);
#		text.setSelectionEnd(10);
#		
#
#		btn1.addActionListener(new ActionListener() {
#			int currPktId = 1;
#			int returnPktId = -1;
#
#			public void actionPerformed(ActionEvent e) {
#				sendTextArea.setText("");
#				rcvTextArea.setText("");
#				UDPPktReceiver.clearTextTag = 1;
#				pktId = 1;
#				totalTime = 0;
#				rcvPktNums = 0;
#				try {
#					pktNum = Integer.parseInt(text.getText());
#				} catch (Exception ex) {
#					JOptionPane.showMessageDialog(JFrame.getFrames()[0],
#							"Input Number Is Invalid, Please Check It");
#					text.setFocusable(true);
#					return;
#				}
#				if (pktNum <= 0) {
#					JOptionPane.showMessageDialog(JFrame.getFrames()[0],
#							"Packet Number must be more than 0");
#					return;
#				}
#				if (pktNum >= 100) {
#					JOptionPane
#							.showMessageDialog(JFrame.getFrames()[0],
#									"You should not send 100 or more packets; enter the number again");
#					return;
#				}
#				for (int i = 0; i < pktNum; i++) {
#					startTime = System.currentTimeMillis();
#					currPktId = pktId;
#					sendPacket(currPktId);
#					pktId++;
#
#					returnPktId = rcvPkt();
#					endTime = System.currentTimeMillis();
#					internal = endTime - startTime;
#					totalTime += internal;
#					
#					if (currPktId == returnPktId) {
#						rcvPktNums++;
#						isLost = false; // reset the loss flag so jitter is tracked again
#						appendToTextArea("round-trip latency :" + internal
#								+ " ms");
#					} else {
#						appendToTextArea("packet " + currPktId + " was lost");
#						isLost = true;
#					}
#					
#					if(i == 0)
#					{
#						prevLatency = internal;
#						jitter = 0;
#					}
#					else
#					{
#						if(!isLost)
#						{
#							jitter = internal - prevLatency;
#							prevLatency = internal;
#						}
#						else
#						{
#							jitter = 0;
#							prevLatency = 0;
#						}
#					}
#					try
#					{
#						out = new BufferedWriter(new FileWriter("Sample.out",true));
#						if(i == 0)
#							out.write("PacketNumber     Latency     Jitter " + pktNum + "\n");
#						//out.write("\n"+5);		
#						if(!isLost)
#						{
#							out.write(currPktId + "                ");
#							out.write(internal + "            ");
#							out.write("  " + Math.abs(jitter) + "\n");
#							jitterSum += Math.abs(jitter);
#							validJitterNumber++;
#						}
#						else
#						{
#							out.write(currPktId + "                ");
#							out.write("99999        ");
#							out.write("  " + Math.abs(jitter) + "\n");
#						}
#						out.close();
#					}
#					catch(IOException e3)
#					{
#						e3.printStackTrace();
#					}
#				}
#
#				appendToTextArea("Total Time :" + totalTime + " ms");
#				appendToTextArea("Average Time :" + totalTime / pktNum + " ms");
#				appendToTextArea("loss rate :" + (1 - (double) rcvPktNums / pktNum)
#						* 100 + "%");
#				UDPPktReceiver.clearTextTag = 0;
#			}
#		});
#
#		c.add(label);
#		c.add(text);
#		c.add(btn1);
#		c.add(ipPanel);
#		ipPanel.add(this.ipHintLabel);
#		ipPanel.add(this.ipTextField);
#		c.add(sendScrollPane);
#		c.add(rcvScrollPane);
#		c.add(panel);
#		panel.add(this.sendingInfoLabel);
#		panel.add(this.rcvInfoLabel);
#
#		this.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
#		setVisible(true);
#		this.setResizable(false);
#		this.setTitle("Infiniband.Test.Sending Client");
#		try {
#			dsRecv = new DatagramSocket(6500);
#		} catch (SocketException e1) {
#			e1.printStackTrace();
#		}
#		
#	}
#
#	// Send the pkt according to the packet Id
#	public int sendPacket(int pktId) {
#		try {
#			DatagramSocket ds = new DatagramSocket();
#			String str = "packet number:" + pktId;
#			String ip = ipTextField.getText();
#			DatagramPacket dp = new DatagramPacket(str.getBytes(),
#					str.length(), InetAddress.getByName(ip),
#					7000);
#			ds.send(dp);
#			this.sendTextArea.append("sending packet: " + pktId + "\n");
#
#		} catch (Exception ex) {
#			ex.printStackTrace();
#			return 0;
#		}
#		return 1;
#	}
#
#	// Receive the packet
#	public int rcvPkt() {
#		try {
#			byte[] buf = new byte[100];
#			DatagramPacket dpRecv = new DatagramPacket(buf, 100);
#			dsRecv.setSoTimeout(100);
#			dsRecv.receive(dpRecv);
#			data = new String(buf);
#			this.rcvTextArea.append(new String(buf, 0, dpRecv.getLength())
#					+ "\n");
#		} catch (Exception ex) {
#			ex.printStackTrace();
#			return -1;
#		}
#		int pktId = this.getPacketId(data);
#		return pktId;
#	}
#
#	public int getPacketId(String s) {
#		s = s.trim();
#		int index = s.lastIndexOf(':');
#		int pktId = -1;
#		try {
#			pktId = Integer.parseInt(s.substring(index + 1));
#		} catch (Exception ex) {
#			JOptionPane.showMessageDialog(null, s);
#			ex.printStackTrace();
#		}
#		return pktId;
#	}
#
#	public void closeSocket() {
#		this.dsRecv.close();
#	}
#
#	public void appendToTextArea(String s) {
#		this.rcvTextArea.append(s);
#		this.rcvTextArea.append("\n");
#	}
#
#	public static void main(String[] args) {
#		UDPPktSender udpSender = new UDPPktSender();
#	}
#}
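The Sample.out log written by the sender above can be summarized from the command line. This awk sketch (the sample data here is made up) computes the loss rate the same way the GUI does, treating the 99999 latency marker as a lost packet:

```shell
# Create a tiny Sample.out in the column layout the sender writes
# (hypothetical data: packet 2 is marked lost with latency 99999).
cat > /tmp/Sample.out <<'EOF'
PacketNumber     Latency     Jitter 4
1                12            0
2                99999         0
3                14            2
4                13            1
EOF
# Skip the header, count rows and lost packets, print the loss rate.
awk 'NR>1 { total++; if ($2 == 99999) lost++ }
     END { printf "loss rate: %.1f%%\n", 100*lost/total }' /tmp/Sample.out
# prints "loss rate: 25.0%"
```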

Einstein

rc3.d 2010-01-16

K01dnsmasq
K02avahi-dnsconfd
K02dhcdbd
K02NetworkManager
K05conman
K05saslauthd
K05wdaemon
K10dc_server
K10psacct
K12dc_client
K12mailman
K15httpd
K19ntop
K20nfs
K24irda
K25squid
K30spamassassin
K35dovecot
K35smb
K35vncserver
K35winbind
K50netconsole
K50snmptrapd
K50tux
K69rpcsvcgssd
K73ldap
K73ypbind
K74ipmi
K74nscd
K74ntpd
K80kdump
K85mdmpd
K87multipathd
K87named
K88wpa_supplicant
K89dund
K89netplugd
K89pand
K89rdisc
K91capi
K92ip6tables
K99readahead_later
S02lvm2-monitor
S04readahead_early
S05kudzu
S06cpuspeed
S07iscsid
S08ip6tables
S08iptables
S08mcstrans
S09isdn
S10network
S11auditd
S12restorecond
S12syslog
S13irqbalance
S13iscsi
S13mcstrans
S13named
S13portmap
S14nfslock
S15mdmonitor
S18rpcidmapd
S19rpcgssd
S22messagebus
S23setroubleshoot
S25bluetooth
S25netfs
S25pcscd
S26acpid
S26hidd
S26lm_sensors
S27ldap
S28autofs
S29iptables-npg
S50denyhosts
S50hplip
S50snmpd
S55sshd
S56cups
S56rawdevices
S56xinetd
S58ntpd
S60apcupsd
S65dovecot
S78spamassassin
S80postfix
S85gpm
S85httpd
S90crond
S90elogd
S90splunk
S90xfs
S95anacron
S95atd
S95saslauthd
S97libvirtd
S97rhnsd
S97yum-updatesd
S98avahi-daemon
S98haldaemon
S98mailman
S99firstboot
S99local
S99smartd

Corn

Jalapeno

Roentgen

Xen to VMware Conversion 2009-06-23

The transfer process

  1. Shut down the xen virtual machine and make a backup of the .img file.
  2. Make a tarball of roentgen's filesystem.
    • This must be done as root
    • tar -cvf machine.tar /lib /lib64 /etc /usr /bin /sbin /var /root
  3. Set up an identical OS (CentOS 5.3) on VMware Server.
  4. Mount the location of the tarball and extract it to /.
    • Make sure to back up the original OS's /etc/ to /etc.bak/ first
    • tar -xvf machine.tar

Files to copy back over from the /etc.bak/

/etc/sysconfig/network-scripts/ifcfg-*
/etc/inittab
/etc/fstab
/etc/yum*
/etc/X11*
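The backup-and-restore steps above can be rehearsed on a scratch directory before touching the live machines. This is only a sketch: the paths are stand-ins for the real filesystem roots, and the file contents are invented.

```shell
set -e
# Scratch stand-ins for the old machine (src) and the new VM (dest).
workdir=$(mktemp -d)
mkdir -p "$workdir/src/etc" "$workdir/dest"
echo "hostname=roentgen" > "$workdir/src/etc/network"
# Step 2: make a tarball of the filesystem (here: just the scratch etc/).
tar -cf "$workdir/machine.tar" -C "$workdir/src" etc
# Step 4: on the new machine, preserve the original /etc, then extract.
mv "$workdir/dest/etc" "$workdir/dest/etc.bak" 2>/dev/null || true
tar -xf "$workdir/machine.tar" -C "$workdir/dest"
test -f "$workdir/dest/etc/network" && echo "restore ok"
```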

Turn roentgen on to prepare for the rsync transfer.

Make sure to shut down all important services (httpd, mysqld, etc.) first.

Log on to roentgen as root and run the following command for each folder archived above:

rsync -av --delete /src/(lib) newserver.unh.edu:/dest/(lib) >> rsync.(lib).log
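Rather than typing the command once per folder, a small loop can generate them. This sketch only echoes the commands for review instead of running them; the host name and folder list are taken from the steps above:

```shell
# Print one rsync command per archived folder; review before running.
for dir in lib lib64 etc usr bin sbin var root; do
  echo "rsync -av --delete /$dir/ newserver.unh.edu:/$dir/ >> rsync.$dir.log"
done
```

Pipe the output through `sh` only once the commands look right.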

Rsync options used:

--delete        delete extraneous files from dest dirs
-a, --archive   archive mode; equals -rlptgoD (no -H,-A,-X)
--no-OPTION     turn off an implied OPTION (e.g. --no-D)


The following notes describe how to convert a Xen virtual machine to VMware:

  1. Download the current kernel for the xen virtual machine (not the xen kernel) and install it on the virtual machine. This is done so that when the virtual machine is transitioned to a fully virtualized setup, it can boot a normal kernel rather than the xen kernel.
  2. Shut down roentgen and copy the image file to a backup for exporting.
  3. Install qemu-img.
  4. Run the following command:
    • qemu-img convert <source_xen_machine> -O vmdk <destination_vmware.vmdk>
  5. Now it boots, but it also kernel panics.

This approach was scratched; a tarball of roentgen's filesystem was made instead.

http://www.howtoforge.com/how-to-convert-a-xen-virtual-machine-to-vmware

Cacti

Notes 2009-05-21

UN: admin or npglinux

Go to roentgen.unh.edu/cacti to log in.

Adding a device:

To manage devices within Cacti, click on the Devices menu item. Clicking Add will bring up a new device form.

Cacti Integration with Groundwork Monitor 2009-07-16

#http://www.groundworkopensource.com/community/forums/viewtopic.php?f=3&t=356
#
# Post subject: sorry for the delay
#PostPosted: Thu Oct 19, 2006 6:16 am 
#Offline
#
#Joined: Fri Sep 15, 2006 9:43 am
#Posts: 19 	
#Sorry for the delay... I have written a very ugly step by step to integrating Cacti and GWOS from scratch, I haven't had time to pretty it up, but will in the future.
#
#The instructions are specific to my install on SuSE 10.1, so you may need to do some modification for your distribution.
#
#FYI, I have not been able to get php-snmp working yet, when/if I do, I'll post more.
#****************BEGIN**************
#
#Install the following packages
#
#smart install mysql
#installs mysql, mysql-client, mysql-shared, perl-DBD-mysql, perl-Data-ShowTable
#(if using smart installer, you may want to remove the download.opensuse.org channel and add mirrors.kernel.org/suse/.... or add the local mirrors from suse.cs.utah.edu )
#
#Configure MySQL:
#sudo mysql_install_db
#chown -R mysql:mysql /var/lib/mysql/
#
#Scary Part**
#as root, "export MYSQL_ROOT=password" (mysqlpass)
#
#
#**
#
#edit /etc/hosts to look like
#127.0.0.1 localhost localhost.localdomain
#xxx.xxx.xxx.xxx gwservername gwservername.domain.end
#
#Install GWOS rpm
#
#wget http://groundworkopensource.com/downloa ... 1.i586.rpm
#
#fix libexpat problem before installing rpm...
#ln -s /usr/lib/libexpat.so.1.5.0 /usr/lib/libexpat.so.0
#
#rpm -Uvh groundwork-monitor-os-xxx
#
#Set Firewall Appropriately...
#___________
#
#Time to install Cacti
#
#download the latest Cacti from http://www.cacti.net
#
#
#wget http://www.cacti.net/downloads/cacti-0.8.6i.tar.gz
#
#untar cacti
#
#tar xvfz cacti-0.8.6i.tar.gz
#
#and rename
#
#mv cacti-0.8.6i/ cacti
#
#then move cacti directory to GWOS
#
#mv cacti/ /usr/local/groundwork/cacti
#
#now, cd /usr/local/groundwork/cacti
#
#__
#
#$%@#%$^ Should we install net-snmp?? $^@%&#^*^
#
#Time to create cacti user and group...
#
#
#Create a new user via yast or useradd named cactiuser (or whatever your cacti user name wants to be)
#
#Make sure to add cactiuser to your nagios, nagioscmd, mysql, and nobody groups.
#You will probably want to disable user login... set cactiuser password, make sure to remember it... my default is "cacti" for configuration purposes.
#
#Time to own directories...
#Inside your cacti directory:
#
#sudo chown -R cactiuser rra/ log/
#
#cd include
#
#Now edit config.php with your preferred editor. (emacs config.php in my case)
#
#edit the $database variables to fit your preferred installation.
#
#If you've followed my default configuration, you will only need to change the $database_password, which will be the same as your cactiuser password.
#
#save and exit your editor
#
#_
#
#Now to build our DB
#
#from /cacti
#
##:sudo mysqladmin -u root -p create cacti
#
##: mysql -u root -p cacti
#mysql> grant all on cacti.* to cactiuser@localhost identified by 'yercactiuserpassword';
#> grant all on cacti.* to cactiuser;
#> grant all on cacti.* to root@localhost;
#> grant all on cacti.* to root;
#> flush privileges;
#> exit
#__
#
#Time to cron the poller
#
#There are several ways to do this...
#switch to cactiuser
##: su cactiuser
##: crontab -e
#
#insert this line to poll every 5 minutes.. make sure you use proper paths, we want to use GWOS' php binary.
#
#*/5 * * * * /usr/local/groundwork/bin/php /usr/local/groundwork/cacti/poller.php > /dev/null 2>&1
#
#esc shift-ZZ to exit
#
#__
#
#Now to build Cacti into the GWOS tabs.
#
#Instructions for adding a tab to GWOS can be found at https://wiki.chpc.utah.edu/index.php/Ad ... _Interface
#
#Specific instructions:
#
#mkdir /usr/local/groundwork/guava/packages/cacti
#mkdir /usr/local/groundwork/guava/packages/cacti/support
#mkdir /usr/local/groundwork/guava/packages/cacti/sysmodules
#mkdir /usr/local/groundwork/guava/packages/cacti/views
#
#now we create the package definition
#
#cd /usr/local/groundwork/guava/packages/cacti
#emacs package.pkg
#
#create contents appropriately
#
#My package.pkg is as follows:
#
#
#/* Cacti Package File */
#
#define package {
#name = Cacti
#shortname = Cacti
#description = cacti graphing utility
#version_major = 1
#version_minor = 0
#}
#
#
#define view {
#name = Cacti
#description = Cacti Graph Viewer
#classname = CactiMainView
#file = views/cacti.inc.php
#}
#
#
#___
#
#Now to create the view file:
#
#cd /usr/local/groundwork/guava/packages/cacti/views
#
#emacs cacti.inc.php
#
#Example contents:
#<php>
#
#____
#
#Now you must set permissions for cacti
#
#chown -R nagios.nagios /usr/local/groundwork/guava/packages/cacti/
#chmod -R a+r /usr/local/groundwork/guava/packages/cacti/
#
#
#Installing cacti into GW
#
#Login to GWOS
#Select the Administration tab
#Select packages from below the Administration tab
#Select Cacti from the main menu
#Select Install this package now
#
#
#Now select the Users link below the Administration tab
#
#Select Administrators from the Roles submenu.
#At the bottom of the page click the drop menu "Add View to This Role" and select Cacti, click the "Add View" button
#
#
#Now logout of GWOS, then log back in. The Cacti tab should now be available.
#
#__
#
#For my apache config (below) to work, you must symlink cacti into the apache2 dir.
##: ln -s /usr/local/groundwork/cacti /usr/local/groundwork/apache2/htdocs/cacti
#
#Now for apache configuration... I am not an apache guru, so this is a hackaround...
#
#edit your httpd.conf from /usr/local/groundwork/apache2/conf/httpd.conf
#
#add index.php to your DirectoryIndex list
#
#below the monitor/ alias directory add
#
#Alias /cacti/ "/usr/local/groundwork/apache2/htdocs/cacti/"
#
#<Directory>
#Options Indexes
#AllowOverride None
#Order allow,deny
#Allow from all
#</Directory>
#
#restart apache now
##: /usr/local/groundwork/apache2/bin/apachectl restart
#
#Now go back to GWOS and click the Cacti tab. You should see an index of the cacti directory. Click the install dir link and then the index.php link. You will be prompted through new installation, make sure and set your binary paths to those in the GWOS directory. i.e. /usr/local/groundwork/bin/rrdtool
#
#
#At this point I had a permission problem, probably due to the symlink above... if you followed my instructions, you will need to do this to fix the problem.
##: chown -R nagios.nagios /usr/local/groundwork/cacti
##: cd /usr/local/groundwork/cacti
##: chown -R cactiuser.users rra/ log/
#
#Restart apache, reload the GW page, and Presto, the Cacti tab should bring you directly to the Cacti login screen.
#
#
#Let me know if you have problems. It's well within the bounds of probable that I made a mistake.
#
#Jeff
#
#
#Last edited by dudemanxtreme on Thu Oct 19, 2006 6:28 am, edited 1 time in total.
#
#dudemanxtreme 	
# Post subject: mistake
#PostPosted: Thu Oct 19, 2006 6:21 am 
#The Forum won't write the cacti Alias directory, so here you go...
#
#after the Alias /cacti/ line, in the Directory <> add "/usr/local/groundwork/apache2/cacti"
#
#
#dudemanxtreme 	
# Post subject: newest cacti
#PostPosted: Thu Oct 19, 2006 7:56 am 
#The instructions posted worked fine for cacti-0.8.6h, but with the newest cacti-0.8.6i I cannot get graphs to show up. RRDs are created and updated properly however. I have posted to the cacti forums, if/when I get it working I will update.
#
#
#dudemanxtreme 	
# Post subject: fixed
#PostPosted: Thu Oct 19, 2006 8:20 pm 
#hokay... if you followed my instructions above, this is what you need to do to get graphs to render...
#
#edit /usr/local/groundwork/cacti/lib/rrd.php
#
#delete or comment out the following segment
#
#/* rrdtool 1.2.x does not provide smooth lines, let's force it */
#if (read_config_option("rrdtool_version") == "rrd-1.2.x") {
#$graph_opts .= "--slope-mode" . RRD_NL;
#}
#
#
#Save, exit, and voila. If anyone has any problems past this point... Well, we'll do what we can.

Wigner

This is the HP printer in the office.

Color to Black 2009-08-07

This is how to switch Wigner from color to black, when color ink is low.

  1. Log onto the web interface of wigner, with the username npg-admins.
  2. Go into the Settings tab.
  3. Drill down into Configure Device>System Setup, scroll down, and change the following options:

Color/Black Mix		from: Mostly Color Pages	to: Mostly Black Pages
Color Supply		from: Stop			to: Autocontinue Black

  • There are also some settings under Configure Device>Print Quality>Print Modes> that I changed from their Normal settings:

Color			from: Normal Mode		to: Autosensing Mode

Tomato