Pumpkin
Pumpkin is our new 8-CPU, 24-disk monster machine. It is really, really nice. Currently it is only tied to the "corn" IP address.
Basic Setup
- We will run Xen on this so that it can have two personalities: Pumpkin (64-bit) and Corn (32-bit), both RHEL5.
- In order to do this right, Pumpkin should be the host, since you can't virtualize 64-bit under 32-bit, but the reverse works. See the bottom of http://www.redhat.com/rhel/virtualization/. Currently, all boot options in GRUB are 32-bit. The only difference between the first and second boot options is that the first (default) loads an initrd ending in _raid.img, which panics.
- The RAID is currently split into two sets. This allows for much easier maintenance and possible future upgrades.
- Disks 1 to 11 are in RAID Set 0, which holds the RAID Volumes System (300GB, RAID6, SCSI:0.0.0), System1 (300GB, RAID6, SCSI:0.0.1), and Data1 (6833GB, RAID5, SCSI:0.0.2).
- Disks 11 to 22 are in RAID Set 1, which holds the RAID Volume Data2 (7499GB, RAID5, SCSI:0.0.3).
- Disks 23 and 24 are pass-through (single disks) at SCSI:0.0.6 and SCSI:0.0.7. These can be used as spares, as backup, or to expand the other RAID sets later on.
- The RAID card can be monitored at http://10.0.0.99/. Log in as "admin" with a password that is the same as the door combo.
- To use this card with Linux you need the arcmsr driver. It must be part of the initrd for the kernel, otherwise you cannot boot from the RAID.
- The kernel module can be built from the sources located in /usr/src/kernels/Acera_RAID; just run make. (See the sketch after this list for getting the module into the initrd.)
- Currently we have a temporary drive in the system on the onboard SATA which holds a RHEL5 distro and the original RHEL4 distro from the manufacturer.
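A minimal sketch of building arcmsr and getting it into an initrd, assuming the Makefile produces arcmsr.ko and the running kernel is the one we want to boot from the RAID (the _raid.img name just mirrors the existing GRUB entry):

    cd /usr/src/kernels/Acera_RAID
    make
    # put the freshly built module where depmod can find it
    cp arcmsr.ko /lib/modules/$(uname -r)/kernel/drivers/scsi/
    depmod -a
    # build an initrd that loads arcmsr so the RAID volumes are visible at boot
    mkinitrd --with=arcmsr /boot/initrd-$(uname -r)_raid.img $(uname -r)

The matching GRUB entry then needs to point at that initrd.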
It seems that right now, the only bootable install is on the temporary drive. From what I understand, you can use Xen to create a guest OS on a partition, and once it's all set up, you can even point GRUB to boot that as the "real" OS. A possible plan of action: get RAID working on the current install, use Xen to put a 64-bit RHEL5 install (Pumpkin) on one of the RAID sets (probably System), boot that, and then Xen-install 32-bit RHEL5 (Corn) on the other RAID set (see the sketch below). At that point, we could pull the random drive we're using now and be close to done.
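A rough sketch of the Xen install step on RHEL5, assuming the target RAID volume shows up as a plain block device; the /dev/sdX path and the install-tree URL are placeholders, not our real ones:

    # paravirtualized 64-bit RHEL5 guest installed straight onto a RAID volume
    virt-install --paravirt --name pumpkin --ram 4096 \
        --file /dev/sdX \
        --location http://mirror.example.com/rhel5/x86_64/os \
        --nographics
    # once the install finishes, the guest can be started with
    xm create pumpkin

The 32-bit Corn guest would be the same command with an i386 install tree and the other RAID volume.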
To Do
- Move the system to the System volume and remove the current temp drive.
- Set up mount points for the data volumes (see the fstab sketch below).
- Set up LDAP so users can log in. I started, but it's not working yet (see the authconfig sketch below).
- Set up NFS exports so other systems can see the drives (see the sketch below).
- Set up autofs so that this system can see other systems' drives (same sketch).
- Set up lm_sensors so that we can monitor the system (see the monitoring sketch below).
- Set up smartd so we will know when a disk is going bad. This can also be done inside the RAID card, which can send SNMP traps and email alerts, but it still needs to be done.
- Set up the other system (Corn) with Xen on the System1 volume.
- Set up SNMP for Cacti monitoring (see the monitoring sketch below).
- There must be other things....
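For the data mount points, something along these lines in /etc/fstab; the device names, filesystem type, and mount-point names are placeholders until we settle on the real layout:

    # hypothetical entries for the Data1 and Data2 RAID volumes
    /dev/sdc1   /data1   ext3   defaults   1 2
    /dev/sdd1   /data2   ext3   defaults   1 2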
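For the LDAP client side, RHEL5's authconfig can do most of the wiring in one shot; the server and base DN below are placeholders:

    authconfig --enableldap --enableldapauth \
        --ldapserver=ldap://ldap.example.org \
        --ldapbasedn="dc=example,dc=org" \
        --update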
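For the exports and autofs items, a sketch assuming the /data1 and /data2 mount points above and our internal subnet (both placeholders):

    # /etc/exports
    /data1  10.0.0.0/255.255.255.0(rw,sync,no_root_squash)
    /data2  10.0.0.0/255.255.255.0(rw,sync,no_root_squash)

    # publish the exports and make sure nfs is running
    exportfs -ra
    service nfs restart

    # /etc/auto.master -- let autofs browse other hosts' exports under /net
    /net    -hosts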
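For the sensors and SNMP/Cacti items, the stock RHEL5 tools should be enough; the community string and allowed subnet below are placeholders:

    # hardware sensors
    sensors-detect              # answer the prompts; writes /etc/sysconfig/lm_sensors
    service lm_sensors start
    sensors                     # check the readings

    # read-only SNMP access for Cacti: one line in /etc/snmp/snmpd.conf
    rocommunity npgreadonly 10.0.0.0/24

    service snmpd start
    chkconfig snmpd on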
Done
- Set up Ethernet.
- Set up RAID volumes.
- Set up partitions and create file systems.