Debian install (base)

Got the Debian stable ("Lenny") AMD64 release 5.04 netinst ISO (namely, debian-504-amd64-netinst.iso) and burned it onto a CDROM.

The installation was executed through the integrated eLOM management console, but a real CDROM was used, since remotely mounting an ISO image makes the virtual CDROM device show up as /dev/sda, and personally I dislike that.
Except for inserting the physical CDROM, all the operations reported below were carried out remotely, using either the eLOM web interface (e.g. for BIOS access and modifications) or the ILOM Java remote client (for the whole installation process).

What follows are just some notes to help steer a similar installation process. A fair knowledge of the Debian installer is assumed.

Basic Debian installation


Configure BIOS
Make sure the BIOS is configured with "Optimal defaults", whatever those are.
Don't even think of enabling the hardware watchdog: it takes it upon itself to reboot the machine, assuming the load is too heavy (duh!).

Enable IOMMU
The Debian installer complains that the IOMMU is disabled, but I could not find any entry in the BIOS to toggle the IOMMU (on the X2200 M2 it was under Advanced → IOMMU Option Menu → 64 MB; strange, strange...)

Network configuration
The box sports 4 gigabit Ethernet NICs: during the installation phase, select the first one (i.e. the first nVIDIA one) and let DHCP do its dirty job (of course you need a valid DHCP server, YMMV!). The network configuration has to be tuned appropriately afterwards; see "Configure network" below.

Disk partitioning
HD size, after RAID configuration, is 2 x 146 GB. The machine will be a workgroup file server, so long-term data storage is rather the point.
I started from our old server's partitioning, resizing the most used partitions and trying to place the two most heavily used ones on the two different RAID volumes. Some mountpoints have been redefined, adopting the /srv directory standard.
  device  mount point  format  size (GB)
  sda1    /            ext3      6
  sda2    /var         ext3     15
  sda3    <swap>       <swap>    8
  sda5    /var/www     ext3      5
  sda6    /home        ext3     32
  sda7    /srv/share   ext3     70
  sda8    /srv/udc     ext3     10
  sdb1    /srv/gis     ext3    120
  sdb2    /srv/data    ext3      5
  sdb3    <swap>       <swap>    8
  sdb4    /srv/apps    ext3     13
Note that, due to LSI RAID controller idiosyncrasies, the first SCSI device (LUN 0,0,0) is seen by Debian as /dev/sdb, whereas the second (namely, the two disks in slots 6 and 7, LUN 0,6,0) is /dev/sda.
For that reason, I placed the "system" RAID array in slots 6 and 7, so it will always be sda. Different solutions are available, but for now this will do. Will check Sun's MRM... [Update 20100309: MRM is packaged as RPM only, and has its own license. Since it does not look like a vital piece of software, I'll forget about it.]

Tasksel
Leave it as proposed (Standard system), i.e. install just the base system.
Any extra stuff will be installed separately, and its installation will be documented in a dedicated page.

GRUB
Let the installer set up and configure GRUB (install it on the MBR). I prefer good ol' LILO, and will replace GRUB with it afterwards.

Post-installation tweaks


Check apt repositories list
Edit /etc/apt/sources.list: use the standard stable repositories, this is a production server!
  deb http://<your nearest Debian mirror here> lenny main
  deb-src http://<your nearest Debian mirror here> lenny main

  deb http://security.debian.org/ lenny/updates main
  deb-src http://security.debian.org/ lenny/updates main

  deb http://volatile.debian.org/debian-volatile lenny/volatile main
  deb-src http://volatile.debian.org/debian-volatile lenny/volatile main

Add RAID monitoring software
aptitude install mpt-status
Edit /etc/modules and add a line to load the mptctl module at boot.
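A minimal way to do that (a sketch; it assumes mptctl is not already listed there):
  echo mptctl >> /etc/modules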
You can check array status with:
  mpt-status -i <ID> --newstyle
where <ID> is the LUN number of the first disk in each array (e.g. 0 and 6, given the hardware configuration of the host I'm setting up, with two RAID1 arrays, one on the disks in bays 0 and 1 and the other on the disks in bays 6 and 7).
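For instance, on this host the two checks would be:
  mpt-status -i 0 --newstyle
  mpt-status -i 6 --newstyle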

See if any OS upgrade is needed
The mantra goes like this: aptitude update && aptitude safe-upgrade

Replace GRUB with LILO
  1. Install LILO with aptitude install lilo lilo-doc
  2. Next, run liloconfig to configure LILO:
    1. install partition boot record to boot from /dev/sda1? → YES
    2. pick a bitmap for LILO fancy background → /boot/coffee.bmp
    3. install a master boot record on /dev/sda? → YES
    4. make /dev/sda1 the active partition? → YES
  3. Open /etc/lilo.conf with your favourite editor, and check that everything is OK; it should look something like this:
    lba32
    boot=/dev/sda
    root=/dev/sda1
    bitmap=/boot/debianlilo.bmp
    bmp-colors=1,,0;9,,0
    bmp-table=106p,144p,2,9,144-
    bmp-timer=514p,144p,6,8,0
    install=bmp
    prompt
    timeout=50
    large-memory
    map=/boot/map
    vga=ask
    This means: put the boot loader in the sda MBR (not in sda's first partition!). If you change anything here, rerun lilo to rewrite the boot record.
  4. Try a reboot.
  5. If everything is fine, you're done, just remember to piss off grub with aptitude purge grub!
It looks like, when the kernel is upgraded via aptitude, a call is made to update-grub. We need update-lilo to be called instead, so:
ln -s /usr/sbin/update-lilo /usr/sbin/update-grub
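An alternative sketch, assuming your /etc/kernel-img.conf uses the postinst/postrm hook convention, is to point the hooks at update-lilo explicitly (the symlink above is the route actually taken here):
  # /etc/kernel-img.conf (assumption: the hooks are defined on this system)
  postinst_hook = /usr/sbin/update-lilo
  postrm_hook = /usr/sbin/update-lilo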
Last, a fancy touch: replace the bitmap with my favourite one (this can be done once ssh or cifs/NFS access is working): copy debiansquirrel.bmp and debiansquirrel.dat to /boot, edit lilo.conf and rerun lilo. Replace the bitmap stanza with this one:
  bitmap = /boot/debiansquirrel.bmp
  bmp-table = 13,6;1,25,16,4
  bmp-colors = 7,,;6,,13
  bmp-timer = 65,3;11,0,13
The bmp-* parameter values come from the file debiansquirrel.dat.
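In practice (assuming the two files sit in the current directory):
  cp debiansquirrel.bmp debiansquirrel.dat /boot/
  # after editing /etc/lilo.conf as above, rewrite the boot record:
  lilo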
Check whether the RAID array is solid enough: stop the machine, remove the drive in bay 6 and try a reboot, simulating a degraded RAID. The machine should boot normally, and you should see that an array is degraded with
  mpt-status -i 6
Remember that mpt-status only works if the mptctl module has been loaded (check with lsmod, then either issue a modprobe mptctl, or make sure mptctl is permanently listed in /etc/modules so it gets loaded at boot).
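For instance:
  lsmod | grep mptctl || modprobe mptctl                      # load it now if missing
  grep -q mptctl /etc/modules || echo mptctl >> /etc/modules  # make sure it loads at boot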
Halt the system, re-insert the drive, reboot and wait for the resync...
Repeat with drive in bay 5...

Set up nice console fonts
First, grab as much screen real estate as possible:
  1. Enable hi-res VESA modes: add vga=ask to lilo.conf (and rerun lilo)
  2. Reboot and select 'scan'; choose something near 132x60. I decided to use a VESA mode of 1280x1024x32 (0x323, decimal 803).
  3. lilo.conf needs the VESA mode in decimal, so I put vga=803 there (see the fragment below).
  4. Try a reboot and check if it works.
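For the record, the change boils down to this:
  # in /etc/lilo.conf, replace vga=ask with the chosen decimal mode:
  vga=803
  # then, from a shell, rewrite the boot record:
  lilo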
Next, set up a fancy font (I love Terminus! Credits to blog.venthur.de):
  1. install the Terminus console font with aptitude install console-terminus
  2. enable the font in /etc/console-tools/config, setting SCREEN_FONT=Uni3-Terminus16
  3. to have the same font on all terminals, comment out all the SCREEN_FONT_vc* lines in /etc/console-tools/config
  4. test the new font with $ sudo /etc/init.d/console-screen.sh start

Enable LS_COLORS
Edit root's .bashrc, and uncomment (or add) the following lines:
  # You may uncomment the following lines if you want `ls' to be colorized:
  export LS_OPTIONS='--color=auto'
  eval "`dircolors`"
  alias ls='ls $LS_OPTIONS'
  alias ll='ls $LS_OPTIONS -l'
  alias l='ls $LS_OPTIONS -lA'
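To apply the aliases in the current shell without logging out and back in:
  . ~/.bashrc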

Configure network: assign static ip and create bonding interface
The box has 4 NICs. All of them must be bonded and work as one.
  1. Use ifconfig -a and jot down the MAC addresses of the four NICs (eth[0-3]); you'll need them to configure the bonding interface
  2. Install ifenslave. Mind that we're using a 2.6 kernel, so issue an aptitude install ifenslave-2.6
  3. Edit /etc/network/interfaces, comment out the DHCP stuff and add this:
    auto bond0
    iface bond0 inet static
        address <your-ip-address>
        netmask <your netmask>
        broadcast <your broadcast address>
        gateway <your gateway address>
        pre-up ifconfig eth0 hw ether <eth0 MAC address>
        pre-up ifconfig eth1 hw ether <eth1 MAC address>
        pre-up ifconfig eth2 hw ether <eth2 MAC address>
        pre-up ifconfig eth3 hw ether <eth3 MAC address>
        up ifenslave bond0 eth0 eth1 eth2 eth3
        down ifenslave -d bond0 eth0 eth1 eth2 eth3
        post-down ifconfig eth0 down && ifconfig eth1 down && ifconfig eth2 down && ifconfig eth3 down
  4. Edit /etc/modprobe.d/arch/x86_64:
    alias bond0 bonding
    options bonding miimon=100 downdelay=200 updelay=200 mode=5
    (mode=5 is balance-tlb, adaptive transmit load balancing, which requires no special switch support)
  5. Restart networking service with /etc/init.d/networking restart.
  6. To check for proper working: cat /proc/net/bonding/bond0
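A quick failover sanity check (do it from the console, not over the bond itself): unplug one of the cables and verify that the bond keeps working, e.g.:
  cat /proc/net/bonding/bond0 | grep -i 'mii status'   # the unplugged slave should read "down"
  ping -c 3 <your gateway address>                     # traffic should still flow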

NTP & timezone setup
  1. install time zones and daylight savings data with
    aptitude install tzdata
    configure the timezone with dpkg-reconfigure tzdata; we're in Italy, so I chose "Europe", "Rome"
  2. install network time protocol daemon with
    aptitude install ntp
    edit /etc/ntp.conf:
    1. comment out all the server lines
    2. add the nearest ntp servers, in my case:
      server <your nearest ntp server here>
      server pool.ntp.org
    3. (re)start ntpd with /etc/init.d/ntp restart
    4. test functionality with ntpq -p
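After a few minutes the daemon should have picked a peer to synchronise with; the selected server is marked with an asterisk:
  ntpq -p   # a '*' in the first column marks the peer currently used for synchronisation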

Deactivate "user-private" groups
In /etc/adduser.conf set:
  USERGROUPS=no
  SETGID_HOME=yes
so that all users belong to the users group, and home directories have their GID set to the users group as well.
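To verify the new behaviour (a throwaway check; 'testuser' is just a placeholder name):
  adduser --disabled-password --gecos '' testuser
  id testuser                      # primary group should now be "users"
  ls -ld /home/testuser            # group "users", with the setgid bit set
  deluser --remove-home testuser   # clean up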

UPS support [still to be done, deferred until the right UPSs are in place]
Since we have an APC Back-UPS ES 700 with a USB signalling cable (the box draws a mere 250 W, green computing rules!), proceed the easy way: this time no mucking about with soldering irons, RS-232 cables and dumb signalling!
  1. install apcupsd: aptitude install apcupsd apcupsd-doc
  2. in /etc/apcupsd/apcupsd.conf, set the relevant parameters:
    UPSNAME <your UPS name here>
    UPSCABLE usb
    UPSTYPE usb
    #DEVICE   (comment out the DEVICE line, so that USB UPSs will be located automagically)
    NETSERVER on
    NISPORT 3551
  3. in /etc/default/apcupsd, set ISCONFIGURED=yes
  4. restart apcupsd (/etc/init.d/apcupsd restart)
  5. check if the UPS is detected with apcaccess
Please, please, please, remember to carry out all the tests recommended on the apcupsd site! You've been warned.

Set up a backup system
Install flexbackup with aptitude install flexbackup. Follow my flexbackup HOWTO. Ugh.

Well, that's all for a bare production system... producing nothing at all except keeping itself sane.
For service-specific installation & configuration stuff, go to the grocer's list.