Here is my recommended Openfiler installation guide. I was initially going to use pen/flash/USB/thumb drives, but I found that they are a) very slow, b) somewhat unreliable and c) have a limited write-cycle life.
This is part one of two.
Openfiler Installation - Part I
This setup assumes the following configuration: a Chenbro case, a motherboard with 2 onboard Gb NICs, an extra NIC for management, a PCI-X slot, 2 small hard drives for booting attached to the motherboard, an Areca 1230 RAID card, 2 GB of RAM and a bunch of SATA disks attached to the Areca card.
At the end of this installation, you will have a reliable NFS/iSCSI box that can be used with VMware ESX with good performance. The machine boots from the hard drives attached to the motherboard, so all 12 available hard drive slots can be used for storage drives. We did try booting from USB sticks, but they were a) somewhat unreliable, b) limited to 1-5 million write cycles and c) quite slow.
You need three IPs for the machine – one for management, one for monitoring the Areca card, and one for the iSCSI/NFS traffic. The last one is usually on a different subnet.
1) Set up your hardware.
2) Boot the machine – set BIOS appropriately (make boot order CD, then motherboard attached hard drive boot disks)
3) Go into the Areca RAID BIOS – press F6
4) Set the IP for the Areca card in the Ethernet config – the default password is 0000
5) Set up the RAID volume as one large volume across all disks – either use Quick Volume & Raid Setup, or create a RAID set and then a volume
6) Press F10 to Reset/Exit
7) Boot from the Openfiler 2.3 64-bit CD
8) At the boot prompt, type: expert text
9) Skip the media test
10) Press OK on Welcome to Openfiler
11) Keyboard – US
12) Use Disk Druid
13) Initialize – Yes
14) If there are any existing partitions, wipe them out
15) Create a new 200 MB partition with Filesystem Type: software RAID on the first boot disk (usually sda – the boot disks will be the smaller ones). Note that you need to pick the correct hard drive to apply it to – only one at a time. Use TAB to move between selections and the cursor keys to move up/down.
16) Do the same for the 2nd boot disk (usually sdb)
17) Create a new 10000 MB partition with Filesystem Type: software RAID on the first boot disk
18) Do the same for the 2nd boot disk (usually sdb)
19) Create a new 5000 MB swap partition on each of the boot disks (sda and sdb)
20) Before you begin this step, note which partitions are the 200 MB ones. Now use the RAID option on the partition overview screen. Type “/boot” in the mount point (only type what is inside the quotes). Filesystem type should be ext3. Use RAID1 and only include the 200 MB partitions on each of the 2 boot disks.
21) Again use the RAID option to create a “/” mount point. Again use RAID1, ext3 and use the remaining 2 partitions - one on each disk.
22) Click OK
23) Configure the Ethernet interfaces appropriately (we usually don’t use DHCP) and make sure Activate on Boot is set for all of them. Put the management IP on only one of them and fake IPs on the other two for now, as they will be replaced by a bonded connection later.
24) Set the hostname – ex: openfiler1.lexcom.local
25) Set the Timezone to America/Regina (unless you are somewhere else!)
26) Set the password
27) Click OK on Log screen
28) Let it install – takes about 15 minutes
29) When it is finished, it may eject the CD. Put it back in and then reboot to the Openfiler CD. These next steps ensure that the machine will boot from either software raided drive.
30) At the boot prompt, type: linux rescue
31) Click OK on next two screens.
32) Answer No to network interface question
33) Click Continue and then OK on next screen
34) At the shell prompt, type: fdisk -l (that is a lowercase “L”)
35) Note which partitions are the 200 MB boot partitions and which ones are the 10000 MB root partitions. (Usually the boot drives are sda and sdb)
36) We want to mount one of the 10000 MB root partitions, so type the following (replace sda2 with the correct partition):
mount /dev/sda2 /mnt/source
37) Type: cd /mnt/source/sbin
38) Type: ./grub
39) For these next commands, make sure you use the boot drives (again, usually sda and sdb). At the grub prompt, type the following (again, replace sda with the correct drive – use only the letter; a trailing number would indicate a partition):
device (hd0) /dev/sda
40) Type: root (hd0,0) – expect an “unknown partition” message after this
41) Type: setup (hd0)
42) Type: device (hd1) /dev/sdb
43) Type: root (hd1,0) – expect an “unknown partition” message after this
44) Type: setup (hd1)
45) Type: quit
46) At the shell prompt, type exit
47) Eject the CD and ensure that the BIOS is set to boot from the motherboard hard drives.
48) Reboot and continue with Openfiler configuration.
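For reference, the rescue-mode steps above (30-45) can be condensed into a single shell session. This is a sketch only – it assumes sda and sdb are the boot drives and sda2 is the 10000 MB root partition, so substitute your own devices:

```shell
# After booting the Openfiler CD with "linux rescue":
mount /dev/sda2 /mnt/source   # mount the 10000 MB root partition (adjust device)
cd /mnt/source/sbin
./grub <<'EOF'
device (hd0) /dev/sda
root (hd0,0)
setup (hd0)
device (hd1) /dev/sdb
root (hd1,0)
setup (hd1)
quit
EOF
exit                          # leave the rescue shell, then reboot
```

The heredoc simply feeds grub the same commands you would otherwise type interactively, so expect the same “unknown partition” messages after each root line.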
I will post part two as soon as it is done.
Wednesday, August 20, 2008
Wednesday, July 9, 2008
VMware ESX 3.5, MSCS and iSCSI
Trying to setup Microsoft Clustering Services on VMware ESX 3.5 with iSCSI turned out to be a real pain in the *ss.
The PDF from VMware (http://www.vmware.com/pdf/vi3_35/esx_3/vi3_35_25_u1_mscs.pdf) describes the "proper", supported way of doing it. However, I don't have Fibre Channel, and if you follow those instructions, you can't VMotion any machine that has a clustered disk (unless you put all the VMs on the same ESX server, but then what is the point of MSCS?)
Anyway, I wanted a solution that would give me clustering yet still keep the benefit of VMotion. So my solution was to use the Microsoft iSCSI Initiator inside the Windows VMs and connect to the iSCSI LUNs directly from the VMs.
First I set up the iSCSI SAN. For this I used the newest Openfiler 2.3 as the iSCSI target. I set it up to boot from a pen drive (USB memory stick) following some instructions here: http://ha.nnes.be/index.php/install-openfiler-on-usb-stick/
Mind you, I didn't do the chroot and I created separate /boot and / partitions, so my install was slightly different. Anyway, I got it all working.
Next, I set up Windows 2003 VMs with Clustering and installed the Microsoft iSCSI initiator.
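Inside each Windows 2003 VM, the initiator connection can be scripted with the built-in iscsicli tool instead of clicking through the GUI. This is only a sketch – the portal IP and target IQN below are made-up examples, not values from my setup:

```
rem Run in the Windows 2003 VM after installing the MS iSCSI Initiator
rem (the portal IP and target IQN are examples - substitute your own)
iscsicli QAddTargetPortal 192.168.20.10
iscsicli ListTargets
iscsicli QLoginTarget iqn.2006-01.com.openfiler:cluster.quorum
```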
Tired now....gonna post the rest later....
Edit: August 14th, 2008
Found out that USB drives are somewhat flaky for this purpose. Plus, they have a limited write-cycle life – anywhere from 100,000 to 1-5 million writes – and on a system that is supposed to be up 24x7 and be extremely reliable, I don't feel comfortable using a $10 USB stick that could wear out.
So...I have redone this all with Openfiler 2.3, Areca 1230 RAID card and 2 small internal hard drives attached via SATA to the motherboard using Software RAID for the boot drives. I have been running this setup for over a month and done extensive testing on various settings/configs to find the one that worked the best (for me).
I use iSCSI for the clustered disks (with the Microsoft iSCSI Initiator) and NFS mounts for all regular disks (from VMware ESX). Surprisingly, NFS performs pretty much the same as iSCSI, plus a) it is readable from the Openfiler directly and b) it doesn't have the 2 TB LUN restriction. The only downside I have seen thus far is that I don't get disk performance stats for the NFS disks from ESX. Not a huge thing.
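For reference, an NFS export can be attached to ESX 3.5 from the service console with esxcfg-nas. A sketch – the host IP, export path and datastore label here are assumptions, not from my setup:

```shell
# Add the Openfiler NFS export as an ESX datastore
# (host IP, share path and label are examples - use your own)
esxcfg-nas -a -o 192.168.20.10 -s /mnt/vg0/vmstore nfsstore1
esxcfg-nas -l    # list configured NAS datastores to confirm the mount
```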
Two important things to get the best speed from NFS: first, set async mode on the NFS share (this assumes you are on a good UPS) – it significantly improves write speed. Second, use balance-alb bonding mode for your two Gb Ethernet NICs – it has no special switch requirements but gives good speed both receiving and sending.
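As configuration-fragment sketches (the paths, subnet and file locations are assumptions, not from my setup), the async export and the balance-alb bond might look like this on the filer:

```
# /etc/exports - async acknowledges writes before they reach disk,
# so a good UPS is assumed (a power loss can drop in-flight data)
/mnt/vg0/vmstore 192.168.20.0/255.255.255.0(rw,async,no_root_squash)

# /etc/modprobe.conf - the two Gb NICs bonded in balance-alb (mode 6);
# needs no special switch support, balances transmit and receive
alias bond0 bonding
options bond0 mode=balance-alb miimon=100
```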
I will try to document my steps here in a little while.