
Root on LVM on software RAID howto [Debian Etch]


1. Introduction

2. Requirements

3. Installation

4. Using self compiled kernel

5. Tips and tricks

6. Trouble shooting

7. Testing

8. About


1. Introduction


This howto briefly describes how to install Debian Etch on LVM2 on software RAID 1.


This can be done directly from the official Etch DVD, without having to install the OS on a classical partition and then move it to an LV residing on RAID.


If you need more information about LVM, you might be interested in my LVM micro howto.


If you need more information about RAID, you might be interested in my RAID micro howto.


A couple of remarks before starting:


1) This task is not recommended for beginners, but since I have been contacted by a lot of non-expert users, I have rewritten the whole document, describing the most important parts in more detail. This should also help users with only a basic knowledge of LVM and RAID. ;-)


2) Don't forget to make a backup of your data, preferably on an external HD that is unplugged before starting the installation, or on a CD/DVD.


1.1. Restrictions


1.1.1. Restriction one


There is a limitation when installing the system on an LV: the "vgscan" and "vgchange" commands must be available to the system in order to mount LVs.


There are two possibilities to accomplish this:


1) Use of an initrd image.


2) Install only a part of the system on LVs.


With the first solution, the whole system can be placed on LVs (either in a single LV or in several LVs). I'm not a fan of initrd images, but this is the easiest way.


With the second solution, /sbin, which contains the vg tools, must be placed outside the LVs, in a non-LVM partition.

At least some other directories must also be outside the LVs (e.g.: /, /etc, /dev, ...). The great advantage is that only the necessary LVs are mounted read-write; all the rest is mounted read-only!


The second solution is the right way to configure a server, but it is a little bit expensive for a desktop PC, so I only cover the first one in this document.
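

Just to make the initrd requirement concrete, here is a rough sketch of what the initrd has to do at boot time before the real root filesystem can be mounted (a simplified illustration only, using the device and volume names chosen later in this document):


  # load the RAID and device-mapper drivers
  modprobe raid1
  modprobe dm-mod

  # assemble the array holding the PV and activate the volume group
  mdadm --assemble /dev/md1 /dev/hda2 /dev/hdb2
  vgchange -a y raid

  # now the root LV can be mounted
  mount /dev/raid/etch /mnt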


1.1.2. Restriction two


This restriction is caused by booting a system installed on an LV.


/boot cannot be an LV because there is no easy way for the bootloader to read inside LVs!


The LILO bootloader seems to be able to do that (maybe only with a patch), but this procedure is outside the scope of this document.


1.2. Layout


The chosen RAID configuration is realized with 2 IDE harddisks: /dev/hda and /dev/hdb. The second harddisk is an exact mirror of the first.


No spare disk has been considered, but you should be able to add one if you like.


1.2.1. RAID layout


Since /boot must reside in a non-LVM partition, 2 RAID devices are needed:


  /dev/md0: used for /boot

  /dev/md1: used for the LVs (/, swap, /home, ...)


Accordingly, 4 partitions are needed:


  /dev/hda1 and /dev/hdb1: used for /dev/md0

  /dev/hda2 and /dev/hdb2: used for /dev/md1


Here is the result:


  +-------------------------------------------------------------------+

  |  hda1  |                           hda2                           |

  +-------------------------------------------------------------------+

      |                                 |

  +--------+----------------------------------------------------------+

  |   md0  |                           md1                            |

  |  /boot |                   LVs: root, swap, ...                   |

  +--------+----------------------------------------------------------+

      |                                 |

  +-------------------------------------------------------------------+

  |  hdb1  |                           hdb2                           |

  +-------------------------------------------------------------------+
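

For reference only: the installer will create these arrays for you (see chapter 3.3.3). Done by hand with mdadm on a running system, the same layout would be created with commands roughly like these:


  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hda1 /dev/hdb1

  mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/hda2 /dev/hdb2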


1.2.2. LVM layout


For the volume group name I have chosen "raid", but you can use any name.


For the logical volume name of / I have chosen "etch", but you can use any name.

For the logical volume name of the swap I have chosen "swap", but you can use any name.


Here is the result:


  /dev/raid/etch: used for /

  /dev/raid/swap: used for swap
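

Again for reference only: the installer will create the volume group and the LVs for you (see chapter 3.3.5). From the command line, the equivalent LVM setup would look roughly like this (the sizes are just the examples used later):


  pvcreate /dev/md1

  vgcreate raid /dev/md1

  lvcreate -n etch -L 6G raid

  lvcreate -n swap -L 2G raid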


2. Requirements


2.1. Hardware


The minimal requirements are:


- 2 harddisks (it doesn't make sense to have RAID 1 on a single harddisk!)


2.2. Software


The minimal requirements are:


- kernel 2.6

- lvm2

- mdadm
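

The installer provides these tools automatically. On an already installed Debian system, they would come from the lvm2 and mdadm packages:


  apt-get install lvm2 mdadm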


3. Installation


3.1. Booting the DVD


Power on with the Etch DVD1 in the DVD reader.


Once the Debian logo appears, type


  expertgui


This selects the new graphical installer. The same procedure is also possible with the text mode installer.


3.2. Installing the system - before RAID/LVM configuration


Select the following items and configure them as usual:


  - "Choose language"

  - "Select a keyboard layout"

  - "Detect and mount CD-ROM"


For the step


  - "Load installer components from CD"


it was necessary, in the previous Debian release (Sarge), to load the md and lvm modules here. This is no longer necessary, because they seem to be integrated by default.


Here, you can choose some special features (e.g.: cryptography, ...), but they are not covered in this document.


Continue with:


  - "Detect network hardware"

  - "Configure the network"

  - "Detect disks"


The steps described in the next chapters are those specific to RAID and LVM.


3.3. Installing the system - RAID/LVM configuration


3.3.1. Creating partition table


Select


  - "Partition disks"


If you have new harddisks which have never been used before, you have to create a valid partition table first.


If not, no message will be shown here, in which case you can skip this chapter.


Create partition table for hda:


  - double click on the first harddisk (which corresponds to /dev/hda)

  - select "Yes" to create a new empty partition table

  - select "msdos" for partition table type


You should now see something like "pri/log xxx GB FREE SPACE" under your harddisk.


Repeat the previous steps for hdb.


3.3.2. Creating physical partitions


Create hda1:


  - double click on "FREE SPACE" of hda

  - double click on "Create a new partition"

  - enter 200MB

  - double click on "Primary"

  - select "Beginning"


  - double click on "Use as:"

  - select "physical volume for RAID"

  - double click on "Bootable flag" to set it to on

  - double click on "Done setting up the partition"


You should now see something like:


  "#1 primary 197.4 MB B K raid"


Repeat these steps for hdb1.


Create hda2:


  - double click on "FREE SPACE" of hda

  - double click on "Create a new partition"

  - leave the displayed value

  - double click on "Primary"


  - double click on "Use as:"

  - select "physical volume for RAID"

  - double click on "Done setting up the partition"


You should now see something like:


  "#2 primary xxx GB K raid"


Repeat these steps for hdb2.


3.3.3. Creating RAID devices


Scroll up and start RAID devices creation:


  - double click on "Configure software RAID"

  - select "Yes"

  - click on "Continue"


Create /dev/md0 using /dev/hda1 and /dev/hdb1:


  - double click on "Create MD device"

  - double click on "RAID1"

  - leave "2" for number of active devices

  - click on "Continue"

  - leave "0" for number of spare devices

  - click on "Continue"

  - select "/dev/hda1" and "/dev/hdb1"

  - click on "Continue"


You have just created /dev/md0, which will be used for /boot.


Create /dev/md1 using /dev/hda2 and /dev/hdb2:


  - double click on "Create MD device"

  - double click on "RAID1"

  - leave "2" for number of active devices

  - click on "Continue"

  - leave "0" for number of spare devices

  - click on "Continue"

  - select "/dev/hda2" and "/dev/hdb2"

  - click on "Continue"


You have just created /dev/md1, which will be used for the LVs.


Save RAID configuration:


  - double click on "Finish"


You should now see something like:


  RAID 1 device #0 - 197.3 MB Software RAID device

    > #1 197.3 MB


  RAID 1 device #1 - xxx GB Software RAID device

    > #1 xxx GB


Now, the RAID arrays have been created and are currently synchronizing themselves. You can verify this by going to the second console (press CTRL + ALT + F2) and repeatedly calling


  cat /proc/mdstat


It would be nice to simply call


  watch cat /proc/mdstat


but this command is not available in the Debian installer environment.
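

If you want something similar to watch, a simple shell loop on the second console does the job (stop it with CTRL + C):


  while true; do cat /proc/mdstat; sleep 2; done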


To switch back to the graphical installer, press CTRL + ALT + F5.


3.3.4. Creating LV devices


We leave /dev/md0 untouched, because /boot must reside outside the LVs (do you remember?). We will now proceed with /dev/md1.


Mark md1 as LVM:


  - double click on "#1" of "RAID1 device #1" (which corresponds to /dev/md1)

  - double click on "Use as:"

  - select "physical volume for LVM"

  - double click on "Done setting up the partition"


You should now see something like:


  RAID 1 device #1 - xxx GB Software RAID device

    > #1 xxx GB K lvm


Now, the /dev/md1 RAID device is ready to be used with LVM and we can proceed by creating our LVs.


3.3.5. Creating LVs


Scroll up and start LVM devices creation:


  - double click on "Configure the Logical Volume Manager"

  - select "Yes"

  - click on "Continue"


Create volume group:


  - double click on "Create volume group"

  - enter "raid"

  - click on "Continue"

  - select "/dev/md1"

  - click on "Continue"


Now there is a volume group called "raid" and we can use it to create our LVs.


Create logical volume "etch" with a size of 6G and of group "raid":


  - double click on "Create logical volume"

  - double click on "raid"

  - enter "etch"

  - click on "Continue"

  - enter 6G

  - click on "Continue"


This LV will be used later for the / partition.


Create logical volume "swap" with a size of 2G and of group "raid":


  - double click on "Create logical volume"

  - double click on "raid"

  - enter "swap"

  - click on "Continue"

  - enter 2G

  - click on "Continue"


This LV will be used later for the swap partition.


Depending on your configuration, create additional volumes in the group "raid" for /home, /var/www, ...

If you prefer, you can do this later, but the process will be a little bit more complicated.


Save LVM configuration:


  - double click on "Finish"


You should now see something like:


  LVM VG raid, LV etch - 6.4 GB Linux device-mapper

    > #1 6.4 GB


  LVM VG raid, LV swap - 1.9 GB Linux device-mapper

    > #1 1.9 GB


The LVM configuration is now finished and we can proceed by creating our filesystems.


3.3.6. Creating /


Create mountpoint for /:


  - double click on "#1" of the "LVM VG raid, LV etch" (which corresponds to /dev/raid/etch)

  - double click on "Use as:"

  - double click on "ReiserFS journaling file system"

  - double click on "Mount point:"

  - double click on "/ - the root file system"

  - double click on "Mount options:"

  - select "notail - disable packing of files into the file system tree"

  - click on "Continue"

  - double click on "Label:"

  - enter "etch"

  - double click on "Done setting up the partition"


You should now see something like:


  LVM VG raid, LV etch - 6.4 GB Linux device-mapper

    > #1 6.4 GB f reiserfs /


3.3.7. Creating swap


Create mountpoint for swap:


  - double click "#1" of the "LVM VG raid, LV swap" (which corresponds to /dev/raid/swap)

  - double click on "Use as:"

  - double click on "swap area"

  - double click on "Done setting up the partition"


You should now see something like:


  LVM VG raid, LV swap - 1.9 GB Linux device-mapper

    > #1 1.9 GB - f swap   swap


3.3.8. Creating additional LVs before rebooting


For each additional logical volume you have created before, repeat the same process as for the / partition.

If you didn't create any additional LV, skip this chapter.


Here is an example for /home:


  - double click on "#1" of the "LVM VG raid, LV home" (which corresponds to /dev/raid/home)

  - double click on "Use as:"

  - double click on "ReiserFS journaling file system"

  - double click on "Mount point:"

  - double click on "/home - user home directories"

  - double click on "Mount options:"

  - select "notail - disable packing of files into the file system tree"

  - click on "Continue"

  - double click on "Label:"

  - enter "home"

  - double click on "Done setting up the partition"


You should now see something like:


  LVM VG raid, LV home - xxx GB Linux device-mapper

    > #1 xxx GB - f reiserfs /home


3.3.9. Creating /boot


Create mountpoint for /boot:


  - double click on "#1" of "RAID1 device #0" (which corresponds to /dev/md0)

  - double click on "Use as:"

  - double click on "ReiserFS journaling file system"

  - double click on "Mount point:"

  - double click on "/boot - static files of the boot loader"

  - double click on "Mount options:"

  - select "notail - disable packing of files into the file system tree"

  - click on "Continue"

  - double click on "Label:"

  - enter "boot"

  - double click on "Done setting up the partition"


You should now see something like:


  RAID 1 device #0 - 197.3 MB Software RAID device

    > #1 197.3 MB - f reiserfs /boot


Once done, we just have to save the partitions.


3.3.10. Finish partitioning


Write all changes to disk:


  - double click on "Finish partitioning and write changes to disk"

  - select "Yes"

  - click on "Continue"


At this point, the special configuration for the RAID/LVM combination has been done. You can now proceed with the classical installation.


3.4. Installing the system - after RAID/LVM configuration


You can proceed by installing the system exactly as you would without LVM and RAID.


The only difference now is that /target is mounted on /dev/raid/etch and /target/boot is mounted on /dev/md0. You can verify this by going to the second console (ALT + F2) and typing


  mount


Select the following items and configure them as usual:


  - "Configure time zone"

  - "Configure the clock"

  - "Set up users and passwords"

  - "Install the base system"


When you are asked which kernel to install, if you have a dual core processor, choose:


  linux-image-2.6.8-686-smp


otherwise, choose the proposed one:


  linux-image-2.6.8-686


Continue by selecting the following items:


  "Configure the package manager"

  "Select and install software"


The next step is to install the bootloader.


3.4.1. Installing the boot loader on the first harddisk


You can choose between GRUB and LILO. Personally, I prefer GRUB because it is newer, more flexible and has some nice features like updating itself when a new kernel is installed or removed. If you want, you can try with LILO, but at your own risk.


Select:


  "Install the GRUB boot loader on a hard disk"

  - select "Yes"

  - click on "Continue"

  - leave password empty and click on "Continue"


You have just installed the boot loader in the MBR of the first harddisk (/dev/hda).


WARNING! This installs the boot loader only on the first harddisk. If this harddisk fails, you will not be able to boot the system. Therefore, the boot loader must also be installed on the second harddisk. Since this step cannot be accomplished here, you have to do it later.


Detailed documentation about GRUB can be found here: http://www.gnu.org/software/grub/manual/html_node/index.html.


3.4.2. Rebooting


At this point, your RAID will probably not be 100% synchronized yet. You can check this by switching to the second console (ALT + F2) and repeatedly calling


  cat /proc/mdstat


You can reboot anyway by selecting


  "Finish the installation"


You can let your harddisks finish synchronizing later. The only risk until then is that, if the first harddisk fails, the second won't be usable and your system will be broken.


If your system does not boot correctly, you have probably done something wrong in one of the steps above.

In this case, refer to the chapter "Trouble shooting".


3.5. Installing the boot loader on the second harddisk


Since the bootloader has been installed only on the first harddisk, you have to manually install it on the second one.

Therefore, once the system is up:


  grub

  device (hd0) /dev/hdb

  root (hd0,0)

  setup (hd0)


Of course, replace the device path with that of your second harddisk!


3.6. Installing and configuring additional packages


You can now proceed by installing all the packages you need and, of course, configuring them.


3.7. Creating additional LVs after rebooting


If you have already done this step before, just skip to the next chapter.


Don't forget to create all the LVs you need (e.g.: /home, /var/www, ...) and to update the /etc/fstab file.


Of course, before mounting them on the correct mount point, move their content to the LV.


Here is a complete example for /home.


3.7.1. Creating /home


Create LV of 2G:


  lvcreate -n home -L 2G raid


Create filesystem:


  mkreiserfs --label home /dev/raid/home


Temporarily mount /dev/raid/home and transfer the content of /home:


  mount /dev/raid/home /mnt

  mv /home/* /mnt

  umount /mnt
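

Note that "mv /home/*" does not move hidden files (dotfiles). If your /home already contains such files, a safer transfer would look like this (a sketch; remove the old content only after having verified the copy):


  mount /dev/raid/home /mnt

  # -a preserves permissions and ownership; /home/. also includes hidden files
  cp -a /home/. /mnt/

  umount /mnt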


3.7.2. Updating /etc/fstab


Add an entry for /home:


  /dev/raid/home  /home  reiserfs  noatime,notail  0  0


3.7.3. Mounting /home


  mount /home


4. Using self compiled kernel


4.1. Introduction


A self-compiled kernel is not a must, but it could be necessary for some reason.


In such a case, it is important to create the initrd image, otherwise the system will not boot.


4.2. Compilation


In order to have an initrd image, it's enough to specify the parameter


  --initrd


Therefore:


  fakeroot make-kpkg --append_to_version -yourHost --initrd --revision=yourRevisionNumber kernel_image modules_image


If you cannot find the initrd image in the package, don't panic! It will be created later during the installation process and placed in /boot.


4.3. Installation


Just install your self compiled kernel:


  dpkg -i /usr/src/kernel-image-2.6.x-yourHost_yourRevisionNumber.deb


If you have followed my suggestion and have installed GRUB, you are ready to reboot and test your self-compiled kernel.
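

Before rebooting, you can quickly check that the initrd image has really been generated and that GRUB references it (the exact file names depend on your version string; this is just a sketch):


  ls /boot/initrd.img-2.6.x-yourHost

  grep initrd /boot/grub/menu.lst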


5. Tips and tricks


5.1. Making a backup of the RAID configuration


In case the system becomes unbootable, you need your RAID configuration in order to be sure the RAID can be started under any circumstances.


Therefore, once the system is working, make a backup of your RAID configuration. It is just a file called


  /etc/mdadm/mdadm.conf


If it is not present, type:


  cd /etc/mdadm

  echo 'DEVICE /dev/hd*[0-9] /dev/sd*[0-9]' > mdadm.conf

  mdadm --detail --scan >> mdadm.conf


to create a new one.


Make a backup of this file outside of your RAID system!!! It is very important if you have to fix a problem in your system using a rescue CD/DVD.
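

For example, assuming a USB stick shows up as /dev/sda1 (check dmesg; the device name is just an example), a quick way to store a copy on it:


  mount /dev/sda1 /mnt

  cp /etc/mdadm/mdadm.conf /mnt

  umount /mnt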


5.2. How to prevent the system from becoming unusable


This setup is safe enough against disk failures, but it is not safe against you! ;-)


It is very easy to break the system by trying something new, for example installing a kernel without the correct support (LVM, RAID, ...) and with the same name as the working kernel. In this case, the working kernel will be removed and the broken kernel installed.


To avoid being locked out of your system if it becomes unbootable, get yourself a rescue system.


5.2.1. Rescue CD/DVD


Get a rescue CD/DVD with kernel 2.6, LVM2 and RAID support.


You can download my Emi's rescue CD here: http://emidio.planamente.ch/rescuecd.


If you prefer, you can use the Debian DVD itself, but it takes much more time until you can access your damaged system.


5.2.2. Rescue system


An alternative to the rescue CD/DVD is to install a rescue system on another HD (maybe an external one).


Be sure to install it in a way that you can always boot it, for example on a small partition on the first harddisk.


Also be sure to have installed everything that is needed (kernel + tools).


5.3. Backup


Don't forget, data and system backups are always important, even if you use a RAID system. An erased file is erased on the whole array, and this is irreversible!!!


Take a look at http://www.planamente.ch/emidio/pages/linux_howto_backup.php.


6. Trouble shooting


6.1. Duplicate PV


When creating LVM on a RAID 1 device, it can happen that the PVs are not created on the RAID device but on the underlying physical partitions (e.g.: /dev/sda1 and /dev/sdb1 if they are part of /dev/md0).

This will show up as an error when doing an LVM scan (pvscan, vgscan, ...), like


  Found duplicate PV 9w3TIxKZ6lFRqWUmQm9tlV5nsdUkTi4i: using /dev/sda1 not /dev/sdb1


and will make it impossible to create a new volume group on /dev/md1.


In this case, you have to create a filter for LVM.

In the /etc/lvm/lvm.conf file, you have to add a line like this:


  filter = [ "r|/dev/cdrom|","r|hd[ab]|" ]


Of course, replace "hd" with "sd" for SCSI devices and "ab" with the correct letters.


This prevents those devices from being scanned or used for LVM.


Normally, you would reload LVM with


  /etc/init.d/lvm force-reload


but since your system is on LVM, this is not possible. Therefore you have to reboot.
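

After the reboot, you can verify that the filter is effective: an LVM scan should now report only the RAID device as physical volume:


  # only /dev/md1 should be listed now
  pvscan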


6.2. System hangs up


The following description is for using the Sarge DVD1 as rescue DVD. I haven't found the time to test it with the Etch DVD1, which should provide the same functionality.


Boot with Sarge DVD1.


6.2.1. Starting RAID


Once the rescue system has booted, the RAID devices are not started yet, because the md driver (RAID driver) is compiled as a module and not built in.

I have taken a look at the source code and it seems that RAID autodetection is explicitly disabled if the driver is compiled as a module. Don't ask me why.


Therefore, you have to know your exact RAID configuration: using the mdadm.conf file or just your head!


Load drivers:


  modprobe md-mod

  modprobe raid1


Assemble devices, without configuration file:


  mdadm --assemble /dev/md/0 /dev/scsi/host0/bus0/target0/lun0/part1 /dev/scsi/host0/bus0/target1/lun0/part1

  mdadm --assemble /dev/md/1 /dev/scsi/host0/bus0/target0/lun0/part2 /dev/scsi/host0/bus0/target1/lun0/part2


Assemble devices, with configuration file:


  mdadm --assemble --scan --config=myConfigFile


Of course, in both cases, replace the partition paths with yours.


If the arrays are not degraded, they should also be started automatically. If not, start them manually:


  mdadm --run /dev/md/0

  mdadm --run /dev/md/1


Verify they have been started:


  cat /proc/mdstat


6.2.2. Starting LVM


The logical volumes are easier to start.


Load device mapper driver:


  modprobe dm-mod


If you forget to load this driver, you will get a terrible error like:


  /proc/misc: No entry for device-mapper found

  Is device-mapper driver missing from kernel?

  Failure to communicate with kernel device-mapper driver.

  Incompatible libdevmapper 1.01.00-ioctl (2005-01-17)(compat) and kernel driver


Don't panic, just load dm-mod!!!


Search and activate all volume groups:


  vgscan

  vgchange -a y


Verify all LVs are active with:


  lvscan


6.2.3. Mounting broken system


Make a mount point


  mkdir /target


and mount / of the broken system:


  mount /dev/raid/etch /target


6.2.4. Changing root


This is one of the most interesting commands of GNU/Linux: chroot.


This command changes the root directory in the shell where it has been invoked. Therefore, call


  chroot /target


and you will be transferred into the broken system. If you type


  ls -l /


you will see the content of /target, but everything outside of it is not visible at all.


6.2.5. Mounting /proc


In order to restore your broken system, you have to mount the /proc directory, otherwise the tools in the chrooted system won't have access to kernel information.


Therefore, just type:


  mount /proc
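

If this fails, for example because the fstab of the broken system is itself damaged, you can mount it explicitly:


  mount -t proc proc /proc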


6.2.6. Mounting /boot


Since /boot is not on the same partition, it has to be mounted too. Therefore, call


  mount /boot


6.2.7. Solving the problem


Here I can't help you a lot. There could be millions of different problems and it is your responsibility to solve yours. I have warned you: root on LVM on RAID is not for everyone! ;-)


If you didn't follow my instructions and installed LILO instead of GRUB, don't forget to call


  lilo


when you are done.


6.2.8. Unmounting /proc


In theory, you are ready to reboot, but first you have to exit from the chrooted environment.


Since it is normally important to unmount /proc before exiting, we do it here (just for pedagogical reasons), even if in this case it would not be necessary, because you want to reboot and not work on the rescue system anymore.


Anyway, type:


  umount /proc


6.2.9. Exiting


Now, we are ready to leave the chrooted environment.


Just type:


  exit


6.2.10. Rebooting


Finally, we can reboot and hope the problem is solved.


7. Testing


7.1. Simulating disk failure


7.1.1. Second disk fails


I have made this test by physically removing the second harddisk from my system (PC was turned off!!!).


The system booted without any problem and, after a few minutes, the root user received a mail informing him that a degraded array was detected.


Taking a look at


  cat /proc/mdstat


I could verify that the second harddisk was really missing.
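

A more verbose view of the degraded arrays is also available with mdadm, for example:


  mdadm --detail /dev/md1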


After having reconnected the second harddisk to the system (PC was turned off!!!) and booted, I had to manually add it back by calling:


  mdadm --add /dev/md0 /dev/hdb1

  mdadm --add /dev/md1 /dev/hdb2


Once the synchronization was done (it took a lot of time), the original situation was restored.


In case of a real failure, reinstall the MBR after having replaced the broken disk:


  grub

  device (hd0) /dev/hdb

  root (hd0,0)

  setup (hd0)


7.1.2. First disk fails


I made this test by physically removing the first harddisk from my system (PC was turned off!!!).


The system could not boot, but not because the RAID was not working; the reason was the following.

I have both SCSI and IDE harddisks, and the system is installed on the SCSI ones.

The BIOS can map the order of all the harddisks. By default, it puts IDE before SCSI, but I had changed this order.

If the BIOS detects a change in the harddisk configuration, it reassigns the positions of the devices, putting IDE before SCSI again.

In this case, the system becomes unbootable. I just had to restore the correct order and the system booted correctly.


Also in this case, after a few minutes, the root user received a mail informing him that a degraded array was detected.


Taking a look at


  cat /proc/mdstat


I could verify that the second (and not the first) harddisk was reported as missing.


Note: In my case, the second harddisk was called /dev/hda and not /dev/hdb, because the first one was missing!!!


After having reconnected the first harddisk to the system (PC was turned off!!!) and booted, I had to manually add it back by calling:


  mdadm --add /dev/md0 /dev/hda1

  mdadm --add /dev/md1 /dev/hda2


Note: At this point, the second harddisk was called /dev/hdb again, because the first harddisk was present again!!!


Once the synchronization was done (it took a lot of time), the original situation was restored.


In case of a real failure, reinstall the MBR after having replaced the broken disk:


  grub

  device (hd0) /dev/hda

  root (hd0,0)

  setup (hd0)


7.2. Partition fails


One after the other, I set each of the partitions composing the arrays to faulty.

Each time I set one partition to faulty, I verified that root received a warning email and I rebooted the system to verify that it could still come up.


To set a partition to faulty:


  mdadm --fail /dev/md1 /dev/hda2


To add the faulty partition to the array:


  mdadm --add /dev/md1 /dev/hda2
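

Depending on the mdadm version, a partition marked as faulty may have to be removed from the array before it can be added again. If the --add command complains, try (with the same example device):


  mdadm --remove /dev/md1 /dev/hda2

  mdadm --add /dev/md1 /dev/hda2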


7.2.1. Data corruption


RAID has not been designed to protect against data corruption. If you try to simulate data corruption, you will get data corruption.


Therefore, don't try such a test unless you really want to destroy your data!


8. About


8.1. Author


Emidio Planamente <eplanamente@gmx.ch>


8.2. Feedback


Please let me know if you could successfully install your system on LVM2 on RAID1 following the description in this document.


Any other feedback is also welcome.


8.3. History


Version 3.3 / 2007-11-04

  Fixed "Creating physical partitions"


Version 3.2 / 2007-05-28

  Changed "Restriction two"

  Minor corrections


Version 3.1 / 2007-04-30

  Minor corrections


Version 3.0 / 2007-04-18

  Updated document for Debian Etch


Version 2.2 / 2006-02-06

  Changed "Creating RAID devices"

  Changed "Installing the boot loader on the first harddisk"

  Changed "Installing the boot loader on the second harddisk"


Version 2.1 / 2006-01-25

  Changed "Restriction one"

  Changed "Creating LVM devices"


Version 2.0 / 2006-01-22

  Changed "Rescue CD/DVD"


Version 1.9

  Changed "Creating RAID devices"

  Changed "Rebooting"


Version 1.8

  Fixed "Creating physical partitions"

  Fixed "Creating RAID devices"


Version 1.7

  Fixed "System hangs up"


Version 1.6

  Changed "Partition fails"


Version 1.5

  Added "Using self compiled kernel"


Version 1.4

  Changed "First disk fails"

  Changed "Second disk fails"


Version 1.3

  Changed "First disk fails"


Version 1.2

  Changed "Starting RAID" "without config file"


Version 1.1

  Changed "5.2 System hangs up"

  Changed "Installing the boot loader"

  Added "6. Testing"


Version 1

  First public release


