Linux Software RAID Management
How to replace a hard disk in a software RAID
# cat /proc/mdstat
Instead of the string [UUUU] you will see something like [UU_U] if you have a degraded array.
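This check can also be scripted. The sketch below (the helper name is ours, not a standard tool) prints any mdstat status line whose [UUUU]-style field contains a "_", i.e. a missing or failed member:

```shell
# Print mdstat lines where the member-status field (e.g. [UU_U]) shows a
# "_" for a missing/failed device; prints nothing for a healthy array.
check_degraded() {
  grep -E '\[[U_]*_[U_]*\]'
}

# Typical use:
#   check_degraded < /proc/mdstat
```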
- Mark /dev/sdd1 as failed:
# mdadm --manage /dev/md0 --fail /dev/sdd1
- Then remove /dev/sdd1 from /dev/md0:
# mdadm --manage /dev/md0 --remove /dev/sdd1
If the machine has rebooted since the RAID degraded, you may not need either of the above steps.
- Then power down the system:
# halt
and replace the old drive with one at least as big as the smallest drive in the current array, then boot up the system.
# fdisk -l
should show the new drive. In the output below, /dev/sdd is the new one:
Disk /dev/sdc: 160.0 GB, 160041885696 bytes
255 heads, 63 sectors/track, 19457 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1       19452   156248158+  fd  Linux raid autodetect

Disk /dev/sdd: 160.0 GB, 160041885696 bytes
255 heads, 63 sectors/track, 19457 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00000000

Disk /dev/sdd doesn't contain a valid partition table
- Create the same partitioning as on one of the current raid members.
# sfdisk -d /dev/sdc | sfdisk /dev/sdd
then
# fdisk -l
to check that they now have the same partitioning. Mine looked like:
Disk /dev/sdc: 160.0 GB, 160041885696 bytes
255 heads, 63 sectors/track, 19457 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1       19452   156248158+  fd  Linux raid autodetect

Disk /dev/sdd: 160.0 GB, 160041885696 bytes
255 heads, 63 sectors/track, 19457 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1               1       19452   156248158+  fd  Linux raid autodetect
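Rather than eyeballing the two listings, the sfdisk dumps can be compared mechanically. A sketch (the helper name is ours): strip the per-partition device names, which always differ, and diff what remains.

```shell
# Read an sfdisk dump on stdin and print only the partition lines, with
# the /dev/sdXN device names removed, so dumps of two disks can be diffed.
normalize_dump() {
  sed -n 's|^/dev/[a-z0-9]*[0-9] *: *||p'
}

# Illustrative use: no output from diff means the layouts match.
#   diff <(sfdisk -d /dev/sdc | normalize_dump) <(sfdisk -d /dev/sdd | normalize_dump)
```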
Next, add the new drive to the array:
# mdadm --manage /dev/md0 --add /dev/sdd1
mdadm: added /dev/sdd1
# cat /proc/mdstat
to watch the rebuild and see when it has finished.
During the synchronization the output will look like this:
root@mediabox:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid5 sdd1[4] sdb1[2] sdc1[1] sde1[3]
      468744192 blocks level 5, 64k chunk, algorithm 2 [4/3] [_UUU]
      [>....................]  recovery =  0.2% (413184/156248064) finish=69.1min speed=37562K/sec

unused devices: <none>
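If you want a script to block until the resync is done, you can poll for the recovery line. A sketch (the function name and the 60-second interval are our choices):

```shell
# Poll an mdstat-style file until it no longer reports a recovery or
# resync in progress. Pass /proc/mdstat in real use.
wait_for_sync() {
  while grep -qE 'recovery|resync' "$1"; do
    sleep 60
  done
}

# Real use:
#   wait_for_sync /proc/mdstat
```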
Other useful commands
Finding out serial numbers to identify the failed one:
udevadm info --query=all --name=/dev/sda | grep SERIAL
or
lsblk --nodeps -o name,serial
Create the mdadm config once you have set up the arrays:
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
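The appended line looks roughly like the following (the device name, hostname, and UUID here are illustrative placeholders, not values from this system):

```
ARRAY /dev/md0 metadata=1.2 name=mediabox:0 UUID=aaaaaaaa:bbbbbbbb:cccccccc:dddddddd
```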
You may get a 'no raid devices specified' error when using:
mdadm --create /dev/md0 --level=1 /dev/sdc1 /dev/sdd1
This isn't very helpful, because it looks like you have specified the RAID devices. The solution is to use:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1
Useful links
http://wiki.archlinux.org/index.php/Convert_a_single_drive_system_to_RAID