Introduction
I have been using a quite secure setup for the last couple of years with a 4 drive RAID 6 setup. This setup can tolerate two disk failures without any data loss. Recently, though, I have been getting close to the edge of the filesystem and could use some extra space, and since I have both a monthly backup to an external hard drive and a nightly offsite backup, I am actually not very afraid of data loss on a RAID 5 setup. So I have planned to change my 4 disk RAID 6 to a 4 disk RAID 5 without any spares.
A word of caution: Please do not do any of the actions below before a backup has been made.
Changing the raid level
Using mdadm it is very easy to change the raid level. The command below changes my previous RAID 6 setup with 4 disks to a RAID 5 with 3 active disks and a spare. The reason for using 3 disks and a spare is that mdadm recommends having a spare when downgrading, and since I am in no hurry this is fine with me. Additionally, the command saves some critical data to a backup file during the process to ease recovery if anything should go wrong or power should be lost. This backup file should be placed on a hard disk that is not part of the raid. The backup file is not big: in my case it was about 30 MB and was saved to my root partition, which resides on the system SSD.
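Before running the reshape it can be worth double-checking that the backup file location really lives on a device outside the array. A quick sanity check could look like this (just a sketch, assuming the array is /dev/md0 and the backup file goes to /root):
# Show which device /root lives on - it should not be a member of the raid
df -h /root
# List the current members of the array for comparison
mdadm --detail /dev/md0 | grep -E 'active sync|spare'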
mdadm --grow /dev/md0 --level=raid5 --raid-devices=3 --backup-file=/root/mdadm-backupfile
This process can take a long while (around 24 hours in my case), but during the reshape the raid is fully functional and everything happens in the background. To monitor the progress use either of the commands below (assuming that your raid device is md0). Please note that this is not output from the actual reshape, but output from my finished RAID 5 taken while writing this blog entry:
[root@kelvin ~][22:00]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sde1[0] sdb1[3] sdd1[4] sdc1[1]
      2930280960 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 2/8 pages [8KB], 65536KB chunk

unused devices: <none>
[root@kelvin ~][22:00]# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Tue Feb  1 23:24:46 2011
     Raid Level : raid5
     Array Size : 2930280960 (2794.53 GiB 3000.61 GB)
  Used Dev Size : 976760320 (931.51 GiB 1000.20 GB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Thu Aug 29 22:00:58 2013
          State : active
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : kelvin:0  (local to host kelvin)
           UUID : 3f335d48:4b43c1d5:6ee5c975:1428eb56
         Events : 1186983

    Number   Major   Minor   RaidDevice State
       0       8       65        0      active sync   /dev/sde1
       1       8       33        1      active sync   /dev/sdc1
       4       8       49        2      active sync   /dev/sdd1
       3       8       17        3      active sync   /dev/sdb1
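If you prefer a continuously refreshing view instead of running the commands by hand, something like the following works as well (purely a convenience, assuming the raid device is md0):
# Refresh the reshape progress every 60 seconds
watch -n 60 cat /proc/mdstat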
Growing the raid from 3 to 4 active disks
After this operation my raid had 3 active disks and a spare. Since I would like to expand my raid, I converted the spare into an active disk using the following command:
mdadm --grow -n 4 /dev/md0
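To confirm that the spare is being absorbed and that a reshape is actually running, a quick check along these lines can be used (again assuming the raid device is md0):
# Raid Devices should now read 4 and a reshape status line appears while the reshape runs
mdadm --detail /dev/md0 | grep -E 'Raid Devices|State|Reshape'
cat /proc/mdstat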
This process is also done while the raid is active and took about 12 hours. My server actually crashed during the process, probably due to an XBMC bug, but the reshape resumed without a hitch when I came home and rebooted it. 🙂
Resizing the filesystem to the disk
Finally I needed to expand the filesystem (ext4 in my case) to use all the new space. First I changed to init level 1 (single user mode) to make sure that all users are off the system:
init 1
Next I unmounted the raid filesystem:
umount /home/
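If the umount fails with a “target is busy” error, something is still using the mount point. A check like this can help track down the offender (assuming the raid is mounted on /home):
# Show processes that still have files open on the /home filesystem
fuser -vm /home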
Then I forced a check to ensure that the filesystem is healthy:
e2fsck -f /dev/md0
This took about 5 minutes. Finally I resized my ext4 filesystem to fill all the available space on md0:
resize2fs /dev/md0
This took about 10 minutes. I could have mounted the filesystem again and changed back to init level 2 (Debian multi-user mode), but I decided to restart and had everything up and running shortly after. I now have plenty of space left on my /home partition:
[root@kelvin ~][22:08]# df -hTl
Filesystem     Type  Size  Used Avail Use% Mounted on
/dev/sda2      ext4   50G   18G   30G  37% /
/dev/md0       ext4  2.7T  1.5T  1.3T  56% /home
/dev/sda1      ext2  291M   92M  184M  34% /boot
...
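For completeness, the non-reboot route mentioned above would look roughly like this (a sketch, assuming /dev/md0 has an entry in /etc/fstab for /home):
# Remount the resized filesystem and return to multi-user mode
mount /home
init 2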
As a final note, I noticed strange activity in my munin graphs for the disks after the reboot. All the drives in the raid were utilized at around 5% even though nothing was happening. Using iotop the culprit turned out to be a kernel thread called “ext4lazyinit” running in the background:
When lazy_itable_init extended option is passed to mke2fs, it considerably speed up filesystem creation because inode tables are not zeroed out, thus contains some old data. When this fs is mounted filesystem code should initialize (zero out) inode tables… For purpose of zeroing inode tables it introduces new kernel thread called ext4lazyinit, which is created on demand and destroyed, when it is no longer needed…
From: http://lists.openwall.net/linux-ext4/2010/09/08/11
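If you want to check for this thread yourself, iotop can be limited to processes that are actually doing I/O (run as root; just an example invocation):
# Only show processes and threads currently performing I/O
iotop -o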
Sources
http://www.linuxquestions.org/questions/linux-general-1/how-to-migrate-from-raid-6-to-raid-5-using-mdadm-and-lvm-939508/
http://www.linuxquestions.org/questions/linux-software-2/size-of-backup-file-for-changing-mdadm-raid-level-943477/
http://www.allmyit.com.au/mdadm-growing-raid5-array-ubuntu
https://raid.wiki.kernel.org/index.php/Growing
http://ubuntuforums.org/showthread.php?t=1494846
Hi, nice article. I was thinking of doing exactly this on my QNAP TS-420, which has 4x3TB drives – it took a week to go from RAID 5 to RAID 6 using the QNAP desktop utilities. Now I want to go the other way. Does the method above preserve data? I do of course have backups.
I have no idea how the QNAP TS-420 works or if it even uses Linux mdadm, but the method I described above on my Linux box preserves all data. As always in these cases, make sure you have a complete backup just in case.
I tried this on my QNAP TS-420, but I got an error message from mdadm that I could not decipher, so I took the easy route by resetting the QNAP and restoring my data to the new RAID 5 volume.
Hey, thanks for this article. I realise it is a few years old now, but from my research the commands/process still look relevant. Thus I attempted them, and so far I have an array which is reshaping but for the past 9 hours has been reporting 0% complete with a time to finish of… well, almost infinity. I have 4x 4TB disks so I was certainly expecting a wait, and I am not suggesting anything is wrong with my current progress, but I would appreciate any insights as to whether I should be seeing progress via /proc/mdstat after 9 hours?
FWIW the array is still fully functional, I’ve zero complaints and so am happy to wait for it to do its thang! 🙂
I am not sure why it is so slow, but there are several methods to speed up mdadm; see the comments in http://www.tjansson.dk/2011/02/building-a-powerful-cheap-and-silent-linux-nas-server/
The basic things to look into are (a sketch is shown below):
Increasing the stripe cache
Increasing the raid speed limits
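A sketch of the usual knobs, assuming the array is md0; the values are only examples and should be adapted to the hardware and the amount of RAM:
# Raise the md resync/reshape speed limits (KB/s per device)
sysctl -w dev.raid.speed_limit_min=50000
sysctl -w dev.raid.speed_limit_max=200000
# Enlarge the stripe cache for the array (counted in pages, costs RAM)
echo 8192 > /sys/block/md0/md/stripe_cache_size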
It has taken me weeks of trial and error, but I have finally converted my 4 x 6TB RAID 6 to RAID 5. I have gone from 12TB of space to 18TB of space! Thank you for this article a million times over!
Hello, I need help… I tried to convert my 6x4TB RAID 6 to RAID 5, but after the initial command, 3 days of reshape and many worries about the system becoming unresponsive, the process finished with… still a RAID 6 config. No change at all… I don’t understand what I did wrong. Here is the command:
mdadm --grow /dev/md126 --level=raid5 --raid-devices=5 --backup-file=/root/mdadm-backup-310823