RAID array migration from CentOS 7 to Debian GNU/Linux 10 (upgrading drives)

After installing Debian GNU/Linux Testing (Bullseye) on my HP Proliant G8 Microserver on a new SSD drive, I had to set up an already existing RAID array on the fresh Linux installation and upgrade the disks in that array at the same time. These are the steps I followed:

Scan RAID arrays

# mdadm --detail --scan

Upgrade disks

# mdadm --assemble --scan
mdadm: /dev/md/0 has been started with 2 drives.

# mdadm -Db /dev/md0
ARRAY /dev/md0 metadata=1.2 name=g8.acme:0 UUID=20d65619:0ed9ba74:36f94bc0:6fddc56e

# mdadm -Q /dev/sdb1
/dev/sdb1: is not an md array
/dev/sdb1: device 0 in 2 device active raid1 /dev/md/0.  Use mdadm --examine for more detail.

# mdadm -Q /dev/sdc1
/dev/sdc1: is not an md array
/dev/sdc1: device 1 in 2 device active raid1 /dev/md/0.  Use mdadm --examine for more detail.

# mdadm -D /dev/md0
/dev/md0:
            Version : 1.2
      Creation Time : Mon Feb 29 23:57:11 2016
         Raid Level : raid1
         Array Size : 1953382464 (1862.89 GiB 2000.26 GB)
      Used Dev Size : 1953382464 (1862.89 GiB 2000.26 GB)
       Raid Devices : 2
      Total Devices : 2
        Persistence : Superblock is persistent
      Intent Bitmap : Internal
        Update Time : Sun Sep 22 10:05:08 2019
              State : clean
     Active Devices : 2
    Working Devices : 2
     Failed Devices : 0
      Spare Devices : 0
 Consistency Policy : bitmap
               Name : g8.acme:0
               UUID : 20d65619:0ed9ba74:36f94bc0:6fddc56e
             Events : 6868
     Number   Major   Minor   RaidDevice State
        0       8       17        0      active sync   /dev/sdb1
        1       8       33        1      active sync   /dev/sdc1

With the previous scan, our new system has detected the previously configured RAID array:

# lsblk
 ...
 └─sdb1             8:17   0   1.8T  0 part
   └─md0            9:0    0   1.8T  0 raid1
     └─raid1-lvm0 254:3    0   950G  0 lvm
 sdc                8:32   0   1.8T  0 disk
 └─sdc1             8:33   0   1.8T  0 part
   └─md0            9:0    0   1.8T  0 raid1
     └─raid1-lvm0 254:3    0   950G  0 lvm
 ...
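
To make sure the array is assembled automatically on every boot of the new installation, its definition can also be persisted; a minimal sketch, assuming Debian's default /etc/mdadm/mdadm.conf and that the file doesn't already contain an ARRAY line for this UUID:

# mdadm --detail --scan >> /etc/mdadm/mdadm.conf
# update-initramfs -u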

As we want to upgrade the hard drives in our RAID1 array, one of the drives needs to be marked as failed first, in this case /dev/sdb:

# mdadm /dev/md0 --fail /dev/sdb1
mdadm: set /dev/sdb1 faulty in /dev/md0

# mdadm -D /dev/md0
/dev/md0:
            Version : 1.2
      Creation Time : Mon Feb 29 23:57:11 2016
         Raid Level : raid1
         Array Size : 1953382464 (1862.89 GiB 2000.26 GB)
      Used Dev Size : 1953382464 (1862.89 GiB 2000.26 GB)
       Raid Devices : 2
      Total Devices : 2
        Persistence : Superblock is persistent
      Intent Bitmap : Internal
        Update Time : Sun Sep 22 11:26:20 2019
              State : clean, degraded
     Active Devices : 1
    Working Devices : 1
     Failed Devices : 1
      Spare Devices : 0
 Consistency Policy : bitmap
               Name : g8.acme:0
               UUID : 20d65619:0ed9ba74:36f94bc0:6fddc56e
             Events : 6870
     Number   Major   Minor   RaidDevice State
        -       0        0        0      removed
        1       8       33        1      active sync   /dev/sdc1
        0       8       17        -      faulty   /dev/sdb1
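
Optionally, before powering the server off, the faulty device can also be removed from the array so mdadm stops tracking it; a small sketch of that extra step:

# mdadm /dev/md0 --remove /dev/sdb1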

As we're going to change disks in our RAID1 array, I need to shut down my server (it has no SATA hot-plug feature), unplug the old drive (/dev/sdb) and plug in the new one. Besides swapping the disks for bigger ones, it's also necessary to change the bays they are plugged into: the first two bays in my server run at 6 Gbps while the other two only reach 3 Gbps, so the new drives will go into the faster bays instead of the ones currently used by /dev/sdb (bay 2) and /dev/sdc (bay 3):

$ lsblk
NAME             MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
...
sdb                8:16   0   3.7T  0 disk
sdc                8:32   0   1.8T  0 disk
 └─sdc1             8:33   0   1.8T  0 part
  └─md0            9:0    0   1.8T  0 raid1
    └─raid1-lvm0 254:3    0   950G  0 lvm

Partition new drive

# fdisk -l /dev/sdb
Disk /dev/sdb: 3.65 TiB, 4000787030016 bytes, 7814037168 sectors
Disk model: WDC WD40EFRX-68N
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

# sfdisk -d /dev/sdc | sfdisk /dev/sdb
Checking that no-one is using this disk right now ... OK
Disk /dev/sdb: 3.65 TiB, 4000787030016 bytes, 7814037168 sectors
Disk model: WDC WD40EFRX-68N
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
>>> Script header accepted.
>>> Script header accepted.
>>> Script header accepted.
>>> Script header accepted.
>>> The size of this disk is 3.7 TiB (4000787030016 bytes). DOS partition table format cannot be used on drives for volumes larger than 2199023255040 bytes for 512-byte sectors. Use GUID partition table format (GPT).
Created a new DOS disklabel with disk identifier 0x47f1c671.
/dev/sdb1: Created a new partition 1 of type 'Linux raid autodetect' and of size 1.8 TiB.
/dev/sdb2: Done.
New situation:
Disklabel type: dos
Disk identifier: 0x47f1c671
Device     Boot Start        End    Sectors  Size Id Type
/dev/sdb1        2048 3907029167 3907027120  1.8T fd Linux raid autodetect
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.
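
Note that sfdisk warns above that a DOS label cannot address the whole 4 TB, so the copied layout only uses 1.8 TiB of the new disk, which is exactly what we need to rebuild the existing array. If the extra capacity were wanted later, the new drives would need a GPT label instead; a rough sketch of that alternative (not part of the procedure followed here, and it assumes the gdisk package is installed):

# apt -y install gdisk
# sgdisk -n 1:2048:0 -t 1:fd00 /dev/sdb

Even then, the array itself would stay at its current size until both members are larger and it is grown with mdadm --grow.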

# mdadm --add /dev/md0 /dev/sdb1
mdadm: added /dev/sdb1

# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 sdb1[2] sdc1[1]
       1953382464 blocks super 1.2 [2/1] [_U]
       [>....................]  recovery =  0.2% (4322880/1953382464) finish=256.6min speed=126588K/sec
       bitmap: 0/15 pages [0KB], 65536KB chunk
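
The rebuild takes a few hours for a 2 TB member; instead of polling /proc/mdstat, a small sketch that simply blocks until the recovery is done:

# mdadm --wait /dev/md0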

Once the synchronization process has finished, the same procedure needs to be repeated with the other drive:

# mdadm /dev/md0 --fail /dev/sdc1

Shut down the server, plug the second new drive into the first bay (/dev/sda), boot again, and repeat the partitioning and adding steps:

# sfdisk -d /dev/sdb | sfdisk /dev/sda
# mdadm --add /dev/md0 /dev/sda1

Setup pro-active disk monitoring

# apt -y install smartmontools
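
Before editing the configuration, it's worth checking that SMART data is readable from the new drives; a quick sketch, using the same device type as in the smartd.conf entries below:

# smartctl -i -d sat /dev/sda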

Edit /etc/smartd.conf so both drives are monitored, with a short self-test every day at 02:00 and a long self-test every Saturday at 03:00, plus e-mail notifications:

/dev/sda -a -d sat -o on -S on -s (S/../.././02|L/../../6/03) -m xxx@acme.local -M exec /usr/share/smartmontools/smartd-runner
/dev/sdb -a -d sat -o on -S on -s (S/../.././02|L/../../6/03) -m xxx@acme.local -M exec /usr/share/smartmontools/smartd-runner
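
The configuration can be validated by running the daemon once in the foreground before restarting it; a small sketch:

# smartd -q onecheck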

And restart smartd so the new configuration takes effect:

# systemctl restart smartd
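
Failure events from the RAID array itself can be mailed as well through mdadm's own monitor; a minimal sketch, reusing the same (placeholder) address in /etc/mdadm/mdadm.conf and restarting the monitor afterwards:

MAILADDR xxx@acme.local

# systemctl restart mdmonitor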

“If you cannot do great things, do small things in a great way.”
— Napoleon Hill
