I have a 5-disk RAID 5 mdadm array that is in trouble.
One of the disks has failed (/dev/sdf) and the array is degraded.
Nothing new there, but my problem is that I mistyped the command to remove the failed disk from the array and removed another one (/dev/sdd) instead... So the array was left with 3 disks. I was able to add the disk back, and the array is now degraded with 4 disks and in (auto-read-only).
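Roughly what I typed, from memory (so treat the exact flags as an approximation):

```shell
# Intended: mark and remove the genuinely failed disk
# mdadm /dev/md0 --fail /dev/sdf --remove /dev/sdf

# What I actually typed (wrong letter -- this kicked a healthy member out):
mdadm /dev/md0 --fail /dev/sdd --remove /dev/sdd

# Adding the healthy disk back into the degraded array afterwards:
mdadm /dev/md0 --add /dev/sdd
```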
Partage:/home/ld# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active (auto-read-only) raid5 sdd[3] sdb[0] sde[4] sdc[1]
      7814057984 blocks level 5, 64k chunk, algorithm 2 [5/4] [UU_UU]
unused devices: <none>
Partage:/home/ld# mdadm -D /dev/md0
/dev/md0:
        Version : 0.90
  Creation Time : Wed Nov  3 00:13:24 2010
     Raid Level : raid5
     Array Size : 7814057984 (7452.07 GiB 8001.60 GB)
  Used Dev Size : 1953514496 (1863.02 GiB 2000.40 GB)
   Raid Devices : 5
  Total Devices : 4
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Sat Mar 15 11:41:50 2014
          State : clean, degraded
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

           UUID : 9af5cf0b:d99dd36c:6b830b00:0306cea5 (local to host Partage)
         Events : 0.1438325

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc
       2       0        0        2      removed
       3       8       48        3      active sync   /dev/sdd
       4       8       64        4      active sync   /dev/sde
With the (real) failed disk /dev/sdf missing.
The problem is that after a reboot the disk is kicked out of the array every time.
I noticed that the superblock information on the disks is not the same. Here is the examined superblock from /dev/sdb:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 9af5cf0b:d99dd36c:6b830b00:0306cea5 (local to host Partage)
  Creation Time : Wed Nov  3 00:13:24 2010
     Raid Level : raid5
  Used Dev Size : 1953514496 (1863.02 GiB 2000.40 GB)
     Array Size : 7814057984 (7452.07 GiB 8001.60 GB)
   Raid Devices : 5
  Total Devices : 3
Preferred Minor : 0

    Update Time : Sat Mar 15 11:41:50 2014
          State : clean
 Active Devices : 3
Working Devices : 3
 Failed Devices : 2
  Spare Devices : 0
       Checksum : a0dabed7 - correct
         Events : 1438325

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     0       8       16        0      active sync   /dev/sdb

   0     0       8       16        0      active sync   /dev/sdb
   1     1       8       32        1      active sync   /dev/sdc
   2     2       0        0        2      faulty removed
   3     3       0        0        3      faulty removed
   4     4       8       64        4      active sync   /dev/sde
The 3 other disks (/dev/sd[bce]) don't see /dev/sdd as active in the array. What can I do to sync this information across all 4 disks (/dev/sd[bcde]) and get the array running before I buy a new disk?
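I was thinking of something like the following (force-assembling from the four remaining members so mdadm rewrites consistent superblocks on all of them), but I'm not sure it's safe on a degraded array, so I'd appreciate confirmation before I try it:

```shell
# Stop the array, then force-assemble from the four remaining members;
# --force lets mdadm accept /dev/sdd despite its stale superblock and
# should write consistent metadata back to every member.
mdadm --stop /dev/md0
mdadm --assemble --force /dev/md0 /dev/sdb /dev/sdc /dev/sdd /dev/sde

# If the array comes up (auto-read-only) again, switch it to read-write:
mdadm --readwrite /dev/md0
```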
Thanks for your help! First post here, but long time lurker!