CentOS 7 RAID Configuration


mdadm is short for "multiple devices admin"; it is the standard software RAID management tool on Linux.

On Linux, software RAID is currently implemented through MD (Multiple Devices) virtual block devices: several underlying block devices are combined into one new virtual device. Striping distributes data blocks evenly across the disks to improve the virtual device's read/write performance, while various redundancy algorithms protect user data from being lost entirely when a member device fails, and allow the lost data to be rebuilt onto a replacement device.

MD currently supports linear, multipath, raid0 (striping), raid1 (mirroring), raid4, raid5, raid6, raid10, and other redundancy levels and layouts; arrays can also be layered to build nested types such as RAID 1+0 and RAID 5+1.

Experiment

Normally, RAID volumes are created with the server's hardware RAID controller. If the server has no RAID controller, mdadm can be used to build a software RAID instead. In this experiment we add four disks to an ordinary virtual machine and build a RAID 5 array: three data disks plus one hot spare.
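Before building anything, it is worth estimating the usable capacity. RAID 5 spends one member's worth of space on parity, so with three active members the usable space is (3 - 1) × the per-disk size. A minimal sketch of the arithmetic, using the 20 GiB disks from this experiment:

```shell
disk_gib=20   # each virtual disk in this experiment is 20 GiB
active=3      # active RAID 5 members (the fourth disk is a hot spare)
usable_gib=$(( (active - 1) * disk_gib ))
echo "${usable_gib} GiB usable"   # 40 GiB usable
```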

Adding disks

Add four disks to the virtual machine, numbered vd{b,c,d,e}, then confirm in the system that all four disks are visible:

[root@haproxy-node-a ~]# fdisk -l

Disk /dev/vda: 53.7 GB, 53687091200 bytes, 104857600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000a9efb

Device Boot Start End Blocks Id System
/dev/vda1 * 2048 2099199 1048576 83 Linux
/dev/vda2 2099200 104857599 51379200 8e Linux LVM

Disk /dev/mapper/centos-root: 50.5 GB, 50457477120 bytes, 98549760 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/centos-swap: 2147 MB, 2147483648 bytes, 4194304 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/vdb: 21.5 GB, 21474836480 bytes, 41943040 sectors # newly added disk 1
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/vdc: 21.5 GB, 21474836480 bytes, 41943040 sectors # newly added disk 2
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/vdd: 21.5 GB, 21474836480 bytes, 41943040 sectors # newly added disk 3
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/vde: 21.5 GB, 21474836480 bytes, 41943040 sectors # newly added disk 4
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Check the existing file systems:

[root@haproxy-node-a ~]# df -hT
Filesystem Type Size Used Avail Use% Mounted on
devtmpfs devtmpfs 874M 0 874M 0% /dev
tmpfs tmpfs 886M 180K 885M 1% /dev/shm
tmpfs tmpfs 886M 17M 869M 2% /run
tmpfs tmpfs 886M 0 886M 0% /sys/fs/cgroup
/dev/mapper/centos-root xfs 47G 2.5G 45G 6% /
/dev/vda1 xfs 1014M 186M 829M 19% /boot
tmpfs tmpfs 382M 0 382M 0% /run/user/0

CentOS uses xfs as its file system here; the output also shows where each volume is mounted.

Checking the disks

Before using the newly added disks, first check whether they already carry RAID metadata:

mdadm -E /dev/vd{b,c,d,e}

Partitioning the disks

Partition each of the four new disks. The main steps are:

  • Create a new partition: n
  • List the known partition types: l
  • Change the partition type: t
  • Select "Linux raid auto": fd
  • Write the partition table: w
  • Print the partition table: p

Taking vdc as an example, follow the commands in the block below; the other three disks are handled identically.

[root@haproxy-node-a ~]# fdisk /dev/vdc 
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0xbd7a8d98.

Command (m for help): p

Disk /dev/vdc: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0xbd7a8d98

Device Boot Start End Blocks Id System

Command (m for help): n
Partition type:
p primary (0 primary, 0 extended, 4 free)
e extended
Select (default p): p
Partition number (1-4, default 1):
First sector (2048-41943039, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-41943039, default 41943039):
Using default value 41943039
Partition 1 of type Linux and of size 20 GiB is set

Command (m for help): l

0 Empty 24 NEC DOS 81 Minix / old Lin bf Solaris
1 FAT12 27 Hidden NTFS Win 82 Linux swap / So c1 DRDOS/sec (FAT-
2 XENIX root 39 Plan 9 83 Linux c4 DRDOS/sec (FAT-
3 XENIX usr 3c PartitionMagic 84 OS/2 hidden C: c6 DRDOS/sec (FAT-
4 FAT16 <32M 40 Venix 80286 85 Linux extended c7 Syrinx
5 Extended 41 PPC PReP Boot 86 NTFS volume set da Non-FS data
6 FAT16 42 SFS 87 NTFS volume set db CP/M / CTOS / .
7 HPFS/NTFS/exFAT 4d QNX4.x 88 Linux plaintext de Dell Utility
8 AIX 4e QNX4.x 2nd part 8e Linux LVM df BootIt
9 AIX bootable 4f QNX4.x 3rd part 93 Amoeba e1 DOS access
a OS/2 Boot Manag 50 OnTrack DM 94 Amoeba BBT e3 DOS R/O
b W95 FAT32 51 OnTrack DM6 Aux 9f BSD/OS e4 SpeedStor
c W95 FAT32 (LBA) 52 CP/M a0 IBM Thinkpad hi eb BeOS fs
e W95 FAT16 (LBA) 53 OnTrack DM6 Aux a5 FreeBSD ee GPT
f W95 Ext'd (LBA) 54 OnTrackDM6 a6 OpenBSD ef EFI (FAT-12/16/
10 OPUS 55 EZ-Drive a7 NeXTSTEP f0 Linux/PA-RISC b
11 Hidden FAT12 56 Golden Bow a8 Darwin UFS f1 SpeedStor
12 Compaq diagnost 5c Priam Edisk a9 NetBSD f4 SpeedStor
14 Hidden FAT16 <3 61 SpeedStor ab Darwin boot f2 DOS secondary
16 Hidden FAT16 63 GNU HURD or Sys af HFS / HFS+ fb VMware VMFS
17 Hidden HPFS/NTF 64 Novell Netware b7 BSDI fs fc VMware VMKCORE
18 AST SmartSleep 65 Novell Netware b8 BSDI swap fd Linux raid auto
1b Hidden W95 FAT3 70 DiskSecure Mult bb Boot Wizard hid fe LANstep
1c Hidden W95 FAT3 75 PC/IX be Solaris boot ff BBT
1e Hidden W95 FAT1 80 Old Minix

Command (m for help): t
Selected partition 1
Hex code (type L to list all codes): fd
Changed type of partition 'Linux' to 'Linux raid autodetect'

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
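The interactive session above can also be scripted. A rough sketch, assuming a blank disk so that fdisk's defaults select partition 1 spanning the whole device; the actual pipe into fdisk is left commented out because writing a partition table is destructive:

```shell
# One answer per fdisk prompt: new, primary, partition 1, default first
# and last sector, change type, fd (Linux raid autodetect), write.
answers='n
p
1


t
fd
w
'
# On a real, double-checked device you would run:
#   printf '%s' "$answers" | fdisk /dev/vdc
printf '%s' "$answers" | wc -l   # 8 answers, one per line
```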

Verifying the partitions are RAID type

Checking the now-partitioned disks again gives the following partition information:

[root@haproxy-node-a ~]# mdadm -E /dev/vd{b,c,d,e}
/dev/vdb:
MBR Magic : aa55
Partition[0] : 41940992 sectors at 2048 (type fd)
/dev/vdc:
MBR Magic : aa55
Partition[0] : 41940992 sectors at 2048 (type fd)
/dev/vdd:
MBR Magic : aa55
Partition[0] : 41940992 sectors at 2048 (type fd)
/dev/vde:
MBR Magic : aa55
Partition[0] : 41940992 sectors at 2048 (type fd)

Creating the RAID 5 array

With mdadm, -n sets the number of active disks, -x the number of spares, and -l the RAID level (written as either 5 or raid5); the member devices are listed last. Note that the command below builds the array on the whole disks rather than on the vdX1 partitions created above, which is why mdadm warns that the existing partition tables will be lost.

[root@haproxy-node-a ~]# mdadm -Cv /dev/md0 -n3 -l raid5 -x 1 /dev/vd{b,c,d,e}
mdadm: layout defaults to left-symmetric
mdadm: layout defaults to left-symmetric
mdadm: chunk size defaults to 512K
mdadm: partition table exists on /dev/vdb
mdadm: partition table exists on /dev/vdb but will be lost or
meaningless after creating array
mdadm: partition table exists on /dev/vdc
mdadm: partition table exists on /dev/vdc but will be lost or
meaningless after creating array
mdadm: partition table exists on /dev/vdd
mdadm: partition table exists on /dev/vdd but will be lost or
meaningless after creating array
mdadm: partition table exists on /dev/vde
mdadm: partition table exists on /dev/vde but will be lost or
meaningless after creating array
mdadm: size set to 20954112K
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.

Checking the build progress

While the RAID 5 array is being created, the rebuild progress can be followed:

[root@haproxy-node-a ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 vdd[4] vde[3](S) vdc[1] vdb[0]
41908224 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [UU_]
[===============>.....] recovery = 75.3% (15780992/20954112) finish=0.8min speed=105513K/sec

unused devices: <none>
[root@haproxy-node-a ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 vdd[4] vde[3](S) vdc[1] vdb[0]
41908224 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [UU_]
[=================>...] recovery = 88.4% (18524672/20954112) finish=0.3min speed=104563K/sec

unused devices: <none>
[root@haproxy-node-a ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 vdd[4] vde[3](S) vdc[1] vdb[0]
41908224 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]

unused devices: <none>

Inspecting the RAID 5 metadata

Running the same verification command as above now produces the following:

[root@haproxy-node-a ~]# mdadm -E /dev/vd{b,c,d,e}
/dev/vdb:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 7860ea0c:fb5693d3:176f627f:be6979c7
Name : haproxy-node-a:0 (local to host haproxy-node-a)
Creation Time : Fri Sep 16 14:04:54 2022
Raid Level : raid5
Raid Devices : 3

Avail Dev Size : 41908224 sectors (19.98 GiB 21.46 GB)
Array Size : 41908224 KiB (39.97 GiB 42.91 GB)
Data Offset : 34816 sectors
Super Offset : 8 sectors
Unused Space : before=34664 sectors, after=0 sectors
State : clean
Device UUID : 3b365a02:36b9e4fd:ea3f65fc:f5488542

Update Time : Fri Sep 16 14:08:20 2022
Bad Block Log : 512 entries available at offset 136 sectors
Checksum : 1086a417 - correct
Events : 18

Layout : left-symmetric
Chunk Size : 512K

Device Role : Active device 0 # vdb is an active member
Array State : AAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/vdc:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 7860ea0c:fb5693d3:176f627f:be6979c7
Name : haproxy-node-a:0 (local to host haproxy-node-a)
Creation Time : Fri Sep 16 14:04:54 2022
Raid Level : raid5
Raid Devices : 3

Avail Dev Size : 41908224 sectors (19.98 GiB 21.46 GB)
Array Size : 41908224 KiB (39.97 GiB 42.91 GB)
Data Offset : 34816 sectors
Super Offset : 8 sectors
Unused Space : before=34664 sectors, after=0 sectors
State : clean
Device UUID : ae142c9c:9c01f688:82e21f85:4da922be

Update Time : Fri Sep 16 14:08:20 2022
Bad Block Log : 512 entries available at offset 136 sectors
Checksum : 39c1cde1 - correct
Events : 18

Layout : left-symmetric
Chunk Size : 512K

Device Role : Active device 1 # vdc is an active member
Array State : AAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/vdd:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 7860ea0c:fb5693d3:176f627f:be6979c7
Name : haproxy-node-a:0 (local to host haproxy-node-a)
Creation Time : Fri Sep 16 14:04:54 2022
Raid Level : raid5
Raid Devices : 3

Avail Dev Size : 41908224 sectors (19.98 GiB 21.46 GB)
Array Size : 41908224 KiB (39.97 GiB 42.91 GB)
Data Offset : 34816 sectors
Super Offset : 8 sectors
Unused Space : before=34664 sectors, after=0 sectors
State : clean
Device UUID : 2ee270bf:4403d299:95a0f6f2:c4cd9ea5

Update Time : Fri Sep 16 14:08:20 2022
Bad Block Log : 512 entries available at offset 136 sectors
Checksum : c3357f96 - correct
Events : 18

Layout : left-symmetric
Chunk Size : 512K

Device Role : Active device 2 # vdd is an active member
Array State : AAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/vde:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 7860ea0c:fb5693d3:176f627f:be6979c7
Name : haproxy-node-a:0 (local to host haproxy-node-a)
Creation Time : Fri Sep 16 14:04:54 2022
Raid Level : raid5
Raid Devices : 3

Avail Dev Size : 41908224 sectors (19.98 GiB 21.46 GB)
Array Size : 41908224 KiB (39.97 GiB 42.91 GB)
Data Offset : 34816 sectors
Super Offset : 8 sectors
Unused Space : before=34664 sectors, after=0 sectors
State : clean
Device UUID : 452bfa49:99adb562:7602a209:20b2ff3f

Update Time : Fri Sep 16 14:08:20 2022
Bad Block Log : 512 entries available at offset 136 sectors
Checksum : c7aeb93c - correct
Events : 18

Layout : left-symmetric
Chunk Size : 512K

Device Role : spare # vde is the spare disk
Array State : AAA ('A' == active, '.' == missing, 'R' == replacing)

That output is rather long; the array status can also be viewed in summary form. Newly created arrays are numbered starting from 0.

[root@haproxy-node-a ~]# mdadm -D /dev/md0 
/dev/md0:
Version : 1.2
Creation Time : Fri Sep 16 14:04:54 2022
Raid Level : raid5
Array Size : 41908224 (39.97 GiB 42.91 GB)
Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
Raid Devices : 3
Total Devices : 4
Persistence : Superblock is persistent

Update Time : Fri Sep 16 14:08:20 2022
State : clean
Active Devices : 3
Working Devices : 4
Failed Devices : 0
Spare Devices : 1

Layout : left-symmetric
Chunk Size : 512K

Consistency Policy : resync

Name : haproxy-node-a:0 (local to host haproxy-node-a)
UUID : 7860ea0c:fb5693d3:176f627f:be6979c7
Events : 18

Number Major Minor RaidDevice State
0 252 16 0 active sync /dev/vdb
1 252 32 1 active sync /dev/vdc
4 252 48 2 active sync /dev/vdd

3 252 64 - spare /dev/vde

As intended, vdb, vdc, and vdd are active and vde is the hot spare.

Formatting and mounting the RAID

The RAID 5 array is now ready; md0 can simply be treated as an ordinary newly added disk. For this experiment, format md0 as xfs and mount it on /home.

[root@haproxy-node-a ~]# mkfs.xfs -f /dev/md0       
meta-data=/dev/md0 isize=512 agcount=16, agsize=654720 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=0, sparse=0
data = bsize=4096 blocks=10475520, imaxpct=25
= sunit=128 swidth=256 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=1
log =internal log bsize=4096 blocks=5120, version=2
= sectsz=512 sunit=8 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
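The sunit/swidth values mkfs.xfs picked come straight from the array geometry: sunit is the 512 KiB chunk expressed in 4 KiB file-system blocks, and swidth multiplies that by the number of data-bearing disks per stripe (two, since one of the three active RAID 5 members holds parity). A quick sanity check of that arithmetic:

```shell
chunk_kib=512    # mdadm chunk size
bsize=4096       # xfs block size in bytes
data_disks=2     # 3 active RAID 5 members minus 1 parity per stripe
sunit=$(( chunk_kib * 1024 / bsize ))
swidth=$(( sunit * data_disks ))
echo "sunit=${sunit} swidth=${swidth}"   # sunit=128 swidth=256, as mkfs reported
```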



[root@haproxy-node-a ~]# df -hT
Filesystem Type Size Used Avail Use% Mounted on
devtmpfs devtmpfs 874M 0 874M 0% /dev
tmpfs tmpfs 886M 200K 885M 1% /dev/shm
tmpfs tmpfs 886M 17M 869M 2% /run
tmpfs tmpfs 886M 0 886M 0% /sys/fs/cgroup
/dev/mapper/centos-root xfs 47G 2.5G 45G 6% /
/dev/vda1 xfs 1014M 186M 829M 19% /boot
tmpfs tmpfs 382M 0 382M 0% /run/user/0
[root@haproxy-node-a ~]# mount /dev/md0 /home/
[root@haproxy-node-a ~]# df -hT
Filesystem Type Size Used Avail Use% Mounted on
devtmpfs devtmpfs 874M 0 874M 0% /dev
tmpfs tmpfs 886M 200K 885M 1% /dev/shm
tmpfs tmpfs 886M 17M 869M 2% /run
tmpfs tmpfs 886M 0 886M 0% /sys/fs/cgroup
/dev/mapper/centos-root xfs 47G 2.5G 45G 6% /
/dev/vda1 xfs 1014M 186M 829M 19% /boot
tmpfs tmpfs 382M 0 382M 0% /run/user/0
/dev/md0 xfs 40G 33M 40G 1% /home # the newly mounted md0

Adding automatic mounting

Check the file system type:

[root@haproxy-node-a ~]# blkid /dev/md0
/dev/md0: UUID="b7e24434-a5a5-4b7f-82f7-8f5bcb7669a2" TYPE="xfs"

Add the automatic mount entry:

vim /etc/fstab
#
# /etc/fstab
# Created by anaconda on Fri Aug 19 11:44:47 2022
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos-root / xfs defaults 0 0
UUID=e79d9b36-6226-4269-a1d1-cff60d68c2e4 /boot xfs defaults 0 0
/dev/mapper/centos-swap swap swap defaults 0 0
/dev/md0 /home xfs defaults 0 0 # auto-mount entry added here
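Since md device names are not guaranteed to be stable across reboots, mounting by UUID (the value blkid printed above) is the more robust form; an equivalent entry would look like:

```
UUID=b7e24434-a5a5-4b7f-82f7-8f5bcb7669a2 /home xfs defaults 0 0
```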

After mounting, check that the mount information is correct and that the fstab syntax is valid:

[root@haproxy-node-a ~]# mount -av
/ : ignored
/boot : already mounted
swap : ignored
/home : already mounted

Saving the md configuration

Write md0's configuration into mdadm's configuration file:

[root@haproxy-node-a ~]# mdadm -D --scan > /etc/mdadm.conf
[root@haproxy-node-a ~]# cat /etc/mdadm.conf
ARRAY /dev/md0 metadata=1.2 spares=1 name=haproxy-node-a:0 UUID=7860ea0c:fb5693d3:176f627f:be6979c7
[root@haproxy-node-a ~]#

Testing

Simulate disk failures and add or remove disks to observe how the RAID 5 array changes.

Expanding the RAID

Add another disk to the virtual machine. The new disk shows up as follows:

[root@haproxy-node-a home]# fdisk -l

Disk /dev/vda: 53.7 GB, 53687091200 bytes, 104857600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000a9efb

Device Boot Start End Blocks Id System
/dev/vda1 * 2048 2099199 1048576 83 Linux
/dev/vda2 2099200 104857599 51379200 8e Linux LVM

Disk /dev/mapper/centos-root: 50.5 GB, 50457477120 bytes, 98549760 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/centos-swap: 2147 MB, 2147483648 bytes, 4194304 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/vdb: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0xaa22accc

Device Boot Start End Blocks Id System
/dev/vdb1 2048 41943039 20970496 fd Linux raid autodetect

Disk /dev/vdc: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0xbd7a8d98

Device Boot Start End Blocks Id System
/dev/vdc1 2048 41943039 20970496 fd Linux raid autodetect

Disk /dev/vdd: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x01dada7d

Device Boot Start End Blocks Id System
/dev/vdd1 2048 41943039 20970496 fd Linux raid autodetect

Disk /dev/vde: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x9b826ee3

Device Boot Start End Blocks Id System
/dev/vde1 2048 41943039 20970496 fd Linux raid autodetect

Disk /dev/md0: 42.9 GB, 42914021376 bytes, 83816448 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 524288 bytes / 1048576 bytes


Disk /dev/vdf: 21.5 GB, 21474836480 bytes, 41943040 sectors # the newly added disk
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Initialize vdf using the same partitioning steps shown above:

fdisk /dev/vdf
.........
Disk /dev/vdf: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0xea86a973

Device Boot Start End Blocks Id System
/dev/vdf1 2048 41943039 20970496 fd Linux raid autodetect

Add vdf to md0:

[root@haproxy-node-a ~]# mdadm --manage /dev/md0 --add /dev/vdf
mdadm: added /dev/vdf

Check the state of the newly added disk in md0:

[root@haproxy-node-a ~]# mdadm -D /dev/md0 
/dev/md0:
Version : 1.2
Creation Time : Fri Sep 16 14:04:54 2022
Raid Level : raid5
Array Size : 41908224 (39.97 GiB 42.91 GB)
Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
Raid Devices : 3
Total Devices : 5
Persistence : Superblock is persistent

Update Time : Fri Sep 16 17:42:17 2022
State : clean
Active Devices : 3
Working Devices : 5
Failed Devices : 0
Spare Devices : 2

Layout : left-symmetric
Chunk Size : 512K

Consistency Policy : resync

Name : haproxy-node-a:0 (local to host haproxy-node-a)
UUID : 7860ea0c:fb5693d3:176f627f:be6979c7
Events : 19

Number Major Minor RaidDevice State
0 252 16 0 active sync /dev/vdb
1 252 32 1 active sync /dev/vdc
4 252 48 2 active sync /dev/vdd

3 252 64 - spare /dev/vde
5 252 80 - spare /dev/vdf # the newly added vdf joins as a spare

Growing the array

# Note: --raid-devices=4 would have been the correct value here. The array had 3 data disks plus one hot spare; setting 5 turns every disk, including the spare, into a data disk.
[root@haproxy-node-a ~]# mdadm --grow /dev/md0 --raid-devices=5

[root@haproxy-node-a ~]# mdadm -D /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Fri Sep 16 14:04:54 2022
Raid Level : raid5
Array Size : 41908224 (39.97 GiB 42.91 GB)
Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
Raid Devices : 5
Total Devices : 5
Persistence : Superblock is persistent

Update Time : Fri Sep 16 20:18:59 2022
State : clean, reshaping
Active Devices : 5
Working Devices : 5
Failed Devices : 0
Spare Devices : 0

Layout : left-symmetric
Chunk Size : 512K

Consistency Policy : resync

Reshape Status : 3% complete
Delta Devices : 2, (3->5)

Name : haproxy-node-a:0 (local to host haproxy-node-a)
UUID : 7860ea0c:fb5693d3:176f627f:be6979c7
Events : 35

Number Major Minor RaidDevice State
0 252 16 0 active sync /dev/vdb
1 252 32 1 active sync /dev/vdc
4 252 48 2 active sync /dev/vdd
5 252 80 3 active sync /dev/vdf # the newly added vdf is now a data disk
3 252 64 4 active sync /dev/vde # the former hot spare has also become a data disk

# check the reshape progress again
[root@haproxy-node-a ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 vdf[5] vdd[4] vde[3] vdc[1] vdb[0]
41908224 blocks super 1.2 level 5, 512k chunk, algorithm 2 [5/5] [UUUUU]
[=========>...........] reshape = 47.1% (9870848/20954112) finish=15.5min speed=11884K/sec
# all five U's now; wait for the reshape to finish
[root@haproxy-node-a ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 vdf[5] vdd[4] vde[3] vdc[1] vdb[0]
41908224 blocks super 1.2 level 5, 512k chunk, algorithm 2 [5/5] [UUUUU]
[==================>..] reshape = 90.0% (18875904/20954112) finish=1.0min speed=32304K/sec

unused devices: <none>
[root@haproxy-node-a ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 vdf[5] vdd[4] vde[3] vdc[1] vdb[0]
41908224 blocks super 1.2 level 5, 512k chunk, algorithm 2 [5/5] [UUUUU]
[==================>..] reshape = 92.3% (19344384/20954112) finish=0.8min speed=32288K/sec

unused devices: <none>
[root@haproxy-node-a ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 vdf[5] vdd[4] vde[3] vdc[1] vdb[0]
83816448 blocks super 1.2 level 5, 512k chunk, algorithm 2 [5/5] [UUUUU] # reshape complete

unused devices: <none>
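The final block count can be sanity-checked against the RAID 5 capacity formula, usable = (members - 1) × per-member size, using the Used Dev Size reported by mdadm -D:

```shell
per_dev_kib=20954112   # Used Dev Size from `mdadm -D`, in KiB
members=5              # active devices after the reshape
usable_kib=$(( (members - 1) * per_dev_kib ))
echo "$usable_kib"     # 83816448, the block count shown in /proc/mdstat
```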

Now look at md0's capacity again, comparing it with the capacity before the expansion:

Disk /dev/md0: 42.9 GB, 42914021376 bytes, 83816448 sectors    # capacity before the expansion
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 524288 bytes / 1048576 bytes

And this is md0's capacity after the expansion:

[root@haproxy-node-a ~]# fdisk -l
.........
Disk /dev/md0: 85.8 GB, 85828042752 bytes, 167632896 sectors # capacity after the expansion: the previous step absorbed both spares, vde and vdf
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 524288 bytes / 1048576 bytes

Resizing the file system

The new disk is now part of md0, but the file system has not changed; it still needs to be resized. As before, compare the state before and after: before the resize, the mounted size is unchanged.

[root@haproxy-node-a ~]# df -hT
Filesystem Type Size Used Avail Use% Mounted on
devtmpfs devtmpfs 874M 0 874M 0% /dev
tmpfs tmpfs 886M 200K 885M 1% /dev/shm
tmpfs tmpfs 886M 17M 869M 2% /run
tmpfs tmpfs 886M 0 886M 0% /sys/fs/cgroup
/dev/mapper/centos-root xfs 47G 2.5G 45G 6% /
/dev/vda1 xfs 1014M 186M 829M 19% /boot
tmpfs tmpfs 382M 0 382M 0% /run/user/0
/dev/md0 xfs 40G 33M 40G 1% /home # before the resize, md0 is still mounted at 40G

Next, grow the xfs file system with the following command:

[root@haproxy-node-a ~]# xfs_growfs -d /home/
meta-data=/dev/md0 isize=512 agcount=16, agsize=654720 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=0 spinodes=0
data = bsize=4096 blocks=10475520, imaxpct=25
= sunit=128 swidth=256 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=1
log =internal bsize=4096 blocks=5120, version=2
= sectsz=512 sunit=8 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
data blocks changed from 10475520 to 20954112 # the data block count has been updated
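That new block count lines up with the grown array: md0 is now 83816448 KiB, and with 4 KiB xfs blocks that works out to exactly the figure xfs_growfs reported:

```shell
array_kib=83816448   # md0 after the reshape: 167632896 sectors x 512 B
xfs_block_kib=4      # bsize=4096
echo $(( array_kib / xfs_block_kib ))   # 20954112 data blocks
```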

Checking the result after the resize:

[root@haproxy-node-a ~]# df -hT
Filesystem Type Size Used Avail Use% Mounted on
devtmpfs devtmpfs 874M 0 874M 0% /dev
tmpfs tmpfs 886M 320K 885M 1% /dev/shm
tmpfs tmpfs 886M 17M 869M 2% /run
tmpfs tmpfs 886M 0 886M 0% /sys/fs/cgroup
/dev/mapper/centos-root xfs 47G 2.5G 45G 6% /
/dev/vda1 xfs 1014M 186M 829M 19% /boot
/dev/md0 xfs 80G 34M 80G 1% /home # now grown to 80G, the full capacity of the expanded md0
tmpfs tmpfs 382M 0 382M 0% /run/user/0

By this point we have created a RAID array, added a disk and reshaped the array, and then grown the file system into the new capacity.

This experiment used xfs; how to resize other file systems, along with further RAID tests and verification, will have to wait for the next post, as this one is already long enough.


CentOS 7 RAID Configuration
https://ywmy.xyz/2022/09/16/CentOS-7-RAID配置/
Author: ian
Published: September 16, 2022