写点什么

A High-Performance, Highly Available Lustre Solution: xiRAID 4.1 in a Dual-Node Shared NVMe Environment

Author: Sergey Platonov
  • 2025-05-15


This comprehensive guide demonstrates how to build a robust, high-performance Lustre file system on an SBB platform using xiRAID Classic 4.1 and Pacemaker. We walk through the entire process step by step, from system layout and hardware configuration to software installation, cluster setup, and performance tuning. By leveraging dual-port NVMe drives and mature clustering technology, we end up with a highly available storage solution that delivers impressive read and write speeds. Whether you are building a new Lustre installation or extending an existing one, this article provides a detailed roadmap for creating a state-of-the-art, fault-tolerant parallel file system suited to demanding HPC environments.


System Layout

xiRAID Classic 4.1 supports integrating its RAIDs into a Pacemaker-based high-availability cluster (HA cluster). This allows users who need clustered services to benefit from the outstanding performance and reliability of xiRAID Classic.


This article shows how to use an NVMe SBB system (a single enclosure containing two x86-64 servers and a set of shared NVMe drives) to build a basic high-availability cluster for the Lustre parallel file system, with the data stored on clustered RAIDs built with xiRAID Classic 4.1.


It should also help you understand how to deploy xiRAID Classic for real-world tasks.

Lustre Server SBB Platform

We will use a Viking VDS2249R as the SBB platform. Its configuration is shown in the table below:




| Item | Node 0 (node26) | Node 1 (node27) |
|---------------------|-------------------------------------------|-------------------------------------------|
| Hostname | node26 | node27 |
| CPU | AMD EPYC 7713P 64-core | AMD EPYC 7713P 64-core |
| Memory | 256GB | 256GB |
| System drives | 2 x Samsung SSD 970 EVO Plus 250GB (mirrored) | 2 x Samsung SSD 970 EVO Plus 250GB (mirrored) |
| Operating system | Rocky Linux 8.9 | Rocky Linux 8.9 |
| IPMI address | 192.168.64.106 | 192.168.67.23 |
| IPMI username | admin | admin |
| IPMI password | admin | admin |
| Management NIC | enp194s0f0: 192.168.65.26/24 | enp194s0f0: 192.168.65.27 |
| Cluster interconnect NIC | enp194s0f1: 10.10.10.1 | enp194s0f1: 10.10.10.2 |
| InfiniBand LNET HDR | ib0: 100.100.100.26 | ib0: 100.100.100.27 |
| | ib3: 100.100.100.126 | ib3: 100.100.100.127 |
| NVMe storage | 24 x Kioxia CM6-R 3.84TB KCM61RUL3T84 | 24 x Kioxia CM6-R 3.84TB KCM61RUL3T84 |


System Configuration and Tuning

Before installing and configuring the software, we need to prepare the platform to deliver the best possible performance.

Performance Tuning

tuned-adm profile accelerator-performance
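To confirm that the profile took effect (this check is not part of the original steps, but tuned-adm provides it), you can query the active profile on each node:

# The output should report accelerator-performance as the current active profile
tuned-adm active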

Network Configuration

Make sure both hosts can resolve all the IP addresses involved. In our case we resolve them through the /etc/hosts file, so /etc/hosts on both nodes looks like this:

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.65.26 node26
192.168.65.27 node27
10.10.10.1 node26-ic
10.10.10.2 node27-ic
192.168.64.50 node26-ipmi
192.168.64.76 node27-ipmi
100.100.100.26 node26-ib
100.100.100.27 node27-ib


Policy-Based Routing Setup

We use a multirail configuration on the servers: the two IB interfaces on each server are configured in the same IPv4 network. For the Linux IP stack to work correctly in this configuration, we need to set up policy-based routing for these interfaces on both servers.

node26 configuration:

node26# nmcli connection modify ib0 ipv4.route-metric 100
node26# nmcli connection modify ib3 ipv4.route-metric 101
node26# nmcli connection modify ib0 ipv4.routes "100.100.100.0/24 src=100.100.100.26 table=100"
node26# nmcli connection modify ib0 ipv4.routing-rules "priority 101 from 100.100.100.26 table 100"
node26# nmcli connection modify ib3 ipv4.routes "100.100.100.0/24 src=100.100.100.126 table=200"
node26# nmcli connection modify ib3 ipv4.routing-rules "priority 102 from 100.100.100.126 table 200"
node26# nmcli connection up ib0
Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/6)
node26# nmcli connection up ib3
Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/7)

node27 configuration:

node27# nmcli connection modify ib0 ipv4.route-metric 100
node27# nmcli connection modify ib3 ipv4.route-metric 101
node27# nmcli connection modify ib0 ipv4.routes "100.100.100.0/24 src=100.100.100.27 table=100"
node27# nmcli connection modify ib0 ipv4.routing-rules "priority 101 from 100.100.100.27 table 100"
node27# nmcli connection modify ib3 ipv4.routes "100.100.100.0/24 src=100.100.100.127 table=200"
node27# nmcli connection modify ib3 ipv4.routing-rules "priority 102 from 100.100.100.127 table 200"
node27# nmcli connection up ib0
Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/6)
node27# nmcli connection up ib3
Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/7)


NVMe Drive Setup

The SBB system contains 24 Kioxia CM6-R 3.84TB KCM61RUL3T84 drives. These are PCIe 4.0, dual-port, read-intensive drives with an endurance of 1 DWPD. According to the vendor specification, a single drive can theoretically reach 6.9 GB/s sequential read and 4.2 GB/s sequential write.


In our setup we plan a simple Lustre installation with sufficient performance. However, because each NVMe drive in the SBB system is connected to each server with only two PCIe lanes, per-drive performance is limited. To overcome this limitation we will create two namespaces on every NVMe drive and use those namespaces for the Lustre OST RAIDs. We build separate RAIDs from the first and from the second namespaces, and configure the cluster software so that the RAIDs built from the first namespaces (and their Lustre servers) run on Lustre node #0, while the RAIDs built from the second namespaces run on node #1. Since Lustre itself distributes the workload across all OSTs, this lets us use all four PCIe lanes of every NVMe drive for OST data.
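Although not part of the original procedure, the two-lanes-per-server topology can be confirmed from the PCIe link attributes the kernel exposes for each NVMe controller (standard sysfs paths; adjust the device glob to your system):

# Report the negotiated PCIe link width and speed of every NVMe controller on this node
for c in /sys/class/nvme/nvme*; do
    dev=$(basename "$c")
    width=$(cat "$c/device/current_link_width" 2>/dev/null)
    speed=$(cat "$c/device/current_link_speed" 2>/dev/null)
    echo "$dev: x$width @ $speed"
done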


Because we are deploying a simple Lustre installation, we will use a simple file system layout with a single metadata server. With only one metadata server we need just one RAID for metadata, so we will not create two namespaces on the drives used for the MDT RAID.


Here is the initial state of the NVMe drive configuration:


# nvme listNode                  SN                   Model                                    Namespace Usage                      Format           FW Rev--------------------- -------------------- ---------------------------------------- --------- -------------------------- ---------------- --------/dev/nvme0n1          21G0A046T2G8         KCM61RUL3T84                             1           0.00   B /   3.84  TB      4 KiB +  0 B   0106/dev/nvme1n1          21G0A04BT2G8         KCM61RUL3T84                             1           0.00   B /   3.84  TB      4 KiB +  0 B   0106/dev/nvme10n1         21G0A04ET2G8         KCM61RUL3T84                             1           0.00   B /   3.84  TB      4 KiB +  0 B   0106/dev/nvme11n1         21G0A045T2G8         KCM61RUL3T84                             1           0.00   B /   3.84  TB      4 KiB +  0 B   0106/dev/nvme12n1         S59BNM0R702322Z      Samsung SSD 970 EVO Plus 250GB           1           8.67  GB / 250.06  GB    512   B +  0 B   2B2QEXM7/dev/nvme13n1         21G0A04KT2G8         KCM61RUL3T84                             1           0.00   B /   3.84  TB      4 KiB +  0 B   0106/dev/nvme14n1         21G0A047T2G8         KCM61RUL3T84                             1           0.00   B /   3.84  TB      4 KiB +  0 B   0106/dev/nvme15n1         21G0A04CT2G8         KCM61RUL3T84                             1           0.00   B /   3.84  TB      4 KiB +  0 B   0106/dev/nvme16n1         11U0A00KT2G8         KCM61RUL3T84                             1           0.00   B /   3.84  TB      4 KiB +  0 B   0106/dev/nvme17n1         21G0A04JT2G8         KCM61RUL3T84                             1           0.00   B /   3.84  TB      4 KiB +  0 B   0106/dev/nvme18n1         21G0A048T2G8         KCM61RUL3T84                             1           0.00   B /   3.84  TB      4 KiB +  0 B   0106/dev/nvme19n1         S59BNM0R702439A      Samsung SSD 970 EVO Plus 250GB           1         208.90  kB / 250.06  GB    512   B +  0 B   2B2QEXM7/dev/nvme2n1          21G0A041T2G8         KCM61RUL3T84                             1           0.00   B /   3.84  TB      4 KiB +  0 B   0106/dev/nvme20n1         21G0A03TT2G8         KCM61RUL3T84                             1           0.00   B /   3.84  TB      4 KiB +  0 B   0106/dev/nvme21n1         21G0A04FT2G8         KCM61RUL3T84                             1           0.00   B /   3.84  TB      4 KiB +  0 B   0106/dev/nvme22n1         21G0A03ZT2G8         KCM61RUL3T84                             1           0.00   B /   3.84  TB      4 KiB +  0 B   0106/dev/nvme23n1         21G0A04DT2G8         KCM61RUL3T84                             1           0.00   B /   3.84  TB      4 KiB +  0 B   0106/dev/nvme24n1         21G0A03VT2G8         KCM61RUL3T84                             1           0.00   B /   3.84  TB      4 KiB +  0 B   0106/dev/nvme25n1         21G0A044T2G8         KCM61RUL3T84                             1           0.00   B /   3.84  TB      4 KiB +  0 B   0106/dev/nvme3n1          21G0A04GT2G8         KCM61RUL3T84                             1           0.00   B /   3.84  TB      4 KiB +  0 B   0106/dev/nvme4n1          21G0A042T2G8         KCM61RUL3T84                             1           0.00   B /   3.84  TB      4 KiB +  0 B   0106/dev/nvme5n1          21G0A04HT2G8         KCM61RUL3T84                             1           0.00   B /   3.84  TB      4 KiB +  0 B   0106/dev/nvme6n1          21G0A049T2G8         KCM61RUL3T84                             1           0.00   B /   3.84  TB     
 4 KiB +  0 B   0106/dev/nvme7n1          21G0A043T2G8         KCM61RUL3T84                             1           0.00   B /   3.84  TB      4 KiB +  0 B   0106/dev/nvme8n1          21G0A04AT2G8         KCM61RUL3T84                             1           0.00   B /   3.84  TB      4 KiB +  0 B   0106/dev/nvme9n1          21G0A03XT2G8         KCM61RUL3T84                             1           0.00   B /   3.84  TB      4 KiB +  0 B   0106


The Samsung drives are used for the operating system installation.


We will reserve the /dev/nvme0 and /dev/nvme1 drives for the metadata RAID 1. xiRAID does not currently support spare pools in a cluster configuration, but having a spare drive on hand is very useful for quick manual drive replacement, so we will also reserve /dev/nvme3 as a spare for the RAID 1. All other KCM61RUL3T84 drives will be split into two namespaces.


We will use /dev/nvme4 as the example; all the other drives are split in exactly the same way.


Check the drive's maximum possible capacity to be sure:

# nvme id-ctrl /dev/nvme4 | grep -i tnvmcap
tnvmcap : 3840755982336


Check the maximum number of namespaces the drive supports:

# nvme id-ctrl /dev/nvme4 | grep ^nn
nn : 64


Check which controller each server uses to attach the drive (they differ between the two servers):

node27# nvme id-ctrl /dev/nvme4 | grep ^cntlid
cntlid : 0x1

node26# nvme id-ctrl /dev/nvme4 | grep ^cntlid
cntlid : 0x2


We need to calculate the size of the namespaces to create. The usable size of the drive in 4K blocks is:


3840755982336 / 4096 = 937684566


Therefore, the size of each namespace in 4K blocks would be:


937684566 / 2 = 468842283


Because of NVMe internal architecture constraints, it is not actually possible to create two namespaces of exactly equal size, so we will create namespaces of 468,700,000 blocks each.


Recommendation for Write-Intensive Workloads

If you are building a system for write-intensive workloads, we recommend write-optimized drives with 3 DWPD endurance. If you have to use read-optimized drives, consider leaving 10-25% of the NVMe capacity unallocated by namespaces. This usually brings the write-performance degradation of read-optimized NVMe drives close to that of write-optimized ones.
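As a rough illustration of the arithmetic (the 20% figure below is just an example within the 10-25% range suggested above and is not used in this setup):

# Example only: leave 20% of the drive unallocated and split the rest into two namespaces
usable_blocks=937684566                        # 3840755982336 bytes / 4096
ns_blocks=$(( usable_blocks * 80 / 100 / 2 ))  # = 375073826 blocks per namespace
echo "$ns_blocks"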

Procedure

Delete the existing namespace (run on either node):


node26# nvme delete-ns /dev/nvme4 -n 1

Create the new namespaces (on the same node):

node26# nvme create-ns /dev/nvme4 --nsze=468700000 --ncap=468700000 -b=4096 --dps=0 -m 1
create-ns: Success, created nsid:1
node26# nvme create-ns /dev/nvme4 --nsze=468700000 --ncap=468700000 -b=4096 --dps=0 -m 1
create-ns: Success, created nsid:2
node26# nvme attach-ns /dev/nvme4 --namespace-id=1 -controllers=0x2
attach-ns: Success, nsid:1
node26# nvme attach-ns /dev/nvme4 --namespace-id=2 -controllers=0x2
attach-ns: Success, nsid:2

Attach the namespaces on the second node (the correct controller must be specified):

node27# nvme attach-ns /dev/nvme4 --namespace-id=1 -controllers=0x1
attach-ns: Success, nsid:1
node27# nvme attach-ns /dev/nvme4 --namespace-id=2 -controllers=0x1
attach-ns: Success, nsid:2

Verify the Configuration

Both nodes should now show the following:

# nvme list | grep nvme4
/dev/nvme4n1  21G0A042T2G8  KCM61RUL3T84  1  0.00 B / 1.92 TB  4 KiB + 0 B  0106
/dev/nvme4n2  21G0A042T2G8  KCM61RUL3T84  2  0.00 B / 1.92 TB  4 KiB + 0 B  0106
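The remaining OST data drives are split exactly the same way. A minimal bash sketch of how this could be scripted is shown below; the drive list, namespace size, and controller IDs reflect this particular setup and must be adapted to yours (the create/attach loop runs on one node, after which each namespace is also attached on the other node with that node's own controller ID):

#!/usr/bin/env bash
# Sketch: split each remaining Kioxia data drive into two 468,700,000-block namespaces
NS_BLOCKS=468700000
CNTLID=0x2   # controller ID of the local node for these drives (0x1 on the other node)
for d in /dev/nvme{5..11} /dev/nvme{13..18} /dev/nvme{20..25}; do
    nvme delete-ns "$d" -n 1
    nvme create-ns "$d" --nsze=$NS_BLOCKS --ncap=$NS_BLOCKS -b 4096 --dps=0 -m 1
    nvme create-ns "$d" --nsze=$NS_BLOCKS --ncap=$NS_BLOCKS -b 4096 --dps=0 -m 1
    nvme attach-ns "$d" --namespace-id=1 --controllers=$CNTLID
    nvme attach-ns "$d" --namespace-id=2 --controllers=$CNTLID
done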

With all drives split in the same way, the final configuration looks like this (excerpt):

# nvme listNode                  SN                   Model                                    Namespace Usage                      Format           FW Rev--------------------- -------------------- ---------------------------------------- --------- -------------------------- ---------------- --------/dev/nvme0n1          21G0A046T2G8         KCM61RUL3T84                             1           0.00   B /   3.84  TB      4 KiB +  0 B   0106/dev/nvme1n1          21G0A04BT2G8         KCM61RUL3T84                             1           0.00   B /   3.84  TB      4 KiB +  0 B   0106/dev/nvme10n1         21G0A04ET2G8         KCM61RUL3T84                             1           0.00   B /   1.92  TB      4 KiB +  0 B   0106/dev/nvme10n2         21G0A04ET2G8         KCM61RUL3T84                             2           0.00   B /   1.92  TB      4 KiB +  0 B   0106/dev/nvme11n1         21G0A045T2G8         KCM61RUL3T84                             1           0.00   B /   1.92  TB      4 KiB +  0 B   0106/dev/nvme11n2         21G0A045T2G8         KCM61RUL3T84                             2           0.00   B /   1.92  TB      4 KiB +  0 B   0106/dev/nvme12n1         S59BNM0R702322Z      Samsung SSD 970 EVO Plus 250GB           1           8.67  GB / 250.06  GB    512   B +  0 B   2B2QEXM7/dev/nvme13n1         21G0A04KT2G8         KCM61RUL3T84                             1           0.00   B /   1.92  TB      4 KiB +  0 B   0106/dev/nvme13n2         21G0A04KT2G8         KCM61RUL3T84                             2           0.00   B /   1.92  TB      4 KiB +  0 B   0106/dev/nvme14n1         21G0A047T2G8         KCM61RUL3T84                             1           0.00   B /   1.92  TB      4 KiB +  0 B   0106/dev/nvme14n2         21G0A047T2G8         KCM61RUL3T84                             2           0.00   B /   1.92  TB      4 KiB +  0 B   0106/dev/nvme15n1         21G0A04CT2G8         KCM61RUL3T84                             1           0.00   B /   1.92  TB      4 KiB +  0 B   0106/dev/nvme15n2         21G0A04CT2G8         KCM61RUL3T84                             2           0.00   B /   1.92  TB      4 KiB +  0 B   0106/dev/nvme16n1         11U0A00KT2G8         KCM61RUL3T84                             1           0.00   B /   1.92  TB      4 KiB +  0 B   0106/dev/nvme16n2         11U0A00KT2G8         KCM61RUL3T84                             2           0.00   B /   1.92  TB      4 KiB +  0 B   0106/dev/nvme17n1         21G0A04JT2G8         KCM61RUL3T84                             1           0.00   B /   1.92  TB      4 KiB +  0 B   0106/dev/nvme17n2         21G0A04JT2G8         KCM61RUL3T84                             2           0.00   B /   1.92  TB      4 KiB +  0 B   0106/dev/nvme18n1         21G0A048T2G8         KCM61RUL3T84                             1           0.00   B /   1.92  TB      4 KiB +  0 B   0106/dev/nvme18n2         21G0A048T2G8         KCM61RUL3T84                             2           0.00   B /   1.92  TB      4 KiB +  0 B   0106/dev/nvme19n1         S59BNM0R702439A      Samsung SSD 970 EVO Plus 250GB           1         208.90  kB / 250.06  GB    512   B +  0 B   2B2QEXM7/dev/nvme2n1          21G0A041T2G8         KCM61RUL3T84                             1           0.00   B /   3.84  TB      4 KiB +  0 B   0106/dev/nvme20n1         21G0A03TT2G8         KCM61RUL3T84                             1           0.00   B /   1.92  TB      4 KiB +  0 B   0106/dev/nvme20n2         21G0A03TT2G8         KCM61RUL3T84                             2           0.00   B /   1.92  TB     
 4 KiB +  0 B   0106/dev/nvme21n1         21G0A04FT2G8         KCM61RUL3T84                             1           0.00   B /   1.92  TB      4 KiB +  0 B   0106/dev/nvme21n2         21G0A04FT2G8         KCM61RUL3T84                             2           0.00   B /   1.92  TB      4 KiB +  0 B   0106/dev/nvme22n1         21G0A03ZT2G8         KCM61RUL3T84                             1           0.00   B /   1.92  TB      4 KiB +  0 B   0106/dev/nvme22n2         21G0A03ZT2G8         KCM61RUL3T84                             2           0.00   B /   1.92  TB      4 KiB +  0 B   0106/dev/nvme23n1         21G0A04DT2G8         KCM61RUL3T84                             1           0.00   B /   1.92  TB      4 KiB +  0 B   0106/dev/nvme23n2         21G0A04DT2G8         KCM61RUL3T84                             2           0.00   B /   1.92  TB      4 KiB +  0 B   0106/dev/nvme24n1         21G0A03VT2G8         KCM61RUL3T84                             1           0.00   B /   1.92  TB      4 KiB +  0 B   0106/dev/nvme24n2         21G0A03VT2G8         KCM61RUL3T84                             2           0.00   B /   1.92  TB      4 KiB +  0 B   0106/dev/nvme25n1         21G0A044T2G8         KCM61RUL3T84                             1           0.00   B /   1.92  TB      4 KiB +  0 B   0106/dev/nvme25n2         21G0A044T2G8         KCM61RUL3T84                             2           0.00   B /   1.92  TB      4 KiB +  0 B   0106/dev/nvme3n1          21G0A04GT2G8         KCM61RUL3T84                             1           0.00   B /   3.84  TB      4 KiB +  0 B   0106/dev/nvme4n1          21G0A042T2G8         KCM61RUL3T84                             1           0.00   B /   1.92  TB      4 KiB +  0 B   0106/dev/nvme4n2          21G0A042T2G8         KCM61RUL3T84                             2           0.00   B /   1.92  TB      4 KiB +  0 B   0106/dev/nvme5n1          21G0A04HT2G8         KCM61RUL3T84                             1           0.00   B /   1.92  TB      4 KiB +  0 B   0106/dev/nvme5n2          21G0A04HT2G8         KCM61RUL3T84                             2           0.00   B /   1.92  TB      4 KiB +  0 B   0106/dev/nvme6n1          21G0A049T2G8         KCM61RUL3T84                             1           0.00   B /   1.92  TB      4 KiB +  0 B   0106/dev/nvme6n2          21G0A049T2G8         KCM61RUL3T84                             2           0.00   B /   1.92  TB      4 KiB +  0 B   0106/dev/nvme7n1          21G0A043T2G8         KCM61RUL3T84                             1           0.00   B /   1.92  TB      4 KiB +  0 B   0106/dev/nvme7n2          21G0A043T2G8         KCM61RUL3T84                             2           0.00   B /   1.92  TB      4 KiB +  0 B   0106/dev/nvme8n1          21G0A04AT2G8         KCM61RUL3T84                             1           0.00   B /   1.92  TB      4 KiB +  0 B   0106/dev/nvme8n2          21G0A04AT2G8         KCM61RUL3T84                             2           0.00   B /   1.92  TB      4 KiB +  0 B   0106/dev/nvme9n1          21G0A03XT2G8         KCM61RUL3T84                             1           0.00   B /   1.92  TB      4 KiB +  0 B   0106/dev/nvme9n2          21G0A03XT2G8         KCM61RUL3T84                             2           0.00   B /   1.92  TB      4 KiB +  0 B   0106

Software Component Installation

Lustre Environment Deployment

Create the Lustre repository file /etc/yum.repos.d/lustre-repo.repo:

[lustre-server]
name=lustre-server
baseurl=https://downloads.whamcloud.com/public/lustre/latest-release/el8.9/server
gpgcheck=0

[lustre-client]
name=lustre-client
baseurl=https://downloads.whamcloud.com/public/lustre/latest-release/el8.9/client
gpgcheck=0

[e2fsprogs-wc]
name=e2fsprogs-wc
baseurl=https://downloads.whamcloud.com/public/e2fsprogs/latest/el8
gpgcheck=0

Install the e2fsprogs tools:

yum --nogpgcheck --disablerepo=* --enablerepo=e2fsprogs-wc install e2fsprogs

Install the Lustre kernel:

yum --nogpgcheck --disablerepo=baseos,extras,updates --enablerepo=lustre-server install kernel kernel-devel kernel-headers

Reboot to apply the new kernel:

reboot

Verify the kernel version:

node26# uname -a
Linux node26 4.18.0-513.9.1.el8_lustre.x86_64 #1 SMP Sat Dec 23 05:23:32 UTC 2023 x86_64 GNU/Linux

Install the Lustre service components:

yum --nogpgcheck --enablerepo=lustre-server,ha install kmod-lustre kmod-lustre-osd-ldiskfs lustre-osd-ldiskfs-mount lustre lustre-resource-agents
Test loading the Lustre modules:

[root@node26 ~]# modprobe -v lustre
insmod /lib/modules/4.18.0-513.9.1.el8_lustre.x86_64/extra/lustre/net/libcfs.ko
insmod /lib/modules/4.18.0-513.9.1.el8_lustre.x86_64/extra/lustre/net/lnet.ko
insmod /lib/modules/4.18.0-513.9.1.el8_lustre.x86_64/extra/lustre/fs/obdclass.ko
insmod /lib/modules/4.18.0-513.9.1.el8_lustre.x86_64/extra/lustre/fs/ptlrpc.ko
insmod /lib/modules/4.18.0-513.9.1.el8_lustre.x86_64/extra/lustre/fs/fld.ko
insmod /lib/modules/4.18.0-513.9.1.el8_lustre.x86_64/extra/lustre/fs/fid.ko
insmod /lib/modules/4.18.0-513.9.1.el8_lustre.x86_64/extra/lustre/fs/osc.ko
insmod /lib/modules/4.18.0-513.9.1.el8_lustre.x86_64/extra/lustre/fs/lov.ko
insmod /lib/modules/4.18.0-513.9.1.el8_lustre.x86_64/extra/lustre/fs/mdc.ko
insmod /lib/modules/4.18.0-513.9.1.el8_lustre.x86_64/extra/lustre/fs/lmv.ko
insmod /lib/modules/4.18.0-513.9.1.el8_lustre.x86_64/extra/lustre/fs/lustre.ko


Unload the modules:

# lustre_rmmod

Installing xiRAID Classic 4.1

Following the Xinnor xiRAID 4.1.0 installation guide, install xiRAID Classic 4.1 from the repository on both nodes:


# yum install -y epel-release
# yum install https://pkg.xinnor.io/repository/Repository/xiraid/el/8/kver-4.18/xiraid-repo-1.1.0-446.kver.4.18.noarch.rpm
# yum install xiraid-release

Installing Pacemaker

Perform the following steps on both nodes.


Enable the cluster repositories:


# yum config-manager --set-enabled ha appstream


Install the cluster software:

# yum install pcs pacemaker psmisc policycoreutils-python3


Installing Csync2

Since we are installing on Rocky Linux 8, there is no need to build Csync2 from source. Simply install the Csync2 package from the Xinnor repository on both nodes:


# yum install csync2


NTP Service Installation

# yum install chrony


High-Availability (HA) Cluster Setup

Time Synchronization Setup

Modify the /etc/chrony.conf file as needed so that it talks to the correct NTP servers. In this setup we will use the default configuration.
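If you do need to point chrony at a specific NTP server, a minimal /etc/chrony.conf change might look like the following (ntp.example.com is a placeholder, not part of this setup):

# /etc/chrony.conf (excerpt): replace the default pool with your own NTP server
# pool 2.pool.ntp.org iburst
server ntp.example.com iburst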


# systemctl enable --now chronyd.service


Verify that time synchronization is working by running:


# chronyc tracking


Pacemaker Cluster Creation

This chapter describes the cluster configuration. In our cluster we use a dedicated network for the cluster interconnect. Physically it is a direct connection between the servers' enp194s0f1 interfaces over a dedicated Ethernet cable (no switch involved). The cluster interconnect is a critical part of any high-availability cluster, and it should be as reliable as possible. A Pacemaker-based cluster can improve redundancy by using two interconnect networks. We will use a single-network configuration here, but consider a dual-network interconnect for your own project if needed.
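For reference, a dual-link interconnect is defined at cluster creation time by giving each node two addresses. A sketch of what that might look like (the 10.10.20.x addresses are hypothetical, standing in for a second dedicated network that this setup does not have):

# Sketch only: two corosync links per node (requires a second interconnect network)
pcs cluster setup lustrebox0 \
    node26-ic addr=10.10.10.1 addr=10.10.20.1 \
    node27-ic addr=10.10.10.2 addr=10.10.20.2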

Configure the firewall to allow the Pacemaker software to operate (run on both nodes):

# firewall-cmd --add-service=high-availability
# firewall-cmd --permanent --add-service=high-availability


Set the same password for the hacluster user (on both nodes):

# passwd hacluster


Start the cluster software on both nodes:

# systemctl start pcsd.service
# systemctl enable pcsd.service


Authenticate the cluster nodes from one node, using the interconnect interfaces:

node26# pcs host auth node26-ic node27-ic -u hacluster
Password:
node26-ic: Authorized
node27-ic: Authorized


Create the cluster (run on one node):

node26# pcs cluster setup lustrebox0 node26-ic node27-ic
No addresses specified for host 'node26-ic', using 'node26-ic'
No addresses specified for host 'node27-ic', using 'node27-ic'
Destroying cluster on hosts: 'node26-ic', 'node27-ic'...
node26-ic: Successfully destroyed cluster
node27-ic: Successfully destroyed cluster
Requesting remove 'pcsd settings' from 'node26-ic', 'node27-ic'
node26-ic: successful removal of the file 'pcsd settings'
node27-ic: successful removal of the file 'pcsd settings'
Sending 'corosync authkey', 'pacemaker authkey' to 'node26-ic', 'node27-ic'
node26-ic: successful distribution of the file 'corosync authkey'
node26-ic: successful distribution of the file 'pacemaker authkey'
node27-ic: successful distribution of the file 'corosync authkey'
node27-ic: successful distribution of the file 'pacemaker authkey'
Sending 'corosync.conf' to 'node26-ic', 'node27-ic'
node26-ic: successful distribution of the file 'corosync.conf'
node27-ic: successful distribution of the file 'corosync.conf'
Cluster has been successfully set up.


Start the cluster (this starts it on all nodes):

node26# pcs cluster start --all
node26-ic: Starting Cluster...
node27-ic: Starting Cluster...


Check the current cluster status:

node26# pcs status
Cluster name: lustrebox0

WARNINGS:
No stonith devices and stonith-enabled is not false

Cluster Summary:
 * Stack: corosync (Pacemaker is running)
 * Current DC: node27-ic (version 2.1.7-5.el8_10-0f7f88312) - partition with quorum
 * Last updated: Fri Jul 12 20:55:53 2024 on node26-ic
 * Last change: Fri Jul 12 20:55:12 2024 by hacluster via hacluster on node27-ic
 * 2 nodes configured
 * 0 resource instances configured

Node List:
 * Online: [ node26-ic node27-ic ]

Full List of Resources:
 * No resources

Daemon Status:
 corosync: active/disabled
 pacemaker: active/disabled
 pcsd: active/enabled

Configuring Fencing

Correctly configured, working fencing (STONITH) is essential in any high-availability cluster that uses shared storage devices. In our case the shared devices are all the NVMe namespaces created earlier. The fencing (STONITH) design should be developed and implemented by the cluster administrator based on the capabilities and architecture of the system. In this system we will use IPMI for fencing. When designing and deploying your own cluster, choose your own fencing configuration, taking into account all the possibilities, limitations, and risks.

First, check the list of fencing agents installed on the system:

node26# pcs stonith list
fence_watchdog - Dummy watchdog fence agent


As you can see, the IPMI fencing agent is not yet installed on our cluster nodes. To install it, run the following command on both nodes:


# yum install fence-agents-ipmilan


You can view the description of the IPMI fencing agent's options by running:


pcs stonith describe fence_ipmilan


Add the fencing resources:

node26# pcs stonith create node27.stonith fence_ipmilan ip="192.168.67.23" auth=password password="admin" username="admin" method="onoff" lanplus=true pcmk_host_list="node27-ic" pcmk_host_check=static-list op monitor interval=10s
node26# pcs stonith create node26.stonith fence_ipmilan ip="192.168.64.106" auth=password password="admin" username="admin" method="onoff" lanplus=true pcmk_host_list="node26-ic" pcmk_host_check=static-list op monitor interval=10s


Prevent each STONITH resource from starting on the node it is supposed to fence:

node26# pcs constraint location node27.stonith avoids node27-ic=INFINITY
node26# pcs constraint location node26.stonith avoids node26-ic=INFINITY


Csync2 Configuration

Configure the firewall to allow Csync2 to work (run on both nodes):

# firewall-cmd --add-port=30865/tcp
# firewall-cmd --permanent --add-port=30865/tcp


On node26, create the Csync2 configuration file /usr/local/etc/csync2.cfg with the following content:

nossl * *;
group csxiha {
    host node26;
    host node27;
    key /usr/local/etc/csync2.key_ha;
    include /etc/xiraid/raids;
}


Generate the key:

node26# csync2 -k /usr/local/etc/csync2.key_ha


Copy the configuration file and the key file to the second node:

node26# scp /usr/local/etc/csync2.cfg /usr/local/etc/csync2.key_ha node27:/usr/local/etc/


Scheduling Periodic Csync2 Synchronization

Run crontab -e on both nodes and add the following entry to synchronize once a minute:


* * * * * /usr/local/sbin/csync2 -x


Creating the Asynchronous Synchronization Script

Create the synchronization script by running the following command (repeat this step on both nodes):


# vi /etc/xiraid/config_update_handler.sh


Put the following content into the newly created script:

#!/usr/bin/bash
/usr/local/sbin/csync2 -xv


Save the file.

Set the correct permissions on the script:

# chmod +x /etc/xiraid/config_update_handler.sh


xiRAID Cluster Configuration

Disable RAID Auto-Start

To prevent xiRAID from activating RAIDs automatically during node startup, the RAIDs in a cluster configuration must be activated by Pacemaker through cluster resources. Run the following command on both nodes:


# xicli settings cluster modify --raid_autostart 0


Make the xiRAID Classic 4.1 resource agent visible to Pacemaker (run the following commands on both nodes):

# mkdir -p /usr/lib/ocf/resource.d/xraid
# ln -s /etc/xraid/agents/raid /usr/lib/ocf/resource.d/xraid/raid


xiRAID RAID Creation

Before we can create RAIDs, an xiRAID Classic 4.1 license must be installed on both hosts. The licenses are obtained from Xinnor; to generate them, Xinnor needs the output of the xicli license show command from both nodes.


node26# xicli license show
Kernel version: 4.18.0-513.9.1.el8_lustre.x86_64
hwkey: B8828A09E09E8F48
license_key: null
version: 0
crypto_version: 0
created: 0-0-0
expired: 0-0-0
disks: 4
levels: 0
type: nvme
disks_in_use: 2
status: trial


After installing the license files received from Xinnor (again, on both nodes), xicli license show should report a valid license:


node26# xicli license show
Kernel version: 4.18.0-513.9.1.el8_lustre.x86_64
hwkey: B8828A09E09E8F48
license_key: 0F5A4B87A0FC6DB7544EA446B1B4AF5F34A08169C44E5FD119CE6D2352E202677768ECC78F56B583DABE11698BBC800EC96E556AA63E576DAB838010247678E7E3B95C7C4E3F592672D06C597045EAAD8A42CDE38C363C533E98411078967C38224C9274B862D45D4E6DED70B7E34602C80B60CBA7FDE93316438AFDCD7CBD23
version: 1
crypto_version: 1
created: 2024-7-16
expired: 2024-9-30
disks: 600
levels: 70
type: nvme
disks_in_use: 2
status: valid


Since we are planning a small Lustre installation, it is perfectly acceptable to combine the MGT and MDT on the same target device. For medium or large Lustre installations it is better to use a separate target (and RAID) for the MGT.


Here is the list of RAIDs to create:

| RAID Name | RAID Level | Number of Devices | Strip Size | Drive List | Lustre Target |
|-----------|------------|-------------------|------------|------------|---------------|
| r_mdt0 | 1 | 2 | 16 | /dev/nvme0n1 /dev/nvme1n1 | MGT + MDT index=0 |
| r_ost0 | 6 | 10 | 128 | /dev/nvme4n1 /dev/nvme5n1 /dev/nvme6n1 /dev/nvme7n1 /dev/nvme8n1 /dev/nvme9n1 /dev/nvme10n1 /dev/nvme11n1 /dev/nvme13n1 /dev/nvme14n1 | OST index=0 |
| r_ost1 | 6 | 10 | 128 | /dev/nvme4n2 /dev/nvme5n2 /dev/nvme6n2 /dev/nvme7n2 /dev/nvme8n2 /dev/nvme9n2 /dev/nvme10n2 /dev/nvme11n2 /dev/nvme13n2 /dev/nvme14n2 | OST index=1 |
| r_ost2 | 6 | 10 | 128 | /dev/nvme15n1 /dev/nvme16n1 /dev/nvme17n1 /dev/nvme18n1 /dev/nvme20n1 /dev/nvme21n1 /dev/nvme22n1 /dev/nvme23n1 /dev/nvme24n1 /dev/nvme25n1 | OST index=2 |
| r_ost3 | 6 | 10 | 128 | /dev/nvme15n2 /dev/nvme16n2 /dev/nvme17n2 /dev/nvme18n2 /dev/nvme20n2 /dev/nvme21n2 /dev/nvme22n2 /dev/nvme23n2 /dev/nvme24n2 /dev/nvme25n2 | OST index=3 |


Create all the RAIDs on the first node:

node26# xicli raid create -n r_mdt0 -l 1 -d /dev/nvme0n1 /dev/nvme1n1
node26# xicli raid create -n r_ost0 -l 6 -ss 128 -d /dev/nvme4n1 /dev/nvme5n1 /dev/nvme6n1 /dev/nvme7n1 /dev/nvme8n1 /dev/nvme9n1 /dev/nvme10n1 /dev/nvme11n1 /dev/nvme13n1 /dev/nvme14n1
node26# xicli raid create -n r_ost1 -l 6 -ss 128 -d /dev/nvme4n2 /dev/nvme5n2 /dev/nvme6n2 /dev/nvme7n2 /dev/nvme8n2 /dev/nvme9n2 /dev/nvme10n2 /dev/nvme11n2 /dev/nvme13n2 /dev/nvme14n2
node26# xicli raid create -n r_ost2 -l 6 -ss 128 -d /dev/nvme15n1 /dev/nvme16n1 /dev/nvme17n1 /dev/nvme18n1 /dev/nvme20n1 /dev/nvme21n1 /dev/nvme22n1 /dev/nvme23n1 /dev/nvme24n1 /dev/nvme25n1
node26# xicli raid create -n r_ost3 -l 6 -ss 128 -d /dev/nvme15n2 /dev/nvme16n2 /dev/nvme17n2 /dev/nvme18n2 /dev/nvme20n2 /dev/nvme21n2 /dev/nvme22n2 /dev/nvme23n2 /dev/nvme24n2 /dev/nvme25n2


At this stage there is no need to wait for RAID initialization to finish; it is safe to let it run in the background.

Check the RAID status on the first node:

node26# xicli raid show╔RAIDs═══╦══════════════════╦═════════════╦════════════════════════╦═══════════════════╗║ name   ║ static           ║ state       ║ devices                ║ info              ║╠════════╬══════════════════╬═════════════╬════════════════════════╬═══════════════════╣║ r_mdt0 ║ size: 3576 GiB   ║ online      ║ 0 /dev/nvme0n1 online  ║                   ║║        ║ level: 1         ║ initialized ║ 1 /dev/nvme1n1 online  ║                   ║║        ║ strip_size: 16   ║             ║                        ║                   ║║        ║ block_size: 4096 ║             ║                        ║                   ║║        ║ sparepool: -     ║             ║                        ║                   ║║        ║ active: True     ║             ║                        ║                   ║║        ║ config: True     ║             ║                        ║                   ║╠════════╬══════════════════╬═════════════╬════════════════════════╬═══════════════════╣║ r_ost0 ║ size: 14302 GiB  ║ online      ║ 0 /dev/nvme4n1 online  ║ init_progress: 11 ║║        ║ level: 6         ║ initing     ║ 1 /dev/nvme5n1 online  ║                   ║║        ║ strip_size: 128  ║             ║ 2 /dev/nvme6n1 online  ║                   ║║        ║ block_size: 4096 ║             ║ 3 /dev/nvme7n1 online  ║                   ║║        ║ sparepool: -     ║             ║ 4 /dev/nvme8n1 online  ║                   ║║        ║ active: True     ║             ║ 5 /dev/nvme9n1 online  ║                   ║║        ║ config: True     ║             ║ 6 /dev/nvme10n1 online ║                   ║║        ║                  ║             ║ 7 /dev/nvme11n1 online ║                   ║║        ║                  ║             ║ 8 /dev/nvme13n1 online ║                   ║║        ║                  ║             ║ 9 /dev/nvme14n1 online ║                   ║╠════════╬══════════════════╬═════════════╬════════════════════════╬═══════════════════╣║ r_ost1 ║ size: 14302 GiB  ║ online      ║ 0 /dev/nvme4n2 online  ║ init_progress: 7  ║║        ║ level: 6         ║ initing     ║ 1 /dev/nvme5n2 online  ║                   ║║        ║ strip_size: 128  ║             ║ 2 /dev/nvme6n2 online  ║                   ║║        ║ block_size: 4096 ║             ║ 3 /dev/nvme7n2 online  ║                   ║║        ║ sparepool: -     ║             ║ 4 /dev/nvme8n2 online  ║                   ║║        ║ active: True     ║             ║ 5 /dev/nvme9n2 online  ║                   ║║        ║ config: True     ║             ║ 6 /dev/nvme10n2 online ║                   ║║        ║                  ║             ║ 7 /dev/nvme11n2 online ║                   ║║        ║                  ║             ║ 8 /dev/nvme13n2 online ║                   ║║        ║                  ║             ║ 9 /dev/nvme14n2 online ║                   ║╠════════╬══════════════════╬═════════════╬════════════════════════╬═══════════════════╣║ r_ost2 ║ size: 14302 GiB  ║ online      ║ 0 /dev/nvme15n1 online ║ init_progress: 5  ║║        ║ level: 6         ║ initing     ║ 1 /dev/nvme16n1 online ║                   ║║        ║ strip_size: 128  ║             ║ 2 /dev/nvme17n1 online ║                   ║║        ║ block_size: 4096 ║             ║ 3 /dev/nvme18n1 online ║                   ║║        ║ sparepool: -     ║             ║ 4 /dev/nvme20n1 online ║                   ║║        ║ active: True     ║             ║ 5 /dev/nvme21n1 online ║                   ║║        ║ config: True     ║             ║ 6 /dev/nvme22n1 online ║                   ║║        ║  
                ║             ║ 7 /dev/nvme23n1 online ║                   ║║        ║                  ║             ║ 8 /dev/nvme24n1 online ║                   ║║        ║                  ║             ║ 9 /dev/nvme25n1 online ║                   ║╠════════╬══════════════════╬═════════════╬════════════════════════╬═══════════════════╣║ r_ost3 ║ size: 14302 GiB  ║ online      ║ 0 /dev/nvme15n2 online ║ init_progress: 2  ║║        ║ level: 6         ║ initing     ║ 1 /dev/nvme16n2 online ║                   ║║        ║ strip_size: 128  ║             ║ 2 /dev/nvme17n2 online ║                   ║║        ║ block_size: 4096 ║             ║ 3 /dev/nvme18n2 online ║                   ║║        ║ sparepool: -     ║             ║ 4 /dev/nvme20n2 online ║                   ║║        ║ active: True     ║             ║ 5 /dev/nvme21n2 online ║                   ║║        ║ config: True     ║             ║ 6 /dev/nvme22n2 online ║                   ║║        ║                  ║             ║ 7 /dev/nvme23n2 online ║                   ║║        ║                  ║             ║ 8 /dev/nvme24n2 online ║                   ║║        ║                  ║             ║ 9 /dev/nvme25n2 online ║                   ║╚════════╩══════════════════╩═════════════╩════════════════════════╩═══════════════════╝


Check that the RAID configuration has been replicated to the second node (note that on the second node the RAID state is None, which is expected):


node27# xicli raid show╔RAIDs═══╦══════════════════╦═══════╦═════════╦══════╗║ name   ║ static           ║ state ║ devices ║ info ║╠════════╬══════════════════╬═══════╬═════════╬══════╣║ r_mdt0 ║ size: 3576 GiB   ║ None  ║         ║      ║║        ║ level: 1         ║       ║         ║      ║║        ║ strip_size: 16   ║       ║         ║      ║║        ║ block_size: 4096 ║       ║         ║      ║║        ║ sparepool: -     ║       ║         ║      ║║        ║ active: False    ║       ║         ║      ║║        ║ config: True     ║       ║         ║      ║╠════════╬══════════════════╬═══════╬═════════╬══════╣║ r_ost0 ║ size: 14302 GiB  ║ None  ║         ║      ║║        ║ level: 6         ║       ║         ║      ║║        ║ strip_size: 128  ║       ║         ║      ║║        ║ block_size: 4096 ║       ║         ║      ║║        ║ sparepool: -     ║       ║         ║      ║║        ║ active: False    ║       ║         ║      ║║        ║ config: True     ║       ║         ║      ║╠════════╬══════════════════╬═══════╬═════════╬══════╣║ r_ost1 ║ size: 14302 GiB  ║ None  ║         ║      ║║        ║ level: 6         ║       ║         ║      ║║        ║ strip_size: 128  ║       ║         ║      ║║        ║ block_size: 4096 ║       ║         ║      ║║        ║ sparepool: -     ║       ║         ║      ║║        ║ active: False    ║       ║         ║      ║║        ║ config: True     ║       ║         ║      ║╠════════╬══════════════════╬═══════╬═════════╬══════╣║ r_ost2 ║ size: 14302 GiB  ║ None  ║         ║      ║║        ║ level: 6         ║       ║         ║      ║║        ║ strip_size: 128  ║       ║         ║      ║║        ║ block_size: 4096 ║       ║         ║      ║║        ║ sparepool: -     ║       ║         ║      ║║        ║ active: False    ║       ║         ║      ║║        ║ config: True     ║       ║         ║      ║╠════════╬══════════════════╬═══════╬═════════╬══════╣║ r_ost3 ║ size: 14302 GiB  ║ None  ║         ║      ║║        ║ level: 6         ║       ║         ║      ║║        ║ strip_size: 128  ║       ║         ║      ║║        ║ block_size: 4096 ║       ║         ║      ║║        ║ sparepool: -     ║       ║         ║      ║║        ║ active: False    ║       ║         ║      ║║        ║ config: True     ║       ║         ║      ║╚════════╩══════════════════╩═══════╩═════════╩══════╝

After creating the RAIDs there is no need to wait for initialization to complete. A RAID is usable immediately after creation, although performance may be slightly reduced until initialization finishes.


For the best performance it is preferable to assign each RAID a dedicated, non-overlapping set of CPU cores. Right now all the RAIDs are active on node26, so the core sets overlap, but once the RAIDs are distributed between node26 and node27 they will not.


node26# xicli raid modify -n r_mdt0 -ca 0-7 -se 1
node26# xicli raid modify -n r_ost0 -ca 8-67 -se 1
node26# xicli raid modify -n r_ost1 -ca 8-67 -se 1     # will run on node27
node26# xicli raid modify -n r_ost2 -ca 68-127 -se 1
node26# xicli raid modify -n r_ost3 -ca 68-127 -se 1   # will run on node27

Lustre Configuration

LNET Configuration

For Lustre to work, we need to configure the Lustre networking stack (LNET).


Run the following commands on both nodes:


# systemctl start lnet
# systemctl enable lnet
# lnetctl net add --net o2ib0 --if ib0
# lnetctl net add --net o2ib0 --if ib3


Check the configuration:


# lnetctl net show -v


Example output:


net:    - net type: lo      local NI(s):        - nid: 0@lo          status: up          statistics:              send_count: 289478              recv_count: 289474              drop_count: 4          tunables:              peer_timeout: 0              peer_credits: 0              peer_buffer_credits: 0              credits: 0          lnd tunables:          dev cpt: 0          CPT: "[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31]"    - net type: o2ib      local NI(s):        - nid: 100.100.100.26@o2ib          status: down          interfaces:              0: ib0          statistics:              send_count: 213607              recv_count: 213604              drop_count: 7          tunables:              peer_timeout: 180              peer_credits: 8              peer_buffer_credits: 0              credits: 256          lnd tunables:              peercredits_hiw: 4              map_on_demand: 1              concurrent_sends: 8              fmr_pool_size: 512              fmr_flush_trigger: 384              fmr_cache: 1              ntx: 512              conns_per_peer: 1          dev cpt: -1          CPT: "[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31]"        - nid: 100.100.100.126@o2ib          status: up          interfaces:              0: ib3          statistics:              send_count: 4              recv_count: 4              drop_count: 0          tunables:              peer_timeout: 180              peer_credits: 8              peer_buffer_credits: 0              credits: 256          lnd tunables:              peercredits_hiw: 4              map_on_demand: 1              concurrent_sends: 8              fmr_pool_size: 512              fmr_flush_trigger: 384              fmr_cache: 1              ntx: 512              conns_per_peer: 1          dev cpt: -1          CPT: "[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31]"


Take note of the LNET NIDs (network identifiers) on the hosts. We will use 100.100.100.26@o2ib as the primary NID of node26 and 100.100.100.27@o2ib as the primary NID of node27.
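A quicker way to list just the local NIDs (a standard Lustre command, not shown in the original listing) is:

# Print only the NIDs configured on this node
node26# lctl list_nids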


Save the LNET configuration:


# lnetctl export -b > /etc/lnet.conf

LDISKFS Filesystem Creation

In this step we format the RAIDs with the LDISKFS file system. During formatting we specify the target type (--mgs/--mdt/--ost), a unique index for that target type (--index), the Lustre file system name (--fsname), the NIDs of the nodes on which each target file system may be mounted and where the corresponding server will be started automatically (--servicenode), and the NIDs where the MGS can be found (--mgsnode).


Because our RAIDs will run in the cluster, we specify the NIDs of both server nodes as mount locations for the target file systems, so that the corresponding servers can start automatically on either node. For the same reason, we also specify both NIDs at which the other servers should look for the MGS service.


node26# mkfs.lustre --mgs --mdt --fsname=lustre0 --index=0 --servicenode=100.100.100.26@o2ib --servicenode=100.100.100.27@o2ib --mgsnode=100.100.100.26@o2ib --mgsnode=100.100.100.27@o2ib /dev/xi_r_mdt0
node26# mkfs.lustre --ost --fsname=lustre0 --index=0 --servicenode=100.100.100.26@o2ib --servicenode=100.100.100.27@o2ib --mgsnode=100.100.100.26@o2ib --mgsnode=100.100.100.27@o2ib /dev/xi_r_ost0
node26# mkfs.lustre --ost --fsname=lustre0 --index=1 --servicenode=100.100.100.26@o2ib --servicenode=100.100.100.27@o2ib --mgsnode=100.100.100.26@o2ib --mgsnode=100.100.100.27@o2ib /dev/xi_r_ost1
node26# mkfs.lustre --ost --fsname=lustre0 --index=2 --servicenode=100.100.100.26@o2ib --servicenode=100.100.100.27@o2ib --mgsnode=100.100.100.26@o2ib --mgsnode=100.100.100.27@o2ib /dev/xi_r_ost2
node26# mkfs.lustre --ost --fsname=lustre0 --index=3 --servicenode=100.100.100.26@o2ib --servicenode=100.100.100.27@o2ib --mgsnode=100.100.100.26@o2ib --mgsnode=100.100.100.27@o2ib /dev/xi_r_ost3


For more details, see the Lustre documentation.


Cluster Resource Creation

Review the table below. It describes everything that needs to be configured.

Table: cluster resource configuration

| RAID Name | HA Cluster RAID Resource | Lustre Target | Mount Point | HA Cluster Filesystem Resource | Preferred Cluster Node |
|-----------|--------------------------|-----------------------|----------------|--------------------------------|------------------------|
| r_mdt0 | rr_mdt0 | MGT + MDT index=0 | /lustre_t/mdt0 | fsr_mdt0 | node26 |
| r_ost0 | rr_ost0 | OST index=0 | /lustre_t/ost0 | fsr_ost0 | node26 |
| r_ost1 | rr_ost1 | OST index=1 | /lustre_t/ost1 | fsr_ost1 | node27 |
| r_ost2 | rr_ost2 | OST index=2 | /lustre_t/ost2 | fsr_ost2 | node26 |
| r_ost3 | rr_ost3 | OST index=3 | /lustre_t/ost3 | fsr_ost3 | node27 |


Creating the Pacemaker Resources for xiRAID Classic

We will use the xiRAID resource agent to create the Pacemaker resources. This resource agent is installed together with xiRAID Classic and was made available to Pacemaker in an earlier step.

Options for Clustering the Lustre Services

There are currently two resource agents that can manage Lustre OSDs:


  1. ocf:heartbeat:Filesystem

     Distributed by ClusterLabs in the resource-agents package, the Filesystem RA is a very mature and stable tool that has been part of the Pacemaker project for many years. It provides generic support for mounting and unmounting storage devices, which indirectly includes Lustre.

  2. ocf:lustre:Lustre

     Developed specifically for Lustre OSDs, this resource agent is distributed by the Lustre project and has been available since Lustre 2.10.0. Because of its narrower scope, it is simpler than ocf:heartbeat:Filesystem and better tailored to managing Lustre storage resources.


To keep things simple we will use ocf:heartbeat:Filesystem in this example. However, ocf:lustre:Lustre can just as easily be combined with xiRAID Classic in a Pacemaker cluster configuration. For more details on clustering Lustre, see the Lustre documentation pages.
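For comparison, a sketch of what the equivalent MDT file system resource might look like with ocf:lustre:Lustre; the parameter names target and mountpoint are those documented for that agent, and this command is not part of the procedure actually used in this guide:

# Alternative (not used here): manage the MDT mount with the Lustre resource agent
node26# pcs -f fs_cfg resource create fsr_mdt0 ocf:lustre:Lustre \
    target=/dev/xi_r_mdt0 mountpoint=/lustre_t/mdt0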


First, create mount points on both nodes for all the RAIDs formatted with LDISKFS:


# mkdir -p /lustre_t/ost3
# mkdir -p /lustre_t/ost2
# mkdir -p /lustre_t/ost1
# mkdir -p /lustre_t/ost0
# mkdir -p /lustre_t/mdt0


Unload all the RAIDs on the node where they are currently active:


node26# xicli raid show╔RAIDs═══╦══════════════════╦═════════════╦════════════════════════╦══════╗║ name   ║ static           ║ state       ║ devices                ║ info ║╠════════╬══════════════════╬═════════════╬════════════════════════╬══════╣║ r_mdt0 ║ size: 3576 GiB   ║ online      ║ 0 /dev/nvme0n1 online  ║      ║║        ║ level: 1         ║ initialized ║ 1 /dev/nvme1n1 online  ║      ║║        ║ strip_size: 16   ║             ║                        ║      ║║        ║ block_size: 4096 ║             ║                        ║      ║║        ║ sparepool: -     ║             ║                        ║      ║║        ║ active: True     ║             ║                        ║      ║║        ║ config: True     ║             ║                        ║      ║╠════════╬══════════════════╬═════════════╬════════════════════════╬══════╣║ r_ost0 ║ size: 14302 GiB  ║ online      ║ 0 /dev/nvme4n1 online  ║      ║║        ║ level: 6         ║ initialized ║ 1 /dev/nvme5n1 online  ║      ║║        ║ strip_size: 128  ║             ║ 2 /dev/nvme6n1 online  ║      ║║        ║ block_size: 4096 ║             ║ 3 /dev/nvme7n1 online  ║      ║║        ║ sparepool: -     ║             ║ 4 /dev/nvme8n1 online  ║      ║║        ║ active: True     ║             ║ 5 /dev/nvme9n1 online  ║      ║║        ║ config: True     ║             ║ 6 /dev/nvme10n1 online ║      ║║        ║                  ║             ║ 7 /dev/nvme11n1 online ║      ║║        ║                  ║             ║ 8 /dev/nvme13n1 online ║      ║║        ║                  ║             ║ 9 /dev/nvme14n1 online ║      ║╠════════╬══════════════════╬═════════════╬════════════════════════╬══════╣║ r_ost1 ║ size: 14302 GiB  ║ online      ║ 0 /dev/nvme4n2 online  ║      ║║        ║ level: 6         ║ initialized ║ 1 /dev/nvme5n2 online  ║      ║║        ║ strip_size: 128  ║             ║ 2 /dev/nvme6n2 online  ║      ║║        ║ block_size: 4096 ║             ║ 3 /dev/nvme7n2 online  ║      ║║        ║ sparepool: -     ║             ║ 4 /dev/nvme8n2 online  ║      ║║        ║ active: True     ║             ║ 5 /dev/nvme9n2 online  ║      ║║        ║ config: True     ║             ║ 6 /dev/nvme10n2 online ║      ║║        ║                  ║             ║ 7 /dev/nvme11n2 online ║      ║║        ║                  ║             ║ 8 /dev/nvme13n2 online ║      ║║        ║                  ║             ║ 9 /dev/nvme14n2 online ║      ║╠════════╬══════════════════╬═════════════╬════════════════════════╬══════╣║ r_ost2 ║ size: 14302 GiB  ║ online      ║ 0 /dev/nvme15n1 online ║      ║║        ║ level: 6         ║ initialized ║ 1 /dev/nvme16n1 online ║      ║║        ║ strip_size: 128  ║             ║ 2 /dev/nvme17n1 online ║      ║║        ║ block_size: 4096 ║             ║ 3 /dev/nvme18n1 online ║      ║║        ║ sparepool: -     ║             ║ 4 /dev/nvme20n1 online ║      ║║        ║ active: True     ║             ║ 5 /dev/nvme21n1 online ║      ║║        ║ config: True     ║             ║ 6 /dev/nvme22n1 online ║      ║║        ║                  ║             ║ 7 /dev/nvme23n1 online ║      ║║        ║                  ║             ║ 8 /dev/nvme24n1 online ║      ║║        ║                  ║             ║ 9 /dev/nvme25n1 online ║      ║╠════════╬══════════════════╬═════════════╬════════════════════════╬══════╣║ r_ost3 ║ size: 14302 GiB  ║ online      ║ 0 /dev/nvme15n2 online ║      ║║        ║ level: 6         ║ initialized ║ 1 /dev/nvme16n2 online ║      ║║        ║ strip_size: 128  ║             ║ 2 /dev/nvme17n2 online ║      ║║      
  ║ block_size: 4096 ║             ║ 3 /dev/nvme18n2 online ║      ║║        ║ sparepool: -     ║             ║ 4 /dev/nvme20n2 online ║      ║║        ║ active: True     ║             ║ 5 /dev/nvme21n2 online ║      ║║        ║ config: True     ║             ║ 6 /dev/nvme22n2 online ║      ║║        ║                  ║             ║ 7 /dev/nvme23n2 online ║      ║║        ║                  ║             ║ 8 /dev/nvme24n2 online ║      ║║        ║                  ║             ║ 9 /dev/nvme25n2 online ║      ║╚════════╩══════════════════╩═════════════╩════════════════════════╩══════╝
node26# xicli raid unload -n r_mdt0
node26# xicli raid unload -n r_ost0
node26# xicli raid unload -n r_ost1
node26# xicli raid unload -n r_ost2
node26# xicli raid unload -n r_ost3
node26# xicli raid show╔RAIDs═══╦══════════════════╦═══════╦═════════╦══════╗║ name ║ static ║ state ║ devices ║ info ║╠════════╬══════════════════╬═══════╬═════════╬══════╣║ r_mdt0 ║ size: 3576 GiB ║ None ║ ║ ║║ ║ level: 1 ║ ║ ║ ║║ ║ strip_size: 16 ║ ║ ║ ║║ ║ block_size: 4096 ║ ║ ║ ║║ ║ sparepool: - ║ ║ ║ ║║ ║ active: False ║ ║ ║ ║║ ║ config: True ║ ║ ║ ║╠════════╬══════════════════╬═══════╬═════════╬══════╣║ r_ost0 ║ size: 14302 GiB ║ None ║ ║ ║║ ║ level: 6 ║ ║ ║ ║║ ║ strip_size: 128 ║ ║ ║ ║║ ║ block_size: 4096 ║ ║ ║ ║║ ║ sparepool: - ║ ║ ║ ║║ ║ active: False ║ ║ ║ ║║ ║ config: True ║ ║ ║ ║╠════════╬══════════════════╬═══════╬═════════╬══════╣║ r_ost1 ║ size: 14302 GiB ║ None ║ ║ ║║ ║ level: 6 ║ ║ ║ ║║ ║ strip_size: 128 ║ ║ ║ ║║ ║ block_size: 4096 ║ ║ ║ ║║ ║ sparepool: - ║ ║ ║ ║║ ║ active: False ║ ║ ║ ║║ ║ config: True ║ ║ ║ ║╠════════╬══════════════════╬═══════╬═════════╬══════╣║ r_ost2 ║ size: 14302 GiB ║ None ║ ║ ║║ ║ level: 6 ║ ║ ║ ║║ ║ strip_size: 128 ║ ║ ║ ║║ ║ block_size: 4096 ║ ║ ║ ║║ ║ sparepool: - ║ ║ ║ ║║ ║ active: False ║ ║ ║ ║║ ║ config: True ║ ║ ║ ║╠════════╬══════════════════╬═══════╬═════════╬══════╣║ r_ost3 ║ size: 14302 GiB ║ None ║ ║ ║║ ║ level: 6 ║ ║ ║ ║║ ║ strip_size: 128 ║ ║ ║ ║║ ║ block_size: 4096 ║ ║ ║ ║║ ║ sparepool: - ║ ║ ║ ║║ ║ active: False ║ ║ ║ ║║ ║ config: True ║ ║ ║ ║╚════════╩══════════════════╩═══════╩═════════╩══════╝


On the first node, create a copy of the cluster information base (CIB) so we can modify it:


node26# pcs cluster cib fs_cfg
node26# ls -l fs_cfg
-rw-r--r--. 1 root root 8614 7月 20 02:04 fs_cfg


Get the UUIDs of the RAIDs:


node26# grep uuid /etc/xiraid/raids/*.conf
/etc/xiraid/raids/r_mdt0.conf:    "uuid": "75E2CAA5-3E5B-4ED0-89E9-4BF3850FD542",
/etc/xiraid/raids/r_ost0.conf:    "uuid": "AB341442-20AC-43B1-8FE6-F9ED99D1D6C0",
/etc/xiraid/raids/r_ost1.conf:    "uuid": "1441D09C-0073-4555-A398-71984E847F9E",
/etc/xiraid/raids/r_ost2.conf:    "uuid": "0E225812-6877-4344-A552-B6A408EC7351",
/etc/xiraid/raids/r_ost3.conf:    "uuid": "F749B8A7-3CC4-45A9-A61E-E75EDBB3A53E",


Create the resource rr_mdt0 for the r_mdt0 RAID:


node26# pcs -f fs_cfg resource create rr_mdt0 ocf:xraid:raid name=r_mdt0 uuid=75E2CAA5-3E5B-4ED0-89E9-4BF3850FD542 op monitor interval=5s meta migration-threshold=1


Set a constraint so that the first node is preferred for the rr_mdt0 resource:


node26# pcs -f fs_cfg constraint location rr_mdt0 prefers node26-ic=50


Create the resource for the r_mdt0 RAID's mount point /lustre_t/mdt0:


node26# pcs -f fs_cfg resource create fsr_mdt0 Filesystem device="/dev/xi_r_mdt0" directory="/lustre_t/mdt0" fstype="lustre"


Configure the cluster so that rr_mdt0 and fsr_mdt0 always run on the same node:


node26# pcs -f fs_cfg constraint colocation add rr_mdt0 with fsr_mdt0 INFINITY


Configure the cluster so that fsr_mdt0 starts only after rr_mdt0 has started:


node26# pcs -f fs_cfg constraint order rr_mdt0 then fsr_mdt0


Configure the remaining resources in the same way:


node26# pcs -f fs_cfg resource create rr_ost0 ocf:xraid:raid name=r_ost0 uuid=AB341442-20AC-43B1-8FE6-F9ED99D1D6C0 op monitor interval=5s meta migration-threshold=1
node26# pcs -f fs_cfg constraint location rr_ost0 prefers node26-ic=50
node26# pcs -f fs_cfg resource create fsr_ost0 Filesystem device="/dev/xi_r_ost0" directory="/lustre_t/ost0" fstype="lustre"
node26# pcs -f fs_cfg constraint colocation add rr_ost0 with fsr_ost0 INFINITY
node26# pcs -f fs_cfg constraint order rr_ost0 then fsr_ost0

node26# pcs -f fs_cfg resource create rr_ost1 ocf:xraid:raid name=r_ost1 uuid=1441D09C-0073-4555-A398-71984E847F9E op monitor interval=5s meta migration-threshold=1
node26# pcs -f fs_cfg constraint location rr_ost1 prefers node27-ic=50
node26# pcs -f fs_cfg resource create fsr_ost1 Filesystem device="/dev/xi_r_ost1" directory="/lustre_t/ost1" fstype="lustre"
node26# pcs -f fs_cfg constraint colocation add rr_ost1 with fsr_ost1 INFINITY
node26# pcs -f fs_cfg constraint order rr_ost1 then fsr_ost1

node26# pcs -f fs_cfg resource create rr_ost2 ocf:xraid:raid name=r_ost2 uuid=0E225812-6877-4344-A552-B6A408EC7351 op monitor interval=5s meta migration-threshold=1
node26# pcs -f fs_cfg constraint location rr_ost2 prefers node26-ic=50
node26# pcs -f fs_cfg resource create fsr_ost2 Filesystem device="/dev/xi_r_ost2" directory="/lustre_t/ost2" fstype="lustre"
node26# pcs -f fs_cfg constraint colocation add rr_ost2 with fsr_ost2 INFINITY
node26# pcs -f fs_cfg constraint order rr_ost2 then fsr_ost2

node26# pcs -f fs_cfg resource create rr_ost3 ocf:xraid:raid name=r_ost3 uuid=F749B8A7-3CC4-45A9-A61E-E75EDBB3A53E op monitor interval=5s meta migration-threshold=1
node26# pcs -f fs_cfg constraint location rr_ost3 prefers node27-ic=50
node26# pcs -f fs_cfg resource create fsr_ost3 Filesystem device="/dev/xi_r_ost3" directory="/lustre_t/ost3" fstype="lustre"
node26# pcs -f fs_cfg constraint colocation add rr_ost3 with fsr_ost3 INFINITY
node26# pcs -f fs_cfg constraint order rr_ost3 then fsr_ost3


In xiRAID Classic 4.1 we need to make sure that only one RAID starts at a time. To do this, we define the following constraints. This limitation is planned to be removed in xiRAID Classic 4.2.


node26# pcs -f fs_cfg constraint order start rr_mdt0 then start rr_ost0 kind=Serialize
node26# pcs -f fs_cfg constraint order start rr_mdt0 then start rr_ost1 kind=Serialize
node26# pcs -f fs_cfg constraint order start rr_mdt0 then start rr_ost2 kind=Serialize
node26# pcs -f fs_cfg constraint order start rr_mdt0 then start rr_ost3 kind=Serialize
node26# pcs -f fs_cfg constraint order start rr_ost0 then start rr_ost1 kind=Serialize
node26# pcs -f fs_cfg constraint order start rr_ost0 then start rr_ost2 kind=Serialize
node26# pcs -f fs_cfg constraint order start rr_ost0 then start rr_ost3 kind=Serialize
node26# pcs -f fs_cfg constraint order start rr_ost1 then start rr_ost2 kind=Serialize
node26# pcs -f fs_cfg constraint order start rr_ost1 then start rr_ost3 kind=Serialize
node26# pcs -f fs_cfg constraint order start rr_ost2 then start rr_ost3 kind=Serialize


To make sure the Lustre servers start in the right order, we configure the cluster so that the MDS starts before all the OSSes. Since the Linux kernel starts the MDS and OSS services automatically when the LDISKFS file systems are mounted, we only need to set the correct start order for the fsr_* resources:


node26# pcs -f fs_cfg constraint order fsr_mdt0 then fsr_ost0
node26# pcs -f fs_cfg constraint order fsr_mdt0 then fsr_ost1
node26# pcs -f fs_cfg constraint order fsr_mdt0 then fsr_ost2
node26# pcs -f fs_cfg constraint order fsr_mdt0 then fsr_ost3


Apply the batched CIB changes:


node26# pcs cluster cib-push fs_cfg --config


Check the resulting cluster configuration:


node26# pcs status
Cluster name: lustrebox0
Cluster Summary:
  * Stack: corosync (Pacemaker is running)
  * Current DC: node26-ic (version 2.1.7-5.el8_10-0f7f88312) - partition with quorum
  * Last updated: Tue Jul 23 02:14:54 2024 on node26-ic
  * Last change:  Tue Jul 23 02:14:50 2024 by root via root on node26-ic
  * 2 nodes configured
  * 12 resource instances configured

Node List:
 * Online: [ node26-ic node27-ic ]

Full List of Resources:
 * node27.stonith (stonith:fence_ipmilan): Started node26-ic
 * node26.stonith (stonith:fence_ipmilan): Started node27-ic
 * rr_mdt0 (ocf::xraid:raid): Started node26-ic
 * fsr_mdt0 (ocf::heartbeat:Filesystem): Started node26-ic
 * rr_ost0 (ocf::xraid:raid): Started node26-ic
 * fsr_ost0 (ocf::heartbeat:Filesystem): Started node26-ic
 * rr_ost1 (ocf::xraid:raid): Started node27-ic
 * fsr_ost1 (ocf::heartbeat:Filesystem): Started node27-ic
 * rr_ost2 (ocf::xraid:raid): Started node26-ic
 * fsr_ost2 (ocf::heartbeat:Filesystem): Started node26-ic
 * rr_ost3 (ocf::xraid:raid): Started node27-ic
 * fsr_ost3 (ocf::heartbeat:Filesystem): Started node27-ic

Daemon Status:
 corosync: active/disabled
 pacemaker: active/disabled
 pcsd: active/enabled


Double-check on both nodes that the RAIDs are active and the file systems are mounted correctly. Note that all OST RAIDs built on /dev/nvme*n1 are active on the first node (node26), and all OST RAIDs built on /dev/nvme*n2 are active on the second node (node27), which lets us fully use the NVMe throughput as planned.


node26:


node26# xicli raid show╔RAIDs═══╦══════════════════╦═════════════╦════════════════════════╦══════╗║ name   ║ static           ║ state       ║ devices                ║ info ║╠════════╬══════════════════╬═════════════╬════════════════════════╬══════╣║ r_mdt0 ║ size: 3576 GiB   ║ online      ║ 0 /dev/nvme0n1 online  ║      ║║        ║ level: 1         ║ initialized ║ 1 /dev/nvme1n1 online  ║      ║║        ║ strip_size: 16   ║             ║                        ║      ║║        ║ block_size: 4096 ║             ║                        ║      ║║        ║ sparepool: -     ║             ║                        ║      ║║        ║ active: True     ║             ║                        ║      ║║        ║ config: True     ║             ║                        ║      ║╠════════╬══════════════════╬═════════════╬════════════════════════╬══════╣║ r_ost0 ║ size: 14302 GiB  ║ online      ║ 0 /dev/nvme4n1 online  ║      ║║        ║ level: 6         ║ initialized ║ 1 /dev/nvme5n1 online  ║      ║║        ║ strip_size: 128  ║             ║ 2 /dev/nvme6n1 online  ║      ║║        ║ block_size: 4096 ║             ║ 3 /dev/nvme7n1 online  ║      ║║        ║ sparepool: -     ║             ║ 4 /dev/nvme8n1 online  ║      ║║        ║ active: True     ║             ║ 5 /dev/nvme9n1 online  ║      ║║        ║ config: True     ║             ║ 6 /dev/nvme10n1 online ║      ║║        ║                  ║             ║ 7 /dev/nvme11n1 online ║      ║║        ║                  ║             ║ 8 /dev/nvme13n1 online ║      ║║        ║                  ║             ║ 9 /dev/nvme14n1 online ║      ║╠════════╬══════════════════╬═════════════╬════════════════════════╬══════╣║ r_ost1 ║ size: 14302 GiB  ║ None        ║                        ║      ║║        ║ level: 6         ║             ║                        ║      ║║        ║ strip_size: 128  ║             ║                        ║      ║║        ║ block_size: 4096 ║             ║                        ║      ║║        ║ sparepool: -     ║             ║                        ║      ║║        ║ active: False    ║             ║                        ║      ║║        ║ config: True     ║             ║                        ║      ║╠════════╬══════════════════╬═════════════╬════════════════════════╬══════╣║ r_ost2 ║ size: 14302 GiB  ║ online      ║ 0 /dev/nvme15n1 online ║      ║║        ║ level: 6         ║ initialized ║ 1 /dev/nvme16n1 online ║      ║║        ║ strip_size: 128  ║             ║ 2 /dev/nvme17n1 online ║      ║║        ║ block_size: 4096 ║             ║ 3 /dev/nvme18n1 online ║      ║║        ║ sparepool: -     ║             ║ 4 /dev/nvme20n1 online ║      ║║        ║ active: True     ║             ║ 5 /dev/nvme21n1 online ║      ║║        ║ config: True     ║             ║ 6 /dev/nvme22n1 online ║      ║║        ║                  ║             ║ 7 /dev/nvme23n1 online ║      ║║        ║                  ║             ║ 8 /dev/nvme24n1 online ║      ║║        ║                  ║             ║ 9 /dev/nvme25n1 online ║      ║╠════════╬══════════════════╬═════════════╬════════════════════════╬══════╣║ r_ost3 ║ size: 14302 GiB  ║ None        ║                        ║      ║║        ║ level: 6         ║             ║                        ║      ║║        ║ strip_size: 128  ║             ║                        ║      ║║        ║ block_size: 4096 ║             ║                        ║      ║║        ║ sparepool: -     ║             ║                        ║      ║║        ║ active: False    ║             ║                        ║      ║║      
  ║ config: True     ║             ║                        ║      ║╚════════╩══════════════════╩═════════════╩════════════════════════╩══════╝
node26# df -h|grep xi
/dev/xi_r_mdt0  2.1T  5.7M  2.0T  1%  /lustre_t/mdt0
/dev/xi_r_ost0   14T  1.3M   14T  1%  /lustre_t/ost0
/dev/xi_r_ost2   14T  1.3M   14T  1%  /lustre_t/ost2


node27:


node27# xicli raid show╔RAIDs═══╦══════════════════╦═════════════╦════════════════════════╦══════╗║ name   ║ static           ║ state       ║ devices                ║ info ║╠════════╬══════════════════╬═════════════╬════════════════════════╬══════╣║ r_mdt0 ║ size: 3576 GiB   ║ None        ║                        ║      ║║        ║ level: 1         ║             ║                        ║      ║║        ║ strip_size: 16   ║             ║                        ║      ║║        ║ block_size: 4096 ║             ║                        ║      ║║        ║ sparepool: -     ║             ║                        ║      ║║        ║ active: False    ║             ║                        ║      ║║        ║ config: True     ║             ║                        ║      ║╠════════╬══════════════════╬═════════════╬════════════════════════╬══════╣║ r_ost0 ║ size: 14302 GiB  ║ None        ║                        ║      ║║        ║ level: 6         ║             ║                        ║      ║║        ║ strip_size: 128  ║             ║                        ║      ║║        ║ block_size: 4096 ║             ║                        ║      ║║        ║ sparepool: -     ║             ║                        ║      ║║        ║ active: False    ║             ║                        ║      ║║        ║ config: True     ║             ║                        ║      ║╠════════╬══════════════════╬═════════════╬════════════════════════╬══════╣║ r_ost1 ║ size: 14302 GiB  ║ online      ║ 0 /dev/nvme4n2 online  ║      ║║        ║ level: 6         ║ initialized ║ 1 /dev/nvme5n2 online  ║      ║║        ║ strip_size: 128  ║             ║ 2 /dev/nvme6n2 online  ║      ║║        ║ block_size: 4096 ║             ║ 3 /dev/nvme7n2 online  ║      ║║        ║ sparepool: -     ║             ║ 4 /dev/nvme8n2 online  ║      ║║        ║ active: True     ║             ║ 5 /dev/nvme9n2 online  ║      ║║        ║ config: True     ║             ║ 6 /dev/nvme10n2 online ║      ║║        ║                  ║             ║ 7 /dev/nvme11n2 online ║      ║║        ║                  ║             ║ 8 /dev/nvme13n2 online ║      ║║        ║                  ║             ║ 9 /dev/nvme14n2 online ║      ║╠════════╬══════════════════╬═════════════╬════════════════════════╬══════╣║ r_ost2 ║ size: 14302 GiB  ║ None        ║                        ║      ║║        ║ level: 6         ║             ║                        ║      ║║        ║ strip_size: 128  ║             ║                        ║      ║║        ║ block_size: 4096 ║             ║                        ║      ║║        ║ sparepool: -     ║             ║                        ║      ║║        ║ active: False    ║             ║                        ║      ║║        ║ config: True     ║             ║                        ║      ║╠════════╬══════════════════╬═════════════╬════════════════════════╬══════╣║ r_ost3 ║ size: 14302 GiB  ║ online      ║ 0 /dev/nvme15n2 online ║      ║║        ║ level: 6         ║ initialized ║ 1 /dev/nvme16n2 online ║      ║║        ║ strip_size: 128  ║             ║ 2 /dev/nvme17n2 online ║      ║║        ║ block_size: 4096 ║             ║ 3 /dev/nvme18n2 online ║      ║║        ║ sparepool: -     ║             ║ 4 /dev/nvme20n2 online ║      ║║        ║ active: True     ║             ║ 5 /dev/nvme21n2 online ║      ║║        ║ config: True     ║             ║ 6 /dev/nvme22n2 online ║      ║║        ║                  ║             ║ 7 /dev/nvme23n2 online ║      ║║        ║                  ║             ║ 8 /dev/nvme24n2 online ║      ║║      
  ║                  ║             ║ 9 /dev/nvme25n2 online ║      ║╚════════╩══════════════════╩═════════════╩════════════════════════╩══════╝
node27# df -h|grep xi
/dev/xi_r_ost1   14T  1.3M   14T   1% /lustre_t/ost1
/dev/xi_r_ost3   14T  1.3M   14T   1% /lustre_t/ost3
复制代码


Lustre 性能调优


在这里,我们设置一些参数以进行性能优化。所有命令必须在运行 MDS 服务器的主机上执行。


服务器端参数:


# OSTs: 16MB bulk RPCs
node26# lctl set_param -P obdfilter.*.brw_size=16
node26# lctl set_param -P obdfilter.*.precreate_batch=1024
# Clients: 16MB RPCs
node26# lctl set_param -P obdfilter.*.osc.max_pages_per_rpc=4096
node26# lctl set_param -P osc.*.max_pages_per_rpc=4096
# Clients: 128 RPCs in flight
node26# lctl set_param -P mdc.*.max_rpcs_in_flight=128
node26# lctl set_param -P osc.*.max_rpcs_in_flight=128
node26# lctl set_param -P mdc.*.max_mod_rpcs_in_flight=127
# Clients: Disable memory and wire checksums (~20% performance hit)
node26# lctl set_param -P llite.*.checksum_pages=0
node26# lctl set_param -P llite.*.checksums=0
node26# lctl set_param -P osc.*.checksums=0
node26# lctl set_param -P mdc.*.checksums=0
复制代码



这些参数经过优化以实现最佳性能。它们并非通用,可能在某些情况下不是最优的。
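
作为参考,下面给出一个验证这些持久化参数是否生效的示例。用 lctl set_param -P 设置的参数保存在 MGS 上,客户端在挂载文件系统时会自动应用;以下命令中的参数名均取自上文,客户端主机名沿用后文测试环境中的 lclient01,仅作示意:

# 在已挂载文件系统的 Lustre 客户端上检查客户端侧参数
lclient01# lctl get_param osc.*.max_pages_per_rpc osc.*.max_rpcs_in_flight osc.*.checksums
lclient01# lctl get_param mdc.*.max_rpcs_in_flight mdc.*.max_mod_rpcs_in_flight
# 在 OSS(本例中为 node26/node27)上检查服务器侧参数
node26# lctl get_param obdfilter.*.brw_size obdfilter.*.precreate_batch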

测试

测试平台描述

Lustre 客户端系统:


Lustre 客户端系统由 4 台配置相同的服务器组成,均连接到同一台 Mellanox Quantum HDR Edge Switch QM8700 InfiniBand 交换机,SBB 系统的两个节点(集群节点)也连接到这台交换机。Lustre 客户端参数经过调整以获得最佳性能,这类参数修改在 Lustre 社区的高性能测试中已是常见做法。更多详细信息见下表:


| 主机名              | lclient00           | lclient01           | lclient02           | lclient03           |
|---------------------|---------------------|---------------------|---------------------|---------------------|
| CPU                 | AMD EPYC 7502 32 核 | AMD EPYC 7502 32 核 | AMD EPYC 7502 32 核 | AMD EPYC 7502 32 核 |
| 内存                | 256GB               | 256GB               | 256GB               | 256GB               |
| 操作系统驱动器      | INTEL SSDPEKKW256G8 | INTEL SSDPEKKW256G8 | INTEL SSDPEKKW256G8 | INTEL SSDPEKKW256G8 |
| 操作系统            | Rocky Linux 8.7     | Rocky Linux 8.7     | Rocky Linux 8.7     | Rocky Linux 8.7     |
| 管理网卡            | 192.168.65.50       | 192.168.65.52       | 192.168.65.54       | 192.168.65.56       |
| Infiniband LNET HDR | 100.100.100.50      | 100.100.100.52      | 100.100.100.54      | 100.100.100.56      |
复制代码


Lustre 客户端被组合成一个简单的 OpenMPI 集群,并使用标准的并行文件系统测试工具 IOR 进行测试。测试文件创建在 Lustre 文件系统的 /stripe4M 子目录中,该目录使用以下条带化参数创建:


lclient01# mount -t lustre 100.100.100.26@o2ib:100.100.100.27@o2ib:/lustre0 /mnt.l
lclient01# mkdir /mnt.l/stripe4M
lclient01# lfs setstripe -c -1 -S 4M /mnt.l/stripe4M/
复制代码
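
可以用 lfs getstripe 确认该目录的默认条带设置已生效(-c -1 表示跨全部 OST 条带化,-S 4M 对应 4MiB 条带大小;此处仅给出检查命令,输出从略):

lclient01# lfs getstripe -d /mnt.l/stripe4M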

测试结果

我们使用标准的并行文件系统测试工具 IOR 来测量该系统的性能,共运行了 4 组测试。每组测试由分布在 4 个客户端上的 128 个线程发起,各组测试的区别在于传输大小(1MB 或 128MB)以及是否启用 directIO。
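
测试命令中引用的 OpenMPI hostfile(./hfile)原文未给出,下面是一个与该测试规模相匹配的推测示例(4 台客户端、每台 32 个任务,对应 IOR 输出中的 tPN=32),仅作示意:

lclient00 slots=32
lclient01 slots=32
lclient02 slots=32
lclient03 slots=32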

正常状态集群性能

启用 directIO 的测试

以下列表显示了启用 directIO、传输大小为 1MB 的测试命令和结果。


lclient01# /usr/lib64/openmpi/bin/mpirun --allow-run-as-root --hostfile ./hfile -np 128 --map-by node /usr/bin/ior -F -t 1M -b 8G  -k -r -w -o /mnt.l/stripe4M/testfile --posix.odirect
. . .
access bw(MiB/s) IOPS Latency(s) block(KiB) xfer(KiB) open(s) wr/rd(s) close(s) total(s) iter
------ --------- ---- ---------- ---------- --------- -------- -------- -------- -------- ----
write 19005 19005 0.006691 8388608 1024.00 0.008597 55.17 3.92 55.17 0
read 82075 82077 0.001545 8388608 1024.00 0.002592 12.78 0.213460 12.78 0
Max Write: 19005.04 MiB/sec (19928.23 MB/sec)
Max Read: 82075.33 MiB/sec (86062.22 MB/sec)

Summary of all tests:
Operation Max(MiB) Min(MiB) Mean(MiB) StdDev Max(OPs) Min(OPs) Mean(OPs) StdDev Mean(s) Stonewall(s) Stonewall(MiB) Test# #Tasks tPN reps fPP reord reordoff reordrand seed segcnt blksiz xsize aggs(MiB) API RefNum
write 19005.04 19005.04 19005.04 0.00 19005.04 19005.04 19005.04 0.00 55.17357 NA NA 0 128 32 1 1 0 1 0 0 1 8589934592 1048576 1048576.0 POSIX 0
read 82075.33 82075.33 82075.33 0.00 82075.33 82075.33 82075.33 0.00 12.77578 NA NA 0 128 32 1 1 0 1 0 0 1 8589934592 1048576 1048576.0 POSIX 0
复制代码
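
为便于理解测试命令,下面对本文所用的 IOR 参数作简要注释(注释为补充说明,命令本身与上文一致):

# -F               每个进程读写独立文件(file-per-process)
# -t 1M / -t 128M  单次传输大小(transfer size)
# -b 8G            每个任务读写的数据量(block size)
# -k               测试结束后保留测试文件
# -w -r            先执行写测试,再执行读测试
# -o <path>        测试文件路径
# --posix.odirect  在 POSIX 后端使用 O_DIRECT;缓冲 IO 测试省略该参数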


以下列表显示了传输大小为 128MB 且启用 directIO 的测试命令和结果。

lclient01# /usr/lib64/openmpi/bin/mpirun --allow-run-as-root --hostfile ./hfile -np 128 --map-by node /usr/bin/ior -F -t 128M -b 8G  -k -r -w -o /mnt.l/stripe4M/testfile --posix.odirect
. . .
access bw(MiB/s) IOPS Latency(s) block(KiB) xfer(KiB) open(s) wr/rd(s) close(s) total(s) iter
------ --------- ---- ---------- ---------- --------- -------- -------- -------- -------- ----
write 52892 413.23 0.306686 8388608 131072 0.096920 19.82 0.521081 19.82 0
read 70588 551.50 0.229853 8388608 131072 0.002983 14.85 0.723477 14.85 0
Max Write: 52892.27 MiB/sec (55461.56 MB/sec)
Max Read: 70588.32 MiB/sec (74017.22 MB/sec)

Summary of all tests:
Operation Max(MiB) Min(MiB) Mean(MiB) StdDev Max(OPs) Min(OPs) Mean(OPs) StdDev Mean(s) Stonewall(s) Stonewall(MiB) Test# #Tasks tPN reps fPP reord reordoff reordrand seed segcnt blksiz xsize aggs(MiB) API RefNum
write 52892.27 52892.27 52892.27 0.00 413.22 413.22 413.22 0.00 19.82475 NA NA 0 128 32 1 1 0 1 0 0 1 8589934592 134217728 1048576.0 POSIX 0
read 70588.32 70588.32 70588.32 0.00 551.47 551.47 551.47 0.00 14.85481 NA NA 0 128 32 1 1 0 1 0 0 1 8589934592 134217728 1048576.0 POSIX 0
复制代码

禁用 directIO 的测试

以下列表显示了传输大小为 1MB 的缓冲 IO(禁用 directIO)测试的命令和结果。

lclient01#  /usr/lib64/openmpi/bin/mpirun --allow-run-as-root --hostfile ./hfile -np 128 --map-by node /usr/bin/ior -F -t 1M -b 8G  -k -r -w -o /mnt.l/stripe4M/testfile
. . .
access bw(MiB/s) IOPS Latency(s) block(KiB) xfer(KiB) open(s) wr/rd(s) close(s) total(s) iter
------ --------- ---- ---------- ---------- --------- -------- -------- -------- -------- ----
write 48202 48204 0.002587 8388608 1024.00 0.008528 21.75 1.75 21.75 0
read 40960 40960 0.002901 8388608 1024.00 0.002573 25.60 2.39 25.60 0
Max Write: 48202.43 MiB/sec (50543.91 MB/sec)
Max Read: 40959.57 MiB/sec (42949.22 MB/sec)

Summary of all tests:
Operation Max(MiB) Min(MiB) Mean(MiB) StdDev Max(OPs) Min(OPs) Mean(OPs) StdDev Mean(s) Stonewall(s) Stonewall(MiB) Test# #Tasks tPN reps fPP reord reordoff reordrand seed segcnt blksiz xsize aggs(MiB) API RefNum
write 48202.43 48202.43 48202.43 0.00 48202.43 48202.43 48202.43 0.00 21.75359 NA NA 0 128 32 1 1 0 1 0 0 1 8589934592 1048576 1048576.0 POSIX 0
read 40959.57 40959.57 40959.57 0.00 40959.57 40959.57 40959.57 0.00 25.60027 NA NA 0 128 32 1 1 0 1 0 0 1 8589934592 1048576 1048576.0 POSIX 0
复制代码

以下列表显示了传输大小为 128MB 的缓冲 IO(禁用 directIO)测试的命令和结果。

lclient01#  /usr/lib64/openmpi/bin/mpirun --allow-run-as-root --hostfile ./hfile -np 128 --map-by node /usr/bin/ior -F -t 128M -b 8G  -k -r -w -o /mnt.l/stripe4M/testfile
. . .
access bw(MiB/s) IOPS Latency(s) block(KiB) xfer(KiB) open(s) wr/rd(s) close(s) total(s) iter
------ --------- ---- ---------- ---------- --------- -------- -------- -------- -------- ----
write 46315 361.84 0.349582 8388608 131072 0.009255 22.64 2.70 22.64 0
read 39435 308.09 0.368192 8388608 131072 0.002689 26.59 7.65 26.59 0
Max Write: 46314.67 MiB/sec (48564.45 MB/sec)
Max Read: 39434.54 MiB/sec (41350.12 MB/sec)

Summary of all tests:
Operation Max(MiB) Min(MiB) Mean(MiB) StdDev Max(OPs) Min(OPs) Mean(OPs) StdDev Mean(s) Stonewall(s) Stonewall(MiB) Test# #Tasks tPN reps fPP reord reordoff reordrand seed segcnt blksiz xsize aggs(MiB) API RefNum
write 46314.67 46314.67 46314.67 0.00 361.83 361.83 361.83 0.00 22.64026 NA NA 0 128 32 1 1 0 1 0 0 1 8589934592 134217728 1048576.0 POSIX 0
read 39434.54 39434.54 39434.54 0.00 308.08 308.08 308.08 0.00 26.59029 NA NA 0 128 32 1 1 0 1 0 0 1 8589934592 134217728 1048576.0 POSIX 0
复制代码

故障转移行为


为了检查节点故障情况下的集群行为,我们将人为触发一次节点崩溃来模拟此类故障。在进行故障模拟之前,先确认集群处于正常状态:

# pcs statusCluster name: lustrebox0Cluster Summary:  * Stack: corosync (Pacemaker is running)  * Current DC: node27-ic (version 2.1.7-5.el8_10-0f7f88312) - partition with quorum  * Last updated: Tue Aug 13 19:13:23 2024 on node26-ic  * Last change:  Tue Aug 13 19:13:18 2024 by hacluster via hacluster on node27-ic  * 2 nodes configured  * 12 resource instances configured
Node List: * Online: [ node26-ic node27-ic ]
Full List of Resources: * rr_mdt0 (ocf::xraid:raid): Started node26-ic * fsr_mdt0 (ocf::heartbeat:Filesystem): Started node26-ic * rr_ost0 (ocf::xraid:raid): Started node26-ic * fsr_ost0 (ocf::heartbeat:Filesystem): Started node26-ic * rr_ost1 (ocf::xraid:raid): Started node27-ic * fsr_ost1 (ocf::heartbeat:Filesystem): Started node27-ic * rr_ost2 (ocf::xraid:raid): Started node26-ic * fsr_ost2 (ocf::heartbeat:Filesystem): Started node26-ic * rr_ost3 (ocf::xraid:raid): Started node27-ic * fsr_ost3 (ocf::heartbeat:Filesystem): Started node27-ic * node27.stonith (stonith:fence_ipmilan): Started node26-ic * node26.stonith (stonith:fence_ipmilan): Started node27-ic
Daemon Status: corosync: active/disabled pacemaker: active/disabled pcsd: active/enabled
复制代码


现在让我们执行 node26 的崩溃:

node26# echo c > /proc/sysrq-trigger
复制代码
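
崩溃触发后,可以在存活节点 node27 上用 crm_mon 观察故障检测、隔离与资源接管的全过程(-r 显示全部资源,-1 表示只输出一次快照;去掉 -1 则持续刷新):

node27# crm_mon -r -1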


此时,node27 检测到 node26 无响应,并开始准备对其进行隔离(fence)。


node27# pcs statusCluster name: lustrebox0Cluster Summary:  * Stack: corosync (Pacemaker is running)  * Current DC: node27-ic (version 2.1.7-5.el8_10-0f7f88312) - partition with quorum  * Last updated: Fri Aug 30 00:55:04 2024 on node27-ic  * Last change:  Thu Aug 29 01:26:09 2024 by root via root on node26-ic  * 2 nodes configured  * 12 resource instances configured
Node List: * Node node26-ic: UNCLEAN (offline) * Online: [ node27-ic ]
Full List of Resources: * rr_mdt0 (ocf::xraid:raid): Started node26-ic (UNCLEAN) * fsr_mdt0 (ocf::heartbeat:Filesystem): Started node26-ic (UNCLEAN) * rr_ost0 (ocf::xraid:raid): Started node26-ic (UNCLEAN) * fsr_ost0 (ocf::heartbeat:Filesystem): Started node26-ic (UNCLEAN) * rr_ost1 (ocf::xraid:raid): Started node27-ic * fsr_ost1 (ocf::heartbeat:Filesystem): Stopped * rr_ost2 (ocf::xraid:raid): Started node26-ic (UNCLEAN) * fsr_ost2 (ocf::heartbeat:Filesystem): Started node26-ic (UNCLEAN) * rr_ost3 (ocf::xraid:raid): Started node27-ic * fsr_ost3 (ocf::heartbeat:Filesystem): Stopping node27-ic * node27.stonith (stonith:fence_ipmilan): Started node26-ic (UNCLEAN) * node26.stonith (stonith:fence_ipmilan): Started node27-ic
Pending Fencing Actions: * reboot of node26-ic pending: client=pacemaker-controld.286449, origin=node27-ic
Daemon Status: corosync: active/disabled pacemaker: active/disabled pcsd: active/enabled
复制代码

成功隔离 node26 之后,所有集群资源都已迁移到 node27 并恢复在线运行。


在实验过程中,集群大约花费了 1 分钟 50 秒来检测 node26 的离线状态、对其进行隔离,并在幸存的 node27 上按正确的顺序启动所有服务。
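
如需确认隔离动作的发起时间与执行结果,可以查看 STONITH 历史记录(pcs 0.10 及以上版本提供该子命令,此处仅作检查方法示例):

node27# pcs stonith history show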

node27# pcs statusCluster name: lustrebox0Cluster Summary:  * Stack: corosync (Pacemaker is running)  * Current DC: node27-ic (version 2.1.7-5.el8_10-0f7f88312) - partition with quorum  * Last updated: Fri Aug 30 00:56:30 2024 on node27-ic  * Last change:  Thu Aug 29 01:26:09 2024 by root via root on node26-ic  * 2 nodes configured  * 12 resource instances configured
Node List: * Online: [ node27-ic ] * OFFLINE: [ node26-ic ]
Full List of Resources: * rr_mdt0 (ocf::xraid:raid): Started node27-ic * fsr_mdt0 (ocf::heartbeat:Filesystem): Started node27-ic * rr_ost0 (ocf::xraid:raid): Started node27-ic * fsr_ost0 (ocf::heartbeat:Filesystem): Starting node27-ic * rr_ost1 (ocf::xraid:raid): Started node27-ic * fsr_ost1 (ocf::heartbeat:Filesystem): Started node27-ic * rr_ost2 (ocf::xraid:raid): Started node27-ic * fsr_ost2 (ocf::heartbeat:Filesystem): Starting node27-ic * rr_ost3 (ocf::xraid:raid): Started node27-ic * fsr_ost3 (ocf::heartbeat:Filesystem): Started node27-ic * node27.stonith (stonith:fence_ipmilan): Stopped * node26.stonith (stonith:fence_ipmilan): Started node27-ic
Daemon Status: corosync: active/disabled pacemaker: active/disabled pcsd: active/enabled
复制代码


由于 node26 没有正常关闭,迁移到 node27 的 RAID 正在重新初始化,以消除写洞(write hole)风险。这是预期的行为:

node27# xicli raid show╔RAIDs═══╦══════════════════╦═════════════╦════════════════════════╦═══════════════════╗║ name   ║ static           ║ state       ║ devices                ║ info              ║╠════════╬══════════════════╬═════════════╬════════════════════════╬═══════════════════╣║ r_mdt0 ║ size: 3576 GiB   ║ online      ║ 0 /dev/nvme0n1 online  ║                   ║║        ║ level: 1         ║ initialized ║ 1 /dev/nvme1n1 online  ║                   ║║        ║ strip_size: 16   ║             ║                        ║                   ║║        ║ block_size: 4096 ║             ║                        ║                   ║║        ║ sparepool: -     ║             ║                        ║                   ║║        ║ active: True     ║             ║                        ║                   ║║        ║ config: True     ║             ║                        ║                   ║╠════════╬══════════════════╬═════════════╬════════════════════════╬═══════════════════╣║ r_ost0 ║ size: 14302 GiB  ║ online      ║ 0 /dev/nvme4n1 online  ║ init_progress: 31 ║║        ║ level: 6         ║ initing     ║ 1 /dev/nvme5n1 online  ║                   ║║        ║ strip_size: 128  ║             ║ 2 /dev/nvme6n1 online  ║                   ║║        ║ block_size: 4096 ║             ║ 3 /dev/nvme7n1 online  ║                   ║║        ║ sparepool: -     ║             ║ 4 /dev/nvme8n1 online  ║                   ║║        ║ active: True     ║             ║ 5 /dev/nvme9n1 online  ║                   ║║        ║ config: True     ║             ║ 6 /dev/nvme10n1 online ║                   ║║        ║                  ║             ║ 7 /dev/nvme11n1 online ║                   ║║        ║                  ║             ║ 8 /dev/nvme13n1 online ║                   ║║        ║                  ║             ║ 9 /dev/nvme14n1 online ║                   ║╠════════╬══════════════════╬═════════════╬════════════════════════╬═══════════════════╣║ r_ost1 ║ size: 14302 GiB  ║ online      ║ 0 /dev/nvme4n2 online  ║                   ║║        ║ level: 6         ║ initialized ║ 1 /dev/nvme5n2 online  ║                   ║║        ║ strip_size: 128  ║             ║ 2 /dev/nvme6n2 online  ║                   ║║        ║ block_size: 4096 ║             ║ 3 /dev/nvme7n2 online  ║                   ║║        ║ sparepool: -     ║             ║ 4 /dev/nvme8n2 online  ║                   ║║        ║ active: True     ║             ║ 5 /dev/nvme9n2 online  ║                   ║║        ║ config: True     ║             ║ 6 /dev/nvme10n2 online ║                   ║║        ║                  ║             ║ 7 /dev/nvme11n2 online ║                   ║║        ║                  ║             ║ 8 /dev/nvme13n2 online ║                   ║║        ║                  ║             ║ 9 /dev/nvme14n2 online ║                   ║╠════════╬══════════════════╬═════════════╬════════════════════════╬═══════════════════╣║ r_ost2 ║ size: 14302 GiB  ║ online      ║ 0 /dev/nvme15n1 online ║ init_progress: 29 ║║        ║ level: 6         ║ initing     ║ 1 /dev/nvme16n1 online ║                   ║║        ║ strip_size: 128  ║             ║ 2 /dev/nvme17n1 online ║                   ║║        ║ block_size: 4096 ║             ║ 3 /dev/nvme18n1 online ║                   ║║        ║ sparepool: -     ║             ║ 4 /dev/nvme20n1 online ║                   ║║        ║ active: True     ║             ║ 5 /dev/nvme21n1 online ║                   ║║        ║ config: True     ║             ║ 6 /dev/nvme22n1 online ║                   ║║        ║  
                ║             ║ 7 /dev/nvme23n1 online ║                   ║║        ║                  ║             ║ 8 /dev/nvme24n1 online ║                   ║║        ║                  ║             ║ 9 /dev/nvme25n1 online ║                   ║╠════════╬══════════════════╬═════════════╬════════════════════════╬═══════════════════╣║ r_ost3 ║ size: 14302 GiB  ║ online      ║ 0 /dev/nvme15n2 online ║                   ║║        ║ level: 6         ║ initialized ║ 1 /dev/nvme16n2 online ║                   ║║        ║ strip_size: 128  ║             ║ 2 /dev/nvme17n2 online ║                   ║║        ║ block_size: 4096 ║             ║ 3 /dev/nvme18n2 online ║                   ║║        ║ sparepool: -     ║             ║ 4 /dev/nvme20n2 online ║                   ║║        ║ active: True     ║             ║ 5 /dev/nvme21n2 online ║                   ║║        ║ config: True     ║             ║ 6 /dev/nvme22n2 online ║                   ║║        ║                  ║             ║ 7 /dev/nvme23n2 online ║                   ║║        ║                  ║             ║ 8 /dev/nvme24n2 online ║                   ║║        ║                  ║             ║ 9 /dev/nvme25n2 online ║                   ║╚════════╩══════════════════╩═════════════╩════════════════════════╩═══════════════════╝
复制代码

故障转移状态下的集群性能

现在,所有 Lustre 文件系统服务器都在幸存的节点上运行。在此配置中,我们预计性能会减半,因为所有通信现在都只能通过一台服务器进行。这种情况下的其他瓶颈包括:


  • NVMe 性能下降:由于只有一台服务器在运行,所有工作负载都只能通过每块双端口 NVMe 驱动器面向该节点的 2 条 PCIe 通道传输(可用本列表后的示例检查链路宽度);

  • CPU 不足

  • 内存不足
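
针对第一点,可以用 lspci 检查 NVMe 控制器在本节点上协商到的 PCIe 链路宽度。以下为通用示例命令,PCI 地址(<BDF>)因平台而异;在这类双端口 SBB 平台上,每个节点通常只会看到每块盘 x2 的链路:

# 列出所有 NVMe 控制器(PCI class 0108)
node27# lspci -d ::0108
# 查看某个控制器协商到的链路状态(将 <BDF> 替换为上一步输出中的地址)
node27# lspci -s <BDF> -vv | grep -i LnkSta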

启用 directIO 的测试

以下列表显示了在仅有一个节点工作的系统上,启用 directIO 并使用 1MB 传输大小的测试命令和结果。


lclient01# /usr/lib64/openmpi/bin/mpirun --allow-run-as-root --hostfile ./hfile -np 128 --map-by node /usr/bin/ior -F -t 1M -b 8G  -k -r -w -o /mnt.l/stripe4M/testfile --posix.odirect
. . .
access bw(MiB/s) IOPS Latency(s) block(KiB) xfer(KiB) open(s) wr/rd(s) close(s) total(s) iter
------ --------- ---- ---------- ---------- --------- -------- -------- -------- -------- ----
write 17185 17185 0.007389 8388608 1024.00 0.012074 61.02 2.86 61.02 0
read 45619 45620 0.002803 8388608 1024.00 0.003000 22.99 0.590771 22.99 0
Max Write: 17185.06 MiB/sec (18019.84 MB/sec)
Max Read: 45619.10 MiB/sec (47835.10 MB/sec)

Summary of all tests:
Operation Max(MiB) Min(MiB) Mean(MiB) StdDev Max(OPs) Min(OPs) Mean(OPs) StdDev Mean(s) Stonewall(s) Stonewall(MiB) Test# #Tasks tPN reps fPP reord reordoff reordrand seed segcnt blksiz xsize aggs(MiB) API RefNum
write 17185.06 17185.06 17185.06 0.00 17185.06 17185.06 17185.06 0.00 61.01671 NA NA 0 128 32 1 1 0 1 0 0 1 8589934592 1048576 1048576.0 POSIX 0
read 45619.10 45619.10 45619.10 0.00 45619.10 45619.10 45619.10 0.00 22.98546 NA NA 0 128 32 1 1 0 1 0 0 1 8589934592 1048576 1048576.0 POSIX 0
复制代码

以下列表显示了在仅有一个节点工作的系统上,启用 directIO 且传输大小为 128MB 的测试命令和结果。

lclient01# /usr/lib64/openmpi/bin/mpirun --allow-run-as-root --hostfile ./hfile -np 128 --map-by node /usr/bin/ior -F -t 128M -b 8G  -k -r -w -o /mnt.l/stripe4M/testfile --posix.odirect
. . .
access bw(MiB/s) IOPS Latency(s) block(KiB) xfer(KiB) open(s) wr/rd(s) close(s) total(s) iter
------ --------- ---- ---------- ---------- --------- -------- -------- -------- -------- ----
write 30129 235.39 0.524655 8388608 131072 0.798392 34.80 1.64 34.80 0
read 35731 279.15 0.455215 8388608 131072 0.002234 29.35 2.37 29.35 0
Max Write: 30129.26 MiB/sec (31592.82 MB/sec)
Max Read: 35730.91 MiB/sec (37466.57 MB/sec)

Summary of all tests:
Operation Max(MiB) Min(MiB) Mean(MiB) StdDev Max(OPs) Min(OPs) Mean(OPs) StdDev Mean(s) Stonewall(s) Stonewall(MiB) Test# #Tasks tPN reps fPP reord reordoff reordrand seed segcnt blksiz xsize aggs(MiB) API RefNum
write 30129.26 30129.26 30129.26 0.00 235.38 235.38 235.38 0.00 34.80258 NA NA 0 128 32 1 1 0 1 0 0 1 8589934592 134217728 1048576.0 POSIX 0
read 35730.91 35730.91 35730.91 0.00 279.15 279.15 279.15 0.00 29.34647 NA NA 0 128 32 1 1 0 1 0 0 1 8589934592 134217728 1048576.0 POSIX 0
复制代码

禁用 directIO 的测试


以下列表显示了在仅有一个节点工作的系统上,使用 1MB 传输大小进行缓冲 IO(禁用 directIO)测试的命令和结果。

lclient01#  /usr/lib64/openmpi/bin/mpirun --allow-run-as-root --hostfile ./hfile -np 128 --map-by node /usr/bin/ior -F -t 1M -b 8G  -k -r -w -o /mnt.l/stripe4M/testfile
. . .
access bw(MiB/s) IOPS Latency(s) block(KiB) xfer(KiB) open(s) wr/rd(s) close(s) total(s) iter
------ --------- ---- ---------- ---------- --------- -------- -------- -------- -------- ----
write 30967 31042 0.004072 8388608 1024.00 0.008509 33.78 7.55 33.86 0
read 38440 38441 0.003291 8388608 1024.00 0.282087 27.28 8.22 27.28 0
Max Write: 30966.96 MiB/sec (32471.21 MB/sec)
Max Read: 38440.06 MiB/sec (40307.32 MB/sec)

Summary of all tests:
Operation Max(MiB) Min(MiB) Mean(MiB) StdDev Max(OPs) Min(OPs) Mean(OPs) StdDev Mean(s) Stonewall(s) Stonewall(MiB) Test# #Tasks tPN reps fPP reord reordoff reordrand seed segcnt blksiz xsize aggs(MiB) API RefNum
write 30966.96 30966.96 30966.96 0.00 30966.96 30966.96 30966.96 0.00 33.86112 NA NA 0 128 32 1 1 0 1 0 0 1 8589934592 1048576 1048576.0 POSIX 0
read 38440.06 38440.06 38440.06 0.00 38440.06 38440.06 38440.06 0.00 27.27821 NA NA 0 128 32 1 1 0 1 0 0 1 8589934592 1048576 1048576.0 POSIX 0
Finished : Thu Sep 12 03:18:41 2024
复制代码

以下列表显示了在仅有一个节点工作的系统上,使用 128MB 传输大小进行缓冲 IO(禁用 directIO)测试的命令和结果。

lclient01#  /usr/lib64/openmpi/bin/mpirun --allow-run-as-root --hostfile ./hfile -np 128 --map-by node /usr/bin/ior -F -t 128M -b 8G  -k -r -w -o /mnt.l/stripe4M/testfile
. . .
access bw(MiB/s) IOPS Latency(s) block(KiB) xfer(KiB) open(s) wr/rd(s) close(s) total(s) iter
------ --------- ---- ---------- ---------- --------- -------- -------- -------- -------- ----
write 30728 240.72 0.515679 8388608 131072 0.010178 34.03 8.70 34.12 0
read 35974 281.05 0.386365 8388608 131072 0.067996 29.15 10.73 29.15 0
Max Write: 30727.85 MiB/sec (32220.49 MB/sec)
Max Read: 35974.24 MiB/sec (37721.72 MB/sec)

Summary of all tests:
Operation Max(MiB) Min(MiB) Mean(MiB) StdDev Max(OPs) Min(OPs) Mean(OPs) StdDev Mean(s) Stonewall(s) Stonewall(MiB) Test# #Tasks tPN reps fPP reord reordoff reordrand seed segcnt blksiz xsize aggs(MiB) API RefNum
write 30727.85 30727.85 30727.85 0.00 240.06 240.06 240.06 0.00 34.12461 NA NA 0 128 32 1 1 0 1 0 0 1 8589934592 134217728 1048576.0 POSIX 0
read 35974.24 35974.24 35974.24 0.00 281.05 281.05 281.05 0.00 29.14797 NA NA 0 128 32 1 1 0 1 0 0 1 8589934592 134217728 1048576.0 POSIX 0
复制代码

故障恢复

与此同时,node26 在崩溃后已重新启动。在我们的配置中,集群软件不会自动启动。

node26# pcs status
Error: error running crm_mon, is pacemaker running?
crm_mon: Connection to cluster failed: Connection refused
复制代码

这在实际运维中可能很有用:在将节点重新加入集群之前,管理员应先定位并解决导致故障的问题,以防止其再次发生。
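
前文 pcs status 输出中的 "corosync: active/disabled、pacemaker: active/disabled" 正对应这种配置:集群服务未被设置为随系统自启。是否自启可以用以下命令控制(示例):

# 保持当前行为:节点重启后需要手动执行 pcs cluster start
node26# pcs cluster disable --all
# 如果希望节点重启后自动重新加入集群,则改为:
node26# pcs cluster enable --all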


集群软件在 node27 上正常运行:

node27# pcs statusCluster name: lustrebox0Cluster Summary:  * Stack: corosync (Pacemaker is running)  * Current DC: node27-ic (version 2.1.7-5.el8_10-0f7f88312) - partition with quorum  * Last updated: Sat Aug 31 01:13:57 2024 on node27-ic  * Last change:  Thu Aug 29 01:26:09 2024 by root via root on node26-ic  * 2 nodes configured  * 12 resource instances configured
Node List: * Online: [ node27-ic ] * OFFLINE: [ node26-ic ]
Full List of Resources: * rr_mdt0 (ocf::xraid:raid): Started node27-ic * fsr_mdt0 (ocf::heartbeat:Filesystem): Started node27-ic * rr_ost0 (ocf::xraid:raid): Started node27-ic * fsr_ost0 (ocf::heartbeat:Filesystem): Started node27-ic * rr_ost1 (ocf::xraid:raid): Started node27-ic * fsr_ost1 (ocf::heartbeat:Filesystem): Started node27-ic * rr_ost2 (ocf::xraid:raid): Started node27-ic * fsr_ost2 (ocf::heartbeat:Filesystem): Started node27-ic * rr_ost3 (ocf::xraid:raid): Started node27-ic * fsr_ost3 (ocf::heartbeat:Filesystem): Started node27-ic * node27.stonith (stonith:fence_ipmilan): Stopped * node26.stonith (stonith:fence_ipmilan): Started node27-ic
Daemon Status: corosync: active/disabled pacemaker: active/disabled pcsd: active/enabled
复制代码

由于我们已经知道 node26 崩溃的原因,我们在该节点上启动集群软件:


node26# pcs cluster start
Starting Cluster...
复制代码


一段时间后,集群软件启动,并且本应在 node26 上运行的资源会从 node27 正确地迁移回 node26。故障恢复过程大约耗时 30 秒。

node26# pcs statusCluster name: lustrebox0Cluster Summary:  * Stack: corosync (Pacemaker is running)  * Current DC: node27-ic (version 2.1.7-5.el8_10-0f7f88312) - partition with quorum  * Last updated: Sat Aug 31 01:15:03 2024 on node26-ic  * Last change:  Thu Aug 29 01:26:09 2024 by root via root on node26-ic  * 2 nodes configured  * 12 resource instances configured
Node List: * Online: [ node26-ic node27-ic ]
Full List of Resources: * rr_mdt0 (ocf::xraid:raid): Started node26-ic * fsr_mdt0 (ocf::heartbeat:Filesystem): Started node26-ic * rr_ost0 (ocf::xraid:raid): Started node26-ic * fsr_ost0 (ocf::heartbeat:Filesystem): Started node26-ic * rr_ost1 (ocf::xraid:raid): Started node27-ic * fsr_ost1 (ocf::heartbeat:Filesystem): Started node27-ic * rr_ost2 (ocf::xraid:raid): Started node26-ic * fsr_ost2 (ocf::heartbeat:Filesystem): Started node26-ic * rr_ost3 (ocf::xraid:raid): Started node27-ic * fsr_ost3 (ocf::heartbeat:Filesystem): Started node27-ic * node27.stonith (stonith:fence_ipmilan): Started node26-ic * node26.stonith (stonith:fence_ipmilan): Started node27-ic
Daemon Status: corosync: active/disabled pacemaker: active/disabled pcsd: active/enabled
复制代码


node27# pcs statusCluster name: lustrebox0Cluster Summary:  * Stack: corosync (Pacemaker is running)  * Current DC: node27-ic (version 2.1.7-5.el8_10-0f7f88312) - partition with quorum  * Last updated: Sat Aug 31 01:15:40 2024 on node27-ic  * Last change:  Thu Aug 29 01:26:09 2024 by root via root on node26-ic  * 2 nodes configured  * 12 resource instances configured
Node List: * Online: [ node26-ic node27-ic ]
Full List of Resources: * rr_mdt0 (ocf::xraid:raid): Started node26-ic * fsr_mdt0 (ocf::heartbeat:Filesystem): Started node26-ic * rr_ost0 (ocf::xraid:raid): Started node26-ic * fsr_ost0 (ocf::heartbeat:Filesystem): Started node26-ic * rr_ost1 (ocf::xraid:raid): Started node27-ic * fsr_ost1 (ocf::heartbeat:Filesystem): Started node27-ic * rr_ost2 (ocf::xraid:raid): Started node26-ic * fsr_ost2 (ocf::heartbeat:Filesystem): Started node26-ic * rr_ost3 (ocf::xraid:raid): Started node27-ic * fsr_ost3 (ocf::heartbeat:Filesystem): Started node27-ic * node27.stonith (stonith:fence_ipmilan): Started node26-ic * node26.stonith (stonith:fence_ipmilan): Started node27-ic
Daemon Status: corosync: active/disabled pacemaker: active/disabled pcsd: active/enabled
复制代码
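
资源之所以会自动回迁到 node26,说明集群中为各资源配置的位置约束分值高于(或根本未设置)resource-stickiness。可以用以下命令检查当前的约束与资源默认值,输出取决于前文的集群配置,此处仅给出检查方法:

node26# pcs constraint
node26# pcs resource defaults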

结论

本文展示了基于 SBB 系统(配备双端口 NVMe 驱动器和 xiRAID Classic 4.1 RAID 引擎)创建小型、高可用性和高性能 Lustre 文件系统的可能性。同时,还演示了 xiRAID Classic 与 Pacemaker 集群集成的便捷性,以及其与 Lustre 经典集群方法的兼容性。


该配置非常简单,需要安装并正确配置以下软件组件:


  • xiRAID Classic 4.1 和 Csync2

  • Lustre 软件

  • Pacemaker 软件


最终构建的系统基于 Viking VDS2249R SBB 系统,配备了两台单 CPU 服务器和 24 块 PCIe 4.0 NVMe 驱动器。使用标准并行文件系统测试程序 IOR,该系统在 Lustre 客户端上实现了高达 55GB/s 的写入性能和高达 86GB/s 的读取性能。


本文稍作修改后,也可用于设置额外的系统以扩展现有的 Lustre 安装。


如需更多信息,请阅读原始博客文章。
