
TiDB DM Features in Practice

  • 2024-03-15, Beijing

Author: paulli    Original source: https://tidb.net/blog/272ae2f3

1. DM Principles and Use Cases

How DM works

DM provides basic features such as table routing, block & allow table lists, and binlog event filters, which cover different migration scenarios. Before migrating, it is recommended to review these features and select and configure them according to your needs.

Table routing

Table routing migrates specific tables from the upstream MySQL/MariaDB instances to specified tables downstream. It is also the core feature required for merging and migrating sharded schemas and tables.
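To make this concrete, a routing rule is a pattern match on upstream schema/table names plus a downstream target. The sketch below is illustrative only; the rule name and database/table names are made up and not part of this deployment:

```yaml
routes:
  sharding-route-rule:             # hypothetical rule name
    schema-pattern: "user_db_*"    # matches upstream shards user_db_1, user_db_2, ...
    table-pattern: "orders_*"      # matches sharded tables orders_0, orders_1, ...
    target-schema: "user_db"       # all matched tables are written to
    target-table: "orders"         # a single merged downstream table
```

With a rule like this, rows from every matching upstream shard land in one downstream table, which is the basis of shard-merge migration.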

Block & Allow Table Lists

Block and allow lists are filter rules on the tables of the upstream database instances. They can be used to filter out, or migrate only, all operations on specific databases/tables.
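As a minimal sketch (the rule name and database names are made up for illustration), a block & allow rule is declared once and then referenced by tasks:

```yaml
block-allow-list:
  bw-rule-1:                       # hypothetical rule name
    do-dbs: ["app_db"]             # replicate only this database
    ignore-tables:
      - db-name: "app_db"
        tbl-name: "tmp_*"          # but skip its temporary tables
```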

Binlog event filter

Binlog event filter offers finer-grained filtering than the migration block and allow lists. It lets you migrate only, or filter out, specific types of binlog events for specific schemas/tables, such as INSERT or TRUNCATE TABLE.
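For example, a filter rule that drops TRUNCATE TABLE events while letting everything else through might look like the sketch below (the rule name and patterns are illustrative, not from this deployment):

```yaml
filters:
  ignore-truncate-rule:            # hypothetical rule name
    schema-pattern: "app_db"
    table-pattern: "orders_*"
    events: ["truncate table"]     # match only TRUNCATE TABLE events
    action: Ignore                 # filter them out instead of replicating
```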

DM use cases

1. Replicating data from MySQL to a TiDB cluster; a common scenario is merging multiple sharded MySQL databases into one TiDB cluster.


2. Replicating data from one TiDB cluster to another TiDB cluster.

2. Deploying the DM Environment

Configure and deploy DM

[tidb@tidb53 paul]$ cat dm.yaml
# The topology template is used to deploy a minimal DM cluster, which is suitable
# for scenarios with only three machines. The minimal cluster contains
# - 3 master nodes
# - 3 worker nodes
# You can change the hosts according to your environment
---
global:
  user: "tidb"
  ssh_port: 22
  deploy_dir: "/data1/tidb-deploy"
  data_dir: "/data1/tidb-data"
  # arch: "amd64"

server_configs:
  master:
    log-level: info
    # rpc-timeout: "30s"
    # rpc-rate-limit: 10.0
    # rpc-rate-burst: 40
  worker:
    log-level: info

master_servers:
  - host: 172.20.12.52
    # name: master1
    ssh_port: 22
    port: 18261
    peer_port: 18291
    deploy_dir: "/data1/dm-deploy/dm-master-18261"
    data_dir: "/data1/dm-data/dm-master-18261"
    log_dir: "/data1/dm-deploy/dm-master-18261/log"
  - host: 172.20.12.53
    # name: master2
    ssh_port: 22
    port: 18261
    peer_port: 18291
    deploy_dir: "/data1/dm-deploy/dm-master-18261"
    data_dir: "/data1/dm-data/dm-master-18261"
    log_dir: "/data1/dm-deploy/dm-master-18261/log"
  - host: 172.20.12.70
    # name: master3
    ssh_port: 22
    port: 18261
    peer_port: 18291
    deploy_dir: "/data1/dm-deploy/dm-master-18261"
    data_dir: "/data1/dm-data/dm-master-18261"
    log_dir: "/data1/dm-deploy/dm-master-18261/log"

worker_servers:
  - host: 172.20.12.52
    ssh_port: 22
    port: 18262
    deploy_dir: "/data1/dm-deploy/dm-worker-18262"
    log_dir: "/data1/dm-deploy/dm-worker-18262/log"
    config:
      log-level: info
  - host: 172.20.12.53
    ssh_port: 22
    port: 18262
    deploy_dir: "/data1/dm-deploy/dm-worker-18262"
    log_dir: "/data1/dm-deploy/dm-worker-18262/log"
    config:
      log-level: info
  - host: 172.20.12.70
    ssh_port: 22
    port: 18262
    deploy_dir: "/data1/dm-deploy/dm-worker-18262"
    log_dir: "/data1/dm-deploy/dm-worker-18262/log"
    config:
      log-level: info

monitoring_servers:
  - host: 172.20.12.53
    ssh_port: 22
    port: 19999
    deploy_dir: "/data1/tidb-deploy/prometheus-19999"
    data_dir: "/data1/tidb-data/prometheus-19999"
    log_dir: "/data1/tidb-deploy/prometheus-19999/log"

grafana_servers:
  - host: 172.20.12.53
    port: 19998
    deploy_dir: /data1/tidb-deploy/grafana-19998

alertmanager_servers:
  - host: 172.20.12.53
    ssh_port: 22
    web_port: 19997
    cluster_port: 19996
    deploy_dir: "/data1/tidb-deploy/alertmanager-19997"
    data_dir: "/data1/tidb-data/alertmanager-19997"
    log_dir: "/data1/tidb-deploy/alertmanager-19997/log"

// Deploy DM
[tidb@tidb53 paul]$ tiup dm deploy paul_dm v6.5.1 ./dm.yaml
Checking updates for component dm... Timedout (after 2s)
Starting component dm: /home/tidb/.tiup/components/dm/v1.14.1/tiup-dm deploy paul_dm v6.5.1 ./dm.yaml
+ Detect CPU Arch Name
  - Detecting node 172.20.12.52 Arch info ... Done
  - Detecting node 172.20.12.53 Arch info ... Done
  - Detecting node 172.20.12.70 Arch info ... Done
+ Detect CPU OS Name
  - Detecting node 172.20.12.52 OS info ... Done
  - Detecting node 172.20.12.53 OS info ... Done
  - Detecting node 172.20.12.70 OS info ... Done
Please confirm your topology:
Cluster type:    dm
Cluster name:    paul_dm
Cluster version: v6.5.1
Role          Host          Ports        OS/Arch       Directories
----          ----          -----        -------       -----------
dm-master     172.20.12.52  18261/18291  linux/x86_64  /data1/dm-deploy/dm-master-18261,/data1/dm-data/dm-master-18261
dm-master     172.20.12.53  18261/18291  linux/x86_64  /data1/dm-deploy/dm-master-18261,/data1/dm-data/dm-master-18261
dm-master     172.20.12.70  18261/18291  linux/x86_64  /data1/dm-deploy/dm-master-18261,/data1/dm-data/dm-master-18261
dm-worker     172.20.12.52  18262        linux/x86_64  /data1/dm-deploy/dm-worker-18262,/data1/tidb-data/dm-worker-18262
dm-worker     172.20.12.53  18262        linux/x86_64  /data1/dm-deploy/dm-worker-18262,/data1/tidb-data/dm-worker-18262
dm-worker     172.20.12.70  18262        linux/x86_64  /data1/dm-deploy/dm-worker-18262,/data1/tidb-data/dm-worker-18262
prometheus    172.20.12.53  19999        linux/x86_64  /data1/tidb-deploy/prometheus-19999,/data1/tidb-data/prometheus-19999
grafana       172.20.12.53  19998        linux/x86_64  /data1/tidb-deploy/grafana-19998
alertmanager  172.20.12.53  19997/19996  linux/x86_64  /data1/tidb-deploy/alertmanager-19997,/data1/tidb-data/alertmanager-19997
Attention:
  1. If the topology is not what you expected, check your yaml file.
  2. Please confirm there is no port/directory conflicts in same host.
Do you want to continue? [y/N]: (default=N) y
....
[tidb@tidb53 paul]$ tiup dm display paul_dm
Checking updates for component dm... Timedout (after 2s)
Starting component dm: /home/tidb/.tiup/components/dm/v1.14.1/tiup-dm display paul_dm
Cluster type:    dm
Cluster name:    paul_dm
Cluster version: v6.5.1
Deploy user:     tidb
SSH type:        builtin
Grafana URL:     http://172.20.12.53:19998
ID                  Role          Host          Ports        OS/Arch       Status     Data Dir                             Deploy Dir
--                  ----          ----          -----        -------       ------     --------                             ----------
172.20.12.53:19997  alertmanager  172.20.12.53  19997/19996  linux/x86_64  Up         /data1/tidb-data/alertmanager-19997  /data1/tidb-deploy/alertmanager-19997
172.20.12.52:18261  dm-master     172.20.12.52  18261/18291  linux/x86_64  Healthy|L  /data1/dm-data/dm-master-18261       /data1/dm-deploy/dm-master-18261
172.20.12.53:18261  dm-master     172.20.12.53  18261/18291  linux/x86_64  Healthy    /data1/dm-data/dm-master-18261       /data1/dm-deploy/dm-master-18261
172.20.12.70:18261  dm-master     172.20.12.70  18261/18291  linux/x86_64  Healthy    /data1/dm-data/dm-master-18261       /data1/dm-deploy/dm-master-18261
172.20.12.52:18262  dm-worker     172.20.12.52  18262        linux/x86_64  Free       /data1/tidb-data/dm-worker-18262     /data1/dm-deploy/dm-worker-18262
172.20.12.53:18262  dm-worker     172.20.12.53  18262        linux/x86_64  Free       /data1/tidb-data/dm-worker-18262     /data1/dm-deploy/dm-worker-18262
172.20.12.70:18262  dm-worker     172.20.12.70  18262        linux/x86_64  Free       /data1/tidb-data/dm-worker-18262     /data1/dm-deploy/dm-worker-18262
172.20.12.53:19998  grafana       172.20.12.53  19998        linux/x86_64  Up         -                                    /data1/tidb-deploy/grafana-19998
172.20.12.53:19999  prometheus    172.20.12.53  19999        linux/x86_64  Up         /data1/tidb-data/prometheus-19999    /data1/tidb-deploy/prometheus-19999
Total nodes: 9

Configure MySQL

Instance 3900:
[root@tidb52 ~]# mkdir -p /data1/mysql/3900/
[root@tidb52 ~]# mkdir -p /data1/mysql/3900/data
[root@tidb52 ~]# mkdir -p /data1/mysql/3900/binlog
[root@tidb52 ~]# mkdir -p /data1/mysql/3900/log
[root@tidb52 ~]# chown -R mysql:mysql /data1/mysql/3900/
[root@tidb52 ~]# tar -xzvf mysql-5.7.28-el7-x86_64.tar.gz
[root@tidb52 ~]# ln -s mysql-5.7.28-el7-x86_64 mysql
[root@tidb52 ~]# /data1/mysql/mysql/bin/mysqld --defaults-file=/data1/mysql/3900/my.cnf --initialize --user=mysql --basedir=/data1/mysql/mysql --datadir=/data1/mysql/3900/data
[root@tidb52 ~]# /data1/mysql/mysql/bin/mysql -u root -P 3900 -p -h localhost -S /data1/mysql/3900/mysql.sock

Create the replication user (MySQL)

[root@tidb52 ~]# mysql -u root -P 3900 -p -h localhost -S /data1/mysql/3900/mysql.sock
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 6
Server version: 5.7.28-log MySQL Community Server (GPL)

Copyright (c) 2000, 2023, Oracle and/or its affiliates.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> CREATE USER dm_user@'%' identified by 'q1w2e3R4_';
Query OK, 0 rows affected (0.00 sec)

mysql> GRANT ALL PRIVILEGES ON *.* TO dm_user@'%';
Query OK, 0 rows affected (0.00 sec)

mysql> SET @@global.show_compatibility_56=ON;
Query OK, 0 rows affected (0.00 sec)

Create the user and initialize the schema (TiDB)

[tidb@tidb53 ~]$ mysql -h 172.20.12.53 -P8000 -u root -proot -c
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 405
Server version: 5.7.25-TiDB-v6.5.1 TiDB Server (Apache License 2.0) Community Edition, MySQL 5.7 compatible

Copyright (c) 2000, 2023, Oracle and/or its affiliates.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> CREATE USER dm_user@'%' identified by 'q1w2e3R4_';
Query OK, 0 rows affected (0.03 sec)

mysql> GRANT ALL PRIVILEGES ON *.* TO dm_user@'%';
Query OK, 0 rows affected (0.02 sec)

3. Loading Data Sources

Configure the data sources

[tidb@tidb53 paul]$ cat mysql-source-conf1.yaml
source-id: "mysql-replica-01"
from:
  host: "172.20.12.52"
  port: 3900
  user: "dm_user"
  password: "rjwvv2zB7Vam/2SpwUHxPUBwD+h2fnxAM+s="

// TiDB source
[tidb@tidb53 paul]$ cat tidb-source-conf1.yaml
source-id: "tidb-replica-01"
from:
  host: "172.20.12.52"
  port: 8000
  user: "dm_user"
  password: "nI0+yapNOdtbBZD+FRk3IEAMAp68KhRAmo8="

Register the data sources

[tidb@tidb53 paul]$ tiup dmctl:v6.5.1 --master-addr=172.20.12.52:18261 operate-source create mysql-source-conf1.yaml
Starting component dmctl: /home/tidb/.tiup/components/dmctl/v6.5.1/dmctl/dmctl --master-addr=172.20.12.52:18261 operate-source create mysql-source-conf1.yaml
{
    "result": true,
    "msg": "",
    "sources": [
        {
            "result": true,
            "msg": "",
            "source": "mysql-replica-01",
            "worker": "dm-172.20.12.52-18262"
        }
    ]
}
[tidb@tidb53 paul]$ tiup dmctl:v6.5.1 --master-addr=172.20.12.52:18261 operate-source create tidb-source-conf1.yaml
Starting component dmctl: /home/tidb/.tiup/components/dmctl/v6.5.1/dmctl/dmctl --master-addr=172.20.12.52:18261 operate-source create tidb-source-conf1.yaml
{
    "result": true,
    "msg": "",
    "sources": [
        {
            "result": true,
            "msg": "",
            "source": "tidb-replica-01",
            "worker": "dm-172.20.12.53-18262"
        }
    ]
}

Other operations

// Get the encrypted password
[tidb@tidb53 ~]$ tiup dmctl --encrypt 'q1w2e3R4_'
A new version of dmctl is available: v6.5.1 -> v7.5.0
  To update this component: tiup update dmctl
  To update all components: tiup update --all
Starting component dmctl: /home/tidb/.tiup/components/dmctl/v6.5.1/dmctl/dmctl --encrypt q1w2e3R4_
9TkSQ/2bRFtijuOIcvtMrED4pX+t94AObh0=

// List all data sources
[tidb@tidb53 paul]$ tiup dmctl:v6.5.1 --master-addr=172.20.12.52:18261 operate-source show
Starting component dmctl: /home/tidb/.tiup/components/dmctl/v6.5.1/dmctl/dmctl --master-addr=172.20.12.52:18261 operate-source show
{
    "result": true,
    "msg": "",
    "sources": [
        {
            "result": true,
            "msg": "",
            "source": "mysql-replica-01",
            "worker": "dm-172.20.12.52-18262"
        },
        {
            "result": true,
            "msg": "",
            "source": "tidb-replica-01",
            "worker": "dm-172.20.12.53-18262"
        }
    ]
}

4. Configuring Migration Rules

task-mode can be set to one of the following task modes:

full - full data migration only

incremental - binlog-based real-time replication only

all - full migration followed by binlog real-time replication

Configure a full + incremental migration task

[tidb@tidb53 paul]$ cat dm_task_all.yaml
name: "dm_task202403051"          # The name of the task.
task-mode: all                    # The task mode: full data migration plus incremental (binlog) replication.
ignore-checking-items: ["auto_increment_ID"]  # Skip the auto_increment_ID precheck.
meta-schema: "dm_meta"
clean-dump-file: true

target-database:                  # The host, port, user and password of the downstream TiDB database.
  host: "172.20.12.53"
  port: 8000
  user: "dm_user"
  password: "9TkSQ/2bRFtijuOIcvtMrED4pX+t94AObh0="

mysql-instances:                  # The data sources to be migrated by this task.
  - source-id: "mysql-replica-01"
    route-rules: ["instance-1-user-schema-rule"]
    filter-rules: ["trace-table-filter-rule"]
    block-allow-list: "log-ignored"
    mydumper-config-name: "global"
    loader-config-name: "global"
    syncer-config-name: "global"

routes:
  instance-1-user-schema-rule:    # The first routing mapping rule.
    schema-pattern: "test"
    target-schema: "test"

filters:
  trace-table-filter-rule:
    schema-pattern: "test"
    table-pattern: "t*"
    events: ["truncate table", "DROP TABLE"]
    action: Ignore

block-allow-list:
  log-ignored:
    ignore-dbs: ["information_schema","mysql","performance_schema","tidb_binlog"]

mydumpers:
  global:
    threads: 4
    chunk-filesize: 64

loaders:
  global:
    pool-size: 16
    dir: "./dumped_data"
    import-mode: "logical"
    on-duplicate: "replace"

syncers:
  global:
    worker-count: 16
    batch: 100
    enable-ansi-quotes: true
    safe-mode: false
    safe-mode-duration: "60s"
    compact: true
    multiple-rows: false

Configure an incremental-only migration task

When task-mode is incremental and no checkpoint exists in the downstream database, the meta section specifies the binlog position at which replication starts.

If neither the meta section nor a downstream checkpoint exists, replication starts from the current latest binlog position on the upstream.


[tidb@tidb53 paul]$ cat dm_task_increment.yaml
name: "dm_task202403051"          # The name of the task.
task-mode: incremental            # The task mode: incremental (binlog) replication only.
ignore-checking-items: ["auto_increment_ID"]  # Skip the auto_increment_ID precheck.
meta-schema: "dm_meta"
clean-dump-file: true

target-database:                  # The host, port, user and password of the downstream TiDB database.
  host: "172.20.12.53"
  port: 8000
  user: "dm_user"
  password: "9TkSQ/2bRFtijuOIcvtMrED4pX+t94AObh0="

mysql-instances:                  # The data sources to be migrated by this task.
  - source-id: "mysql-replica-01"
    meta:
      binlog-name: mysqldb-log.000073
      binlog-pos: 386800605
      binlog-gtid: ""
    route-rules: ["instance-1-user-schema-rule"]
    filter-rules: ["trace-table-filter-rule"]
    block-allow-list: "log-ignored"
    mydumper-config-name: "global"
    loader-config-name: "global"
    syncer-config-name: "global"

routes:
  instance-1-user-schema-rule:    # The first routing mapping rule.
    schema-pattern: "test"
    target-schema: "test"

filters:
  trace-table-filter-rule:
    schema-pattern: "test"
    table-pattern: "t*"
    events: ["truncate table", "DROP TABLE"]
    action: Ignore

block-allow-list:
  log-ignored:
    ignore-dbs: ["information_schema","mysql","performance_schema","tidb_binlog"]

mydumpers:
  global:
    threads: 4
    chunk-filesize: 64

loaders:
  global:
    pool-size: 16
    dir: "./dumped_data"
    import-mode: "logical"
    on-duplicate: "replace"

syncers:
  global:
    worker-count: 16
    batch: 100
    enable-ansi-quotes: true
    safe-mode: false
    safe-mode-duration: "60s"
    compact: true
    multiple-rows: false

5. DM Cluster and Task Management

DM task management

Start a task

tiup dmctl:v6.5.1 --master-addr=172.20.12.52:18261 start-task dm_task_all.yaml 

Stop a task

tiup dmctl:v6.5.1 --master-addr=172.20.12.52:18261 stop-task dm_task_all.yaml 

Query task status

tiup dmctl:v6.5.1 --master-addr=172.20.12.52:18261 query-status dm_task_all.yaml 

Pause a task

tiup dmctl:v6.5.1 --master-addr=172.20.12.52:18261 pause-task dm_task_all.yaml 

Resume a task

tiup dmctl:v6.5.1 --master-addr=172.20.12.52:18261 resume-task dm_task_all.yaml 

Enable relay log for a source

[tidb@tidb53 ~]$ tiup dmctl:v6.5.1 --master-addr=172.20.12.52:18261 start-relay -s mysql-replica-01
Starting component dmctl: /home/tidb/.tiup/components/dmctl/v6.5.1/dmctl/dmctl --master-addr=172.20.12.52:18261 start-relay -s mysql-replica-01
{
    "result": true,
    "msg": "",
    "sources": [
    ]
}

Query relay log status

tiup dmctl:v6.5.1 --master-addr=172.20.12.52:18261 query-status -s mysql-replica-01
Starting component dmctl: /home/tidb/.tiup/components/dmctl/v6.5.1/dmctl/dmctl --master-addr=172.20.12.52:18261 query-status -s mysql-replica-01
{
    "result": true,
    "msg": "",
    "sources": [
        {
            "result": true,
            "msg": "",
            "sourceStatus": {
                "source": "mysql-replica-01",
                "worker": "dm-172.20.12.52-18262",
                "result": null,
                "relayStatus": {
                    "masterBinlog": "(mysqldb-log.000002, 9637)",
                    "masterBinlogGtid": "",
                    "relaySubDir": "5bb7f595-b90b-11ee-ad78-4cd98f4baa6a.000001",
                    "relayBinlog": "(mysqldb-log.000002, 9637)",
                    "relayBinlogGtid": "",
                    "relayCatchUpMaster": true,
                    "stage": "Running",
                    "result": null
                }
            },
            "subTaskStatus": [
                {
                    "name": "dm-task",
                    "stage": "Running",
                    "unit": "Sync",
                    "result": null,
                    "unresolvedDDLLockID": "",
                    "sync": {
                        "totalEvents": "0",
                        "totalTps": "0",
                        "recentTps": "0",
                        "masterBinlog": "(mysqldb-log.000002, 9637)",
                        "masterBinlogGtid": "",
                        "syncerBinlog": "(mysqldb-log.000002, 9637)",
                        "syncerBinlogGtid": "",
                        "blockingDDLs": [
                        ],
                        "unresolvedGroups": [
                        ],
                        "synced": true,
                        "binlogType": "local",
                        "secondsBehindMaster": "0",
                        "blockDDLOwner": "",
                        "conflictMsg": "",
                        "totalRows": "0",
                        "totalRps": "0",
                        "recentRps": "0"
                    },
                    "validation": null
                }
            ]
        }
    ]
}

DM cluster management

Check DM cluster status

[tidb@tidb53 paul]$ tiup dm display paul_dm
Checking updates for component dm... Timedout (after 2s)
Starting component dm: /home/tidb/.tiup/components/dm/v1.14.1/tiup-dm display paul_dm
Cluster type:    dm
Cluster name:    paul_dm
Cluster version: v6.5.1
Deploy user:     tidb
SSH type:        builtin
Grafana URL:     http://172.20.12.53:19998
ID                  Role          Host          Ports        OS/Arch       Status     Data Dir                             Deploy Dir
--                  ----          ----          -----        -------       ------     --------                             ----------
172.20.12.53:19997  alertmanager  172.20.12.53  19997/19996  linux/x86_64  Up         /data1/tidb-data/alertmanager-19997  /data1/tidb-deploy/alertmanager-19997
172.20.12.52:18261  dm-master     172.20.12.52  18261/18291  linux/x86_64  Healthy|L  /data1/dm-data/dm-master-18261       /data1/dm-deploy/dm-master-18261
172.20.12.53:18261  dm-master     172.20.12.53  18261/18291  linux/x86_64  Healthy    /data1/dm-data/dm-master-18261       /data1/dm-deploy/dm-master-18261
172.20.12.70:18261  dm-master     172.20.12.70  18261/18291  linux/x86_64  Healthy    /data1/dm-data/dm-master-18261       /data1/dm-deploy/dm-master-18261
172.20.12.52:18262  dm-worker     172.20.12.52  18262        linux/x86_64  Bound      /data1/tidb-data/dm-worker-18262     /data1/dm-deploy/dm-worker-18262
172.20.12.53:18262  dm-worker     172.20.12.53  18262        linux/x86_64  Free       /data1/tidb-data/dm-worker-18262     /data1/dm-deploy/dm-worker-18262
172.20.12.70:18262  dm-worker     172.20.12.70  18262        linux/x86_64  Bound      /data1/tidb-data/dm-worker-18262     /data1/dm-deploy/dm-worker-18262
172.20.12.53:19998  grafana       172.20.12.53  19998        linux/x86_64  Up         -                                    /data1/tidb-deploy/grafana-19998
172.20.12.53:19999  prometheus    172.20.12.53  19999        linux/x86_64  Up         /data1/tidb-data/prometheus-19999    /data1/tidb-deploy/prometheus-19999
Total nodes: 9

Scale in a worker node

[tidb@tidb53 paul]$ tiup dm scale-in paul_dm -N 172.20.12.53:18262
Checking updates for component dm... Timedout (after 2s)
Starting component dm: /home/tidb/.tiup/components/dm/v1.14.1/tiup-dm scale-in paul_dm -N 172.20.12.53:18262
This operation will delete the 172.20.12.53:18262 nodes in `paul_dm` and all their data.
Do you want to continue? [y/N]:(default=N) y
Scale-in nodes...
+ [ Serial ] - SSHKeySet: privateKey=/home/tidb/.tiup/storage/dm/clusters/paul_dm/ssh/id_rsa, publicKey=/home/tidb/.tiup/storage/dm/clusters/paul_dm/ssh/id_rsa.pub
+ [Parallel] - UserSSH: user=tidb, host=172.20.12.53
+ [Parallel] - UserSSH: user=tidb, host=172.20.12.52
+ [Parallel] - UserSSH: user=tidb, host=172.20.12.53
+ [Parallel] - UserSSH: user=tidb, host=172.20.12.52
+ [Parallel] - UserSSH: user=tidb, host=172.20.12.70
+ [Parallel] - UserSSH: user=tidb, host=172.20.12.70
+ [Parallel] - UserSSH: user=tidb, host=172.20.12.53
+ [Parallel] - UserSSH: user=tidb, host=172.20.12.53
+ [Parallel] - UserSSH: user=tidb, host=172.20.12.53
+ [ Serial ] - ScaleInCluster: options={Roles:[] Nodes:[172.20.12.53:18262] Force:false SSHTimeout:5 OptTimeout:120 APITimeout:0 IgnoreConfigCheck:false NativeSSH:false SSHType: Concurrency:5 SSHProxyHost: SSHProxyPort:22 SSHProxyUser:tidb SSHProxyIdentity:/home/tidb/.ssh/id_rsa SSHProxyUsePassword:false SSHProxyTimeout:5 SSHCustomScripts:{BeforeRestartInstance:{Raw:} AfterRestartInstance:{Raw:}} CleanupData:false CleanupLog:false CleanupAuditLog:false RetainDataRoles:[] RetainDataNodes:[] DisplayMode:default Operation:StartOperation}
Stopping component dm-worker
        Stopping instance 172.20.12.53
        Stop dm-worker 172.20.12.53:18262 success
Destroying component dm-worker
        Destroying instance 172.20.12.53
Destroy 172.20.12.53 finished
- Destroy dm-worker paths: [/data1/dm-deploy/dm-worker-18262 /etc/systemd/system/dm-worker-18262.service /data1/tidb-data/dm-worker-18262 /data1/dm-deploy/dm-worker-18262/log]
+ [ Serial ] - UpdateMeta: cluster=paul_dm, deleted=`'172.20.12.53:18262'`
+ Refresh instance configs
  - Generate config dm-master -> 172.20.12.52:18261 ... Done
  - Generate config dm-master -> 172.20.12.53:18261 ... Done
  - Generate config dm-master -> 172.20.12.70:18261 ... Done
  - Generate config dm-worker -> 172.20.12.52:18262 ... Done
  - Generate config dm-worker -> 172.20.12.70:18262 ... Done
  - Generate config prometheus -> 172.20.12.53:19999 ... Done
  - Generate config grafana -> 172.20.12.53:19998 ... Done
  - Generate config alertmanager -> 172.20.12.53:19997 ... Done
+ Reload prometheus and grafana
  - Reload prometheus -> 172.20.12.53:19999 ... Done
  - Reload grafana -> 172.20.12.53:19998 ... Done
Scaled cluster `paul_dm` in successfully

Scale out a worker node

[tidb@tidb53 paul]$ cat dm_scale.yaml
worker_servers:
  - host: 172.20.12.53
    ssh_port: 22
    port: 18262
    deploy_dir: "/data1/dm-deploy/dm-worker-18262"
    log_dir: "/data1/dm-deploy/dm-worker-18262/log"
    config:
      log-level: info

[tidb@tidb53 paul]$ tiup dm scale-out paul_dm dm_scale.yaml
Checking updates for component dm... Timedout (after 2s)
Starting component dm: /home/tidb/.tiup/components/dm/v1.14.1/tiup-dm scale-out paul_dm dm_scale.yaml
+ Detect CPU Arch Name
  - Detecting node 172.20.12.53 Arch info ... Done
+ Detect CPU OS Name
  - Detecting node 172.20.12.53 OS info ... Done
Please confirm your topology:
Cluster type:    dm
Cluster name:    paul_dm
Cluster version: v6.5.1
Role       Host          Ports  OS/Arch       Directories
----       ----          -----  -------       -----------
dm-worker  172.20.12.53  18262  linux/x86_64  /data1/dm-deploy/dm-worker-18262,/data1/tidb-data/dm-worker-18262
Attention:
  1. If the topology is not what you expected, check your yaml file.
  2. Please confirm there is no port/directory conflicts in same host.
Do you want to continue? [y/N]: (default=N) y
+ [ Serial ] - SSHKeySet: privateKey=/home/tidb/.tiup/storage/dm/clusters/paul_dm/ssh/id_rsa, publicKey=/home/tidb/.tiup/storage/dm/clusters/paul_dm/ssh/id_rsa.pub
+ [Parallel] - UserSSH: user=tidb, host=172.20.12.70
+ [Parallel] - UserSSH: user=tidb, host=172.20.12.52
+ [Parallel] - UserSSH: user=tidb, host=172.20.12.52
+ [Parallel] - UserSSH: user=tidb, host=172.20.12.53
+ [Parallel] - UserSSH: user=tidb, host=172.20.12.70
+ [Parallel] - UserSSH: user=tidb, host=172.20.12.53
+ [Parallel] - UserSSH: user=tidb, host=172.20.12.53
+ [Parallel] - UserSSH: user=tidb, host=172.20.12.53
+ Download TiDB components
  - Download dm-worker:v6.5.1 (linux/amd64) ... Done
+ Initialize target host environments
+ Deploy TiDB instance
  - Deploy instance dm-worker -> 172.20.12.53:18262 ... Done
+ Copy certificate to remote host
+ Generate scale-out config
  - Generate scale-out config dm-worker -> 172.20.12.53:18262 ... Done
+ Init monitor config
Enabling component dm-worker
        Enabling instance 172.20.12.53:18262
        Enable instance 172.20.12.53:18262 success
+ [ Serial ] - Save meta
+ [ Serial ] - Start new instances
Starting component dm-worker
        Starting instance 172.20.12.53:18262
        Start instance 172.20.12.53:18262 success
+ Refresh components conifgs
  - Generate config dm-master -> 172.20.12.52:18261 ... Done
  - Generate config dm-master -> 172.20.12.53:18261 ... Done
  - Generate config dm-master -> 172.20.12.70:18261 ... Done
  - Generate config dm-worker -> 172.20.12.52:18262 ... Done
  - Generate config dm-worker -> 172.20.12.70:18262 ... Done
  - Generate config dm-worker -> 172.20.12.53:18262 ... Done
  - Generate config prometheus -> 172.20.12.53:19999 ... Done
  - Generate config grafana -> 172.20.12.53:19998 ... Done
  - Generate config alertmanager -> 172.20.12.53:19997 ... Done
+ Reload prometheus and grafana
  - Reload prometheus -> 172.20.12.53:19999 ... Done
  - Reload grafana -> 172.20.12.53:19998 ... Done
Scaled cluster `paul_dm` out successfully

