Changing Cluster IPs for a Data Center Relocation
Published by: TiDB 社区干货传送门 · 2023-02-03 · Beijing
Author: weixiaobing. Original post: https://tidb.net/blog/2db20d98
1. Check the current cluster status
[tidb@vm172-16-201-64 ~]$ tiup cluster display tidb-dev
tiup is checking updates for component cluster ...
Starting component `cluster`: /home/tidb/.tiup/components/cluster/v1.9.5/tiup-cluster /home/tidb/.tiup/components/cluster/v1.9.5/tiup-cluster display tidb-dev
Cluster type:       tidb
Cluster name:       tidb-dev
Cluster version:    v5.4.1
Deploy user:        tidb
SSH type:           builtin
Dashboard URL:      http://172.16.201.151:2379/dashboard
ID  Role  Host  Ports  OS/Arch  Status  Data Dir  Deploy Dir
--  ----  ----  -----  -------  ------  --------  ----------
172.16.201.150:9093  alertmanager  172.16.201.150  9093/9094  linux/x86_64  Up  /data1/tidb-data/alertmanager-9093  /data1/tidb-deploy/alertmanager-9093
172.16.201.150:8300  cdc  172.16.201.150  8300  linux/x86_64  Up  /data1/tidb-data/cdc-8300  /data1/tidb-deploy/cdc-8300
172.16.201.152:8300  cdc  172.16.201.152  8300  linux/x86_64  Up  /data1/tidb-data/cdc-8300  /data1/tidb-deploy/cdc-8300
172.16.201.150:8249  drainer  172.16.201.150  8249  linux/x86_64  Up  /data1/binlog  /data1/tidb-deploy/drainer-8249
172.16.201.150:3000  grafana  172.16.201.150  3000  linux/x86_64  Up  -  /data1/tidb-deploy/grafana-3000
172.16.201.150:2379  pd  172.16.201.150  2379/2380  linux/x86_64  Up  /data1/tidb-data/pd-2379  /data1/tidb-deploy/pd-2379
172.16.201.151:2379  pd  172.16.201.151  2379/2380  linux/x86_64  Up|UI  /data1/tidb-data/pd-2379  /data1/tidb-deploy/pd-2379
172.16.201.152:2379  pd  172.16.201.152  2379/2380  linux/x86_64  Up|L  /data1/tidb-data/pd-2379  /data1/tidb-deploy/pd-2379
172.16.201.150:9090  prometheus  172.16.201.150  9090/12020  linux/x86_64  Up  /data1/tidb-data/prometheus-9090  /data1/tidb-deploy/prometheus-9090
172.16.201.150:8250  pump  172.16.201.150  8250  linux/x86_64  Up  /data1/tidb-data/pump-8250  /data1/tidb-deploy/pump-8250
172.16.201.151:8250  pump  172.16.201.151  8250  linux/x86_64  Up  /data1/tidb-data/pump-8250  /data1/tidb-deploy/pump-8250
172.16.201.152:8250  pump  172.16.201.152  8250  linux/x86_64  Up  /data1/tidb-data/pump-8250  /data1/tidb-deploy/pump-8250
172.16.201.150:4000  tidb  172.16.201.150  4000/10080  linux/x86_64  Up  -  /data1/tidb-deploy/tidb-4000
172.16.201.151:4000  tidb  172.16.201.151  4000/10080  linux/x86_64  Up  -  /data1/tidb-deploy/tidb-4000
172.16.201.152:4000  tidb  172.16.201.152  4000/10080  linux/x86_64  Up  -  /data1/tidb-deploy/tidb-4000
172.16.201.150:9000  tiflash  172.16.201.150  9000/8123/3930/20170/20292/8234  linux/x86_64  Up  /data1/tidb-data/tiflash-9000  /data1/tidb-deploy/tiflash-9000
172.16.201.152:9000  tiflash  172.16.201.152  9000/8123/3930/20170/20292/8234  linux/x86_64  Up  /data1/tidb-data/tiflash-9000  /data1/tidb-deploy/tiflash-9000
172.16.201.150:20160  tikv  172.16.201.150  20160/20180  linux/x86_64  Up  /data1/tidb-data/tikv-20160  /data1/tidb-deploy/tikv-20160
172.16.201.151:20160  tikv  172.16.201.151  20160/20180  linux/x86_64  Up  /data1/tidb-data/tikv-20160  /data1/tidb-deploy/tikv-20160
172.16.201.152:20160  tikv  172.16.201.152  20160/20180  linux/x86_64  Up  /data1/tidb-data/tikv-20160  /data1/tidb-deploy/tikv-20160
Total nodes: 20
[tidb@vm172-16-201-64 ~]$
2. IP mapping
Each old address maps one-to-one to a new address; this is the mapping used in the rest of this post:
172.16.201.150 -> 172.16.201.153
172.16.201.151 -> 172.16.201.154
172.16.201.152 -> 172.16.201.155
3. Stop the cluster
tiup cluster stop tidb-dev
4. Relocate the machines and change their IP addresses
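Once the servers are back online on their new addresses, it is worth confirming that the tiup control machine can still reach every host over SSH before touching any metadata. A minimal sketch: the key path is the per-cluster key that tiup prints later in the SSHKeySet step, and the host list follows the mapping in step 2.

```bash
# Quick reachability check against the new addresses
# (assumes the deploy user "tidb" and the key tiup generated for tidb-dev)
for host in 172.16.201.153 172.16.201.154 172.16.201.155; do
  ssh -o BatchMode=yes -o ConnectTimeout=5 \
      -i /home/tidb/.tiup/storage/cluster/clusters/tidb-dev/ssh/id_rsa \
      tidb@"${host}" hostname \
    && echo "${host}: ok" || echo "${host}: UNREACHABLE"
done
```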
5. Edit the meta.yaml file and replace the old IPs
Back up the original .tiup directory first, so the change can be rolled back if the edit goes wrong; a minimal backup-and-replace sketch is shown below, followed by the edited meta.yaml.
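A sketch of the backup and the IP replacement, assuming the default tiup layout under ~/.tiup and the cluster name tidb-dev; check the result with diff before going further.

```bash
# Back up the whole .tiup directory before editing anything
cp -a ~/.tiup ~/.tiup.bak.$(date +%F)

# meta.yaml of the tidb-dev cluster in the default tiup layout
META=~/.tiup/storage/cluster/clusters/tidb-dev/meta.yaml

# Swap every old IP for its new counterpart (mapping from step 2);
# sed -i.bak also keeps a copy of the original file next to it
sed -i.bak \
  -e 's/172\.16\.201\.150/172.16.201.153/g' \
  -e 's/172\.16\.201\.151/172.16.201.154/g' \
  -e 's/172\.16\.201\.152/172.16.201.155/g' \
  "$META"

diff "$META.bak" "$META" | head
```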
[tidb@vm172-16-201-64 tidb-dev]$ more meta.yaml
user: tidb
tidb_version: v5.4.1
topology:
  global:
    user: tidb
    ssh_port: 22
    ssh_type: builtin
    deploy_dir: /data1/tidb-deploy
    data_dir: /data1/tidb-data
    os: linux
    arch: amd64
  monitored:
    node_exporter_port: 9100
    blackbox_exporter_port: 9115
    deploy_dir: /data1/tidb-deploy/monitor-9100
    data_dir: /data1/tidb-data/monitor-9100
    log_dir: /data1/tidb-deploy/monitor-9100/log
  server_configs:
    tidb:
      binlog.enable: false
      binlog.ignore-error: true
      log.level: error
      new_collations_enabled_on_first_bootstrap: true
      performance.txn-total-size-limit: 2147483648
      pessimistic-txn.max-retry-count: 0
      prepared-plan-cache.enabled: true
    tikv:
      server.snap-max-write-bytes-per-sec: 200MB
    pd: {}
    tidb_dashboard: {}
    tiflash: {}
    tiflash-learner: {}
    pump: {}
    drainer: {}
    cdc:
      debug.enable-db-sorter: true
      per-table-memory-quota: 1073741824
      sorter.chunk-size-limit: 268435456
      sorter.max-memory-consumption: 30
      sorter.max-memory-percentage: 70
      sorter.num-workerpool-goroutine: 30
    kvcdc: {}
    grafana: {}
  tidb_servers:
  - host: 172.16.201.155
    ssh_port: 22
    port: 4000
    status_port: 10080
    deploy_dir: /data1/tidb-deploy/tidb-4000
    log_dir: /data1/tidb-deploy/tidb-4000/log
    arch: amd64
    os: linux
  - host: 172.16.201.153
    ssh_port: 22
    port: 4000
    status_port: 10080
    deploy_dir: /data1/tidb-deploy/tidb-4000
    log_dir: /data1/tidb-deploy/tidb-4000/log
    arch: amd64
    os: linux
  - host: 172.16.201.154
    ssh_port: 22
    port: 4000
    status_port: 10080
    deploy_dir: /data1/tidb-deploy/tidb-4000
    log_dir: /data1/tidb-deploy/tidb-4000/log
    arch: amd64
    os: linux
  tikv_servers:
  - host: 172.16.201.155
    ssh_port: 22
    port: 20160
    status_port: 20180
    deploy_dir: /data1/tidb-deploy/tikv-20160
    data_dir: /data1/tidb-data/tikv-20160
    log_dir: /data1/tidb-deploy/tikv-20160/log
    arch: amd64
    os: linux
  - host: 172.16.201.153
    ssh_port: 22
    port: 20160
    status_port: 20180
    deploy_dir: /data1/tidb-deploy/tikv-20160
    data_dir: /data1/tidb-data/tikv-20160
    log_dir: /data1/tidb-deploy/tikv-20160/log
    arch: amd64
    os: linux
  - host: 172.16.201.154
    ssh_port: 22
    port: 20160
    status_port: 20180
    deploy_dir: /data1/tidb-deploy/tikv-20160
    data_dir: /data1/tidb-data/tikv-20160
    log_dir: /data1/tidb-deploy/tikv-20160/log
    arch: amd64
    os: linux
  tiflash_servers:
  - host: 172.16.201.155
    ssh_port: 22
    tcp_port: 9000
    http_port: 8123
    flash_service_port: 3930
    flash_proxy_port: 20170
    flash_proxy_status_port: 20292
    metrics_port: 8234
    deploy_dir: /data1/tidb-deploy/tiflash-9000
    data_dir: /data1/tidb-data/tiflash-9000
    log_dir: /data1/tidb-deploy/tiflash-9000/log
    arch: amd64
    os: linux
  - host: 172.16.201.153
    ssh_port: 22
    tcp_port: 9000
    http_port: 8123
    flash_service_port: 3930
    flash_proxy_port: 20170
    flash_proxy_status_port: 20292
    metrics_port: 8234
    deploy_dir: /data1/tidb-deploy/tiflash-9000
    data_dir: /data1/tidb-data/tiflash-9000
    log_dir: /data1/tidb-deploy/tiflash-9000/log
    arch: amd64
    os: linux
  pd_servers:
  - host: 172.16.201.153
    ssh_port: 22
    name: pd-172.16.201.153-2379
    client_port: 2379
    peer_port: 2380
    deploy_dir: /data1/tidb-deploy/pd-2379
    data_dir: /data1/tidb-data/pd-2379
    log_dir: /data1/tidb-deploy/pd-2379/log
    arch: amd64
    os: linux
  - host: 172.16.201.154
    ssh_port: 22
    name: pd-172.16.201.154-2379
    client_port: 2379
    peer_port: 2380
    deploy_dir: /data1/tidb-deploy/pd-2379
    data_dir: /data1/tidb-data/pd-2379
    log_dir: /data1/tidb-deploy/pd-2379/log
    arch: amd64
    os: linux
  - host: 172.16.201.155
    ssh_port: 22
    name: pd-172.16.201.155-2379
    client_port: 2379
    peer_port: 2380
    deploy_dir: /data1/tidb-deploy/pd-2379
    data_dir: /data1/tidb-data/pd-2379
    log_dir: /data1/tidb-deploy/pd-2379/log
    arch: amd64
    os: linux
  pump_servers:
  - host: 172.16.201.153
    ssh_port: 22
    port: 8250
    deploy_dir: /data1/tidb-deploy/pump-8250
    data_dir: /data1/tidb-data/pump-8250
    log_dir: /data1/tidb-deploy/pump-8250/log
    arch: amd64
    os: linux
  - host: 172.16.201.155
    ssh_port: 22
    port: 8250
    deploy_dir: /data1/tidb-deploy/pump-8250
    data_dir: /data1/tidb-data/pump-8250
    log_dir: /data1/tidb-deploy/pump-8250/log
    arch: amd64
    os: linux
  - host: 172.16.201.154
    ssh_port: 22
    port: 8250
    deploy_dir: /data1/tidb-deploy/pump-8250
    data_dir: /data1/tidb-data/pump-8250
    log_dir: /data1/tidb-deploy/pump-8250/log
    arch: amd64
    os: linux
  drainer_servers:
  - host: 172.16.201.153
    ssh_port: 22
    port: 8249
    deploy_dir: /data1/tidb-deploy/drainer-8249
    data_dir: /data1/binlog
    log_dir: /data1/tidb-deploy/drainer-8249/log
    config:
      syncer.db-type: file
    arch: amd64
    os: linux
  cdc_servers:
  - host: 172.16.201.153
    ssh_port: 22
    port: 8300
    deploy_dir: /data1/tidb-deploy/cdc-8300
    data_dir: /data1/tidb-data/cdc-8300
    log_dir: /data1/tidb-deploy/cdc-8300/log
    ticdc_cluster_id: ""
    arch: amd64
    os: linux
  - host: 172.16.201.155
    ssh_port: 22
    port: 8300
    deploy_dir: /data1/tidb-deploy/cdc-8300
    data_dir: /data1/tidb-data/cdc-8300
    log_dir: /data1/tidb-deploy/cdc-8300/log
    ticdc_cluster_id: ""
    arch: amd64
    os: linux
  monitoring_servers:
  - host: 172.16.201.153
    ssh_port: 22
    port: 9090
    ng_port: 12020
    deploy_dir: /data1/tidb-deploy/prometheus-9090
    data_dir: /data1/tidb-data/prometheus-9090
    log_dir: /data1/tidb-deploy/prometheus-9090/log
    external_alertmanagers: []
    arch: amd64
    os: linux
  grafana_servers:
  - host: 172.16.201.153
    ssh_port: 22
    port: 3000
    deploy_dir: /data1/tidb-deploy/grafana-3000
    arch: amd64
    os: linux
    username: admin
    password: admin
    anonymous_enable: false
    root_url: ""
    domain: ""
  alertmanager_servers:
  - host: 172.16.201.153
    ssh_port: 22
    web_port: 9093
    cluster_port: 9094
    deploy_dir: /data1/tidb-deploy/alertmanager-9093
    data_dir: /data1/tidb-data/alertmanager-9093
    log_dir: /data1/tidb-deploy/alertmanager-9093/log
    arch: amd64
    os: linux
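Before going further, a quick sanity check that no old address survives anywhere in the cluster's tiup metadata (a leftover IP here would resurface in every regenerated config):

```bash
# Should print nothing once all three old addresses are gone
grep -rn -e '172\.16\.201\.150' -e '172\.16\.201\.151' -e '172\.16\.201\.152' \
  ~/.tiup/storage/cluster/clusters/tidb-dev/
```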
6. Get the Cluster ID
[root@vm172-16-201-64 ~]# cat /data1/tidb-deploy/pd-2379/log/pd.log | grep "init cluster id"
[2022/10/25 15:56:05.470 +08:00] [INFO] [server.go:358] ["init cluster id"] [cluster-id=7158355689478888528]
[2022/10/25 15:58:40.012 +08:00] [INFO] [server.go:358] ["init cluster id"] [cluster-id=7158355689478888528]
[2022/10/25 17:52:28.338 +08:00] [INFO] [server.go:358] ["init cluster id"] [cluster-id=7158385694626677062]
[2022/10/25 17:58:14.750 +08:00] [INFO] [server.go:358] ["init cluster id"] [cluster-id=7158355689478888528]
[2022/10/25 18:00:42.983 +08:00] [INFO] [server.go:358] ["init cluster id"] [cluster-id=7158355689478888528]

[root@vm172-16-201-63 ~]# cat /data1/tidb-deploy/pd-2379/log/pd.log | grep "init cluster id"
[2022/10/25 15:56:46.475 +08:00] [INFO] [server.go:358] ["init cluster id"] [cluster-id=7158355689478888528]
[2022/10/25 15:58:37.002 +08:00] [INFO] [server.go:358] ["init cluster id"] [cluster-id=7158355689478888528]
[2022/10/25 17:52:28.307 +08:00] [INFO] [server.go:358] ["init cluster id"] [cluster-id=7158385694626677062]
[2022/10/25 17:58:14.758 +08:00] [INFO] [server.go:358] ["init cluster id"] [cluster-id=7158355689478888528]
[2022/10/25 18:00:42.981 +08:00] [INFO] [server.go:358] ["init cluster id"] [cluster-id=7158355689478888528]

[root@vm172-16-201-95 ~]# cat /data1/tidb-deploy/pd-2379/log/pd.log | grep "init cluster id"
[2022/10/25 15:56:02.450 +08:00] [INFO] [server.go:358] ["init cluster id"] [cluster-id=7158355689478888528]
[2022/10/25 15:58:36.990 +08:00] [INFO] [server.go:358] ["init cluster id"] [cluster-id=7158355689478888528]
[2022/10/25 17:52:28.213 +08:00] [INFO] [server.go:358] ["init cluster id"] [cluster-id=7158385694626677062]
[2022/10/25 17:58:14.658 +08:00] [INFO] [server.go:358] ["init cluster id"] [cluster-id=7158355689478888528]
[2022/10/25 18:00:42.879 +08:00] [INFO] [server.go:358] ["init cluster id"] [cluster-id=7158355689478888528]
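All three PD nodes agree on the most recently logged cluster ID (7158355689478888528); that is the value pd-recover needs in step 11. If the log contains several different IDs, as above, take the one from the latest "init cluster id" line, for example:

```bash
# Print only the most recently logged cluster ID
grep "init cluster id" /data1/tidb-deploy/pd-2379/log/pd.log \
  | tail -n 1 \
  | sed 's/.*cluster-id=\([0-9]*\).*/\1/'
```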
7. Get the largest allocated ID
[root@vm172-16-201-95 ~]# cat /data1/tidb-deploy/pd-2379/log/pd.log | grep "idAllocator allocates a new id" | awk -F'=' '{print $2}' | awk -F']' '{print $1}' | sort -r | head -n 1
4000
[root@vm172-16-201-95 ~]#

[root@vm172-16-201-63 ~]# cat /data1/tidb-deploy/pd-2379/log/pd.log | grep "idAllocator allocates a new id" | awk -F'=' '{print $2}' | awk -F']' '{print $1}' | sort -r | head -n 1
5000
[root@vm172-16-201-63 ~]#

[root@vm172-16-201-64 ~]# cat /data1/tidb-deploy/pd-2379/log/pd.log | grep "idAllocator allocates a new id" | awk -F'=' '{print $2}' | awk -F']' '{print $1}' | sort -r | head -n 1
3000
[root@vm172-16-201-64 ~]#
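The three PD nodes report 4000, 5000 and 3000 respectively, so the alloc-id passed to pd-recover must be comfortably larger than 5000 (the author uses 20000 in step 11). A sketch that collects the maximum across all PD nodes in one pass, reusing the same grep/awk pipeline over SSH:

```bash
# Largest ID ever allocated across all PD nodes; pick an alloc-id well above this
for host in 172.16.201.153 172.16.201.154 172.16.201.155; do
  ssh tidb@"${host}" \
    'grep "idAllocator allocates a new id" /data1/tidb-deploy/pd-2379/log/pd.log' \
    | awk -F'=' '{print $2}' | awk -F']' '{print $1}'
done | sort -n | tail -n 1
```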
8. Remove the old PD data directory on every PD node
[tidb@vm172-16-201-95 tidb-data]$ mv pd-2379/ pd-2379_bak
[tidb@vm172-16-201-95 tidb-data]$ ll
total 20
drwxr-xr-x 2 tidb tidb 4096 Aug  9 16:46 drainer-8249
drwxr-xr-x 2 tidb tidb 4096 Oct 25 15:55 monitor-9100
drwx------ 5 tidb tidb 4096 Oct 25 20:41 pd-2379_bak
drwxr-xr-x 4 tidb tidb 4096 Oct 25 15:56 pump-8250
drwxr-xr-x 6 tidb tidb 4096 Oct 25 18:00 tikv-20160
[tidb@vm172-16-201-95 tidb-data]$
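Only the PD data directory is discarded; TiKV, TiFlash, Pump and Drainer data stay in place. The same rename has to be done on every PD node, not just one; a sketch over the three new PD addresses:

```bash
# Rename the old PD data directory on each (new-IP) PD host
for host in 172.16.201.153 172.16.201.154 172.16.201.155; do
  ssh tidb@"${host}" 'mv /data1/tidb-data/pd-2379 /data1/tidb-data/pd-2379_bak'
done
```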
9. Deploy the new PD cluster
[tidb@vm172-16-201-64 tidb-dev]$ tiup cluster display tidb-dev
tiup is checking updates for component cluster ...
Starting component `cluster`: /home/tidb/.tiup/components/cluster/v1.9.5/tiup-cluster /home/tidb/.tiup/components/cluster/v1.9.5/tiup-cluster display tidb-dev
Cluster type:       tidb
Cluster name:       tidb-dev
Cluster version:    v5.4.1
Deploy user:        tidb
SSH type:           builtin
ID  Role  Host  Ports  OS/Arch  Status  Data Dir  Deploy Dir
--  ----  ----  -----  -------  ------  --------  ----------
172.16.201.153:9093  alertmanager  172.16.201.153  9093/9094  linux/x86_64  Down  /data1/tidb-data/alertmanager-9093  /data1/tidb-deploy/alertmanager-9093
172.16.201.153:8300  cdc  172.16.201.153  8300  linux/x86_64  Down  /data1/tidb-data/cdc-8300  /data1/tidb-deploy/cdc-8300
172.16.201.155:8300  cdc  172.16.201.155  8300  linux/x86_64  Down  /data1/tidb-data/cdc-8300  /data1/tidb-deploy/cdc-8300
172.16.201.153:8249  drainer  172.16.201.153  8249  linux/x86_64  Down  /data1/binlog  /data1/tidb-deploy/drainer-8249
172.16.201.153:3000  grafana  172.16.201.153  3000  linux/x86_64  Down  -  /data1/tidb-deploy/grafana-3000
172.16.201.154:2379  pd  172.16.201.154  2379/2380  linux/x86_64  Down  /data1/tidb-data/pd-2379  /data1/tidb-deploy/pd-2379
172.16.201.155:2379  pd  172.16.201.155  2379/2380  linux/x86_64  Down  /data1/tidb-data/pd-2379  /data1/tidb-deploy/pd-2379
172.16.201.153:9090  prometheus  172.16.201.153  9090/12020  linux/x86_64  Down  /data1/tidb-data/prometheus-9090  /data1/tidb-deploy/prometheus-9090
172.16.201.153:8250  pump  172.16.201.153  8250  linux/x86_64  Down  /data1/tidb-data/pump-8250  /data1/tidb-deploy/pump-8250
172.16.201.154:8250  pump  172.16.201.154  8250  linux/x86_64  Down  /data1/tidb-data/pump-8250  /data1/tidb-deploy/pump-8250
172.16.201.155:8250  pump  172.16.201.155  8250  linux/x86_64  Down  /data1/tidb-data/pump-8250  /data1/tidb-deploy/pump-8250
172.16.201.153:4000  tidb  172.16.201.153  4000/10080  linux/x86_64  Down  -  /data1/tidb-deploy/tidb-4000
172.16.201.154:4000  tidb  172.16.201.154  4000/10080  linux/x86_64  Down  -  /data1/tidb-deploy/tidb-4000
172.16.201.155:4000  tidb  172.16.201.155  4000/10080  linux/x86_64  Down  -  /data1/tidb-deploy/tidb-4000
172.16.201.153:9000  tiflash  172.16.201.153  9000/8123/3930/20170/20292/8234  linux/x86_64  N/A  /data1/tidb-data/tiflash-9000  /data1/tidb-deploy/tiflash-9000
172.16.201.155:9000  tiflash  172.16.201.155  9000/8123/3930/20170/20292/8234  linux/x86_64  N/A  /data1/tidb-data/tiflash-9000  /data1/tidb-deploy/tiflash-9000
172.16.201.154:20160  tikv  172.16.201.154  20160/20180  linux/x86_64  N/A  /data1/tidb-data/tikv-20160  /data1/tidb-deploy/tikv-20160
172.16.201.155:20160  tikv  172.16.201.155  20160/20180  linux/x86_64  N/A  /data1/tidb-data/tikv-20160  /data1/tidb-deploy/tikv-20160
Total nodes: 18

[tidb@vm172-16-201-64 tidb-dev]$ tiup cluster start tidb-dev -R pd
tiup is checking updates for component cluster ...
Starting component `cluster`: /home/tidb/.tiup/components/cluster/v1.9.5/tiup-cluster /home/tidb/.tiup/components/cluster/v1.9.5/tiup-cluster start tidb-dev -R pd
Starting cluster tidb-dev...
+ [ Serial ] - SSHKeySet: privateKey=/home/tidb/.tiup/storage/cluster/clusters/tidb-dev/ssh/id_rsa, publicKey=/home/tidb/.tiup/storage/cluster/clusters/tidb-dev/ssh/id_rsa.pub
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.153
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.154
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.153
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.155
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.154
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.155
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.153
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.154
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.155
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.155
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.153
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.154
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.155
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.155
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.153
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.153
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.153
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.153
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.153
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.153
+ [ Serial ] - StartCluster
Starting component pd
  Starting instance 172.16.201.155:2379
  Starting instance 172.16.201.154:2379
  Starting instance 172.16.201.153:2379
  Start instance 172.16.201.153:2379 success
  Start instance 172.16.201.154:2379 success
  Start instance 172.16.201.155:2379 success
Starting component node_exporter
  Starting instance 172.16.201.155
  Starting instance 172.16.201.153
  Starting instance 172.16.201.154
  Start 172.16.201.153 success
  Start 172.16.201.154 success
  Start 172.16.201.155 success
Starting component blackbox_exporter
  Starting instance 172.16.201.155
  Starting instance 172.16.201.153
  Starting instance 172.16.201.154
  Start 172.16.201.153 success
  Start 172.16.201.154 success
  Start 172.16.201.155 success
+ [ Serial ] - UpdateTopology: cluster=tidb-dev
Started cluster `tidb-dev` successfully

[tidb@vm172-16-201-64 tidb-dev]$ tiup cluster reload tidb-dev -R pd
tiup is checking updates for component cluster ...
Starting component `cluster`: /home/tidb/.tiup/components/cluster/v1.9.5/tiup-cluster /home/tidb/.tiup/components/cluster/v1.9.5/tiup-cluster reload tidb-dev -R pd
Will reload the cluster tidb-dev with restart policy is true, nodes: , roles: pd.
Do you want to continue? [y/N]:(default=N) y
+ [ Serial ] - SSHKeySet: privateKey=/home/tidb/.tiup/storage/cluster/clusters/tidb-dev/ssh/id_rsa, publicKey=/home/tidb/.tiup/storage/cluster/clusters/tidb-dev/ssh/id_rsa.pub
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.153
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.154
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.153
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.153
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.155
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.154
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.155
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.153
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.155
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.153
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.153
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.153
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.155
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.153
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.153
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.153
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.155
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.154
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.155
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.154
+ [ Serial ] - UpdateTopology: cluster=tidb-dev
+ Refresh instance configs
  - Generate config pd -> 172.16.201.153:2379 ... Done
  - Generate config pd -> 172.16.201.154:2379 ... Done
  - Generate config pd -> 172.16.201.155:2379 ... Done
  - Generate config tikv -> 172.16.201.155:20160 ... Done
  - Generate config tikv -> 172.16.201.153:20160 ... Done
  - Generate config tikv -> 172.16.201.154:20160 ... Done
  - Generate config pump -> 172.16.201.153:8250 ... Done
  - Generate config pump -> 172.16.201.155:8250 ... Done
  - Generate config pump -> 172.16.201.154:8250 ... Done
  - Generate config tidb -> 172.16.201.155:4000 ... Done
  - Generate config tidb -> 172.16.201.153:4000 ... Done
  - Generate config tidb -> 172.16.201.154:4000 ... Done
  - Generate config tiflash -> 172.16.201.155:9000 ... Done
  - Generate config tiflash -> 172.16.201.153:9000 ... Done
  - Generate config drainer -> 172.16.201.153:8249 ... Done
  - Generate config cdc -> 172.16.201.153:8300 ... Done
  - Generate config cdc -> 172.16.201.155:8300 ... Done
  - Generate config prometheus -> 172.16.201.153:9090 ... Done
  - Generate config grafana -> 172.16.201.153:3000 ... Done
  - Generate config alertmanager -> 172.16.201.153:9093 ... Done
+ Refresh monitor configs
  - Generate config node_exporter -> 172.16.201.153 ... Done
  - Generate config node_exporter -> 172.16.201.154 ... Done
  - Generate config node_exporter -> 172.16.201.155 ... Done
  - Generate config blackbox_exporter -> 172.16.201.153 ... Done
  - Generate config blackbox_exporter -> 172.16.201.154 ... Done
  - Generate config blackbox_exporter -> 172.16.201.155 ... Done
+ [ Serial ] - Upgrade Cluster
Upgrading component pd
  Restarting instance 172.16.201.153:2379
  Restart instance 172.16.201.153:2379 success
  Restarting instance 172.16.201.155:2379
  Restart instance 172.16.201.155:2379 success
  Restarting instance 172.16.201.154:2379
  Restart instance 172.16.201.154:2379 success
Stopping component node_exporter
  Stopping instance 172.16.201.155
  Stopping instance 172.16.201.153
  Stopping instance 172.16.201.154
  Stop 172.16.201.153 success
  Stop 172.16.201.154 success
  Stop 172.16.201.155 success
Stopping component blackbox_exporter
  Stopping instance 172.16.201.155
  Stopping instance 172.16.201.153
  Stopping instance 172.16.201.154
  Stop 172.16.201.153 success
  Stop 172.16.201.154 success
  Stop 172.16.201.155 success
Starting component node_exporter
  Starting instance 172.16.201.155
  Starting instance 172.16.201.153
  Starting instance 172.16.201.154
  Start 172.16.201.153 success
  Start 172.16.201.154 success
  Start 172.16.201.155 success
Starting component blackbox_exporter
  Starting instance 172.16.201.155
  Starting instance 172.16.201.153
  Starting instance 172.16.201.154
  Start 172.16.201.153 success
  Start 172.16.201.154 success
  Start 172.16.201.155 success
Reloaded cluster `tidb-dev` successfully
10. Confirm the PD startup status
[tidb@vm172-16-201-64 tidb-dev]$ tiup cluster display tidb-dev
tiup is checking updates for component cluster ...
Starting component `cluster`: /home/tidb/.tiup/components/cluster/v1.9.5/tiup-cluster /home/tidb/.tiup/components/cluster/v1.9.5/tiup-cluster display tidb-dev
Cluster type:       tidb
Cluster name:       tidb-dev
Cluster version:    v5.4.1
Deploy user:        tidb
SSH type:           builtin
Dashboard URL:      http://172.16.201.154:2379/dashboard
ID  Role  Host  Ports  OS/Arch  Status  Data Dir  Deploy Dir
--  ----  ----  -----  -------  ------  --------  ----------
172.16.201.153:9093  alertmanager  172.16.201.153  9093/9094  linux/x86_64  Down  /data1/tidb-data/alertmanager-9093  /data1/tidb-deploy/alertmanager-9093
172.16.201.153:8300  cdc  172.16.201.153  8300  linux/x86_64  Down  /data1/tidb-data/cdc-8300  /data1/tidb-deploy/cdc-8300
172.16.201.155:8300  cdc  172.16.201.155  8300  linux/x86_64  Down  /data1/tidb-data/cdc-8300  /data1/tidb-deploy/cdc-8300
172.16.201.153:8249  drainer  172.16.201.153  8249  linux/x86_64  Down  /data1/binlog  /data1/tidb-deploy/drainer-8249
172.16.201.153:3000  grafana  172.16.201.153  3000  linux/x86_64  Down  -  /data1/tidb-deploy/grafana-3000
172.16.201.153:2379  pd  172.16.201.153  2379/2380  linux/x86_64  Up|L  /data1/tidb-data/pd-2379  /data1/tidb-deploy/pd-2379
172.16.201.154:2379  pd  172.16.201.154  2379/2380  linux/x86_64  Up|UI  /data1/tidb-data/pd-2379  /data1/tidb-deploy/pd-2379
172.16.201.155:2379  pd  172.16.201.155  2379/2380  linux/x86_64  Up  /data1/tidb-data/pd-2379  /data1/tidb-deploy/pd-2379
172.16.201.153:9090  prometheus  172.16.201.153  9090/12020  linux/x86_64  Down  /data1/tidb-data/prometheus-9090  /data1/tidb-deploy/prometheus-9090
172.16.201.153:8250  pump  172.16.201.153  8250  linux/x86_64  Down  /data1/tidb-data/pump-8250  /data1/tidb-deploy/pump-8250
172.16.201.154:8250  pump  172.16.201.154  8250  linux/x86_64  Down  /data1/tidb-data/pump-8250  /data1/tidb-deploy/pump-8250
172.16.201.155:8250  pump  172.16.201.155  8250  linux/x86_64  Down  /data1/tidb-data/pump-8250  /data1/tidb-deploy/pump-8250
172.16.201.153:4000  tidb  172.16.201.153  4000/10080  linux/x86_64  Down  -  /data1/tidb-deploy/tidb-4000
172.16.201.154:4000  tidb  172.16.201.154  4000/10080  linux/x86_64  Down  -  /data1/tidb-deploy/tidb-4000
172.16.201.155:4000  tidb  172.16.201.155  4000/10080  linux/x86_64  Down  -  /data1/tidb-deploy/tidb-4000
172.16.201.153:9000  tiflash  172.16.201.153  9000/8123/3930/20170/20292/8234  linux/x86_64  Down  /data1/tidb-data/tiflash-9000  /data1/tidb-deploy/tiflash-9000
172.16.201.155:9000  tiflash  172.16.201.155  9000/8123/3930/20170/20292/8234  linux/x86_64  Down  /data1/tidb-data/tiflash-9000  /data1/tidb-deploy/tiflash-9000
172.16.201.153:20160  tikv  172.16.201.153  20160/20180  linux/x86_64  Down  /data1/tidb-data/tikv-20160  /data1/tidb-deploy/tikv-20160
172.16.201.154:20160  tikv  172.16.201.154  20160/20180  linux/x86_64  Down  /data1/tidb-data/tikv-20160  /data1/tidb-deploy/tikv-20160
172.16.201.155:20160  tikv  172.16.201.155  20160/20180  linux/x86_64  Down  /data1/tidb-data/tikv-20160  /data1/tidb-deploy/tikv-20160
Total nodes: 20
11. Recover the PD cluster with pd-recover
Note: run pd-recover against one of the new PD IPs, and keep the pd-recover version in line with the cluster version.
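tiup downloads the pd-recover component on first use; to be sure the binary matches the cluster version (v5.4.1 here), the component version can be pinned explicitly. A sketch with the same arguments used in the command below:

```bash
# Pin pd-recover to the cluster version and point it at one of the new PD endpoints
tiup pd-recover:v5.4.1 \
  -endpoints http://172.16.201.153:2379 \
  -cluster-id 7158355689478888528 \
  -alloc-id 20000
```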
[tidb@vm172-16-201-64 tidb-dev]$ tiup pd-recover -endpoints http://172.16.201.153:2379 -cluster-id 7158355689478888528 -alloc-id 20000
tiup is checking updates for component pd-recover ...
Starting component `pd-recover`: /home/tidb/.tiup/components/pd-recover/v5.4.1/pd-recover /home/tidb/.tiup/components/pd-recover/v5.4.1/pd-recover -endpoints http://172.16.201.153:2379 -cluster-id 7158355689478888528 -alloc-id 20000
recover success! please restart the PD cluster
[tidb@vm172-16-201-64 tidb-dev]$
12. Reload the new cluster configuration
[tidb@vm172-16-201-64 tidb-dev]$ tiup cluster reload tidb-dev --skip-restart
tiup is checking updates for component cluster ...
Starting component `cluster`: /home/tidb/.tiup/components/cluster/v1.9.5/tiup-cluster /home/tidb/.tiup/components/cluster/v1.9.5/tiup-cluster reload tidb-dev --skip-restart
Will reload the cluster tidb-dev with restart policy is false, nodes: , roles: .
Do you want to continue? [y/N]:(default=N) y
+ [ Serial ] - SSHKeySet: privateKey=/home/tidb/.tiup/storage/cluster/clusters/tidb-dev/ssh/id_rsa, publicKey=/home/tidb/.tiup/storage/cluster/clusters/tidb-dev/ssh/id_rsa.pub
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.153
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.154
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.153
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.154
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.155
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.153
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.154
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.155
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.153
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.155
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.153
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.154
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.153
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.155
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.153
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.155
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.153
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.155
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.153
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.153
+ Refresh instance configs
  - Generate config pd -> 172.16.201.153:2379 ... Done
  - Generate config pd -> 172.16.201.154:2379 ... Done
  - Generate config pd -> 172.16.201.155:2379 ... Done
  - Generate config tikv -> 172.16.201.155:20160 ... Done
  - Generate config tikv -> 172.16.201.153:20160 ... Done
  - Generate config tikv -> 172.16.201.154:20160 ... Done
  - Generate config pump -> 172.16.201.153:8250 ... Done
  - Generate config pump -> 172.16.201.155:8250 ... Done
  - Generate config pump -> 172.16.201.154:8250 ... Done
  - Generate config tidb -> 172.16.201.155:4000 ... Done
  - Generate config tidb -> 172.16.201.153:4000 ... Done
  - Generate config tidb -> 172.16.201.154:4000 ... Done
  - Generate config tiflash -> 172.16.201.155:9000 ... Done
  - Generate config tiflash -> 172.16.201.153:9000 ... Done
  - Generate config drainer -> 172.16.201.153:8249 ... Done
  - Generate config cdc -> 172.16.201.153:8300 ... Done
  - Generate config cdc -> 172.16.201.155:8300 ... Done
  - Generate config prometheus -> 172.16.201.153:9090 ... Done
  - Generate config grafana -> 172.16.201.153:3000 ... Done
  - Generate config alertmanager -> 172.16.201.153:9093 ... Done
+ Refresh monitor configs
  - Generate config node_exporter -> 172.16.201.153 ... Done
  - Generate config node_exporter -> 172.16.201.154 ... Done
  - Generate config node_exporter -> 172.16.201.155 ... Done
  - Generate config blackbox_exporter -> 172.16.201.153 ... Done
  - Generate config blackbox_exporter -> 172.16.201.154 ... Done
  - Generate config blackbox_exporter -> 172.16.201.155 ... Done
Reloaded cluster `tidb-dev` successfully
13. Restart the cluster
[tidb@vm172-16-201-64 tidb-dev]$ tiup cluster restart tidb-dev
tiup is checking updates for component cluster ...
Starting component `cluster`: /home/tidb/.tiup/components/cluster/v1.9.5/tiup-cluster /home/tidb/.tiup/components/cluster/v1.9.5/tiup-cluster restart tidb-dev
Will restart the cluster tidb-dev with nodes: roles: .
Cluster will be unavailable
Do you want to continue? [y/N]:(default=N) y
+ [ Serial ] - SSHKeySet: privateKey=/home/tidb/.tiup/storage/cluster/clusters/tidb-dev/ssh/id_rsa, publicKey=/home/tidb/.tiup/storage/cluster/clusters/tidb-dev/ssh/id_rsa.pub
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.153
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.155
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.154
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.154
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.155
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.154
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.155
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.153
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.154
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.155
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.153
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.155
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.153
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.153
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.153
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.153
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.153
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.155
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.153
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.153
+ [ Serial ] - RestartCluster
Stopping component alertmanager
  Stopping instance 172.16.201.153
  Stop alertmanager 172.16.201.153:9093 success
Stopping component grafana
  Stopping instance 172.16.201.153
  Stop grafana 172.16.201.153:3000 success
Stopping component prometheus
  Stopping instance 172.16.201.153
  Stop prometheus 172.16.201.153:9090 success
Stopping component cdc
  Stopping instance 172.16.201.155
  Stopping instance 172.16.201.153
  Stop cdc 172.16.201.153:8300 success
  Stop cdc 172.16.201.155:8300 success
Stopping component drainer
  Stopping instance 172.16.201.153
  Stop drainer 172.16.201.153:8249 success
Stopping component tiflash
  Stopping instance 172.16.201.153
  Stopping instance 172.16.201.155
  Stop tiflash 172.16.201.153:9000 success
  Stop tiflash 172.16.201.155:9000 success
Stopping component tidb
  Stopping instance 172.16.201.154
  Stopping instance 172.16.201.153
  Stopping instance 172.16.201.155
  Stop tidb 172.16.201.153:4000 success
  Stop tidb 172.16.201.154:4000 success
  Stop tidb 172.16.201.155:4000 success
Stopping component pump
  Stopping instance 172.16.201.154
  Stopping instance 172.16.201.153
  Stopping instance 172.16.201.155
  Stop pump 172.16.201.153:8250 success
  Stop pump 172.16.201.154:8250 success
  Stop pump 172.16.201.155:8250 success
Stopping component tikv
  Stopping instance 172.16.201.154
  Stopping instance 172.16.201.155
  Stopping instance 172.16.201.153
  Stop tikv 172.16.201.153:20160 success
  Stop tikv 172.16.201.154:20160 success
  Stop tikv 172.16.201.155:20160 success
Stopping component pd
  Stopping instance 172.16.201.155
  Stopping instance 172.16.201.154
  Stopping instance 172.16.201.153
  Stop pd 172.16.201.154:2379 success
  Stop pd 172.16.201.153:2379 success
  Stop pd 172.16.201.155:2379 success
Stopping component node_exporter
  Stopping instance 172.16.201.155
  Stopping instance 172.16.201.153
  Stopping instance 172.16.201.154
  Stop 172.16.201.153 success
  Stop 172.16.201.154 success
  Stop 172.16.201.155 success
Stopping component blackbox_exporter
  Stopping instance 172.16.201.155
  Stopping instance 172.16.201.154
  Stopping instance 172.16.201.153
  Stop 172.16.201.153 success
  Stop 172.16.201.154 success
  Stop 172.16.201.155 success
Starting component pd
  Starting instance 172.16.201.155:2379
  Starting instance 172.16.201.153:2379
  Starting instance 172.16.201.154:2379
  Start instance 172.16.201.153:2379 success
  Start instance 172.16.201.154:2379 success
  Start instance 172.16.201.155:2379 success
Starting component tikv
  Starting instance 172.16.201.154:20160
  Starting instance 172.16.201.155:20160
  Starting instance 172.16.201.153:20160
  Start instance 172.16.201.153:20160 success
  Start instance 172.16.201.154:20160 success
  Start instance 172.16.201.155:20160 success
Starting component pump
  Starting instance 172.16.201.154:8250
  Starting instance 172.16.201.153:8250
  Starting instance 172.16.201.155:8250
  Start instance 172.16.201.153:8250 success
  Start instance 172.16.201.154:8250 success
  Start instance 172.16.201.155:8250 success
Starting component tidb
  Starting instance 172.16.201.154:4000
  Starting instance 172.16.201.155:4000
  Starting instance 172.16.201.153:4000
  Start instance 172.16.201.153:4000 success
  Start instance 172.16.201.154:4000 success
  Start instance 172.16.201.155:4000 success
Starting component tiflash
  Starting instance 172.16.201.155:9000
  Starting instance 172.16.201.153:9000
  Start instance 172.16.201.153:9000 success
  Start instance 172.16.201.155:9000 success
Starting component drainer
  Starting instance 172.16.201.153:8249
  Start instance 172.16.201.153:8249 success
Starting component cdc
  Starting instance 172.16.201.155:8300
  Starting instance 172.16.201.153:8300
  Start instance 172.16.201.153:8300 success
  Start instance 172.16.201.155:8300 success
Starting component prometheus
  Starting instance 172.16.201.153:9090
  Start instance 172.16.201.153:9090 success
Starting component grafana
  Starting instance 172.16.201.153:3000
  Start instance 172.16.201.153:3000 success
Starting component alertmanager
  Starting instance 172.16.201.153:9093
  Start instance 172.16.201.153:9093 success
Starting component node_exporter
  Starting instance 172.16.201.154
  Starting instance 172.16.201.155
  Starting instance 172.16.201.153
  Start 172.16.201.153 success
  Start 172.16.201.154 success
  Start 172.16.201.155 success
Starting component blackbox_exporter
  Starting instance 172.16.201.154
  Starting instance 172.16.201.155
  Starting instance 172.16.201.153
  Start 172.16.201.153 success
  Start 172.16.201.154 success
  Start 172.16.201.155 success
Restarted cluster `tidb-dev` successfully
14. Confirm the cluster status
[tidb@vm172-16-201-64 tidb-dev]$ tiup cluster display tidb-dev
tiup is checking updates for component cluster ...
Starting component `cluster`: /home/tidb/.tiup/components/cluster/v1.9.5/tiup-cluster /home/tidb/.tiup/components/cluster/v1.9.5/tiup-cluster display tidb-dev
Cluster type:       tidb
Cluster name:       tidb-dev
Cluster version:    v5.4.1
Deploy user:        tidb
SSH type:           builtin
Dashboard URL:      http://172.16.201.154:2379/dashboard
ID  Role  Host  Ports  OS/Arch  Status  Data Dir  Deploy Dir
--  ----  ----  -----  -------  ------  --------  ----------
172.16.201.153:9093  alertmanager  172.16.201.153  9093/9094  linux/x86_64  Up  /data1/tidb-data/alertmanager-9093  /data1/tidb-deploy/alertmanager-9093
172.16.201.153:8300  cdc  172.16.201.153  8300  linux/x86_64  Up  /data1/tidb-data/cdc-8300  /data1/tidb-deploy/cdc-8300
172.16.201.155:8300  cdc  172.16.201.155  8300  linux/x86_64  Up  /data1/tidb-data/cdc-8300  /data1/tidb-deploy/cdc-8300
172.16.201.153:8249  drainer  172.16.201.153  8249  linux/x86_64  Up  /data1/binlog  /data1/tidb-deploy/drainer-8249
172.16.201.153:3000  grafana  172.16.201.153  3000  linux/x86_64  Up  -  /data1/tidb-deploy/grafana-3000
172.16.201.153:2379  pd  172.16.201.153  2379/2380  linux/x86_64  Up  /data1/tidb-data/pd-2379  /data1/tidb-deploy/pd-2379
172.16.201.154:2379  pd  172.16.201.154  2379/2380  linux/x86_64  Up|UI  /data1/tidb-data/pd-2379  /data1/tidb-deploy/pd-2379
172.16.201.155:2379  pd  172.16.201.155  2379/2380  linux/x86_64  Up|L  /data1/tidb-data/pd-2379  /data1/tidb-deploy/pd-2379
172.16.201.153:9090  prometheus  172.16.201.153  9090/12020  linux/x86_64  Down  /data1/tidb-data/prometheus-9090  /data1/tidb-deploy/prometheus-9090
172.16.201.153:8250  pump  172.16.201.153  8250  linux/x86_64  Up  /data1/tidb-data/pump-8250  /data1/tidb-deploy/pump-8250
172.16.201.154:8250  pump  172.16.201.154  8250  linux/x86_64  Up  /data1/tidb-data/pump-8250  /data1/tidb-deploy/pump-8250
172.16.201.155:8250  pump  172.16.201.155  8250  linux/x86_64  Up  /data1/tidb-data/pump-8250  /data1/tidb-deploy/pump-8250
172.16.201.153:4000  tidb  172.16.201.153  4000/10080  linux/x86_64  Up  -  /data1/tidb-deploy/tidb-4000
172.16.201.154:4000  tidb  172.16.201.154  4000/10080  linux/x86_64  Up  -  /data1/tidb-deploy/tidb-4000
172.16.201.155:4000  tidb  172.16.201.155  4000/10080  linux/x86_64  Up  -  /data1/tidb-deploy/tidb-4000
172.16.201.153:9000  tiflash  172.16.201.153  9000/8123/3930/20170/20292/8234  linux/x86_64  Up  /data1/tidb-data/tiflash-9000  /data1/tidb-deploy/tiflash-9000
172.16.201.155:9000  tiflash  172.16.201.155  9000/8123/3930/20170/20292/8234  linux/x86_64  Up  /data1/tidb-data/tiflash-9000  /data1/tidb-deploy/tiflash-9000
172.16.201.153:20160  tikv  172.16.201.153  20160/20180  linux/x86_64  Up  /data1/tidb-data/tikv-20160  /data1/tidb-deploy/tikv-20160
172.16.201.154:20160  tikv  172.16.201.154  20160/20180  linux/x86_64  Up  /data1/tidb-data/tikv-20160  /data1/tidb-deploy/tikv-20160
172.16.201.155:20160  tikv  172.16.201.155  20160/20180  linux/x86_64  Up  /data1/tidb-data/tikv-20160  /data1/tidb-deploy/tikv-20160
Total nodes: 20
[tidb@vm172-16-201-64 tidb-dev]$
15. Check the cluster
Check monitoring and the Dashboard, and run a few queries to confirm the data is intact; two command-line checks are sketched below.
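Besides Grafana and the Dashboard, it can help to ask PD for the store list (all TiKV/TiFlash stores should be Up and registered under the new addresses) and to run a query through one of the new TiDB endpoints. A sketch, assuming pd-ctl is invoked through tiup and the root password is unchanged:

```bash
# Stores should all be Up and show the new 172.16.201.15x addresses
tiup ctl:v5.4.1 pd -u http://172.16.201.153:2379 store

# Simple query through one of the new TiDB endpoints
mysql -h 172.16.201.153 -P 4000 -u root -p -e "SELECT tidb_version();"
```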
Copyright notice: this is an original article by the InfoQ author TiDB 社区干货传送门. Original link: http://xie.infoq.cn/article/d2cef943a99ac15ec21de35b4. Please contact the author before reprinting.