
TiDB Migration, Upgrade, and Case Study (TiDB v4.0.11 → v6.5.1)

  • 2023-04-21 · Beijing

Author: xingzhenxiang. Original source: https://tidb.net/blog/110d9839

Environment

Cluster version:    v4.0.11


Deployment Topology

[screenshot omitted]

Data Size

[screenshots omitted]

Main Tasks

  1. Upgrade the cluster to v6.5.1

  2. Replace the aging machines Node1-Node3 and optimize the deployment topology

Upgrade Testing

Does the configuration file need changes?

Scale the production configuration file down to 3 TiDB + 3 PD + 3 TiKV, install v4.0.11 with it, and then upgrade to v6.5.1.


This verifies whether anything in the configuration file needs to change across versions.
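As a rough illustration, the scaled-down topology has the shape sketched below; the hosts and directories are placeholders, not the production values:

```yaml
# Minimal 3 TiDB + 3 PD + 3 TiKV topology for the upgrade rehearsal (placeholder hosts)
global:
  user: tidb
  deploy_dir: /tidb-deploy
  data_dir: /tidb-data

pd_servers:
  - host: 10.0.0.1
  - host: 10.0.0.2
  - host: 10.0.0.3

tidb_servers:
  - host: 10.0.0.1
  - host: 10.0.0.2
  - host: 10.0.0.3

tikv_servers:
  - host: 10.0.0.4
  - host: 10.0.0.5
  - host: 10.0.0.6
```

Deploy it with `tiup cluster deploy tidb-test v4.0.11 ./topology.yaml --user root -p`, then run `tiup cluster upgrade tidb-test v6.5.1` and watch for configuration keys that the new version rejects or renames.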

Monitoring Migration Test

We had never migrated the monitoring components of a production system before, so to keep the process safe and controllable, we rehearsed the monitoring migration in the test environment first.


Main commands:


```
tiup cluster scale-in tidb-test -N XXX.XXX.XXX.XXX:9093,XXX.XXX.XXX.XXX:3000,XXX.XXX.XXX.XXX:9090
tiup cluster check ./single-monitor.yaml --user root -p    # the check syntax differs between tiup versions
tiup cluster scale-out tidb-test single-monitor.yaml --user root -p
```
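For reference, the `single-monitor.yaml` used here would have roughly this shape; a sketch in which the host is a placeholder and the ports are the defaults seen in this deployment:

```yaml
# single-monitor.yaml -- sketch of the scale-out topology for the new monitoring host
monitoring_servers:
  - host: XXX.XXX.XXX.XXX
    port: 9090
grafana_servers:
  - host: XXX.XXX.XXX.XXX
    port: 3000
alertmanager_servers:
  - host: XXX.XXX.XXX.XXX
    web_port: 9093
```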


Problem encountered: some data no longer displayed after the migration.


Displayed before the migration:



[screenshot omitted]


Displayed after the migration:



[screenshot omitted]


Cause and fix: in the test environment, the clock on the server hosting the new monitoring components was about 5 minutes behind; the display returned to normal once time was synchronized.
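A quick way to catch this class of problem before migrating monitoring is to check the clock offset on the new host; a minimal sketch, assuming chrony is the time daemon:

```bash
# How far the local clock is from NTP time (chrony assumed; use ntpstat for ntpd)
chronyc tracking | grep "System time"
# Step the clock immediately if the offset is large
# chronyc makestep
```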

Upgrade Test with Data

Following the standard provided by the application team, we imported a portion of production data, connected the application to the test environment, ran an upgrade, and watched for anything that did not adapt well.

Production Machine Resource Adjustment

Migrating the TiKV servers by scaling out and scaling in

Main commands:


```
tiup cluster check ./scale-out20230301.yaml --user root -p
tiup cluster scale-out <cluster-name> scale-out20230301.yaml --user root -p
tiup cluster scale-in <cluster-name> --node XXX.XXX.XXX.60:20160,XXX.XXX.XXX.61:20160,XXX.XXX.XXX.62:20160
```
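While the old stores drain, their state can be watched from both TiUP and PD; a sketch, with the pd-ctl version tag and PD address as placeholders:

```bash
# The old stores should move Up -> Offline -> Tombstone as regions migrate off them
tiup cluster display <cluster-name>

# The same store states straight from PD, via pd-ctl
tiup ctl:v4.0.11 pd -u http://<pd_ip>:2379 store
```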


Monitoring changes while the old stores were decommissioned:



[screenshot omitted]


Problem encountered: `tiup cluster display` showed no Tombstone stores, but the monitoring dashboards still did.


Fix:


```
curl -X DELETE {pd_leader_ip}:2379/pd/api/v1/stores/remove-tombstone
```
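Listing the stores through the same PD API afterwards should show that no Tombstone entries remain, for example:

```bash
# No store in the output should still have state_name "Tombstone"
curl http://{pd_leader_ip}:2379/pd/api/v1/stores
```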

Migrating the TiDB servers by scaling out and scaling in

Since TiDB servers are stateless, they can simply be scaled out and then scaled in.


Main commands:


```
tiup cluster check ./tidb-scale-out2023031701.yaml --user root -p
tiup cluster scale-out <cluster-name> tidb-scale-out2023031701.yaml --user root -p
tiup cluster scale-in <cluster-name> --node XXX.XXX.XXX.XXX:3306
```
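Before scaling in a TiDB server it is worth confirming that traffic has already been drained from it; a minimal sketch, assuming direct MySQL access to the node being removed:

```bash
# Sessions still connected to the node about to be scaled in (TiDB listens on 3306 in this deployment)
mysql -h XXX.XXX.XXX.XXX -P 3306 -u root -p -e "SHOW PROCESSLIST;"
```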

Migrating the PD servers by scaling out and scaling in

Note from the official documentation on PD server migration:



[screenshot omitted]


Main commands:


```
tiup cluster check ./pd-scale-out2023031601.yaml --user root -p
tiup cluster scale-out <cluster-name> pd-scale-out2023031601.yaml --user root -p
tiup cluster scale-in <cluster-name> --node XXX.XXX.XXX.60:2379
```


Scaling in the old PD leader:


I moved the PD leader onto one of the new machines with a reload. Try a reload first; if the leader does not switch over automatically, transfer it manually (a sketch follows).
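The manual transfer goes through pd-ctl; a sketch, with the version tag, PD address, and member name as placeholders:

```bash
# Check which PD member currently holds the leadership
tiup ctl:v4.0.11 pd -u http://<pd_ip>:2379 member leader show

# Hand the leadership to a PD member on one of the new machines
tiup ctl:v4.0.11 pd -u http://<pd_ip>:2379 member leader transfer <new-pd-name>
```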


Monitoring Component Migration

Same steps as in the test scenario.

Deployment After the Adjustment

Production Upgrade

First upgrade TiUP and its cluster component (a tiup version of at least 1.11.0 is recommended):

```
tiup update --self
tiup --version
tiup update cluster
tiup cluster --version
```

Check the health of the current cluster:

```
tiup cluster check <cluster-name> --cluster
```


Other pre-upgrade checks

  • Update the configuration file, applying the parameter changes identified during testing.
  • Confirm that no DDL jobs or Backup tasks are running in the cluster.
  • Record table row counts so the result can be verified after the upgrade (a sketch of these checks follows).
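All of these checks can be run through any TiDB server; a sketch with placeholder connection details and table names:

```bash
# No DDL job should be in a running state
mysql -h <tidb_ip> -P 3306 -u root -p -e "ADMIN SHOW DDL JOBS;"

# No BR backup or restore should be in flight
mysql -h <tidb_ip> -P 3306 -u root -p -e "SHOW BACKUPS; SHOW RESTORES;"

# Snapshot row counts of key tables for post-upgrade verification
mysql -h <tidb_ip> -P 3306 -u root -p -e "SELECT COUNT(*) FROM db_name.table_name;"
```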

Perform the rolling (no-downtime) upgrade

```
tiup cluster upgrade <cluster-name> v6.5.1
```


```
[09:52:11][tidb@Node1 ~]$ tiup cluster check <cluster-name> --cluster
[09:52:11]tiup is checking updates for component cluster ...
[09:52:11]Starting component `cluster`: /home/tidb/.tiup/components/cluster/v1.11.3/tiup-cluster check <cluster-name> --cluster
[09:52:13]Run command on XXX.XXX.XXX.68(sudo:false): /tmp/tiup/bin/insight
[09:52:13]Run command on XXX.XXX.XXX.67(sudo:false): /tmp/tiup/bin/insight
... (insight, `cat /etc/security/limits.conf`, and `sudo sysctl -a` run the same way on XXX.XXX.XXX.97/98/99/101/102/103) ...
[09:52:22]Node             Check         Result  Message
[09:52:22]----             -----         ------  -------
[09:52:22]XXX.XXX.XXX.103  os-version    Pass    OS is CentOS Linux 7 (Core) 7.9.2009
[09:52:22]XXX.XXX.XXX.103  cpu-governor  Warn    Unable to determine current CPU frequency governor policy
[09:52:22]XXX.XXX.XXX.103  thp           Pass    THP is disabled
[09:52:22]XXX.XXX.XXX.103  timezone      Pass    time zone is the same as the first PD machine: Asia/Shanghai
[09:52:22]XXX.XXX.XXX.103  permission    Pass    /tikv1/tidb-data/tikv-20160 is writable
[09:52:22]XXX.XXX.XXX.103  network       Pass    network speed of bond0 is 20000MB
[09:52:22]XXX.XXX.XXX.103  selinux       Pass    SELinux is disabled
[09:52:22]XXX.XXX.XXX.103  command       Pass    numactl: policy: default
[09:52:22]XXX.XXX.XXX.103  cpu-cores     Pass    number of CPU cores / threads: 72
[09:52:22]XXX.XXX.XXX.103  memory        Pass    memory size is 131072MB
... (the remaining permission/network/timezone/selinux/thp/cpu/memory checks on XXX.XXX.XXX.67/68/97/98/99/101/102 all Pass; the only warning on every host is the cpu-governor check) ...
[09:52:22]Checking region status of the cluster <cluster-name>...
[09:52:22]All regions are healthy.

[09:53:35][tidb@Node1 ~]$ tiup cluster upgrade <cluster-name> v6.5.1
[09:53:35]tiup is checking updates for component cluster ...
[09:53:35]Starting component `cluster`: /home/tidb/.tiup/components/cluster/v1.11.3/tiup-cluster upgrade <cluster-name> v6.5.1
[09:53:35]Before the upgrade, it is recommended to read the upgrade guide at https://docs.pingcap.com/tidb/stable/upgrade-tidb-using-tiup and finish the preparation steps.
[09:53:35]This operation will upgrade tidb v4.0.11 cluster to v6.5.1.
[09:53:59]Do you want to continue? [y/N]:(default=N) y
[09:53:59]Upgrading cluster...
[09:53:59]+ [ Serial ] - SSHKeySet: privateKey=/home/tidb/.tiup/storage/cluster/clusters//ssh/id_rsa, publicKey=/home/tidb/.tiup/storage/cluster/clusters//ssh/id_rsa.pub
[09:53:59]+ [Parallel] - UserSSH: user=tidb, host=XXX.XXX.XXX.67
... (UserSSH runs for every instance on XXX.XXX.XXX.67/68/97/98/99/101/102/103) ...
[09:53:59]+ [ Serial ] - Download: component=grafana, version=v6.5.1, os=linux, arch=amd64
[09:53:59]+ [ Serial ] - Download: component=tidb, version=v6.5.1, os=linux, arch=amd64
[09:53:59]+ [ Serial ] - Download: component=prometheus, version=v6.5.1, os=linux, arch=amd64
[09:53:59]+ [ Serial ] - Download: component=pd, version=v6.5.1, os=linux, arch=amd64
[09:53:59]+ [ Serial ] - Download: component=tikv, version=v6.5.1, os=linux, arch=amd64
[09:54:06]+ [ Serial ] - Download: component=alertmanager, version=, os=linux, arch=amd64
[09:54:21]+ [ Serial ] - Mkdir: host=XXX.XXX.XXX.98, directories='/tidb-data/pd-2379'
[09:54:22]+ [ Serial ] - BackupComponent: component=pd, currentVersion=v4.0.11, remote=XXX.XXX.XXX.98:/tidb-deploy/pd-2379
[09:54:22]+ [ Serial ] - CopyComponent: component=pd, version=v6.5.1, remote=XXX.XXX.XXX.98:/tidb-deploy/pd-2379 os=linux, arch=amd64
[09:54:25]+ [ Serial ] - InitConfig: cluster=, user=tidb, host=XXX.XXX.XXX.98, path=/home/tidb/.tiup/storage/cluster/clusters//config-cache/pd-2379.service, deploy_dir=/tidb-deploy/pd-2379, data_dir=[/tidb-data/pd-2379], log_dir=/tidb-deploy/pd-2379/log, cache_dir=/home/tidb/.tiup/storage/cluster/clusters//config-cache
... (the same Mkdir/BackupComponent/CopyComponent/InitConfig sequence repeats for every pd, tikv, tidb, prometheus, grafana, and alertmanager instance) ...
[09:55:34]+ [ Serial ] - UpgradeCluster
[09:55:34]Upgrading component pd
[09:55:34]    Restarting instance XXX.XXX.XXX.98:2379
[09:55:36]    Restart instance XXX.XXX.XXX.98:2379 success
[09:55:38]    Restarting instance XXX.XXX.XXX.99:2379
[09:55:40]    Restart instance XXX.XXX.XXX.99:2379 success
[09:55:49]    Restarting instance XXX.XXX.XXX.97:2379
[09:55:51]    Restart instance XXX.XXX.XXX.97:2379 success
[09:55:53]Upgrading component tikv
[09:55:53]    Evicting 7109 leaders from store XXX.XXX.XXX.67:20160...
[09:55:53]    Still waitting for 7109 store leaders to transfer...
[09:56:04]    Still waitting for 5342 store leaders to transfer...
[09:56:12]    Still waitting for 4140 store leaders to transfer...
[09:56:23]    Still waitting for 2942 store leaders to transfer...
[09:56:33]    Still waitting for 1810 store leaders to transfer...
[09:56:44]    Still waitting for 488 store leaders to transfer...
[09:56:52]    Restarting instance XXX.XXX.XXX.67:20160
[09:57:17]    Restart instance XXX.XXX.XXX.67:20160 success
... (the "Still waitting" line repeats every two seconds while leaders drain; eviction and restart then repeat for each remaining TiKV instance on XXX.XXX.XXX.67:20161-20163, XXX.XXX.XXX.68:20160-20163, and XXX.XXX.XXX.101:20160-20163; each of those stores held roughly 7,400-7,500 leaders and drained in about a minute) ...
[10:07:49]    Evicting 7478 leaders from store XXX.XXX.XXX.102:20160...
[10:07:49]    Still waitting for 7478 store leaders to transfer...
... (output truncated in the original post) ...
```
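After the rolling upgrade completes, the records taken earlier make verification straightforward; a minimal sketch with placeholder connection details:

```bash
# Every component should now report v6.5.1 with status Up
tiup cluster display <cluster-name>

# Confirm the server version, then re-run the row-count statements recorded before the upgrade
mysql -h <tidb_ip> -P 3306 -u root -p -e "SELECT tidb_version();"
```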

