
A First Look at TiDB v7.4

  • 2024-01-12, Beijing

Author: 哈喽沃德. Original source: https://tidb.net/blog/96884f9d

I. Background

As the company's business keeps growing, and given the broader trend toward domestically developed databases, we began evaluating domestic database options, weighing mainly the following factors:


  1. Data security and compliance: with today's heightened awareness of information security, our requirements for data security and compliance keep rising. Choosing a domestic database that satisfies the relevant national laws and regulations reduces the risk of data leaks and compliance violations.

  2. Performance and scalability: as the business grows, data volume and access volume will keep increasing, so we need a database with good performance and scalability to handle high-concurrency access, large-scale storage, and complex queries.

  3. Technical support and ecosystem: a domestic database backed by a stable, reliable support team and a mature ecosystem provides timely support and solutions, helping the company respond to and resolve problems quickly.

  4. Cost effectiveness: for a business, cost is one of the key considerations. Compared with foreign commercial databases, domestic databases can be more competitively priced and save on cross-border purchasing and maintenance costs.


In summary, when selecting a domestic database the company needs to weigh data security, performance, technical support, and cost effectiveness, so as to pick one that both fits its needs and is competitive. After spending some time learning about TiDB, it appeared to meet all of these criteria, so we ran a first hands-on evaluation. This post covers two modes: running via playground, and deploying a cluster on a single machine.

II. Installing TiUP Online

1. Set the environment variable. If you skip this, TiUP installs to /root/.tiup by default:

[root@dmdca /]# export TIUP_HOME=/data
[root@dmdca /]# echo $TIUP_HOME
/data
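Note that export only sets the variable for the current shell. As the tiup client step in section III shows, a fresh session loses it, so it is worth persisting right away; a minimal way to do that (the same fix the post applies later by editing .bash_profile by hand):

[root@dmdca /]# echo 'export TIUP_HOME=/data' >> /root/.bash_profile
[root@dmdca /]# source /root/.bash_profile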


2. Run the following command to install the TiUP tool:


[root@dmdca /]# curl --proto '=https' --tlsv1.2 -sSf https://tiup-mirrors.pingcap.com/install.sh | sh
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 7385k  100 7385k    0     0  7363k      0  0:00:01  0:00:01 --:--:-- 7363k
WARN: adding root certificate via internet: https://tiup-mirrors.pingcap.com/root.json
You can revoke this by remove /data/bin/7b8e153f2e2d0928.root.json
Successfully set mirror to https://tiup-mirrors.pingcap.com
Detected shell: bash
Shell profile:  /root/.bash_profile
/root/.bash_profile has been modified to add tiup to PATH
open a new terminal or source /root/.bash_profile to use it
Installed path: /data/bin/tiup


3. Set up the TiUP environment variables as follows:

[root@dmdca /]# source .bash_profile
-bash: .bash_profile: No such file or directory
[root@dmdca /]# source /root/.bash_profile


4. Confirm that TiUP is installed:

[root@dmdca /]# which tiup
/data/bin/tiup


5. Install the TiUP cluster component:

[root@dmdca /]# tiup cluster
tiup is checking updates for component cluster ...
A new version of cluster is available:
   The latest version:         v1.13.1
   Local installed version:
   Update current component:   tiup update cluster
   Update all components:      tiup update --all


The component `cluster` version is not installed; downloading from repository.
download https://tiup-mirrors.pingcap.com/cluster-v1.13.1-linux-amd64.tar.gz 8.74 MiB / 8.74 MiB 100.00% 15.06 MiB/s
Starting component `cluster`: /data/components/cluster/v1.13.1/tiup-cluster
Deploy a TiDB cluster for production


Usage:
  tiup cluster [command]


Available Commands:
  check       Perform preflight checks for the cluster.
  deploy      Deploy a cluster for production
  start       Start a TiDB cluster
  stop        Stop a TiDB cluster
  restart     Restart a TiDB cluster
  scale-in    Scale in a TiDB cluster
  scale-out   Scale out a TiDB cluster
  destroy     Destroy a specified cluster
  clean       (EXPERIMENTAL) Cleanup a specified cluster
  upgrade     Upgrade a specified TiDB cluster
  display     Display information of a TiDB cluster
  prune       Destroy and remove instances that is in tombstone state
  list        List all clusters
  audit       Show audit log of cluster operation
  import      Import an exist TiDB cluster from TiDB-Ansible
  edit-config Edit TiDB cluster config
  show-config Show TiDB cluster config
  reload      Reload a TiDB cluster's config and restart if needed
  patch       Replace the remote package with a specified package and restart the service
  rename      Rename the cluster
  enable      Enable a TiDB cluster automatically at boot
  disable     Disable automatic enabling of TiDB clusters at boot
  replay      Replay previous operation and skip successed steps
  template    Print topology template
  tls         Enable/Disable TLS between TiDB components
  meta        backup/restore meta information
  rotatessh   rotate ssh keys on all nodes
  help        Help about any command
  completion  Generate the autocompletion script for the specified shell


Flags:
  -c, --concurrency int     max number of parallel tasks allowed (default 5)
      --format string       (EXPERIMENTAL) The format of output, available values are [default, json] (default "default")
  -h, --help                help for tiup
      --ssh string          (EXPERIMENTAL) The executor type: 'builtin', 'system', 'none'.
      --ssh-timeout uint    Timeout in seconds to connect host via SSH, ignored for operations that don't need an SSH connection. (default 5)
  -v, --version             version for tiup
      --wait-timeout uint   Timeout in seconds to wait for an operation to complete, ignored for operations that don't fit. (default 120)
  -y, --yes                 Skip all confirmations and assumes 'yes'


Use "tiup cluster help [command]" for more information about a command.


6. If it is already installed, update the TiUP cluster component to the latest version:


tiup update --self && tiup update cluster

The expected output ends with "Update successfully!".


Verify the current TiUP cluster version. Run the following command to check the cluster component:

[root@dmdca /]# tiup --binary cluster
/data/components/cluster/v1.13.1/tiup-cluster
[root@dmdca /]#

III. Running in Playground Mode

A playground started this way is ephemeral: once you finish testing and stop it, TiUP cleans up the cluster data, and rerunning the command produces a brand-new cluster. To persist data, run TiUP with the --tag parameter:

tiup --tag <tag-name> playground

1. Running tiup playground directly starts the latest TiDB cluster version, with one instance each of TiDB, TiKV, PD, and TiFlash:

tiup playground


2. You can also specify the TiDB version and the number of instances of each component, with a command like:

tiup playground v7.1.1 --db 2 --pd 3 --kv 3


3. Specify the host. Without --host, the cluster is only reachable locally via 127.0.0.1; binding to 0.0.0.0 removes that restriction:

tiup playground --host 0.0.0.0


[root@dmdca /]# tiup playground --host 0.0.0.0
tiup is checking updates for component playground ...
Starting component `playground`: /data/components/playground/v1.13.1/tiup-playground --host 0.0.0.0
Using the version v7.4.0 for version constraint "".


If you'd like to use a TiDB version other than v7.4.0, cancel and retry with the following arguments:
    Specify version manually:   tiup playground <version>
    Specify version range:      tiup playground ^5
    The nightly version:        tiup playground nightly


Start pd instance:v7.4.0
Start tikv instance:v7.4.0
Start tidb instance:v7.4.0
Waiting for tidb instances ready
172.16.60.94:4000 ... Done
Start tiflash instance:v7.4.0
tiflash quit: signal: segmentation fault
Waiting for tiflash instances ready
172.16.60.94:3930 ... Error

Note that TiFlash crashed with a segmentation fault here, so this playground came up without a working TiFlash instance.


🎉 TiDB Playground Cluster is started, enjoy!


Connect TiDB:    mysql --comments --host 172.16.60.94 --port 4000 -u root
TiDB Dashboard:  http://172.16.60.94:2379/dashboard
Grafana:         http://0.0.0.0:3000


4. Open a new session to access the TiDB database. Connect to TiDB with the TiUP client:

tiup client


[root@dmdca bin]# tiup client
Please check for root manifest file, you may download one from the repository mirror, or try `tiup mirror set` to force reset it.
Error: initial repository from mirror(https://tiup-mirrors.pingcap.com/) failed: error loading manifest root.json: open /root/.tiup/bin/root.json: no such file or directory
[root@dmdca bin]# cd
[root@dmdca ~]# echo $TIUP_HOME

(the variable prints empty: TIUP_HOME is not set in this new session)


[root@dmdca ~]# pwd
/root
[root@dmdca ~]# vi .bash_profile

Add the environment variable export TIUP_HOME=/data. After the change the file looks like this:


.bash_profile:

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
    . ~/.bashrc
fi

# User specific environment and startup programs

export TIUP_HOME=/data
PATH=$PATH:$HOME/bin

export PATH
export PATH=/data/bin:$PATH


[root@dmdca ~]# source .bash_profile
[root@dmdca ~]# tiup client
tiup is checking updates for component client ...
A new version of client is available:
   The latest version:         v1.13.1
   Local installed version:
   Update current component:   tiup update client
   Update all components:      tiup update --all


The component `client` version is not installed; downloading from repository.
download https://tiup-mirrors.pingcap.com/client-v1.13.1-linux-amd64.tar.gz 4.81 MiB / 4.81 MiB 100.00% 19.43 MiB/s
Starting component `client`: /data/components/client/v1.13.1/tiup-client
Connected with driver mysql (8.0.11-TiDB-v7.4.0)
Type "help" for help.


my:root@172.16.60.94:4000=> show databases;
      Database
--------------------
 INFORMATION_SCHEMA
 METRICS_SCHEMA
 PERFORMANCE_SCHEMA
 mysql
 test
(6 rows)
my:root@172.16.60.94:4000=> use test;
USE
my:root@172.16.60.94:4000=> show tables;
 Tables_in_test
----------------
 t_test
(1 row)


my:root@172.16.60.94:4000=> select * from t_test;
 f_id |     f_name
------+------------------
    1 | 测试修改内容
    2 | 测试修改内容
    4 | test 中文测试 123
    5 | test 中文测试 123
    6 | test 中文测试 123
(5 rows)


my:root@172.16.60.94:4000=> quit


You can also connect to TiDB with a MySQL client:

mysql --host 127.0.0.1 --port 4000 -u root


[root@dmdca ~]# mysql --host 127.0.0.1 --port 4000 -u root
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MySQL connection id is 845152332
Server version: 8.0.11-TiDB-v7.4.0 TiDB Server (Apache License 2.0) Community Edition, MySQL 8.0 compatible


Copyright © 2000, 2018, Oracle, MariaDB Corporation Ab and others.


Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.


MySQL [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| INFORMATION_SCHEMA |
| METRICS_SCHEMA     |
| PERFORMANCE_SCHEMA |
| mysql              |
| test               |
+--------------------+
6 rows in set (0.024 sec)


MySQL [(none)]> use test;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A


Database changed
MySQL [test]> show tables;
+------------------+
| Tables_in_hdtydb |
+------------------+
| t_test           |
+------------------+
1 row in set (0.001 sec)


MySQL [test]> select * from t_test;
+------+---------------------+
| f_id | f_name              |
+------+---------------------+
|    1 | 测试修改内容        |
|    2 | 测试修改内容        |
|    4 | test 中文测试 123   |
|    5 | test 中文测试 123   |
|    6 | test 中文测试 123   |
+------+---------------------+
5 rows in set (0.003 sec)


MySQL [hdtydb]> quit
Bye
[root@dmdca ~]#


5. Visit http://127.0.0.1:9090 for TiDB's Prometheus management UI. (This prompted for an operating-system username and password. Prometheus itself ships with no authentication, which in hindsight suggests something else was answering on port 9090; section IV traces this to the OS's cockpit service.)


6. Visit http://127.0.0.1:2379/dashboard for the TiDB Dashboard. The default username is root, with an empty password.


7. Visit http://127.0.0.1:3000 for TiDB's Grafana UI. The default username and password are both admin.


8. (Optional) Load data into TiFlash for analytics; a sketch follows below.
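The post does not spell this step out. For reference, the usual approach is to add a TiFlash replica per table and then poll its status; a sketch against the t_test table used above (untested here, since TiFlash failed to start in this particular playground run):

ALTER TABLE test.t_test SET TIFLASH REPLICA 1;

-- Poll until AVAILABLE = 1, meaning the replica is ready for analytical queries
SELECT TABLE_SCHEMA, TABLE_NAME, REPLICA_COUNT, AVAILABLE, PROGRESS
FROM information_schema.tiflash_replica
WHERE TABLE_SCHEMA = 'test' AND TABLE_NAME = 't_test';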


9. When testing is done, clean up the cluster: press Ctrl+C to stop the TiDB services started above, wait for the shutdown to finish, then run:

tiup clean --all
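If you want to confirm what is still running before wiping everything, tiup status lists the components the current user has started (a quick sanity check, assuming default TiUP behavior):

[root@dmdca ~]# tiup status        # list running components and their tags
[root@dmdca ~]# tiup clean --all   # then remove their data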

IV. Simulating a Production Cluster on a Single Machine

1. Applicable scenario: you want a single Linux server to host the smallest complete TiDB topology, while walking through the deployment steps used in production. This section shows how to deploy a TiDB cluster from TiUP's minimal-topology YAML file.


2. Prepare the environment. Before deploying the TiDB cluster, prepare a deployment host and make sure its software meets the requirements:


  • CentOS 7.3 or later is recommended
  • The environment can reach the internet, for downloading TiDB and the related packages

The minimal TiDB cluster topology contains the following instances:


Instance  Count  IP            Configuration
TiKV      3      172.16.60.94  Avoid port and directory conflicts (three instances on the same IP)
TiDB      1      172.16.60.94  Default ports, global directory configuration
PD        1      172.16.60.94  Default ports, global directory configuration
TiFlash   1      172.16.60.94  Default ports, global directory configuration
Monitor   1      172.16.60.94  Default ports, global directory configuration


3. The deployment host must meet the following software and environment requirements:


  • Deployment uses the root user and password of the deployment host
  • The firewall is disabled, or the ports required between the TiDB cluster's nodes are open
  • TiUP Cluster currently supports deploying TiDB on x86_64 (AMD64) and ARM architectures
  • On AMD64, CentOS 7.3 or later is recommended
  • On ARM, CentOS 7.6 1810 is recommended


4. Carry out the deployment. Note: you can log in to the host as any regular Linux user or as root; the steps below use root.


Download and install TiUP:

curl --proto '=https' --tlsv1.2 -sSf https://tiup-mirrors.pingcap.com/install.sh | sh

Declare the global environment variables. Note: the installer prints the absolute path of the corresponding shell profile; substitute it for {your_shell_profile} before running the source command.

Install TiUP's cluster component:

tiup cluster

If the machine already has TiUP cluster installed, update it:

tiup update --self && tiup update cluster

Because this simulates a multi-machine deployment on one host, raise the sshd connection limit as root: edit /etc/ssh/sshd_config, set MaxSessions to 20, and restart sshd with service sshd restart (see the sketch after this paragraph).

Create and start the cluster. Following the template below, write a configuration file named topo.yaml, where:
  • user: "tidb": the cluster is managed internally via the tidb system user (created automatically during deployment), which by default logs in to the target machines over SSH port 22
  • replication.enable-placement-rules: this PD parameter is set to ensure TiFlash runs properly
  • host: the IP address of this deployment host
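A minimal way to apply the MaxSessions change from the shell, assuming a stock sshd_config where the directive is still commented out (adjust the pattern if yours already sets a value):

[root@dmdca /]# sed -i 's/^#\?MaxSessions.*/MaxSessions 20/' /etc/ssh/sshd_config
[root@dmdca /]# grep MaxSessions /etc/ssh/sshd_config
MaxSessions 20
[root@dmdca /]# service sshd restart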


The configuration template is as follows:


# Global variables are applied to all deployments and used as the default value of
# the deployments if a specific deployment value is missing.
global:
  user: "tidb"
  ssh_port: 22
  deploy_dir: "/data/tidb-deploy"
  data_dir: "/data/tidb-data"

# Monitored variables are applied to all the machines.
monitored:
  node_exporter_port: 9100
  blackbox_exporter_port: 9115

server_configs:
  tidb:
    instance.tidb_slow_log_threshold: 300
  tikv:
    readpool.storage.use-unified-pool: false
    readpool.coprocessor.use-unified-pool: true
  pd:
    replication.enable-placement-rules: true
    replication.location-labels: ["host"]
  tiflash:
    logger.level: "info"

pd_servers:
  - host: 172.16.60.94

tidb_servers:
  - host: 172.16.60.94

tikv_servers:
  - host: 172.16.60.94
    port: 20160
    status_port: 20180
    config:
      server.labels: { host: "logic-host-1" }
  - host: 172.16.60.94
    port: 20161
    status_port: 20181
    config:
      server.labels: { host: "logic-host-2" }
  - host: 172.16.60.94
    port: 20162
    status_port: 20182
    config:
      server.labels: { host: "logic-host-3" }

tiflash_servers:
  - host: 172.16.60.94

monitoring_servers:
  - host: 172.16.60.94

grafana_servers:
  - host: 172.16.60.94

Run the cluster deploy command:


tiup cluster deploy <cluster-name> <version> ./topo.yaml --user root -p

The <cluster-name> parameter sets the cluster name.

The <version> parameter sets the cluster version, for example v7.1.1. Run tiup list tidb to see the TiDB versions currently available for deployment.

The -p parameter means a password is used to log in to the target machines.

Note: if the host uses key-based SSH authentication, use -i to specify the path to the key file; -i and -p cannot be used together.
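For example, a key-based variant of the deploy command used below might look like this (the key path is illustrative):

tiup cluster deploy tidb-cluster v7.4.0 ./topo.yaml --user root -i /root/.ssh/id_rsa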


Follow the prompts, entering "y" and the root password, to complete the deployment:


Do you want to continue? [y/N]: y
Input SSH password:


[root@dmdca /]# pwd
/
[root@dmdca /]# vi topo.yaml
[root@dmdca /]# tiup cluster deploy tidb-cluster v7.4.0 topo.yaml --user root -p
tiup is checking updates for component cluster ...
Starting component `cluster`: /data/components/cluster/v1.13.1/tiup-cluster deploy tidb-cluster v7.4.0 topo.yaml --user root -p
Input SSH password:


  • Detect CPU Arch Name
  • Detecting node 172.16.60.94 Arch info ... Done
  • Detect CPU OS Name
  • Detecting node 172.16.60.94 OS info ... Done
Please confirm your topology:
Cluster type:    tidb
Cluster name:    tidb-cluster
Cluster version: v7.4.0
Role        Host          Ports                            OS/Arch       Directories
----        ----          -----                            -------       -----------
pd          172.16.60.94  2379/2380                        linux/x86_64  /data/tidb-deploy/pd-2379,/data/tidb-data/pd-2379
tikv        172.16.60.94  20160/20180                      linux/x86_64  /data/tidb-deploy/tikv-20160,/data/tidb-data/tikv-20160
tikv        172.16.60.94  20161/20181                      linux/x86_64  /data/tidb-deploy/tikv-20161,/data/tidb-data/tikv-20161
tikv        172.16.60.94  20162/20182                      linux/x86_64  /data/tidb-deploy/tikv-20162,/data/tidb-data/tikv-20162
tidb        172.16.60.94  4000/10080                       linux/x86_64  /data/tidb-deploy/tidb-4000
tiflash     172.16.60.94  9000/8123/3930/20170/20292/8234  linux/x86_64  /data/tidb-deploy/tiflash-9000,/data/tidb-data/tiflash-9000
prometheus  172.16.60.94  9090/12020                       linux/x86_64  /data/tidb-deploy/prometheus-9090,/data/tidb-data/prometheus-9090
grafana     172.16.60.94  3000                             linux/x86_64  /data/tidb-deploy/grafana-3000
Attention:
    1. If the topology is not what you expected, check your yaml file.
    2. Please confirm there is no port/directory conflicts in same host.
Do you want to continue? [y/N]: (default=N) y


  • Generate SSH keys … Done

  • Download TiDB components

  • Download pd:v7.4.0 (linux/amd64) … Done

  • Download tikv:v7.4.0 (linux/amd64) … Done

  • Download tidb:v7.4.0 (linux/amd64) … Done

  • Download tiflash:v7.4.0 (linux/amd64) … Done

  • Download prometheus:v7.4.0 (linux/amd64) … Done

  • Download grafana:v7.4.0 (linux/amd64) … Done

  • Download node_exporter: (linux/amd64) … Done

  • Download blackbox_exporter: (linux/amd64) … Done

  • Initialize target host environments

  • Prepare 172.16.60.94:22 … Done

  • Deploy TiDB instance

  • Copy pd -> 172.16.60.94 … Done

  • Copy tikv -> 172.16.60.94 … Done

  • Copy tikv -> 172.16.60.94 … Done

  • Copy tikv -> 172.16.60.94 … Done

  • Copy tidb -> 172.16.60.94 … Done

  • Copy tiflash -> 172.16.60.94 … Done

  • Copy prometheus -> 172.16.60.94 … Done

  • Copy grafana -> 172.16.60.94 … Done

  • Deploy node_exporter -> 172.16.60.94 … Done

  • Deploy blackbox_exporter -> 172.16.60.94 … Done

  • Copy certificate to remote host

  • Init instance configs

  • Generate config pd -> 172.16.60.94:2379 … Done

  • Generate config tikv -> 172.16.60.94:20160 … Done

  • Generate config tikv -> 172.16.60.94:20161 … Done

  • Generate config tikv -> 172.16.60.94:20162 … Done

  • Generate config tidb -> 172.16.60.94:4000 … Done

  • Generate config tiflash -> 172.16.60.94:9000 … Done

  • Generate config prometheus -> 172.16.60.94:9090 … Done

  • Generate config grafana -> 172.16.60.94:3000 … Done

  • Init monitor configs

  • Generate config node_exporter -> 172.16.60.94 … Done

  • Generate config blackbox_exporter -> 172.16.60.94 ... Done
Enabling component pd
        Enabling instance 172.16.60.94:2379
        Enable instance 172.16.60.94:2379 success
Enabling component tikv
        Enabling instance 172.16.60.94:20162
        Enabling instance 172.16.60.94:20160
        Enabling instance 172.16.60.94:20161
        Enable instance 172.16.60.94:20160 success
        Enable instance 172.16.60.94:20161 success
        Enable instance 172.16.60.94:20162 success
Enabling component tidb
        Enabling instance 172.16.60.94:4000
        Enable instance 172.16.60.94:4000 success
Enabling component tiflash
        Enabling instance 172.16.60.94:9000
        Enable instance 172.16.60.94:9000 success
Enabling component prometheus
        Enabling instance 172.16.60.94:9090
        Enable instance 172.16.60.94:9090 success
Enabling component grafana
        Enabling instance 172.16.60.94:3000
        Enable instance 172.16.60.94:3000 success
Enabling component node_exporter
        Enabling instance 172.16.60.94
        Enable 172.16.60.94 success
Enabling component blackbox_exporter
        Enabling instance 172.16.60.94
        Enable 172.16.60.94 success
Cluster `tidb-cluster` deployed successfully, you can start it with command: tiup cluster start tidb-cluster --init


Start the cluster:


tiup cluster start <cluster-name>


[root@dmdca log]# tiup cluster start tidb-cluster
tiup is checking updates for component cluster ...
Starting component `cluster`: /data/components/cluster/v1.13.1/tiup-cluster start tidb-cluster
Starting cluster tidb-cluster...


  • [ Serial ] - SSHKeySet: privateKey=/data/storage/cluster/clusters/tidb-cluster/ssh/id_rsa, publicKey=/data/storage/cluster/clusters/tidb-cluster/ssh/id_rsa.pub

  • [Parallel] - UserSSH: user=tidb, host=172.16.60.94

  • [Parallel] - UserSSH: user=tidb, host=172.16.60.94

  • [Parallel] - UserSSH: user=tidb, host=172.16.60.94

  • [Parallel] - UserSSH: user=tidb, host=172.16.60.94

  • [Parallel] - UserSSH: user=tidb, host=172.16.60.94

  • [Parallel] - UserSSH: user=tidb, host=172.16.60.94

  • [Parallel] - UserSSH: user=tidb, host=172.16.60.94

  • [Parallel] - UserSSH: user=tidb, host=172.16.60.94

  • [ Serial ] - StartCluster
Starting component pd
        Starting instance 172.16.60.94:2379
        Start instance 172.16.60.94:2379 success
Starting component tikv
        Starting instance 172.16.60.94:20160
        Starting instance 172.16.60.94:20162
        Starting instance 172.16.60.94:20161
        Start instance 172.16.60.94:20160 success
        Start instance 172.16.60.94:20161 success
        Start instance 172.16.60.94:20162 success
Starting component tidb
        Starting instance 172.16.60.94:4000
        Start instance 172.16.60.94:4000 success
Starting component tiflash
        Starting instance 172.16.60.94:9000
        Start instance 172.16.60.94:9000 success
Starting component prometheus
        Starting instance 172.16.60.94:9090
        Start instance 172.16.60.94:9090 success
Starting component grafana
        Starting instance 172.16.60.94:3000
        Start instance 172.16.60.94:3000 success
Starting component node_exporter
        Starting instance 172.16.60.94
        Start 172.16.60.94 success
Starting component blackbox_exporter
        Starting instance 172.16.60.94
        Start 172.16.60.94 success

  • [ Serial ] - UpdateTopology: cluster=tidb-cluster
Started cluster `tidb-cluster` successfully
[root@dmdca log]#


The startup runs reported the errors below; simply running the start command a few more times got everything up. The likely cause is that the machine is short on memory: it has only about 4 GB of RAM in total, roughly 6 GB of swap is in use, and only about 600 MB remains available.

[root@dmdca log]# free -m
              total        used        free      shared  buff/cache   available
Mem:           4675        3904         125          50         645         437
Swap:          8191        6571        1620
[root@dmdca log]# free -g
              total        used        free      shared  buff/cache   available
Mem:              4           3           0           0           0           0
Swap:             7           6           1
[root@dmdca log]#


[root@dmdca /]# tiup cluster start tidb-cluster
tiup is checking updates for component cluster ...
Starting component `cluster`: /data/components/cluster/v1.13.1/tiup-cluster start tidb-cluster
Starting cluster tidb-cluster...


  • [ Serial ] - SSHKeySet: privateKey=/data/storage/cluster/clusters/tidb-cluster/ssh/id_rsa, publicKey=/data/storage/cluster/clusters/tidb-cluster/ssh/id_rsa.pub

  • [Parallel] - UserSSH: user=tidb, host=172.16.60.94

  • [Parallel] - UserSSH: user=tidb, host=172.16.60.94

  • [Parallel] - UserSSH: user=tidb, host=172.16.60.94

  • [Parallel] - UserSSH: user=tidb, host=172.16.60.94

  • [Parallel] - UserSSH: user=tidb, host=172.16.60.94

  • [Parallel] - UserSSH: user=tidb, host=172.16.60.94

  • [Parallel] - UserSSH: user=tidb, host=172.16.60.94

  • [Parallel] - UserSSH: user=tidb, host=172.16.60.94

  • [ Serial ] - StartCluster
Starting component pd
        Starting instance 172.16.60.94:2379
        Start instance 172.16.60.94:2379 success
Starting component tikv
        Starting instance 172.16.60.94:20162
        Starting instance 172.16.60.94:20160
        Starting instance 172.16.60.94:20161
        Start instance 172.16.60.94:20160 success
        Start instance 172.16.60.94:20162 success


Error: failed to start tikv: failed to start: 172.16.60.94 tikv-20161.service, please check the instance's log(/data/tidb-deploy/tikv-20161/log) for more detail.: timed out waiting for port 20161 to be started after 2m0s


Verbose debug logs has been written to /data/logs/tiup-cluster-debug-2023-10-24-12-28-12.log.
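Since the failure is a two-minute startup timeout on a memory-starved host, one alternative to blind retries (not tried in this test) is to lengthen TiUP's wait using the --wait-timeout flag documented in the help output above:

tiup cluster start tidb-cluster --wait-timeout 300   # allow 300s instead of the default 120s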


[root@dmdca log]# tiup cluster start tidb-cluster
tiup is checking updates for component cluster ...
Starting component `cluster`: /data/components/cluster/v1.13.1/tiup-cluster start tidb-cluster
Starting cluster tidb-cluster...


  • [ Serial ] - SSHKeySet: privateKey=/data/storage/cluster/clusters/tidb-cluster/ssh/id_rsa, publicKey=/data/storage/cluster/clusters/tidb-cluster/ssh/id_rsa.pub

  • [Parallel] - UserSSH: user=tidb, host=172.16.60.94

  • [Parallel] - UserSSH: user=tidb, host=172.16.60.94

  • [Parallel] - UserSSH: user=tidb, host=172.16.60.94

  • [Parallel] - UserSSH: user=tidb, host=172.16.60.94

  • [Parallel] - UserSSH: user=tidb, host=172.16.60.94

  • [Parallel] - UserSSH: user=tidb, host=172.16.60.94

  • [Parallel] - UserSSH: user=tidb, host=172.16.60.94

  • [Parallel] - UserSSH: user=tidb, host=172.16.60.94

  • [ Serial ] - StartCluster
Starting component pd
        Starting instance 172.16.60.94:2379
        Start instance 172.16.60.94:2379 success
Starting component tikv
        Starting instance 172.16.60.94:20162
        Starting instance 172.16.60.94:20160
        Starting instance 172.16.60.94:20161
        Start instance 172.16.60.94:20160 success
        Start instance 172.16.60.94:20162 success
        Start instance 172.16.60.94:20161 success
Starting component tidb
        Starting instance 172.16.60.94:4000
        Start instance 172.16.60.94:4000 success
Starting component tiflash
        Starting instance 172.16.60.94:9000
        Start instance 172.16.60.94:9000 success
Starting component prometheus
        Starting instance 172.16.60.94:9090
        Start instance 172.16.60.94:9090 success
Starting component grafana
        Starting instance 172.16.60.94:3000


Error: failed to start grafana: failed to start: 172.16.60.94 grafana-3000.service, please check the instance's log(/data/tidb-deploy/grafana-3000/log) for more detail.: executor.ssh.execute_failed: Failed to execute command over SSH for 'tidb@172.16.60.94:22' {ssh_stderr: , ssh_stdout: , ssh_command: export LANG=C; PATH=$PATH:/bin:/sbin:/usr/bin:/usr/sbin /usr/bin/sudo -H bash -c "systemctl daemon-reload && systemctl start grafana-3000.service"}, cause: Run Command Timeout


Verbose debug logs has been written to /data/logs/tiup-cluster-debug-2023-10-24-14-14-31.log.
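In hindsight, TiUP's preflight checks (the check subcommand from the help output above) are designed to surface problems such as insufficient resources before deploying; a sketch of running them against this topology (not part of the original session):

tiup cluster check ./topo.yaml --user root -p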


Access the cluster:


Install a MySQL client (skip this step if one is already installed):

yum -y install mysql

Access the TiDB database; the password is empty:

mysql -h 172.16.60.94 -P 4000 -u root

[root@dmdca ~]# mysql -h 172.16.60.94 -P 4000 -u root
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MySQL connection id is 2401239046
Server version: 8.0.11-TiDB-v7.4.0 TiDB Server (Apache License 2.0) Community Edition, MySQL 8.0 compatible


Copyright © 2000, 2018, Oracle, MariaDB Corporation Ab and others.


Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.


MySQL [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| INFORMATION_SCHEMA |
| METRICS_SCHEMA     |
| PERFORMANCE_SCHEMA |
| mysql              |
| test               |
+--------------------+
5 rows in set (0.001 sec)


MySQL [(none)]>


Access TiDB's Grafana monitoring at http://{grafana-ip}:3000; the default username and password are both admin. Here: http://172.16.60.94:3000/login



Access the TiDB Dashboard at http://{pd-ip}:2379/dashboard; the default username is root, with an empty password. Here: http://172.16.60.94:2379/dashboard/#/signin



Visit http://172.16.60.94:9090 for TiDB's Prometheus management UI. (This again prompted for an operating-system username and password, a hint that something other than Prometheus was answering on port 9090; see the troubleshooting below.)



Run the following command to list the clusters deployed so far:

tiup cluster list


[root@dmdca ~]# tiup cluster list
tiup is checking updates for component cluster ...
Starting component `cluster`: /data/components/cluster/v1.13.1/tiup-cluster list
Name          User  Version  Path                                         PrivateKey
----          ----  -------  ----                                         ----------
tidb-cluster  tidb  v7.4.0   /data/storage/cluster/clusters/tidb-cluster  /data/storage/cluster/clusters/tidb-cluster/ssh/id_rsa
[root@dmdca ~]#


Run the following command to view the cluster's topology and status:

tiup cluster display <cluster-name>


[root@dmdca ~]# tiup cluster display tidb-cluster
tiup is checking updates for component cluster ...
Starting component `cluster`: /data/components/cluster/v1.13.1/tiup-cluster display tidb-cluster
Cluster type:       tidb
Cluster name:       tidb-cluster
Cluster version:    v7.4.0
Deploy user:        tidb
SSH type:           builtin
Dashboard URL:      http://172.16.60.94:2379/dashboard
Grafana URL:        http://172.16.60.94:3000
ID                  Role        Host          Ports                            OS/Arch       Status   Data Dir                         Deploy Dir
--                  ----        ----          -----                            -------       ------   --------                         ----------
172.16.60.94:3000   grafana     172.16.60.94  3000                             linux/x86_64  Up       -                                /data/tidb-deploy/grafana-3000
172.16.60.94:2379   pd          172.16.60.94  2379/2380                        linux/x86_64  Up|L|UI  /data/tidb-data/pd-2379          /data/tidb-deploy/pd-2379
172.16.60.94:9090   prometheus  172.16.60.94  9090/12020                       linux/x86_64  Down     /data/tidb-data/prometheus-9090  /data/tidb-deploy/prometheus-9090
172.16.60.94:4000   tidb        172.16.60.94  4000/10080                       linux/x86_64  Up       -                                /data/tidb-deploy/tidb-4000
172.16.60.94:9000   tiflash     172.16.60.94  9000/8123/3930/20170/20292/8234  linux/x86_64  Up       /data/tidb-data/tiflash-9000     /data/tidb-deploy/tiflash-9000
172.16.60.94:20160  tikv        172.16.60.94  20160/20180                      linux/x86_64  Up       /data/tidb-data/tikv-20160       /data/tidb-deploy/tikv-20160
172.16.60.94:20161  tikv        172.16.60.94  20161/20181                      linux/x86_64  Up       /data/tidb-data/tikv-20161       /data/tidb-deploy/tikv-20161
172.16.60.94:20162  tikv        172.16.60.94  20162/20182                      linux/x86_64  Up       /data/tidb-data/tikv-20162       /data/tidb-deploy/tikv-20162
Total nodes: 8
[root@dmdca ~]#


Prometheus shows as Down. None of the following start attempts brought it back up:

[root@dmdca log]# tiup cluster start tidb-cluster
[root@dmdca log]# tiup cluster start tidb-cluster -R prometheus
[root@dmdca log]# tiup cluster start tidb-cluster -R pd
[root@dmdca log]# tiup cluster restart tidb-cluster


[root@dmdca log]# pwd
/data/tidb-deploy/prometheus-9090/log
[root@dmdca log]# ll
total 38364
-rw-r--r-- 1 tidb tidb     3885 Oct 25 08:53 docdb.log
-rw-r--r-- 1 tidb tidb  1191767 Oct 25 11:02 ng.log
-rw-r--r-- 1 tidb tidb  5467040 Oct 25 11:03 prometheus.log
-rw-r--r-- 1 tidb tidb        0 Oct 24 14:09 service.log
-rw-r--r-- 1 tidb tidb 32573530 Oct 25 11:02 tsdb.log
[root@dmdca log]# tail -f prometheus.log
level=info ts=2023-10-25T03:02:46.353Z caller=web.go:540 component=web msg="Start listening for connections" address=:9090
level=error ts=2023-10-25T03:02:46.353Z caller=main.go:632 msg="Unable to start web listener" err="listen tcp :9090: bind: address already in use"
level=warn ts=2023-10-25T03:03:01.586Z caller=main.go:377 deprecation_notice="'storage.tsdb.retention' flag is deprecated use 'storage.tsdb.retention.time' instead."
level=info ts=2023-10-25T03:03:01.586Z caller=main.go:426 msg="Starting Prometheus" version="(version=2.27.1, branch=HEAD, revision=db7f0bcec27bd8aeebad6b08ac849516efa9ae02)"
level=info ts=2023-10-25T03:03:01.586Z caller=main.go:431 build_context="(go=go1.16.4, user=root@fd804fbd4f25, date=20210518-14:17:54)"
level=info ts=2023-10-25T03:03:01.586Z caller=main.go:432 host_details="(Linux 4.19.90-24.4.v2101.ky10.x86_64 #1 SMP Mon May 24 12:14:55 CST 2021 x86_64 dmdca (none))"
level=info ts=2023-10-25T03:03:01.586Z caller=main.go:433 fd_limits="(soft=1000000, hard=1000000)"
level=info ts=2023-10-25T03:03:01.587Z caller=main.go:434 vm_limits="(soft=unlimited, hard=unlimited)"
level=info ts=2023-10-25T03:03:01.589Z caller=web.go:540 component=web msg="Start listening for connections" address=:9090
level=error ts=2023-10-25T03:03:01.590Z caller=main.go:632 msg="Unable to start web listener" err="listen tcp :9090: bind: address already in use"
(the same startup sequence and "bind: address already in use" error repeat every 15 seconds)

Prometheus keeps restarting and dying because its port, 9090, is already taken by another process.


Check port usage:

[root@dmdca log]# netstat -tlnp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address  State   PID/Program name
tcp        0      0 127.0.0.1:46667         0.0.0.0:*        LISTEN  7030/cockpit-bridge
tcp        0      0 0.0.0.0:111             0.0.0.0:*        LISTEN  966/rpcbind
tcp        0      0 0.0.0.0:20180           0.0.0.0:*        LISTEN  38741/bin/tikv-serv
tcp        0      0 0.0.0.0:20181           0.0.0.0:*        LISTEN  38743/bin/tikv-serv
tcp        0      0 0.0.0.0:20182           0.0.0.0:*        LISTEN  38744/bin/tikv-serv
tcp        0      0 0.0.0.0:22              0.0.0.0:*        LISTEN  1475/sshd: /usr/sbi
tcp        0      0 127.0.0.1:34555         0.0.0.0:*        LISTEN  37997/bin/pd-server
tcp        0      0 127.0.0.1:40091         0.0.0.0:*        LISTEN  37997/bin/pd-server
tcp        0      0 0.0.0.0:20292           0.0.0.0:*        LISTEN  43577/bin/tiflash/t
tcp        0      0 127.0.0.1:39945         0.0.0.0:*        LISTEN  6966/cockpit-bridge
tcp        0      0 0.0.0.0:8234            0.0.0.0:*        LISTEN  43577/bin/tiflash/t
tcp6       0      0 :::2379                 :::*             LISTEN  37997/bin/pd-server
tcp6       0      0 :::9100                 :::*             LISTEN  46643/bin/node_expo
tcp6       0      0 :::2380                 :::*             LISTEN  37997/bin/pd-server
tcp6       0      0 :::4236                 :::*             LISTEN  1643/dmap
tcp6       0      0 :::111                  :::*             LISTEN  966/rpcbind
tcp6       0      0 :::5236                 :::*             LISTEN  1642/dmserver
tcp6       0      0 :::5237                 :::*             LISTEN  1646/dmserver
tcp6       0      0 :::22                   :::*             LISTEN  1475/sshd: /usr/sbi
tcp6       0      0 :::3000                 :::*             LISTEN  45787/bin/bin/grafa
tcp6       0      0 172.16.60.94:3930       :::*             LISTEN  43577/bin/tiflash/t
tcp6       0      0 :::9115                 :::*             LISTEN  47105/bin/blackbox_
tcp6       0      0 :::10080                :::*             LISTEN  42543/bin/tidb-serv
tcp6       0      0 :::4000                 :::*             LISTEN  42543/bin/tidb-serv
tcp6       0      0 :::20160                :::*             LISTEN  38741/bin/tikv-serv
tcp6       0      0 :::20161                :::*             LISTEN  38743/bin/tikv-serv
tcp6       0      0 :::9090                 :::*             LISTEN  1/systemd
tcp6       0      0 :::20162                :::*             LISTEN  38744/bin/tikv-serv
tcp6       0      0 :::20170                :::*             LISTEN  43577/bin/tiflash/t


[root@dmdca log]# ss -tlnp
State   Recv-Q  Send-Q  Local Address:Port            Peer Address:Port  Process
LISTEN  0       64      127.0.0.1:46667               0.0.0.0:*          users:(("cockpit-bridge",pid=7030,fd=12))
LISTEN  0       128     0.0.0.0:111                   0.0.0.0:*          users:(("rpcbind",pid=966,fd=7))
LISTEN  0       128     0.0.0.0:20180                 0.0.0.0:*          users:(("tikv-server",pid=38741,fd=164))
LISTEN  0       128     0.0.0.0:20181                 0.0.0.0:*          users:(("tikv-server",pid=38743,fd=161))
LISTEN  0       128     0.0.0.0:20182                 0.0.0.0:*          users:(("tikv-server",pid=38744,fd=161))
LISTEN  0       128     0.0.0.0:22                    0.0.0.0:*          users:(("sshd",pid=1475,fd=5))
LISTEN  0       512     127.0.0.1:34555               0.0.0.0:*          users:(("pd-server",pid=37997,fd=41))
LISTEN  0       512     127.0.0.1:40091               0.0.0.0:*          users:(("pd-server",pid=37997,fd=40))
LISTEN  0       128     0.0.0.0:20292                 0.0.0.0:*          users:(("TiFlashMain",pid=43577,fd=145))
LISTEN  0       64      127.0.0.1:39945               0.0.0.0:*          users:(("cockpit-bridge",pid=6966,fd=13))
LISTEN  0       200     0.0.0.0:8234                  0.0.0.0:*          users:(("TiFlashMain",pid=43577,fd=41))
LISTEN  0       512     *:2379                        *:*                users:(("pd-server",pid=37997,fd=9))
LISTEN  0       512     *:9100                        *:*                users:(("node_exporter",pid=46643,fd=3))
LISTEN  0       512     *:2380                        *:*                users:(("pd-server",pid=37997,fd=8))
LISTEN  0       128     *:4236                        *:*                users:(("dmap",pid=1643,fd=5))
LISTEN  0       128     [::]:111                      [::]:*             users:(("rpcbind",pid=966,fd=9))
LISTEN  0       128     *:5236                        *:*                users:(("dmserver",pid=1642,fd=5))
LISTEN  0       128     *:5237                        *:*                users:(("dmserver",pid=1646,fd=5))
LISTEN  0       128     [::]:22                       [::]:*             users:(("sshd",pid=1475,fd=6))
LISTEN  0       512     *:3000                        *:*                users:(("grafana-server",pid=45787,fd=8))
LISTEN  0       512     [::ffff:172.16.60.94]:3930    *:*                users:(("TiFlashMain",pid=43577,fd=37))
LISTEN  0       512     [::ffff:172.16.60.94]:3930    *:*                users:(("TiFlashMain",pid=43577,fd=38))
LISTEN  0       512     *:9115                        *:*                users:(("blackbox_export",pid=47105,fd=3))
LISTEN  0       512     *:10080                       *:*                users:(("tidb-server",pid=42543,fd=32))
LISTEN  0       512     *:4000                        *:*                users:(("tidb-server",pid=42543,fd=27))
LISTEN  0       512     *:20160                       *:*                users:(("tikv-server",pid=38741,fd=101))
LISTEN  0       512     *:20160                       *:*                users:(("tikv-server",pid=38741,fd=102))
LISTEN  0       512     *:20160                       *:*                users:(("tikv-server",pid=38741,fd=103))
LISTEN  0       512     *:20160                       *:*                users:(("tikv-server",pid=38741,fd=104))
LISTEN  0       512     *:20160                       *:*                users:(("tikv-server",pid=38741,fd=105))
LISTEN  0       512     *:20161                       *:*                users:(("tikv-server",pid=38743,fd=99))
LISTEN  0       512     *:20161                       *:*                users:(("tikv-server",pid=38743,fd=102))
LISTEN  0       512     *:20161                       *:*                users:(("tikv-server",pid=38743,fd=103))
LISTEN  0       512     *:20161                       *:*                users:(("tikv-server",pid=38743,fd=104))
LISTEN  0       512     *:20161                       *:*                users:(("tikv-server",pid=38743,fd=105))
LISTEN  0       128     *:9090                        *:*                users:(("cockpit-ws",pid=6886,fd=3),("systemd",pid=1,fd=160))
LISTEN  0       512     *:20162                       *:*                users:(("tikv-server",pid=38744,fd=99))
LISTEN  0       512     *:20162                       *:*                users:(("tikv-server",pid=38744,fd=100))
LISTEN  0       512     *:20162                       *:*                users:(("tikv-server",pid=38744,fd=101))
LISTEN  0       512     *:20162                       *:*                users:(("tikv-server",pid=38744,fd=102))
LISTEN  0       512     *:20162                       *:*                users:(("tikv-server",pid=38744,fd=103))
LISTEN  0       512     *:20170                       *:*                users:(("TiFlashMain",pid=43577,fd=92))
LISTEN  0       512     *:20170                       *:*                users:(("TiFlashMain",pid=43577,fd=93))
LISTEN  0       512     *:20170                       *:*                users:(("TiFlashMain",pid=43577,fd=94))
LISTEN  0       512     *:20170                       *:*                users:(("TiFlashMain",pid=43577,fd=95))
LISTEN  0       512     *:20170                       *:*                users:(("TiFlashMain",pid=43577,fd=96))


Both outputs show that port 9090 is held by the system's cockpit web service (cockpit-ws, socket-activated by systemd), which is what keeps Prometheus from binding it. Disable the OS's built-in cockpit component. This operating system is Kylin (银河麒麟) V10; a stock CentOS 7.6 generally doesn't ship cockpit.

[root@dmdca log]# systemctl status cockpit
● cockpit.service - Cockpit Web Service
   Loaded: loaded (/usr/lib/systemd/system/cockpit.service; static; vendor preset: disabled)
   Active: active (running) since Wed 2023-10-25 08:54:42 CST; 3h 10min ago
     Docs: man:cockpit-ws(8)
 Main PID: 6886 (cockpit-ws)
    Tasks: 9
   Memory: 7.8M
   CGroup: /system.slice/cockpit.service
           ├─6886 /usr/libexec/cockpit-ws
           └─6964 /usr/bin/ssh-agent

Oct 25 12:05:05 dmdca cockpit-ws[6886]: received unsupported HTTP method
(the journal line above repeats many times)

[root@dmdca log]# systemctl stop cockpit
Warning: Stopping cockpit.service, but it can still be activated by:
  cockpit.socket
[root@dmdca log]# systemctl status cockpit
● cockpit.service - Cockpit Web Service
   Loaded: loaded (/usr/lib/systemd/system/cockpit.service; static; vendor preset: disabled)
   Active: active (running) since Wed 2023-10-25 12:05:17 CST; 3s ago
     Docs: man:cockpit-ws(8)
  Process: 57451 ExecStartPre=/usr/sbin/remotectl certificate --ensure --user=root --group=cockpit-ws --selinux-type=etc_t (code=exited, status=0/SUCCESS)
 Main PID: 57454 (cockpit-ws)
    Tasks: 3
   Memory: 2.6M
   CGroup: /system.slice/cockpit.service
           └─57454 /usr/libexec/cockpit-ws

The service came right back: stopping cockpit.service alone is not enough, because cockpit.socket reactivates it. Stop the socket unit as well:

[root@dmdca log]# systemctl stop cockpit
Warning: Stopping cockpit.service, but it can still be activated by:
  cockpit.socket
[root@dmdca log]# systemctl stop cockpit.socket
[root@dmdca log]# systemctl status cockpit
● cockpit.service - Cockpit Web Service
   Loaded: loaded (/usr/lib/systemd/system/cockpit.service; static; vendor preset: disabled)
   Active: inactive (dead) since Wed 2023-10-25 12:05:36 CST; 3s ago
     Docs: man:cockpit-ws(8)
  Process: 57626 ExecStartPre=/usr/sbin/remotectl certificate --ensure --user=root --group=cockpit-ws --selinux-type=etc_t (code=exited, status=0/SUCCESS)
  Process: 57628 ExecStart=/usr/libexec/cockpit-ws (code=killed, signal=TERM)
 Main PID: 57628 (code=killed, signal=TERM)

Oct 25 12:05:36 dmdca systemd[1]: Stopping Cockpit Web Service...
Oct 25 12:05:36 dmdca systemd[1]: cockpit.service: Succeeded.
Oct 25 12:05:36 dmdca systemd[1]: Stopped Cockpit Web Service.
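Note that systemctl stop only lasts until the next boot. To keep cockpit from reclaiming port 9090 after a reboot, disabling the socket unit should be the durable fix (standard systemd usage; not run in this session):

[root@dmdca log]# systemctl disable --now cockpit.socket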


[root@dmdca log]# tiup cluster display tidb-cluster
tiup is checking updates for component cluster ...
Starting component `cluster`: /data/components/cluster/v1.13.1/tiup-cluster display tidb-cluster
Cluster type:       tidb
Cluster name:       tidb-cluster
Cluster version:    v7.4.0
Deploy user:        tidb
SSH type:           builtin
Dashboard URL:      http://172.16.60.94:2379/dashboard
Grafana URL:        http://172.16.60.94:3000
ID                  Role        Host          Ports                            OS/Arch       Status   Data Dir                         Deploy Dir
--                  ----        ----          -----                            -------       ------   --------                         ----------
172.16.60.94:3000   grafana     172.16.60.94  3000                             linux/x86_64  Up       -                                /data/tidb-deploy/grafana-3000
172.16.60.94:2379   pd          172.16.60.94  2379/2380                        linux/x86_64  Up|L|UI  /data/tidb-data/pd-2379          /data/tidb-deploy/pd-2379
172.16.60.94:9090   prometheus  172.16.60.94  9090/12020                       linux/x86_64  Up       /data/tidb-data/prometheus-9090  /data/tidb-deploy/prometheus-9090
172.16.60.94:4000   tidb        172.16.60.94  4000/10080                       linux/x86_64  Up       -                                /data/tidb-deploy/tidb-4000
172.16.60.94:9000   tiflash     172.16.60.94  9000/8123/3930/20170/20292/8234  linux/x86_64  Up       /data/tidb-data/tiflash-9000     /data/tidb-deploy/tiflash-9000
172.16.60.94:20160  tikv        172.16.60.94  20160/20180                      linux/x86_64  Up       /data/tidb-data/tikv-20160       /data/tidb-deploy/tikv-20160
172.16.60.94:20161  tikv        172.16.60.94  20161/20181                      linux/x86_64  Up       /data/tidb-data/tikv-20161       /data/tidb-deploy/tikv-20161
172.16.60.94:20162  tikv        172.16.60.94  20162/20182                      linux/x86_64  Up       /data/tidb-data/tikv-20162       /data/tidb-deploy/tikv-20162
Total nodes: 8


With cockpit out of the way, Prometheus now shows Up. That completes the single-machine cluster deployment.

V. Summary

1. For quick tests, running in playground mode is sufficient.


2. Deploying a cluster on a single machine is also very convenient, thanks to the powerful TiUP tooling.


3. The operations and monitoring stack is powerful and intuitive; the one gripe is that Grafana has no Chinese-language UI. Perhaps the TiDB team could ship a plugin for that?

