
Installing a TiDB Cluster on CentOS 7 (Minimal Deployment)

2024-09-06 · Beijing

Author: koby | Original source: https://tidb.net/blog/da87db00

1. System Environment Checks

1.1 Check and Disable System swap

TiDB needs sufficient memory to run. For stable performance it is recommended to disable system swap permanently, although on hosts with limited memory this makes OOM more likely; if you want to avoid such OOM issues, you can instead lower the swap priority without disabling swap permanently. Enabling and using swap can introduce performance jitter, so for a database service with low-latency and high-stability requirements, permanently disabling OS-level swap is recommended. To disable swap permanently, do not create a separate swap partition when initializing the operating system. If a swap partition was already created and enabled during OS initialization, disable it with the following commands:
echo "vm.swappiness = 0" >> /etc/sysctl.conf
swapoff -a
sysctl -p
If the host has limited memory, completely disabling swap makes OOM more likely to occur; in that case you can lower the swap priority as follows instead of disabling swap permanently:
echo "vm.swappiness = 0" >> /etc/sysctl.conf
sysctl -p
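To confirm that swap is actually off after these changes, a quick check (not part of the original post) is:
# Both commands should report no active swap (0 used / empty summary)
free -h
swapon -s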

1.2 Set Up the TiDB Node's Temporary Space

Some TiDB operations need to write temporary files on the server, so the OS user that runs TiDB must have read and write permissions on the target directories. If the TiDB instance is not started as root, check the directory permissions and set them correctly.
1) TiDB temporary work area: memory-intensive operations such as hash-table builds and sorts may spill temporary data to disk to reduce memory usage and improve stability. The disk location is defined by the configuration item tmp-storage-path. With the default settings, make sure the user running TiDB has read and write permissions on the OS temporary directory (usually /tmp).
2) Fast Online DDL work area: when the variable tidb_ddl_enable_fast_reorg is set to ON (the default since v6.5.0), Fast Online DDL is enabled and some DDL statements read and write temporary files. The location is defined by the configuration item temp-dir, and the user running TiDB must have read and write permissions on that directory. Taking the default directory /tmp/tidb as an example:
Note: if your workload may run DDL on large objects, it is recommended to give temp-dir a dedicated filesystem and a larger amount of temporary space.
mkdir /tmp/tidb
chmod -R 777 /tmp/tidb

1.3 Configure Operating System Parameters

Configure /etc/hosts:
echo "192.168.1.113 tidb" >> /etc/hosts
cat /etc/hosts
Create the TiDB data directory:
pvcreate /dev/sdb
vgcreate tidbvg /dev/sdb
lvcreate -n tidblv -L 100000M tidbvg
vi /etc/fstab    # add the following line
/dev/tidbvg/tidblv /tidb-data xfs defaults,noatime 0 0
# Create the mount point
mkdir /tidb-data
# Mount it
mount /tidb-data
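Note that a freshly created logical volume has no filesystem yet; before mounting, it would typically need to be formatted as XFS to match the fstab entry above (a step not shown in the original post):
mkfs.xfs /dev/tidbvg/tidblv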
Create the temporary directory and grant permissions:
mkdir /tmp/tidb
chmod -R 777 /tmp/tidb
Add the tidb user and set its password:
useradd tidb && passwd tidb
Run the following command and append tidb ALL=(ALL) NOPASSWD: ALL to the end of the file to configure passwordless sudo:
visudo
tidb ALL=(ALL) NOPASSWD: ALL
Configure passwordless SSH: log in to the control machine as the tidb user and run the following commands. Replace 10.0.1.1 with the IP of your deployment target machine, enter the target machine's tidb user password when prompted, and SSH mutual trust is established; repeat for the other machines. The newly created tidb user has no .ssh directory, so run the RSA key generation command first to create it. If TiDB components will be deployed on the control machine itself, configure mutual trust between the control machine and itself.
# Log in as the tidb user
su tidb
# For convenience, just press Enter at the prompts
ssh-keygen -t rsa
ssh-copy-id -i ~/.ssh/id_rsa.pub 192.168.1.113
As the tidb user on the control machine, SSH to the target machine's IP. If you can log in without entering a password, SSH mutual trust is configured successfully.
ssh 192.168.1.113
After logging in to the deployment target machine as the tidb user, run the following command; if you can switch to root without entering a password, passwordless sudo for the tidb user is configured successfully.
sudo -su root
Set up a local yum repository:
mount /dev/cdrom /mnt
cd /etc/yum.repos.d
mkdir bk
mv *.repo bk/
Create /etc/yum.repos.d/tidb.repo:
echo "[EL]" >> /etc/yum.repos.d/tidb.repo
echo "name=Linux 7.x DVD" >> /etc/yum.repos.d/tidb.repo
echo "baseurl=file:///mnt" >> /etc/yum.repos.d/tidb.repo
echo "gpgcheck=0" >> /etc/yum.repos.d/tidb.repo
echo "enabled=1" >> /etc/yum.repos.d/tidb.repo
cat /etc/yum.repos.d/tidb.repo
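Before installing packages, you can optionally confirm that the local repository is usable (a quick sanity check, not in the original post):
yum clean all
yum repolist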

1.4 Install the NTP Service

TiDB is a distributed database system and requires clock synchronization between nodes to guarantee linearizable transactions under the ACID model. You can keep node clocks in sync through the public pool.ntp.org time service, or, in an offline environment, by running your own NTP service.
yum install -y ntp*

1.4.1 Use this machine as the NTP server
# On the server, edit the ntp configuration to open access to the clients' subnet
vim /etc/ntp.conf
# For more information about this file, see the man pages
# ntp.conf(5), ntp_acc(5), ntp_auth(5), ntp_clock(5), ntp_misc(5), ntp_mon(5).

driftfile /var/lib/ntp/drift

# Permit time synchronization with our time source, but do not
# permit the source to query or modify the service on this system.
restrict default nomodify notrap nopeer noquery

# Permit all access over the loopback interface. This could
# be tightened as well, but to do so would effect some of
# the administrative functions.
restrict 192.168.1.113 nomodify notrap nopeer noquery
restrict 192.168.1.0
restrict 127.0.0.1
restrict ::1

# Main configuration

# Network segment that is allowed access
restrict 192.168.1.113 mask 255.255.255.0 nomodify notrap

server 192.168.1.113 prefer
server 127.127.1.0
fudge 127.127.1.0 stratum 10

# Enable public key cryptography.
#crypto

includefile /etc/ntp/crypto/pw

# Key file containing the keys and key identifiers used when operating
# with symmetric key cryptography.
keys /etc/ntp/keys

# Specify the key identifiers which are trusted.
#trustedkey 4 8 42

# Specify the key identifier to use with the ntpdc utility.
#requestkey 8

# Specify the key identifier to use with the ntpq utility.
#controlkey 8

# Enable writing of statistics records.
#statistics clockstats cryptostats loopstats peerstats

# Disable the monitoring facility to prevent amplification attacks using ntpdc
# monlist command when default restrict does not include the noquery flag. See
# CVE-2013-5211 for more details.
# Note: Monitoring will not be disabled with the limited restriction flag.
disable monitor

Restart the service:
systemctl restart ntpd
systemctl status ntpd
Enable it at boot:
systemctl enable ntpd
Check the NTP synchronization status:
ntpq -p
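If the cluster later grows beyond this single host, the other nodes only need a minimal client-side configuration pointing at this server; a sketch, assuming the same 192.168.1.0/24 network and that ntpd is installed on the client (not part of the original post):
# /etc/ntp.conf on a client node: sync from the server configured above
server 192.168.1.113 prefer
# restart ntpd and check that the server is reachable
systemctl restart ntpd
ntpq -p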

1.5 Configure System Optimization Parameters

1) Modify the current kernel configuration to immediately disable Transparent Huge Pages (THP):
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag
Configure the server to keep THP disabled after a reboot:
vi /etc/rc.d/rc.local
if test -f /sys/kernel/mm/transparent_hugepage/enabled; then
  echo never > /sys/kernel/mm/transparent_hugepage/enabled
fi
if test -f /sys/kernel/mm/transparent_hugepage/defrag; then
  echo never > /sys/kernel/mm/transparent_hugepage/defrag
fi
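On CentOS 7, /etc/rc.d/rc.local is only executed at boot when it has the executable bit set, so you will likely also need the following (not mentioned in the original post):
chmod +x /etc/rc.d/rc.local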

2) Run the following command to configure the user limits in limits.conf:
cat << EOF >> /etc/security/limits.conf
tidb soft nproc 16384
tidb hard nproc 16384
tidb soft nofile 1000000
tidb hard nofile 1000000
tidb soft stack 32768
tidb hard stack 32768
EOF
3) Run the following commands to modify the sysctl parameters:
echo "fs.file-max = 1000000" >> /etc/sysctl.conf
echo "net.core.somaxconn = 32768" >> /etc/sysctl.conf
echo "net.ipv4.tcp_tw_recycle = 0" >> /etc/sysctl.conf
echo "net.ipv4.tcp_syncookies = 0" >> /etc/sysctl.conf
echo "vm.overcommit_memory = 1" >> /etc/sysctl.conf
echo "vm.min_free_kbytes = 1048576" >> /etc/sysctl.conf
sysctl -p
4) Install the numactl tool. This step describes how to install the NUMA tool. In production, hardware is often more powerful than a single instance needs, so multiple TiDB or TiKV instances may be deployed on one machine to make better use of resources. Binding cores with NUMA is mainly intended to prevent contention for CPU resources and the resulting performance degradation. There are two ways to install the NUMA tool. Method 1: log in to the target node and install it directly (using CentOS Linux release 7.7.1908 (Core) as an example):
yum -y install numactl
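The upstream TiDB docs also describe a second method that installs numactl on every node at once through TiUP after the cluster has been deployed; a sketch, assuming the cluster name tidb-test used later in this post:
tiup cluster exec tidb-test --sudo --command "yum -y install numactl"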

2. Offline Installation

2.1 Download the Offline Packages

The download links are in the reference documentation at the end of the original post; both packages (tidb-community-server and tidb-community-toolkit) need to be downloaded.


2.2 Extract and Install

Note: all subsequent operations are performed while logged in as the tidb user.



su tidb
cd /home/tidb
tar -zxf tidb-community-toolkit-v8.1.0-linux-amd64.tar.gz
tar -zxf tidb-community-server-v8.1.0-linux-amd64.tar.gz
sh tidb-community-server-v8.1.0-linux-amd64/local_install.sh
source /home/tidb/.bashrc
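After sourcing .bashrc, a quick way to confirm that TiUP from the offline package is on the PATH (a sanity check, not part of the original post):
which tiup
tiup --version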

2.3 Minimal Configuration Template topology.yaml

Where:


1) pd_servers: metadata management and cluster consistency management


2) tidb_servers: provide client connections externally and communicate with the tikv_servers internally


3) tikv_servers: the nodes that actually store the data


cat topology.yaml
# # Global variables are applied to all deployments and used as the default value of
# # the deployments if a specific deployment value is missing.
global:
  user: "tidb"
  ssh_port: 22
  deploy_dir: "/tidb-deploy"
  data_dir: "/tidb-data"

# # Monitored variables are applied to all the machines.
monitored:
  node_exporter_port: 9100
  blackbox_exporter_port: 9115
  # deploy_dir: "/tidb-deploy/monitored-9100"
  # data_dir: "/tidb-data/monitored-9100"
  # log_dir: "/tidb-deploy/monitored-9100/log"

# # Server configs are used to specify the runtime configuration of TiDB components.
# # All configuration items can be found in TiDB docs:
# # - TiDB: https://docs.pingcap.com/zh/tidb/stable/tidb-configuration-file
# # - TiKV: https://docs.pingcap.com/zh/tidb/stable/tikv-configuration-file
# # - PD: https://docs.pingcap.com/zh/tidb/stable/pd-configuration-file
# # All configuration items use points to represent the hierarchy, e.g:
# #   readpool.storage.use-unified-pool
# #
# # You can overwrite this configuration via the instance-level `config` field.

server_configs:
  tidb:
    log.slow-threshold: 300
    binlog.enable: false
    binlog.ignore-error: false
  tikv:
    # server.grpc-concurrency: 4
    # raftstore.apply-pool-size: 2
    # raftstore.store-pool-size: 2
    # rocksdb.max-sub-compactions: 1
    # storage.block-cache.capacity: "16GB"
    # readpool.unified.max-thread-count: 12
    readpool.storage.use-unified-pool: false
    readpool.coprocessor.use-unified-pool: true
  pd:
    replication.location-labels: ["zone","dc","host"]
    schedule.leader-schedule-limit: 4
    schedule.region-schedule-limit: 2048
    schedule.replica-schedule-limit: 64

pd_servers:
  - host: 192.168.1.113
    # ssh_port: 22
    # name: "pd-1"
    # client_port: 2379
    # peer_port: 2380
    # deploy_dir: "/tidb-deploy/pd-2379"
    # data_dir: "/tidb-data/pd-2379"
    # log_dir: "/tidb-deploy/pd-2379/log"
    # numa_node: "0,1"
    # # The following configs are used to overwrite the `server_configs.pd` values.
    # config:
    #   schedule.max-merge-region-size: 20
    #   schedule.max-merge-region-keys: 200000

tidb_servers:
  - host: 192.168.1.113
    # ssh_port: 22
    # port: 4000
    # status_port: 10080
    # deploy_dir: "/tidb-deploy/tidb-4000"
    # log_dir: "/tidb-deploy/tidb-4000/log"
    # numa_node: "0,1"
    # # The following configs are used to overwrite the `server_configs.tidb` values.
    # config:
    #   log.slow-query-file: tidb-slow-overwrited.log

tikv_servers:
  - host: 192.168.1.113
    # ssh_port: 22
    port: 20161
    status_port: 20181
    deploy_dir: "/tidb-deploy/tikv-20161"
    data_dir: "/tidb-data/tikv-20161"
    log_dir: "/tidb-deploy/tikv-20161/log"
    # numa_node: "0,1"
    # # The following configs are used to overwrite the `server_configs.tikv` values.
    config:
      # server.grpc-concurrency: 4
      server.labels: { zone: "zone1", dc: "dc1", host: "host1" }
  - host: 192.168.1.113
    port: 20162
    status_port: 20182
    deploy_dir: "/tidb-deploy/tikv-20162"
    data_dir: "/tidb-data/tikv-20162"
    log_dir: "/tidb-deploy/tikv-20162/log"
    config:
      server.labels: { zone: "zone1", dc: "dc1", host: "host2" }
  - host: 192.168.1.113
    port: 20163
    status_port: 20183
    deploy_dir: "/tidb-deploy/tikv-20163"
    data_dir: "/tidb-data/tikv-20163"
    log_dir: "/tidb-deploy/tikv-20163/log"
    config:
      server.labels: { zone: "zone1", dc: "dc1", host: "host3" }

monitoring_servers:
  - host: 192.168.1.113
    # ssh_port: 22
    # port: 9090
    # deploy_dir: "/tidb-deploy/prometheus-8249"
    # data_dir: "/tidb-data/prometheus-8249"
    # log_dir: "/tidb-deploy/prometheus-8249/log"

grafana_servers:
  - host: 192.168.1.113
    # port: 3000
    # deploy_dir: /tidb-deploy/grafana-3000

alertmanager_servers:
  - host: 192.168.1.113
    # ssh_port: 22
    # web_port: 9093
    # cluster_port: 9094
    # deploy_dir: "/tidb-deploy/alertmanager-9093"
    # data_dir: "/tidb-data/alertmanager-9093"
    # log_dir: "/tidb-deploy/alertmanager-9093/log"


You can also generate a template configuration file with a command:


# Run the following command to generate a cluster initialization configuration file:
tiup cluster template > topology.yaml
# Hybrid deployment scenario: multiple instances on a single machine
tiup cluster template --full > topology.yaml
# Cross-datacenter scenario: deploy a TiDB cluster across data centers
tiup cluster template --multi-dc > topology.yaml


Here we use the topology.yaml prepared above.

2.4 Risk Check

[tidb@tidb ~]$ tiup cluster check ./topology.yaml --user tidb
The output is as follows:
+ Detect CPU Arch Name
  - Detecting node 192.168.1.113 Arch info ... Done
+ Detect CPU OS Name
  - Detecting node 192.168.1.113 OS info ... Done
+ Download necessary tools
  - Downloading check tools for linux/amd64 ... Done
+ Collect basic system information
+ Collect basic system information
  - Getting system info of 192.168.1.113:22 ... Done
+ Check time zone
  - Checking node 192.168.1.113 ... Done
+ Check system requirements
+ Check system requirements
+ Check system requirements
+ Check system requirements
  - Checking node 192.168.1.113 ... Done
  - Checking node 192.168.1.113 ... Done
  - Checking node 192.168.1.113 ... Done
  - Checking node 192.168.1.113 ... Done
  - Checking node 192.168.1.113 ... Done
  - Checking node 192.168.1.113 ... Done
  - Checking node 192.168.1.113 ... Done
  - Checking node 192.168.1.113 ... Done
  - Checking node 192.168.1.113 ... Done
+ Cleanup check files
  - Cleanup check files on 192.168.1.113:22 ... Done
Node           Check         Result  Message
----           -----         ------  -------
192.168.1.113  cpu-governor  Warn    Unable to determine current CPU frequency governor policy
192.168.1.113  disk          Warn    mount point /tidb-data does not have 'noatime' option set
192.168.1.113  swap          Warn    swap is enabled, please disable it for best performance
192.168.1.113  memory        Pass    memory size is 32768MB
192.168.1.113  network       Pass    network speed of ens33 is 1000MB
192.168.1.113  disk          Fail    multiple components tikv:/tidb-data/tikv-20161,tikv:/tidb-data/tikv-20162,tikv:/tidb-data/tikv-20163 are using the same partition 192.168.1.113:/tidb-data as data dir
192.168.1.113  selinux       Pass    SELinux is disabled
192.168.1.113  thp           Pass    THP is disabled
192.168.1.113  os-version    Pass    OS is CentOS Linux 7 (Core) 7.5.1804
192.168.1.113  cpu-cores     Pass    number of CPU cores / threads: 4
192.168.1.113  command       Pass    numactl: policy: default


2.5 Automatic Risk Repair

tiup cluster check ./topology.yaml --apply --user tidb
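After the automatic repair, you can re-run the plain check to see which findings remain; the 'same partition' Fail will persist because this minimal single-host layout intentionally places all three TiKV instances on /tidb-data:
tiup cluster check ./topology.yaml --user tidb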

2.6 Deploy the TiDB Cluster

[tidb@tidb ~]$ tiup cluster deploy tidb-test v8.1.0 ./topology.yaml --user tidb
The output is as follows:
+ Detect CPU Arch Name
  - Detecting node 192.168.1.113 Arch info ... Done
+ Detect CPU OS Name
  - Detecting node 192.168.1.113 OS info ... Done
Please confirm your topology:
Cluster type:    tidb
Cluster name:    tidb-test
Cluster version: v8.1.0
Role          Host           Ports        OS/Arch       Directories
----          ----           -----        -------       -----------
pd            192.168.1.113  2379/2380    linux/x86_64  /tidb-deploy/pd-2379,/tidb-data/pd-2379
tikv          192.168.1.113  20161/20181  linux/x86_64  /tidb-deploy/tikv-20161,/tidb-data/tikv-20161
tikv          192.168.1.113  20162/20182  linux/x86_64  /tidb-deploy/tikv-20162,/tidb-data/tikv-20162
tikv          192.168.1.113  20163/20183  linux/x86_64  /tidb-deploy/tikv-20163,/tidb-data/tikv-20163
tidb          192.168.1.113  4000/10080   linux/x86_64  /tidb-deploy/tidb-4000
prometheus    192.168.1.113  9090/12020   linux/x86_64  /tidb-deploy/prometheus-9090,/tidb-data/prometheus-9090
grafana       192.168.1.113  3000         linux/x86_64  /tidb-deploy/grafana-3000
alertmanager  192.168.1.113  9093/9094    linux/x86_64  /tidb-deploy/alertmanager-9093,/tidb-data/alertmanager-9093
Attention:
    1. If the topology is not what you expected, check your yaml file.
    2. Please confirm there is no port/directory conflicts in same host.
Do you want to continue? [y/N]: (default=N) y
+ Generate SSH keys ... Done
+ Download TiDB components
  - Download pd:v8.1.0 (linux/amd64) ... Done
  - Download tikv:v8.1.0 (linux/amd64) ... Done
  - Download tidb:v8.1.0 (linux/amd64) ... Done
  - Download prometheus:v8.1.0 (linux/amd64) ... Done
  - Download grafana:v8.1.0 (linux/amd64) ... Done
  - Download alertmanager: (linux/amd64) ... Done
  - Download node_exporter: (linux/amd64) ... Done
  - Download blackbox_exporter: (linux/amd64) ... Done
+ Initialize target host environments
  - Prepare 192.168.1.113:22 ... Done
+ Deploy TiDB instance
  - Copy pd -> 192.168.1.113 ... Done
  - Copy tikv -> 192.168.1.113 ... Done
  - Copy tikv -> 192.168.1.113 ... Done
  - Copy tikv -> 192.168.1.113 ... Done
  - Copy tidb -> 192.168.1.113 ... Done
  - Copy prometheus -> 192.168.1.113 ... Done
  - Copy grafana -> 192.168.1.113 ... Done
  - Copy alertmanager -> 192.168.1.113 ... Done
  - Deploy node_exporter -> 192.168.1.113 ... Done
  - Deploy blackbox_exporter -> 192.168.1.113 ... Done
+ Copy certificate to remote host
+ Init instance configs
  - Generate config pd -> 192.168.1.113:2379 ... Done
  - Generate config tikv -> 192.168.1.113:20161 ... Done
  - Generate config tikv -> 192.168.1.113:20162 ... Done
  - Generate config tikv -> 192.168.1.113:20163 ... Done
  - Generate config tidb -> 192.168.1.113:4000 ... Done
  - Generate config prometheus -> 192.168.1.113:9090 ... Done
  - Generate config grafana -> 192.168.1.113:3000 ... Done
  - Generate config alertmanager -> 192.168.1.113:9093 ... Done
+ Init monitor configs
  - Generate config node_exporter -> 192.168.1.113 ... Done
  - Generate config blackbox_exporter -> 192.168.1.113 ... Done
Enabling component pd
        Enabling instance 192.168.1.113:2379
        Enable instance 192.168.1.113:2379 success
Enabling component tikv
        Enabling instance 192.168.1.113:20163
        Enabling instance 192.168.1.113:20161
        Enabling instance 192.168.1.113:20162
        Enable instance 192.168.1.113:20163 success
        Enable instance 192.168.1.113:20162 success
        Enable instance 192.168.1.113:20161 success
Enabling component tidb
        Enabling instance 192.168.1.113:4000
        Enable instance 192.168.1.113:4000 success
Enabling component prometheus
        Enabling instance 192.168.1.113:9090
        Enable instance 192.168.1.113:9090 success
Enabling component grafana
        Enabling instance 192.168.1.113:3000
        Enable instance 192.168.1.113:3000 success
Enabling component alertmanager
        Enabling instance 192.168.1.113:9093
        Enable instance 192.168.1.113:9093 success
Enabling component node_exporter
        Enabling instance 192.168.1.113
        Enable 192.168.1.113 success
Enabling component blackbox_exporter
        Enabling instance 192.168.1.113
        Enable 192.168.1.113 success
Cluster `tidb-test` deployed successfully, you can start it with command: `tiup cluster start tidb-test --init`


tidb-test is the name of the deployed cluster.


v8.1.0 is the cluster version to deploy; you can run tiup list tidb to see the latest versions available through TiUP.


The initialization configuration file is topology.yaml.


--user root means the deployment is performed by logging in to the target hosts as root; that user must be able to ssh to the target machines and have sudo privileges on them. You can also deploy with any other user that has ssh and sudo privileges (this post uses --user tidb).


[-i] and [-p] are optional; if passwordless login to the target machines is already configured, neither is needed. Otherwise choose one of them: [-i] specifies the private key of the root user (or the user specified with --user) that can log in to the target machines, and [-p] prompts interactively for that user's password.

3. Cluster Management

3.1 Check the Cluster

# View the clusters managed by TiUP
tiup cluster list
# TiUP can manage multiple TiDB clusters; this command lists all clusters currently managed by TiUP cluster,
# including the cluster name, deployment user, version, key information, and so on.
# Run the following command to check the status of the tidb-test cluster:
tiup cluster display tidb-test


3.2 Start the Cluster

Note the generated startup password (keep it safe). If --init is omitted, root can access the cluster without a password.
# Secure start
tiup cluster start tidb-test --init
The output is as follows:
Starting cluster tidb-test...
+ [ Serial ] - SSHKeySet: privateKey=/home/tidb/.tiup/storage/cluster/clusters/tidb-test/ssh/id_rsa, publicKey=/home/tidb/.tiup/storage/cluster/clusters/tidb-test/ssh/id_rsa.pub
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.113
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.113
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.113
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.113
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.113
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.113
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.113
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.113
+ [ Serial ] - StartCluster
Starting component pd
        Starting instance 192.168.1.113:2379
        Start instance 192.168.1.113:2379 success
Starting component tikv
        Starting instance 192.168.1.113:20163
        Starting instance 192.168.1.113:20161
        Starting instance 192.168.1.113:20162
        Start instance 192.168.1.113:20161 success
        Start instance 192.168.1.113:20163 success
        Start instance 192.168.1.113:20162 success
Starting component tidb
        Starting instance 192.168.1.113:4000
        Start instance 192.168.1.113:4000 success
Starting component prometheus
        Starting instance 192.168.1.113:9090
        Start instance 192.168.1.113:9090 success
Starting component grafana
        Starting instance 192.168.1.113:3000
        Start instance 192.168.1.113:3000 success
Starting component alertmanager
        Starting instance 192.168.1.113:9093
        Start instance 192.168.1.113:9093 success
Starting component node_exporter
        Starting instance 192.168.1.113
        Start 192.168.1.113 success
Starting component blackbox_exporter
        Starting instance 192.168.1.113
        Start 192.168.1.113 success
+ [ Serial ] - UpdateTopology: cluster=tidb-test
Started cluster `tidb-test` successfully
The root password of TiDB database has been changed.
The new password is: 'Xw3_82^Zz1@x7-tK5L'.
Copy and record it to somewhere safe, it is only displayed once, and will not be stored.
The generated password can NOT be get and shown again.


Only the first start needs the --init flag; remove it for subsequent starts, otherwise starting the cluster again will report an error.



3.3 Verify the Startup

tiup cluster display tidb-test
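Once every component is reported as Up, you can also verify the SQL layer by connecting with any MySQL client, using the root password generated by the --init start (assumes a mysql client is installed; not part of the original post):
mysql -h 192.168.1.113 -P 4000 -u root -p
# inside the session, for example:
# SELECT tidb_version();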


3.4 Access the Dashboard

Log in with the root account and its password.


http://192.168.1.113:2379/dashboard/
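Besides the TiDB Dashboard, the monitoring components deployed with the cluster are reachable on the ports from the topology above (Grafana's initial credentials are admin/admin unless changed):
http://192.168.1.113:3000   (Grafana)
http://192.168.1.113:9090   (Prometheus)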


