Author: ShawnYan. Original source: https://tidb.net/blog/529996b3
Rumor has it that an ACE spent two weeks installing a domestic database, and onlookers were duly astonished:
it's 2024 already, the DBaaS concept has long since cooled off, so why does installing a database still feel like the same war of attrition it was 10 years ago?
…
Deploying a TiDB v8.0.0 Cluster with TiUP
Looking back at an article I published two years ago, TiUP: An Essential Tool for TiDBers, I'm now even more convinced that TiUP is a killer tool: deploying a simulated TiDB v8.0.0 cluster locally with TiUP takes only about 10 minutes.
The full walkthrough follows:
1. Download and install TiUP
[root@shawnyan ~ 11:57:40]$ curl --proto '=https' --tlsv1.2 -sSf https://tiup-mirrors.pingcap.com/install.sh | sh
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 5095k 100 5095k 0 0 5065k 0 0:00:01 0:00:01 --:--:-- 5069k
Successfully set mirror to https://tiup-mirrors.pingcap.com
Detected shell: bash
Shell profile: /root/.bash_profile
Installed path: /root/.tiup/bin/tiup
===============================================
Have a try: tiup playground
===============================================
[root@shawnyan ~ 11:57:46]$ source /root/.bash_profile
2. Raise the sshd connection limit
Because the whole cluster is installed on a single local host, TiUP opens many SSH connections concurrently, so raise the maximum number of sessions to 20.
vi /etc/ssh/sshd_config
MaxSessions = 20
Then restart the sshd service.
3. Create the cluster topology file
A topology template can be generated with the tiup cluster template command.
[root@shawnyan ~ 11:59:00]$ vi topo.yaml
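The contents of topo.yaml are not shown in the original post. Based on the components and ports listed in the deployment confirmation below, a minimal single-host topology might look like this sketch (the global user and directories are assumptions consistent with that output; the second TiKV instance needs explicit ports to avoid conflicts on one host):

```yaml
# Hypothetical minimal single-host topology; adjust host/ports as needed.
global:
  user: "tidb"
  deploy_dir: "/tidb-deploy"
  data_dir: "/tidb-data"

pd_servers:
  - host: 192.168.8.161

tidb_servers:
  - host: 192.168.8.161

tikv_servers:
  - host: 192.168.8.161
    port: 20160
    status_port: 20180
  - host: 192.168.8.161
    port: 20161
    status_port: 20181

tiflash_servers:
  - host: 192.168.8.161

monitoring_servers:
  - host: 192.168.8.161

grafana_servers:
  - host: 192.168.8.161
```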
4. Deploy the cluster
[root@shawnyan ~ 12:00:40]$ tiup cluster deploy mytidb v8.0.0 ./topo.yaml --user root -p
Input SSH password:
+ Detect CPU Arch Name
+ Detect CPU OS Name
Please confirm your topology:
Cluster type: tidb
Cluster name: mytidb
Cluster version: v8.0.0
Role Host Ports OS/Arch Directories
---- ---- ----- ------- -----------
pd 192.168.8.161 2379/2380 linux/x86_64 /tidb-deploy/pd-2379,/tidb-data/pd-2379
tikv 192.168.8.161 20160/20180 linux/x86_64 /tidb-deploy/tikv-20160,/tidb-data/tikv-20160
tikv 192.168.8.161 20161/20181 linux/x86_64 /tidb-deploy/tikv-20161,/tidb-data/tikv-20161
tidb 192.168.8.161 4000/10080 linux/x86_64 /tidb-deploy/tidb-4000
tiflash 192.168.8.161 9000/8123/3930/20170/20292/8234 linux/x86_64 /tidb-deploy/tiflash-9000,/tidb-data/tiflash-9000
prometheus 192.168.8.161 9090/12020 linux/x86_64 /tidb-deploy/prometheus-9090,/tidb-data/prometheus-9090
grafana 192.168.8.161 3000 linux/x86_64 /tidb-deploy/grafana-3000
Attention:
1. If the topology is not what you expected, check your yaml file.
2. Please confirm there is no port/directory conflicts in same host.
Do you want to continue? [y/N]: (default=N) y
+ Generate SSH keys ... Done
+ Download TiDB components
+ Initialize target host environments
+ Deploy TiDB instance
+ Copy certificate to remote host
+ Init instance configs
+ Init monitor configs
Enabling component pd
Enabling component tikv
Enabling component tidb
Enabling component tiflash
Enabling component prometheus
Enabling component grafana
Enabling component node_exporter
Enabling component blackbox_exporter
Cluster `mytidb` deployed successfully, you can start it with command: `tiup cluster start mytidb --init`
The cluster is now installed; next, start it.
5. Start the cluster
[root@shawnyan ~ 12:04:17]$ tiup cluster start mytidb --init
Starting cluster mytidb...
+ [ Serial ] - SSHKeySet: privateKey=/root/.tiup/storage/cluster/clusters/mytidb/ssh/id_rsa, publicKey=/root/.tiup/storage/cluster/clusters/mytidb/ssh/id_rsa.pub
+ [Parallel] - UserSSH: user=tidb, host=192.168.8.161
...
+ [ Serial ] - StartCluster
Starting component pd
Starting component tikv
Starting component tidb
Starting component tiflash
Starting component prometheus
Starting component grafana
Starting component node_exporter
Starting component blackbox_exporter
+ [ Serial ] - UpdateTopology: cluster=mytidb
Started cluster `mytidb` successfully
The root password of TiDB database has been changed.
The new password is: '2mKNM_976^-+1h8BEL'.
Copy and record it to somewhere safe, it is only displayed once, and will not be stored.
The generated password can NOT be get and shown again.
At this point, the TiDB cluster has started successfully, and a root account password has been generated automatically.
6. Check the TiDB cluster status
Connect to TiDB and check the version number.
[root@shawnyan ~ 12:05:20]$ mysql -h 192.168.8.161 -P 4000 -u root -p
Enter password:
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 3290431496
Server version: 8.0.11-TiDB-v8.0.0 TiDB Server (Apache License 2.0) Community Edition, MySQL 8.0 compatible
Copyright (c) 2000, 2024, Oracle and/or its affiliates.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> select tidb_version()\G
*************************** 1. row ***************************
tidb_version(): Release Version: v8.0.0
Edition: Community
Git Commit Hash: 8ba1fa452b1ccdbfb85879ea94b9254aabba2916
Git Branch: HEAD
UTC Build Time: 2024-03-28 14:22:15
GoVersion: go1.21.4
Race Enabled: false
Check Table Before Drop: false
Store: tikv
1 row in set (0.00 sec)
mysql> \q
Bye
List the TiDB clusters managed by TiUP.
[root@shawnyan ~ 12:05:46]$ tiup cluster list
Name User Version Path PrivateKey
---- ---- ------- ---- ----------
mytidb tidb v8.0.0 /root/.tiup/storage/cluster/clusters/mytidb /root/.tiup/storage/cluster/clusters/mytidb/ssh/id_rsa
Inspect the cluster topology and status.
[root@shawnyan ~ 12:05:54]$ tiup cluster display mytidb
Cluster type: tidb
Cluster name: mytidb
Cluster version: v8.0.0
Deploy user: tidb
SSH type: builtin
Dashboard URL: http://192.168.8.161:2379/dashboard
Grafana URL: http://192.168.8.161:3000
ID Role Host Ports OS/Arch Status Data Dir Deploy Dir
-- ---- ---- ----- ------- ------ -------- ----------
192.168.8.161:3000 grafana 192.168.8.161 3000 linux/x86_64 Up - /tidb-deploy/grafana-3000
192.168.8.161:2379 pd 192.168.8.161 2379/2380 linux/x86_64 Up|L|UI /tidb-data/pd-2379 /tidb-deploy/pd-2379
192.168.8.161:9090 prometheus 192.168.8.161 9090/12020 linux/x86_64 Up /tidb-data/prometheus-9090 /tidb-deploy/prometheus-9090
192.168.8.161:4000 tidb 192.168.8.161 4000/10080 linux/x86_64 Up - /tidb-deploy/tidb-4000
192.168.8.161:9000 tiflash 192.168.8.161 9000/8123/3930/20170/20292/8234 linux/x86_64 Up /tidb-data/tiflash-9000 /tidb-deploy/tiflash-9000
192.168.8.161:20160 tikv 192.168.8.161 20160/20180 linux/x86_64 Up /tidb-data/tikv-20160 /tidb-deploy/tikv-20160
192.168.8.161:20161 tikv 192.168.8.161 20161/20181 linux/x86_64 Up /tidb-data/tikv-20161 /tidb-deploy/tikv-20161
Total nodes: 7
[root@shawnyan ~ 12:06:04]$
And that's it: in under 10 minutes, a complete TiDB cluster has been deployed.
Thinking back on the years spent writing Shell and Ansible scripts, having TiUP's help is pure bliss.
Plug time:
🥳 The 3rd TiDB Community Column Writing Contest is in full swing!
🙌 Submission window: March 1 through May 30
🎉 Loads of swag up for grabs: a custom TiDB suitcase, the 2024 community softshell jacket, noise-cancelling headphones, a massager, and more!
🧡 Comment to win: leave the article topic you'd most like to see in the event post's comment section for a chance to win the 2024 spring outdoor-style softshell jacket~
[The 3rd TiDB Community Column Writing Contest] Rich swag rewards: the returning suitcase, BOSE headphones, an SKG neck massager, the new softshell jacket, and more!
Congratulations to TiProxy on Going GA!
An earlier post, Compiling TiDB 7.x from Source: The TiProxy Chapter, and a First Taste, introduced TiProxy; let's look at what has changed since.
TiProxy is TiDB's official proxy component. It sits between the client and the TiDB servers, providing load balancing and connection persistence, so that the load on the TiDB cluster stays balanced and client connections to the database are not affected during maintenance operations.
In v8.0.0, TiProxy became generally available, rounding out features such as automatic generation of signed certificates and monitoring.
TiProxy's use cases are as follows:
During maintenance operations on a TiDB cluster, such as rolling restarts, rolling upgrades, and scale-in, TiDB servers come and go, which breaks client connections to the affected servers. With TiProxy, connections can be migrated smoothly to other TiDB servers during these operations, leaving clients unaffected.
Ordinarily, no client connection to a TiDB server can be migrated dynamically to another TiDB server. When load across multiple TiDB servers is unbalanced, the cluster as a whole may have plenty of spare resources while individual TiDB servers exhaust theirs and latency rises sharply. To solve this, TiProxy provides dynamic connection migration: it moves connections from one TiDB server to another without the client noticing, balancing load across the TiDB cluster.
Continuing with the TiDB v8.0.0 cluster from above, let's walk through adding a TiProxy service.
1. Create the TiProxy topology file
[root@shawnyan ~ 12:48:48]$ vi tiproxy.toml
Specify the IP address in the file (despite the .toml extension used here, the content is the usual TiUP YAML):
tiproxy_servers:
- host: 192.168.8.161
2. Deploy the TiProxy service with TiUP
Scale out the cluster with the tiup cluster scale-out command.
[root@shawnyan ~ 12:49:12]$ tiup cluster scale-out mytidb ./tiproxy.toml --user root -p
Input SSH password:
...
Please confirm your topology:
Cluster type: tidb
Cluster name: mytidb
Cluster version: v8.0.0
Role Host Ports OS/Arch Directories
---- ---- ----- ------- -----------
tiproxy 192.168.8.161 6000/3080 linux/x86_64 /tidb-deploy/tiproxy-6000
Attention:
1. If the topology is not what you expected, check your yaml file.
2. Please confirm there is no port/directory conflicts in same host.
Do you want to continue? [y/N]: (default=N) y
...
+ Download TiDB components
- Download tiproxy: (linux/amd64) ... Done
+ Initialize target host environments
+ Deploy TiDB instance
- Deploy instance tiproxy -> 192.168.8.161:6000 ... Done
+ Copy certificate to remote host
+ Generate scale-out config
- Generate scale-out config tiproxy -> 192.168.8.161:6000 ... Done
+ Init monitor config
...
+ [ Serial ] - Save meta
+ [ Serial ] - Start new instances
Starting component tiproxy
Starting component node_exporter
Starting component blackbox_exporter
+ Refresh components conifgs
- Generate config tiproxy -> 192.168.8.161:6000 ... Done
...
+ Reload prometheus and grafana
- Reload prometheus -> 192.168.8.161:9090 ... Done
- Reload grafana -> 192.168.8.161:3000 ... Done
+ [ Serial ] - UpdateTopology: cluster=mytidb
Scaled cluster `mytidb` out successfully
Check the cluster status.
[root@shawnyan ~ 12:49:35]$ tiup cluster list
Name User Version Path PrivateKey
---- ---- ------- ---- ----------
mytidb tidb v8.0.0 /root/.tiup/storage/cluster/clusters/mytidb /root/.tiup/storage/cluster/clusters/mytidb/ssh/id_rsa
[root@shawnyan ~ 12:49:49]$ tiup cluster display mytidb
Cluster type: tidb
Cluster name: mytidb
Cluster version: v8.0.0
Deploy user: tidb
SSH type: builtin
Dashboard URL: http://192.168.8.161:2379/dashboard
Grafana URL: http://192.168.8.161:3000
ID Role Host Ports OS/Arch Status Data Dir Deploy Dir
-- ---- ---- ----- ------- ------ -------- ----------
192.168.8.161:3000 grafana 192.168.8.161 3000 linux/x86_64 Up - /tidb-deploy/grafana-3000
192.168.8.161:2379 pd 192.168.8.161 2379/2380 linux/x86_64 Up|L|UI /tidb-data/pd-2379 /tidb-deploy/pd-2379
192.168.8.161:9090 prometheus 192.168.8.161 9090/12020 linux/x86_64 Up /tidb-data/prometheus-9090 /tidb-deploy/prometheus-9090
192.168.8.161:4000 tidb 192.168.8.161 4000/10080 linux/x86_64 Up - /tidb-deploy/tidb-4000
192.168.8.161:9000 tiflash 192.168.8.161 9000/8123/3930/20170/20292/8234 linux/x86_64 Up /tidb-data/tiflash-9000 /tidb-deploy/tiflash-9000
192.168.8.161:20160 tikv 192.168.8.161 20160/20180 linux/x86_64 Up /tidb-data/tikv-20160 /tidb-deploy/tikv-20160
192.168.8.161:20161 tikv 192.168.8.161 20161/20181 linux/x86_64 Up /tidb-data/tikv-20161 /tidb-deploy/tikv-20161
192.168.8.161:6000 tiproxy 192.168.8.161 6000/3080 linux/x86_64 Up - /tidb-deploy/tiproxy-6000
Total nodes: 8
As shown, the TiProxy service has been added and listens on ports 6000/3080.
3. Connect to TiDB through TiProxy
Connect to the TiDB database through port 6000.
[root@shawnyan ~ 12:54:46]$ mysql -h 192.168.8.161 -P 6000 -u root -p
Enter password:
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 0
Server version: 8.0.11-TiDB-v8.0.0 TiDB Server (Apache License 2.0) Community Edition, MySQL 8.0 compatible
Copyright (c) 2000, 2024, Oracle and/or its affiliates.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> select version();
+--------------------+
| version() |
+--------------------+
| 8.0.11-TiDB-v8.0.0 |
+--------------------+
1 row in set (0.01 sec)
mysql> \q
Bye
Testing TiProxy's effect on clients during scale-in
Normally, with TiProxy in use, when TiDB servers are scaled in, TiProxy can migrate clients smoothly to the remaining TiDB servers.
Here is a small experiment to demonstrate it.
The current TiDB cluster has 1 TiProxy and 2 TiDB servers.
[root@shawnyan ~ 22:54:11]$ tiup cluster display mytidb
...
ID Role Host Ports OS/Arch Status Data Dir Deploy Dir
-- ---- ---- ----- ------- ------ -------- ----------
192.168.8.161:4000 tidb 192.168.8.161 4000/10080 linux/x86_64 Up - /tidb-deploy/tidb-4000
192.168.8.161:4001 tidb 192.168.8.161 4001/10081 linux/x86_64 Up - /tidb-deploy/tidb-4001
192.168.8.161:6000 tiproxy 192.168.8.161 6000/3080 linux/x86_64 Up - /tidb-deploy/tiproxy-6000
...
Total nodes: 9
I wrote a small Go program that connects through TiProxy and repeatedly prints the current time, connection ID, and the port of the TiDB server it reached.
[root@shawnyan ~/mygo 23:37:31]$ ./mygo
2024-04-09 23:37:35.008276 3290458852 4000
Now scale in one TiDB server and watch the connection state.
[root@shawnyan ~ 23:54:02]$ date; tiup cluster scale-in mytidb -N 192.168.8.161:4001
Tue Apr 9 23:54:28 CST 2024
This operation will delete the 192.168.8.161:4001 nodes in `mytidb` and all their data.
Do you want to continue? [y/N]:(default=N) y
Scale-in nodes...
...
Scaled cluster `mytidb` in successfully
[root@shawnyan ~ 23:54:42]$
The connection log printed by the program is shown below; after the entry at 31.702979, port 4001 can no longer be reached.
2024-04-09 23:54:31.628317 3290470616 4000
2024-04-09 23:54:31.644864 190842836 4001
2024-04-09 23:54:31.658394 3290470618 4000
2024-04-09 23:54:31.671760 190842838 4001
2024-04-09 23:54:31.685303 3290470620 4000
2024-04-09 23:54:31.702979 190842840 4001
2024-04-09 23:54:31.715786 3290470622 4000
2024-04-09 23:54:31.940418 3290470624 4000
2024-04-09 23:54:32.155463 3290470626 4000
2024-04-09 23:54:32.368296 3290470628 4000
2024-04-09 23:54:32.579094 3290470630 4000
2024-04-09 23:54:32.792323 3290470634 4000
2024-04-09 23:54:32.803430 3290470636 4000
2024-04-09 23:54:32.814547 3290470638 4000
2024-04-09 23:54:32.824227 3290470640 4000
2024-04-09 23:54:32.833193 3290470642 4000
The log of the TiDB server on port 4001 shows:
[2024/04/09 23:54:31.725 +08:00] [INFO] [signal_posix.go:54] ["got signal to exit"] [signal=terminated]
[2024/04/09 23:54:31.725 +08:00] [INFO] [server.go:584] ["setting tidb-server to report unhealthy (shutting-down)"]
[2024/04/09 23:54:31.725 +08:00] [ERROR] [http_status.go:530] ["start status/rpc server error"] [error="accept tcp [::]:10081: use of closed network connection"]
[2024/04/09 23:54:31.725 +08:00] [ERROR] [http_status.go:520] ["grpc server error"] [error="mux: server closed"]
[2024/04/09 23:54:31.725 +08:00] [ERROR] [http_status.go:525] ["http server error"] [error="http: Server closed"]
[2024/04/09 23:54:31.733 +08:00] [INFO] [server.go:990] ["start drain clients"]
[2024/04/09 23:54:31.733 +08:00] [INFO] [server.go:1019] ["all sessions quit in drain wait time"]
[2024/04/09 23:54:31.733 +08:00] [INFO] [server.go:971] ["kill all connections."] [category=server]
...
That wraps up the little experiment.
As this small case shows, when a cluster running TiProxy is scaled in, traffic is switched over to the remaining Up nodes, and the process is smooth.
Summary
Domestic or not, a database is more than a service inside an OS; it also needs good support services.
Good ecosystem tooling raises developer and DBA productivity and lowers operating costs.
In short, the easier a database is to get started with, the better the product.
Discussion welcome!
-- END --
If this article sparked any inspiration for you, please help out with a like or a share. Thanks! (๑˃̵ᴗ˂̵)