
TiDB Deployment: Single-Node Deployment of TiDB 6.1.1 on openEuler 22.03/20.03

  • September 16, 2022, Beijing

Author: tracy0984. Original source: https://tidb.net/blog/7f127822

Background

With the release of TiDB 6.1.1, deploying TiDB on Kylin V10 is now supported for production environments. I had long wanted to run TiDB on openEuler, so I installed and tested the new version there. This article is offered as a reference.

Installation Environment

Operating system: openEuler 22.03 or openEuler 20.03 SP3


TiDB version: v6.1.1

System Configuration

Mount the Data Disk

  1. Identify the data disk: /dev/sdb.

  2. Create a partition.

  3. Format the file system (ext4).

  4. Look up the UUID of the data disk partition.

     In this example, the UUID of sdb1 is 87d0467c-d3a1-4916-8112-aed259bf8c8c.

  5. Edit /etc/fstab and add the nodelalloc mount option.

  6. Mount the data disk.

  7. Run a final check: if the file system is ext4 and the mount options include nodelalloc, the configuration has taken effect.

Example commands for these seven steps are sketched after the list.
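The original does not show the concrete commands, so here is a minimal sketch of steps 1-7, assuming the data disk is /dev/sdb, the new partition becomes /dev/sdb1, and the mount point is /u02 (device name, UUID, and mount point must be adjusted to your environment):

-- 1. Confirm the data disk (e.g. /dev/sdb)
# fdisk -l
-- 2. Create a partition
# parted -s -a optimal /dev/sdb mklabel gpt -- mkpart primary ext4 1 -1
-- 3. Format the file system as ext4
# mkfs.ext4 /dev/sdb1
-- 4. Look up the partition UUID
# lsblk -f
-- 5. Add the partition to /etc/fstab with the nodelalloc mount option
# echo "UUID=87d0467c-d3a1-4916-8112-aed259bf8c8c /u02 ext4 defaults,nodelalloc,noatime 0 2" >> /etc/fstab
-- 6. Mount the data disk
# mkdir -p /u02 && mount -a
-- 7. Verify that nodelalloc is active on the new mount
# mount -t ext4 | grep nodelalloc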

Disable the Firewall

  1. Check the firewall status (the original uses CentOS Linux release 7.7.1908 (Core) as the example).

  2. Stop the firewall service.

  3. Disable the firewall service from starting at boot.

  4. Check the firewall status again.

Example commands for these steps are sketched below.
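A minimal sketch of the four steps, assuming firewalld is the firewall in use (the default on openEuler as well as CentOS):

-- 1. Check the firewall status
# systemctl status firewalld.service
-- 2. Stop the firewall service
# systemctl stop firewalld.service
-- 3. Prevent the firewall service from starting at boot
# systemctl disable firewalld.service
-- 4. Confirm it is inactive
# systemctl status firewalld.service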

Enable Clock Synchronization

Check the chronyd service status:


[root@cen7-pg-01 ~]# systemctl status chronyd.service
● chronyd.service - NTP client/server
   Loaded: loaded (/usr/lib/systemd/system/chronyd.service; disabled; vendor preset: enabled)
   Active: inactive (dead)
     Docs: man:chronyd(8)
           man:chrony.conf(5)
[root@cen7-pg-01 ~]# chronyc tracking
506 Cannot talk to daemon


Configure the chronyd time server and enable clock synchronization on all nodes:


# Edit the configuration file on the time server node:
# vi /etc/chrony.conf
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
server 192.168.56.10 iburst
# Allow NTP client access from local network.
allow 192.168.0.0/16
# Serve time even if not synchronized to a time source.
local stratum 10

# Edit the chronyd configuration file on the other nodes:
# vi /etc/chrony.conf
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
server 192.168.56.10 iburst

# Start the chronyd service:
# systemctl start chronyd.service
# systemctl status chronyd.service
● chronyd.service - NTP client/server
   Loaded: loaded (/usr/lib/systemd/system/chronyd.service; disabled; vendor preset: enabled)
   Active: active (running) since Fri 2022-07-22 14:39:39 CST; 4s ago
     Docs: man:chronyd(8)
           man:chrony.conf(5)
  Process: 5505 ExecStartPost=/usr/libexec/chrony-helper update-daemon (code=exited, status=0/SUCCESS)
  Process: 5501 ExecStart=/usr/sbin/chronyd $OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 5504 (chronyd)
    Tasks: 1
   CGroup: /system.slice/chronyd.service
           └─5504 /usr/sbin/chronyd
Jul 22 14:39:39 cen7-mysql-01 systemd[1]: Stopped NTP client/server.
Jul 22 14:39:39 cen7-mysql-01 systemd[1]: Starting NTP client/server...
Jul 22 14:39:39 cen7-mysql-01 chronyd[5504]: chronyd version 3.4 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +SECHASH +IPV6 +DEBUG)
Jul 22 14:39:39 cen7-mysql-01 chronyd[5504]: Frequency 0.156 +/- 235.611 ppm read from /var/lib/chrony/drift
Jul 22 14:39:39 cen7-mysql-01 systemd[1]: Started NTP client/server.
Jul 22 14:39:44 cen7-mysql-01 chronyd[5504]: Selected source 192.168.56.10
[root@cen7-mysql-01 ~]# chronyc tracking
Reference ID    : C0A8380A (cen7-mysql-01)
Stratum         : 11
Ref time (UTC)  : Fri Jul 22 06:39:43 2022
System time     : 0.000000000 seconds fast of NTP time
Last offset     : -0.000003156 seconds
RMS offset      : 0.000003156 seconds
Frequency       : 0.107 ppm fast
Residual freq   : -0.019 ppm
Skew            : 250.864 ppm
Root delay      : 0.000026199 seconds
Root dispersion : 0.001826834 seconds
Update interval : 0.0 seconds
Leap status     : Normal
# systemctl enable chronyd.service
Created symlink from /etc/systemd/system/multi-user.target.wants/chronyd.service to /usr/lib/systemd/system/chronyd.service.

Tune System Parameters

For production TiDB deployments, it is recommended to tune the operating system according to the official documentation:


TiDB Environment and System Configuration Check | PingCAP Docs


In this VM environment, only transparent huge pages (THP) were disabled:


  1. Check whether transparent huge pages are currently enabled (a sample check is shown below).
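A quick way to perform this check; the bracketed value is the active setting, so [always] or [madvise] means THP is still enabled (sample output, not taken from the original):

# cat /sys/kernel/mm/transparent_hugepage/enabled
[always] madvise never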


Manual steps to disable THP:


-- Temporarily disable transparent huge pages:
# echo never > /sys/kernel/mm/transparent_hugepage/enabled
# echo never > /sys/kernel/mm/transparent_hugepage/defrag
# cat /sys/kernel/mm/transparent_hugepage/enabled
always madvise [never]
# cat /sys/kernel/mm/transparent_hugepage/defrag
always defer defer+madvise madvise [never]

-- Permanently disable transparent huge pages:
# vi /etc/rc.d/rc.local
-- Append the following at the end of the file
if test -f /sys/kernel/mm/transparent_hugepage/enabled; then
  echo never > /sys/kernel/mm/transparent_hugepage/enabled
fi
if test -f /sys/kernel/mm/transparent_hugepage/defrag; then
  echo never > /sys/kernel/mm/transparent_hugepage/defrag
fi
-- Make /etc/rc.d/rc.local executable
# chmod +x /etc/rc.d/rc.local

Modify Kernel Parameters

Edit /etc/sysctl.conf to set the relevant kernel parameters:


# echo "fs.file-max = 1000000" >> /etc/sysctl.conf
# echo "net.core.somaxconn = 32768" >> /etc/sysctl.conf
# echo "net.ipv4.tcp_tw_recycle = 0" >> /etc/sysctl.conf
# echo "net.ipv4.tcp_syncookies = 0" >> /etc/sysctl.conf
# echo "vm.overcommit_memory = 1" >> /etc/sysctl.conf
-- Apply the changes
# sysctl -p

Modify the sshd Configuration on openEuler 22.03 to Allow ssh-rsa

-- Edit the sshd configuration file
# echo "PubkeyAcceptedKeyTypes=+ssh-rsa" >> /etc/ssh/sshd_config
-- Restart the sshd service
# systemctl restart sshd

Create the TiDB User and Grant Privileges

# groupadd tidb
# useradd -g tidb tidb
# passwd tidb
Changing password for user tidb.
New password: tidb
BAD PASSWORD: The password is shorter than 8 characters
Retype new password: tidb
passwd: all authentication tokens updated successfully.

-- Give the tidb user ownership of /u02, which will hold the TiDB deploy and data directories
# chown -R tidb: /u02

-- Grant the tidb user sudo privileges
# visudo
-- Add the following line:
## Same thing without a password
# %wheel        ALL=(ALL)       NOPASSWD: ALL
tidb        ALL=(ALL)       NOPASSWD: ALL

-- Edit /etc/security/limits.conf to set resource limits for the tidb user
cat << EOF >> /etc/security/limits.conf
tidb soft nofile 1000000
tidb hard nofile 1000000
tidb soft stack 32768
tidb hard stack 32768
EOF
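To confirm the limits take effect, a quick check from a fresh login shell of the tidb user (the values shown are what the settings above should produce; not taken from the original):

# su - tidb -c "ulimit -n -s"
open files                      (-n) 1000000
stack size              (kbytes, -s) 32768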

Install TiDB

Deploy the TiUP Component Offline

-- Extract the installation packages
# tar -zxf tidb-community-server-v6.1.1-linux-amd64.tar.gz
# tar -zxf tidb-community-toolkit-v6.1.1-linux-amd64.tar.gz
# chown -R tidb: /u02/soft/ti*
-- Configure environment variables as the tidb user
# su - tidb
$ cd /u02/soft/tidb-community-server-v6.1.1-linux-amd64/
$ sh ./local_install.sh && source ~/.bash_profile
Disable telemetry success
Successfully set mirror to /u02/soft/tidb-community-server-v6.1.1-linux-amd64
Detected shell: bash
Shell profile:  /home/tidb/.bash_profile
/home/tidb/.bash_profile has been modified to to add tiup to PATH
open a new terminal or source /home/tidb/.bash_profile to use it
Installed path: /home/tidb/.tiup/bin/tiup
===============================================
1. source /home/tidb/.bash_profile
2. Have a try:   tiup playground
===============================================
$ which tiup
/home/tidb/.tiup/bin/tiup

Merge the toolkit Package

$ cd /u02/soft/tidb-community-server-v6.1.1-linux-amd64/
$ cp -rp keys ~/.tiup/
$ tiup mirror merge ../tidb-community-toolkit-v6.1.1-linux-amd64
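Optionally, you can verify that TiUP now points at the merged local mirror; `tiup mirror show` prints the active mirror location (expected output based on the paths used in this installation):

$ tiup mirror show
/u02/soft/tidb-community-server-v6.1.1-linux-amd64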


Create the deployment topology file. Because of test-environment constraints, PD, TiDB server, and TiKV are all configured as single nodes.


$ vi topology.yaml
global:
  user: "tidb"                    # deployment user
  ssh_port: 22                    # SSH port
  deploy_dir: "/u02/tidb-deploy"  # TiDB deployment directory
  data_dir: "/u02/tidb-data"      # TiDB data directory
#server_configs: {}
pd_servers:                       # PD node
  - host: 192.168.56.11
tidb_servers:                     # TiDB server node
  - host: 192.168.56.11
tikv_servers:                     # TiKV node
  - host: 192.168.56.11
monitoring_servers:               # monitoring node
  - host: 192.168.56.11
grafana_servers:                  # Grafana node
  - host: 192.168.56.11
alertmanager_servers:             # Alertmanager node
  - host: 192.168.56.11

Install the TiDB Cluster

-- Check the installation environment
$ tiup cluster check /home/tidb/topology.yaml -p
tiup is checking updates for component cluster ...
Starting component `cluster`: /home/tidb/.tiup/components/cluster/v1.10.3/tiup-cluster check /home/tidb/topology.yaml -p
Input SSH password:   (enter the deployment user's password)

+ Detect CPU Arch Name
  - Detecting node 192.168.56.11 Arch info ... Done

+ Detect CPU OS Name
  - Detecting node 192.168.56.11 OS info ... Done
+ Download necessary tools
  - Downloading check tools for linux/amd64 ... Done
+ Collect basic system information
+ Collect basic system information
  - Getting system info of 192.168.56.11:22 ... Done
+ Check time zone
  - Checking node 192.168.56.11 ... Done
+ Check system requirements
+ Check system requirements
+ Check system requirements
+ Check system requirements
  - Checking node 192.168.56.11 ... Done
  - Checking node 192.168.56.11 ... Done
  - Checking node 192.168.56.11 ... Done
  - Checking node 192.168.56.11 ... Done
  - Checking node 192.168.56.11 ... Done
  - Checking node 192.168.56.11 ... Done
+ Cleanup check files
  - Cleanup check files on 192.168.56.11:22 ... Done
  - Cleanup check files on 192.168.56.11:22 ... Done
  - Cleanup check files on 192.168.56.11:22 ... Done
  - Cleanup check files on 192.168.56.11:22 ... Done
  - Cleanup check files on 192.168.56.11:22 ... Done
  - Cleanup check files on 192.168.56.11:22 ... Done
Node           Check         Result  Message
----           -----         ------  -------
192.168.56.11  cpu-cores     Pass    number of CPU cores / threads: 1
192.168.56.11  command       Fail    numactl not usable, bash: line 1: numactl: command not found
192.168.56.11  thp           Pass    THP is disabled
192.168.56.11  service       Fail    service irqbalance is not running
192.168.56.11  os-version    Fail    os vendor openEuler not supported
192.168.56.11  cpu-governor  Warn    Unable to determine current CPU frequency governor policy
192.168.56.11  swap          Warn    swap is enabled, please disable it for best performance
192.168.56.11  memory        Pass    memory size is 0MB
192.168.56.11  network       Pass    network speed of enp0s3 is 1000MB
192.168.56.11  selinux       Pass    SELinux is disabled

-- Apply automatic fixes for the failed checks
$ tiup cluster check /home/tidb/topology.yaml --apply -p
tiup is checking updates for component cluster ...
Starting component `cluster`: /home/tidb/.tiup/components/cluster/v1.10.3/tiup-cluster check /home/tidb/topology.yaml --apply -p
Input SSH password:   (enter the deployment user's password)

+ Detect CPU Arch Name
  - Detecting node 192.168.56.11 Arch info ... Done

+ Detect CPU OS Name
  - Detecting node 192.168.56.11 OS info ... Done
+ Download necessary tools
  - Downloading check tools for linux/amd64 ... Done
+ Collect basic system information
+ Collect basic system information
  - Getting system info of 192.168.56.11:22 ... Done
+ Check time zone
  - Checking node 192.168.56.11 ... Done
+ Check system requirements
+ Check system requirements
+ Check system requirements
+ Check system requirements
  - Checking node 192.168.56.11 ... Done
  - Checking node 192.168.56.11 ... Done
  - Checking node 192.168.56.11 ... Done
  - Checking node 192.168.56.11 ... Done
  - Checking node 192.168.56.11 ... Done
  - Checking node 192.168.56.11 ... Done
+ Cleanup check files
  - Cleanup check files on 192.168.56.11:22 ... Done
  - Cleanup check files on 192.168.56.11:22 ... Done
  - Cleanup check files on 192.168.56.11:22 ... Done
  - Cleanup check files on 192.168.56.11:22 ... Done
  - Cleanup check files on 192.168.56.11:22 ... Done
  - Cleanup check files on 192.168.56.11:22 ... Done
Node           Check         Result  Message
----           -----         ------  -------
192.168.56.11  cpu-governor  Warn    Unable to determine current CPU frequency governor policy, auto fixing not supported
192.168.56.11  memory        Pass    memory size is 0MB
192.168.56.11  selinux       Pass    SELinux is disabled
192.168.56.11  thp           Pass    THP is disabled
192.168.56.11  command       Fail    numactl not usable, bash: line 1: numactl: command not found, auto fixing not supported
192.168.56.11  os-version    Fail    os vendor openEuler not supported, auto fixing not supported
192.168.56.11  cpu-cores     Pass    number of CPU cores / threads: 1
192.168.56.11  swap          Warn    will try to disable swap, please also check /etc/fstab manually
192.168.56.11  network       Pass    network speed of enp0s3 is 1000MB
192.168.56.11  service       Fail    will try to 'start irqbalance.service'
+ Try to apply changes to fix failed checks
  - Applying changes on 192.168.56.11 ... Done

-- Deploy the TiDB cluster
$ tiup cluster deploy tidb-v611 v6.1.1 /home/tidb/topology.yaml -p
tiup is checking updates for component cluster ...
Starting component `cluster`: /home/tidb/.tiup/components/cluster/v1.10.3/tiup-cluster deploy tidb-v611 v6.1.1 /home/tidb/topology.yaml -p
Input SSH password:

+ Detect CPU Arch Name
  - Detecting node 192.168.56.11 Arch info ... Done

+ Detect CPU OS Name
  - Detecting node 192.168.56.11 OS info ... Done
Please confirm your topology:
Cluster type:    tidb
Cluster name:    tidb-v611
Cluster version: v6.1.1
Role          Host           Ports        OS/Arch       Directories
----          ----           -----        -------       -----------
pd            192.168.56.11  2379/2380    linux/x86_64  /u02/tidb-deploy/pd-2379,/u02/tidb-data/pd-2379
tikv          192.168.56.11  20160/20180  linux/x86_64  /u02/tidb-deploy/tikv-20160,/u02/tidb-data/tikv-20160
tidb          192.168.56.11  4000/10080   linux/x86_64  /u02/tidb-deploy/tidb-4000
prometheus    192.168.56.11  9090/12020   linux/x86_64  /u02/tidb-deploy/prometheus-9090,/u02/tidb-data/prometheus-9090
grafana       192.168.56.11  3000         linux/x86_64  /u02/tidb-deploy/grafana-3000
alertmanager  192.168.56.11  9093/9094    linux/x86_64  /u02/tidb-deploy/alertmanager-9093,/u02/tidb-data/alertmanager-9093
Attention:
    1. If the topology is not what you expected, check your yaml file.
    2. Please confirm there is no port/directory conflicts in same host.
Do you want to continue? [y/N]: (default=N) y
+ Generate SSH keys ... Done
+ Download TiDB components
  - Download pd:v6.1.1 (linux/amd64) ... Done
  - Download tikv:v6.1.1 (linux/amd64) ... Done
  - Download tidb:v6.1.1 (linux/amd64) ... Done
  - Download prometheus:v6.1.1 (linux/amd64) ... Done
  - Download grafana:v6.1.1 (linux/amd64) ... Done
  - Download alertmanager: (linux/amd64) ... Done
  - Download node_exporter: (linux/amd64) ... Done
  - Download blackbox_exporter: (linux/amd64) ... Done
+ Initialize target host environments
  - Prepare 192.168.56.11:22 ... Done
+ Deploy TiDB instance
  - Copy pd -> 192.168.56.11 ... Done
  - Copy tikv -> 192.168.56.11 ... Done
  - Copy tidb -> 192.168.56.11 ... Done
  - Copy prometheus -> 192.168.56.11 ... Done
  - Copy grafana -> 192.168.56.11 ... Done
  - Copy alertmanager -> 192.168.56.11 ... Done
  - Deploy node_exporter -> 192.168.56.11 ... Done
  - Deploy blackbox_exporter -> 192.168.56.11 ... Done
+ Copy certificate to remote host
+ Init instance configs
  - Generate config pd -> 192.168.56.11:2379 ... Done
  - Generate config tikv -> 192.168.56.11:20160 ... Done
  - Generate config tidb -> 192.168.56.11:4000 ... Done
  - Generate config prometheus -> 192.168.56.11:9090 ... Done
  - Generate config grafana -> 192.168.56.11:3000 ... Done
  - Generate config alertmanager -> 192.168.56.11:9093 ... Done
+ Init monitor configs
  - Generate config node_exporter -> 192.168.56.11 ... Done
  - Generate config blackbox_exporter -> 192.168.56.11 ... Done
Enabling component pd
        Enabling instance 192.168.56.11:2379
        Enable instance 192.168.56.11:2379 success
Enabling component tikv
        Enabling instance 192.168.56.11:20160
        Enable instance 192.168.56.11:20160 success
Enabling component tidb
        Enabling instance 192.168.56.11:4000
        Enable instance 192.168.56.11:4000 success
Enabling component prometheus
        Enabling instance 192.168.56.11:9090
        Enable instance 192.168.56.11:9090 success
Enabling component grafana
        Enabling instance 192.168.56.11:3000
        Enable instance 192.168.56.11:3000 success
Enabling component alertmanager
        Enabling instance 192.168.56.11:9093
        Enable instance 192.168.56.11:9093 success
Enabling component node_exporter
        Enabling instance 192.168.56.11
        Enable 192.168.56.11 success
Enabling component blackbox_exporter
        Enabling instance 192.168.56.11
        Enable 192.168.56.11 success
Cluster `tidb-v611` deployed successfully, you can start it with command: `tiup cluster start tidb-v611 --init`
-- View TiDB cluster information
$ tiup cluster list
tiup is checking updates for component cluster ...
Starting component `cluster`: /home/tidb/.tiup/components/cluster/v1.10.3/tiup-cluster list
Name       User  Version  Path                                                 PrivateKey
----       ----  -------  ----                                                 ----------
tidb-v611  tidb  v6.1.1   /home/tidb/.tiup/storage/cluster/clusters/tidb-v611  /home/tidb/.tiup/storage/cluster/clusters/tidb-v611/ssh/id_rsa

-- Start the TiDB cluster
$ tiup cluster start tidb-v611 --init
tiup is checking updates for component cluster ...
Starting component `cluster`: /home/tidb/.tiup/components/cluster/v1.10.3/tiup-cluster start tidb-v611 --init
Starting cluster tidb-v611...
+ [ Serial ] - SSHKeySet: privateKey=/home/tidb/.tiup/storage/cluster/clusters/tidb-v611/ssh/id_rsa, publicKey=/home/tidb/.tiup/storage/cluster/clusters/tidb-v611/ssh/id_rsa.pub
+ [Parallel] - UserSSH: user=tidb, host=192.168.56.11
+ [Parallel] - UserSSH: user=tidb, host=192.168.56.11
+ [Parallel] - UserSSH: user=tidb, host=192.168.56.11
+ [Parallel] - UserSSH: user=tidb, host=192.168.56.11
+ [Parallel] - UserSSH: user=tidb, host=192.168.56.11
+ [Parallel] - UserSSH: user=tidb, host=192.168.56.11
+ [ Serial ] - StartCluster
Starting component pd
        Starting instance 192.168.56.11:2379
        Start instance 192.168.56.11:2379 success
Starting component tikv
        Starting instance 192.168.56.11:20160
        Start instance 192.168.56.11:20160 success
Starting component tidb
        Starting instance 192.168.56.11:4000
        Start instance 192.168.56.11:4000 success
Starting component prometheus
        Starting instance 192.168.56.11:9090
        Start instance 192.168.56.11:9090 success
Starting component grafana
        Starting instance 192.168.56.11:3000
        Start instance 192.168.56.11:3000 success
Starting component alertmanager
        Starting instance 192.168.56.11:9093
        Start instance 192.168.56.11:9093 success
Starting component node_exporter
        Starting instance 192.168.56.11
        Start 192.168.56.11 success
Starting component blackbox_exporter
        Starting instance 192.168.56.11
        Start 192.168.56.11 success
+ [ Serial ] - UpdateTopology: cluster=tidb-v611
Started cluster `tidb-v611` successfully
The root password of TiDB database has been changed.
The new password is: '097zS8&!1@Vmc+x5Ug'.
Copy and record it to somewhere safe, it is only displayed once, and will not be stored.
The generated password can NOT be get and shown again.

-- Check the TiDB cluster status
$ tiup cluster display tidb-v611
tiup is checking updates for component cluster ...
Starting component `cluster`: /home/tidb/.tiup/components/cluster/v1.10.3/tiup-cluster display tidb-v611
Cluster type:       tidb
Cluster name:       tidb-v611
Cluster version:    v6.1.1
Deploy user:        tidb
SSH type:           builtin
Dashboard URL:      http://192.168.56.11:2379/dashboard
Grafana URL:        http://192.168.56.11:3000
ID                   Role          Host           Ports        OS/Arch       Status   Data Dir                          Deploy Dir
--                   ----          ----           -----        -------       ------   --------                          ----------
192.168.56.11:9093   alertmanager  192.168.56.11  9093/9094    linux/x86_64  Up       /u02/tidb-data/alertmanager-9093  /u02/tidb-deploy/alertmanager-9093
192.168.56.11:3000   grafana       192.168.56.11  3000         linux/x86_64  Up       -                                 /u02/tidb-deploy/grafana-3000
192.168.56.11:2379   pd            192.168.56.11  2379/2380    linux/x86_64  Up|L|UI  /u02/tidb-data/pd-2379            /u02/tidb-deploy/pd-2379
192.168.56.11:9090   prometheus    192.168.56.11  9090/12020   linux/x86_64  Up       /u02/tidb-data/prometheus-9090    /u02/tidb-deploy/prometheus-9090
192.168.56.11:4000   tidb          192.168.56.11  4000/10080   linux/x86_64  Up       -                                 /u02/tidb-deploy/tidb-4000
192.168.56.11:20160  tikv          192.168.56.11  20160/20180  linux/x86_64  Up       /u02/tidb-data/tikv-20160         /u02/tidb-deploy/tikv-20160
Total nodes: 6

-- Note: after testing, the cluster can be removed with: tiup cluster destroy tidb-v611
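As a final sanity check, you can connect to the cluster with any MySQL-compatible client on port 4000, using the root password generated by `tiup cluster start --init` (a minimal sketch; assumes a mysql client is installed on the host):

$ mysql -h 192.168.56.11 -P 4000 -u root -p
-- After logging in, confirm the server version:
mysql> SELECT tidb_version();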

Summary

Testing was done on both openEuler 22.03 and openEuler 20.03 SP3, and TiDB v6.1.1 installed successfully on both. Note, however, that openEuler is not yet officially supported for production environments.


Notes:


1. On openEuler 22.03, ordinary (non-root) users can no longer run systemctl commands.


During installation, it is therefore recommended not to set the --user option of tiup cluster check / deploy to a user other than root.


If you do want to specify another user via --user, modify /usr/share/polkit-1/actions/org.freedesktop.systemd1.policy in advance, as follows.


# vi /usr/share/polkit-1/actions/org.freedesktop.systemd1.policy
-- For the actions whose id is org.freedesktop.systemd1.manage-units, org.freedesktop.systemd1.manage-unit-files,
-- org.freedesktop.systemd1.set-environment, and org.freedesktop.systemd1.reload-daemon,
-- change the <defaults> section as follows:
            <defaults>
                        <!--
                        <allow_any>auth_admin</allow_any>
                        <allow_inactive>auth_admin</allow_inactive>
                        <allow_active>auth_admin_keep</allow_active>
                        -->
                        <allow_any>yes</allow_any>
                        <allow_inactive>yes</allow_inactive>
                        <allow_active>yes</allow_active>
            </defaults>


Hopefully, official support for deploying production TiDB on openEuler 22.03/20.03 will arrive soon.

Appendix: Handling Errors Encountered During Installation

1. tiup cluster check fails with: failed to fetch cpu-arch or kernel-name

$ tiup cluster check /home/tidb/topology.yaml -p
tiup is checking updates for component cluster ...
Starting component `cluster`: /home/tidb/.tiup/components/cluster/v1.10.3/tiup-cluster check /home/tidb/topology.yaml -p
Input SSH password:

+ Detect CPU Arch Name
  - Detecting node 192.168.56.12 Arch info ... Error

Error: failed to fetch cpu-arch or kernel-name: executor.ssh.execute_failed: Failed to execute command over SSH for 'tidb@192.168.56.12:22' {ssh_stderr: We trust you have received the usual lecture from the local System Administrator. It usually boils down to these three things:
    #1) Respect the privacy of others.
    #2) Think before you type.
    #3) With great power comes great responsibility.
sudo: a terminal is required to read the password; either use the -S option to read from standard input or configure an askpass helper, ssh_stdout: , ssh_command: export LANG=C; PATH=$PATH:/bin:/sbin:/usr/bin:/usr/sbin /usr/bin/sudo -H bash -c "uname -m"}, cause: Process exited with status 1

Verbose debug logs has been written to /home/tidb/.tiup/logs/tiup-cluster-debug-2022-09-11-02-11-58.log.


Cause: the tidb user does not have sudo privileges.


Fix:


# visudo
-- Add the following line:
## Same thing without a password
# %wheel        ALL=(ALL)       NOPASSWD: ALL
tidb        ALL=(ALL)       NOPASSWD: ALL


2. tiup cluster deploy fails with: failed to enable/disable pd: failed to enable: 192.168.56.11 pd-2379.service

# tiup cluster deploy tidb-v530 v5.3.0 /home/tidb/topology.yaml --user root -p
Starting component `cluster`: /root/.tiup/components/cluster/v1.7.0/tiup-cluster deploy tidb-v530 v5.3.0 /home/tidb/topology.yaml --user root -p
Input SSH password:

Run command on 192.168.56.11(sudo:false): uname -m
Please confirm your topology:
Cluster type:    tidb
Cluster name:    tidb-v530
Cluster version: v5.3.0
Role          Host           Ports        OS/Arch  Directories
----          ----           -----        -------  -----------
pd            192.168.56.11  2379/2380    linux/   /u02/tidb-deploy/pd-2379,/u02/tidb-data/pd-2379
tikv          192.168.56.11  20160/20180  linux/   /u02/tidb-deploy/tikv-20160,/u02/tidb-data/tikv-20160
tidb          192.168.56.11  4000/10080   linux/   /u02/tidb-deploy/tidb-4000
prometheus    192.168.56.11  9090         linux/   /u02/tidb-deploy/prometheus-9090,/u02/tidb-data/prometheus-9090
grafana       192.168.56.11  3000         linux/   /u02/tidb-deploy/grafana-3000
alertmanager  192.168.56.11  9093/9094    linux/   /u02/tidb-deploy/alertmanager-9093,/u02/tidb-data/alertmanager-9093
Attention:
    1. If the topology is not what you expected, check your yaml file.
    2. Please confirm there is no port/directory conflicts in same host.
Do you want to continue? [y/N]: (default=N) y
......
Enabling component pd
        Enabling instance 192.168.56.11:2379

Error: failed to enable/disable pd: failed to enable: 192.168.56.11 pd-2379.service, please check the instance's log(/u02/tidb-deploy/pd-2379/log) for more detail.: executor.ssh.execute_failed: Failed to execute command over SSH for 'tidb@192.168.56.11:22' {ssh_stderr: , ssh_stdout: , ssh_command: export LANG=C; PATH=$PATH:/bin:/sbin:/usr/bin:/usr/sbin /usr/bin/sudo -H bash -c "systemctl daemon-reload && systemctl enable pd-2379.service"}, cause: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain

Verbose debug logs has been written to /root/.tiup/logs/tiup-cluster-debug-2022-09-10-17-11-50.log.
Error: run `/root/.tiup/components/cluster/v1.7.0/tiup-cluster` (wd:/root/.tiup/data/TH1LwcK) failed: exit status 1


Cause: the sshd service does not accept ssh-rsa keys.


Fix:


-- Edit the sshd configuration file
# echo "PubkeyAcceptedKeyTypes=+ssh-rsa" >> /etc/ssh/sshd_config
-- Restart the sshd service
# systemctl restart sshd

