
Building a TiDB v5.4 Lab Environment with Vagrant + VirtualBox Virtual Machines

  • July 11, 2022

Author: Hacker_majy. Original source: https://tidb.net/blog/b197dddc

Lab environment configuration

Hardware: Intel i7 (8 cores) + 16 GB RAM + 1 TB SSD

Software: Oracle VM VirtualBox 6.1.26 + Vagrant 2.2.16

ISO: CentOS-7.9-x86_64-DVD-2009

TiDB version: TiDB v5.4

Number of VMs: 5

Per-VM configuration: CPU: 1 core, memory: 2 GB, disk: 50 GB

Node information for each VM:


| Component | VM name | Hostname | IP address | Count |
| --- | --- | --- | --- | --- |
| pd | tidb-pd | tidb-pd | 192.168.56.160 | 1 |
| alertmanager | tidb-pd | tidb-pd | 192.168.56.160 | |
| prometheus | tidb-pd | tidb-pd | 192.168.56.160 | |
| grafana | tidb-pd | tidb-pd | 192.168.56.160 | |
| tidb-server | tidb-server | tidb-tidb | 192.168.56.161 | 1 |
| tikv1 | tidb-tikv1 | tidb-tikv1 | 192.168.56.162 | 1 |
| tikv2 | tidb-tikv2 | tidb-tikv2 | 192.168.56.163 | 1 |
| tiflash | tidb-tiflash | tidb-tiflash | 192.168.56.164 | 1 |


Network port requirements for each component


Installing VirtualBox and Vagrant on Windows 10

Download links

Oracle VM VirtualBox download page: https://www.virtualbox.org/wiki/Downloads


Vagrant download page: https://www.vagrantup.com/downloads


Vagrant box search: https://app.vagrantup.com/boxes/search?utf8=%E2%9C%93&sort=downloads&provider=&q=centos7

Installing Oracle VM VirtualBox

  • Download the installer


VirtualBox is open-source virtualization software in the same category as VMware, used to create virtual machines on the current computer.


VirtualBox 6.1.34 download: https://download.virtualbox.org/virtualbox/6.1.34/VirtualBox-6.1.34a-150636-Win.exe


Oracle VM VirtualBox Extension Pack 6.1.34 download: https://download.virtualbox.org/virtualbox/6.1.34/Oracle_VM_VirtualBox_Extension_Pack-6.1.34.vbox-extpack


  • Install VirtualBox

  • Double-click the downloaded VirtualBox-6.1.34a-150636-Win.exe file to start the installation.

  • Click "Next".

  • Set the installation location and click "Next".

Click "Next".

Click "Yes".

Click "Install".

Click "Finish".

The installation is very simple; just follow the prompts to finish installing VirtualBox.

  • Install the VirtualBox extension pack

  • Double-click the downloaded "Oracle_VM_VirtualBox_Extension_Pack-6.1.34.vbox-extpack" file and follow the prompts to install it.

Changing the VirtualBox configuration

  • Change the default VM storage path

Open the menu "File" → "Preferences" and change the value of "Default Machine Folder" to g:\ovm_machine.

  • Add a host-only network adapter

  • Open the menu "File" → "Host Network Manager" and click "Create".
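The same host-only network can also be created from the command line with VBoxManage, which is handy for scripting. A minimal sketch, assuming the adapter name VirtualBox assigns on a Windows host (the name on your machine may differ; check `VBoxManage list hostonlyifs`):

```shell
# Create a new host-only interface
VBoxManage hostonlyif create

# Give it the gateway address of the 192.168.56.0/24 lab subnet used below
VBoxManage hostonlyif ipconfig "VirtualBox Host-Only Ethernet Adapter" \
    --ip 192.168.56.1 --netmask 255.255.255.0

# Verify the result
VBoxManage list hostonlyifs
```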

Installing Vagrant

Vagrant 2.2.19 for Windows download: https://releases.hashicorp.com/vagrant/2.2.19/vagrant_2.2.19_x86_64.msi


Double-click "vagrant_2.2.19_x86_64.msi" to start the installation.

Click "Next".

Tick the checkbox and click "Next".

Set the installation path and click "Next".

Click "Install".

Click "Finish" to complete the installation.

Setting the Path environment variable for Vagrant

Right-click "This PC", choose "Properties", then click "Advanced system settings".

In the dialog that opens, select the "Advanced" tab and click "Environment Variables".

Select the system variable "Path", click "Edit", and add "G:\HashiCorp\Vagrant\bin" as a new value.

Checking the installed Vagrant version

Open a cmd window and run vagrant -v.


Using Vagrant

Creating virtual machines with Vagrant

Finding a box image

Search for the box you need on the official site https://app.vagrantup.com/boxes/search, e.g. search for a centos7 box.


Installing a box online


```
PS G:\HashiCorp\vagrant_vbox_data\centos7_test> pwd

Path
----
G:\HashiCorp\vagrant_vbox_data\centos7_test

# Initialize the Vagrantfile
PS G:\HashiCorp\vagrant_vbox_data\centos7_test> vagrant init generic/centos7
A `Vagrantfile` has been placed in this directory. You are now
ready to `vagrant up` your first virtual environment! Please read
the comments in the Vagrantfile as well as documentation on
`vagrantup.com` for more information on using Vagrant.
PS G:\HashiCorp\vagrant_vbox_data\centos7_test> dir

    Directory: G:\HashiCorp\vagrant_vbox_data\centos7_test

Mode                 LastWriteTime         Length Name
----                 -------------         ------ ----
-a----        2022/06/04     15:16           3091 Vagrantfile

PS G:\HashiCorp\vagrant_vbox_data\centos7_test> vagrant up
```


Note: after Vagrant creates a VM, it creates a `vagrant` user by default with password `vagrant`; the `root` password is also `vagrant`.
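With those default credentials, logging in can be sketched as follows (2222 is Vagrant's default forwarded SSH port for a single machine; your port may differ if it was remapped):

```shell
# From the directory containing the Vagrantfile, log in as the vagrant user
vagrant ssh

# Or connect directly over SSH against the forwarded port (password: vagrant)
ssh -p 2222 vagrant@127.0.0.1

# Become root inside the guest (password: vagrant)
su - root
```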

Vagrant commands

Box management commands
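The original post left this command list empty; the commonly used `vagrant box` subcommands (all standard Vagrant CLI commands) are roughly:

```shell
vagrant box add generic/centos7    # download a box into the local cache
vagrant box list                   # list locally cached boxes
vagrant box outdated               # check whether the current box has a newer version
vagrant box update                 # update the box used by this environment
vagrant box remove generic/centos7 # delete a box from the local cache
```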

Shell scripts used to install TiDB

## File locations


Location of the Vagrantfile and the shell scripts:


```
    Directory: G:\HashiCorp\vagrant-master\TiDB-5.4

Mode                 LastWriteTime         Length Name
----                 -------------         ------ ----
d-----        2022/06/16     17:24                .vagrant
d-----        2022/06/16     17:12                shared_scripts
-a----        2022/06/16     17:29           1938 Vagrantfile

PS G:\HashiCorp\vagrant-master\TiDB-5.4> tree /F
Folder PATH listing for volume SSD
Volume serial number is E22C-4CB0
G:.
│ Vagrantfile
└─shared_scripts
        root_setup.sh
        setup.sh
        shell_init_os.sh
        tiup_deploy.sh
```


Notes:


  • The shared_scripts directory holds the OS-configuration scripts run when the VMs are initialized.

  • setup.sh: the shell script the Vagrantfile invokes for system configuration; it simply runs root_setup.sh.

  • root_setup.sh: sets the hostname and sshd configuration, then calls the shell_init_os.sh script.

  • shell_init_os.sh: configures the operating system before TiDB is installed.

  • tiup_deploy.sh: installs the TiUP tooling.

  • Vagrantfile: Vagrant's VM configuration file.

Contents of setup.sh

```
#!/bin/bash
sudo bash -c 'sh /vagrant_scripts/root_setup.sh'
```

Contents of root_setup.sh

```
#!/bin/bash
if [ -f /vagrant_config/install.env ]; then
  . /vagrant_config/install.env
fi

# Set the HTTP proxy
echo "******************************************************************************"
echo "set http proxy." `date`
echo "******************************************************************************"
if [ "$HTTP_PROXY" != "" ]; then
  echo "http_proxy=http://${HTTP_PROXY}" >> /etc/profile
  echo "https_proxy=http://${HTTP_PROXY}" >> /etc/profile
  echo "export http_proxy https_proxy" >> /etc/profile
  source /etc/profile
fi

# Install packages
yum install -y wget net-tools sshpass

# Set PS1
export LS_COLORS='no=00:fi=00:di=01;33;40:ln=01;36;40:'
export PS1="\[\033[01;35m\][\[\033[00m\]\[\033[01;32m\]\u@\h\[\033[00m\] \[\033[01;34m\]\w\[\033[00m\]\[\033[01;35m\]]\[\033[00m\]\$ "
echo "alias l='ls -lrtha'" >> /root/.bashrc
#echo "alias vi=vim" >> /root/.bashrc
source /root/.bashrc

# Change the root password
if [ "$ROOT_PASSWORD" == "" ]; then
  ROOT_PASSWORD="rootpasswd"
fi

echo "******************************************************************************"
echo "Set root password and change ownership." `date`
echo "******************************************************************************"
echo -e "${ROOT_PASSWORD}\n${ROOT_PASSWORD}" | passwd

# Set the timezone
timedatectl set-timezone Asia/Shanghai

# Stop firewalld
systemctl stop firewalld.service
systemctl disable firewalld.service

# Disable SELinux
sed -i "s?SELINUX=enforcing?SELINUX=disabled?" /etc/selinux/config
setenforce 0

# Configure sshd
echo "******************************************************************************"
echo "Set sshd service and disable firewalld service." `date`
echo "******************************************************************************"
sed -i "s?^#PermitRootLogin yes?PermitRootLogin yes?" /etc/ssh/sshd_config
sed -i "s?^#PasswordAuthentication yes?PasswordAuthentication yes?" /etc/ssh/sshd_config
sed -i "s?^PasswordAuthentication no?#PasswordAuthentication no?" /etc/ssh/sshd_config
sed -i '/StrictHostKeyChecking/s/^#//; /StrictHostKeyChecking/s/ask/no/' /etc/ssh/ssh_config
systemctl restart sshd.service

# Set the hostname
if [ "$PUBLIC_SUBNET" != "" ]; then
  IP_NET=`echo $PUBLIC_SUBNET | cut -d"." -f1,2,3`
  IPADDR=`ip addr | grep $IP_NET | awk -F"/" '{print $1}' | awk -F" " '{print $2}'`
  PRIF=`grep $IPADDR /vagrant_config/install.env | awk -F"_" '{print $1}'`
  if [ "$PRIF" != "" ]; then
    HOSTNAME=`grep $PRIF"_HOSTNAME" /vagrant_config/install.env | awk -F"=" '{print $2}'`
    hostnamectl set-hostname $HOSTNAME

    # Add an /etc/hosts entry
    CNT=`grep $IPADDR /etc/hosts | wc -l`
    if [ "$CNT" == "0" ]; then
      echo "$IPADDR $HOSTNAME" >> /etc/hosts
    fi
  fi
fi

# Apply the OS initialization settings
if [ -f /vagrant_scripts/shell_init_os.sh ]; then
  sh /vagrant_scripts/shell_init_os.sh
fi
```

Contents of shell_init_os.sh

```
#!/bin/bash
# 1. Detect and disable system swap
echo "vm.swappiness = 0" >> /etc/sysctl.conf
swapoff -a && swapon -a
sysctl -p

# 2. Detect and disable the firewall on the target machines
systemctl stop firewalld.service
systemctl disable firewalld.service

# Disable SELinux
sed -i "s?SELINUX=enforcing?SELINUX=disabled?" /etc/selinux/config
setenforce 0

# 3. Detect and install the NTP service
yum -y install numactl
yum -y install ntp ntpdate

# Configure NTP
systemctl status ntpd.service
systemctl start ntpd.service
systemctl enable ntpd.service
ntpstat

# 4. Check and tune OS parameters
# Disable THP and NUMA on the kernel command line
RESULT=`grep "GRUB_CMDLINE_LINUX" /etc/default/grub | grep "transparent_hugepage"`
if [ "$RESULT" == "" ]; then
  \cp /etc/default/grub /etc/default/grub.bak
  sed -i 's#quiet#quiet transparent_hugepage=never numa=off#g' /etc/default/grub
  grub2-mkconfig -o /boot/grub2/grub.cfg
  if [ -f /boot/efi/EFI/redhat/grub.cfg ]; then
    grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg
  fi
fi

# Disable transparent huge pages at runtime
if [ -d /sys/kernel/mm/transparent_hugepage ]; then
  thp_path=/sys/kernel/mm/transparent_hugepage
elif [ -d /sys/kernel/mm/redhat_transparent_hugepage ]; then
  thp_path=/sys/kernel/mm/redhat_transparent_hugepage
fi
echo "echo 'never' > ${thp_path}/enabled" >> /etc/rc.d/rc.local
echo "echo 'never' > ${thp_path}/defrag" >> /etc/rc.d/rc.local
echo 'never' > ${thp_path}/enabled
echo 'never' > ${thp_path}/defrag
chmod +x /etc/rc.d/rc.local

# CPU power-policy related: start the irqbalance service
systemctl start irqbalance
systemctl enable irqbalance

# Adjust sysctl parameters
echo "fs.file-max = 1000000" >> /etc/sysctl.conf
echo "net.core.somaxconn = 32768" >> /etc/sysctl.conf
echo "net.ipv4.tcp_tw_recycle = 0" >> /etc/sysctl.conf
echo "net.ipv4.tcp_syncookies = 0" >> /etc/sysctl.conf
echo "vm.overcommit_memory = 1" >> /etc/sysctl.conf
sysctl -p

# Configure limits.conf for the tidb user
cat << EOF >> /etc/security/limits.conf
tidb soft nofile 1000000
tidb hard nofile 1000000
tidb soft stack 32768
tidb hard stack 32768
EOF

# Create the tidb user
if [ "$TIDB_PASSWORD" == "" ]; then
  TIDB_PASSWORD="tidbpasswd"
fi
TIDB_PWD=`echo "$TIDB_PASSWORD" | openssl passwd -stdin`
useradd tidb -p "$TIDB_PWD" -m

# Grant the tidb user passwordless sudo
echo "tidb ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers
```

Contents of tiup_deploy.sh

```
#!/bin/bash
if [ -f /home/vagrant/Vagrantfile ]; then
  for siteip in `cat /home/vagrant/Vagrantfile | grep ":eth1 =>" | awk -F"\"" '{print $2}'`; do
    ping -c1 -W1 ${siteip} &> /dev/null
    if [ "$?" == "0" ]; then
      echo "$siteip is UP"
    else
      echo "$siteip is DOWN"
      exit -1
    fi
    if [ -f /root/.ssh/known_hosts ]; then
      # Double quotes so ${siteip} is expanded (single quotes would match the literal text)
      sed -i "/${siteip}/d" /root/.ssh/known_hosts
    fi
  done
fi

# Set up passwordless SSH
if [ "$ROOT_PASSWORD" == "" ]; then
  ROOT_PASSWORD="rootpasswd"
fi

rm -f ~/.ssh/id_rsa && ssh-keygen -q -t rsa -N '' -f ~/.ssh/id_rsa <<<y >/dev/null 2>&1
for ipaddr in `cat /home/vagrant/Vagrantfile | grep ":eth1 =>" | awk -F"\"" '{print $2}'`; do
  sshpass -p $ROOT_PASSWORD ssh-copy-id $ipaddr
done

# Download the TiUP installer
curl --proto '=https' --tlsv1.2 -sSf https://tiup-mirrors.pingcap.com/install.sh | sh

# Load the tiup environment variables
source ~/.bash_profile

# Install the TiUP cluster component
tiup cluster

# Update TiUP and the cluster component to the latest version
tiup update --self && tiup update cluster

# Show the TiUP cluster version
echo "view tiup cluster version"
tiup --binary cluster

# Generate the TiDB topology file
cat > ~/topology.yaml <<EOF
global:
  user: "tidb"
  ssh_port: 22
  deploy_dir: "/tidb-deploy"
  data_dir: "/tidb-data"
  arch: "amd64"

monitored:
  node_exporter_port: 9100
  blackbox_exporter_port: 9115

pd_servers:
  - host: 192.168.56.160

tidb_servers:
  - host: 192.168.56.161

tikv_servers:
  - host: 192.168.56.162
  - host: 192.168.56.163

tiflash_servers:
  - host: 192.168.56.164

monitoring_servers:
  - host: 192.168.56.160

grafana_servers:
  - host: 192.168.56.160

alertmanager_servers:
  - host: 192.168.56.160
EOF
```

Creating the Vagrantfile

Create a new Vagrantfile; the boxes array configures each VM's IP address, hostname, memory, and CPU.


```
boxes = [
    { :name => "tidb-pd",      :eth1 => "192.168.56.160", :mem => "2048", :cpu => "1", :sshport => 22230 },
    { :name => "tidb-server",  :eth1 => "192.168.56.161", :mem => "2048", :cpu => "1", :sshport => 22231 },
    { :name => "tidb-tikv1",   :eth1 => "192.168.56.162", :mem => "2048", :cpu => "1", :sshport => 22232 },
    { :name => "tidb-tikv2",   :eth1 => "192.168.56.163", :mem => "2048", :cpu => "1", :sshport => 22233 },
    { :name => "tidb-tiflash", :eth1 => "192.168.56.164", :mem => "2048", :cpu => "1", :sshport => 22234 }
]

Vagrant.configure(2) do |config|
    config.vm.box = "generic/centos7"
    Encoding.default_external = 'UTF-8'
    config.vm.synced_folder ".", "/home/vagrant"
    #config.vm.synced_folder "./config", "/vagrant_config"
    config.vm.synced_folder "./shared_scripts", "/vagrant_scripts"

    boxes.each do |opts|
        config.vm.define opts[:name] do |config|
            config.vm.hostname = opts[:name]
            config.vm.network "private_network", ip: opts[:eth1]
            config.vm.network "forwarded_port", guest: 22, host: 2222, id: "ssh", disabled: "true"
            config.vm.network "forwarded_port", guest: 22, host: opts[:sshport]
            #config.ssh.username = "root"
            #config.ssh.password = "root"
            #config.ssh.port = opts[:sshport]
            #config.ssh.insert_key = false
            #config.vm.synced_folder ".", "/vagrant", type: "rsync"
            config.vm.provider "vmware_fusion" do |v|
                v.vmx["memsize"] = opts[:mem]
                v.vmx["numvcpus"] = opts[:cpu]
            end
            config.vm.provider "virtualbox" do |v|
                v.memory = opts[:mem]
                v.cpus = opts[:cpu]
                v.name = opts[:name]
                v.customize ['storageattach', :id, '--storagectl', "IDE Controller",
                             '--port', '1', '--device', '0', '--type', 'dvddrive',
                             '--medium', 'G:\HashiCorp\repo_vbox\CentOS7\CentOS-7.9-x86_64-DVD-2009.iso']
            end
        end
    end

    config.vm.provision "shell", inline: <<-SHELL
        sh /vagrant_scripts/setup.sh
    SHELL
end
```

Creating the VMs with vagrant up

Run vagrant up in a PowerShell or cmd window to create the VMs. Below is the output for one of them.


```
G:\HashiCorp\vagrant_vbox_data\TiDB-5.4> vagrant up
==> tidb-tiflash: Importing base box 'generic/centos7'...
==> tidb-tiflash: Matching MAC address for NAT networking...
==> tidb-tiflash: Checking if box 'generic/centos7' version '3.6.10' is up to date...
==> tidb-tiflash: Setting the name of the VM: tidb-tiflash
==> tidb-tiflash: Clearing any previously set network interfaces...
==> tidb-tiflash: Preparing network interfaces based on configuration...
    tidb-tiflash: Adapter 1: nat
    tidb-tiflash: Adapter 2: hostonly
==> tidb-tiflash: Forwarding ports...
    tidb-tiflash: 22 (guest) => 22234 (host) (adapter 1)
==> tidb-tiflash: Running 'pre-boot' VM customizations...
==> tidb-tiflash: Booting VM...
==> tidb-tiflash: Waiting for machine to boot. This may take a few minutes...
    tidb-tiflash: SSH address: 127.0.0.1:22234
    tidb-tiflash: SSH username: vagrant
    tidb-tiflash: SSH auth method: private key
    tidb-tiflash:
    tidb-tiflash: Vagrant insecure key detected. Vagrant will automatically replace
    tidb-tiflash: this with a newly generated keypair for better security.
    tidb-tiflash:
    tidb-tiflash: Inserting generated public key within guest...
    tidb-tiflash: Removing insecure key from the guest if it's present...
    tidb-tiflash: Key inserted! Disconnecting and reconnecting using new SSH key...
==> tidb-tiflash: Machine booted and ready!
==> tidb-tiflash: Checking for guest additions in VM...
    tidb-tiflash: The guest additions on this VM do not match the installed version of
    tidb-tiflash: VirtualBox! In most cases this is fine, but in rare cases it can
    tidb-tiflash: prevent things such as shared folders from working properly. If you see
    tidb-tiflash: shared folder errors, please make sure the guest additions within the
    tidb-tiflash: virtual machine match the version of VirtualBox you have installed on
    tidb-tiflash: your host and reload your VM.
    tidb-tiflash:
    tidb-tiflash: Guest Additions Version: 5.2.44
    tidb-tiflash: VirtualBox Version: 6.1
==> tidb-tiflash: Setting hostname...
==> tidb-tiflash: Configuring and enabling network interfaces...
==> tidb-tiflash: Mounting shared folders...
    tidb-tiflash: /home/vagrant => G:/HashiCorp/vagrant_vbox_data/TiDB-5.4
    tidb-tiflash: /vagrant_scripts => G:/HashiCorp/vagrant_vbox_data/TiDB-5.4/shared_scripts
==> tidb-tiflash: Running provisioner: shell...
    tidb-tiflash: Running: inline script
    tidb-tiflash: ******************************************************************************
    tidb-tiflash: set http proxy. Thu Jun 16 09:48:05 UTC 2022
    tidb-tiflash: ******************************************************************************
    tidb-tiflash: Loaded plugins: fastestmirror
    tidb-tiflash: Determining fastest mirrors
    tidb-tiflash:  * base: mirrors.ustc.edu.cn
    tidb-tiflash:  * epel: mirrors.bfsu.edu.cn
    tidb-tiflash:  * extras: mirrors.ustc.edu.cn
    tidb-tiflash:  * updates: mirrors.ustc.edu.cn
    tidb-tiflash: Package wget-1.14-18.el7_6.1.x86_64 already installed and latest version
    tidb-tiflash: Package net-tools-2.0-0.25.20131004git.el7.x86_64 already installed and latest version
    tidb-tiflash: Resolving Dependencies
    tidb-tiflash: --> Running transaction check
    tidb-tiflash: ---> Package sshpass.x86_64 0:1.06-2.el7 will be installed
    tidb-tiflash: --> Finished Dependency Resolution
    tidb-tiflash:
    tidb-tiflash: Dependencies Resolved
    tidb-tiflash:
    tidb-tiflash: ================================================================================
    tidb-tiflash:  Package           Arch             Version              Repository        Size
    tidb-tiflash: ================================================================================
    tidb-tiflash: Installing:
    tidb-tiflash:  sshpass           x86_64           1.06-2.el7           extras            21 k
    tidb-tiflash:
    tidb-tiflash: Transaction Summary
    tidb-tiflash: ================================================================================
    tidb-tiflash: Install  1 Package
    tidb-tiflash:
    tidb-tiflash: Total download size: 21 k
    tidb-tiflash: Installed size: 38 k
    tidb-tiflash: Downloading packages:
    tidb-tiflash: Running transaction check
    tidb-tiflash: Running transaction test
    tidb-tiflash: Transaction test succeeded
    tidb-tiflash: Running transaction
    tidb-tiflash:   Installing : sshpass-1.06-2.el7.x86_64                                    1/1
    tidb-tiflash:   Verifying  : sshpass-1.06-2.el7.x86_64                                    1/1
    tidb-tiflash:
    tidb-tiflash: Installed:
    tidb-tiflash:   sshpass.x86_64 0:1.06-2.el7
    tidb-tiflash:
    tidb-tiflash: Complete!
    tidb-tiflash: ******************************************************************************
    tidb-tiflash: Set root password and change ownership. Thu Jun 16 09:49:49 UTC 2022
    tidb-tiflash: ******************************************************************************
    tidb-tiflash: New password: BAD PASSWORD: The password contains the user name in some form
    tidb-tiflash: Changing password for user root.
    tidb-tiflash: passwd: all authentication tokens updated successfully.
    tidb-tiflash: Retype new password: Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
    tidb-tiflash: Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
    tidb-tiflash: ******************************************************************************
    tidb-tiflash: Set sshd service and disable firewalld service. Thu Jun 16 17:49:50 CST 2022
    tidb-tiflash: ******************************************************************************
    tidb-tiflash: net.ipv6.conf.all.disable_ipv6 = 1
    tidb-tiflash: vm.swappiness = 0
    tidb-tiflash: Loaded plugins: fastestmirror
    tidb-tiflash: Loading mirror speeds from cached hostfile
    tidb-tiflash:  * base: mirrors.ustc.edu.cn
    tidb-tiflash:  * epel: mirrors.bfsu.edu.cn
    tidb-tiflash:  * extras: mirrors.ustc.edu.cn
    tidb-tiflash:  * updates: mirrors.ustc.edu.cn
    tidb-tiflash: Resolving Dependencies
    tidb-tiflash: --> Running transaction check
    tidb-tiflash: ---> Package numactl.x86_64 0:2.0.12-5.el7 will be installed
    tidb-tiflash: --> Finished Dependency Resolution
    tidb-tiflash:
    tidb-tiflash: Dependencies Resolved
    tidb-tiflash:
    tidb-tiflash: ================================================================================
    tidb-tiflash:  Package           Arch             Version                Repository      Size
    tidb-tiflash: ================================================================================
    tidb-tiflash: Installing:
    tidb-tiflash:  numactl           x86_64           2.0.12-5.el7           base            66 k
    tidb-tiflash:
    tidb-tiflash: Transaction Summary
    tidb-tiflash: ================================================================================
    tidb-tiflash: Install  1 Package
    tidb-tiflash:
    tidb-tiflash: Total download size: 66 k
    tidb-tiflash: Installed size: 141 k
    tidb-tiflash: Downloading packages:
    tidb-tiflash: Running transaction check
    tidb-tiflash: Running transaction test
    tidb-tiflash: Transaction test succeeded
    tidb-tiflash: Running transaction
    tidb-tiflash:   Installing : numactl-2.0.12-5.el7.x86_64                                  1/1
    tidb-tiflash:   Verifying  : numactl-2.0.12-5.el7.x86_64                                  1/1
    tidb-tiflash:
    tidb-tiflash: Installed:
    tidb-tiflash:   numactl.x86_64 0:2.0.12-5.el7
    tidb-tiflash:
    tidb-tiflash: Complete!
    tidb-tiflash: Loaded plugins: fastestmirror
    tidb-tiflash: Loading mirror speeds from cached hostfile
    tidb-tiflash:  * base: mirrors.ustc.edu.cn
    tidb-tiflash:  * epel: mirrors.bfsu.edu.cn
    tidb-tiflash:  * extras: mirrors.ustc.edu.cn
    tidb-tiflash:  * updates: mirrors.ustc.edu.cn
    tidb-tiflash: Resolving Dependencies
    tidb-tiflash: --> Running transaction check
    tidb-tiflash: ---> Package ntp.x86_64 0:4.2.6p5-29.el7.centos.2 will be installed
    tidb-tiflash: --> Processing Dependency: libopts.so.25()(64bit) for package: ntp-4.2.6p5-29.el7.centos.2.x86_64
    tidb-tiflash: ---> Package ntpdate.x86_64 0:4.2.6p5-29.el7.centos.2 will be installed
    tidb-tiflash: --> Running transaction check
    tidb-tiflash: ---> Package autogen-libopts.x86_64 0:5.18-5.el7 will be installed
    tidb-tiflash: --> Finished Dependency Resolution
    tidb-tiflash:
    tidb-tiflash: Dependencies Resolved
    tidb-tiflash:
    tidb-tiflash: ================================================================================
    tidb-tiflash:  Package              Arch        Version                       Repository
    tidb-tiflash:                                                                            Size
    tidb-tiflash: ================================================================================
    tidb-tiflash: Installing:
    tidb-tiflash:  ntp                  x86_64      4.2.6p5-29.el7.centos.2       base      549 k
    tidb-tiflash:  ntpdate              x86_64      4.2.6p5-29.el7.centos.2       base       87 k
    tidb-tiflash: Installing for dependencies:
    tidb-tiflash:  autogen-libopts      x86_64      5.18-5.el7                    base       66 k
    tidb-tiflash:
    tidb-tiflash: Transaction Summary
    tidb-tiflash: ================================================================================
    tidb-tiflash: Install  2 Packages (+1 Dependent package)
    tidb-tiflash:
    tidb-tiflash: Total download size: 701 k
    tidb-tiflash: Installed size: 1.6 M
    tidb-tiflash: Downloading packages:
    tidb-tiflash: --------------------------------------------------------------------------------
    tidb-tiflash: Total                                              309 kB/s | 701 kB  00:02
    tidb-tiflash: Running transaction check
    tidb-tiflash: Running transaction test
    tidb-tiflash: Transaction test succeeded
    tidb-tiflash: Running transaction
    tidb-tiflash:   Installing : autogen-libopts-5.18-5.el7.x86_64                            1/3
    tidb-tiflash:   Installing : ntpdate-4.2.6p5-29.el7.centos.2.x86_64                       2/3
    tidb-tiflash:   Installing : ntp-4.2.6p5-29.el7.centos.2.x86_64                           3/3
    tidb-tiflash:   Verifying  : ntpdate-4.2.6p5-29.el7.centos.2.x86_64                       1/3
    tidb-tiflash:   Verifying  : ntp-4.2.6p5-29.el7.centos.2.x86_64                           2/3
    tidb-tiflash:   Verifying  : autogen-libopts-5.18-5.el7.x86_64                            3/3
    tidb-tiflash:
    tidb-tiflash: Installed:
    tidb-tiflash:   ntp.x86_64 0:4.2.6p5-29.el7.centos.2 ntpdate.x86_64 0:4.2.6p5-29.el7.centos.2
    tidb-tiflash:
    tidb-tiflash: Dependency Installed:
    tidb-tiflash:   autogen-libopts.x86_64 0:5.18-5.el7
    tidb-tiflash:
    tidb-tiflash: Complete!
    tidb-tiflash: ● ntpd.service - Network Time Service
    tidb-tiflash:    Loaded: loaded (/usr/lib/systemd/system/ntpd.service; disabled; vendor preset: disabled)
    tidb-tiflash:    Active: inactive (dead)
    tidb-tiflash: Created symlink from /etc/systemd/system/multi-user.target.wants/ntpd.service to /usr/lib/systemd/system/ntpd.service.
    tidb-tiflash: unsynchronised
    tidb-tiflash:   time server re-starting
    tidb-tiflash:    polling server every 8 s
    tidb-tiflash: Generating grub configuration file ...
    tidb-tiflash: Found linux image: /boot/vmlinuz-3.10.0-1160.59.1.el7.x86_64
    tidb-tiflash: Found initrd image: /boot/initramfs-3.10.0-1160.59.1.el7.x86_64.img
    tidb-tiflash: Found linux image: /boot/vmlinuz-0-rescue-319af63f75e64c3395b38885010692bf
    tidb-tiflash: Found initrd image: /boot/initramfs-0-rescue-319af63f75e64c3395b38885010692bf.img
    tidb-tiflash: done
    tidb-tiflash: net.ipv6.conf.all.disable_ipv6 = 1
    tidb-tiflash: vm.swappiness = 0
    tidb-tiflash: fs.file-max = 1000000
    tidb-tiflash: net.core.somaxconn = 32768
    tidb-tiflash: net.ipv4.tcp_tw_recycle = 0
    tidb-tiflash: net.ipv4.tcp_syncookies = 0
    tidb-tiflash: vm.overcommit_memory = 1
```

Logging in to the tidb-pd VM and installing the TiUP tool

Log in as the root user and run the tiup_deploy.sh script to install TiUP.


```
[root@tidb-pd shared_scripts]$ sh tiup_deploy.sh
192.168.56.160 is UP
192.168.56.161 is UP
192.168.56.162 is UP
192.168.56.163 is UP
192.168.56.164 is UP
/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh '192.168.56.160'"
and check to make sure that only the key(s) you wanted were added.
/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh '192.168.56.161'"
and check to make sure that only the key(s) you wanted were added.
/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh '192.168.56.162'"
and check to make sure that only the key(s) you wanted were added.
/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh '192.168.56.163'"
and check to make sure that only the key(s) you wanted were added.
/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh '192.168.56.164'"
and check to make sure that only the key(s) you wanted were added.
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 6968k  100 6968k    0     0  1514k      0  0:00:04  0:00:04 --:--:-- 1514k
WARN: adding root certificate via internet: https://tiup-mirrors.pingcap.com/root.json
You can revoke this by remove /root/.tiup/bin/7b8e153f2e2d0928.root.json
Successfully set mirror to https://tiup-mirrors.pingcap.com
Detected shell: bash
Shell profile:  /root/.bash_profile
/root/.bash_profile has been modified to add tiup to PATH
open a new terminal or source /root/.bash_profile to use it
Installed path: /root/.tiup/bin/tiup
===============================================
Have a try:     tiup playground
===============================================
tiup is checking updates for component cluster ...timeout!
The component `cluster` version  is not installed; downloading from repository.
download https://tiup-mirrors.pingcap.com/cluster-v1.10.2-linux-amd64.tar.gz 8.28 MiB / 8.28 MiB 100.00% 2.48 MiB/s
Starting component `cluster`: /root/.tiup/components/cluster/v1.10.2/tiup-cluster
Deploy a TiDB cluster for production

Usage:
  tiup cluster [command]

Available Commands:
  check       Perform preflight checks for the cluster.
  deploy      Deploy a cluster for production
  start       Start a TiDB cluster
  stop        Stop a TiDB cluster
  restart     Restart a TiDB cluster
  scale-in    Scale in a TiDB cluster
  scale-out   Scale out a TiDB cluster
  destroy     Destroy a specified cluster
  clean       (EXPERIMENTAL) Cleanup a specified cluster
  upgrade     Upgrade a specified TiDB cluster
  display     Display information of a TiDB cluster
  prune       Destroy and remove instances that is in tombstone state
  list        List all clusters
  audit       Show audit log of cluster operation
  import      Import an exist TiDB cluster from TiDB-Ansible
  edit-config Edit TiDB cluster config
  show-config Show TiDB cluster config
  reload      Reload a TiDB cluster's config and restart if needed
  patch       Replace the remote package with a specified package and restart the service
  rename      Rename the cluster
  enable      Enable a TiDB cluster automatically at boot
  disable     Disable automatic enabling of TiDB clusters at boot
  replay      Replay previous operation and skip successed steps
  template    Print topology template
  tls         Enable/Disable TLS between TiDB components
  meta        backup/restore meta information
  help        Help about any command
  completion  Generate the autocompletion script for the specified shell

Flags:
  -c, --concurrency int     max number of parallel tasks allowed (default 5)
      --format string       (EXPERIMENTAL) The format of output, available values are [default, json] (default "default")
  -h, --help                help for tiup
      --ssh string          (EXPERIMENTAL) The executor type: 'builtin', 'system', 'none'.
      --ssh-timeout uint    Timeout in seconds to connect host via SSH, ignored for operations that don't need an SSH connection. (default 5)
  -v, --version             version for tiup
      --wait-timeout uint   Timeout in seconds to wait for an operation to complete, ignored for operations that don't fit. (default 120)
  -y, --yes                 Skip all confirmations and assumes 'yes'

Use "tiup cluster help [command]" for more information about a command.
download https://tiup-mirrors.pingcap.com/tiup-v1.10.2-linux-amd64.tar.gz 6.81 MiB / 6.81 MiB 100.00% 3.53 MiB/s
Updated successfully!
component cluster version v1.10.2 is already installed
Updated successfully!
/root/.tiup/components/cluster/v1.10.2/tiup-cluster
```

Initializing the cluster topology file

Running the tiup_deploy.sh script generated the cluster topology file /home/tidb/topology.yaml.


```
[tidb@tidb-pd ~]$ cat topology.yaml
global:
  user: "tidb"
  ssh_port: 22
  deploy_dir: "/tidb-deploy"
  data_dir: "/tidb-data"
  arch: "amd64"
monitored:
  node_exporter_port: 9100
  blackbox_exporter_port: 9115
pd_servers:
  - host: 192.168.56.160
tidb_servers:
  - host: 192.168.56.161
tikv_servers:
  - host: 192.168.56.162
  - host: 192.168.56.163
tiflash_servers:
  - host: 192.168.56.164
monitoring_servers:
  - host: 192.168.56.160
grafana_servers:
  - host: 192.168.56.160
alertmanager_servers:
  - host: 192.168.56.160
```

Running the deployment commands

  • Check the cluster for potential risks:

  • Automatically repair the detected potential risks:

  • Deploy the TiDB cluster:
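The command snippets for these three steps did not survive in the original post. Following the standard TiUP workflow, with the cluster name, version, and topology file taken from the explanation that follows, they would look roughly like:

```shell
# Check the cluster for potential risks
tiup cluster check ./topology.yaml --user root -p

# Automatically repair the detected potential risks
tiup cluster check ./topology.yaml --apply --user root -p

# Deploy the TiDB cluster
tiup cluster deploy tidb-test v5.4.1 ./topology.yaml --user root -p
```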


以上部署示例中:


  • tidb-test is the name of the cluster being deployed.

  • v5.4.1 is the version of the cluster being deployed; run tiup list tidb to see the latest versions available through TiUP.

  • topology.yaml is the initialization configuration file.

  • --user root means the deployment logs in to the target machines as the root user; this user needs SSH access to the target machines and sudo privileges on them. Any other user with SSH and sudo privileges can also be used.

  • [-i] and [-p] are optional. If passwordless login to the target machines is already configured, neither is needed; otherwise choose one of them: [-i] specifies the private key of the root user (or the user given by --user) that can log in to the target machines, and [-p] prompts interactively for that user's password.
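The check, fix, and deploy commands appear as screenshots in the original post. As a hedged sketch, using the cluster name tidb-test, version v5.4.1, and topology file described above (and assuming topology.yaml is in the current directory), they would typically look like:

```shell
# Check the cluster for potential risks:
tiup cluster check ./topology.yaml --user root -p

# Automatically fix the risks that can be fixed:
tiup cluster check ./topology.yaml --apply --user root -p

# Deploy the TiDB cluster:
tiup cluster deploy tidb-test v5.4.1 ./topology.yaml --user root -p
```

Replace `-p` with `-i <private-key-path>` if you prefer key-based login, or drop it entirely if passwordless SSH is already configured.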


The log is expected to end with the keywords Deployed cluster `tidb-test` successfully, indicating that the deployment succeeded.


## View the clusters managed by TiUP


  # tiup cluster list


TiUP can manage multiple TiDB clusters. This command lists all clusters currently managed by TiUP cluster, including each cluster's name, deployment user, version, and key information.


## Check the deployed TiDB cluster


  # tiup cluster display tidb-test


## Start the cluster


Safe start is a new startup method introduced in TiUP cluster v1.9.0. Starting the database this way improves its security and is the recommended method.


With a safe start, TiUP automatically generates a password for the TiDB root user and returns it on the command line.


Note:


  • After a safe start, you cannot log in to the database as root without a password; record the password returned on the command line for later operations.

  • The automatically generated password is returned only once. If you fail to record it or forget it, refer to the documentation on resetting a forgotten root password.
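If you did record the generated password and later want to replace it with one of your own, you can change it with MySQL-compatible SQL once connected. This is a sketch: the host and port follow this post's topology (tidb-server on 192.168.56.161:4000), and 'MyNewPass' is a placeholder:

```shell
# Log in with the generated password, then set a new one.
# TiDB supports the MySQL-compatible ALTER USER statement.
mysql -h 192.168.56.161 -P 4000 -u root -p \
  -e "ALTER USER 'root'@'%' IDENTIFIED BY 'MyNewPass';"
```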


Method 1: safe start


  # tiup cluster start tidb-test --init
  tiup is checking updates for component cluster ...
  Starting component `cluster`: /root/.tiup/components/cluster/v1.10.2/tiup-cluster start tidb-test --init
  Starting cluster tidb-test...
  + [ Serial ] - SSHKeySet: privateKey=/root/.tiup/storage/cluster/clusters/tidb-test/ssh/id_rsa, publicKey=/root/.tiup/storage/cluster/clusters/tidb-test/ssh/id_rsa.pub
  + [Parallel] - UserSSH: user=tidb, host=192.168.56.164
  + [Parallel] - UserSSH: user=tidb, host=192.168.56.160
  + [Parallel] - UserSSH: user=tidb, host=192.168.56.160
  + [Parallel] - UserSSH: user=tidb, host=192.168.56.160
  + [Parallel] - UserSSH: user=tidb, host=192.168.56.160
  + [Parallel] - UserSSH: user=tidb, host=192.168.56.162
  + [Parallel] - UserSSH: user=tidb, host=192.168.56.163
  + [Parallel] - UserSSH: user=tidb, host=192.168.56.161
  + [ Serial ] - StartCluster
  Starting component pd
          Starting instance 192.168.56.160:2379
          Start instance 192.168.56.160:2379 success
  Starting component tikv
          Starting instance 192.168.56.163:20160
          Starting instance 192.168.56.162:20160
          Start instance 192.168.56.163:20160 success
          Start instance 192.168.56.162:20160 success
  Starting component tidb
          Starting instance 192.168.56.161:4000
          Start instance 192.168.56.161:4000 success
  Starting component tiflash
          Starting instance 192.168.56.164:9000
          Start instance 192.168.56.164:9000 success
  Starting component prometheus
          Starting instance 192.168.56.160:9090
          Start instance 192.168.56.160:9090 success
  Starting component grafana
          Starting instance 192.168.56.160:3000
          Start instance 192.168.56.160:3000 success
  Starting component alertmanager
          Starting instance 192.168.56.160:9093
          Start instance 192.168.56.160:9093 success
  Starting component node_exporter
          Starting instance 192.168.56.163
          Starting instance 192.168.56.161
          Starting instance 192.168.56.164
          Starting instance 192.168.56.160
          Starting instance 192.168.56.162
          Start 192.168.56.161 success
          Start 192.168.56.162 success
          Start 192.168.56.163 success
          Start 192.168.56.160 success
          Start 192.168.56.164 success
  Starting component blackbox_exporter
          Starting instance 192.168.56.163
          Starting instance 192.168.56.161
          Starting instance 192.168.56.164
          Starting instance 192.168.56.160
          Starting instance 192.168.56.162
          Start 192.168.56.163 success
          Start 192.168.56.162 success
          Start 192.168.56.161 success
          Start 192.168.56.164 success
          Start 192.168.56.160 success
  + [ Serial ] - UpdateTopology: cluster=tidb-test
  Started cluster `tidb-test` successfully
  The root password of TiDB database has been changed.
  The new password is: '45s6W&_w9!1KcB^aH8'.
  Copy and record it to somewhere safe, it is only displayed once, and will not be stored.
  The generated password can NOT be get and shown again.


The expected result is as follows, indicating that the start succeeded:


  Started cluster `tidb-test` successfully.
  The root password of TiDB database has been changed.
  The new password is: 'y_+3Hwp=*AWz8971s6'.
  Copy and record it to somewhere safe, it is only displayed once, and will not be stored.
  The generated password can NOT be got again in future.


Method 2: standard start


  # tiup cluster start tidb-test


The expected output contains Started cluster `tidb-test` successfully, indicating that the start succeeded. After a standard start, you can log in to the database as the root user without a password.


## Verify the cluster status


  # tiup cluster display tidb-test
  tiup is checking updates for component cluster ...
  Starting component `cluster`: /root/.tiup/components/cluster/v1.10.2/tiup-cluster display tidb-test
  Cluster type:       tidb
  Cluster name:       tidb-test
  Cluster version:    v5.4.1
  Deploy user:        tidb
  SSH type:           builtin
  Dashboard URL:      http://192.168.56.160:2379/dashboard
  Grafana URL:        http://192.168.56.160:3000
  ID                    Role          Host            Ports                            OS/Arch       Status   Data Dir                      Deploy Dir
  --                    ----          ----            -----                            -------       ------   --------                      ----------
  192.168.56.160:9093   alertmanager  192.168.56.160  9093/9094                        linux/x86_64  Up       /tidb-data/alertmanager-9093  /tidb-deploy/alertmanager-9093
  192.168.56.160:3000   grafana       192.168.56.160  3000                             linux/x86_64  Up       -                             /tidb-deploy/grafana-3000
  192.168.56.160:2379   pd            192.168.56.160  2379/2380                        linux/x86_64  Up|L|UI  /tidb-data/pd-2379            /tidb-deploy/pd-2379
  192.168.56.160:9090   prometheus    192.168.56.160  9090/12020                       linux/x86_64  Up       /tidb-data/prometheus-9090    /tidb-deploy/prometheus-9090
  192.168.56.161:4000   tidb          192.168.56.161  4000/10080                       linux/x86_64  Up       -                             /tidb-deploy/tidb-4000
  192.168.56.164:9000   tiflash       192.168.56.164  9000/8123/3930/20170/20292/8234  linux/x86_64  Up       /tidb-data/tiflash-9000       /tidb-deploy/tiflash-9000
  192.168.56.162:20160  tikv          192.168.56.162  20160/20180                      linux/x86_64  Up       /tidb-data/tikv-20160         /tidb-deploy/tikv-20160
  192.168.56.163:20160  tikv          192.168.56.163  20160/20180                      linux/x86_64  Up       /tidb-data/tikv-20160         /tidb-deploy/tikv-20160
  Total nodes: 8



Expected result: if the Status of each node is `Up`, the cluster is running normally.
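Once every node reports Up, a quick end-to-end check is to connect with a MySQL client. This is a sketch assuming the mysql client is installed on the host and using the tidb-server address from this post's topology:

```shell
# Interactive login (use the password printed by safe start;
# after a standard start, root has no password):
mysql -h 192.168.56.161 -P 4000 -u root -p

# Non-interactive sanity query against the cluster:
mysql -h 192.168.56.161 -P 4000 -u root -p -e "SELECT tidb_version();"
```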


Reference: https://docs.pingcap.com/zh/tidb/stable/check-before-deployment


