Preface
The industry has a baseline expectation for the high availability of a system. In short, system availability is usually measured in "nines": the percentage of total time for which the system guarantees service. For example, to reach 99.99% availability, the system's total downtime over a whole year must not exceed about 52 minutes.
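As a quick sanity check, the yearly downtime budget for any number of nines can be computed directly; a minimal shell sketch (assuming a 365-day year):
# Minutes of allowed downtime per year at 99.99% availability:
# 525600 minutes/year x 0.01% = 52.56 minutes
echo "scale=2; 525600 * (100 - 99.99) / 100" | bc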
System High-Availability Architecture
Now that we want a highly available system, what exactly should the architecture look like? The target architecture can be simplified into the figure below.
Server Planning
My machine's resources are limited, so I build the high-availability environment on the five servers listed below; you can extend it to more servers by following exactly the same steps.
| Hostname  | IP Address      | Installed Services                                   |
| --------- | --------------- | ---------------------------------------------------- |
| binghe151 | 192.168.175.151 | Mycat、Zookeeper、MySQL、HAProxy、Keepalived、Xinetd |
| binghe152 | 192.168.175.152 | Zookeeper、MySQL |
| binghe153 | 192.168.175.153 | Zookeeper、MySQL |
| binghe154 | 192.168.175.154 | Mycat、MySQL、HAProxy、Keepalived、Xinetd |
| binghe155 | 192.168.175.155 | MySQL |
Note: HAProxy and Keepalived are best deployed on the same servers as Mycat.
Install MySQL
You can follow the WeChat official account 【冰河技术】 and refer to the article 《MySQL之——源码编译MySQL8.x+升级gcc+升级cmake(亲测完整版)》.
Install JDK
Both Mycat and Zookeeper need a JDK at runtime, so we have to install a JDK on every server.
Here I take binghe151 as the example; the installation on the other servers is exactly the same. The steps are as follows.
(1) Download JDK 1.8 from the official site: https://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html.
Note: the package I downloaded is jdk-8u212-linux-x64.tar.gz; if the JDK has been updated since, just download the corresponding version.
(2) Upload the downloaded jdk-8u212-linux-x64.tar.gz package to the /usr/local/src directory on binghe151.
(3) Extract the jdk-8u212-linux-x64.tar.gz file:
tar -zxvf jdk-8u212-linux-x64.tar.gz
(4) Move the extracted jdk1.8.0_212 directory to /usr/local on binghe151:
mv jdk1.8.0_212/ /usr/local/
(5) Configure the JDK environment variables:
vim /etc/profile
JAVA_HOME=/usr/local/jdk1.8.0_212
CLASS_PATH=.:$JAVA_HOME/lib
PATH=$JAVA_HOME/bin:$PATH
export JAVA_HOME CLASS_PATH PATH
Reload the profile so the environment variables take effect:
source /etc/profile
(6) Check the JDK version:
[root@binghe151 ~]# java -version
java version "1.8.0_212"
Java(TM) SE Runtime Environment (build 1.8.0_212-b10)
Java HotSpot(TM) 64-Bit Server VM (build 25.212-b10, mixed mode)
The version information is printed correctly, so the JDK is installed successfully.
Install Mycat
Download the Mycat 1.6.7.4 Release package, extract it into /usr/local/mycat on the server, and configure Mycat's environment variables. Then edit Mycat's configuration files; the final configuration is shown below. First, schema.xml:
<?xml version="1.0"?>
<!DOCTYPE mycat:schema SYSTEM "schema.dtd">
<mycat:schema xmlns:mycat="http://io.mycat/">
<schema name="shop" checkSQLschema="false" sqlMaxLimit="1000">
<!--<table name="order_master" primaryKey="order_id" dataNode = "ordb"/>-->
<table name="order_master" primaryKey="order_id" dataNode = "orderdb01,orderdb02,orderdb03,orderdb04" rule="order_master" autoIncrement="true">
<childTable name="order_detail" primaryKey="order_detail_id" joinKey="order_id" parentKey="order_id" autoIncrement="true"/>
</table>
<table name="order_cart" primaryKey="cart_id" dataNode = "ordb"/>
<table name="order_customer_addr" primaryKey="customer_addr_id" dataNode = "ordb"/>
<table name="region_info" primaryKey="region_id" dataNode = "ordb,prodb,custdb" type="global"/>
<table name="serial" primaryKey="id" dataNode = "ordb"/>
<table name="shipping_info" primaryKey="ship_id" dataNode = "ordb"/>
<table name="warehouse_info" primaryKey="w_id" dataNode = "ordb"/>
<table name="warehouse_proudct" primaryKey="wp_id" dataNode = "ordb"/>
<table name="product_brand_info" primaryKey="brand_id" dataNode = "prodb"/>
<table name="product_category" primaryKey="category_id" dataNode = "prodb"/>
<table name="product_comment" primaryKey="comment_id" dataNode = "prodb"/>
<table name="product_info" primaryKey="product_id" dataNode = "prodb"/>
<table name="product_pic_info" primaryKey="product_pic_id" dataNode = "prodb"/>
<table name="product_supplier_info" primaryKey="supplier_id" dataNode = "prodb"/>
<table name="customer_balance_log" primaryKey="balance_id" dataNode = "custdb"/>
<table name="customer_inf" primaryKey="customer_inf_id" dataNode = "custdb"/>
<table name="customer_level_inf" primaryKey="customer_level" dataNode = "custdb"/>
<table name="customer_login" primaryKey="customer_id" dataNode = "custdb"/>
<table name="customer_login_log" primaryKey="login_id" dataNode = "custdb"/>
<table name="customer_point_log" primaryKey="point_id" dataNode = "custdb"/>
</schema>
<dataNode name="mycat" dataHost="binghe151" database="mycat" />
<dataNode name="ordb" dataHost="binghe152" database="order_db" />
<dataNode name="prodb" dataHost="binghe153" database="product_db" />
<dataNode name="custdb" dataHost="binghe154" database="customer_db" />
<dataNode name="orderdb01" dataHost="binghe152" database="orderdb01" />
<dataNode name="orderdb02" dataHost="binghe152" database="orderdb02" />
<dataNode name="orderdb03" dataHost="binghe153" database="orderdb03" />
<dataNode name="orderdb04" dataHost="binghe153" database="orderdb04" />
<dataHost name="binghe151" maxCon="1000" minCon="10" balance="1"
writeType="0" dbType="mysql" dbDriver="native" switchType="1" slaveThreshold="100">
<heartbeat>select user()</heartbeat>
<writeHost host="binghe51" url="192.168.175.151:3306" user="mycat" password="mycat"/>
</dataHost>
<dataHost name="binghe152" maxCon="1000" minCon="10" balance="1"
writeType="0" dbType="mysql" dbDriver="native" switchType="1" slaveThreshold="100">
<heartbeat>select user()</heartbeat>
<writeHost host="binghe52" url="192.168.175.152:3306" user="mycat" password="mycat"/>
</dataHost>
<dataHost name="binghe153" maxCon="1000" minCon="10" balance="1"
writeType="0" dbType="mysql" dbDriver="native" switchType="1" slaveThreshold="100">
<heartbeat>select user()</heartbeat>
<writeHost host="binghe53" url="192.168.175.153:3306" user="mycat" password="mycat"/>
</dataHost>
<dataHost name="binghe154" maxCon="1000" minCon="10" balance="1"
writeType="0" dbType="mysql" dbDriver="native" switchType="1" slaveThreshold="100">
<heartbeat>select user()</heartbeat>
<writeHost host="binghe54" url="192.168.175.154:3306" user="mycat" password="mycat"/>
</dataHost>
</mycat:schema>
server.xml:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE mycat:server SYSTEM "server.dtd">
<mycat:server xmlns:mycat="http://io.mycat/">
<system>
<property name="useHandshakeV10">1</property>
<property name="defaultSqlParser">druidparser</property>
<property name="serverPort">3307</property>
<property name="managerPort">3308</property>
<property name="nonePasswordLogin">0</property>
<property name="bindIp">0.0.0.0</property>
<property name="charset">utf8mb4</property>
<property name="frontWriteQueueSize">2048</property>
<property name="txIsolation">2</property>
<property name="processors">2</property>
<property name="idleTimeout">1800000</property>
<property name="sqlExecuteTimeout">300</property>
<property name="useSqlStat">0</property>
<property name="useGlobleTableCheck">0</property>
<property name="sequenceHandlerType">1</property>
<property name="defaultMaxLimit">1000</property>
<property name="maxPacketSize">104857600</property>
<property name="sqlInterceptor">
io.mycat.server.interceptor.impl.StatisticsSqlInterceptor
</property>
<property name="sqlInterceptorType">
UPDATE,DELETE,INSERT
</property>
<property name="sqlInterceptorFile">/tmp/sql.txt</property>
</system>
<firewall>
<whitehost>
<host user="mycat" host="192.168.175.151"></host>
</whitehost>
<blacklist check="true">
<property name="noneBaseStatementAllow">true</property>
<property name="deleteWhereNoneCheck">true</property>
</blacklist>
</firewall>
<user name="mycat" defaultAccount="true">
<property name="usingDecrypt">1</property>
<property name="password">cTwf23RrpBCEmalp/nx0BAKenNhvNs2NSr9nYiMzHADeEDEfwVWlI6hBDccJjNBJqJxnunHFp5ae63PPnMfGYA==</property>
<property name="schemas">shop</property>
</user>
</mycat:server>
rule.xml:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE mycat:rule SYSTEM "rule.dtd">
<mycat:rule xmlns:mycat="http://io.mycat/">
<tableRule name="order_master">
<rule>
<columns>customer_id</columns>
<algorithm>mod-long</algorithm>
</rule>
</tableRule>
<function name="mod-long" class="io.mycat.route.function.PartitionByMod">
<property name="count">4</property>
</function>
</mycat:rule>
sequence_db_conf.properties:
#sequence stored in datanode
GLOBAL=mycat
ORDER_MASTER=mycat
ORDER_DETAIL=mycat
These Mycat settings are for reference only; you don't have to copy them verbatim, just configure Mycat according to your own business needs. The focus of this article is building the high-availability environment for Mycat.
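To make the mod-long rule above concrete: order_master rows are routed by customer_id modulo 4, in the order the dataNodes are listed (orderdb01 through orderdb04). A tiny sketch of that arithmetic (the mapping is my reading of the rule, shown purely for illustration):
# PartitionByMod: shard index = customer_id % count, with count=4 per rule.xml
for customer_id in 1 2 3 4 10; do
  echo "customer_id=$customer_id -> orderdb0$(( customer_id % 4 + 1 ))"
done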
Create the MySQL account that Mycat uses to connect to MySQL:
CREATE USER 'mycat'@'192.168.175.%' IDENTIFIED BY 'mycat';
ALTER USER 'mycat'@'192.168.175.%' IDENTIFIED WITH mysql_native_password BY 'mycat';
GRANT SELECT, INSERT, UPDATE, DELETE,EXECUTE ON *.* TO 'mycat'@'192.168.175.%';
FLUSH PRIVILEGES;
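A quick smoke test of the new account never hurts; a hypothetical check from binghe151 (select user() is the same statement the dataHost heartbeats use):
# The mycat account should be able to log in from any 192.168.175.% host
mysql -umycat -pmycat -h192.168.175.151 -P3306 -e "select user();"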
Install the Zookeeper Cluster
With the JDK installed and configured, it is time to set up the Zookeeper cluster. According to the server plan, the Zookeeper cluster runs on binghe151, binghe152, and binghe153.
1. Download Zookeeper
Download the Zookeeper package from the Apache mirror at https://mirrors.tuna.tsinghua.edu.cn/apache/zookeeper/, as shown in the figure below.
You can also download zookeeper-3.5.5 directly on binghe151 with the following command.
wget https://mirrors.tuna.tsinghua.edu.cn/apache/zookeeper/zookeeper-3.5.5/apache-zookeeper-3.5.5-bin.tar.gz
This command downloads the apache-zookeeper-3.5.5-bin.tar.gz package straight onto binghe151.
2. Install and Configure Zookeeper
Note: steps (1), (2), and (3) are performed on the binghe151 server.
(1) Extract the Zookeeper package
On binghe151, run the following commands to extract Zookeeper into the /usr/local/ directory and rename the extracted directory to zookeeper-3.5.5.
tar -zxvf apache-zookeeper-3.5.5-bin.tar.gz
mv apache-zookeeper-3.5.5-bin /usr/local/zookeeper-3.5.5
(2) Configure the Zookeeper environment variables
As before, add the Zookeeper variables to the /etc/profile file:
ZOOKEEPER_HOME=/usr/local/zookeeper-3.5.5
PATH=$ZOOKEEPER_HOME/bin:$PATH
export ZOOKEEPER_HOME PATH
Combined with the JDK variables configured earlier, the overall /etc/profile configuration is as follows:
MYSQL_HOME=/usr/local/mysql
JAVA_HOME=/usr/local/jdk1.8.0_212
MYCAT_HOME=/usr/local/mycat
ZOOKEEPER_HOME=/usr/local/zookeeper-3.5.5
MPC_HOME=/usr/local/mpc-1.1.0
GMP_HOME=/usr/local/gmp-6.1.2
MPFR_HOME=/usr/local/mpfr-4.0.2
CLASS_PATH=.:$JAVA_HOME/lib
LD_LIBRARY_PATH=$MPC_HOME/lib:$GMP_HOME/lib:$MPFR_HOME/lib:$LD_LIBRARY_PATH
PATH=$MYSQL_HOME/bin:$JAVA_HOME/bin:$ZOOKEEPER_HOME/bin:$MYCAT_HOME/bin:$PATH
export JAVA_HOME ZOOKEEPER_HOME MYCAT_HOME CLASS_PATH MYSQL_HOME MPC_HOME GMP_HOME MPFR_HOME LD_LIBRARY_PATH PATH
(3) Configure Zookeeper
First, rename the zoo_sample.cfg file in the $ZOOKEEPER_HOME/conf directory ($ZOOKEEPER_HOME is the Zookeeper installation directory) to zoo.cfg. The commands are as follows:
cd /usr/local/zookeeper-3.5.5/conf/
mv zoo_sample.cfg zoo.cfg
Next, edit the zoo.cfg file; the modified content is as follows:
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/usr/local/zookeeper-3.5.5/data
dataLogDir=/usr/local/zookeeper-3.5.5/dataLog
clientPort=2181
server.1=binghe151:2888:3888
server.2=binghe152:2888:3888
server.3=binghe153:2888:3888
Create the data and dataLog directories under the Zookeeper installation directory.
mkdir -p /usr/local/zookeeper-3.5.5/data
mkdir -p /usr/local/zookeeper-3.5.5/dataLog
Switch into the newly created data directory and create a myid file whose content is the single digit 1:
cd /usr/local/zookeeper-3.5.5/data
vim myid
Write the digit 1 into the myid file (equivalently: echo "1" > /usr/local/zookeeper-3.5.5/data/myid).
3. Copy Zookeeper and the Environment File to the Other Servers
Note: steps (1) and (2) are performed on the binghe151 server.
(1) Copy Zookeeper to the other servers
According to the server plan, copy Zookeeper to the binghe152 and binghe153 servers:
scp -r /usr/local/zookeeper-3.5.5/ binghe152:/usr/local/
scp -r /usr/local/zookeeper-3.5.5/ binghe153:/usr/local/
(2) Copy the environment file to the other servers
According to the server plan, copy the /etc/profile environment file to the binghe152 and binghe153 servers:
scp /etc/profile binghe152:/etc/
scp /etc/profile binghe153:/etc/
These operations may prompt for a password; just enter it when asked.
4. Modify the myid File on the Other Servers
Set the content of Zookeeper's myid file to the digit 2 on binghe152 and to the digit 3 on binghe153, as follows.
On binghe152, run:
echo "2" > /usr/local/zookeeper-3.5.5/data/myid
cat /usr/local/zookeeper-3.5.5/data/myid
2
On binghe153, run:
echo "3" > /usr/local/zookeeper-3.5.5/data/myid
cat /usr/local/zookeeper-3.5.5/data/myid
3
5. Apply the Environment Variables
On binghe151, binghe152, and binghe153, reload the profile so the environment variables take effect.
source /etc/profile
6. Start the Zookeeper Cluster
On binghe151, binghe152, and binghe153, start the Zookeeper service.
zkServer.sh start
7. Check the Zookeeper Cluster Status
[root@binghe151 ~]# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper-3.5.5/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost.
Mode: follower
[root@binghe152 local]# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper-3.5.5/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost.
Mode: leader
[root@binghe153 ~]# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper-3.5.5/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost.
Mode: follower
As you can see, Zookeeper runs as a follower on binghe151 and binghe153 and as the leader on binghe152.
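To check all three nodes in one pass from binghe151, a small ssh loop works too (this assumes password-free ssh between the nodes and is not part of the original steps):
# Query the role of every Zookeeper node from one shell
for h in binghe151 binghe152 binghe153; do
  echo "== $h =="; ssh $h "source /etc/profile && zkServer.sh status"
done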
Initialize the Mycat Configuration into the Zookeeper Cluster
Note: the initialization of the data in Zookeeper is performed on binghe151, because we already installed Mycat on binghe151 earlier.
1. Inspect the Initialization Script
Mycat ships an init_zk_data.sh script in the bin directory under its installation directory, as shown below.
[root@binghe151 ~]# ll /usr/local/mycat/bin/
total 384
-rwxr-xr-x 1 root root 3658 Feb 26 17:10 dataMigrate.sh
-rwxr-xr-x 1 root root 1272 Feb 26 17:10 init_zk_data.sh
-rwxr-xr-x 1 root root 15701 Feb 28 20:51 mycat
-rwxr-xr-x 1 root root 2986 Feb 26 17:10 rehash.sh
-rwxr-xr-x 1 root root 2526 Feb 26 17:10 startup_nowrap.sh
-rwxr-xr-x 1 root root 140198 Feb 28 20:51 wrapper-linux-ppc-64
-rwxr-xr-x 1 root root 99401 Feb 28 20:51 wrapper-linux-x86-32
-rwxr-xr-x 1 root root 111027 Feb 28 20:51 wrapper-linux-x86-64
The init_zk_data.sh script initializes Mycat's configuration into Zookeeper: it reads the configuration files under the conf directory of the Mycat installation and writes them into the Zookeeper cluster.
2. Copy the Mycat Configuration Files
First, list the files in the conf directory under the Mycat installation directory:
[root@binghe151 ~]# cd /usr/local/mycat/conf/
[root@binghe151 conf]# ll
total 108
-rwxrwxrwx 1 root root 92 Feb 26 17:10 autopartition-long.txt
-rwxrwxrwx 1 root root 51 Feb 26 17:10 auto-sharding-long.txt
-rwxrwxrwx 1 root root 67 Feb 26 17:10 auto-sharding-rang-mod.txt
-rwxrwxrwx 1 root root 340 Feb 26 17:10 cacheservice.properties
-rwxrwxrwx 1 root root 3338 Feb 26 17:10 dbseq.sql
-rwxrwxrwx 1 root root 3532 Feb 26 17:10 dbseq - utf8mb4.sql
-rw-r--r-- 1 root root 86 Mar 1 22:37 dnindex.properties
-rwxrwxrwx 1 root root 446 Feb 26 17:10 ehcache.xml
-rwxrwxrwx 1 root root 2454 Feb 26 17:10 index_to_charset.properties
-rwxrwxrwx 1 root root 1285 Feb 26 17:10 log4j2.xml
-rwxrwxrwx 1 root root 183 Feb 26 17:10 migrateTables.properties
-rwxrwxrwx 1 root root 271 Feb 26 17:10 myid.properties
-rwxrwxrwx 1 root root 16 Feb 26 17:10 partition-hash-int.txt
-rwxrwxrwx 1 root root 108 Feb 26 17:10 partition-range-mod.txt
-rwxrwxrwx 1 root root 988 Mar 1 16:59 rule.xml
-rwxrwxrwx 1 root root 3883 Mar 3 23:59 schema.xml
-rwxrwxrwx 1 root root 440 Feb 26 17:10 sequence_conf.properties
-rwxrwxrwx 1 root root 84 Mar 3 23:52 sequence_db_conf.properties
-rwxrwxrwx 1 root root 29 Feb 26 17:10 sequence_distributed_conf.properties
-rwxrwxrwx 1 root root 28 Feb 26 17:10 sequence_http_conf.properties
-rwxrwxrwx 1 root root 53 Feb 26 17:10 sequence_time_conf.properties
-rwxrwxrwx 1 root root 2420 Mar 4 15:14 server.xml
-rwxrwxrwx 1 root root 18 Feb 26 17:10 sharding-by-enum.txt
-rwxrwxrwx 1 root root 4251 Feb 28 20:51 wrapper.conf
drwxrwxrwx 2 root root 4096 Feb 28 21:17 zkconf
drwxrwxrwx 2 root root 4096 Feb 28 21:17 zkdownload
Next, copy the schema.xml, server.xml, rule.xml, and sequence_db_conf.properties files from the conf directory into the zkconf subdirectory:
cp schema.xml server.xml rule.xml sequence_db_conf.properties zkconf/
3. Write the Mycat Configuration into the Zookeeper Cluster
Run the init_zk_data.sh script to initialize the configuration into the Zookeeper cluster:
[root@binghe151 bin]# /usr/local/mycat/bin/init_zk_data.sh
o2020-03-08 20:03:13 INFO JAVA_CMD=/usr/local/jdk1.8.0_212/bin/java
o2020-03-08 20:03:13 INFO Start to initialize /mycat of ZooKeeper
o2020-03-08 20:03:14 INFO Done
The output shows that Mycat's configuration was written into Zookeeper successfully.
4. Verify That the Mycat Configuration Was Written into Zookeeper
We can log in to Zookeeper with its zkCli.sh client to verify whether the Mycat configuration was written into Zookeeper successfully.
First, log in to Zookeeper:
[root@binghe151 ~]# zkCli.sh
Connecting to localhost:2181
################### N lines of output omitted ######################
Welcome to ZooKeeper!
WATCHER::
WatchedEvent state:SyncConnected type:None path:null
[zk: localhost:2181(CONNECTED) 0]
Next, inspect the mycat node from the Zookeeper command line:
[zk: localhost:2181(CONNECTED) 0] ls /
[mycat, zookeeper]
[zk: localhost:2181(CONNECTED) 1] ls /mycat
[mycat-cluster-1]
[zk: localhost:2181(CONNECTED) 2] ls /mycat/mycat-cluster-1
[cache, line, rules, schema, sequences, server]
[zk: localhost:2181(CONNECTED) 3]
There are six child nodes under /mycat/mycat-cluster-1. Next, look at the schema node:
[zk: localhost:2181(CONNECTED) 3] ls /mycat/mycat-cluster-1/schema
[dataHost, dataNode, schema]
Next, check the dataHost configuration:
[zk: localhost:2181(CONNECTED) 4] get /mycat/mycat-cluster-1/schema/dataHost
[{"balance":1,"maxCon":1000,"minCon":10,"name":"binghe151","writeType":0,"switchType":1,"slaveThreshold":100,"dbType":"mysql","dbDriver":"native","heartbeat":"select user()","writeHost":[{"host":"binghe51","url":"192.168.175.151:3306","password":"root","user":"root"}]},{"balance":1,"maxCon":1000,"minCon":10,"name":"binghe152","writeType":0,"switchType":1,"slaveThreshold":100,"dbType":"mysql","dbDriver":"native","heartbeat":"select user()","writeHost":[{"host":"binghe52","url":"192.168.175.152:3306","password":"root","user":"root"}]},{"balance":1,"maxCon":1000,"minCon":10,"name":"binghe153","writeType":0,"switchType":1,"slaveThreshold":100,"dbType":"mysql","dbDriver":"native","heartbeat":"select user()","writeHost":[{"host":"binghe53","url":"192.168.175.153:3306","password":"root","user":"root"}]},{"balance":1,"maxCon":1000,"minCon":10,"name":"binghe154","writeType":0,"switchType":1,"slaveThreshold":100,"dbType":"mysql","dbDriver":"native","heartbeat":"select user()","writeHost":[{"host":"binghe54","url":"192.168.175.154:3306","password":"root","user":"root"}]}]
The output is cluttered, but it is clearly JSON; formatted, it looks like this:
[
{
"balance": 1,
"maxCon": 1000,
"minCon": 10,
"name": "binghe151",
"writeType": 0,
"switchType": 1,
"slaveThreshold": 100,
"dbType": "mysql",
"dbDriver": "native",
"heartbeat": "select user()",
"writeHost": [
{
"host": "binghe51",
"url": "192.168.175.151:3306",
"password": "root",
"user": "root"
}
]
},
{
"balance": 1,
"maxCon": 1000,
"minCon": 10,
"name": "binghe152",
"writeType": 0,
"switchType": 1,
"slaveThreshold": 100,
"dbType": "mysql",
"dbDriver": "native",
"heartbeat": "select user()",
"writeHost": [
{
"host": "binghe52",
"url": "192.168.175.152:3306",
"password": "root",
"user": "root"
}
]
},
{
"balance": 1,
"maxCon": 1000,
"minCon": 10,
"name": "binghe153",
"writeType": 0,
"switchType": 1,
"slaveThreshold": 100,
"dbType": "mysql",
"dbDriver": "native",
"heartbeat": "select user()",
"writeHost": [
{
"host": "binghe53",
"url": "192.168.175.153:3306",
"password": "root",
"user": "root"
}
]
},
{
"balance": 1,
"maxCon": 1000,
"minCon": 10,
"name": "binghe154",
"writeType": 0,
"switchType": 1,
"slaveThreshold": 100,
"dbType": "mysql",
"dbDriver": "native",
"heartbeat": "select user()",
"writeHost": [
{
"host": "binghe54",
"url": "192.168.175.154:3306",
"password": "root",
"user": "root"
}
]
}
]
As you can see, the dataHost configuration from Mycat's schema.xml was written into Zookeeper successfully.
To verify that the Mycat configuration has also been synchronized to the other Zookeeper nodes, log in to Zookeeper on binghe152 and binghe153 and check there as well.
[root@binghe152 ~]# zkCli.sh
Connecting to localhost:2181
################# N lines of output omitted ################
[zk: localhost:2181(CONNECTED) 0] get /mycat/mycat-cluster-1/schema/dataHost
[{"balance":1,"maxCon":1000,"minCon":10,"name":"binghe151","writeType":0,"switchType":1,"slaveThreshold":100,"dbType":"mysql","dbDriver":"native","heartbeat":"select user()","writeHost":[{"host":"binghe51","url":"192.168.175.151:3306","password":"root","user":"root"}]},{"balance":1,"maxCon":1000,"minCon":10,"name":"binghe152","writeType":0,"switchType":1,"slaveThreshold":100,"dbType":"mysql","dbDriver":"native","heartbeat":"select user()","writeHost":[{"host":"binghe52","url":"192.168.175.152:3306","password":"root","user":"root"}]},{"balance":1,"maxCon":1000,"minCon":10,"name":"binghe153","writeType":0,"switchType":1,"slaveThreshold":100,"dbType":"mysql","dbDriver":"native","heartbeat":"select user()","writeHost":[{"host":"binghe53","url":"192.168.175.153:3306","password":"root","user":"root"}]},{"balance":1,"maxCon":1000,"minCon":10,"name":"binghe154","writeType":0,"switchType":1,"slaveThreshold":100,"dbType":"mysql","dbDriver":"native","heartbeat":"select user()","writeHost":[{"host":"binghe54","url":"192.168.175.154:3306","password":"root","user":"root"}]}]
The Mycat configuration was synchronized to Zookeeper on binghe152 successfully.
[root@binghe153 ~]# zkCli.sh
Connecting to localhost:2181
##################### N lines of output omitted #####################
[zk: localhost:2181(CONNECTED) 0] get /mycat/mycat-cluster-1/schema/dataHost
[{"balance":1,"maxCon":1000,"minCon":10,"name":"binghe151","writeType":0,"switchType":1,"slaveThreshold":100,"dbType":"mysql","dbDriver":"native","heartbeat":"select user()","writeHost":[{"host":"binghe51","url":"192.168.175.151:3306","password":"root","user":"root"}]},{"balance":1,"maxCon":1000,"minCon":10,"name":"binghe152","writeType":0,"switchType":1,"slaveThreshold":100,"dbType":"mysql","dbDriver":"native","heartbeat":"select user()","writeHost":[{"host":"binghe52","url":"192.168.175.152:3306","password":"root","user":"root"}]},{"balance":1,"maxCon":1000,"minCon":10,"name":"binghe153","writeType":0,"switchType":1,"slaveThreshold":100,"dbType":"mysql","dbDriver":"native","heartbeat":"select user()","writeHost":[{"host":"binghe53","url":"192.168.175.153:3306","password":"root","user":"root"}]},{"balance":1,"maxCon":1000,"minCon":10,"name":"binghe154","writeType":0,"switchType":1,"slaveThreshold":100,"dbType":"mysql","dbDriver":"native","heartbeat":"select user()","writeHost":[{"host":"binghe54","url":"192.168.175.154:3306","password":"root","user":"root"}]}]
The Mycat configuration was synchronized to Zookeeper on binghe153 successfully.
Configure Mycat to Start from Zookeeper
1. Configure Mycat on binghe151
On binghe151, enter the conf directory under the Mycat installation directory and list the files:
[root@binghe151 ~]# cd /usr/local/mycat/conf/
[root@binghe151 conf]# ll
total 108
-rwxrwxrwx 1 root root 92 Feb 26 17:10 autopartition-long.txt
-rwxrwxrwx 1 root root 51 Feb 26 17:10 auto-sharding-long.txt
-rwxrwxrwx 1 root root 67 Feb 26 17:10 auto-sharding-rang-mod.txt
-rwxrwxrwx 1 root root 340 Feb 26 17:10 cacheservice.properties
-rwxrwxrwx 1 root root 3338 Feb 26 17:10 dbseq.sql
-rwxrwxrwx 1 root root 3532 Feb 26 17:10 dbseq - utf8mb4.sql
-rw-r--r-- 1 root root 86 Mar 1 22:37 dnindex.properties
-rwxrwxrwx 1 root root 446 Feb 26 17:10 ehcache.xml
-rwxrwxrwx 1 root root 2454 Feb 26 17:10 index_to_charset.properties
-rwxrwxrwx 1 root root 1285 Feb 26 17:10 log4j2.xml
-rwxrwxrwx 1 root root 183 Feb 26 17:10 migrateTables.properties
-rwxrwxrwx 1 root root 271 Feb 26 17:10 myid.properties
-rwxrwxrwx 1 root root 16 Feb 26 17:10 partition-hash-int.txt
-rwxrwxrwx 1 root root 108 Feb 26 17:10 partition-range-mod.txt
-rwxrwxrwx 1 root root 988 Mar 1 16:59 rule.xml
-rwxrwxrwx 1 root root 3883 Mar 3 23:59 schema.xml
-rwxrwxrwx 1 root root 440 Feb 26 17:10 sequence_conf.properties
-rwxrwxrwx 1 root root 84 Mar 3 23:52 sequence_db_conf.properties
-rwxrwxrwx 1 root root 29 Feb 26 17:10 sequence_distributed_conf.properties
-rwxrwxrwx 1 root root 28 Feb 26 17:10 sequence_http_conf.properties
-rwxrwxrwx 1 root root 53 Feb 26 17:10 sequence_time_conf.properties
-rwxrwxrwx 1 root root 2420 Mar 4 15:14 server.xml
-rwxrwxrwx 1 root root 18 Feb 26 17:10 sharding-by-enum.txt
-rwxrwxrwx 1 root root 4251 Feb 28 20:51 wrapper.conf
drwxrwxrwx 2 root root 4096 Feb 28 21:17 zkconf
drwxrwxrwx 2 root root 4096 Feb 28 21:17 zkdownload
There is a myid.properties file in Mycat's conf directory; edit it with vim:
vim /usr/local/mycat/conf/myid.properties
The edited myid.properties file looks like this.
loadZk=true
zkURL=192.168.175.151:2181,192.168.175.152:2181,192.168.175.153:2181
clusterId=mycat-cluster-1
myid=mycat_151
clusterSize=2
clusterNodes=mycat_151,mycat_154
#server booster ; booster install on db same server,will reset all minCon to 2
type=server
boosterDataHosts=dataHost1
The important parameters are explained below.
loadZk: whether Mycat loads its configuration from Zookeeper at startup; true enables it.
zkURL: the Zookeeper connection string, here the three nodes of our Zookeeper cluster.
clusterId: the id of the Mycat cluster, matching the node we saw under /mycat in Zookeeper:
[zk: localhost:2181(CONNECTED) 1] ls /mycat
[mycat-cluster-1]
myid: the id of the current Mycat node; my naming convention here is the prefix mycat_ plus the last three digits of the server's IP address.
clusterSize: the number of Mycat nodes in the Mycat cluster; we deploy Mycat on binghe151 and binghe154, so the value is 2.
clusterNodes: all Mycat node ids in the cluster, written exactly as configured in myid and separated by commas. Here they are mycat_151,mycat_154.
2. Install a Fresh Mycat on binghe154
Download the same Mycat version used on binghe151 and install it on binghe154, extracting it to the /usr/local/mycat directory on binghe154.
Alternatively, copy the Mycat installation directory from binghe151 to binghe154 directly with the following command.
[root@binghe151 ~]# scp -r /usr/local/mycat binghe154:/usr/local
Note: don't forget to configure Mycat's environment variables on binghe154.
3. Modify the Mycat Configuration on binghe154
On binghe154, edit the myid.properties file in the conf directory under the Mycat installation directory:
vim /usr/local/mycat/conf/myid.properties
The modified myid.properties file looks like this.
loadZk=true
zkURL=192.168.175.151:2181,192.168.175.152:2181,192.168.175.153:2181
clusterId=mycat-cluster-1
myid=mycat_154
clusterSize=2
clusterNodes=mycat_151,mycat_154
#server booster ; booster install on db same server,will reset all minCon to 2
type=server
boosterDataHosts=dataHost1
4. Restart Mycat
Restart Mycat on binghe151 and on binghe154, as shown below.
Note: restart Mycat on binghe151 first, then on binghe154.
[root@binghe151 ~]# mycat restart
Stopping Mycat-server...
Stopped Mycat-server.
Starting Mycat-server...
[root@binghe154 ~]# mycat restart
Stopping Mycat-server...
Stopped Mycat-server.
Starting Mycat-server...
Check the Mycat startup logs on binghe151 and binghe154:
STATUS | wrapper | 2020/03/08 21:08:15 | <-- Wrapper Stopped
STATUS | wrapper | 2020/03/08 21:08:15 | --> Wrapper Started as Daemon
STATUS | wrapper | 2020/03/08 21:08:15 | Launching a JVM...
INFO | jvm 1 | 2020/03/08 21:08:16 | Wrapper (Version 3.2.3) http://wrapper.tanukisoftware.org
INFO | jvm 1 | 2020/03/08 21:08:16 | Copyright 1999-2006 Tanuki Software, Inc. All Rights Reserved.
INFO | jvm 1 | 2020/03/08 21:08:16 |
INFO | jvm 1 | 2020/03/08 21:08:28 | MyCAT Server startup successfully. see logs in logs/mycat.log
The log output shows that Mycat restarted successfully.
Note: if you restart Mycat on binghe151 first and then on binghe154, you will find that the schema.xml, server.xml, rule.xml, and sequence_db_conf.properties files in the conf directory on binghe154 are now identical to the configuration files on binghe151. That is the result of Mycat on binghe154 reading its configuration from Zookeeper.
From now on, we only need to modify the Mycat configuration stored in Zookeeper, and the change is synchronized to the Mycat nodes automatically, which guarantees that all Mycat nodes share a consistent configuration.
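A quick way to convince yourself of this synchronization is to compare file checksums across the two Mycat nodes (assuming ssh access from binghe151; this check is not part of the original steps):
# The checksums should match once both Mycat nodes have restarted
md5sum /usr/local/mycat/conf/schema.xml
ssh binghe154 md5sum /usr/local/mycat/conf/schema.xml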
Configure the Virtual IP
Configure the virtual IP on both binghe151 and binghe154:
ifconfig eth0:1 192.168.175.110 broadcast 192.168.175.255 netmask 255.255.255.0 up
route add -host 192.168.175.110 dev eth0:1
With the virtual IP configured, the result looks like the following; binghe151 is used as the example.
[root@binghe151 ~]# ifconfig
eth0 Link encap:Ethernet HWaddr 00:0C:29:10:A1:45
inet addr:192.168.175.151 Bcast:192.168.175.255 Mask:255.255.255.0
inet6 addr: fe80::20c:29ff:fe10:a145/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:116766 errors:0 dropped:0 overruns:0 frame:0
TX packets:85230 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:25559422 (24.3 MiB) TX bytes:55997016 (53.4 MiB)
eth0:1 Link encap:Ethernet HWaddr 00:0C:29:10:A1:45
inet addr:192.168.175.110 Bcast:192.168.175.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:51102 errors:0 dropped:0 overruns:0 frame:0
TX packets:51102 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:2934009 (2.7 MiB) TX bytes:2934009 (2.7 MiB)
Note: a VIP added on the command line disappears when the server reboots, so it is best to put the commands that create the VIP into a script file, for example /usr/local/script/vip.sh:
mkdir /usr/local/script
vim /usr/local/script/vip.sh
The content of the file is as follows.
ifconfig eth0:1 192.168.175.110 broadcast 192.168.175.255 netmask 255.255.255.0 up
route add -host 192.168.175.110 dev eth0:1
Next, add the /usr/local/script/vip.sh file to the server's boot sequence:
echo /usr/local/script/vip.sh >> /etc/rc.d/rc.local
Also make sure the script is executable (for example, chmod a+x /usr/local/script/vip.sh); on newer systems /etc/rc.d/rc.local itself must be executable as well.
Configure IP Forwarding
On binghe151 and binghe154, enable the kernel's IP forwarding by editing the /etc/sysctl.conf file.
Find the following line.
net.ipv4.ip_forward = 0
Change it to the following.
net.ipv4.ip_forward = 1
Save and exit vim, then run the following command to make the configuration take effect.
sysctl -p
Install and Configure the xinetd Service
On the servers that will run HAProxy, i.e., binghe151 and binghe154, install the xinetd service so that port 48700 can be opened.
(1) Install the xinetd service from the command line:
yum install xinetd -y
(2) Edit the /etc/xinetd.conf file:
vim /etc/xinetd.conf
Check whether the file contains the following configuration.
includedir /etc/xinetd.d
If the line is missing, add it to /etc/xinetd.conf; if it is already present, leave the file as it is.
(3) Create the /etc/xinetd.d directory:
mkdir /etc/xinetd.d
Note: if the /etc/xinetd.d directory already exists, creating it reports the following error.
mkdir: cannot create directory `/etc/xinetd.d': File exists
You can safely ignore this message.
(4) Under the /etc/xinetd.d directory, add the configuration file mycat_status for the Mycat status-check service:
touch /etc/xinetd.d/mycat_status
(5) Edit the mycat_status file:
vim /etc/xinetd.d/mycat_status
The edited mycat_status file contains the following.
service mycat_status
{
flags = REUSE
socket_type = stream
port = 48700
wait = no
user = root
server = /usr/local/bin/mycat_check.sh
log_on_failure += USERID
disable = no
}
Some of the xinetd configuration parameters are explained below.
socket_type: the type of network socket; stream means TCP.
port: the port the service listens on; it must match the entry added to /etc/services later.
wait: no means xinetd serves connections concurrently instead of one at a time.
user: the system user the check program runs as.
server: the program xinetd executes for each incoming connection, here our status-check script.
disable: no means the service is enabled.
(6) Add the mycat_check.sh service script under the /usr/local/bin directory:
touch /usr/local/bin/mycat_check.sh
(7) Edit the /usr/local/bin/mycat_check.sh file:
vim /usr/local/bin/mycat_check.sh
The edited file contains the following.
#!/bin/bash
mycat=`/usr/local/mycat/bin/mycat status | grep 'not running' | wc -l`
if [ "$mycat" = "0" ]; then
/bin/echo -e "HTTP/1.1 200 OK\r\n"
else
/bin/echo -e "HTTP/1.1 503 Service Unavailable\r\n"
/usr/local/mycat/bin/mycat start
fi
Grant execute permission to the mycat_check.sh file:
chmod a+x /usr/local/bin/mycat_check.sh
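Before wiring the script into xinetd, you can dry-run it by hand; with Mycat running, it should print the 200 status line:
# Manual dry run of the health-check script (no xinetd involved yet)
/usr/local/bin/mycat_check.sh
# Expected output: HTTP/1.1 200 OK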
(8) Edit the /etc/services file:
vim /etc/services
Append the following content at the end of the file.
mycat_status 48700/tcp # mycat_status
The port number here must be the same as the one configured in the /etc/xinetd.d/mycat_status file.
(9) Restart the xinetd service:
service xinetd restart
(10) Check whether the mycat_status service started successfully:
[root@binghe151 ~]# netstat -antup|grep 48700
tcp 0 0 :::48700 :::* LISTEN 2776/xinetd
[root@binghe154 ~]# netstat -antup|grep 48700
tcp 0 0 :::48700 :::* LISTEN 6654/xinetd
The output shows that the mycat_status service started successfully on both servers.
At this point, xinetd is installed and configured, which means the Mycat status-check service is in place.
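You can also exercise the check end-to-end through xinetd, the same way HAProxy will; a quick probe (nc is assumed to be installed, and this step is not in the original article):
# Hit the status port; a healthy Mycat yields the HTTP 200 line
nc 192.168.175.151 48700 < /dev/null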
Install and Configure HAProxy
Install HAProxy directly on binghe151 and binghe154 with the following command.
yum install haproxy -y
After the installation, we need to configure HAProxy. Its configuration directory is /etc/haproxy; list the files in that directory:
[root@binghe151 ~]# ll /etc/haproxy/
total 4
-rw-r--r-- 1 root root 3142 Oct 21 2016 haproxy.cfg
The /etc/haproxy/ directory contains a haproxy.cfg file. Next, we modify haproxy.cfg; the modified file is shown below.
global
log 127.0.0.1 local2
chroot /var/lib/haproxy
pidfile /var/run/haproxy.pid
maxconn 4000
user haproxy
group haproxy
daemon
stats socket /var/lib/haproxy/stats
defaults
mode http
log global
option httplog
option dontlognull
option http-server-close
option redispatch
retries 3
timeout http-request 10s
timeout queue 1m
timeout connect 10s
timeout client 1m
timeout server 1m
timeout http-keep-alive 10s
timeout check 10s
maxconn 3000
listen admin_status
bind 0.0.0.0:48800
stats uri /admin-status
stats auth admin:admin
listen allmycat_service
bind 0.0.0.0:3366
mode tcp
option tcplog
option httpchk OPTIONS * HTTP/1.1\r\nHost:\ www
balance roundrobin
server mycat_151 192.168.175.151:3307 check port 48700 inter 5s rise 2 fall 3
server mycat_154 192.168.175.154:3307 check port 48700 inter 5s rise 2 fall 3
listen allmycat_admin
bind 0.0.0.0:3377
mode tcp
option tcplog
option httpchk OPTIONS * HTTP/1.1\r\nHost:\ www
balance roundrobin
server mycat_151 192.168.175.151:3308 check port 48700 inter 5s rise 2 fall 3
server mycat_154 192.168.175.154:3308 check port 48700 inter 5s rise 2 fall 3
Next, start HAProxy on binghe151 and binghe154:
haproxy -f /etc/haproxy/haproxy.cfg
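Once HAProxy is up, the stats page declared in the admin_status listener gives a quick view of the two Mycat backends; for example (the credentials are the stats auth values from haproxy.cfg above):
# Fetch the HAProxy stats page; also viewable in a browser
curl -u admin:admin http://192.168.175.151:48800/admin-status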
Next, use the mysql command to connect to Mycat through the virtual IP and port that HAProxy listens on:
[root@binghe151 ~]# mysql -umycat -pmycat -h192.168.175.110 -P3366 --default-auth=mysql_native_password
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.6.29-mycat-1.6.7.4-release-20200228205020 MyCat Server (OpenCloudDB)
Copyright (c) 2000, 2019, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql>
As you can see, the connection to Mycat succeeds.
Install Keepalived
1. Install and Configure Keepalived
Install Keepalived directly on binghe151 and binghe154 with the following command.
yum install keepalived -y
After the installation succeeds, a keepalived directory is created under /etc. Next, configure the keepalived.conf file in the /etc/keepalived directory:
vim /etc/keepalived/keepalived.conf
The keepalived.conf on binghe151 (the MASTER node, with the higher priority) is as follows.
! Configuration File for keepalived
vrrp_script chk_http_port {
script "/etc/keepalived/check_haproxy.sh"
interval 2
weight 2
}
vrrp_instance VI_1 {
state MASTER
interface eth0
virtual_router_id 51
priority 150
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
track_script {
chk_http_port
}
virtual_ipaddress {
192.168.175.110 dev eth0 scope global
}
}
The keepalived.conf on binghe154 (the BACKUP node, with the lower priority) is as follows.
! Configuration File for keepalived
vrrp_script chk_http_port {
script "/etc/keepalived/check_haproxy.sh"
interval 2
weight 2
}
vrrp_instance VI_1 {
state BACKUP
interface eth0
virtual_router_id 51
priority 120
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
track_script {
chk_http_port
}
virtual_ipaddress {
192.168.175.110 dev eth0 scope global
}
}
2. Write the HAProxy Health-Check Script
Next, create the check_haproxy.sh script in the /etc/keepalived directory on both binghe151 and binghe154; the script content is shown below.
#!/bin/bash
STARTHAPROXY="/usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg"
STOPKEEPALIVED="/etc/init.d/keepalived stop"
#STOPKEEPALIVED="/usr/bin/systemctl stop keepalived"
LOGFILE="/var/log/keepalived-haproxy-state.log"
echo "[check_haproxy status]" >> $LOGFILE
date >> $LOGFILE
A=`ps -C haproxy --no-header |wc -l`
if [ $A -eq 0 ];then
echo $STARTHAPROXY >> $LOGFILE
$STARTHAPROXY >> $LOGFILE 2>&1
sleep 5
fi
if [ `ps -C haproxy --no-header |wc -l` -eq 0 ];then
exit 0
else
exit 1
fi
Grant execute permission to the check_haproxy.sh script with the following command.
chmod a+x /etc/keepalived/check_haproxy.sh
3. Start Keepalived
With the configuration in place, start Keepalived on binghe151 and binghe154:
/etc/init.d/keepalived start
Check whether Keepalived started successfully:
[root@binghe151 ~]# ps -ef | grep keepalived
root 1221 1 0 20:06 ? 00:00:00 keepalived -D
root 1222 1221 0 20:06 ? 00:00:00 keepalived -D
root 1223 1221 0 20:06 ? 00:00:02 keepalived -D
root 93290 3787 0 21:42 pts/0 00:00:00 grep keepalived
[root@binghe154 ~]# ps -ef | grep keepalived
root 1224 1 0 20:06 ? 00:00:00 keepalived -D
root 1225 1224 0 20:06 ? 00:00:00 keepalived -D
root 1226 1224 0 20:06 ? 00:00:02 keepalived -D
root 94636 3798 0 21:43 pts/0 00:00:00 grep keepalived
As you can see, the Keepalived service started successfully on both servers.
4. Verify the Virtual IP Bound by Keepalived
Next, check whether Keepalived on each of the two servers has bound the virtual IP.
[root@binghe151 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:10:a1:45 brd ff:ff:ff:ff:ff:ff
inet 192.168.175.151/24 brd 192.168.175.255 scope global eth0
inet 192.168.175.110/32 scope global eth0
inet 192.168.175.110/24 brd 192.168.175.255 scope global secondary eth0:1
inet6 fe80::20c:29ff:fe10:a145/64 scope link
valid_lft forever preferred_lft forever
Note the following line in the output.
inet 192.168.175.110/32 scope global eth0
It shows that Keepalived on binghe151 has bound the virtual IP 192.168.175.110.
[root@binghe154 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:50:56:22:2a:75 brd ff:ff:ff:ff:ff:ff
inet 192.168.175.154/24 brd 192.168.175.255 scope global eth0
inet 192.168.175.110/24 brd 192.168.175.255 scope global secondary eth0:1
inet6 fe80::250:56ff:fe22:2a75/64 scope link
valid_lft forever preferred_lft forever
As you can see, Keepalived on binghe154 has not bound the virtual IP; the /24 secondary address on eth0:1 is the alias we added manually earlier, whereas Keepalived adds a /32 address.
5. Test Virtual IP Failover
How do we test the failover of the virtual IP? First, stop Keepalived on binghe151:
/etc/init.d/keepalived stop
Next, check the virtual IP binding on binghe154:
[root@binghe154 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:50:56:22:2a:75 brd ff:ff:ff:ff:ff:ff
inet 192.168.175.154/24 brd 192.168.175.255 scope global eth0
inet 192.168.175.110/32 scope global eth0
inet 192.168.175.110/24 brd 192.168.175.255 scope global secondary eth0:1
inet6 fe80::250:56ff:fe22:2a75/64 scope link
valid_lft forever preferred_lft forever
The output now contains the following line.
inet 192.168.175.110/32 scope global eth0
Keepalived on binghe154 has bound the virtual IP 192.168.175.110: the virtual IP has floated over to binghe154.
6. Keepalived on binghe151 Reclaims the Virtual IP
Next, start Keepalived on binghe151 again:
/etc/init.d/keepalived start
After it starts successfully, check the virtual IP binding once more.
[root@binghe151 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:10:a1:45 brd ff:ff:ff:ff:ff:ff
inet 192.168.175.151/24 brd 192.168.175.255 scope global eth0
inet 192.168.175.110/32 scope global eth0
inet 192.168.175.110/24 brd 192.168.175.255 scope global secondary eth0:1
inet6 fe80::20c:29ff:fe10:a145/64 scope link
valid_lft forever preferred_lft forever
[root@binghe154 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:50:56:22:2a:75 brd ff:ff:ff:ff:ff:ff
inet 192.168.175.154/24 brd 192.168.175.255 scope global eth0
inet 192.168.175.110/24 brd 192.168.175.255 scope global secondary eth0:1
inet6 fe80::250:56ff:fe22:2a75/64 scope link
valid_lft forever preferred_lft forever
Because the Keepalived priority configured on binghe151 (150) is higher than the one on binghe154 (120), once Keepalived on binghe151 is started again it preempts the virtual IP.
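While repeating these stop/start tests, a small loop on either node makes the VIP movement easy to follow (the watch utility is assumed to be available):
# Refreshes every second; the line appears on whichever node holds the VIP
watch -n 1 "ip addr show eth0 | grep 192.168.175.110"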
Configure MySQL Replication
To keep things simple, I configure MySQL on binghe154 and binghe155 as a master/slave pair; you can set up replication for MySQL on the other servers the same way according to your needs (note: this is a one-master-one-slave setup).
1. Edit the my.cnf Files
The my.cnf on binghe154 (the master) contains the following.
server_id = 154
log_bin = /data/mysql/log/bin_log/mysql-bin
binlog-ignore-db=mysql
binlog_format= mixed
sync_binlog=100
log_slave_updates = 1
binlog_cache_size=32m
max_binlog_cache_size=64m
max_binlog_size=512m
lower_case_table_names = 1
relay_log = /data/mysql/log/bin_log/relay-bin
relay_log_index = /data/mysql/log/bin_log/relay-bin.index
master_info_repository=TABLE
relay-log-info-repository=TABLE
relay-log-recovery
The my.cnf on binghe155 (the slave) contains the following.
server_id = 155
log_bin = /data/mysql/log/bin_log/mysql-bin
binlog-ignore-db=mysql
binlog_format= mixed
sync_binlog=100
log_slave_updates = 1
binlog_cache_size=32m
max_binlog_cache_size=64m
max_binlog_size=512m
lower_case_table_names = 1
relay_log = /data/mysql/log/bin_log/relay-bin
relay_log_index = /data/mysql/log/bin_log/relay-bin.index
master_info_repository=TABLE
relay-log-info-repository=TABLE
relay-log-recovery
2. Synchronize the Data Between the Two Servers
binghe154 currently has only the customer_db database; export it with the mysqldump command, as shown below.
[root@binghe154 ~]# mysqldump --master-data=2 --single-transaction -uroot -p --databases customer_db > binghe154.sql
Enter password:
Next, inspect the binghe154.sql file.
In the file, you will find the following line.
CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000042', MASTER_LOG_POS=995;
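Because mysqldump was run with --master-data=2, these coordinates are written near the top of the dump as a comment, so you can also pull them out without opening the file:
# Extract the replication coordinates recorded by --master-data=2
grep "CHANGE MASTER" binghe154.sql | head -1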
It tells us that the master's current binary log file is mysql-bin.000042 and the log position is 995.
Next, copy the binghe154.sql file to binghe155:
scp binghe154.sql 192.168.175.155:/usr/local/src
On binghe155, import the binghe154.sql script into MySQL:
mysql -uroot -p < /usr/local/src/binghe154.sql
This completes the data initialization.
3. Create the Replication Account
In MySQL on binghe154, create the account used for replication, as shown below.
mysql> CREATE USER 'repl'@'192.168.175.%' IDENTIFIED BY 'repl123456';
Query OK, 0 rows affected (0.01 sec)
mysql> ALTER USER 'repl'@'192.168.175.%' IDENTIFIED WITH mysql_native_password BY 'repl123456';
Query OK, 0 rows affected (0.00 sec)
mysql> GRANT REPLICATION SLAVE ON *.* TO 'repl'@'192.168.175.%';
Query OK, 0 rows affected (0.00 sec)
mysql> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.00 sec)
4. Configure the Replication Link
Log in to MySQL on binghe155 and configure the replication link with the following statement.
mysql> change master to
> master_host='192.168.175.154',
> master_port=3306,
> master_user='repl',
> master_password='repl123456',
> MASTER_LOG_FILE='mysql-bin.000042',
> MASTER_LOG_POS=995;
Here, MASTER_LOG_FILE='mysql-bin.000042' and MASTER_LOG_POS=995 are exactly the values found in the binghe154.sql file.
5. Start the Slave
Start the slave from the MySQL command line on binghe155:
mysql> start slave;
Check whether the slave started successfully:
mysql> SHOW slave STATUS \G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.175.154
Master_User: repl
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000007
Read_Master_Log_Pos: 1360
Relay_Log_File: relay-bin.000003
Relay_Log_Pos: 322
Relay_Master_Log_File: mysql-bin.000007
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
################# part of the output omitted ##################
The output shows that both Slave_IO_Running and Slave_SQL_Running are Yes, so the MySQL replication environment is set up successfully.
Finally, don't forget to create the account that Mycat uses to connect to MySQL on binghe155, as shown below.
CREATE USER 'mycat'@'192.168.175.%' IDENTIFIED BY 'mycat';
ALTER USER 'mycat'@'192.168.175.%' IDENTIFIED WITH mysql_native_password BY 'mycat';
GRANT SELECT, INSERT, UPDATE, DELETE,EXECUTE ON *.* TO 'mycat'@'192.168.175.%';
FLUSH PRIVILEGES;
Configure Mycat Read/Write Splitting
Modify Mycat's schema.xml file to enable read/write splitting between MySQL on binghe154 and binghe155. Edit the schema.xml file under the conf/zkconf directory of the Mycat installation; the modified schema.xml is shown below.
<!DOCTYPE mycat:schema SYSTEM "schema.dtd">
<mycat:schema xmlns:mycat="http://io.mycat/">
<schema name="shop" checkSQLschema="true" sqlMaxLimit="1000">
<table name="order_master" dataNode="orderdb01,orderdb02,orderdb03,orderdb04" rule="order_master" primaryKey="order_id" autoIncrement="true">
<childTable name="order_detail" joinKey="order_id" parentKey="order_id" primaryKey="order_detail_id" autoIncrement="true"/>
</table>
<table name="order_cart" dataNode="ordb" primaryKey="cart_id"/>
<table name="order_customer_addr" dataNode="ordb" primaryKey="customer_addr_id"/>
<table name="region_info" dataNode="ordb,prodb,custdb" primaryKey="region_id" type="global"/>
<table name="serial" dataNode="ordb" primaryKey="id"/>
<table name="shipping_info" dataNode="ordb" primaryKey="ship_id"/>
<table name="warehouse_info" dataNode="ordb" primaryKey="w_id"/>
<table name="warehouse_proudct" dataNode="ordb" primaryKey="wp_id"/>
<table name="product_brand_info" dataNode="prodb" primaryKey="brand_id"/>
<table name="product_category" dataNode="prodb" primaryKey="category_id"/>
<table name="product_comment" dataNode="prodb" primaryKey="comment_id"/>
<table name="product_info" dataNode="prodb" primaryKey="product_id"/>
<table name="product_pic_info" dataNode="prodb" primaryKey="product_pic_id"/>
<table name="product_supplier_info" dataNode="prodb" primaryKey="supplier_id"/>
<table name="customer_balance_log" dataNode="custdb" primaryKey="balance_id"/>
<table name="customer_inf" dataNode="custdb" primaryKey="customer_inf_id"/>
<table name="customer_level_inf" dataNode="custdb" primaryKey="customer_level"/>
<table name="customer_login" dataNode="custdb" primaryKey="customer_id"/>
<table name="customer_login_log" dataNode="custdb" primaryKey="login_id"/>
<table name="customer_point_log" dataNode="custdb" primaryKey="point_id"/>
</schema>
<dataNode name="mycat" dataHost="binghe151" database="mycat"/>
<dataNode name="ordb" dataHost="binghe152" database="order_db"/>
<dataNode name="prodb" dataHost="binghe153" database="product_db"/>
<dataNode name="custdb" dataHost="binghe154" database="customer_db"/>
<dataNode name="orderdb01" dataHost="binghe152" database="orderdb01"/>
<dataNode name="orderdb02" dataHost="binghe152" database="orderdb02"/>
<dataNode name="orderdb03" dataHost="binghe153" database="orderdb03"/>
<dataNode name="orderdb04" dataHost="binghe153" database="orderdb04"/>
<dataHost balance="1" maxCon="1000" minCon="10" name="binghe151" writeType="0" switchType="1" slaveThreshold="100" dbType="mysql" dbDriver="native">
<heartbeat>select user()</heartbeat>
<writeHost host="binghe51" url="192.168.175.151:3306" password="mycat" user="mycat"/>
</dataHost>
<dataHost balance="1" maxCon="1000" minCon="10" name="binghe152" writeType="0" switchType="1" slaveThreshold="100" dbType="mysql" dbDriver="native">
<heartbeat>select user()</heartbeat>
<writeHost host="binghe52" url="192.168.175.152:3306" password="mycat" user="mycat"/>
</dataHost>
<dataHost balance="1" maxCon="1000" minCon="10" name="binghe153" writeType="0" switchType="1" slaveThreshold="100" dbType="mysql" dbDriver="native">
<heartbeat>select user()</heartbeat>
<writeHost host="binghe53" url="192.168.175.153:3306" password="mycat" user="mycat"/>
</dataHost>
<dataHost balance="1" maxCon="1000" minCon="10" name="binghe154" writeType="0" switchType="1" slaveThreshold="100" dbType="mysql" dbDriver="native">
<heartbeat>select user()</heartbeat>
<writeHost host="binghe54" url="192.168.175.154:3306" password="mycat" user="mycat">
<readHost host="binghe55" url="192.168.175.155:3306" user="mycat" password="mycat"/>
</writeHost>
<writeHost host="binghe55" url="192.168.175.155:3306" password="mycat" user="mycat"/>
</dataHost>
</mycat:schema>
Save and exit vim, then reinitialize the data in Zookeeper:
/usr/local/mycat/bin/init_zk_data.sh
Once the command succeeds, the new configuration is automatically synchronized to the schema.xml in the conf directory of the Mycat installation on both binghe151 and binghe154.
Next, start the Mycat service on binghe151 and binghe154.
Accessing the High-Availability Environment
At this point, the whole high-availability environment is configured. Upstream applications connect to it through the IP and port that HAProxy listens on. For example, connect with the mysql command:
[root@binghe151 ~]# mysql -umycat -pmycat -h192.168.175.110 -P3366 --default-auth=mysql_native_password
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.6.29-mycat-1.6.7.4-release-20200228205020 MyCat Server (OpenCloudDB)
Copyright (c) 2000, 2019, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> show databases;
+----------+
| DATABASE |
+----------+
| shop |
+----------+
1 row in set (0.10 sec)
mysql> use shop;
Database changed
mysql> show tables;
+-----------------------+
| Tables_in_shop        |
+-----------------------+
| customer_balance_log |
| customer_inf |
| customer_level_inf |
| customer_login |
| customer_login_log |
| customer_point_log |
| order_cart |
| order_customer_addr |
| order_detail |
| order_master |
| product_brand_info |
| product_category |
| product_comment |
| product_info |
| product_pic_info |
| product_supplier_info |
| region_info |
| serial |
| shipping_info |
| warehouse_info |
| warehouse_proudct |
+-----------------------+
21 rows in set (0.00 sec)
Here I only extended MySQL on binghe154 into a read/write-splitting setup; you can add replication and read/write splitting to MySQL on the other servers in the same way. With that, the overall environment achieves high availability at every layer: HAProxy, Mycat, MySQL, Zookeeper, and Keepalived.
That's all for today. I'm 冰河. If you have any questions, leave a comment below, or add me on WeChat so we can talk tech and level up together.