Basic OS Information
View CPU information
lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                4
On-line CPU(s) list:   0-3
Thread(s) per core:    1
Core(s) per socket:    1
Socket(s):             4
NUMA node(s):          1
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 58
Model name:            Intel Xeon E3-12xx v2 (Ivy Bridge)
Stepping:              9
CPU MHz:               2599.998
BogoMIPS:              5199.99
Hypervisor vendor:     KVM
Virtualization type:   full
L1d cache:             32K
L1i cache:             32K
L2 cache:              4096K
NUMA node0 CPU(s):     0-3
Flags:                 fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl eagerfpu pni pclmulqdq ssse3 cx16 pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm xsaveopt fsgsbase smep erms
View OS release information
cat /etc/redhat-release
CentOS Linux release 7.1.1503 (Core)
Installing MySQL
Download: https://downloads.mysql.com/archives/community/
Installation reference: https://dev.mysql.com/doc/refman/8.0/en/linux-installation-rpm.html
1. Install the packages
# Common files for server and client libraries
rpm -Uvh mysql-community-common-5.7.27-1.el7.x86_64.rpm
warning: mysql-community-common-5.7.27-1.el7.x86_64.rpm: Header V3 DSA/SHA1 Signature, key ID 5072e1f5: NOKEY
Preparing...                          ################################# [100%]
file /usr/share/mysql/czech/errmsg.sys from install of mysql-community-common-5.7.27-1.el7.x86_64 conflicts with file from package mariadb-libs-1:5.5.44-2.el7.centos.x86_64
# CentOS 7 ships MariaDB libraries by default; remove them to resolve the conflict
yum -y remove mariadb-libs-1:5.5.44-2.el7.centos.x86_64
# Common files for server and client libraries
rpm -Uvh mysql-community-common-5.7.27-1.el7.x86_64.rpm
# Shared libraries for MySQL database client applications
rpm -Uvh mysql-community-libs-5.7.27-1.el7.x86_64.rpm
# MySQL client applications and tools
rpm -Uvh mysql-community-client-5.7.27-1.el7.x86_64.rpm
# Database server and related tools
rpm -Uvh mysql-community-server-5.7.27-1.el7.x86_64.rpm
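To confirm all four packages installed cleanly, a quick sanity check against the RPM database (not part of the official procedure):
# List the installed MySQL packages
rpm -qa | grep mysql-community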
2. Start the database
# Start the MySQL service
systemctl start mysqld
3. Adjust passwords and privileges
# Retrieve the temporary root password generated at first start
grep 'temporary password' /var/log/mysqld.log
# Change the root password (run inside the mysql client)
ALTER USER 'root'@'localhost' IDENTIFIED BY 'MyNewPass4!';
# Relax the password-validation policy so simpler passwords are accepted
SHOW VARIABLES LIKE 'validate_password%';
set global validate_password_length=6;
set global validate_password_policy=LOW;
4. Create a user and database
# Create a test user
create user 'config-center'@'%' identified by '123456';
# Grant the test user full access to its database
grant all privileges on `config-center`.* to 'config-center'@'%' identified by '123456';
# Create the test database
CREATE DATABASE `config-center`;
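To verify the account works as intended, you can inspect its grants from the mysql client (a quick check, assuming the statements above succeeded):
# Confirm the privileges assigned to the new user
SHOW GRANTS FOR 'config-center'@'%';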
# Enable binlog in the server configuration
vi /etc/my.cnf
server_id=1918
log_bin = mysql-bin
binlog_format = ROW
expire_logs_days=30
# Check the result, then restart the database instance
show variables like '%log_bin%';
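A concrete restart-and-check sequence from the shell (assuming the root password set earlier):
# Restart so the my.cnf changes take effect
systemctl restart mysqld
# Verify without opening an interactive session
mysql -uroot -p -e "show variables like '%log_bin%';"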
Installing Redis Cluster
VMs: 172.27.2.38, .48, .63. Repeat steps 1-3 on every machine; steps 4 and 5 only need to run on one of them.
1. Download and build
# Download the redis tarball and build it from source
wget https://download.redis.io/releases/redis-4.0.9.tar.gz
tar xzf redis-4.0.9.tar.gz
cd redis-4.0.9
make
2. Edit the per-node configuration
# Enable cluster mode ----- 3 VMs in total, deployed as 3 masters and 3 replicas; each machine runs 2 nodes, on ports 7000 and 7001
vi redis.conf
# Core settings; each VM runs two nodes
port 7000
cluster-enabled yes
cluster-config-file nodes-7000.conf
cluster-node-timeout 5000
appendonly yes
# To keep the two nodes' configs apart, store redis.conf in separate directories
mkdir 7000
cp redis.conf ./7000
mkdir 7001
cp redis.conf ./7001
# Set the bind address (each machine's own IP) and per-node port
vi ./7000/redis.conf
bind 172.27.2.38
port 7000
cluster-config-file nodes-7000.conf
vi ./7001/redis.conf
bind 172.27.2.38
port 7001
cluster-config-file nodes-7001.conf
3. Start every node
# Start both nodes; repeat on all 3 machines
./src/redis-server ./7000/redis.conf &
./src/redis-server ./7001/redis.conf &
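To check that each node came up in cluster mode, you can query it with redis-cli (cluster_state reports fail until the cluster is created in step 5):
# Each node should report cluster_enabled:1
./src/redis-cli -h 172.27.2.38 -p 7000 cluster info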
4. Install the redis gem
# Prerequisite for running ./redis-trib.rb (Redis 3 or 4) ----- only needed on one machine
gem install redis
# Requires Ruby >= 2.3.0 (install or upgrade Ruby first)
# https://www.ruby-lang.org/en/documentation/installation/#ruby-install
# https://www.ruby-lang.org/en/downloads/
# Build Ruby 2.7.3 from source:
./configure
make
make install
5. Create the cluster
# Create a 6-node cluster with one replica per master
./redis-trib.rb create --replicas 1 172.27.2.38:7000 172.27.2.38:7001 172.27.2.48:7000 172.27.2.48:7001 172.27.2.63:7000 172.27.2.63:7001
>>> Creating cluster
>>> Performing hash slots allocation on 6 nodes...
Using 3 masters:
172.27.2.38:7000
172.27.2.48:7000
172.27.2.63:7000
Adding replica 172.27.2.48:7001 to 172.27.2.38:7000
Adding replica 172.27.2.63:7001 to 172.27.2.48:7000
Adding replica 172.27.2.38:7001 to 172.27.2.63:7000
M: 43837d1bc6faadd823a713e2a79670f2f7ea3f31 172.27.2.38:7000
   slots:0-5460 (5461 slots) master
S: 76efe3a8c213e27535648d5caa2bb9b881ebdcd5 172.27.2.38:7001
   replicates 9cd922525743bc14905016d0c4a66f13787dde58
M: 87e91840bb2335a5b881d530ef97ebc9f89340fc 172.27.2.48:7000
   slots:5461-10922 (5462 slots) master
S: f187e478fd9c40e6c4b0ab66435e6bd7bbe46056 172.27.2.48:7001
   replicates 43837d1bc6faadd823a713e2a79670f2f7ea3f31
M: 9cd922525743bc14905016d0c4a66f13787dde58 172.27.2.63:7000
   slots:10923-16383 (5461 slots) master
S: e3a3861e7d491deb38176556d3a5a4c903dc5446 172.27.2.63:7001
   replicates 87e91840bb2335a5b881d530ef97ebc9f89340fc
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join....
>>> Performing Cluster Check (using node 172.27.2.38:7000)
M: 43837d1bc6faadd823a713e2a79670f2f7ea3f31 172.27.2.38:7000
   slots:0-5460 (5461 slots) master
   1 additional replica(s)
M: 87e91840bb2335a5b881d530ef97ebc9f89340fc 172.27.2.48:7000
   slots:5461-10922 (5462 slots) master
   1 additional replica(s)
S: 76efe3a8c213e27535648d5caa2bb9b881ebdcd5 172.27.2.38:7001
   slots: (0 slots) slave
   replicates 9cd922525743bc14905016d0c4a66f13787dde58
S: e3a3861e7d491deb38176556d3a5a4c903dc5446 172.27.2.63:7001
   slots: (0 slots) slave
   replicates 87e91840bb2335a5b881d530ef97ebc9f89340fc
M: 9cd922525743bc14905016d0c4a66f13787dde58 172.27.2.63:7000
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
S: f187e478fd9c40e6c4b0ab66435e6bd7bbe46056 172.27.2.48:7001
   slots: (0 slots) slave
   replicates 43837d1bc6faadd823a713e2a79670f2f7ea3f31
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
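A quick smoke test with redis-cli in cluster mode (-c follows MOVED redirects; the key and value here are arbitrary):
# Write through one node; the client is redirected to whichever master owns the slot
./src/redis-cli -c -h 172.27.2.38 -p 7000 set smoke:key hello
./src/redis-cli -c -h 172.27.2.48 -p 7000 get smoke:key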
Installing Zookeeper
Download: https://zookeeper.apache.org/releases.html
Documentation: https://zookeeper.apache.org/doc/r3.7.0/zookeeperOver.html
1. Install the software
# Install a ZooKeeper cluster ----- 3 VMs
# Extract the ZooKeeper archive
tar xzf apache-zookeeper-3.6.3-bin.tar.gz
cd apache-zookeeper-3.6.3-bin
2. Edit the configuration file
# Copy the sample config (both files live in conf/)
cp conf/zoo_sample.cfg conf/zoo.cfg
# Edit the settings
vi conf/zoo.cfg
tickTime=2000
dataDir=/opt/supp_app/data/zookeeper
clientPort=2181
initLimit=5
syncLimit=2
# Leader-election and data-sync endpoints for each node
server.1=172.27.2.38:2888:3888
server.2=172.27.2.48:2888:3888
server.3=172.27.2.63:2888:3888
3. Write the server id
# In dataDir, create a file named myid containing the X from this host's server.X line.
cd /opt/supp_app/data/zookeeper
# Write the id into myid (each IP gets a unique X; this host is server.1)
vi myid
4. Start the service
# Copy zoo.cfg to the other hosts, repeat the steps above, then start and check each node
./bin/zkServer.sh start
./bin/zkServer.sh status
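Once all three nodes are running, you can confirm quorum remotely with the four-letter srvr command (whitelisted by default; assumes nc is installed):
# One node should report Mode: leader, the others Mode: follower
echo srvr | nc 172.27.2.38 2181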
Installing Kafka
Reference: http://kafka.apache.org/documentation/#introduction
Download: http://kafka.apache.org/downloads
1. Update the configuration
# After extracting the package, edit the broker config
vi config/server.properties
# broker.id must be unique across the cluster
broker.id=0
zookeeper.connect=172.27.2.38:2181,172.27.2.48:2181,172.27.2.63:2181
# In a cluster, raise the default ZooKeeper connection timeout
zookeeper.connection.timeout.ms=180000
2. Start the broker
# Start in the foreground (backgrounded with &)
./bin/kafka-server-start.sh ./config/server.properties &
# Or start as a daemon; -daemon already backgrounds the process
./bin/kafka-server-start.sh -daemon ./config/server.properties
The producer sends data directly to the broker that is the leader for the partition without any intervening routing tier.
To help the producer do this all Kafka nodes can answer a request for metadata about which servers are alive and where the leaders for the partitions of a topic are at any given time to allow the producer to appropriately direct its requests.
3. Create topics
# Create the topics; the replication factor must be <= the number of brokers
./kafka-topics.sh --create --zookeeper 172.27.2.38:2181,172.27.2.48:2181,172.27.2.63:2181 --replication-factor 3 --partitions 4 --topic dx_datacenter_topic
./kafka-topics.sh --create --zookeeper 172.27.2.38:2181,172.27.2.48:2181,172.27.2.63:2181 --replication-factor 3 --partitions 4 --topic dx_guijiaicall_topic
./kafka-topics.sh --create --zookeeper 172.27.2.38:2181,172.27.2.48:2181,172.27.2.63:2181 --replication-factor 3 --partitions 4 --topic dx_trfailureretry_topic
./kafka-topics.sh --create --zookeeper 172.27.2.38:2181,172.27.2.48:2181,172.27.2.63:2181 --replication-factor 3 --partitions 12 --topic dx_trcallback_topic
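To see the metadata-driven leader routing described above, you can describe a topic and run a produce/consume round trip (9092 is the assumed default broker port; the commands sit in the same bin directory):
# Show partition leaders and replicas for one topic
./kafka-topics.sh --describe --zookeeper 172.27.2.38:2181 --topic dx_datacenter_topic
# Produce a test message (type a line, then Ctrl+C)
./kafka-console-producer.sh --broker-list 172.27.2.38:9092 --topic dx_datacenter_topic
# Consume it back from the beginning
./kafka-console-consumer.sh --bootstrap-server 172.27.2.38:9092 --topic dx_datacenter_topic --from-beginning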
Installing Kafka-Eagle
Reference: https://www.kafka-eagle.org/articles/docs/installation/linux-macos.html
Download: http://www.kafka-eagle.org/articles/docs/changelog/changelog.html
1. Download the package
wget https://github.com/smartloli/kafka-eagle-bin/archive/v2.0.5.tar.gz
# Follow the reference documentation for the remaining install steps
2. Modify the kafka start script (optional)
# Edit the kafka start script so Eagle can collect per-broker memory metrics over JMX; restart the service afterwards
vi bin/kafka-server-start.sh
if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then
export KAFKA_HEAP_OPTS="-Xmx1G -Xms1G"
# Expose a JMX port for monitoring
export JMX_PORT="9099"
fi
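After restarting the broker, a quick check that the JMX listener is up (ss ships with iproute; 9099 is the port set above):
# Confirm something is listening on the JMX port
ss -lntp | grep 9099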
Installing Mongodb (Sharded Cluster)
Download: https://www.mongodb.com/download-center/community/releases
Reference: https://docs.mongodb.com/manual/tutorial/install-mongodb-on-red-hat/
VMs: 172.27.2.63/38/48
1. Prepare the software environment
Reference: https://docs.mongodb.com/manual/tutorial/deploy-shard-cluster/
Install the following on each of the machines above.
# Create the mongodb yum repository
vi /etc/yum.repos.d/mongodb-org-4.4.repo
[mongodb-org-4.4]
name=MongoDB Repository
baseurl=https://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/4.4/x86_64/
gpgcheck=1
enabled=1
gpgkey=https://www.mongodb.org/static/pgp/server-4.4.asc
# Install MongoDB
yum install -y mongodb-org
# Review the default configuration file
cat /etc/mongod.conf
2. Create the configuration files
Configure each of the machines above.
Step 1: Start each member of the config server replica set
# For a production deployment, deploy a config server replica set with at least three members
vi /opt/soft/mongodb_sharding_config/03/shared.conf
# mongod.conf
# for documentation of all options, see:
#   http://docs.mongodb.org/manual/reference/configuration-options/

# where to write logging data.
systemLog:
  destination: file
  logAppend: true
  path: /opt/supp_app/data/mongodb/logs/03/mongod.log

# Where and how to store data.
storage:
  dbPath: /opt/supp_app/data/mongodb/03
  journal:
    enabled: true
#  engine:
#  wiredTiger:

# how the process runs
processManagement:
  fork: true  # fork and run in background
  pidFilePath: /opt/soft/mongodb_sharding_config/03/mongod.pid  # location of pidfile
  timeZoneInfo: /usr/share/zoneinfo

# network interfaces; config servers listen on port 27019 by default
net:
#  port: 27019
  bindIp: 172.27.2.63  # Enter 0.0.0.0,:: to bind to all IPv4 and IPv6 addresses or, alternatively, use the net.bindIpAll setting.

security:
  authorization: enabled
  keyFile: /opt/soft/mongodb_sharding_config/ssl_key  # generated with: openssl rand -base64 756 > ssl_key

#operationProfiling:

replication:
  oplogSizeMB: 4096
  replSetName: "confs"

sharding:
  clusterRole: configsvr

## Enterprise-Only Options
#auditLog:
#snmp:
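The keyFile referenced in these configs must exist on every host, with restrictive permissions, before any mongod starts with auth enabled. A minimal sketch following the hint in the config comment (the scp destinations are this cluster's other two hosts; adjust ownership to the user that runs mongod):
# Generate one shared key and lock down its permissions
openssl rand -base64 756 > /opt/soft/mongodb_sharding_config/ssl_key
chmod 400 /opt/soft/mongodb_sharding_config/ssl_key
# Distribute the identical key to the other members
scp /opt/soft/mongodb_sharding_config/ssl_key 172.27.2.38:/opt/soft/mongodb_sharding_config/
scp /opt/soft/mongodb_sharding_config/ssl_key 172.27.2.48:/opt/soft/mongodb_sharding_config/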
Step 2: Create the shard replica set "rep1"
vi /opt/soft/mongodb_sharding_config/01/shared.conf
# mongod.conf
# for documentation of all options, see:
#   http://docs.mongodb.org/manual/reference/configuration-options/

# where to write logging data.
systemLog:
  destination: file
  logAppend: true
  path: /opt/supp_app/data/mongodb/logs/01/mongod.log

# Where and how to store data.
storage:
  dbPath: /opt/supp_app/data/mongodb/01
  journal:
    enabled: true
#  engine:
#  wiredTiger:

# how the process runs
processManagement:
  fork: true  # fork and run in background
  pidFilePath: /opt/soft/mongodb_sharding_config/01/mongod.pid  # location of pidfile
  timeZoneInfo: /usr/share/zoneinfo

# network interfaces; shard servers listen on port 27018 by default
net:
#  port: 27018
  bindIp: 172.27.2.63  # Enter 0.0.0.0,:: to bind to all IPv4 and IPv6 addresses or, alternatively, use the net.bindIpAll setting.

security:
  authorization: enabled
  keyFile: /opt/soft/mongodb_sharding_config/ssl_key

#operationProfiling:

replication:
  oplogSizeMB: 4096
  replSetName: "rep1"

sharding:
  clusterRole: shardsvr

## Enterprise-Only Options
#auditLog:
#snmp:
Step 3: Create the shard replica set "rep2"
vi /opt/soft/mongodb_sharding_config/02/shared.conf
# mongod.conf
# for documentation of all options, see:
#   http://docs.mongodb.org/manual/reference/configuration-options/

# where to write logging data.
systemLog:
  destination: file
  logAppend: true
  path: /opt/supp_app/data/mongodb/logs/02/mongod.log

# Where and how to store data.
storage:
  dbPath: /opt/supp_app/data/mongodb/02
  journal:
    enabled: true
#  engine:
#  wiredTiger:

# how the process runs
processManagement:
  fork: true  # fork and run in background
  pidFilePath: /opt/soft/mongodb_sharding_config/02/mongod.pid  # location of pidfile
  timeZoneInfo: /usr/share/zoneinfo

# network interfaces
net:
  port: 27017
  bindIp: 172.27.2.63  # Enter 0.0.0.0,:: to bind to all IPv4 and IPv6 addresses or, alternatively, use the net.bindIpAll setting.

security:
  authorization: enabled
  keyFile: /opt/soft/mongodb_sharding_config/ssl_key

#operationProfiling:

replication:
  oplogSizeMB: 4096
  replSetName: "rep2"

sharding:
  clusterRole: shardsvr

## Enterprise-Only Options
#auditLog:
#snmp:
Step 4: Start a mongos for the sharded cluster
vi /opt/soft/mongodb_sharding_config/mongos.conf
systemLog:
  destination: file
  logAppend: true
  path: /opt/supp_app/data/mongodb/logs/mongod.log
processManagement:
  fork: true  # fork and run in background
  pidFilePath: /opt/soft/mongodb_sharding_config/mongos.pid  # location of pidfile
  timeZoneInfo: /usr/share/zoneinfo
security:
  keyFile: /opt/soft/mongodb_sharding_config/ssl_key
sharding:
  configDB: confs/172.27.2.38:27019,172.27.2.48:27019,172.27.2.63:27019
net:
  port: 3000
  bindIp: 172.27.2.63
3. Start the mongod processes
Run the following on each machine.
# 1. Start shard replica set rep1
/usr/bin/mongod -f /opt/soft/mongodb_sharding_config/01/shared.conf
# 2. Start shard replica set rep2
/usr/bin/mongod -f /opt/soft/mongodb_sharding_config/02/shared.conf
# 3. Start the config server
/usr/bin/mongod -f /opt/soft/mongodb_sharding_config/03/shared.conf
# 4. Start the mongos router
/usr/bin/mongos -f /opt/soft/mongodb_sharding_config/mongos.conf
# ----------------------------------------------------------------------- to stop a node:
/usr/bin/mongod -f /opt/soft/mongodb_sharding_config/01/shared.conf --shutdown
4. Build the cluster
Run the following once, connecting to one member of each replica set.
# 1. Initiate the config server replica set confs
/usr/bin/mongo --host 172.27.2.48:27019
# Initial members
rs.initiate( {
_id : "confs",
configsvr: true,
members: [
{ _id: 0, host: "172.27.2.38:27019" },
{ _id: 1, host: "172.27.2.48:27019" },
{ _id: 2, host: "172.27.2.63:27019" }
]
})
# To identify the primary in the replica set.
rs.status()
# 2. Initiate shard replica set rep1
/usr/bin/mongo --host 172.27.2.48:27018
# Initial members
rs.initiate( {
_id : "rep1",
members: [
{ _id: 0, host: "172.27.2.38:27018" },
{ _id: 1, host: "172.27.2.48:27018" },
{ _id: 2, host: "172.27.2.63:27018" }
]
})
# To identify the primary in the replica set.
rs.status()
# 3. Initiate shard replica set rep2
/usr/bin/mongo --host 172.27.2.48:27017
# Initial members
rs.initiate( {
_id : "rep2",
members: [
{ _id: 0, host: "172.27.2.38:27017" },
{ _id: 1, host: "172.27.2.48:27017" },
{ _id: 2, host: "172.27.2.63:27017" }
]
})
# To identify the primary in the replica set.
rs.status()
# Connect to the cluster through mongos
/usr/bin/mongo --host 172.27.2.48:3000
# Add the shards; each call should report "shardAdded"
sh.addShard( "rep1/172.27.2.38:27018,172.27.2.48:27018,172.27.2.63:27018")
sh.addShard( "rep2/172.27.2.38:27017,172.27.2.48:27017,172.27.2.63:27017")
5. Back up and restore data
# Dump the source database, then restore it into the sharded cluster through the mongos router
/usr/bin/mongodump --host=192.168.100.118 --port=27017 --db=dxLogDB --out=/opt/data/
/usr/bin/mongorestore --host=172.27.2.63 --port=3000 /opt/data/
2021-06-17T10:51:18.452+0800  preparing collections to restore from
2021-06-17T10:51:18.456+0800  reading metadata for dxLogDB.dXLog from /opt/data/dxLogDB/dXLog.metadata.json
2021-06-17T10:51:18.456+0800  reading metadata for dxLogDB.callPlanDTO from /opt/data/dxLogDB/callPlanDTO.metadata.json
2021-06-17T10:51:18.456+0800  reading metadata for dxLogDB.taskCreateBackDTO from /opt/data/dxLogDB/taskCreateBackDTO.metadata.json
2021-06-17T10:51:18.456+0800  reading metadata for dxLogDB.dxClientLogVo from /opt/data/dxLogDB/dxClientLogVo.metadata.json
2021-06-17T10:51:18.814+0800  restoring dxLogDB.dXLog from /opt/data/dxLogDB/dXLog.bson
2021-06-17T10:51:18.870+0800  restoring dxLogDB.callPlanDTO from /opt/data/dxLogDB/callPlanDTO.bson
2021-06-17T10:51:18.982+0800  restoring dxLogDB.taskCreateBackDTO from /opt/data/dxLogDB/taskCreateBackDTO.bson
2021-06-17T10:51:19.061+0800  restoring dxLogDB.dxClientLogVo from /opt/data/dxLogDB/dxClientLogVo.bson
2021-06-17T10:51:19.437+0800  no indexes to restore
2021-06-17T10:51:19.437+0800  finished restoring dxLogDB.taskCreateBackDTO (98 documents, 0 failures)
2021-06-17T10:51:19.437+0800  reading metadata for dxLogDB.taskCallBackDTO from /opt/data/dxLogDB/taskCallBackDTO.metadata.json
2021-06-17T10:51:19.437+0800  no indexes to restore
2021-06-17T10:51:19.437+0800  finished restoring dxLogDB.callPlanDTO (319 documents, 0 failures)
2021-06-17T10:51:19.437+0800  no indexes to restore
2021-06-17T10:51:19.438+0800  finished restoring dxLogDB.dxClientLogVo (30 documents, 0 failures)
2021-06-17T10:51:19.593+0800  restoring dxLogDB.taskCallBackDTO from /opt/data/dxLogDB/taskCallBackDTO.bson
2021-06-17T10:51:19.790+0800  no indexes to restore
2021-06-17T10:51:19.790+0800  finished restoring dxLogDB.taskCallBackDTO (99 documents, 0 failures)
2021-06-17T10:51:21.451+0800  [######..................]  dxLogDB.dXLog  36.4MB/135MB  (27.0%)
2021-06-17T10:51:24.451+0800  [################........]  dxLogDB.dXLog  90.6MB/135MB  (67.2%)
2021-06-17T10:51:26.821+0800  [########################]  dxLogDB.dXLog  135MB/135MB  (100.0%)
2021-06-17T10:51:26.821+0800  no indexes to restore
2021-06-17T10:51:26.821+0800  finished restoring dxLogDB.dXLog (63481 documents, 0 failures)
2021-06-17T10:51:26.821+0800  64027 document(s) restored successfully. 0 document(s) failed to restore.