Geek Time Ops Advanced Training Camp, Week 5 Assignment
- 2022-11-28, Beijing
1. A complete code-deployment pipeline based entirely on Pipeline
Environment:
gitlab 10.0.0.132
jenkins master 10.0.0.135
harbor 10.0.0.133
SonarQube 10.0.0.140
PostgreSQL 10.0.0.141
docker-node1 10.0.0.134
Upload the required files to the GitLab repository: the front-end files and Dockerfile needed to build the nginx image, plus a Python file for code scanning. The project structure and file contents are as follows:
root@jenkins-master:/tmp/app2# tree
.
├── build-command.sh
├── Dockerfile
├── images
│   └── 1.jpg
├── index.html
├── nginx.conf
├── README.md
├── sonar-project.properties
└── src
    └── test.py

2 directories, 8 files
root@jenkins-master:/tmp/app2# cat build-command.sh
#!/bin/bash
TAG=$1
docker build -t harbor.magedu.net/magedu/nginx:${TAG} .
docker push harbor.magedu.net/magedu/nginx:${TAG}
root@jenkins-master:/tmp/app2# cat Dockerfile
FROM harbor.magedu.net/magedu/nginx:v1
ADD nginx.conf /apps/nginx/conf/
ADD frontend.tar.gz /apps/nginx/html/
ENTRYPOINT ["/apps/nginx/sbin/nginx","-g","daemon off;"]
root@jenkins-master:/tmp/app2# cat index.html
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>Nginx test page</title>
</head>
<body>
<h2>Nginx test web page</h2>
<img src="./images/1.jpg">
<p>
<a href="http://www.jd.com" target="_blank">app link</a>
</p>
</body>
</html>
Create a develop branch from main and push a new commit.
Install the SSH Pipeline Steps plugin to support running scripts on remote machines.
Create a Jenkins job of type pipeline. The pipeline script is as follows:
#!groovy
pipeline {
    agent any // a global agent is required; it declares where this pipeline runs
    options {
        buildDiscarder(logRotator(numToKeepStr: '5')) // keep only the last 5 builds
        disableConcurrentBuilds() // disable concurrent builds
    }
    // environment variables
    environment {
        // git repository address
        def GIT_URL = 'git@10.0.0.132:magedu/app2.git'
        // image registry
        def HARBOR_URL = 'harbor.magedu.net'
        // registry project
        def IMAGE_PROJECT = 'magedu'
        // image name
        IMAGE_NAME = 'nginx'
        def DATE = sh(script:"date +%F_%H-%M-%S", returnStdout: true).trim() // current time, obtained from a shell command
    }
    // parameter definitions
    parameters {
        string(name: 'BRANCH', defaultValue: 'develop', description: 'branch select') // string parameter, shown on the job's "Build with Parameters" page
        choice(name: 'DEPLOY_ENV', choices: ['develop', 'production'], description: 'deploy env') // choice parameter, shown on the job's "Build with Parameters" page
    }
    stages {
        stage("code clone") {
            steps {
                deleteDir() // clean the current workspace
                script {
                    if ( env.BRANCH == 'main' ) {
                        git branch: 'main', credentialsId: 'bef9de81-4a6c-4125-9a21-8b012d51532c', url: 'git@10.0.0.132:magedu/app2.git'
                    } else if ( env.BRANCH == 'develop' ) {
                        git branch: 'develop', credentialsId: 'bef9de81-4a6c-4125-9a21-8b012d51532c', url: 'git@10.0.0.132:magedu/app2.git'
                    } else {
                        echo 'BRANCH parameter error, please check that the branch parameter is correct'
                    }
                    GIT_COMMIT_TAG = sh(returnStdout: true, script: 'git rev-parse --short HEAD').trim() // short commit id of the cloned branch, used to tag the image
                }
            }
        }
        stage("sonarqube-scanner") {
            steps {
                dir(env.WORKSPACE) {
                    sh '/apps/sonar-scanner/bin/sonar-scanner -Dsonar.projectKey=magedu -Dsonar.projectName=magedu-app1 -Dsonar.projectVersion=1.2 -Dsonar.sources=./ -Dsonar.language=py -Dsonar.sourceEncoding=UTF-8'
                }
            }
        }
        stage("code build") {
            steps {
                dir(env.WORKSPACE) {
                    sh 'tar czvf frontend.tar.gz ./index.html ./images'
                }
            }
        }
        stage("file sync") { // SSH Pipeline Steps
            steps {
                dir(env.WORKSPACE) {
                    script {
                        stage('file copy') {
                            def remote = [:]
                            remote.name = 'ubuntu200401'
                            remote.host = '10.0.0.134'
                            remote.user = 'root'
                            remote.password = 'zx123456'
                            remote.allowAnyHosts = true
                            //sshCommand remote: remote, command: "docker images" // run a remote command
                            sshCommand remote: remote, command: "mkdir -p /opt/ubuntu-dockerfile"
                            sshPut remote: remote, from: 'frontend.tar.gz', into: '/opt/ubuntu-dockerfile' // put a local file onto the remote host
                            sshPut remote: remote, from: 'build-command.sh', into: '/opt/ubuntu-dockerfile'
                            sshPut remote: remote, from: 'Dockerfile', into: '/opt/ubuntu-dockerfile'
                            sshPut remote: remote, from: 'nginx.conf', into: '/opt/ubuntu-dockerfile'
                        }
                    }
                }
            }
        }
        stage("image build") { // SSH Pipeline Steps
            steps {
                dir(env.WORKSPACE) {
                    script {
                        stage('image put') {
                            def remote = [:]
                            remote.name = 'ubuntu200401'
                            remote.host = '10.0.0.134'
                            remote.user = 'root'
                            remote.password = 'zx123456'
                            remote.allowAnyHosts = true
                            sshCommand remote: remote, command: "cd /opt/ubuntu-dockerfile/ && bash build-command.sh ${GIT_COMMIT_TAG}-${DATE}"
                        }
                    }
                }
            }
        }
        stage('docker-compose image update') {
            steps {
                sh """
                    ssh root@10.0.0.134 "echo ${DATE} && cd /data/magedu-app1 && sed -i 's#image: harbor.magedu.net/magedu/nginx:.*#image: harbor.magedu.net/magedu/nginx:${GIT_COMMIT_TAG}-${DATE}#' docker-compose.yml"
                """
            }
        }
        stage('docker-compose app update') {
            steps {
                //sh """
                //    ssh root@172.31.6.202 "echo ${DATE} && cd /data/magedu-app1 && docker-compose pull && docker-compose up -d"
                //"""
                script {
                    stage('image update') {
                        def remote = [:]
                        remote.name = 'docker-server'
                        remote.host = '10.0.0.134'
                        remote.user = 'root'
                        remote.password = 'zx123456'
                        remote.allowAnyHosts = true
                        sshCommand remote: remote, command: "cd /data/magedu-app1 && docker-compose pull && docker-compose up -d"
                    }
                }
            }
        }
        stage('send email') {
            steps {
                sh 'echo send email'
            }
            post {
                always {
                    script {
                        mail to: '360159416@qq.com',
                            subject: "Pipeline Name: ${currentBuild.fullDisplayName}",
                            body: " ${env.JOB_NAME} -Build Number-${env.BUILD_NUMBER} \n Build URL-'${env.BUILD_URL}' "
                    }
                }
            }
        }
    }
}
The first run fails, but it generates the Build with Parameters button. Click the button, select the develop branch, and the build succeeds.
The nginx page renders correctly.
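Besides the web UI button, a parameterized job like this can also be triggered through Jenkins' `buildWithParameters` REST endpoint. A minimal sketch of building that request URL; the job name `app2-pipeline` is a hypothetical placeholder, and authentication (user/API token) is omitted:

```python
from urllib.parse import urlencode

JENKINS = "http://10.0.0.135:8080"
JOB = "app2-pipeline"  # hypothetical job name, not from the assignment

def build_trigger_url(branch: str, deploy_env: str) -> str:
    # /buildWithParameters maps each query argument onto the job's
    # declared parameters (BRANCH and DEPLOY_ENV in the script above)
    query = urlencode({"BRANCH": branch, "DEPLOY_ENV": deploy_env})
    return f"{JENKINS}/job/{JOB}/buildWithParameters?{query}"

print(build_trigger_url("develop", "develop"))
```

A POST to this URL (with valid credentials) queues a build with the chosen parameters, which is handy for wiring the job into a GitLab webhook later.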
2. ELK components and ES node role types
2.1 ELK components
elasticsearch stores and retrieves the data. It is written in Java and built on top of the full-text search library Apache Lucene, and it supports distribution for cluster high availability.
logstash collects and processes logs and ships them to elasticsearch. Logstash is a data collection and processing component with near-real-time transport; through plugins it can collect, filter, process, and output logs for many scenarios, parsing plain log, JSON, and other formats before sending them to the elasticsearch cluster for storage.
Kibana reads data from ES for visualization and data handling. Kibana gives elasticsearch a web UI for viewing data: it queries mainly through the elasticsearch API and renders the results in the front end, and it can also turn suitably formatted data into tables, bar charts, pie charts, and so on.
2.2 elasticsearch node role types
The main elasticsearch node role types are:
data node: stores the data and handles operations such as shard creation and deletion, data reads and writes, updates, and deletes.
master node: creates and deletes indices, allocates shards, adds and removes nodes, and notifies the remaining available nodes of the new cluster state when a node goes down. An ES cluster has exactly one active master node; the other master-eligible standby nodes wait and elect a new master if the active one fails.
coordinating node: forwards read and write requests to the data nodes and cluster-management operations to the master node. A coordinating-only node serves purely as an access point into the cluster: it stores no data and does not take part in master election.
ingest node: pre-processes documents before they are indexed (ingest pipelines); a pipeline can, for example, drop fields or extract text from the incoming data.
If no node types are specified explicitly, a node has all role types by default.
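The roles a node actually holds show up as one-letter codes in the `node.role` column of `GET /_cat/nodes?v`. An illustrative decoder; the sample letters and the (deliberately partial) letter map cover only the roles discussed above:

```python
# Partial map of the one-letter codes in the node.role column of
# GET /_cat/nodes?v; a bare "-" marks a coordinating-only node.
ROLE_LETTERS = {"m": "master", "d": "data", "i": "ingest"}

def decode_roles(letters: str) -> list[str]:
    """Expand a node.role string such as 'dim' into role names."""
    if letters == "-":
        return ["coordinating only"]
    return [ROLE_LETTERS.get(ch, ch) for ch in sorted(letters)]

print(decode_roles("dim"))  # a default node: data, ingest, master
print(decode_roles("-"))    # a coordinating-only node
```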
3. Indices, docs, shards, and replicas
Document
A document is one piece of data a user submits to ES. Note that a document is not a plain text string: in ES, a document is a piece of JSON data.
Index
An index can be understood as a collection of documents; the documents in one index jointly build an inverted index.
Documents submitted to the same index should preferably share the same structure. That makes both storage and queries easier for ES to optimize.
Shards
ES spreads the data across multiple physical Lucene indices, and these Lucene indices are called shards. The process of distributing data across shards is called sharding.
Replica
A replica is a complete cross-host copy of a shard. Shards are divided into primary and replica shards: data written to a primary shard is synchronized to its replicas immediately, providing high availability and failover when the primary's host goes down. Replica shards can serve reads, so multiple replicas raise the ES cluster's read throughput. Only after a primary shard is lost is one of its replicas promoted to primary to continue accepting writes, and a new replica shard is then allocated for it.
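The reason the primary shard count is fixed at index creation (while replicas stay adjustable) follows from how ES routes a document to a shard: `shard = hash(_routing) % number_of_primary_shards`, where `_routing` defaults to the document `_id`. A sketch of that idea; real ES uses a murmur3 hash, for which a simple byte sum stands in here so the example is deterministic:

```python
# Sketch of ES document routing: shard = hash(_routing) % num_primary_shards.
# A byte-sum stands in for ES's real murmur3 hash of the routing value.
def route(routing: str, num_primary_shards: int) -> int:
    return sum(routing.encode()) % num_primary_shards

# The same id lands on a different shard if the primary count changes,
# which is why changing it would require reindexing all existing data.
print(route("doc-1", 3), route("doc-1", 5))
```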
4. ELK deployment planning for different environments; deploying an Elasticsearch cluster from deb or binary packages
4.1 ELK deployment planning by environment
Small environment
ES must be highly available, so three ES nodes are usually deployed.
logstash is installed on the business hosts, collecting logs and writing them directly into the ES cluster.
A single kibana instance is enough for presenting the ES data.
Medium environment (tens to hundreds of servers)
Collected logs are first written into a queue (such as kafka or redis); logstash then consumes them from the queue and writes them into the ES cluster. This relieves the heavy load of many collection clients writing to ES in parallel during peak hours, which could otherwise cause data loss or blocking.
Large environment
If a single logstash instance as in the medium setup has hit its performance ceiling and cannot keep up at peak time (messages pile up in the queue), start multiple logstash instances to consume the queue and write into ES.
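The buffering idea behind the medium and large setups can be shown with a toy in-process queue; in production, kafka or redis plays the queue's role and the producers and consumer run on different hosts:

```python
from queue import Queue

# Toy illustration of queue buffering: shippers enqueue at burst speed,
# the consumer (Logstash's role) drains at its own pace afterwards.
q = Queue()
for i in range(5):
    q.put(f"log line {i}")      # burst of log events during peak traffic

consumed = []
while not q.empty():
    consumed.append(q.get())    # consumed later, decoupled from the burst
```

Scaling the large environment then just means adding more consumers on the same queue (for kafka, more consumers in one consumer group).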
4.2 Deploying the ES cluster
Here we deploy a small es cluster.
Prepare three virtual machines:
es1 2C4G 10.0.0.150
es2 2C2G 10.0.0.151
es3 2C2G 10.0.0.152
Upload the binary package elasticsearch-8.5.1-linux-x86_64.tar.gz to the /apps directory on all three machines, then install.
Steps 1-6 run on all nodes; step 7 runs on es1 only.
# 1. Kernel parameter tuning
vi /etc/sysctl.conf
Add the parameter:
vm.max_map_count=262144
Apply it:
sysctl -p
# 2. Configure hostname resolution on each node
vi /etc/hosts
Add the IP-to-hostname mappings for the three nodes:
10.0.0.150 es1 es1.example.com
10.0.0.151 es2 es2.example.com
10.0.0.152 es3 es3.example.com
# 3. Resource limit tuning
vi /etc/security/limits.conf
Add the following:
root soft core unlimited
root hard core unlimited
root soft nproc 1000000
root hard nproc 1000000
root soft nofile 1000000
root hard nofile 1000000
root soft memlock 32000
root hard memlock 32000
root soft msgqueue 8192000
root hard msgqueue 8192000
* soft core unlimited
* hard core unlimited
* soft nproc 1000000
* hard nproc 1000000
* soft nofile 1000000
* hard nofile 1000000
* soft memlock 32000
* hard memlock 32000
* soft msgqueue 8192000
* hard msgqueue 8192000
# 4. Create an unprivileged user to run the service
# groupadd -g 2888 elasticsearch && useradd -u 2888 -g 2888 -r -m -s /bin/bash elasticsearch
# passwd elasticsearch
Enter the password twice when prompted; here it is set to zx123456.
# mkdir /data/esdata /data/eslogs /apps -pv
# chown elasticsearch.elasticsearch /data /apps/ -R
# 5. Reboot all nodes so the settings take effect.
# reboot
# 6. Deploy the es cluster
# cd /apps
# tar xf elasticsearch-8.5.1-linux-x86_64.tar.gz
# ln -sv /apps/elasticsearch-8.5.1 /apps/elasticsearch
# chown -R elasticsearch.elasticsearch *
# 7. Configure certificates
# su - elasticsearch
# Create instances.yml
elasticsearch@es1:/apps/elasticsearch$ cat instances.yml
instances:
  - name: "es1.example.com"
    ip:
      - "10.0.0.150"
  - name: "es2.example.com"
    ip:
      - "10.0.0.151"
  - name: "es3.example.com"
    ip:
      - "10.0.0.152"
# Generate the CA private key; the default file name is elastic-stack-ca.p12, no password set.
elasticsearch@es1:/apps/elasticsearch$ ./bin/elasticsearch-certutil ca
# Generate a certificate signed by the CA; the default name is elastic-certificates.p12, no password set.
elasticsearch@es1:/apps/elasticsearch$ ./bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12
# Issue the certificates for the elasticsearch cluster hosts:
elasticsearch@es1:/apps/elasticsearch$ ./bin/elasticsearch-certutil cert --silent --in instances.yml --out certs.zip --pass magedu123 --ca elastic-stack-ca.p12 # certificate password set to magedu123
Enter password for CA (elastic-stack-ca.p12) : # just press Enter if the CA private key has no password
# Create the /apps/elasticsearch/config/certs directory on all 3 nodes, then distribute the certificates.
# local (es1) certificate
elasticsearch@es1:/apps/elasticsearch$ unzip certs.zip
elasticsearch@es1:/apps/elasticsearch$ cp -rp es1.example.com/es1.example.com.p12 config/certs/
# es2 certificate
elasticsearch@es1:/apps/elasticsearch$ scp -rp es2.example.com/es2.example.com.p12 es2:/apps/elasticsearch/config/certs/
# es3 certificate
elasticsearch@es1:/apps/elasticsearch$ scp -rp es3.example.com/es3.example.com.p12 es3:/apps/elasticsearch/config/certs/
# Generate the keystore file (the keystore holds the certificate password, magedu123)
elasticsearch@es1:/apps/elasticsearch$ ./bin/elasticsearch-keystore create # create the keystore file
elasticsearch@es1:/apps/elasticsearch$ ./bin/elasticsearch-keystore add xpack.security.transport.ssl.keystore.secure_password
Enter value for xpack.security.transport.ssl.keystore.secure_password: # magedu123
elasticsearch@es1:/apps/elasticsearch$ ./bin/elasticsearch-keystore add xpack.security.transport.ssl.truststore.secure_password
Enter value for xpack.security.transport.ssl.truststore.secure_password: # magedu123
# Distribute the keystore file
elasticsearch@es1:/apps/elasticsearch$ scp config/elasticsearch.keystore es2:/apps/elasticsearch/config/
elasticsearch@es1:/apps/elasticsearch$ scp config/elasticsearch.keystore es3:/apps/elasticsearch/config/
Modify the configuration file elasticsearch.yml (on all nodes).
# es1 node: /apps/elasticsearch/config/elasticsearch.yml
elasticsearch@es1:/apps/elasticsearch/config$ grep -Ev '^$|^#' elasticsearch.yml
cluster.name: magedu-es-cluster1
node.name: es1
path.data: /data/esdata
path.logs: /data/eslogs
bootstrap.memory_lock: true
network.host: 0.0.0.0
http.port: 9200
discovery.seed_hosts: ["10.0.0.150", "10.0.0.151","10.0.0.152"]
cluster.initial_master_nodes: ["10.0.0.150", "10.0.0.151","10.0.0.152"]
action.destructive_requires_name: true
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.keystore.path: /apps/elasticsearch/config/certs/es1.example.com.p12
xpack.security.transport.ssl.truststore.path: /apps/elasticsearch/config/certs/es1.example.com.p12
# es2 node
elasticsearch@es2:/apps/elasticsearch/config$ grep -Ev '^$|^#' elasticsearch.yml
cluster.name: magedu-es-cluster1
node.name: es2
path.data: /data/esdata
path.logs: /data/eslogs
bootstrap.memory_lock: true
network.host: 0.0.0.0
http.port: 9200
discovery.seed_hosts: ["10.0.0.150", "10.0.0.151","10.0.0.152"]
cluster.initial_master_nodes: ["10.0.0.150", "10.0.0.151","10.0.0.152"]
action.destructive_requires_name: true
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.keystore.path: /apps/elasticsearch/config/certs/es2.example.com.p12
xpack.security.transport.ssl.truststore.path: /apps/elasticsearch/config/certs/es2.example.com.p12
# es3 node
elasticsearch@es3:/apps/elasticsearch/config$ grep -Ev '^$|^#' elasticsearch.yml
cluster.name: magedu-es-cluster1
node.name: es3
path.data: /data/esdata
path.logs: /data/eslogs
bootstrap.memory_lock: true
network.host: 0.0.0.0
http.port: 9200
discovery.seed_hosts: ["10.0.0.150", "10.0.0.151","10.0.0.152"]
cluster.initial_master_nodes: ["10.0.0.150", "10.0.0.151","10.0.0.152"]
action.destructive_requires_name: true
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.keystore.path: /apps/elasticsearch/config/certs/es3.example.com.p12
xpack.security.transport.ssl.truststore.path: /apps/elasticsearch/config/certs/es3.example.com.p12
Configure the service file on each node.
# Create the service file on es1, then copy it to the same path on es2 and es3. Contents follow.
# vi /lib/systemd/system/elasticsearch.service
[Unit]
Description=Elasticsearch
Documentation=http://www.elastic.co
Wants=network-online.target
After=network-online.target
[Service]
RuntimeDirectory=elasticsearch
Environment=ES_HOME=/apps/elasticsearch
Environment=ES_PATH_CONF=/apps/elasticsearch/config
Environment=PID_DIR=/apps/elasticsearch
WorkingDirectory=/apps/elasticsearch
User=elasticsearch
Group=elasticsearch
ExecStart=/apps/elasticsearch/bin/elasticsearch --quiet
# StandardOutput is configured to redirect to journalctl since
# some error messages may be logged in standard output before
# elasticsearch logging system is initialized. Elasticsearch
# stores its logs in /var/log/elasticsearch and does not use
# journalctl by default. If you also want to enable journalctl
# logging, you can simply remove the "quiet" option from ExecStart.
StandardOutput=journal
StandardError=inherit
# Specifies the maximum file descriptor number that can be opened by this process
LimitNOFILE=65536
# Specifies the maximum number of processes
LimitNPROC=4096
# Specifies the maximum size of virtual memory
LimitAS=infinity
# Specifies the maximum file size
LimitFSIZE=infinity
# Disable timeout logic and wait until process is stopped
TimeoutStopSec=0
# SIGTERM signal is used to stop the Java process
KillSignal=SIGTERM
# Send the signal only to the JVM rather than its control group
KillMode=process
# Java process is never killed
SendSIGKILL=no
# When a JVM receives a SIGTERM signal it exits with code 143
SuccessExitStatus=143
[Install]
WantedBy=multi-user.target
Start es
# systemctl daemon-reload && systemctl start elasticsearch.service && systemctl enable elasticsearch.service
Startup fails with the following error:
[2022-11-26T18:21:31,542][INFO ][o.e.n.Node ] [es1] closing ...
[2022-11-26T18:21:31,552][INFO ][o.e.n.Node ] [es1] closed
[2022-11-26T18:21:31,553][INFO ][o.e.x.m.p.NativeController] [es1] Native controller process has stopped - no new native processes can be started
[2022-11-26T18:21:31,555][WARN ][stderr ] [es1] The system environment variables are not available to Log4j due to security restrictions: java.security.AccessControlException: access denied ("java.lang.RuntimePermission" "getenv.*")The system environment variables are not available to Log4j due to security restrictions: java.security.AccessControlException: access denied ("java.lang.RuntimePermission" "getenv.*")The system environment variables are not available to Log4j due to security restrictions: java.security.AccessControlException: access denied ("java.lang.RuntimePermission" "getenv.*")The system environment variables are not available to Log4j due to security restrictions: java.security.AccessControlException: access denied ("java.lang.RuntimePermission" "getenv.*")The system environment variables are not available to Log4j due to security restrictions: java.security.AccessControlException: access denied ("java.lang.RuntimePermission" "getenv.*")
[2022-11-26T18:21:31,556][WARN ][stderr ] [es1] java.security.AccessControlException: access denied ("java.lang.RuntimePermission" "getenv.*")
Inspecting the configuration items turned up nothing abnormal. Searching the error message online, one user's solution was as follows:
After adding that option to the service file, the service started normally.
User management
Change the passwords of the default accounts in one batch
elasticsearch@es1:~$ cd /apps/elasticsearch
# Set the passwords; here they are all set to 123456
elasticsearch@es1:/apps/elasticsearch$ ./bin/elasticsearch-setup-passwords interactive
******************************************************************************
Note: The 'elasticsearch-setup-passwords' tool has been deprecated. This command will be removed in a future release.
******************************************************************************
Initiating the setup of passwords for reserved users elastic,apm_system,kibana,kibana_system,logstash_system,beats_system,remote_monitoring_user.
You will be prompted to enter passwords as the process progresses.
Please confirm that you would like to continue [y/N]y
Enter password for [elastic]:
Reenter password for [elastic]:
Enter password for [apm_system]:
Reenter password for [apm_system]:
Enter password for [kibana_system]:
Reenter password for [kibana_system]:
Enter password for [logstash_system]:
Reenter password for [logstash_system]:
Enter password for [beats_system]:
Reenter password for [beats_system]:
Enter password for [remote_monitoring_user]:
Reenter password for [remote_monitoring_user]:
Changed password for user [apm_system]
Changed password for user [kibana_system]
Changed password for user [kibana]
Changed password for user [logstash_system]
Changed password for user [beats_system]
Changed password for user [remote_monitoring_user]
Changed password for user [elastic]
To create a custom user, do it as follows; for example, user magedu with password 123456.
elasticsearch@es1:/apps/elasticsearch$ ./bin/elasticsearch-users useradd magedu -p 123456 -r superuser
The custom-user creation command must be run on every node.
# es1 has the magedu account, but the command was not run on es3 to create magedu there, so the request fails.
elasticsearch@es1:/apps/elasticsearch$ curl -u magedu:123456 http://10.0.0.152:9200/_cat/health
{"error":{"root_cause":[{"type":"security_exception","reason":"unable to authenticate user [magedu] for REST request [/_cat/health]","header":{"WWW-Authenticate":["Basic realm=\"security\" charset=\"UTF-8\"","ApiKey"]}}],"type":"security_exception","reason":"unable to authenticate user [magedu] for REST request [/_cat/health]","header":{"WWW-Authenticate":["Basic realm=\"security\" charset=\"UTF-8\"","ApiKey"]}},"status":401}elasticsearch@es1:/apps/elasticsearch$
# After creating magedu on es3, accessing es3's address from es1 with that account succeeds.
elasticsearch@es1:/apps/elasticsearch$ curl -u magedu:123456 http://10.0.0.152:9200/_cat/health
1669560678 14:51:18 magedu-es-cluster1 green 3 3 4 2 0 0 0 0 - 100.0%
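The `_cat/health` response above is a whitespace-separated one-liner; its first columns are epoch, timestamp, cluster name, status, and node total. A small sketch of picking those fields out of the exact line returned above:

```python
# Parse the one-line _cat/health response shown above.
line = "1669560678 14:51:18 magedu-es-cluster1 green 3 3 4 2 0 0 0 0 - 100.0%"
fields = line.split()
cluster_name, status, node_total = fields[2], fields[3], int(fields[4])
print(cluster_name, status, node_total)
```

Passing `?v` to the endpoint adds a header row with these column names, which is the easiest way to confirm the positions.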
5. Basic use of the Elasticsearch API; installing the head plugin to manage ES data
5.1 Basic Elasticsearch API usage
# curl -u magedu:123456 -X GET http://10.0.0.150:9200 # cluster status
# curl -u magedu:123456 -X GET http://10.0.0.150:9200/_cat # operations the cluster supports
# curl -u magedu:123456 -X GET http://10.0.0.150:9200/_cat/master?v # master info
# curl -u magedu:123456 -X GET http://10.0.0.150:9200/_cat/nodes?v # node info
# curl -u magedu:123456 -X GET http://10.0.0.150:9200/_cat/health?v # cluster health info
# curl -u magedu:123456 -X PUT http://10.0.0.150:9200/test_index?pretty # create index test_index; pretty formats the response
# curl -u magedu:123456 -X GET http://10.0.0.150:9200/test_index?pretty # view the index
# curl -u magedu:123456 -X POST "http://10.0.0.150:9200/test_index/_doc/1?pretty" -H 'Content-Type: application/json' -d' {"name": "Jack","age": 19}' # upload data
# curl -u magedu:123456 -X GET "http://10.0.0.150:9200/test_index/_doc/1?pretty" # view the document
# curl -u magedu:123456 -X PUT http://10.0.0.150:9200/test_index/_settings -H 'Content-Type: application/json' -d '{"number_of_replicas": 2}' # change the replica count; replicas can be adjusted dynamically
# curl -u magedu:123456 -X GET http://10.0.0.150:9200/test_index/_settings?pretty # view index settings
# curl -u magedu:123456 -X DELETE "http://10.0.0.150:9200/test_index?pretty" # delete the index
# curl -u magedu:123456 -X POST "http://10.0.0.150:9200/test_index/_close" # close the index
# curl -u magedu:123456 -X POST "http://10.0.0.150:9200/test_index/_open?pretty" # open the index
# curl -u magedu:123456 -X PUT http://10.0.0.150:9200/_cluster/settings -H 'Content-Type: application/json' -d'
{
"persistent" : {
"cluster.max_shards_per_node" : "1000000"
}
}' # raise the maximum number of shards allocatable per node; the ES 7 default is 1000, and once exhausted, creating a new shard fails with status code 400
# curl -u magedu:123456 -X PUT http://10.0.0.150:9200/_cluster/settings -H 'Content-Type: application/json' -d'
{
"persistent": {
"cluster.routing.allocation.disk.watermark.low": "95%",
"cluster.routing.allocation.disk.watermark.high": "95%"
}
}' # set the low and high disk watermarks to 95%; by default, at 85% usage no new shard replicas are allocated on the node, at 90% replicas start being moved to other nodes, and at 95% all indices become read-only
5.2 Installing the head plugin
Open Chrome, go to Extensions, enable developer mode, choose "Load unpacked", and select the plugin directory.
Once installed, the plugin appears on the page.
Click Extensions and select the head plugin to start using it.
6. Installing Logstash to collect different system logs and write them to different ES indices
Environment: web1 10.0.0.153
Upload the logstash deb package and install it.
root@web1:~# dpkg -i logstash-8.5.1-amd64.deb
logstash runs as an unprivileged user by default and may therefore lack the permissions to read system logs, so the service user is changed to root here.
root@web1:~# vi /usr/lib/systemd/system/logstash.service
root@web1:~# cat /usr/lib/systemd/system/logstash.service
[Unit]
Description=logstash
[Service]
Type=simple
User=root
Group=root
# Load env vars from /etc/default/ and /etc/sysconfig/ if they exist.
# Prefixing the path with '-' makes it try to load, but if the file doesn't
# exist, it continues onward.
EnvironmentFile=-/etc/default/logstash
EnvironmentFile=-/etc/sysconfig/logstash
ExecStart=/usr/share/logstash/bin/logstash "--path.settings" "/etc/logstash"
Restart=always
WorkingDirectory=/
Nice=19
LimitNOFILE=16384
# When stopping, how long to wait before giving up and sending SIGKILL?
# Keep in mind that SIGKILL on a process can cause data loss.
TimeoutStopSec=infinity
[Install]
WantedBy=multi-user.target
root@web1:~# systemctl daemon-reload
Configuration files go into the /etc/logstash/conf.d directory.
Create syslog-to-es.conf to collect syslog and auth.log and write them to different indices. The configuration is as follows.
input {
file {
path => "/var/log/syslog"
stat_interval => "1"
start_position => "beginning"
type => "syslog"
}
file {
path => "/var/log/auth.log"
stat_interval => "1"
start_position => "beginning"
type => "authlog"
}
}
output {
if [type] == "syslog" {
elasticsearch {
hosts => ["10.0.0.150:9200"]
index => "magedu-app1-syslog-%{+yyyy.MM.dd}"
user => "magedu"
password => "123456"
}}
if [type] == "authlog" {
elasticsearch {
hosts => ["10.0.0.150:9200"]
index => "magedu-app1-authlog-%{+yyyy.MM.dd}"
user => "magedu"
password => "123456"
}}
}
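The `%{+yyyy.MM.dd}` in the `index` option is a Logstash sprintf date reference: it is expanded from each event's `@timestamp` using Joda-style tokens, yielding one index per day. A sketch of the resulting name, assuming `yyyy.MM.dd` maps to strftime's `%Y.%m.%d`:

```python
from datetime import date

def daily_index(prefix: str, d: date) -> str:
    """Expand a logstash-style daily index name for a given date."""
    # Joda yyyy.MM.dd corresponds to strftime %Y.%m.%d
    return f"{prefix}-{d.strftime('%Y.%m.%d')}"

print(daily_index("magedu-app1-syslog", date(2022, 11, 27)))
```

Note that Logstash evaluates the pattern against the event timestamp (in UTC), not the wall clock, so late-arriving events land in the index for the day they were logged.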
Start the logstash service.
root@web1:/etc/logstash/conf.d# systemctl start logstash
root@web1:/etc/logstash/conf.d# systemctl status logstash
● logstash.service - logstash
Loaded: loaded (/lib/systemd/system/logstash.service; disabled; vendor preset: enabled)
Active: active (running) since Sun 2022-11-27 15:57:04 UTC; 5s ago
Main PID: 2549 (java)
Tasks: 21 (limit: 2236)
Memory: 314.5M
CGroup: /system.slice/logstash.service
└─2549 /usr/share/logstash/jdk/bin/java -Xms1g -Xmx1g -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djruby.compile.invokedynamic=true -Djruby.jit.threshold=0 -XX:+HeapDumpOnOutOfM>
Nov 27 15:57:04 web1 systemd[1]: Started logstash.
Nov 27 15:57:04 web1 logstash[2549]: Using bundled JDK: /usr/share/logstash/jdk
The indices are written successfully.
7. Installing Kibana and viewing the ES cluster's data
kibana is deployed on the same machine as es1.
Upload the kibana deb package and install it.
root@es1:~# dpkg -i kibana-8.5.1-amd64.deb
Modify the configuration to connect to the es cluster.
root@es1:~# vi /etc/kibana/kibana.yml
root@es1:~# grep -Ev '^$|^#' /etc/kibana/kibana.yml
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.hosts: ["http://10.0.0.150:9200"]
elasticsearch.username: "kibana_system"
elasticsearch.password: "123456"
logging:
appenders:
file:
type: file
fileName: /var/log/kibana/kibana.log
layout:
type: json
root:
appenders:
- default
- file
pid.file: /run/kibana/kibana.pid
i18n.locale: "zh-CN"
Start the kibana service.
root@es1:~# systemctl start kibana
root@es1:~# systemctl status kibana
● kibana.service - Kibana
Loaded: loaded (/lib/systemd/system/kibana.service; disabled; vendor preset: enabled)
Active: active (running) since Sun 2022-11-27 16:07:52 UTC; 8s ago
Docs: https://www.elastic.co
Main PID: 6010 (node)
Tasks: 11 (limit: 4575)
Memory: 242.8M
CGroup: /system.slice/kibana.service
└─6010 /usr/share/kibana/bin/../node/bin/node /usr/share/kibana/bin/../src/cli/dist
Nov 27 16:07:52 es1 systemd[1]: Started Kibana.
Nov 27 16:07:54 es1 kibana[6010]: [2022-11-27T16:07:54.152+00:00][INFO ][node] Kibana process configured with roles: [background_tasks, ui]
# Enable start on boot
root@es1:~# systemctl enable kibana
Created symlink /etc/systemd/system/multi-user.target.wants/kibana.service → /lib/systemd/system/kibana.service.
Open kibana in a browser: http://10.0.0.150:5601
Log in with an elasticsearch username and password; here magedu/123456 is used.
To view the data:
Stack Management -> Data Views -> Create data view.
Then on the Discover page, select the data view to browse the data.