Geek Time Ops Advanced Training Camp: Week 6 Assignment
- 2022-12-07, Beijing
1. Use the logstash filter stage to convert nginx's default access log and error log to JSON and write them to elasticsearch
Among the logstash filter plugins, converting to JSON requires grok, which means writing regular expressions.
Below is a brief introduction to how the "?" symbol is used in regular expressions.
"?" has roughly the following uses:
Directly following a subexpression
This is the most common usage: it matches the preceding element once or not at all, equivalent to {0,1}. For example, abc(d)? matches both abc and abcd.
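A quick check of this in Python:

```python
import re

pat = re.compile(r'abc(d)?')
assert pat.fullmatch('abc') is not None    # zero occurrences of d
assert pat.fullmatch('abcd') is not None   # one occurrence of d
assert pat.fullmatch('abcdd') is None      # {0,1} does not allow two
```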
Non-greedy matching
A greedy match consumes as many characters as possible within a single match; a non-greedy match does the opposite. Greedy matching is the default mode. When ? follows one of the quantifiers below, matching becomes non-greedy:
(*, +, ?, {n}, {n,}, {n,m})
For example, \S+c applied to the string aaaacaaaaaac matches the whole string aaaacaaaaaac, while \S+?c matches aaaac first.
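The difference is easy to verify, for instance in Python:

```python
import re

s = 'aaaacaaaaaac'
assert re.search(r'\S+c', s).group() == 'aaaacaaaaaac'  # greedy: longest match
assert re.search(r'\S+?c', s).group() == 'aaaac'        # lazy: shortest match
```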
Non-capturing groups
When using regular expressions, captured substrings are cached for later use: the text matched by each () group is saved during matching. For example:
var testReg=/(a+)(b*)c/;
testReg.test('aaaabbbccc') // outputs true
console.log(RegExp.$1); // outputs aaaa
console.log(RegExp.$2); // outputs bbb
However, if ?: is added inside a subgroup, the group still matches, but its content is no longer captured:
var testReg=/(a+)(?:b*)c/;
testReg.test('aaaabbbccc') // outputs true
console.log(RegExp.$1); // outputs aaaa
console.log(RegExp.$2); // outputs ""
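The same behavior in Python, where groups() reports only capturing groups:

```python
import re

m = re.match(r'(a+)(?:b*)c', 'aaaabbbccc')
assert m.group(1) == 'aaaa'   # first group is captured
assert len(m.groups()) == 1   # (?:b*) creates no capture group
```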
Assertions
Most constructs in a regular expression match text that ends up in the result, but some constructs match no text at all: they only check whether the position to their left or right satisfies a condition. These constructs are called assertions.
The four common assertions are:
(?=pattern) Positive lookahead, non-capturing: matches at any position that is followed by text matching pattern; the lookahead text is not captured for later use.
(?!pattern) Negative lookahead, non-capturing: matches at any position that is not followed by text matching pattern; nothing is captured.
(?<=pattern) Positive lookbehind, non-capturing: like positive lookahead, but checks the text preceding the position.
(?<!pattern) Negative lookbehind, non-capturing: like negative lookahead, but checks the text preceding the position.
I did not fully grasp assertions yet; I will dig into them again when I next run into one.
Environment:
logstash-web 10.0.0.153
Deploy nginx
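A small Python example makes the assertions more concrete (the CSS-like sample string is just for illustration):

```python
import re

text = 'width:10px; margin:5em; height:20px'
assert re.findall(r'\d+(?=px)', text) == ['10', '20']  # positive lookahead: numbers before "px"
assert re.findall(r'(?<=margin:)\d+', text) == ['5']   # positive lookbehind: number after "margin:"
assert re.findall(r'\d+(?!px|\d)', text) == ['5']      # negative lookahead: number not before "px"
```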
root@web1:~# wget https://nginx.org/download/nginx-1.22.1.tar.gz
root@web1:~# apt update
root@web1:~# apt install iproute2 ntpdate tcpdump telnet traceroute nfs-kernel-server nfs-common lrzsz tree openssl libssl-dev libpcre3 libpcre3-dev zlib1g-dev gcc openssh-server iotop unzip zip make
# tar xf nginx-1.22.1.tar.gz
# cd nginx-1.22.1
# ./configure --prefix=/apps/nginx
# make
# make install
# Add the domain www.magedu.net to the server block in /apps/nginx/conf/nginx.conf, i.e.:
server {
listen 80;
server_name localhost www.magedu.net;
# Edit the home page content, e.g.:
# cat /apps/nginx/html/index.html
<h1>magedu web1 20221204</h1>
# Start nginx and make sure it responds normally
# /apps/nginx/sbin/nginx
On the machine running the browser, configure name resolution by adding the following to C:\Windows\System32\drivers\etc\hosts:
10.0.0.153 www.magedu.net
Open http://www.magedu.net/ in the browser.
Check the nginx logs:
root@web1:/apps/nginx# tail /apps/nginx/logs/access.log
10.0.0.1 - - [03/Dec/2022:16:36:24 +0000] "GET / HTTP/1.1" 200 30 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/108.0.0.0 Safari/537.36"
10.0.0.1 - - [03/Dec/2022:16:36:24 +0000] "GET /favicon.ico HTTP/1.1" 404 555 "http://10.0.0.153/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/108.0.0.0 Safari/537.36"
10.0.0.1 - - [03/Dec/2022:16:37:22 +0000] "GET / HTTP/1.1" 200 30 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/108.0.0.0 Safari/537.36"
10.0.0.1 - - [03/Dec/2022:16:37:23 +0000] "GET /favicon.ico HTTP/1.1" 404 555 "http://www.magedu.net/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/108.0.0.0 Safari/537.36"
root@web1:/apps/nginx/logs# cat error.log
2022/12/03 16:36:24 [error] 12665#0: *1 open() "/apps/nginx/html/favicon.ico" failed (2: No such file or directory), client: 10.0.0.1, server: localhost, request: "GET /favicon.ico HTTP/1.1", host: "10.0.0.153", referrer: "http://10.0.0.153/"
2022/12/03 16:37:23 [error] 12665#0: *3 open() "/apps/nginx/html/favicon.ico" failed (2: No such file or directory), client: 10.0.0.1, server: localhost, request: "GET /favicon.ico HTTP/1.1", host: "www.magedu.net", referrer: "http://www.magedu.net/"
Create the logstash config file /etc/logstash/conf.d/nginxlog-to-es.conf:
input {
  file {
    path => "/apps/nginx/logs/access.log"
    type => "nginx-accesslog"
    stat_interval => "1"
    start_position => "beginning"
  }
  file {
    path => "/apps/nginx/logs/error.log"
    type => "nginx-errorlog"
    stat_interval => "1"
    start_position => "beginning"
  }
}
filter {
  if [type] == "nginx-accesslog" {
    grok {
      match => { "message" => ["%{IPORHOST:clientip} - %{DATA:username} \[%{HTTPDATE:request-time}\] \"%{WORD:request-method} %{DATA:request-uri} HTTP/%{NUMBER:http_version}\" %{NUMBER:response_code} %{NUMBER:body_sent_bytes} \"%{DATA:referrer}\" \"%{DATA:useragent}\""] }
      remove_field => "message"
      add_field => { "project" => "magedu" }
    }
    mutate {
      convert => [ "[response_code]", "integer"]
    }
  }
  if [type] == "nginx-errorlog" {
    grok {
      match => { "message" => ["(?<timestamp>%{YEAR}[./]%{MONTHNUM}[./]%{MONTHDAY} %{TIME}) \[%{LOGLEVEL:loglevel}\] %{POSINT:pid}#%{NUMBER:threadid}\: \*%{NUMBER:connectionid} %{GREEDYDATA:message}, client: %{IPV4:clientip}, server: %{GREEDYDATA:server}, request: \"(?:%{WORD:request-method} %{NOTSPACE:request-uri}(?: HTTP/%{NUMBER:httpversion}))\", host: %{GREEDYDATA:domainname}"]}
      remove_field => "message"
    }
  }
}
output {
  if [type] == "nginx-accesslog" {
    elasticsearch {
      hosts => ["10.0.0.150:9200"]
      index => "magedu-nginx-accesslog-%{+yyyy.MM.dd}"
      user => "magedu"
      password => "123456"
    }
  }
  if [type] == "nginx-errorlog" {
    elasticsearch {
      hosts => ["10.0.0.150:9200"]
      index => "magedu-nginx-errorlog-%{+yyyy.MM.dd}"
      user => "magedu"
      password => "123456"
    }
  }
}
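The access-log grok pattern can be sanity-checked outside logstash. The Python sketch below is an approximation (it substitutes \S+ and [^"]* for the IPORHOST/DATA grok patterns) that parses the sample access-log line with named groups and performs the same integer conversion as the mutate block:

```python
import re

# Approximate Python equivalent of the access-log grok pattern
ACCESS_RE = re.compile(
    r'(?P<clientip>\S+) - (?P<username>\S+) \[(?P<request_time>[^\]]+)\] '
    r'"(?P<request_method>\w+) (?P<request_uri>\S+) HTTP/(?P<http_version>[\d.]+)" '
    r'(?P<response_code>\d+) (?P<body_sent_bytes>\d+) "(?P<referrer>[^"]*)" "(?P<useragent>[^"]*)"'
)

line = ('10.0.0.1 - - [03/Dec/2022:16:36:24 +0000] "GET /favicon.ico HTTP/1.1" '
        '404 555 "http://10.0.0.153/" "Mozilla/5.0"')
event = ACCESS_RE.match(line).groupdict()
event["response_code"] = int(event["response_code"])  # same role as mutate/convert
```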
Verify that the config file has no syntax errors
# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/nginxlog-to-es.conf -t
Start logstash
# systemctl start logstash.service
# systemctl status logstash.service # status should show active (running)
# tail -f /var/log/logstash/logstash-plain.log # no errors in the logstash startup log
Check es: the indices were created successfully.
One record in magedu-nginx-accesslog-2022.12.03 looks like this:
{
"_index": "magedu-nginx-accesslog-2022.12.03",
"_id": "xgzx2IQBpcKVHCcjw1bc",
"_version": 1,
"_score": 1,
"_source": {
"@timestamp": "2022-12-03T17:03:53.322686579Z",
"request-time": "03/Dec/2022:16:36:24 +0000",
"project": "magedu",
"http_version": "1.1",
"request-uri": "/favicon.ico",
"log": {
"file": {
"path": "/apps/nginx/logs/access.log"
}
},
"event": {
"original": "10.0.0.1 - - [03/Dec/2022:16:36:24 +0000] "GET /favicon.ico HTTP/1.1" 404 555 "http://10.0.0.153/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/108.0.0.0 Safari/537.36""
},
"request-method": "GET",
"referrer": "http://10.0.0.153/",
"host": {
"name": "web1"
},
"username": "-",
"response_code": 404,
"type": "nginx-accesslog",
"useragent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/108.0.0.0 Safari/537.36",
"body_sent_bytes": "555",
"@version": "1",
"clientip": "10.0.0.1"
}
}
One record in magedu-nginx-errorlog-2022.12.03 looks like this:
{
"_index": "magedu-nginx-errorlog-2022.12.03",
"_id": "ygzx2IQBpcKVHCcjxFbL",
"_version": 1,
"_score": 1,
"_source": {
"@timestamp": "2022-12-03T17:03:53.298172638Z",
"pid": "12665",
"domainname": ""10.0.0.153", referrer: "http://10.0.0.153/"",
"request-uri": "/favicon.ico",
"log": {
"file": {
"path": "/apps/nginx/logs/error.log"
}
},
"event": {
"original": "2022/12/03 16:36:24 [error] 12665#0: *1 open() "/apps/nginx/html/favicon.ico" failed (2: No such file or directory), client: 10.0.0.1, server: localhost, request: "GET /favicon.ico HTTP/1.1", host: "10.0.0.153", referrer: "http://10.0.0.153/""
},
"loglevel": "error",
"threadid": "0",
"request-method": "GET",
"host": {
"name": "web1"
},
"timestamp": "2022/12/03 16:36:24",
"server": "localhost",
"type": "nginx-errorlog",
"httpversion": "1.1",
"@version": "1",
"clientip": "10.0.0.1",
"connectionid": "1"
}
}
2. Collect JSON-formatted nginx access logs with logstash
Configure the nginx access log to be output in JSON format
# Modify the nginx config: write the access log as JSON
vi /apps/nginx/conf/nginx.conf
Add the following inside the http block:
log_format access_json '{"@timestamp":"$time_iso8601",'
'"host":"$server_addr",'
'"clientip":"$remote_addr",'
'"size":$body_bytes_sent,'
'"responsetime":$request_time,'
'"upstreamtime":"$upstream_response_time",'
'"upstreamhost":"$upstream_addr",'
'"http_host":"$host",'
'"uri":"$uri",'
'"domain":"$host",'
'"xff":"$http_x_forwarded_for",'
'"referer":"$http_referer",'
'"tcp_xff":"$proxy_protocol_addr",'
'"http_user_agent":"$http_user_agent",'
'"status":"$status"}';
access_log /var/log/nginx/access.log access_json;
# Create the directory for access.log, then restart nginx
# mkdir -p /var/log/nginx
# /apps/nginx/sbin/nginx
Visit www.magedu.net in the browser to generate access log entries
root@web1:/apps/nginx/conf# tail /var/log/nginx/access.log
{"@timestamp":"2022-12-06T12:26:14+00:00","host":"10.0.0.153","clientip":"10.0.0.1","size":0,"responsetime":0.000,"upstreamtime":"-","upstreamhost":"-","http_host":"www.magedu.net","uri":"/index.html","domain":"www.magedu.net","xff":"-","referer":"-","tcp_xff":"-","http_user_agent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/108.0.0.0 Safari/537.36","status":"304"}
{"@timestamp":"2022-12-06T12:26:15+00:00","host":"10.0.0.153","clientip":"10.0.0.1","size":555,"responsetime":0.000,"upstreamtime":"-","upstreamhost":"-","http_host":"www.magedu.net","uri":"/favicon.ico","domain":"www.magedu.net","xff":"-","referer":"http://www.magedu.net/","tcp_xff":"-","http_user_agent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/108.0.0.0 Safari/537.36","status":"404"}
Configure logstash to process the nginx access log and write it to es.
root@web1:/etc/logstash/conf.d# cat nginx-json-log-to-es.conf
input {
  file {
    path => "/var/log/nginx/access.log"
    start_position => "end"
    type => "nginx-json-accesslog"
    stat_interval => "1"
    codec => json
  }
}
output {
  if [type] == "nginx-json-accesslog" {
    elasticsearch {
      hosts => ["10.0.0.150:9200"]
      index => "nginx-accesslog-150-%{+YYYY.MM.dd}"
      user => "magedu"
      password => "123456"
    }
  }
}
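What codec => json does per line can be modeled in Python in a couple of lines (trimmed sample line; the real one carries more fields):

```python
import json

line = ('{"@timestamp":"2022-12-06T12:26:15+00:00","host":"10.0.0.153",'
        '"clientip":"10.0.0.1","size":555,"status":"404","uri":"/favicon.ico"}')
event = json.loads(line)         # each log line deserializes into one event
assert event["status"] == "404"  # quoted in log_format, so a string
assert event["size"] == 555      # unquoted in log_format, so numeric
```

This is why leaving $body_bytes_sent and $request_time unquoted in the log_format matters: they arrive in elasticsearch as numbers without any mutate/convert step.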
# systemctl restart logstash.service
Because logstash is configured to read the nginx access log from the end (start_position => "end"), new requests must be sent to nginx before any new events are processed.
After visiting the web page in the browser, check es: the index nginx-accesslog-150-2022.12.06 is created, and one record looks like this:
{
"_index": "nginx-accesslog-150-2022.12.06",
"_id": "rMRs54QBJ_3Q-b4jRQ4s",
"_version": 1,
"_score": 1,
"_ignored": [
"event.original.keyword"
],
"_source": {
"xff": "-",
"upstreamtime": "-",
"http_user_agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/108.0.0.0 Safari/537.36",
"log": {
"file": {
"path": "/var/log/nginx/access.log"
}
},
"upstreamhost": "-",
"http_host": "www.magedu.net",
"host": "10.0.0.153",
"event": {
"original": "{"@timestamp":"2022-12-06T12:32:22+00:00","host":"10.0.0.153","clientip":"10.0.0.1","size":30,"responsetime":0.000,"upstreamtime":"-","upstreamhost":"-","http_host":"www.magedu.net","uri":"/index.html","domain":"www.magedu.net","xff":"-","referer":"-","tcp_xff":"-","http_user_agent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/108.0.0.0 Safari/537.36","status":"200"}"
},
"type": "nginx-json-accesslog",
"tcp_xff": "-",
"@timestamp": "2022-12-06T12:32:22.000Z",
"size": 30,
"domain": "www.magedu.net",
"responsetime": 0,
"uri": "/index.html",
"@version": "1",
"status": "200",
"referer": "-",
"clientip": "10.0.0.1"
}
}
3. Collect java logs with logstash and merge multi-line entries
Take the es log as an example: error messages usually span multiple lines, while logstash parses line by line by default, so errors are collected in fragments that are hard to search and analyze later. The multiline codec can merge such lines into a single event.
Restart es to produce an error message in the es log, for example:
Restart es
# systemctl restart elasticsearch
Check the log
# cd /data/eslogs
# vi magedu-es-cluster1.log
Lines 3143-3158 are a single exception thrown by java.
3142 [2022-12-06T12:49:23,484][ERROR][o.e.b.ElasticsearchUncaughtExceptionHandler] [es1] uncaught exception in thread [process reaper (p id 1243)]
3143 java.security.AccessControlException: access denied ("java.lang.RuntimePermission" "modifyThread")
3144 at java.security.AccessControlContext.checkPermission(AccessControlContext.java:485) ~[?:?]
3145 at java.security.AccessController.checkPermission(AccessController.java:1068) ~[?:?]
3146 at java.lang.SecurityManager.checkPermission(SecurityManager.java:411) ~[?:?]
3147 at org.elasticsearch.secure_sm.SecureSM.checkThreadAccess(SecureSM.java:166) ~[?:?]
3148 at org.elasticsearch.secure_sm.SecureSM.checkAccess(SecureSM.java:120) ~[?:?]
3149 at java.lang.Thread.checkAccess(Thread.java:2360) ~[?:?]
3150 at java.lang.Thread.setDaemon(Thread.java:2308) ~[?:?]
3151 at java.lang.ProcessHandleImpl.lambda$static$0(ProcessHandleImpl.java:103) ~[?:?]
3152 at java.util.concurrent.ThreadPoolExecutor$Worker.<init>(ThreadPoolExecutor.java:637) ~[?:?]
3153 at java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:928) ~[?:?]
3154 at java.util.concurrent.ThreadPoolExecutor.processWorkerExit(ThreadPoolExecutor.java:1021) ~[?:?]
3155 at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1158) ~[?:?]
3156 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642) ~[?:?]
3157 at java.lang.Thread.run(Thread.java:1589) ~[?:?]
3158 at jdk.internal.misc.InnocuousThread.run(InnocuousThread.java:186) ~[?:?]
3159 [2022-12-06T12:49:23,705][INFO ][o.e.n.Node ] [es1] stopped
3160 [2022-12-06T12:49:23,705][INFO ][o.e.n.Node ] [es1] closing ...
logstash has not yet been installed on the es server. es1 has 4G of memory, which needs to be increased a bit; here it is raised to 6G before installing logstash.
# Upload the logstash deb package and install it
# dpkg -i logstash-8.5.1-amd64.deb
# logstash configuration
root@es1:/etc/logstash/conf.d# cat es-log-to-es.conf
input {
  file {
    path => "/data/eslogs/magedu-es-cluster1.log"
    type => "eslog"
    stat_interval => "1"
    start_position => "beginning"
    codec => multiline {
      pattern => "^\[[0-9]{4}\-[0-9]{2}\-[0-9]{2}"
      negate => "true"
      what => "previous"
    }
  }
}
output {
  if [type] == "eslog" {
    elasticsearch {
      hosts => ["10.0.0.150:9200"]
      index => "magedu-eslog-%{+YYYY.ww}"
      user => "magedu"
      password => "123456"
    }
  }
}
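The multiline codec's negate => true / what => previous logic can be modeled in a few lines of Python: any line that does not start with a [YYYY-MM-DD timestamp belongs to the previous event (the function name and sample lines are illustrative):

```python
import re

TS = re.compile(r'^\[[0-9]{4}-[0-9]{2}-[0-9]{2}')  # same anchor as the codec pattern

def merge_multiline(lines):
    """negate=true, what=previous: a non-matching line is appended to the previous event."""
    events, buf = [], []
    for line in lines:
        if TS.match(line):
            if buf:
                events.append("\n".join(buf))
            buf = [line]
        else:
            buf.append(line)
    if buf:
        events.append("\n".join(buf))
    return events

sample = [
    '[2022-12-06T12:49:23,484][ERROR][o.e.b.ElasticsearchUncaughtExceptionHandler] uncaught exception',
    'java.security.AccessControlException: access denied',
    '        at java.security.AccessControlContext.checkPermission(AccessControlContext.java:485)',
    '[2022-12-06T12:49:23,705][INFO ][o.e.n.Node] [es1] stopped',
]
events = merge_multiline(sample)  # stack trace merges into the ERROR event
```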
Restart logstash
# systemctl restart logstash.service
Check es: the index magedu-eslog-2022.49 is created. Viewed in kibana, the error message looks like this:
4. Collect syslog-type logs with logstash (using haproxy in place of a network device)
Create a new VM at ip 10.0.0.154 to run haproxy.
The earlier VMs used the Aliyun mirror, and installing some packages (especially the jdk) was sometimes slow, so this VM switches to the Tsinghua mirror instead.
root@web2:~# cat /etc/apt/sources.list
# Source-package mirrors are commented out by default to speed up apt update; uncomment them if needed
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ focal main restricted universe multiverse
# deb-src https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ focal main restricted universe multiverse
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ focal-updates main restricted universe multiverse
# deb-src https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ focal-updates main restricted universe multiverse
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ focal-backports main restricted universe multiverse
# deb-src https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ focal-backports main restricted universe multiverse
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ focal-security main restricted universe multiverse
# deb-src https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ focal-security main restricted universe multiverse
Install and configure haproxy
# apt update
# apt install haproxy -y
Configure a reverse proxy for kibana
# vi /etc/haproxy/haproxy.cfg
Append the following at the end of the file
listen kibana
bind 10.0.0.154:5601
log global
# Column meaning: the server keyword, a name defined in haproxy, the backend server address, then health checks: probe every 2s; after 3 consecutive failures the node is removed, after 3 consecutive successes it is added back.
server kibana 10.0.0.150:5601 check inter 2s fall 3 rise 3
Start haproxy; port 5601 is now listening.
root@web2:~# systemctl restart haproxy.service
root@web2:~# ss -tunlp
Netid State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
udp UNCONN 0 0 127.0.0.53%lo:53 0.0.0.0:* users:(("systemd-resolve",pid=792,fd=12))
tcp LISTEN 0 4096 10.0.0.154:5601 0.0.0.0:* users:(("haproxy",pid=3773,fd=7))
tcp LISTEN 0 4096 127.0.0.53%lo:53 0.0.0.0:* users:(("systemd-resolve",pid=792,fd=13))
tcp LISTEN 0 128 0.0.0.0:22 0.0.0.0:* users:(("sshd",pid=854,fd=3))
tcp LISTEN 0 128 127.0.0.1:6010 0.0.0.0:* users:(("sshd",pid=1248,fd=10))
tcp LISTEN 0 128 [::]:22 [::]:* users:(("sshd",pid=854,fd=4))
tcp LISTEN 0 128 [::1]:6010 [::]:* users:(("sshd",pid=1248,fd=9))
Configure syslog
root@web2:~# vi /etc/rsyslog.d/49-haproxy.conf
root@web2:~# cat /etc/rsyslog.d/49-haproxy.conf
# Create an additional socket in haproxy's chroot in order to allow logging via
# /dev/log to chroot'ed HAProxy processes
$AddUnixListenSocket /var/lib/haproxy/dev/log
# Send HAProxy messages to a dedicated logfile
:programname, startswith, "haproxy" {
# By default messages are written to a local file; comment that out and send to logstash on port 514 instead. @@ means tcp
#/var/log/haproxy.log
@@10.0.0.154:514
stop
}
Restart the rsyslog service
# systemctl restart rsyslog.service
Install and configure logstash
# dpkg -i logstash-8.5.1-amd64.deb
root@web2:/etc/logstash/conf.d# cat syslog-to-es.conf
input {
  syslog {
    type => "rsyslog-haproxy"
    host => "0.0.0.0"
    port => "514"   # listen on a local port
  }
}
output {
  if [type] == "rsyslog-haproxy" {
    elasticsearch {
      hosts => ["10.0.0.150:9200"]
      index => "magedu-rsyslog-haproxy-%{+YYYY.ww}"
      user => "magedu"
      password => "123456"
    }
  }
}
Start logstash, then access the load balancer's ip.
# systemctl start logstash.service
Browser access: http://10.0.0.154:5601/
Check es: no index is created. The logstash log shows the cause: Permission denied.
[2022-12-06T14:35:32,556][INFO ][logstash.inputs.syslog ][main][2c249ed7cabd709bdb9cd2896dc1ce62ba0fa8640e1d503408d0a16654d683aa] Starting syslog udp listener {:address=>"0.0.0.0:514"}
[2022-12-06T14:35:32,557][WARN ][logstash.inputs.syslog ][main][2c249ed7cabd709bdb9cd2896dc1ce62ba0fa8640e1d503408d0a16654d683aa] syslog listener died {:protocol=>:udp, :address=>"0.0.0.0:514", :exception=>#<Errno::EACCES: Permission denied - bind(2) for "0.0.0.0" port 514>, :backtrace=>["org/jruby/ext/socket/RubyUDPSocket.java:200:in `bind'", "/usr/share/logstash/vendor/bundle/jruby/2.6.0/gems/logstash-input-syslog-3.6.0/lib/logstash/inputs/syslog.rb:191:in `udp_listener'", "/usr/share/logstash/vendor/bundle/jruby/2.6.0/gems/logstash-input-syslog-3.6.0/lib/logstash/inputs/syslog.rb:172:in `server'", "/usr/share/logstash/vendor/bundle/jruby/2.6.0/gems/logstash-input-syslog-3.6.0/lib/logstash/inputs/syslog.rb:152:in `block in run'"]}
[2022-12-06T14:35:37,537][INFO ][logstash.inputs.syslog ][main][2c249ed7cabd709bdb9cd2896dc1ce62ba0fa8640e1d503408d0a16654d683aa] Starting syslog tcp listener {:address=>"0.0.0.0:514"}
[2022-12-06T14:35:37,538][WARN ][logstash.inputs.syslog ][main][2c249ed7cabd709bdb9cd2896dc1ce62ba0fa8640e1d503408d0a16654d683aa] syslog listener died {:protocol=>:tcp, :address=>"0.0.0.0:514", :exception=>#<Errno::EACCES: Permission denied - bind(2) for "0.0.0.0" port 514>, :backtrace=>["org/jruby/ext/socket/RubyTCPServer.java:123:in `initialize'", "org/jruby/RubyClass.java:911:in `new'", "org/jruby/RubyIO.java:868:in `new'", "/usr/share/logstash/vendor/bundle/jruby/2.6.0/gems/logstash-input-syslog-3.6.0/lib/logstash/inputs/syslog.rb:208:in `tcp_listener'", "/usr/share/logstash/vendor/bundle/jruby/2.6.0/gems/logstash-input-syslog-3.6.0/lib/logstash/inputs/syslog.rb:172:in `server'", "/usr/share/logstash/vendor/bundle/jruby/2.6.0/gems/logstash-input-syslog-3.6.0/lib/logstash/inputs/syslog.rb:156:in `block in run'"]}
[2022-12-06T14:35:37,558][INFO ][logstash.inputs.syslog ][main][2c249ed7cabd709bdb9cd2896dc1ce62ba0fa8640e1d503408d0a16654d683aa] Starting syslog udp listener {:address=>"0.0.0.0:514"}
[2022-12-06T14:35:37,559][WARN ][logstash.inputs.syslog ][main][2c249ed7cabd709bdb9cd2896dc1ce62ba0fa8640e1d503408d0a16654d683aa] syslog listener died {:protocol=>:udp, :address=>"0.0.0.0:514", :exception=>#<Errno::EACCES: Permission denied - bind(2) for "0.0.0.0" port 514>, :backtrace=>["org/jruby/ext/socket/RubyUDPSocket.java:200:in `bind'", "/usr/share/logstash/vendor/bundle/jruby/2.6.0/gems/logstash-input-syslog-3.6.0/lib/logstash/inputs/syslog.rb:191:in `udp_listener'", "/usr/share/logstash/vendor/bundle/jruby/2.6.0/gems/logstash-input-syslog-3.6.0/lib/logstash/inputs/syslog.rb:172:in `server'", "/usr/share/logstash/vendor/bundle/jruby/2.6.0/gems/logstash-input-syslog-3.6.0/lib/logstash/inputs/syslog.rb:152:in `block in run'"]}
logstash starts as an unprivileged user by default, and binding ports below 1024 requires root, so here the service file is changed to start logstash as root.
After restarting, port 514 is listening as well. After accessing the load balancer, the es index is created too.
One document in the index magedu-rsyslog-haproxy-2022.49 looks like this:
{
"_index": "magedu-rsyslog-haproxy-2022.49",
"_id": "IVzk54QBEk1WulPby46k",
"_version": 1,
"_score": 1,
"_source": {
"@version": "1",
"type": "rsyslog-haproxy",
"log": {
"syslog": {
"priority": 134,
"facility": {
"name": "local0",
"code": 16
},
"severity": {
"name": "Informational",
"code": 6
}
}
},
"message": "10.0.0.1:7786 [06/Dec/2022:14:44:00.538] kibana kibana/kibana 0/0/0/17/17 302 856 - - ---- 1/1/0/0/0 0/0 "POST /internal/security/session HTTP/1.1" ",
"service": {
"type": "system"
},
"event": {
"original": "<134>Dec 6 14:44:00 web2 haproxy[3773]: 10.0.0.1:7786 [06/Dec/2022:14:44:00.538] kibana kibana/kibana 0/0/0/17/17 302 856 - - ---- 1/1/0/0/0 0/0 "POST /internal/security/session HTTP/1.1" "
},
"process": {
"name": "haproxy",
"pid": 3773
},
"host": {
"ip": "10.0.0.154",
"hostname": "web2"
},
"@timestamp": "2022-12-06T14:44:00.000Z"
}
}
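The log.syslog.priority value 134 and the <134> prefix in the raw event follow the syslog PRI encoding (priority = facility * 8 + severity, per RFC 3164), which the syslog input decoded into local0/Informational. A short Python check:

```python
def decode_pri(pri: int):
    """Syslog PRI encoding: priority = facility * 8 + severity."""
    return divmod(pri, 8)

facility, severity = decode_pri(134)
assert (facility, severity) == (16, 6)  # facility 16 = local0, severity 6 = Informational
```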
5. Collect logs with logstash and write them to Redis, then consume them into elasticsearch with another logstash while preserving the parsed JSON log fields
Create a new VM at ip 10.0.0.155 to run redis.
Environment:
web1 10.0.0.153 runs logstash and nginx; collects the nginx logs and writes them to redis
redis 10.0.0.155
web2 10.0.0.154 runs logstash; ships the redis data to es
es cluster 10.0.0.150/151/152
root@redis:~# apt-cache madison redis
redis | 5:5.0.7-2ubuntu0.1 | https://mirrors.tuna.tsinghua.edu.cn/ubuntu focal-updates/universe amd64 Packages
redis | 5:5.0.7-2ubuntu0.1 | https://mirrors.tuna.tsinghua.edu.cn/ubuntu focal-security/universe amd64 Packages
redis | 5:5.0.7-2 | https://mirrors.tuna.tsinghua.edu.cn/ubuntu focal/universe amd64 Packages
root@redis:~# apt install redis -y
Edit redis.conf: set the bind address and a password, disable snapshotting, and keep everything else at defaults.
root@redis:~# grep -E '^[^#]' /etc/redis/redis.conf
bind 0.0.0.0
protected-mode yes
port 6379
tcp-backlog 511
timeout 0
tcp-keepalive 300
daemonize yes
supervised no
pidfile /var/run/redis/redis-server.pid
loglevel notice
logfile /var/log/redis/redis-server.log
databases 16
always-show-logo yes
save ""
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename dump.rdb
dir /var/lib/redis
replica-serve-stale-data yes
replica-read-only yes
repl-diskless-sync no
repl-diskless-sync-delay 5
repl-disable-tcp-nodelay no
replica-priority 100
requirepass 123456
lazyfree-lazy-eviction no
lazyfree-lazy-expire no
lazyfree-lazy-server-del no
replica-lazy-flush no
appendonly no
appendfilename "appendonly.aof"
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes
aof-use-rdb-preamble yes
lua-time-limit 5000
slowlog-log-slower-than 10000
slowlog-max-len 128
latency-monitor-threshold 0
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-size -2
list-compress-depth 0
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
stream-node-max-bytes 4096
stream-node-max-entries 100
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit replica 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
dynamic-hz yes
aof-rewrite-incremental-fsync yes
rdb-save-incremental-fsync yes
# Start redis
# systemctl restart redis
On web1, the nginx configuration from section 2 is reused (the access log is output as JSON; the error log is unchanged and is structured via the logstash filter).
root@web1:/etc/logstash/conf.d# cat magedu-log-to-redis.conf
input {
  file {
    path => "/var/log/nginx/access.log"
    type => "magedu-nginx-accesslog"
    start_position => "beginning"
    stat_interval => "1"
    codec => "json"   # parse JSON-formatted log lines
  }
  file {
    path => "/apps/nginx/logs/error.log"
    type => "magedu-nginx-errorlog"
    start_position => "beginning"
    stat_interval => "1"
  }
}
filter {
  if [type] == "magedu-nginx-errorlog" {
    grok {
      match => { "message" => ["(?<timestamp>%{YEAR}[./]%{MONTHNUM}[./]%{MONTHDAY} %{TIME}) \[%{LOGLEVEL:loglevel}\] %{POSINT:pid}#%{NUMBER:threadid}\: \*%{NUMBER:connectionid} %{GREEDYDATA:message}, client: %{IPV4:clientip}, server: %{GREEDYDATA:server}, request: \"(?:%{WORD:request-method} %{NOTSPACE:request-uri}(?: HTTP/%{NUMBER:httpversion}))\", host: %{GREEDYDATA:domainname}"]}
      remove_field => "message"   # drop the raw log line
    }
  }
}
output {
  if [type] == "magedu-nginx-accesslog" {
    redis {
      data_type => "list"
      key => "magedu-nginx-accesslog"
      host => "10.0.0.155"
      port => "6379"
      db => "0"
      password => "123456"
    }
  }
  if [type] == "magedu-nginx-errorlog" {
    redis {
      data_type => "list"
      key => "magedu-nginx-errorlog"
      host => "10.0.0.155"
      port => "6379"
      db => "0"
      password => "123456"
    }
  }
}
Restart logstash, hit the web page to generate access and error events, then check whether the corresponding keys appear in redis.
# Restart logstash
# systemctl stop logstash.service
# systemctl start logstash.service
# Check redis
root@redis:~# redis-cli
127.0.0.1:6379> auth 123456
OK
127.0.0.1:6379> keys *
1) "magedu-nginx-errorlog"
2) "magedu-nginx-accesslog"
On web2, configure logstash to ship the redis data to es
root@web2:/etc/logstash/conf.d# cat redis-to-es.conf
input {
  redis {
    data_type => "list"
    key => "magedu-nginx-accesslog"
    host => "10.0.0.155"
    port => "6379"
    db => "0"
    password => "123456"
  }
  redis {
    data_type => "list"
    key => "magedu-nginx-errorlog"
    host => "10.0.0.155"
    port => "6379"
    db => "0"
    password => "123456"
  }
}
output {
  if [type] == "magedu-nginx-accesslog" {
    elasticsearch {
      hosts => ["10.0.0.150:9200"]
      index => "redis-magedu-nginx-accesslog-%{+YYYY.MM.dd}"
      user => "magedu"
      password => "123456"
    }
  }
  if [type] == "magedu-nginx-errorlog" {
    elasticsearch {
      hosts => ["10.0.0.150:9200"]
      index => "redis-magedu-nginx-errorlog-%{+YYYY.MM.dd}"
      user => "magedu"
      password => "123456"
    }
  }
}
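The Redis list acts as a FIFO buffer between the two logstash instances: with data_type => "list", the redis output appends each serialized event to the tail of the list and the redis input pops events from the head, so web2 consumes them in arrival order. A minimal in-memory model of that hand-off (not the Redis protocol; the function names just mirror the RPUSH/BLPOP-style commands the list plugins rely on):

```python
from collections import deque

queue = deque()  # stands in for the Redis list "magedu-nginx-accesslog"

def rpush(event):
    """Producer side (web1's redis output): append to the tail."""
    queue.append(event)

def blpop():
    """Consumer side (web2's redis input): pop the oldest event from the head."""
    return queue.popleft() if queue else None

rpush('{"status":"200"}')   # web1 ships two events
rpush('{"status":"404"}')
assert blpop() == '{"status":"200"}'  # web2 consumes them in FIFO order
```

Because the list simply buffers events, web2's logstash can be stopped and restarted without losing data: events accumulate in redis until a consumer pops them.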
Restart logstash
# systemctl stop logstash.service
# systemctl start logstash.service
The indices are created.
One document in redis-magedu-nginx-accesslog-2022.12.06 looks like this:
{
"_index": "redis-magedu-nginx-accesslog-2022.12.06",
"_id": "PVwn6IQBEk1WulPb4I7W",
"_version": 1,
"_score": 1,
"_ignored": [
"event.original.keyword"
],
"_source": {
"uri": "/index.html",
"http_user_agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/108.0.0.0 Safari/537.36",
"clientip": "10.0.0.1",
"host": "10.0.0.153",
"referer": "-",
"@timestamp": "2022-12-06T15:44:29.000Z",
"type": "magedu-nginx-accesslog",
"log": {
"file": {
"path": "/var/log/nginx/access.log"
}
},
"size": 30,
"@version": "1",
"domain": "www.magedu.net",
"xff": "-",
"event": {
"original": "{"@timestamp":"2022-12-06T15:44:29+00:00","host":"10.0.0.153","clientip":"10.0.0.1","size":30,"responsetime":0.000,"upstreamtime":"-","upstreamhost":"-","http_host":"www.magedu.net","uri":"/index.html","domain":"www.magedu.net","xff":"-","referer":"-","tcp_xff":"-","http_user_agent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/108.0.0.0 Safari/537.36","status":"200"}"
},
"http_host": "www.magedu.net",
"responsetime": 0,
"status": "200",
"upstreamhost": "-",
"upstreamtime": "-",
"tcp_xff": "-"
}
}
One document in redis-magedu-nginx-errorlog-2022.12.06 looks like this:
{
"_index": "redis-magedu-nginx-errorlog-2022.12.06",
"_id": "P1wn6IQBEk1WulPb4Y75",
"_version": 1,
"_score": 1,
"_source": {
"request-uri": "/favicon.ico",
"pid": "30748",
"httpversion": "1.1",
"clientip": "10.0.0.1",
"host": {
"name": "web1"
},
"domainname": ""www.magedu.net", referrer: "http://www.magedu.net/"",
"@timestamp": "2022-12-06T15:43:42.267617917Z",
"threadid": "0",
"server": "localhost",
"type": "magedu-nginx-errorlog",
"timestamp": "2022/12/06 12:26:15",
"@version": "1",
"request-method": "GET",
"loglevel": "error",
"event": {
"original": "2022/12/06 12:26:15 [error] 30748#0: *1 open() "/apps/nginx/html/favicon.ico" failed (2: No such file or directory), client: 10.0.0.1, server: localhost, request: "GET /favicon.ico HTTP/1.1", host: "www.magedu.net", referrer: "http://www.magedu.net/""
},
"connectionid": "1",
"log": {
"file": {
"path": "/apps/nginx/logs/error.log"
}
}
}
}
6. Deploy a single-node ELK stack with docker-compose
Create a new VM with 4 cores and 8G of memory at ip 10.0.0.156.
Install docker, then bring up elk from docker-compose.yml.
root@elk:/apps/elk-docker-compose# cat docker-compose.yml
version: '3.8'
services:
  elasticsearch:
    image: elasticsearch:7.17.7
    container_name: elasticsearch
    environment:
      - node.name=es-node1
      - cluster.name=magedu-es-cluster1
      - cluster.initial_master_nodes=es-node1
      - "ES_JAVA_OPTS=-Xms1024m -Xmx1024m"
    volumes:
      - 'es-data:/usr/share/elasticsearch/data'
      - './elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml'
    ports:
      - 9200:9200
    restart: always
    networks:
      - elastic
  logstash:
    image: logstash:7.17.7
    container_name: logstash
    volumes:
      - './logstash/config/logstash.conf:/usr/share/logstash/pipeline/logstash.conf'
      - './logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml'
    ports:
      - 5044:5044
      - 9889:9889
      - 9600:9600
    restart: always
    networks:
      - elastic
    depends_on:
      - elasticsearch
  kibana:
    image: kibana:7.17.7
    container_name: kibana
    volumes:
      - './kibana/config:/usr/share/kibana/config'
    ports:
      - 5601:5601
    restart: always
    networks:
      - elastic
    depends_on:
      - elasticsearch
volumes:
  es-data:
    driver: local
networks:
  elastic:
    ipam:
      driver: default
      config:
        - subnet: "172.16.16.0/24"
# Start elasticsearch first
root@elk:/apps/elk-docker-compose# docker-compose up -d elasticsearch
Pulling elasticsearch (elasticsearch:7.17.7)...
7.17.7: Pulling from library/elasticsearch
fb0b3276a519: Pull complete
38dd47397c96: Pull complete
a9cc04abc7e0: Pull complete
1b5db66d71d3: Pull complete
e314b07b4a41: Pull complete
c1cbe2363e68: Pull complete
aec6881665fb: Pull complete
20a24512c722: Pull complete
f3d802e5d059: Pull complete
Digest: sha256:bb22e1ef1707314b30020c84f29e25bc0aa80a50616a196526e38069fbd95c1f
Status: Downloaded newer image for elasticsearch:7.17.7
Creating elasticsearch ... done
# docker exec -it elasticsearch /usr/share/elasticsearch/bin/elasticsearch-setup-passwords interactive # password: magedu123
# Start logstash and kibana
# docker-compose up -d
The containers are running normally
root@elk:/apps/elk-docker-compose# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d155dff85877 logstash:7.17.7 "/usr/local/bin/dock…" About a minute ago Up About a minute 0.0.0.0:5044->5044/tcp, :::5044->5044/tcp, 0.0.0.0:9600->9600/tcp, :::9600->9600/tcp, 0.0.0.0:9889->9889/tcp, :::9889->9889/tcp logstash
bdd4047dd3ca kibana:7.17.7 "/bin/tini -- /usr/l…" About a minute ago Up About a minute 0.0.0.0:5601->5601/tcp, :::5601->5601/tcp kibana
c4d3f390a845 elasticsearch:7.17.7 "/bin/tini -- /usr/l…" 8 minutes ago Up 8 minutes 0.0.0.0:9200->9200/tcp, :::9200->9200/tcp, 9300/tcp elasticsearch
kibana is reachable.
This stack can now be used directly for log collection and display; the configuration is essentially the same as the non-docker setup, so it is not repeated here.
Starry