Building an ELK + Filebeat + Kafka Distributed Log Management Platform
Beats collects the data and writes it into a message queue.
Logstash reads the data from the message queue and writes it into Elasticsearch.
The workflow is therefore: Filebeat → Kafka → Logstash → Elasticsearch.
2. Building the ELK + Filebeat + Kafka Distributed Log Management Platform
===========================================================================
2.1 Installing ELFK with Docker for log aggregation
====================================================
As our architecture evolved, Filebeat switched from shipping logs directly to Logstash to sending them to Kafka. The parts of filebeat.yml that change are:
```yaml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/logs/springboot/sparksys-authorization.log # the Spring Boot application log to read
  fields:
    # custom field identifying the log source
    log_source: authorization
- type: log
  enabled: true
  paths:
    - /var/logs/springboot/sparksys-gateway.log
  fields:
    log_source: gateway
- type: log
  enabled: true
  paths:
    - /var/logs/springboot/sparksys-file.log
  fields:
    log_source: file
- type: log
  enabled: true
  paths:
    - /var/logs/springboot/sparksys-oauth.log
  fields:
    log_source: oauth

#================================ Outputs =====================================

#-------------------------- Elasticsearch output ------------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
  #hosts: ["192.168.3.3:9200"]

#----------------------------- Logstash output --------------------------------
#output.logstash:
  # The Logstash hosts
  #hosts: ["logstash:5044"]

#------------------------------- Kafka output ---------------------------------
output.kafka:
  enabled: true
  hosts: ["192.168.3.3:9092"]
  topic: sparksys-log
```
This adds the Kafka output and comments out the Logstash output. `hosts` is the IP and port of the Kafka broker, and `topic: sparksys-log` tells Filebeat to publish events to the sparksys-log topic; adjust both to your own environment.
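Before restarting Filebeat it is worth sanity-checking the edited file. A minimal sketch, assuming filebeat.yml lives at the default path /etc/filebeat/filebeat.yml and that the Kafka broker from section 2.2 is already running (the console consumer script is on the PATH in the wurstmeister/kafka image used below):

```bash
# Validate the syntax of the modified configuration
filebeat test config -c /etc/filebeat/filebeat.yml

# After restarting Filebeat, watch the topic from inside the Kafka
# container to confirm that events are actually arriving
docker exec -it kafka kafka-console-consumer.sh \
  --bootstrap-server 192.168.3.3:9092 \
  --topic sparksys-log \
  --from-beginning
```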
Changing the logstash.conf input from Beats to Kafka
=====================================================
```conf
input {
  kafka {
    codec => "json"
    topics => ["sparksys-log"]
    bootstrap_servers => "192.168.3.3:9092"
    auto_offset_reset => "latest"
    group_id => "logstash-g1"
  }
}

output {
  elasticsearch {
    hosts => "es:9200"
    index => "filebeat_%{[fields][log_source]}-%{+YYYY.MM.dd}"
  }
}
```
Notes on the configuration above:

* `topics => ["sparksys-log"]` consumes messages from the Kafka topic named sparksys-log; configure this to match your own setup.
* `bootstrap_servers` is the IP and port of the Kafka broker.
* The index name interpolates the custom `log_source` field added in filebeat.yml, so each service gets its own daily index, e.g. filebeat_gateway-2020.06.01.
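Once the whole pipeline is running (the Kafka setup follows in section 2.2), a quick way to confirm that Logstash is indexing is to list the Filebeat indices in Elasticsearch. A sketch, assuming the Elasticsearch instance from the commented-out output above (192.168.3.3:9200):

```bash
# List all indices and keep only the ones created by this pipeline;
# one index per log_source per day is expected
curl -s 'http://192.168.3.3:9200/_cat/indices?v' | grep filebeat_
```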
That completes the changes on the ELFK side; next comes the Kafka setup.
2.2 Setting up Kafka with docker-compose
=========================================
The docker-compose configuration is as follows:
```yaml
version: '3'
services:
  zookeeper:
    image: zookeeper:latest
    container_name: zookeeper
    volumes:
      - /Users/zhouxinlei/docker/kafka/zookeeper/data:/data
      - /Users/zhouxinlei/docker/kafka/zookeeper/datalog:/datalog
    ports:
      - 2181:2181
    restart: always
  kafka:
    image: wurstmeister/kafka
    container_name: kafka
    volumes:
      - /Users/zhouxinlei/docker/kafka/data:/kafka
    ports:
      - 9092:9092
    environment:
      KAFKA_ADVERTISED_HOST_NAME: 192.168.3.3
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_PORT: 9092
      KAFKA_LOG_RETENTION_HOURS: 120
      KAFKA_MESSAGE_MAX_BYTES: 10000000
      KAFKA_REPLICA_FETCH_MAX_BYTES: 10000000
      KAFKA_GROUP_MAX_SESSION_TIMEOUT_MS: 60000
      KAFKA_NUM_PARTITIONS: 3
      KAFKA_DELETE_RETENTION_MS: 1000
    restart: always
  kafka-manager:
    image: kafkamanager/kafka-manager
    container_name: kafka-manager
    environment:
      ZK_HOSTS: 192.168.3.3
    ports:
      - 9001:9000
    restart: always
```
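One detail worth flagging: KAFKA_ADVERTISED_HOST_NAME is the address the broker hands back to clients, so it must be reachable from wherever Filebeat and Logstash run; here that is the host's LAN address 192.168.3.3, and advertising localhost instead would break connections from other machines. KAFKA_NUM_PARTITIONS: 3 also means the sparksys-log topic will be auto-created with three partitions on the first write.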
2.2.1 Start the containers
===========================
```bash
docker-compose up -d
```
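If anything fails to come up, the following quick checks usually locate the problem (container names as defined in the compose file above):

```bash
# All three services should show State "Up"
docker-compose ps

# Tail the broker log; a common failure is Kafka starting
# before ZooKeeper is ready to accept connections
docker logs -f kafka
```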
2.2.2 Visit http://192.168.3.3:9001
====================================================================================================
Open the kafka-manager web page and create a new cluster.
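When filling in the Add Cluster form, the field that matters is Cluster Zookeeper Hosts: point it at the ZooKeeper from the compose file, e.g. 192.168.3.3:2181; the remaining fields can stay at their defaults.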
![ELK + Filebeat + Kafka distributed log management platform](https://imgconvert.csdnimg.cn/aHR0cHM6Ly91cGxvYWQtaW1hZ2VzLmppYW5zaHUuaW8vdXBsb2FkX2ltYWdlcy8xMTQ3NDA4OC03MjUzMzdmMDBjOWJlZjMx?x-oss-process=image/format,png)
The cluster list is then displayed.
Open the kafka01 cluster.
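Inside the cluster view, the sparksys-log topic should appear with three partitions once Filebeat has produced data. The same check can be done from the CLI; a sketch, assuming the Kafka 2.x tooling shipped in the wurstmeister/kafka image (on Kafka 3.x the --zookeeper flag is removed and --bootstrap-server 192.168.3.3:9092 must be used instead):

```bash
# List all topics known to the broker
docker exec -it kafka kafka-topics.sh --zookeeper zookeeper:2181 --list

# Show partition count, leaders and replicas for the log topic
docker exec -it kafka kafka-topics.sh --zookeeper zookeeper:2181 \
  --describe --topic sparksys-log
```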