fluent-k8s wip

This commit is contained in:
marcel-dempers 2020-11-08 16:00:38 +11:00 committed by Marcel Dempers
parent 2b4df899b1
commit f9b9a3f1fe
9 changed files with 247 additions and 21 deletions


@ -1,4 +1,4 @@
<source>
  @type tail
  format json
  read_from_head true


@ -1,4 +1,4 @@
<source>
  @type tail
  format json
  read_from_head true


@ -2,4 +2,3 @@ FROM fluent/fluentd:v1.11-debian
USER root
RUN gem install fluent-plugin-elasticsearch
USER fluent


@ -1,5 +1,13 @@
# Introduction to Fluentd on Kubernetes
## Prerequisites
You will need a basic understanding of Fluentd before you attempt to run it on Kubernetes.<br/>
Fluentd and Kubernetes have a bunch of moving parts.<br/>
To understand the basics of Fluentd, I highly recommend you start with this video: <br/>
<a href="https://youtu.be/Gp0-7oVOtPw" title="Fluentd"><img src="https://i.ytimg.com/vi/Gp0-7oVOtPw/hqdefault.jpg" width="50%" height="50%" alt="Fluentd" /></a>
## We need a Kubernetes cluster
Let's create a Kubernetes cluster to play with, using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/)
@ -10,8 +18,96 @@ kind create cluster --name fluentd --image kindest/node:v1.19.1
## Fluentd Manifests
I would highly recommend using the manifests from the official fluentd [github repo](https://github.com/fluent/fluentd-kubernetes-daemonset) for production usage <br/>
The manifests in this repo are broken down and simplified for educational purposes. <br/>
<br/>
In this example I will use the most common use case, and we'll break it down to get an understanding of each component.
## Fluentd Docker
I would recommend starting with the official [fluentd](https://hub.docker.com/r/fluent/fluentd/)
docker image. <br/>
You may want to build your own image if you need to install plugins.
In this demo I will be using the `fluentd` `elasticsearch` plugin. <br/>
It's pretty simple to adjust `fluentd` to send logs to any other destination in case you are not an `elasticsearch` user. <br/>
<br/>
Let's build our [docker image](https://github.com/marcel-dempers/docker-development-youtube-series/blob/master/monitoring/logging/fluentd/introduction/dockerfile) in the introduction folder:
```
cd monitoring\logging\fluentd\introduction
#note: use your own tag!
docker build . -t aimvector/fluentd-demo
#note: use your own tag!
docker push aimvector/fluentd-demo
```
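Since the demo cluster runs in `kind`, you can also skip pushing to a registry and load the locally built image straight onto the cluster nodes (this assumes the cluster name `fluentd` from the `kind create cluster` step above):

```shell
# loads the image from the local docker daemon into the kind cluster nodes
kind load docker-image aimvector/fluentd-demo --name fluentd
```

This avoids a registry round-trip while iterating on the image locally.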
## Fluentd Namespace
I like to run certain infrastructure components in their own namespaces. <br/>
If you are using the official manifests, they may be using the `kube-system` namespace instead. <br/>
You may want to carefully adjust it based on your preference <br/>
Let's create a `fluentd` namespace: <br/>
```
kubectl create ns fluentd
```
## Fluentd Configmap
In my [fluentd introduction video](https://youtu.be/Gp0-7oVOtPw), I talk about how `fluentd` allows us to simplify our configs using the `include` statement. <br/>
This helps us avoid having one large, complex file.
<br/>
We have 3 files in our `fluentd-configmap.yaml` :
* fluent.conf: Our main config which includes all other configurations
* pods-fluent.conf: `tail` config that sources all pod logs on the `kubernetes` host
* file-fluent.conf: `match` config that captures all logs and writes them to a file, for testing log collection
* elastic-fluent.conf: `match` config that captures all logs and sends them to `elasticsearch`
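Tying these together, `fluent.conf` itself stays tiny; it does little more than include the other files (this mirrors the `configmap` in this commit, with the file output commented out):

```
# fluent.conf - entrypoint: pull in the source and match configs
@include pods-fluent.conf
#@include file-fluent.conf
@include elastic-fluent.conf
```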
Let's deploy our `configmap`:
```
kubectl apply -f .\monitoring\logging\fluentd\kubernetes\fluentd-configmap.yaml
```
## Fluentd Daemonset
Let's deploy our `daemonset`:
```
kubectl apply -f .\monitoring\logging\fluentd\kubernetes\fluentd-rbac.yaml
kubectl apply -f .\monitoring\logging\fluentd\kubernetes\fluentd.yaml
kubectl -n fluentd get pods
```
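Once the pods are up, you can confirm `fluentd` loaded its config and is tailing the container logs by checking its own logs, using the `k8s-app` label from the `daemonset`:

```shell
# tail the fluentd daemonset pods' own output
kubectl -n fluentd logs -l k8s-app=fluentd-logging --tail=20
```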
Later, when logs start flowing into Kibana, you can hide `fluentd`'s own noise (unparsed lines and its tailing of `/var/log/containers/`) with a search query like: <br/>
`NOT message:("pattern not matched") and NOT message:("/var/log/containers/")`
## Demo ElasticSearch and Kibana
```
kubectl create ns elastic-kibana
kubectl -n elastic-kibana apply -f .\monitoring\logging\fluentd\kubernetes\elastic\elastic-demo.yaml
kubectl -n elastic-kibana apply -f .\monitoring\logging\fluentd\kubernetes\elastic\kibana-demo.yaml
```
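`elasticsearch` can take a little while to start, so it's worth waiting until both pods are ready before moving on:

```shell
# watch the demo pods until elasticsearch and kibana are Running and ready
kubectl -n elastic-kibana get pods -w
```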
## Kibana
```
kubectl -n elastic-kibana port-forward svc/kibana 5601
```
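Kibana should now be available on http://localhost:5601. To double-check that logs are actually reaching `elasticsearch`, you can also port-forward it directly and list the indices; the `fluentd-k8s` index name comes from `index_name` in our `configmap` (this assumes the demo service is named `elasticsearch`, matching the `elasticsearch.elastic-kibana` host used by fluentd):

```shell
# expose elasticsearch locally and list its indices
kubectl -n elastic-kibana port-forward svc/elasticsearch 9200 &
curl "http://localhost:9200/_cat/indices?v"
```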


@ -14,21 +14,27 @@ spec:
      labels:
        app: elasticsearch
    spec:
      initContainers:
      - name: vm-max-fix
        image: busybox
        command: ["sysctl", "-w", "vm.max_map_count=262144"]
        securityContext:
          privileged: true
      containers:
      - name: elasticsearch
        image: elasticsearch:7.9.1
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9200
        env:
        - name: node.name
          value: "elasticsearch"
        - name: cluster.initial_master_nodes
          value: "elasticsearch"
        - name: bootstrap.memory_lock
          value: "false"
        - name: ES_JAVA_OPTS
          value: "-Xms512m -Xmx512m"
---
apiVersion: v1
kind: Service


@ -17,14 +17,14 @@ spec:
      containers:
      - name: kibana
        image: kibana:7.9.1
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 5601
        env:
        - name: ELASTICSEARCH_URL
          value: "http://elasticsearch:9200"
        - name: ELASTICSEARCH_HOSTS
          value: "http://elasticsearch:9200"
---
apiVersion: v1
kind: Service


@ -0,0 +1,39 @@
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluentd-config
  namespace: fluentd
data:
  fluent.conf: |-
    ################################################################
    # This source gets all logs from local docker host
    @include pods-fluent.conf
    #@include file-fluent.conf
    @include elastic-fluent.conf
  pods-fluent.conf: |-
    <source>
      @type tail
      read_from_head true
      tag kubernetes.*
      path /var/log/containers/*.log
      pos_file /var/log/fluentd-containers.log.pos
      exclude_path ["/var/log/containers/fluent*"]
      <parse>
        @type json
        time_format %Y-%m-%dT%H:%M:%S.%NZ
      </parse>
    </source>
  file-fluent.conf: |-
    <match **>
      @type file
      path /tmp/file-test.log
    </match>
  elastic-fluent.conf: |-
    <match **>
      @type elasticsearch
      host "#{ENV['FLUENT_ELASTICSEARCH_HOST'] || 'elasticsearch.elastic-kibana'}"
      port "#{ENV['FLUENT_ELASTICSEARCH_PORT'] || '9200'}"
      index_name fluentd-k8s
      type_name fluentd
    </match>


@ -0,0 +1,34 @@
apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluentd
  namespace: fluentd
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: fluentd
  namespace: fluentd
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - namespaces
  verbs:
  - get
  - list
  - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: fluentd
roleRef:
  kind: ClusterRole
  name: fluentd
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: fluentd
  namespace: fluentd


@ -0,0 +1,52 @@
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: fluentd
  labels:
    k8s-app: fluentd-logging
    version: v1
spec:
  selector:
    matchLabels:
      k8s-app: fluentd-logging
      version: v1
  template:
    metadata:
      labels:
        k8s-app: fluentd-logging
        version: v1
    spec:
      serviceAccount: fluentd
      serviceAccountName: fluentd
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: fluentd
        imagePullPolicy: "Always"
        image: aimvector/fluentd-demo
        env:
        - name: FLUENT_ELASTICSEARCH_HOST
          value: "elasticsearch.elastic-kibana"
        - name: FLUENT_ELASTICSEARCH_PORT
          value: "9200"
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: fluentd-config
          mountPath: /fluentd/etc
        - name: varlog
          mountPath: /var/log
      terminationGracePeriodSeconds: 30
      volumes:
      - name: fluentd-config
        configMap:
          name: fluentd-config
      - name: varlog
        hostPath:
          path: /var/log