Merge branch 'master' into shipa

This commit is contained in:
Marcel Dempers 2020-12-09 20:58:15 +00:00 committed by GitHub
commit d31cff5cef
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
25 changed files with 27331 additions and 16 deletions

.gitignore vendored
@@ -1,12 +1,12 @@
c#/src/bin/
c#/src/obj/
node_modules/
__pycache__/
*.pem
*.csr
# terraform
.terraform
*.tfstate
*.tfstate.*
security/letsencrypt/introduction/certs/**
kubernetes/shipa/installs/shipa-helm-chart-1.1.1/

@@ -0,0 +1,177 @@
# Introduction to cert-manager for Kubernetes
## We need a Kubernetes cluster
Let's create a Kubernetes cluster to play with, using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/):
```
kind create cluster --name certmanager --image kindest/node:v1.19.1
```
## Concepts
It's important to understand the various concepts and new Kubernetes resources that <br/>
`cert-manager` introduces.
* Issuers [docs](https://cert-manager.io/docs/concepts/issuer/)
* Certificate [docs](https://cert-manager.io/docs/concepts/certificate/)
* CertificateRequests [docs](https://cert-manager.io/docs/concepts/certificaterequest/)
* Orders and Challenges [docs](https://cert-manager.io/docs/concepts/acme-orders-challenges/)
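In short: an `Issuer` (or cluster-wide `ClusterIssuer`) represents something that can sign certificates, a `Certificate` asks that issuer for a cert via `issuerRef`, and cert-manager drives the `CertificateRequest`, `Order` and `Challenge` resources for you, finally storing the key pair in the `Secret` named by `secretName`. A minimal sketch of how the resources relate (all names here are illustrative, not from the demo files):
```
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: demo-cert              # illustrative name
spec:
  dnsNames:
  - demo.example.com           # illustrative DNS name
  secretName: demo-cert-tls    # cert-manager writes tls.crt/tls.key here
  issuerRef:
    name: demo-issuer          # must match an existing Issuer/ClusterIssuer
    kind: Issuer
```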
## Installation
You can find the latest release for `cert-manager` on their [GitHub Releases page](https://github.com/jetstack/cert-manager/) <br/>
For this demo, I will use K8s 1.19 and `cert-manager` [v1.0.4](https://github.com/jetstack/cert-manager/releases/tag/v1.0.4)
```
# Get a container to work in
# mount our kubeconfig file and source code
docker run -it --rm -v ${HOME}:/root/ -v ${PWD}:/work -w /work --net host alpine sh
# install kubectl
apk add --no-cache curl
curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl
chmod +x ./kubectl
mv ./kubectl /usr/local/bin/kubectl
#test cluster access:
/work # kubectl get nodes
NAME STATUS ROLES AGE VERSION
certmanager-control-plane Ready master 3m6s v1.19.1
# get cert-manager
cd kubernetes/cert-manager/
curl -LO https://github.com/jetstack/cert-manager/releases/download/v1.0.4/cert-manager.yaml
mv cert-manager.yaml cert-manager-1.0.4.yaml
# install cert-manager
kubectl create ns cert-manager
kubectl apply --validate=false -f cert-manager-1.0.4.yaml
```
## Cert Manager Resources
We can see our components deployed
```
kubectl -n cert-manager get all
NAME READY STATUS RESTARTS AGE
pod/cert-manager-86548b886-2b8x7 1/1 Running 0 77s
pod/cert-manager-cainjector-6d59c8d4f7-hrs2v 1/1 Running 0 77s
pod/cert-manager-webhook-578954cdd-tphpj 1/1 Running 0 77s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/cert-manager ClusterIP 10.96.87.136 <none> 9402/TCP 77s
service/cert-manager-webhook ClusterIP 10.104.59.25 <none> 443/TCP 77s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/cert-manager 1/1 1 1 77s
deployment.apps/cert-manager-cainjector 1/1 1 1 77s
deployment.apps/cert-manager-webhook 1/1 1 1 77s
NAME DESIRED CURRENT READY AGE
replicaset.apps/cert-manager-86548b886 1 1 1 77s
replicaset.apps/cert-manager-cainjector-6d59c8d4f7 1 1 1 77s
replicaset.apps/cert-manager-webhook-578954cdd 1 1 1 77s
```
## Test Certificate Issuing
Let's create some test certificates
```
kubectl create ns cert-manager-test
kubectl apply -f ./selfsigned/issuer.yaml
kubectl apply -f ./selfsigned/certificate.yaml
kubectl describe certificate -n cert-manager-test
kubectl get secrets -n cert-manager-test
kubectl delete ns cert-manager-test
```
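To see what actually lands in the secret, you can decode and inspect its `tls.crt` field. The same inspection works on any PEM certificate, so here is a local `openssl` sketch that mimics what the selfsigned issuer produces (file names are illustrative):
```
# Generate a throwaway self-signed cert, roughly what the selfsigned
# Issuer does for the demo Certificate (CN matches its dnsNames entry):
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout demo-tls.key -out demo-tls.crt -days 30 \
  -subj "/CN=example.com"

# Inspect it, the same way you could inspect the decoded tls.crt field
# of the selfsigned-cert-tls secret before deleting the test namespace:
openssl x509 -in demo-tls.crt -noout -subject -issuer
```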
## Configuration
https://cert-manager.io/docs/configuration/
## Ingress Controller
Let's deploy an Ingress controller: <br/>
```
kubectl create ns ingress-nginx
kubectl -n ingress-nginx apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.41.2/deploy/static/provider/cloud/deploy.yaml
kubectl -n ingress-nginx get pods
kubectl -n ingress-nginx --address 0.0.0.0 port-forward svc/ingress-nginx-controller 80
kubectl -n ingress-nginx --address 0.0.0.0 port-forward svc/ingress-nginx-controller 443
```
We should be able to access NGINX in the browser and see a `404 Not Found` page: http://localhost/ <br/>
This indicates that there are no routes for `/` yet and that the Ingress controller is running.
## Setup my DNS
In my container, I can get the public IP address of my computer by running a simple command:
```
curl ifconfig.co
```
I can log in to my DNS provider and point a DNS A record to that IP. <br/>
I also set up my router to forward ports 80 and 443 to my PC. <br/>
If you are running in the cloud, your cloud provider will give your Ingress controller a
public IP, and you can point your DNS to that accordingly.
## Create Let's Encrypt Issuer for our cluster
We create a `ClusterIssuer` that allows us to issue certs in any namespace
```
kubectl apply -f cert-issuer-nginx-ingress.yaml
# check the issuer
kubectl describe clusterissuer letsencrypt-cluster-issuer
```
## Deploy a pod that uses SSL
```
kubectl apply -f .\kubernetes\deployments\
kubectl apply -f .\kubernetes\services\
kubectl get pods
# deploy an ingress route
kubectl apply -f .\kubernetes\cert-manager\ingress.yaml
```
## Issue Certificate
```
kubectl apply -f certificate.yaml
# check the cert has been issued
kubectl describe certificate example-app
# TLS created as a secret
kubectl get secrets
NAME TYPE DATA AGE
example-app-tls kubernetes.io/tls 2 84m
```
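A `kubernetes.io/tls` secret stores `tls.crt` and `tls.key` base64-encoded under `.data`. Against a live cluster you could check the expiry with something like `kubectl get secret example-app-tls -o jsonpath='{.data.tls\.crt}' | base64 -d | openssl x509 -noout -dates`. A local sketch of that base64 round-trip (throwaway cert, illustrative names):
```
# Simulate how the certificate sits base64-encoded in the secret's data:
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout app-tls.key -out app-tls.crt -days 30 -subj "/CN=marcel.guru"
encoded=$(base64 < app-tls.crt | tr -d '\n')

# Decode and check the validity window, as you would for example-app-tls:
echo "$encoded" | base64 -d | openssl x509 -noout -dates
```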

@@ -0,0 +1,14 @@
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-cluster-issuer
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: your-email@email.com
    privateKeySecretRef:
      name: letsencrypt-cluster-issuer-key
    solvers:
    - http01:
        ingress:
          class: nginx

File diff suppressed because it is too large

@@ -0,0 +1,12 @@
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: example-app
  namespace: default
spec:
  dnsNames:
  - marcel.guru
  secretName: example-app-tls
  issuerRef:
    name: letsencrypt-cluster-issuer
    kind: ClusterIssuer

@@ -0,0 +1,22 @@
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: "nginx"
  name: example-app
spec:
  tls:
  - hosts:
    - marcel.guru
    secretName: example-app-tls
  rules:
  - host: marcel.guru
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: example-service
            port:
              number: 80

@@ -0,0 +1,11 @@
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: selfsigned-cert
  namespace: cert-manager-test
spec:
  dnsNames:
  - example.com
  secretName: selfsigned-cert-tls
  issuerRef:
    name: test-selfsigned

@@ -0,0 +1,7 @@
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: test-selfsigned
  namespace: cert-manager-test
spec:
  selfSigned: {}

@@ -20,7 +20,7 @@ az login
az account list -o table
SUBSCRIPTION=<id>
-az account set --subscription <SubscriptionId-id-here>
+az account set --subscription $SUBSCRIPTION
```

@@ -1,4 +1,4 @@
<source>
@type tail
format json
read_from_head true

@@ -1,4 +1,4 @@
<source>
@type tail
format json
read_from_head true

@@ -2,4 +2,3 @@ FROM fluent/fluentd:v1.11-debian
USER root
RUN gem install fluent-plugin-elasticsearch
USER fluent

View File

@ -0,0 +1,127 @@
# Introduction to Fluentd on Kubernetes
## Prerequisites
You will need a basic understanding of Fluentd before you attempt to run it on Kubernetes.<br/>
Fluentd and Kubernetes have a bunch of moving parts.<br/>
To understand the basics of Fluentd, I highly recommend you start with this video: <br/>
<a href="https://youtu.be/Gp0-7oVOtPw" title="Fluentd"><img src="https://i.ytimg.com/vi/Gp0-7oVOtPw/hqdefault.jpg" width="50%" height="50%" alt="Fluentd" /></a>
The most important component to understand is the fluentd `tail` plugin. <br/>
This plugin is used to read logs from containers and pods on the file system and collect them.
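As a sketch, a minimal `tail` source looks like this (the path is where the kubelet symlinks container logs; the full configs used in this demo appear further down):
```
<source>
  @type tail
  path /var/log/containers/*.log      # pod/container logs on the node
  pos_file /var/log/fluentd.log.pos   # remembers how far each file was read
  tag kubernetes.*
  read_from_head true
  <parse>
    @type json                        # assumes JSON-formatted log lines
  </parse>
</source>
```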
## We need a Kubernetes cluster
Let's create a Kubernetes cluster to play with, using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/):
```
kind create cluster --name fluentd --image kindest/node:v1.19.1
```
## Fluentd Manifests
I would highly recommend using the manifests from the official fluentd [github repo](https://github.com/fluent/fluentd-kubernetes-daemonset) for production usage. <br/>
The manifests found here are purely for demo purposes. <br/>
They have been broken down and simplified for educational purposes. <br/>
<br/>
In this example I will use the most common use case and we'll break it down to get an understanding of each component.
## Fluentd Docker
I would recommend starting with the official [fluentd](https://hub.docker.com/r/fluent/fluentd/)
docker image. <br/>
You may want to build your own image if you need to install plugins;
in this demo I will be using the `fluentd` elasticsearch plugin. <br/>
It's pretty simple to adjust `fluentd` to send logs to any other destination in case you are not an `elasticsearch` user. <br/>
<br/>
Let's build our [docker image](https://github.com/marcel-dempers/docker-development-youtube-series/blob/master/monitoring/logging/fluentd/introduction/dockerfile) in the introduction folder:
```
cd .\monitoring\logging\fluentd\kubernetes\
#note: use your own tag!
docker build . -t aimvector/fluentd-demo
#note: use your own tag!
docker push aimvector/fluentd-demo
```
## Fluentd Namespace
I like to run certain infrastructure components in their own namespaces. <br/>
If you are using the official manifests, they may use the `kube-system` namespace instead. <br/>
You may want to adjust this carefully based on your preference. <br/>
Let's create a `fluentd` namespace: <br/>
```
kubectl create ns fluentd
```
## Fluentd Configmap
In my [fluentd introduction video](https://youtu.be/Gp0-7oVOtPw), I talk about how `fluentd` allows us to simplify our configs using the `include` statement. <br/>
This helps us prevent having a large complex file.
<br/>
We have 5 files in our `fluentd-configmap.yaml` :
* fluent.conf: our main config, which includes all the other configurations
* pods-kind-fluent.conf: `tail` config that sources all pod logs on the `kind` cluster. <br/>
  Note: a `kind` cluster writes its logs in a different format
* pods-fluent.conf: `tail` config that sources all pod logs on the `kubernetes` host in the cloud. <br/>
  Note: when running K8s in the cloud, logs may be in JSON format
* file-fluent.conf: `match` config that captures all logs and writes them to a file. <br/>
  Note: this is great for testing that log collection works
* elastic-fluent.conf: `match` config that captures all logs and sends them to `elasticsearch`
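On a `kind` (containerd) node, container log lines use the CRI format `<timestamp> <stream> <P|F> <message>`, which is what the regexp in `pods-kind-fluent.conf` parses. A quick local sanity check with a simplified, unnamed-group version of that pattern (the sample line is made up):
```
# CRI log line as written by containerd on a kind node (sample is illustrative)
sample='2020-12-09T10:15:30.123456789Z stdout F 0: Wed Dec 9 10:15:30 UTC 2020'

# Simplified version of the pods-kind-fluent.conf expression
# (no named groups, since grep -E does not support them)
pattern='^[0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9:]{8}\.[^Z]*Z (stdout|stderr) [PF] .*$'

echo "$sample" | grep -Eq "$pattern" && echo "parsed"
```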
Let's deploy our `configmap`:
```
kubectl apply -f .\monitoring\logging\fluentd\kubernetes\fluentd-configmap.yaml
```
## Fluentd Daemonset
Let's deploy our `daemonset`:
```
kubectl apply -f .\monitoring\logging\fluentd\kubernetes\fluentd-rbac.yaml
kubectl apply -f .\monitoring\logging\fluentd\kubernetes\fluentd.yaml
kubectl -n fluentd get pods
```
Let's deploy our example app that writes logs to `stdout`
```
kubectl apply -f .\monitoring\logging\fluentd\kubernetes\counter.yaml
kubectl get pods
```
## Demo ElasticSearch and Kibana
```
kubectl create ns elastic-kibana
# deploy elastic search
kubectl -n elastic-kibana apply -f .\monitoring\logging\fluentd\kubernetes\elastic\elastic-demo.yaml
kubectl -n elastic-kibana get pods
# deploy kibana
kubectl -n elastic-kibana apply -f .\monitoring\logging\fluentd\kubernetes\elastic\kibana-demo.yaml
kubectl -n elastic-kibana get pods
```
## Kibana
```
kubectl -n elastic-kibana port-forward svc/kibana 5601
```

@@ -0,0 +1,10 @@
apiVersion: v1
kind: Pod
metadata:
  name: counter
spec:
  containers:
  - name: count
    image: busybox
    args: [/bin/sh, -c,
           'i=0; while true; do echo "$i: $(date)"; i=$((i+1)); sleep 1; done']

@@ -0,0 +1,22 @@
# AUTOMATICALLY GENERATED
# DO NOT EDIT THIS FILE DIRECTLY, USE /templates/Gemfile.erb
source "https://rubygems.org"
gem "fluentd", "1.11.5"
gem "oj", "3.8.1"
gem "fluent-plugin-multi-format-parser", "~> 1.0.0"
gem "fluent-plugin-concat", "~> 2.4.0"
gem "fluent-plugin-grok-parser", "~> 2.6.0"
gem "fluent-plugin-prometheus", "~> 1.6.1"
gem 'fluent-plugin-json-in-json-2', ">= 1.0.2"
gem "fluent-plugin-record-modifier", "~> 2.0.0"
gem "fluent-plugin-detect-exceptions", "~> 0.0.12"
gem "fluent-plugin-rewrite-tag-filter", "~> 2.2.0"
gem "elasticsearch", "~> 7.0"
gem "fluent-plugin-elasticsearch", "~> 4.1.1"
gem "elasticsearch-xpack", "~> 7.0"
gem "fluent-plugin-dedot_filter", "~> 1.0"
gem "fluent-plugin-kubernetes_metadata_filter", "~> 2.5.0"
gem "ffi"
gem "fluent-plugin-systemd", "~> 1.0.1"

@@ -0,0 +1,152 @@
GEM
  remote: https://rubygems.org/
  specs:
    addressable (2.7.0)
      public_suffix (>= 2.0.2, < 5.0)
    concurrent-ruby (1.1.7)
    cool.io (1.7.0)
    domain_name (0.5.20190701)
      unf (>= 0.0.5, < 1.0.0)
    elasticsearch (7.9.0)
      elasticsearch-api (= 7.9.0)
      elasticsearch-transport (= 7.9.0)
    elasticsearch-api (7.9.0)
      multi_json
    elasticsearch-transport (7.9.0)
      faraday (~> 1)
      multi_json
    elasticsearch-xpack (7.9.0)
      elasticsearch-api (>= 6)
    excon (0.78.0)
    faraday (1.1.0)
      multipart-post (>= 1.2, < 3)
      ruby2_keywords
    ffi (1.13.1)
    ffi-compiler (1.0.1)
      ffi (>= 1.0.0)
      rake
    fluent-config-regexp-type (1.0.0)
      fluentd (> 1.0.0, < 2)
    fluent-plugin-concat (2.4.0)
      fluentd (>= 0.14.0, < 2)
    fluent-plugin-dedot_filter (1.0.0)
      fluentd (>= 0.14.0, < 2)
    fluent-plugin-detect-exceptions (0.0.13)
      fluentd (>= 0.10)
    fluent-plugin-elasticsearch (4.1.4)
      elasticsearch
      excon
      fluentd (>= 0.14.22)
    fluent-plugin-grok-parser (2.6.2)
      fluentd (>= 0.14.6, < 2)
    fluent-plugin-json-in-json-2 (1.0.2)
      fluentd (>= 0.14.0, < 2)
      yajl-ruby (~> 1.0)
    fluent-plugin-kubernetes_metadata_filter (2.5.2)
      fluentd (>= 0.14.0, < 1.12)
      kubeclient (< 5)
      lru_redux
    fluent-plugin-multi-format-parser (1.0.0)
      fluentd (>= 0.14.0, < 2)
    fluent-plugin-prometheus (1.6.1)
      fluentd (>= 0.14.20, < 2)
      prometheus-client (< 0.10)
    fluent-plugin-record-modifier (2.0.1)
      fluentd (>= 1.0, < 2)
    fluent-plugin-rewrite-tag-filter (2.2.0)
      fluent-config-regexp-type
      fluentd (>= 0.14.2, < 2)
    fluent-plugin-systemd (1.0.2)
      fluentd (>= 0.14.11, < 2)
      systemd-journal (~> 1.3.2)
    fluentd (1.11.5)
      cool.io (>= 1.4.5, < 2.0.0)
      http_parser.rb (>= 0.5.1, < 0.7.0)
      msgpack (>= 1.3.1, < 2.0.0)
      serverengine (>= 2.2.2, < 3.0.0)
      sigdump (~> 0.2.2)
      strptime (>= 0.2.2, < 1.0.0)
      tzinfo (>= 1.0, < 3.0)
      tzinfo-data (~> 1.0)
      yajl-ruby (~> 1.0)
    http (4.4.1)
      addressable (~> 2.3)
      http-cookie (~> 1.0)
      http-form_data (~> 2.2)
      http-parser (~> 1.2.0)
    http-accept (1.7.0)
    http-cookie (1.0.3)
      domain_name (~> 0.5)
    http-form_data (2.3.0)
    http-parser (1.2.1)
      ffi-compiler (>= 1.0, < 2.0)
    http_parser.rb (0.6.0)
    jsonpath (1.0.5)
      multi_json
      to_regexp (~> 0.2.1)
    kubeclient (4.9.1)
      http (>= 3.0, < 5.0)
      jsonpath (~> 1.0)
      recursive-open-struct (~> 1.1, >= 1.1.1)
      rest-client (~> 2.0)
    lru_redux (1.1.0)
    mime-types (3.3.1)
      mime-types-data (~> 3.2015)
    mime-types-data (3.2020.1104)
    msgpack (1.3.3)
    multi_json (1.15.0)
    multipart-post (2.1.1)
    netrc (0.11.0)
    oj (3.8.1)
    prometheus-client (0.9.0)
      quantile (~> 0.2.1)
    public_suffix (4.0.6)
    quantile (0.2.1)
    rake (13.0.1)
    recursive-open-struct (1.1.3)
    rest-client (2.1.0)
      http-accept (>= 1.7.0, < 2.0)
      http-cookie (>= 1.0.2, < 2.0)
      mime-types (>= 1.16, < 4.0)
      netrc (~> 0.8)
    ruby2_keywords (0.0.2)
    serverengine (2.2.2)
      sigdump (~> 0.2.2)
    sigdump (0.2.4)
    strptime (0.2.5)
    systemd-journal (1.3.3)
      ffi (~> 1.9)
    to_regexp (0.2.1)
    tzinfo (2.0.3)
      concurrent-ruby (~> 1.0)
    tzinfo-data (1.2020.4)
      tzinfo (>= 1.0.0)
    unf (0.1.4)
      unf_ext
    unf_ext (0.0.7.7)
    yajl-ruby (1.4.1)

PLATFORMS
  ruby

DEPENDENCIES
  elasticsearch (~> 7.0)
  elasticsearch-xpack (~> 7.0)
  ffi
  fluent-plugin-concat (~> 2.4.0)
  fluent-plugin-dedot_filter (~> 1.0)
  fluent-plugin-detect-exceptions (~> 0.0.12)
  fluent-plugin-elasticsearch (~> 4.1.1)
  fluent-plugin-grok-parser (~> 2.6.0)
  fluent-plugin-json-in-json-2 (>= 1.0.2)
  fluent-plugin-kubernetes_metadata_filter (~> 2.5.0)
  fluent-plugin-multi-format-parser (~> 1.0.0)
  fluent-plugin-prometheus (~> 1.6.1)
  fluent-plugin-record-modifier (~> 2.0.0)
  fluent-plugin-rewrite-tag-filter (~> 2.2.0)
  fluent-plugin-systemd (~> 1.0.1)
  fluentd (= 1.11.5)
  oj (= 3.8.1)

BUNDLED WITH
   2.1.4

@@ -0,0 +1,42 @@
FROM fluent/fluentd:v1.11-debian
USER root
WORKDIR /home/fluent
ENV PATH /fluentd/vendor/bundle/ruby/2.6.0/bin:$PATH
ENV GEM_PATH /fluentd/vendor/bundle/ruby/2.6.0
ENV GEM_HOME /fluentd/vendor/bundle/ruby/2.6.0
# skip runtime bundler installation
ENV FLUENTD_DISABLE_BUNDLER_INJECTION 1
COPY Gemfile* /fluentd/
RUN buildDeps="sudo make gcc g++ libc-dev libffi-dev" \
    runtimeDeps="" \
    && apt-get update \
    && apt-get upgrade -y \
    && apt-get install \
       -y --no-install-recommends \
       $buildDeps $runtimeDeps net-tools \
    && gem install bundler --version 2.1.4 \
    && bundle config silence_root_warning true \
    && bundle install --gemfile=/fluentd/Gemfile --path=/fluentd/vendor/bundle \
    && SUDO_FORCE_REMOVE=yes \
       apt-get purge -y --auto-remove \
       -o APT::AutoRemove::RecommendsImportant=false \
       $buildDeps \
    && rm -rf /var/lib/apt/lists/* \
    && gem sources --clear-all \
    && rm -rf /tmp/* /var/tmp/* /usr/lib/ruby/gems/*/cache/*.gem
RUN touch /fluentd/etc/disable.conf
# Copy plugins
COPY plugins /fluentd/plugins/
COPY entrypoint.sh /fluentd/entrypoint.sh
# Environment variables
ENV FLUENTD_OPT=""
ENV FLUENTD_CONF="fluent.conf"
# Overwrite ENTRYPOINT to run fluentd as root for /var/log / /var/lib
ENTRYPOINT ["tini", "--", "/fluentd/entrypoint.sh"]

@@ -0,0 +1,3 @@
#!/usr/bin/env sh
exec fluentd -c /fluentd/etc/${FLUENTD_CONF} -p /fluentd/plugins

@@ -0,0 +1,68 @@
#
# Fluentd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# The following Fluentd parser plugin, aims to simplify the parsing of multiline
# logs found in Kubernetes nodes. Since many log files shared the same format and
# in order to simplify the configuration, this plugin provides a 'kubernetes' format
# parser (built on top of MultilineParser).
#
# When tailing files, this 'kubernetes' format should be applied to the following
# log file sources:
#
# - /var/log/kubelet.log
# - /var/log/kube-proxy.log
# - /var/log/kube-apiserver.log
# - /var/log/kube-controller-manager.log
# - /var/log/kube-scheduler.log
# - /var/log/rescheduler.log
# - /var/log/glbc.log
# - /var/log/cluster-autoscaler.log
#
# Usage:
#
# ---- fluentd.conf ----
#
# <source>
# @type tail
# path ./kubelet.log
# read_from_head yes
# tag kubelet
# <parse>
# @type kubernetes
# </parse>
# </source>
#
# ---- EOF ---
require 'fluent/plugin/parser_regexp'

module Fluent
  module Plugin
    class KubernetesParser < RegexpParser
      Fluent::Plugin.register_parser("kubernetes", self)

      CONF_FORMAT_FIRSTLINE = %q{/^\w\d{4}/}
      CONF_FORMAT1 = %q{/^(?<severity>\w)(?<time>\d{4} [^\s]*)\s+(?<pid>\d+)\s+(?<source>[^ \]]+)\] (?<message>.*)/m}
      CONF_TIME_FORMAT = "%m%d %H:%M:%S.%N"

      def configure(conf)
        conf['expression'] = CONF_FORMAT1
        conf['time_format'] = CONF_TIME_FORMAT
        super
      end
    end
  end
end

@@ -0,0 +1,69 @@
#
# Fluentd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# The following Fluentd parser plugin, aims to simplify the parsing of multiline
# logs found in Kubernetes nodes. Since many log files shared the same format and
# in order to simplify the configuration, this plugin provides a 'kubernetes' format
# parser (built on top of MultilineParser).
#
# When tailing files, this 'kubernetes' format should be applied to the following
# log file sources:
#
# - /var/log/kubelet.log
# - /var/log/kube-proxy.log
# - /var/log/kube-apiserver.log
# - /var/log/kube-controller-manager.log
# - /var/log/kube-scheduler.log
# - /var/log/rescheduler.log
# - /var/log/glbc.log
# - /var/log/cluster-autoscaler.log
#
# Usage:
#
# ---- fluentd.conf ----
#
# <source>
# @type tail
# path ./kubelet.log
# read_from_head yes
# tag kubelet
# <parse>
# @type multiline_kubernetes
# </parse>
# </source>
#
# ---- EOF ---
require 'fluent/plugin/parser_multiline'

module Fluent
  module Plugin
    class MultilineKubernetesParser < MultilineParser
      Fluent::Plugin.register_parser("multiline_kubernetes", self)

      CONF_FORMAT_FIRSTLINE = %q{/^\w\d{4}/}
      CONF_FORMAT1 = %q{/^(?<severity>\w)(?<time>\d{4} [^\s]*)\s+(?<pid>\d+)\s+(?<source>[^ \]]+)\] (?<message>.*)/}
      CONF_TIME_FORMAT = "%m%d %H:%M:%S.%N"

      def configure(conf)
        conf['format_firstline'] = CONF_FORMAT_FIRSTLINE
        conf['format1'] = CONF_FORMAT1
        conf['time_format'] = CONF_TIME_FORMAT
        super
      end
    end
  end
end

@@ -0,0 +1,53 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: elasticsearch
  labels:
    app: elasticsearch
spec:
  selector:
    matchLabels:
      app: elasticsearch
  replicas: 1
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      initContainers:
      - name: vm-max-fix
        image: busybox
        command: ["sysctl", "-w", "vm.max_map_count=262144"]
        securityContext:
          privileged: true
      containers:
      - name: elasticsearch
        image: elasticsearch:7.9.1
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9200
        env:
        - name: node.name
          value: "elasticsearch"
        - name: cluster.initial_master_nodes
          value: "elasticsearch"
        - name: bootstrap.memory_lock
          value: "false"
        - name: ES_JAVA_OPTS
          value: "-Xms512m -Xmx512m"
---
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
  labels:
    app: elasticsearch
spec:
  type: ClusterIP
  selector:
    app: elasticsearch
  ports:
  - protocol: TCP
    name: http
    port: 9200
    targetPort: 9200

@@ -0,0 +1,43 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana
  labels:
    app: kibana
spec:
  selector:
    matchLabels:
      app: kibana
  replicas: 1
  template:
    metadata:
      labels:
        app: kibana
    spec:
      containers:
      - name: kibana
        image: kibana:7.9.1
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 5601
        env:
        - name: ELASTICSEARCH_URL
          value: "http://elasticsearch:9200"
        - name: ELASTICSEARCH_HOSTS
          value: "http://elasticsearch:9200"
---
apiVersion: v1
kind: Service
metadata:
  name: kibana
  labels:
    app: kibana
spec:
  type: ClusterIP
  selector:
    app: kibana
  ports:
  - protocol: TCP
    name: http
    port: 5601
    targetPort: 5601

@@ -0,0 +1,81 @@
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluentd-config
  namespace: fluentd
data:
  fluent.conf: |-
    ################################################################
    # This source gets all logs from local docker host
    @include pods-kind-fluent.conf
    #@include pods-fluent.conf
    #@include file-fluent.conf
    @include elastic-fluent.conf
  pods-kind-fluent.conf: |-
    <source>
      @type tail
      read_from_head true
      tag kubernetes.*
      path /var/log/containers/*.log
      pos_file /var/log/fluentd-containers.log.pos
      exclude_path ["/var/log/containers/fluent*"]
      <parse>
        @type regexp
        #https://regex101.com/r/ZkOBTI/1
        expression ^(?<time>\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}.[^Z]*Z)\s(?<stream>[^\s]+)\s(?<character>[^\s])\s(?<message>.*)$
        #time_format %Y-%m-%dT%H:%M:%S.%NZ
      </parse>
    </source>
    <filter kubernetes.**>
      @type kubernetes_metadata
      @id filter_kube_metadata
      kubernetes_url "#{ENV['FLUENT_FILTER_KUBERNETES_URL'] || 'https://' + ENV.fetch('KUBERNETES_SERVICE_HOST') + ':' + ENV.fetch('KUBERNETES_SERVICE_PORT') + '/api'}"
      verify_ssl "#{ENV['KUBERNETES_VERIFY_SSL'] || true}"
      ca_file "#{ENV['KUBERNETES_CA_FILE']}"
      skip_labels "#{ENV['FLUENT_KUBERNETES_METADATA_SKIP_LABELS'] || 'false'}"
      skip_container_metadata "#{ENV['FLUENT_KUBERNETES_METADATA_SKIP_CONTAINER_METADATA'] || 'false'}"
      skip_master_url "#{ENV['FLUENT_KUBERNETES_METADATA_SKIP_MASTER_URL'] || 'false'}"
      skip_namespace_metadata "#{ENV['FLUENT_KUBERNETES_METADATA_SKIP_NAMESPACE_METADATA'] || 'false'}"
    </filter>
  pods-fluent.conf: |-
    <source>
      @type tail
      read_from_head true
      tag kubernetes.*
      path /var/log/containers/*.log
      pos_file /var/log/fluentd-containers.log.pos
      exclude_path ["/var/log/containers/fluent*"]
      <parse>
        @type kubernetes
        @type "#{ENV['FLUENT_CONTAINER_TAIL_PARSER_TYPE'] || 'json'}"
        time_format %Y-%m-%dT%H:%M:%S.%NZ
      </parse>
    </source>
    <filter kubernetes.**>
      @type kubernetes_metadata
      @id filter_kube_metadata
      kubernetes_url "#{ENV['FLUENT_FILTER_KUBERNETES_URL'] || 'https://' + ENV.fetch('KUBERNETES_SERVICE_HOST') + ':' + ENV.fetch('KUBERNETES_SERVICE_PORT') + '/api'}"
      verify_ssl "#{ENV['KUBERNETES_VERIFY_SSL'] || true}"
      ca_file "#{ENV['KUBERNETES_CA_FILE']}"
      skip_labels "#{ENV['FLUENT_KUBERNETES_METADATA_SKIP_LABELS'] || 'false'}"
      skip_container_metadata "#{ENV['FLUENT_KUBERNETES_METADATA_SKIP_CONTAINER_METADATA'] || 'false'}"
      skip_master_url "#{ENV['FLUENT_KUBERNETES_METADATA_SKIP_MASTER_URL'] || 'false'}"
      skip_namespace_metadata "#{ENV['FLUENT_KUBERNETES_METADATA_SKIP_NAMESPACE_METADATA'] || 'false'}"
    </filter>
  file-fluent.conf: |-
    <match **>
      @type file
      path /tmp/file-test.log
    </match>
  elastic-fluent.conf: |-
    <match **>
      @type elasticsearch
      host "#{ENV['FLUENT_ELASTICSEARCH_HOST'] || 'elasticsearch.elastic-kibana'}"
      port "#{ENV['FLUENT_ELASTICSEARCH_PORT'] || '9200'}"
      index_name fluentd-k8s
      type_name fluentd
    </match>

@@ -0,0 +1,34 @@
apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluentd
  namespace: fluentd
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: fluentd
  namespace: fluentd
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - namespaces
  verbs:
  - get
  - list
  - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: fluentd
roleRef:
  kind: ClusterRole
  name: fluentd
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: fluentd
  namespace: fluentd

@@ -0,0 +1,58 @@
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: fluentd
  labels:
    k8s-app: fluentd-logging
    version: v1
spec:
  selector:
    matchLabels:
      k8s-app: fluentd-logging
      version: v1
  template:
    metadata:
      labels:
        k8s-app: fluentd-logging
        version: v1
    spec:
      serviceAccount: fluentd
      serviceAccountName: fluentd
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: fluentd
        imagePullPolicy: "Always"
        image: aimvector/fluentd-demo
        env:
        - name: FLUENT_ELASTICSEARCH_HOST
          value: "elasticsearch.elastic-kibana"
        - name: FLUENT_ELASTICSEARCH_PORT
          value: "9200"
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: fluentd-config
          mountPath: /fluentd/etc
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      terminationGracePeriodSeconds: 30
      volumes:
      - name: fluentd-config
        configMap:
          name: fluentd-config
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers