Merge pull request #151 from marcel-dempers/servicemonitors

servicemonitors
marceldempers 2022-08-07 13:48:05 +10:00 committed by GitHub
commit 52db04db21
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
7 changed files with 199 additions and 3 deletions

@@ -0,0 +1,110 @@
# Introduction to Service Monitors
In order to understand service monitors, we first need to understand how to monitor a
Kubernetes environment. </br>
You will need a base understanding of Kubernetes and of the `kube-prometheus` monitoring stack. </br>
Check out the video [How to monitor Kubernetes in 2022](https://youtu.be/YDtuwlNTzRc):
<a href="https://youtu.be/YDtuwlNTzRc" title="Monitoring Kubernetes"><img src="https://i.ytimg.com/vi/YDtuwlNTzRc/hqdefault.jpg" width="50%" alt="Monitoring Kubernetes" /></a>
## Create a Kubernetes cluster
```
# create cluster
kind create cluster --name monitoring --image kindest/node:v1.23.5
# see cluster up and running
kubectl get nodes
NAME STATUS ROLES AGE VERSION
monitoring-control-plane Ready control-plane,master 2m12s v1.23.5
```
## Deploy kube-prometheus
Installation:
```
kubectl create -f ./monitoring/prometheus/kubernetes/1.23/manifests/setup/
kubectl create -f ./monitoring/prometheus/kubernetes/1.23/manifests/
```
Check the install:
```
kubectl -n monitoring get pods
```
After a few minutes, everything should be up and running:
```
kubectl -n monitoring get pods
NAME READY STATUS RESTARTS AGE
alertmanager-main-0 2/2 Running 0 3m10s
alertmanager-main-1 2/2 Running 0 3m10s
alertmanager-main-2 2/2 Running 0 3m10s
blackbox-exporter-6b79c4588b-t4czf 3/3 Running 0 4m7s
grafana-7fd69887fb-zm2d2 1/1 Running 0 4m7s
kube-state-metrics-55f67795cd-f7frb 3/3 Running 0 4m6s
node-exporter-xjdtn 2/2 Running 0 4m6s
prometheus-adapter-85664b6b74-bvmnj 1/1 Running 0 4m6s
prometheus-adapter-85664b6b74-mcgbz 1/1 Running 0 4m6s
prometheus-k8s-0 2/2 Running 0 3m9s
prometheus-k8s-1 2/2 Running 0 3m9s
prometheus-operator-6dc9f66cb7-z98nj 2/2 Running 0 4m6s
```
## View dashboards
```
kubectl -n monitoring port-forward svc/grafana 3000
```
Then access Grafana on [localhost:3000](http://localhost:3000)
## Access Prometheus
```
kubectl -n monitoring port-forward svc/prometheus-operated 9090
```
Then access Prometheus on [localhost:9090](http://localhost:9090).
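Besides the UI, Prometheus serves an HTTP API on the same port. With the `port-forward` above still running, an instant query for the `up` metric shows which scrape targets are healthy (a quick sketch, assuming `curl` and `python3` are available on your machine):

```
# Ask Prometheus for the current value of "up" (1 = target healthy)
# and print one "<job> <value>" line per scraped target:
curl -s 'http://localhost:9090/api/v1/query?query=up' \
  | python3 -c 'import json,sys
for r in json.load(sys.stdin)["data"]["result"]:
    print(r["metric"].get("job", "?"), r["value"][1])'
```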
## Create our own Prometheus
```
kubectl apply -n monitoring -f ./kubernetes/servicemonitors/prometheus.yaml
```
View our Prometheus instance's pod, `prometheus-applications-0`:
```
kubectl -n monitoring get pods
```
Check out our Prometheus UI:
```
kubectl -n monitoring port-forward prometheus-applications-0 9090
```
## Deploy a service monitor for example app
```
kubectl -n default apply -f ./kubernetes/servicemonitors/servicemonitor.yaml
```
After applying the service monitor, if Prometheus is correctly selecting it, we should see the item appear under the [Service Discovery](http://localhost:9090/service-discovery) page in Prometheus. </br>
Double-check with `port-forward` before proceeding. </br>
If it does not appear, your Prometheus instance is not selecting the service monitor: there is a label mismatch on either the namespace or the service monitor itself. </br>
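As a reference for that label matching, these are the two selectors our `applications` Prometheus instance uses (taken from `prometheus.yaml` in this repository), with comments on what each must find:

```
# Prometheus only searches namespaces carrying this label
# (the default namespace has it automatically on Kubernetes 1.21+):
serviceMonitorNamespaceSelector:
  matchLabels:
    kubernetes.io/metadata.name: default
# ...and inside those namespaces, an empty selector matches
# every service monitor:
serviceMonitorSelector: {}
```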
## Deploy our example app
```
kubectl -n default apply -f ./kubernetes/servicemonitors/example-app/
```
Now we should see a target in the Prometheus [Targets](http://localhost:9090/targets) page. </br>
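The same check can be scripted against the targets API; with the `port-forward` to our `applications` instance still running, the example-app endpoints should report a health of `up` (again assuming `curl` and `python3` locally):

```
# List every active target with its scrape health:
curl -s http://localhost:9090/api/v1/targets \
  | python3 -c 'import json,sys
for t in json.load(sys.stdin)["data"]["activeTargets"]:
    print(t["labels"].get("job", "?"), t["health"])'
```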

@@ -0,0 +1,27 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-deploy
  labels:
    app: example-app
spec:
  selector:
    matchLabels:
      app: example-app
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
      - name: example-app
        image: aimvector/python:metrics
        imagePullPolicy: Always
        ports:
        - containerPort: 5000

@@ -0,0 +1,15 @@
apiVersion: v1
kind: Service
metadata:
  name: example-service
  labels:
    app: example-app
spec:
  type: ClusterIP
  selector:
    app: example-app
  ports:
  - protocol: TCP
    name: web
    port: 80
    targetPort: 5000

@@ -0,0 +1,31 @@
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  labels:
    app.kubernetes.io/component: prometheus
    app.kubernetes.io/instance: k8s
    app.kubernetes.io/name: prometheus
    app.kubernetes.io/part-of: kube-prometheus
    app.kubernetes.io/version: 2.32.1
  name: applications
  namespace: monitoring
spec:
  image: quay.io/prometheus/prometheus:v2.32.1
  nodeSelector:
    kubernetes.io/os: linux
  replicas: 1
  resources:
    requests:
      memory: 400Mi
  ruleSelector: {}
  securityContext:
    fsGroup: 2000
    runAsNonRoot: true
    runAsUser: 1000
  serviceAccountName: prometheus-k8s
  #serviceMonitorNamespaceSelector: {} #match all namespaces
  serviceMonitorNamespaceSelector:
    matchLabels:
      kubernetes.io/metadata.name: default
  serviceMonitorSelector: {} #match all servicemonitors
  version: 2.32.1

@@ -0,0 +1,13 @@
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: example-app
  namespace: default
  labels:
    name: example-app
spec:
  endpoints:
  - interval: 30s
    port: web
  selector:
    matchLabels:
      app: example-app

@@ -1,4 +1,4 @@
-FROM python:3.7.3-alpine3.9 as prod
+FROM python:3.10.5-alpine3.16 as prod
 RUN mkdir /app/
 WORKDIR /app/

@@ -1,2 +1,2 @@
-Flask == 1.0.3
-prometheus_client == 0.7.1
+Flask == 2.1.2
+prometheus_client == 0.14.1