Merge pull request #128 from marcel-dempers/portainer

portainer
This commit is contained in:
Marcel Dempers 2022-03-11 17:11:15 +11:00 committed by GitHub
commit 4aa9108b48
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
9 changed files with 405 additions and 0 deletions

.gitignore vendored
@@ -11,4 +11,5 @@ __pycache__/
security/letsencrypt/introduction/certs/**
kubernetes/shipa/installs/shipa-helm-chart-1.1.1/
messaging/kafka/data/*
kubernetes/portainer/volume*
kubernetes/rancher/volume/*

@@ -0,0 +1,124 @@
# Introduction to Portainer
Start here 👉🏽[https://www.portainer.io/](https://www.portainer.io/) </br>
Documentation 👉🏽[https://docs.portainer.io/](https://docs.portainer.io/)
## Portainer installation
In this demo, I will be running Kubernetes 1.22 using `kind`, </br>
which is compatible with Portainer 2.11.1. </br>
Let's go ahead with a local Docker install (the command below uses PowerShell line continuations):
```
cd kubernetes\portainer
mkdir volume-ce
docker run -d -p 9443:9443 -p 8000:8000 --name portainer-ce `
--restart=always `
-v /var/run/docker.sock:/var/run/docker.sock `
-v ${PWD}/volume-ce:/data `
portainer/portainer-ce:2.11.1
```
## SSL & DOMAIN
We can also upload SSL certificates for our Portainer server. </br>
In this demo, Portainer will issue self-signed certificates. </br>
We will need a domain for our Portainer server so our clusters can reach it. </br>
Let's use [nip.io](https://nip.io/) to create a public endpoint for Portainer.
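As a sketch of how nip.io gives us that domain (the `portainer` prefix and the helper name are illustrative, not part of the original): any hostname that embeds an IP address resolves back to that IP, so we can derive a usable domain straight from our machine's address.

```python
def nip_io_hostname(ip: str, prefix: str = "portainer") -> str:
    """Build a nip.io hostname that resolves back to the given IPv4 address.

    nip.io answers DNS queries for names like portainer.192-168-1-10.nip.io
    with the embedded address, here 192.168.1.10.
    """
    return f"{prefix}.{ip.replace('.', '-')}.nip.io"

print(nip_io_hostname("192.168.1.10"))  # portainer.192-168-1-10.nip.io
```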
## Create Kubernetes Cluster
Let's start by creating a local `kind` [cluster](https://kind.sigs.k8s.io/). </br>
For local clusters, we can use the public endpoint agent. </br>
We can get a public endpoint for the Portainer agent via:
* Ingress
* LoadBalancer
* NodePort

For this local demo, we'll deploy the Portainer agent with a `NodePort`. </br>
For production environments, I would recommend not exposing the Portainer agent; </br>
in that case, for production, we'll use the Portainer edge agent instead. </br>
To get the `NodePort` exposed in `kind`, we'll open a host port with a [kind.yaml](./kind.yaml) config:
```
kind create cluster --name local --config kind.yaml
```
## Manage Kubernetes Environments
The portainer UI gives us a one line command to deploy the portainer agent. </br>
Note that in the video, we pick the `NodePort` option.
## Local: Portainer Agent
I downloaded the YAML from [here](https://downloads.portainer.io/portainer-agent-ce211-k8s-nodeport.yaml) to take a closer look at what it deploys. </br>
Deploy the Portainer agent into the `kind` cluster:
```
kubectl apply -f portainer-agent-ce211-k8s-nodeport.yaml
```
See the agent:
```
kubectl -n portainer get pods
```
See the service with the endpoint it exposes:
```
kubectl -n portainer get svc
```
Since we don't have a public load balancer and are using a `NodePort`, our service will be exposed on the node IP. </br>
Since the Kubernetes node is our local machine, we should be able to access the Portainer agent on `<computer-IP>:30778`. </br>
We can obtain our local IP with `ipconfig` (on Windows). </br>
The IP and NodePort will be used to connect our Portainer server to the new agent. </br>
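To make the endpoint wiring concrete, here is a small sketch (the helper is my own for illustration, not part of Portainer) that assembles and sanity-checks the `<computer-IP>:30778` address the server will dial:

```python
import ipaddress

def agent_endpoint(node_ip: str, node_port: int = 30778) -> str:
    """Return the address the Portainer server uses to reach the agent."""
    ipaddress.IPv4Address(node_ip)  # raises ValueError on a bad address
    if not 30000 <= node_port <= 32767:  # Kubernetes' default NodePort range
        raise ValueError(f"{node_port} is outside the NodePort range")
    return f"{node_ip}:{node_port}"

print(agent_endpoint("192.168.1.10"))  # 192.168.1.10:30778
```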
## Production: Portainer Edge Agent
For the edge agent, we get the deployment command from the Portainer UI. </br>
Once deployed, we can see the edge agent in our AKS cluster:
```
kubectl -n portainer get pods
```
## Helm
Let's showcase how to deploy Helm charts. </br>
Most folks have Helm charts for their ingress controllers, monitoring, logging and other platform dependencies. </br>
Let's add the Kubernetes NGINX Ingress repo:
```
https://kubernetes.github.io/ingress-nginx
```
## GitOps
So from the Application menu, we can add an application from a `git` repository. </br>
Let's add this repo:
```
https://github.com/marcel-dempers/docker-development-youtube-series
```
We also specify the paths of all the manifests that Portainer needs to deploy:
* kubernetes/portainer/example-application/deployment.yaml
* kubernetes/portainer/example-application/configmap.yaml
* kubernetes/portainer/example-application/service.yaml
* kubernetes/portainer/example-application/ingress.yaml
Portainer will now poll our repo and deploy any updates, GitOps style!
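The polling behaviour can be sketched as a simple loop (a toy model, not Portainer's actual code): remember the last commit seen, and redeploy the manifest list whenever the repo's HEAD moves.

```python
MANIFESTS = [
    "kubernetes/portainer/example-application/deployment.yaml",
    "kubernetes/portainer/example-application/configmap.yaml",
    "kubernetes/portainer/example-application/service.yaml",
    "kubernetes/portainer/example-application/ingress.yaml",
]

def sync(fetch_head, deploy, last_seen=None):
    """One polling tick: redeploy every manifest if HEAD changed."""
    head = fetch_head()
    if head != last_seen:
        for path in MANIFESTS:
            deploy(path)
    return head

# Simulate two ticks against a repo whose HEAD does not change:
deployed = []
head = sync(lambda: "abc123", deployed.append)        # first tick deploys all
head = sync(lambda: "abc123", deployed.append, head)  # no change, no redeploy
print(len(deployed))  # 4
```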

@@ -0,0 +1,10 @@
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-config
data:
  config.json: |
    {
      "environment" : "dev"
    }
# kubectl create configmap example-config --from-file ./golang/configs/config.json

@@ -0,0 +1,43 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-deploy
  labels:
    app: example-app
    test: test
spec:
  selector:
    matchLabels:
      app: example-app
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
      - name: example-app
        image: aimvector/python:1.0.4
        imagePullPolicy: Always
        ports:
        - containerPort: 5000
        resources:
          requests:
            memory: "64Mi"
            cpu: "50m"
          limits:
            memory: "256Mi"
            cpu: "500m"
        volumeMounts:
        - name: config-volume
          mountPath: /configs/
      volumes:
      - name: config-volume
        configMap:
          name: example-config

@@ -0,0 +1,19 @@
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  ingressClassName: nginx
  rules:
  - host: marcel.test
    http:
      paths:
      - path: /hello(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: example-service
            port:
              number: 80

@@ -0,0 +1,15 @@
apiVersion: v1
kind: Service
metadata:
  name: example-service
  labels:
    app: example-app
spec:
  type: ClusterIP
  selector:
    app: example-app
  ports:
  - protocol: TCP
    name: http
    port: 80
    targetPort: 5000

@@ -0,0 +1,12 @@
apiVersion: kind.x-k8s.io/v1alpha4
kind: Cluster
nodes:
- role: control-plane
  image: kindest/node:v1.22.0@sha256:b8bda84bb3a190e6e028b1760d277454a72267a5454b57db34437c34a588d047
  extraPortMappings:
  - containerPort: 30778
    hostPort: 30778
    listenAddress: "0.0.0.0"
    protocol: tcp
- role: worker
  image: kindest/node:v1.22.0@sha256:b8bda84bb3a190e6e028b1760d277454a72267a5454b57db34437c34a588d047

@@ -0,0 +1,81 @@
apiVersion: v1
kind: Namespace
metadata:
  name: portainer
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: portainer-sa-clusteradmin
  namespace: portainer
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: portainer-crb-clusteradmin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: portainer-sa-clusteradmin
  namespace: portainer
---
apiVersion: v1
kind: Service
metadata:
  name: portainer-agent
  namespace: portainer
spec:
  type: NodePort
  selector:
    app: portainer-agent
  ports:
  - name: http
    protocol: TCP
    port: 9001
    targetPort: 9001
    nodePort: 30778
---
apiVersion: v1
kind: Service
metadata:
  name: portainer-agent-headless
  namespace: portainer
spec:
  clusterIP: None
  selector:
    app: portainer-agent
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: portainer-agent
  namespace: portainer
spec:
  selector:
    matchLabels:
      app: portainer-agent
  template:
    metadata:
      labels:
        app: portainer-agent
    spec:
      serviceAccountName: portainer-sa-clusteradmin
      containers:
      - name: portainer-agent
        image: portainer/agent:2.11.1
        imagePullPolicy: Always
        env:
        - name: LOG_LEVEL
          value: DEBUG
        - name: AGENT_CLUSTER_ADDR
          value: "portainer-agent-headless"
        - name: KUBERNETES_POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        ports:
        - containerPort: 9001
          protocol: TCP

@@ -0,0 +1,100 @@
apiVersion: v1
kind: Namespace
metadata:
  name: portainer
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: portainer-sa-clusteradmin
  namespace: portainer
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: portainer-crb-clusteradmin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: portainer-sa-clusteradmin
  namespace: portainer
# Optional: can be added to expose the agent port 80 to associate an Edge key.
# ---
# apiVersion: v1
# kind: Service
# metadata:
#   name: portainer-agent
#   namespace: portainer
# spec:
#   type: LoadBalancer
#   selector:
#     app: portainer-agent
#   ports:
#   - name: http
#     protocol: TCP
#     port: 80
#     targetPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: portainer-agent
  namespace: portainer
spec:
  clusterIP: None
  selector:
    app: portainer-agent
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: portainer-agent
  namespace: portainer
spec:
  selector:
    matchLabels:
      app: portainer-agent
  template:
    metadata:
      labels:
        app: portainer-agent
    spec:
      serviceAccountName: portainer-sa-clusteradmin
      containers:
      - name: portainer-agent
        image: portainer/agent:2.11.1
        imagePullPolicy: Always
        env:
        - name: LOG_LEVEL
          value: INFO
        - name: KUBERNETES_POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: EDGE
          value: "1"
        - name: AGENT_CLUSTER_ADDR
          value: "portainer-agent"
        - name: EDGE_ID
          valueFrom:
            configMapKeyRef:
              name: portainer-agent-edge
              key: edge.id
        - name: EDGE_INSECURE_POLL
          valueFrom:
            configMapKeyRef:
              name: portainer-agent-edge
              key: edge.insecure_poll
        - name: EDGE_KEY
          valueFrom:
            secretKeyRef:
              name: portainer-agent-edge-key
              key: edge.key
        ports:
        - containerPort: 9001
          protocol: TCP
        - containerPort: 80
          protocol: TCP