first commit

This commit is contained in:
2025-11-06 06:41:45 +01:00
commit b8dceaf896
92 changed files with 5382 additions and 0 deletions


@@ -0,0 +1,529 @@
# Introduction to Admission Controllers
[Admission Webhook](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#what-are-admission-webhooks)
<hr/>
## Installation (local)
<hr/>
Create a kind cluster:
```
kind create cluster --name webhook --image kindest/node:v1.20.2
```
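If `kubectl` is installed on the host, we can sanity-check the cluster; kind names the context `kind-webhook`:
```
kubectl cluster-info --context kind-webhook
```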
## TLS certificate notes for Webhook
<hr/>
In order for our webhook to be invoked by Kubernetes, we need a TLS certificate.<br/>
In this demo I'll be using a self-signed cert. <br/>
That's OK for development, but for production I would recommend using a real certificate instead. <br/>
We'll use a very handy CloudFlare SSL tool (CFSSL) in a docker container to get this done. <br/>
Follow [Use CFSSL to generate certificates](./tls/ssl_generate_self_signed.md) (a rough sketch of that flow appears after the list below). <br/>
After the above, we should have: <br/>
* a Webhook YAML file
* CA Bundle for signing new TLS certificates
* a TLS certificate (Kubernetes secret)
<br/>
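For orientation, here is a rough sketch of a typical CFSSL flow; the exact steps live in the linked guide, and the file names below are placeholders, not the guide's real ones:
```
# create a CA from a CSR definition
cfssl gencert -initca ca-csr.json | cfssljson -bare ca

# issue a server cert for the webhook's in-cluster DNS name, signed by that CA
cfssl gencert \
  -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json \
  -hostname=example-webhook.default.svc \
  server-csr.json | cfssljson -bare example-webhook
```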
## Local Development
<hr/>
We always start with a `dockerfile` since we need a Go dev environment.
```
FROM golang:1.15-alpine as dev-env
WORKDIR /app
```
Build and run the controller
```
# get dev environment: webhook
cd sourcecode
docker build . -t webhook
docker run -it --rm -p 80:80 -v ${PWD}:/app webhook sh
```
As always, we start with Hello World! <br/>
Let's define our Go module and a basic web server:
```
go mod init example-webhook
```
New file: `main.go`
```
package main

import (
	"log"
	"net/http"
)

func main() {
	http.HandleFunc("/", HandleRoot)
	http.HandleFunc("/mutate", HandleMutate)
	log.Fatal(http.ListenAndServe(":80", nil))
}

func HandleRoot(w http.ResponseWriter, r *http.Request) {
	w.Write([]byte("HandleRoot!"))
}

func HandleMutate(w http.ResponseWriter, r *http.Request) {
	w.Write([]byte("HandleMutate!"))
}
```
Build our code and run it
```
export CGO_ENABLED=0
go build -o webhook
./webhook
```
We'll be able to hit the `http://localhost/mutate` endpoint in the browser or with `curl`. <br/>
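For example, while the container still publishes port 80, a quick check from the host should echo the handler's response:
```
curl http://localhost/mutate
# HandleMutate!
```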
NOTE: On Windows, container networking is not fully supported. Our container exposes port 80, but to access our Kubernetes cluster, which runs in another container, we need to use the `--net host` flag. This means exposing port 80 will stop working from here on. <br/>
Let's exit the container and start it with `--net host` so our container can access our Kubernetes `kind` cluster:
```
docker run -it --rm --net host -v ${HOME}/.kube/:/root/.kube/ -v ${PWD}:/app webhook sh
```
We can also test our access to our Kubernetes cluster with the config that is mounted in:
```
apk add --no-cache curl
curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl
chmod +x ./kubectl
mv ./kubectl /usr/local/bin/kubectl
```
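A quick sanity check that the mounted config works (your node list will differ; with a kind cluster named `webhook` it should look roughly like this):
```
kubectl get nodes
# NAME                    STATUS   ROLES                  AGE   VERSION
# webhook-control-plane   Ready    control-plane,master   5m    v1.20.2
```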
# Kubernetes
How do we interact with Kubernetes? <br/>
Kubernetes provides many client libraries, and we'll interact with some of these today.
Since we'll receive webhook events from Kubernetes, we'll need to translate these
requests into objects or structs that we understand.
For this, the serializer is important:
```
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/serializer"
var (
universalDeserializer = serializer.NewCodecFactory(runtime.NewScheme()).UniversalDeserializer()
)
```
To access Kubernetes, we need to define a config and a client using our config. <br/>
We can authenticate with K8s in a number of ways. <br/>
The first way, using a kubeconfig file, is good for local development. <br/>
For production, we'll use a Kubernetes service account with RBAC permissions. <br/>
We'll do both methods today. <br/>
```
// define our config and client
var config *rest.Config
var clientSet *kubernetes.Clientset

// in main()
useKubeConfig := os.Getenv("USE_KUBECONFIG")
kubeConfigFilePath := os.Getenv("KUBECONFIG")

if len(useKubeConfig) == 0 {
	// default to the in-cluster service account token
	c, err := rest.InClusterConfig()
	if err != nil {
		panic(err.Error())
	}
	config = c
} else {
	// load from a kubeconfig file
	var kubeconfig string

	if kubeConfigFilePath == "" {
		if home := homedir.HomeDir(); home != "" {
			kubeconfig = filepath.Join(home, ".kube", "config")
		}
	} else {
		kubeconfig = kubeConfigFilePath
	}

	fmt.Println("kubeconfig: " + kubeconfig)

	c, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err.Error())
	}
	config = c
}
```
Once we've built our config, we can instantiate a client to use in our app:
```
cs, err := kubernetes.NewForConfig(config)
if err != nil {
	panic(err.Error())
}
clientSet = cs
```
And we'll need to import the dependencies for this:
```
"os"
"fmt"
"path/filepath"
"k8s.io/client-go/kubernetes"
rest "k8s.io/client-go/rest"
"k8s.io/client-go/tools/clientcmd"
"k8s.io/client-go/util/homedir"
```
Since we're also using the client-go library, we need to install the same version as
the other Kubernetes libraries. As we can see in the `go.mod` file, we're using `v0.21.0`:
```
go get k8s.io/client-go@v0.21.0
```
Rebuild to ensure no errors:
```
go build -o webhook
```
Test with a kubeconfig
```
export USE_KUBECONFIG=true
./webhook
```
To test our access, let's create a `test.go` that lists the pods in the cluster:
```
// test.go
package main

import ()

func test() {
}
```
Use the global `clientSet` we set up in `main()` to get all pods:
```
pods, err := clientSet.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{})
if err != nil {
	panic(err.Error())
}
fmt.Printf("There are %d pods in the cluster\n", len(pods.Items))
```
Define dependencies:
```
"context"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"fmt"
```
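Putting those pieces together, `test.go` looks like this:
```
// test.go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// test lists all pods across namespaces using the global clientSet from main.go
func test() {
	pods, err := clientSet.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err.Error())
	}
	fmt.Printf("There are %d pods in the cluster\n", len(pods.Items))
}
```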
Finally, invoke it from `main()` by calling `test()`. <br/>
Run and test our Kubernetes access:
```
bash-5.0# ./webhook
kubeconfig: /root/.kube/config
There are 11 pods in the cluster
```
## Mutating Webhook
Now that we have a working app that can talk to Kubernetes, let's implement our webhook endpoint and deploy it to Kubernetes to see what type of message the API server sends us when events happen. <br/>
Firstly, we need to expose a TLS endpoint. <br/>
Let's take some parameters where we can set the path to the TLS certificate and the port number to run on. <br/>
Import the flag dependencies:
```
"flag"
"strconv"
```
Define our parameters for the certificate configuration:
```
type ServerParameters struct {
	port     int    // webhook server port
	certFile string // path to the x509 certificate for https
	keyFile  string // path to the x509 private key matching `CertFile`
}

var parameters ServerParameters

// in main()
flag.IntVar(&parameters.port, "port", 8443, "Webhook server port.")
flag.StringVar(&parameters.certFile, "tlsCertFile", "/etc/webhook/certs/tls.crt", "File containing the x509 Certificate for HTTPS.")
flag.StringVar(&parameters.keyFile, "tlsKeyFile", "/etc/webhook/certs/tls.key", "File containing the x509 private key to --tlsCertFile.")
flag.Parse()

// start our web server exposing a TLS endpoint
log.Fatal(http.ListenAndServeTLS(":"+strconv.Itoa(parameters.port), parameters.certFile, parameters.keyFile, nil))
```
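If you want to exercise the TLS endpoint locally before deploying, one option (an assumption on my side, not part of the demo flow) is a throwaway self-signed cert at the default paths:
```
mkdir -p /etc/webhook/certs
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout /etc/webhook/certs/tls.key \
  -out /etc/webhook/certs/tls.crt \
  -subj "/CN=example-webhook.default.svc"
./webhook
```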
Let's capture the request coming from Kubernetes and write it to a local file for analysis:
```
// dependencies
"io/ioutil"

// HandleMutate
body, err := ioutil.ReadAll(r.Body)
if err != nil {
	panic(err.Error())
}

err = ioutil.WriteFile("/tmp/request", body, 0644)
if err != nil {
	panic(err.Error())
}
```
## Deployment
<hr/>
Let's build what we have and deploy it to our Kubernetes cluster. <br/>
We will firstly need to add a build step to our `dockerfile` to compile the code. <br/>
We'll also need to create a smaller runtime layer in our `dockerfile`. <br/>
Full `dockerfile`:
```
FROM golang:1.15-alpine as dev-env
WORKDIR /app

FROM dev-env as build-env
COPY go.mod go.sum /app/
RUN go mod download
COPY . /app/
RUN CGO_ENABLED=0 go build -o /webhook

FROM alpine:3.10 as runtime
COPY --from=build-env /webhook /usr/local/bin/webhook
RUN chmod +x /usr/local/bin/webhook
ENTRYPOINT ["webhook"]
```
Let's build the container and push it to a registry:
```
docker build . -t aimvector/example-webhook:v1
docker push aimvector/example-webhook:v1
```
```
# apply generated secret
kubectl -n default apply -f ./tls/example-webhook-tls.yaml
kubectl -n default apply -f rbac.yaml
kubectl -n default apply -f deployment.yaml
kubectl -n default get pods
# ensure above pods are running first
kubectl -n default apply -f webhook.yaml
```
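The `webhook.yaml` itself isn't shown in this README; a minimal sketch of what a `MutatingWebhookConfiguration` for this setup could look like is below (names match the Service above; the `caBundle` comes from the CFSSL step and is elided here):
```
apiVersion: admissionregistration.k8s.io/v1beta1
kind: MutatingWebhookConfiguration
metadata:
  name: example-webhook
webhooks:
  - name: example-webhook.default.svc
    clientConfig:
      service:
        name: example-webhook
        namespace: default
        path: "/mutate"
      caBundle: <base64-encoded CA bundle>
    rules:
      - operations: ["CREATE"]
        apiGroups: [""]
        apiVersions: ["v1"]
        resources: ["pods"]
```
You can confirm it registered with `kubectl get mutatingwebhookconfigurations`.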
# Deploy a demo that needs mutation
```
kubectl -n default apply -f ./demo-pod.yaml
```
We should now be able to see an example request from Kubernetes sitting in our `/tmp/request` location. This request is called an `AdmissionReview`. <br/>
Kubernetes sends us an `AdmissionReview` and expects an `AdmissionResponse` back. <br/>
We can copy this review locally and use it for development so we don't need to deploy to
Kubernetes constantly. For example:
```
kubectl cp example-webhook-756bcb566b-9kxjp:/tmp/request ./mock-request.json
```
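With the mock in hand, we can replay it against a locally running webhook, assuming the TLS server from earlier is up on port 8443 (`-k` because the cert is self-signed):
```
curl -k -X POST https://localhost:8443/mutate \
  -H "Content-Type: application/json" \
  -d @mock-request.json
```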
So let's grab the info from the admission request so we can do something with it:
```
// dependencies
"k8s.io/api/admission/v1beta1"

// HandleMutate()
var admissionReviewReq v1beta1.AdmissionReview

if _, _, err := universalDeserializer.Decode(body, nil, &admissionReviewReq); err != nil {
	w.WriteHeader(http.StatusBadRequest)
	fmt.Fprintf(w, "could not deserialize request: %v", err)
	return
} else if admissionReviewReq.Request == nil {
	w.WriteHeader(http.StatusBadRequest)
	fmt.Fprint(w, "malformed admission review: request is nil")
	return
}

fmt.Printf("Type: %v \t Event: %v \t Name: %v \n",
	admissionReviewReq.Request.Kind,
	admissionReviewReq.Request.Operation,
	admissionReviewReq.Request.Name,
)
```
# Mutation
Firstly, we need to grab the Pod object from the admission request:
```
// dependencies
apiv1 "k8s.io/api/core/v1"

var pod apiv1.Pod

err = json.Unmarshal(admissionReviewReq.Request.Object.Raw, &pod)
if err != nil {
	w.WriteHeader(http.StatusInternalServerError)
	fmt.Fprintf(w, "could not unmarshal pod on admission request: %v", err)
	return
}
```
To perform a simple mutation on the object before the Kubernetes API stores it, we can return a JSON patch in our admission response.
```
// global
type patchOperation struct {
	Op    string      `json:"op"`
	Path  string      `json:"path"`
	Value interface{} `json:"value,omitempty"`
}

// HandleMutate()
var patches []patchOperation
```
Let's add a label that we can inject on the pod. <br/>
We have to craft the part of the Kubernetes object we want to patch. <br/>
For example, labels live under metadata on the Pod spec: <br/>
https://pkg.go.dev/k8s.io/api/core/v1#Pod
```
// get the existing metadata labels (guarding against a nil map on pods with no labels)
labels := pod.ObjectMeta.Labels
if labels == nil {
	labels = map[string]string{}
}
labels["example-webhook"] = "it-worked"

patches = append(patches, patchOperation{
	Op:    "add",
	Path:  "/metadata/labels",
	Value: labels,
})
```
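When marshaled, the patch for our demo pod ends up as a standard [JSON Patch](https://tools.ietf.org/html/rfc6902) document, roughly:
```
[
  {
    "op": "add",
    "path": "/metadata/labels",
    "value": {
      "example-webhook-enabled": "true",
      "example-webhook": "it-worked"
    }
  }
]
```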
Once you have completed all your patching, convert the patches to a byte slice:
```
patchBytes, err := json.Marshal(patches)
if err != nil {
	w.WriteHeader(http.StatusInternalServerError)
	fmt.Fprintf(w, "could not marshal JSON patch: %v", err)
	return
}
```
Add it to the admission response
```
admissionReviewResponse := v1beta1.AdmissionReview{
	Response: &v1beta1.AdmissionResponse{
		UID:     admissionReviewReq.Request.UID,
		Allowed: true,
	},
}

// admission/v1 requires PatchType whenever a patch is returned;
// v1beta1 defaults it, but it's good practice to set it explicitly
patchType := v1beta1.PatchTypeJSONPatch
admissionReviewResponse.Response.Patch = patchBytes
admissionReviewResponse.Response.PatchType = &patchType

bytes, err := json.Marshal(&admissionReviewResponse)
if err != nil {
	w.WriteHeader(http.StatusInternalServerError)
	fmt.Fprintf(w, "marshaling response: %v", err)
	return
}

w.Write(bytes)

// dependencies
"encoding/json"
```
# Build and push the updates
```
docker build . -t aimvector/example-webhook:v1
docker push aimvector/example-webhook:v1
```
# Delete all pods to get the latest image
```
kubectl delete pods --all
```
# Redeploy our demo pod and see the mutations
```
kubectl -n default apply -f ./demo-pod.yaml
```
See the injected label
```
kubectl get pods --show-labels
```
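The output should look something like:
```
NAME       READY   STATUS    RESTARTS   AGE   LABELS
demo-pod   1/1     Running   0          30s   example-webhook-enabled=true,example-webhook=it-worked
```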


@@ -0,0 +1,10 @@
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
  labels:
    example-webhook-enabled: "true"
spec:
  containers:
    - name: nginx
      image: nginx


@@ -0,0 +1,56 @@
apiVersion: v1
kind: Service
metadata:
  name: example-webhook
  namespace: default
spec:
  selector:
    app: example-webhook
  ports:
    - port: 443
      targetPort: tls
      name: application
    - port: 80
      targetPort: metrics
      name: metrics
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-webhook
  namespace: default
  labels:
    app: example-webhook
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-webhook
  template:
    metadata:
      labels:
        app: example-webhook
    spec:
      nodeSelector:
        kubernetes.io/os: linux
      serviceAccountName: example-webhook
      securityContext:
        runAsNonRoot: true
        runAsUser: 1234
      containers:
        - name: server
          image: aimvector/example-webhook:v1
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              name: tls
            - containerPort: 80
              name: metrics
          volumeMounts:
            - name: webhook-tls-certs
              mountPath: /etc/webhook/certs/
              readOnly: true
      volumes:
        - name: webhook-tls-certs
          secret:
            secretName: example-webhook-tls


@@ -0,0 +1,150 @@
{
  "kind": "AdmissionReview",
  "apiVersion": "admission.k8s.io/v1beta1",
  "request": {
    "uid": "dffc1f0f-0c0b-4d15-892f-71524ecfd06c",
    "kind": {
      "group": "",
      "version": "v1",
      "kind": "Pod"
    },
    "resource": {
      "group": "",
      "version": "v1",
      "resource": "pods"
    },
    "requestKind": {
      "group": "",
      "version": "v1",
      "kind": "Pod"
    },
    "requestResource": {
      "group": "",
      "version": "v1",
      "resource": "pods"
    },
    "name": "demo-pod",
    "namespace": "default",
    "operation": "CREATE",
    "userInfo": {
      "username": "kubernetes-admin",
      "groups": [
        "system:masters",
        "system:authenticated"
      ]
    },
    "object": {
      "kind": "Pod",
      "apiVersion": "v1",
      "metadata": {
        "name": "demo-pod",
        "namespace": "default",
        "creationTimestamp": null,
        "labels": {
          "example-webhook-enabled": "true"
        },
        "annotations": {
          "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"example-webhook-enabled\":\"true\"},\"name\":\"demo-pod\",\"namespace\":\"default\"},\"spec\":{\"containers\":[{\"image\":\"nginx\",\"name\":\"nginx\"}]}}\n"
        },
        "managedFields": [
          {
            "manager": "kubectl.exe",
            "operation": "Update",
            "apiVersion": "v1",
            "time": "2021-04-14T07:47:56Z",
            "fieldsType": "FieldsV1",
            "fieldsV1": {
              "f:metadata": {
                "f:annotations": {
                  ".": {},
                  "f:kubectl.kubernetes.io/last-applied-configuration": {}
                },
                "f:labels": {
                  ".": {},
                  "f:example-webhook-enabled": {}
                }
              },
              "f:spec": {
                "f:containers": {
                  "k:{\"name\":\"nginx\"}": {
                    ".": {},
                    "f:image": {},
                    "f:imagePullPolicy": {},
                    "f:name": {},
                    "f:resources": {},
                    "f:terminationMessagePath": {},
                    "f:terminationMessagePolicy": {}
                  }
                },
                "f:dnsPolicy": {},
                "f:enableServiceLinks": {},
                "f:restartPolicy": {},
                "f:schedulerName": {},
                "f:securityContext": {},
                "f:terminationGracePeriodSeconds": {}
              }
            }
          }
        ]
      },
      "spec": {
        "volumes": [
          {
            "name": "default-token-4fzpv",
            "secret": {
              "secretName": "default-token-4fzpv"
            }
          }
        ],
        "containers": [
          {
            "name": "nginx",
            "image": "nginx",
            "resources": {},
            "volumeMounts": [
              {
                "name": "default-token-4fzpv",
                "readOnly": true,
                "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
              }
            ],
            "terminationMessagePath": "/dev/termination-log",
            "terminationMessagePolicy": "File",
            "imagePullPolicy": "Always"
          }
        ],
        "restartPolicy": "Always",
        "terminationGracePeriodSeconds": 30,
        "dnsPolicy": "ClusterFirst",
        "serviceAccountName": "default",
        "serviceAccount": "default",
        "securityContext": {},
        "schedulerName": "default-scheduler",
        "tolerations": [
          {
            "key": "node.kubernetes.io/not-ready",
            "operator": "Exists",
            "effect": "NoExecute",
            "tolerationSeconds": 300
          },
          {
            "key": "node.kubernetes.io/unreachable",
            "operator": "Exists",
            "effect": "NoExecute",
            "tolerationSeconds": 300
          }
        ],
        "priority": 0,
        "enableServiceLinks": true,
        "preemptionPolicy": "PreemptLowerPriority"
      },
      "status": {}
    },
    "oldObject": null,
    "dryRun": false,
    "options": {
      "kind": "CreateOptions",
      "apiVersion": "meta.k8s.io/v1"
    }
  }
}


@@ -0,0 +1,27 @@
apiVersion: v1
kind: ServiceAccount
metadata:
  name: example-webhook
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: example-webhook
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: example-webhook
subjects:
  - kind: ServiceAccount
    name: example-webhook
    namespace: default
roleRef:
  kind: ClusterRole
  name: example-webhook
  apiGroup: rbac.authorization.k8s.io


@@ -0,0 +1,18 @@
FROM golang:1.15-alpine as dev-env
WORKDIR /app

FROM dev-env as build-env
COPY go.mod go.sum /app/
RUN go mod download
COPY . /app/
RUN CGO_ENABLED=0 go build -o /webhook

FROM alpine:3.10 as runtime
COPY --from=build-env /webhook /usr/local/bin/webhook
RUN chmod +x /usr/local/bin/webhook
ENTRYPOINT ["webhook"]


@@ -0,0 +1,9 @@
module example-webhook

go 1.15

require (
	k8s.io/api v0.21.0
	k8s.io/apimachinery v0.21.0
	k8s.io/client-go v0.21.0
)


@@ -0,0 +1,421 @@
cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
cloud.google.com/go v0.34.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
cloud.google.com/go v0.38.0/go.mod h1:990N+gfupTy94rShfmMCWGDn0LpTmnzTp2qbd1dvSRU=
cloud.google.com/go v0.44.1/go.mod h1:iSa0KzasP4Uvy3f1mN/7PiObzGgflwredwwASm/v6AU=
cloud.google.com/go v0.44.2/go.mod h1:60680Gw3Yr4ikxnPRS/oxxkBccT6SA1yMk63TGekxKY=
cloud.google.com/go v0.45.1/go.mod h1:RpBamKRgapWJb87xiFSdk4g1CME7QZg3uwTez+TSTjc=
cloud.google.com/go v0.46.3/go.mod h1:a6bKKbmY7er1mI7TEI4lsAkts/mkhTSZK8w33B4RAg0=
cloud.google.com/go v0.50.0/go.mod h1:r9sluTvynVuxRIOHXQEHMFffphuXHOMZMycpNR5e6To=
cloud.google.com/go v0.52.0/go.mod h1:pXajvRH/6o3+F9jDHZWQ5PbGhn+o8w9qiu/CffaVdO4=
cloud.google.com/go v0.53.0/go.mod h1:fp/UouUEsRkN6ryDKNW/Upv/JBKnv6WDthjR6+vze6M=
cloud.google.com/go v0.54.0/go.mod h1:1rq2OEkV3YMf6n/9ZvGWI3GWw0VoqH/1x2nd8Is/bPc=
cloud.google.com/go/bigquery v1.0.1/go.mod h1:i/xbL2UlR5RvWAURpBYZTtm/cXjCha9lbfbpx4poX+o=
cloud.google.com/go/bigquery v1.3.0/go.mod h1:PjpwJnslEMmckchkHFfq+HTD2DmtT67aNFKH1/VBDHE=
cloud.google.com/go/bigquery v1.4.0/go.mod h1:S8dzgnTigyfTmLBfrtrhyYhwRxG72rYxvftPBK2Dvzc=
cloud.google.com/go/datastore v1.0.0/go.mod h1:LXYbyblFSglQ5pkeyhO+Qmw7ukd3C+pD7TKLgZqpHYE=
cloud.google.com/go/datastore v1.1.0/go.mod h1:umbIZjpQpHh4hmRpGhH4tLFup+FVzqBi1b3c64qFpCk=
cloud.google.com/go/pubsub v1.0.1/go.mod h1:R0Gpsv3s54REJCy4fxDixWD93lHJMoZTyQ2kNxGRt3I=
cloud.google.com/go/pubsub v1.1.0/go.mod h1:EwwdRX2sKPjnvnqCa270oGRyludottCI76h+R3AArQw=
cloud.google.com/go/pubsub v1.2.0/go.mod h1:jhfEVHT8odbXTkndysNHCcx0awwzvfOlguIAii9o8iA=
cloud.google.com/go/storage v1.0.0/go.mod h1:IhtSnM/ZTZV8YYJWCY8RULGVqBDmpoyjwiyrjsg+URw=
cloud.google.com/go/storage v1.5.0/go.mod h1:tpKbwo567HUNpVclU5sGELwQWBDZ8gh0ZeosJ0Rtdos=
cloud.google.com/go/storage v1.6.0/go.mod h1:N7U0C8pVQ/+NIKOBQyamJIeKQKkZ+mxpohlUTyfDhBk=
dmitri.shuralyov.com/gpu/mtl v0.0.0-20190408044501-666a987793e9/go.mod h1:H6x//7gZCb22OMCxBHrMx7a5I7Hp++hsVxbQ4BYO7hU=
github.com/Azure/go-autorest v14.2.0+incompatible/go.mod h1:r+4oMnoxhatjLLJ6zxSWATqVooLgysK6ZNox3g/xq24=
github.com/Azure/go-autorest/autorest v0.11.12/go.mod h1:eipySxLmqSyC5s5k1CLupqet0PSENBEDP93LQ9a8QYw=
github.com/Azure/go-autorest/autorest/adal v0.9.5/go.mod h1:B7KF7jKIeC9Mct5spmyCB/A8CG/sEz1vwIRGv/bbw7A=
github.com/Azure/go-autorest/autorest/date v0.3.0/go.mod h1:BI0uouVdmngYNUzGWeSYnokU+TrmwEsOqdt8Y6sso74=
github.com/Azure/go-autorest/autorest/mocks v0.4.1/go.mod h1:LTp+uSrOhSkaKrUy935gNZuuIPPVsHlr9DSOxSayd+k=
github.com/Azure/go-autorest/logger v0.2.0/go.mod h1:T9E3cAhj2VqvPOtCYAvby9aBXkZmbF5NWuPV8+WeEW8=
github.com/Azure/go-autorest/tracing v0.6.0/go.mod h1:+vhtPC754Xsa23ID7GlGsrdKBpUA79WCAKPPZVC2DeU=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/BurntSushi/xgb v0.0.0-20160522181843-27f122750802/go.mod h1:IVnqGOEym/WlBOVXweHU+Q+/VP0lqqI8lqeDx9IjBqo=
github.com/NYTimes/gziphandler v0.0.0-20170623195520-56545f4a5d46/go.mod h1:3wb06e3pkSAbeQ52E9H9iFoQsEEwGN64994WTCIhntQ=
github.com/PuerkitoBio/purell v1.1.1/go.mod h1:c11w/QuzBsJSee3cPx9rAFu61PvFxuPbtSwDGJws/X0=
github.com/PuerkitoBio/urlesc v0.0.0-20170810143723-de5bf2ad4578/go.mod h1:uGdkoq3SwY9Y+13GIhn11/XLaGBb4BfwItxLd5jeuXE=
github.com/asaskevich/govalidator v0.0.0-20190424111038-f61b66f89f4a/go.mod h1:lB+ZfQJz7igIIfQNfa7Ml4HSf2uFQQRzpGGRXenZAgY=
github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU=
github.com/chzyer/logex v1.1.10/go.mod h1:+Ywpsq7O8HXn0nuIou7OrIPyXbp3wmkHB+jjWRnGsAI=
github.com/chzyer/readline v0.0.0-20180603132655-2972be24d48e/go.mod h1:nSuG5e5PlCu98SY8svDHJxuZscDgtXS6KTTbou5AhLI=
github.com/chzyer/test v0.0.0-20180213035817-a1ea475d72b1/go.mod h1:Q3SI9o4m/ZMnBNeIyt5eFwwo7qiLfzFZmjNmxjkiQlU=
github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw=
github.com/creack/pty v1.1.9/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/docopt/docopt-go v0.0.0-20180111231733-ee0de3bc6815/go.mod h1:WwZ+bS3ebgob9U8Nd0kOddGdZWjyMGR8Wziv+TBNwSE=
github.com/elazarl/goproxy v0.0.0-20180725130230-947c36da3153/go.mod h1:/Zj4wYkgs4iZTTu3o/KG3Itv/qCCa8VVMlb3i9OVuzc=
github.com/emicklei/go-restful v0.0.0-20170410110728-ff4f55a20633/go.mod h1:otzb+WCGbkyDHkqmQmT5YD2WR4BBwUdeQoFo8l/7tVs=
github.com/envoyproxy/go-control-plane v0.9.1-0.20191026205805-5f8ba28d4473/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4=
github.com/envoyproxy/protoc-gen-validate v0.1.0/go.mod h1:iSmxcyjqTsJpI2R4NaDN7+kN2VEUnK/pcBlmesArF7c=
github.com/evanphx/json-patch v4.9.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk=
github.com/form3tech-oss/jwt-go v3.2.2+incompatible/go.mod h1:pbq4aXjuKjdthFRnoDwaVPLA+WlJuPGy+QneDUgJi2k=
github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=
github.com/go-gl/glfw v0.0.0-20190409004039-e6da0acd62b1/go.mod h1:vR7hzQXu2zJy9AVAgeJqvqgH9Q5CA+iKCZ2gyEVpxRU=
github.com/go-gl/glfw/v3.3/glfw v0.0.0-20191125211704-12ad95a8df72/go.mod h1:tQ2UAYgL5IevRw8kRxooKSPJfGvJ9fJQFa0TUsXzTg8=
github.com/go-gl/glfw/v3.3/glfw v0.0.0-20200222043503-6f7a984d4dc4/go.mod h1:tQ2UAYgL5IevRw8kRxooKSPJfGvJ9fJQFa0TUsXzTg8=
github.com/go-logr/logr v0.1.0/go.mod h1:ixOQHD9gLJUVQQ2ZOR7zLEifBX6tGkNJF4QyIY7sIas=
github.com/go-logr/logr v0.4.0 h1:K7/B1jt6fIBQVd4Owv2MqGQClcgf0R266+7C/QjRcLc=
github.com/go-logr/logr v0.4.0/go.mod h1:z6/tIYblkpsD+a4lm/fGIIU9mZ+XfAiaFtq7xTgseGU=
github.com/go-openapi/jsonpointer v0.19.2/go.mod h1:3akKfEdA7DF1sugOqz1dVQHBcuDBPKZGEoHC/NkiQRg=
github.com/go-openapi/jsonpointer v0.19.3/go.mod h1:Pl9vOtqEWErmShwVjC8pYs9cog34VGT37dQOVbmoatg=
github.com/go-openapi/jsonreference v0.19.2/go.mod h1:jMjeRr2HHw6nAVajTXJ4eiUwohSTlpa0o73RUL1owJc=
github.com/go-openapi/jsonreference v0.19.3/go.mod h1:rjx6GuL8TTa9VaixXglHmQmIL98+wF9xc8zWvFonSJ8=
github.com/go-openapi/spec v0.19.3/go.mod h1:FpwSN1ksY1eteniUU7X0N/BgJ7a4WvBFVA8Lj9mJglo=
github.com/go-openapi/swag v0.19.2/go.mod h1:POnQmlKehdgb5mhVOsnJFsivZCEZ/vjK9gh66Z9tfKk=
github.com/go-openapi/swag v0.19.5/go.mod h1:POnQmlKehdgb5mhVOsnJFsivZCEZ/vjK9gh66Z9tfKk=
github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q=
github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q=
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
github.com/golang/groupcache v0.0.0-20190702054246-869f871628b6/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/groupcache v0.0.0-20191227052852-215e87163ea7/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/groupcache v0.0.0-20200121045136-8c9f03a8e57e/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
github.com/golang/mock v1.2.0/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
github.com/golang/mock v1.3.1/go.mod h1:sBzyDLLjw3U8JLTeZvSv8jJB+tU5PVekmnlKIyFUx0Y=
github.com/golang/mock v1.4.0/go.mod h1:UOMv5ysSaYNkG+OFQykRIcU/QvvxJf3p21QfJ2Bt3cw=
github.com/golang/mock v1.4.1/go.mod h1:UOMv5ysSaYNkG+OFQykRIcU/QvvxJf3p21QfJ2Bt3cw=
github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.3/go.mod h1:vzj43D7+SQXF/4pzW/hwtAqwc6iTitCiVSaWz5lYuqw=
github.com/golang/protobuf v1.3.4/go.mod h1:vzj43D7+SQXF/4pzW/hwtAqwc6iTitCiVSaWz5lYuqw=
github.com/golang/protobuf v1.4.0-rc.1/go.mod h1:ceaxUfeHdC40wWswd/P6IGgMaK3YpKi5j83Wpe3EHw8=
github.com/golang/protobuf v1.4.0-rc.1.0.20200221234624-67d41d38c208/go.mod h1:xKAWHe0F5eneWXFV3EuXVDTCmh+JuBKY0li0aMyXATA=
github.com/golang/protobuf v1.4.0-rc.2/go.mod h1:LlEzMj4AhA7rCAGe4KMBDvJI+AwstrUpVNzEA03Pprs=
github.com/golang/protobuf v1.4.0-rc.4.0.20200313231945-b860323f09d0/go.mod h1:WU3c8KckQ9AFe+yFwt9sWVRKCVIyN9cPHBJSNnbL67w=
github.com/golang/protobuf v1.4.0/go.mod h1:jodUvKwWbYaEsadDk5Fwe5c77LiNKVO9IDvqG2KuDX0=
github.com/golang/protobuf v1.4.1/go.mod h1:U8fpvMrcmy5pZrNK1lt4xCsGvpyWQ/VVv6QDs8UjoX8=
github.com/golang/protobuf v1.4.3 h1:JjCZWpVbqXDqFVmTfYWEVTMIYrL/NPdPSCHPJ0T/raM=
github.com/golang/protobuf v1.4.3/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI=
github.com/google/btree v0.0.0-20180813153112-4030bb1f1f0c/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
github.com/google/btree v1.0.0/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M=
github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.2 h1:X2ev0eStA3AbceY54o37/0PQ/UWqKEiiO2dKL5OPaFM=
github.com/google/go-cmp v0.5.2/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/gofuzz v1.1.0 h1:Hsa8mG0dQ46ij8Sl2AYJDUv1oA9/d6Vk+3LG99Oe02g=
github.com/google/gofuzz v1.1.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/martian v2.1.0+incompatible/go.mod h1:9I4somxYTbIHy5NJKHRl3wXiIaQGbYVAs8BPL6v8lEs=
github.com/google/pprof v0.0.0-20181206194817-3ea8567a2e57/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc=
github.com/google/pprof v0.0.0-20190515194954-54271f7e092f/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc=
github.com/google/pprof v0.0.0-20191218002539-d4f498aebedc/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM=
github.com/google/pprof v0.0.0-20200212024743-f11f1df84d12/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM=
github.com/google/pprof v0.0.0-20200229191704-1ebb73c60ed3/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM=
github.com/google/renameio v0.1.0/go.mod h1:KWCgfxg9yswjAJkECMjeO8J8rahYeXnNhOm40UhjYkI=
github.com/google/uuid v1.1.1/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/google/uuid v1.1.2/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/googleapis/gax-go/v2 v2.0.4/go.mod h1:0Wqv26UfaUD9n4G6kQubkQ+KchISgw+vpHVxEJEs9eg=
github.com/googleapis/gax-go/v2 v2.0.5/go.mod h1:DWXyrwAJ9X0FpwwEdw+IPEYBICEFu5mhpdKc/us6bOk=
github.com/googleapis/gnostic v0.4.1 h1:DLJCy1n/vrD4HPjOvYcT8aYQXpPIzoRZONaYwyycI+I=
github.com/googleapis/gnostic v0.4.1/go.mod h1:LRhVm6pbyptWbWbuZ38d1eyptfvIytN3ir6b65WBswg=
github.com/gorilla/websocket v1.4.2/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/adAjf1fMHhE=
github.com/gregjones/httpcache v0.0.0-20180305231024-9cad4c3443a7/go.mod h1:FecbI9+v66THATjSRHfNgh1IVFe/9kFxbXtjV0ctIMA=
github.com/hashicorp/golang-lru v0.5.0/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
github.com/hashicorp/golang-lru v0.5.1/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
github.com/hpcloud/tail v1.0.0/go.mod h1:ab1qPbhIpdTxEkNHXyeSf5vhxWSCs/tWer42PpOxQnU=
github.com/ianlancetaylor/demangle v0.0.0-20181102032728-5e5cf60278f6/go.mod h1:aSSvb/t6k1mPoxDqO4vJh6VOCGPwU4O0C2/Eqndh1Sc=
github.com/imdario/mergo v0.3.5 h1:JboBksRwiiAJWvIYJVo46AfV+IAIKZpfrSzVKj42R4Q=
github.com/imdario/mergo v0.3.5/go.mod h1:2EnlNZ0deacrJVfApfmtdGgDfMuh/nq6Ok1EcJh5FfA=
github.com/json-iterator/go v1.1.6/go.mod h1:+SdeFBvtyEkXs7REEP0seUULqWtbJapLOCVDaaPEHmU=
github.com/json-iterator/go v1.1.10 h1:Kz6Cvnvv2wGdaG/V8yMvfkmNiXq9Ya2KUv4rouJJr68=
github.com/json-iterator/go v1.1.10/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
github.com/jstemmer/go-junit-report v0.0.0-20190106144839-af01ea7f8024/go.mod h1:6v2b51hI/fHJwM22ozAgKL4VKDeJcHhJFhtBdhmNjmU=
github.com/jstemmer/go-junit-report v0.9.1/go.mod h1:Brl9GWCQeLvo8nXZwPNNblvFj/XSXhF0NWZEnDohbsk=
github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI2bnpBCr8=
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
github.com/kr/pretty v0.2.0/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI=
github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/pty v1.1.5/go.mod h1:9r2w37qlBe7rQ6e1fg1S/9xpWHSnaqNdHD3WcMdbPDA=
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
github.com/mailru/easyjson v0.0.0-20190614124828-94de47d64c63/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc=
github.com/mailru/easyjson v0.0.0-20190626092158-b2ccc519800e/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc=
github.com/mitchellh/mapstructure v1.1.2/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y=
github.com/moby/spdystream v0.2.0/go.mod h1:f7i0iNDQJ059oMTcWxx8MA/zKFIuD/lY+0GqbN2Wy8c=
github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd h1:TRLaZ9cD/w8PVh93nsPXa1VrQ6jlwL5oN8l14QlcNfg=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/reflect2 v0.0.0-20180701023420-4b7aa43c6742/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=
github.com/modern-go/reflect2 v1.0.1 h1:9f412s+6RmYXLWZSEzVVgPGK7C2PphHj5RJrvfx9AWI=
github.com/modern-go/reflect2 v1.0.1/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=
github.com/munnerz/goautoneg v0.0.0-20120707110453-a547fc61f48d/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ=
github.com/mxk/go-flowrate v0.0.0-20140419014527-cca7078d478f/go.mod h1:ZdcZmHo+o7JKHSa8/e818NopupXU1YMK5fe1lsApnBw=
github.com/niemeyer/pretty v0.0.0-20200227124842-a10e7caefd8e/go.mod h1:zD1mROLANZcx1PVRCS0qkT7pwLkGfwJo4zjcN/Tysno=
github.com/onsi/ginkgo v0.0.0-20170829012221-11459a886d9c/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.6.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.11.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/gomega v0.0.0-20170829124025-dcabb60a477c/go.mod h1:C1qb7wdrVGGVU+Z6iS04AVkA3Q65CEZX59MT0QO5uiA=
github.com/onsi/gomega v1.7.0/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY=
github.com/peterbourgon/diskv v2.0.1+incompatible/go.mod h1:uqqh8zWWbv1HBMNONnaR/tNboyR3/BZd58JJSHlUSCU=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/rogpeppe/go-internal v1.3.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4=
github.com/spf13/afero v1.2.2/go.mod h1:9ZxEEn6pIJ8Rxe320qSDBk6AsU0r9pR7Q4OcevTdifk=
github.com/spf13/pflag v0.0.0-20170130214245-9ff6c6923cff/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
github.com/spf13/pflag v1.0.5 h1:iy+VFUOCP1a+8yFto/drg2CJ5u0yRoB7fZw3DKv/JXA=
github.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.2.0/go.mod h1:qt09Ya8vawLte6SNmTgCsAVtYtaKzEcn8ATUoHMkEqE=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
github.com/stretchr/testify v1.6.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
go.opencensus.io v0.21.0/go.mod h1:mSImk1erAIZhrmZN+AvHh14ztQfjbGwt4TtuofqLduU=
go.opencensus.io v0.22.0/go.mod h1:+kGneAE2xo2IficOXnaByMWTGM9T73dGwxeWcUqIpI8=
go.opencensus.io v0.22.2/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw=
go.opencensus.io v0.22.3/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20190510104115-cbcb75029529/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20190605123033-f99c8df09eb5/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20190611184440-5c40567a22f8/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20201002170205-7f63de1d35b0/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20210220033148-5ea612d1eb83/go.mod h1:jdWPYTVW3xRLrWPugEBEK3UY2ZEsg3UU495nc5E+M+I=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190306152737-a1d7652674e8/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190510132918-efd6b22b2522/go.mod h1:ZjyILWgesfNpC6sMxTJOJm9Kp84zZh5NQWvqDGG3Qr8=
golang.org/x/exp v0.0.0-20190829153037-c13cbed26979/go.mod h1:86+5VVa7VpoJ4kLfm080zCjGlMRFzhUhsZKEZO7MGek=
golang.org/x/exp v0.0.0-20191030013958-a1ab85dbe136/go.mod h1:JXzH8nQsPlswgeRAPE3MuO9GYsAcnJvJ4vnMwN/5qkY=
golang.org/x/exp v0.0.0-20191129062945-2f5052295587/go.mod h1:2RIsYlXP63K8oxa1u096TMicItID8zy7Y6sNkU49FU4=
golang.org/x/exp v0.0.0-20191227195350-da58074b4299/go.mod h1:2RIsYlXP63K8oxa1u096TMicItID8zy7Y6sNkU49FU4=
golang.org/x/exp v0.0.0-20200119233911-0405dc783f0a/go.mod h1:2RIsYlXP63K8oxa1u096TMicItID8zy7Y6sNkU49FU4=
golang.org/x/exp v0.0.0-20200207192155-f17229e696bd/go.mod h1:J/WKrq2StrnmMY6+EHIKF9dgMWnmCNThgcyBT1FY9mM=
golang.org/x/exp v0.0.0-20200224162631-6cc2880d07d6/go.mod h1:3jZMyOhIsHpP37uCMkUooju7aAi5cS1Q23tOzKc+0MU=
golang.org/x/image v0.0.0-20190227222117-0694c2d4d067/go.mod h1:kZ7UVZpmo3dzQBMxlp+ypCbDeSB+sBbTgSJuh5dn5js=
golang.org/x/image v0.0.0-20190802002840-cff245a6509b/go.mod h1:FeLwcggjj3mMvU+oOTbSwawSJRM1uh48EjtB4UJZlP0=
golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
golang.org/x/lint v0.0.0-20190227174305-5b3e6a55c961/go.mod h1:wehouNa3lNwaWXcvxsM5YxQ5yQlVC4a0KAMCusXpPoU=
golang.org/x/lint v0.0.0-20190301231843-5614ed5bae6f/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/lint v0.0.0-20190409202823-959b441ac422/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/lint v0.0.0-20190909230951-414d861bb4ac/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/lint v0.0.0-20190930215403-16217165b5de/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/lint v0.0.0-20191125180803-fdd1cda4f05f/go.mod h1:5qLYkcX4OjUUV8bRuDixDT3tpyyb+LUpUlRWLxfhWrs=
golang.org/x/lint v0.0.0-20200130185559-910be7a94367/go.mod h1:3xt1FjdF8hUf6vQPIChWIBhFzV8gjjsPE/fR3IyQdNY=
golang.org/x/lint v0.0.0-20200302205851-738671d3881b/go.mod h1:3xt1FjdF8hUf6vQPIChWIBhFzV8gjjsPE/fR3IyQdNY=
golang.org/x/mobile v0.0.0-20190312151609-d3739f865fa6/go.mod h1:z+o9i4GpDbdi3rU15maQ/Ox0txvL9dWGYEHz965HBQE=
golang.org/x/mobile v0.0.0-20190719004257-d2bd2a29d028/go.mod h1:E/iHnbuqvinMTCcRqshq8CkpyQDoeVncDDYHnLhea+o=
golang.org/x/mod v0.0.0-20190513183733-4bf6d317e70e/go.mod h1:mXi4GBBbnImb6dmsKGUJ2LatrhH/nqhxcFungHvyanc=
golang.org/x/mod v0.1.0/go.mod h1:0QHyrYULN0/3qlju5TqG8bIK38QM8yzMo5ekMj3DlcY=
golang.org/x/mod v0.1.1-0.20191105210325-c90efee705ee/go.mod h1:QqPTAvyqsEbceGzBzNggFXnrqF1CaUcvgkdR5Ot7KZg=
golang.org/x/mod v0.1.1-0.20191107180719-034126e5016b/go.mod h1:QqPTAvyqsEbceGzBzNggFXnrqF1CaUcvgkdR5Ot7KZg=
golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190108225652-1e06a53dbb7e/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190213061140-3a22650c66bd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190501004415-9ce7a6920f09/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190503192946-f4e77d36d62c/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190603091049-60506f45cf65/go.mod h1:HSz+uSET+XFnRR8LxR5pz3Of3rY3CfYBVs4xY44aLks=
golang.org/x/net v0.0.0-20190613194153-d28f0bde5980/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20190724013045-ca1201d0de80/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20190827160401-ba9fcec4b297/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20191209160850-c0dbc17a3553/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200114155413-6afb5195e5aa/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200202094626-16171245cfb2/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200222125558-5a598a2470a0/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200301022130-244492dfa37a/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200324143707-d3edc9973b7e/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20210224082022-3d97a244fca7 h1:OgUuv8lsRpBibGNbSizVwKWlysjaNzmC9gYMhPVfqFM=
golang.org/x/net v0.0.0-20210224082022-3d97a244fca7/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20191202225959-858c2ad4c8b6/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20200107190931-bf48bf16ab8d h1:TzXSXBo42m9gQenoE3b9BGiEpg5IG2JkU5FkPIawgtw=
golang.org/x/oauth2 v0.0.0-20200107190931-bf48bf16ab8d/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190227155943-e225da77a7e6/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190312061237-fead79001313/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190502145724-3ef323f4f1fd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190507160741-ecd444e8653b/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190606165138-5da285871e9c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190616124812-15dcb6c0061f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190624142023-c5567b49c5d0/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190726091711-fc99dfbffb4e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191001151750-bb3f8db39f24/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191026070338-33540a1f6037/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191204072324-ce4227a45e2e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191228213918-04cbcbbfeed8/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200113162924-86b910548bc1/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200122134326-e047566fdf82/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200202164722-d101bd2416d5/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200212091648-12a6c2dcc1e4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200223170610-d5e6a3e2c0ae/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200302150141-5c8b2ff67527/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200323222414-85ca7c5b95cd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210225134936-a50acf3fe073 h1:8qxJSnu+7dRq6upnbntrmriWByIakBuct5OM/MdQC1M=
golang.org/x/sys v0.0.0-20210225134936-a50acf3fe073/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/term v0.0.0-20201117132131-f5c789dd3221/go.mod h1:Nr5EML6q2oocZ2LXRh80K7BxOlk5/8JxuGnuhpl+muw=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210220032956-6a3ed077a48d h1:SZxvLBoTP5yHO3Frd4z4vrF+DBX9vMVanchswa69toE=
golang.org/x/term v0.0.0-20210220032956-6a3ed077a48d/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/text v0.0.0-20170915032832-14c0d48ead0c/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.4 h1:0YWbFKbhXG/wIiuHDSKpS0Iy7FSA+u45VtBMfQcFTTc=
golang.org/x/text v0.3.4/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20191024005414-555d28b269f0/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20210220033141-f8bda1e9f3ba h1:O8mE0/t419eoIwhTFpKVkHiTs/Igowgfkj25AcZrtiE=
golang.org/x/time v0.0.0-20210220033141-f8bda1e9f3ba/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190226205152-f727befe758c/go.mod h1:9Yl7xja0Znq3iFh3HoIrodX9oNMXvdceNzlUR8zjMvY=
golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190312151545-0bb0c0a6e846/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190312170243-e65039ee4138/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190425150028-36563e24a262/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
golang.org/x/tools v0.0.0-20190506145303-2d16b83fe98c/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
golang.org/x/tools v0.0.0-20190524140312-2c0ae7006135/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
golang.org/x/tools v0.0.0-20190606124116-d0a3d012864b/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
golang.org/x/tools v0.0.0-20190614205625-5aca471b1d59/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
golang.org/x/tools v0.0.0-20190621195816-6e04913cbbac/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
golang.org/x/tools v0.0.0-20190628153133-6cdbf07be9d0/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
golang.org/x/tools v0.0.0-20190816200558-6889da9d5479/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20190911174233-4f2ddba30aff/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191012152004-8de300cfc20a/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191113191852-77e3bb0ad9e7/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191115202509-3a792d9c32b2/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191125144606-a911d9008d1f/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191130070609-6e064ea0cf2d/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191216173652-a0e659d51361/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20191227053925-7b8e75db28f4/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20200117161641-43d50277825c/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20200122220014-bf1340f18c4a/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20200130002326-2f3ba24bd6e7/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20200204074204-1cc6d1ef6c74/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20200207183749-b753a1ba74fa/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20200212150539-ea181f53ac56/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20200224181240-023911ca70b2/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20200304193943-95d2e580d8eb/go.mod h1:o4KQGtdN14AW+yjsvvwRTJJuXz8XRtIHtEnmAXLyFUw=
golang.org/x/tools v0.0.0-20200619180055-7c47624df98f/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
google.golang.org/api v0.4.0/go.mod h1:8k5glujaEP+g9n7WNsDg8QP6cUVNI86fCNMcbazEtwE=
google.golang.org/api v0.7.0/go.mod h1:WtwebWUNSVBH/HAw79HIFXZNqEvBhG+Ra+ax0hx3E3M=
google.golang.org/api v0.8.0/go.mod h1:o4eAsZoiT+ibD93RtjEohWalFOjRDx6CVaqeizhEnKg=
google.golang.org/api v0.9.0/go.mod h1:o4eAsZoiT+ibD93RtjEohWalFOjRDx6CVaqeizhEnKg=
google.golang.org/api v0.13.0/go.mod h1:iLdEw5Ide6rF15KTC1Kkl0iskquN2gFfn9o9XIsbkAI=
google.golang.org/api v0.14.0/go.mod h1:iLdEw5Ide6rF15KTC1Kkl0iskquN2gFfn9o9XIsbkAI=
google.golang.org/api v0.15.0/go.mod h1:iLdEw5Ide6rF15KTC1Kkl0iskquN2gFfn9o9XIsbkAI=
google.golang.org/api v0.17.0/go.mod h1:BwFmGc8tA3vsd7r/7kR8DY7iEEGSU04BFxCo5jP/sfE=
google.golang.org/api v0.18.0/go.mod h1:BwFmGc8tA3vsd7r/7kR8DY7iEEGSU04BFxCo5jP/sfE=
google.golang.org/api v0.20.0/go.mod h1:BwFmGc8tA3vsd7r/7kR8DY7iEEGSU04BFxCo5jP/sfE=
google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
google.golang.org/appengine v1.5.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
google.golang.org/appengine v1.6.1/go.mod h1:i06prIuMbXzDqacNJfV5OdTW448YApPu5ww/cMBSeb0=
google.golang.org/appengine v1.6.5/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCIDZVag1xfc=
google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
google.golang.org/genproto v0.0.0-20190307195333-5fe7a883aa19/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
google.golang.org/genproto v0.0.0-20190418145605-e7d98fc518a7/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
google.golang.org/genproto v0.0.0-20190425155659-357c62f0e4bb/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
google.golang.org/genproto v0.0.0-20190502173448-54afdca5d873/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
google.golang.org/genproto v0.0.0-20190801165951-fa694d86fc64/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc=
google.golang.org/genproto v0.0.0-20190819201941-24fa4b261c55/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc=
google.golang.org/genproto v0.0.0-20190911173649-1774047e7e51/go.mod h1:IbNlFCBrqXvoKpeg0TB2l7cyZUmoaFKYIwrEpbDKLA8=
google.golang.org/genproto v0.0.0-20191108220845-16a3f7862a1a/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
google.golang.org/genproto v0.0.0-20191115194625-c23dd37a84c9/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
google.golang.org/genproto v0.0.0-20191216164720-4f79533eabd1/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
google.golang.org/genproto v0.0.0-20191230161307-f3c370f40bfb/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
google.golang.org/genproto v0.0.0-20200115191322-ca5a22157cba/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
google.golang.org/genproto v0.0.0-20200122232147-0452cf42e150/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
google.golang.org/genproto v0.0.0-20200204135345-fa8e72b47b90/go.mod h1:GmwEX6Z4W5gMy59cAlVYjN9JhxgbQH6Gn+gFDQe2lzA=
google.golang.org/genproto v0.0.0-20200212174721-66ed5ce911ce/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
google.golang.org/genproto v0.0.0-20200224152610-e50cd9704f63/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
google.golang.org/genproto v0.0.0-20200305110556-506484158171/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
google.golang.org/genproto v0.0.0-20200526211855-cb27e3aa2013/go.mod h1:NbSheEEYHJ7i3ixzK3sjbqSGDJWnxyFXZblF3eUsNvo=
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
google.golang.org/grpc v1.20.1/go.mod h1:10oTOabMzJvdu6/UiuZezV6QK5dSlG84ov/aaiqXj38=
google.golang.org/grpc v1.21.1/go.mod h1:oYelfM1adQP15Ek0mdvEgi9Df8B9CZIaU1084ijfRaM=
google.golang.org/grpc v1.23.0/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg=
google.golang.org/grpc v1.26.0/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk=
google.golang.org/grpc v1.27.0/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk=
google.golang.org/grpc v1.27.1/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk=
google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8=
google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0=
google.golang.org/protobuf v0.0.0-20200228230310-ab0ca4ff8a60/go.mod h1:cfTl7dwQJ+fmap5saPgwCLgHXTUD7jkjRqWcaiX5VyM=
google.golang.org/protobuf v1.20.1-0.20200309200217-e05f789c0967/go.mod h1:A+miEFZTKqfCUM6K7xSMQL9OKL/b6hQv+e19PK+JZNE=
google.golang.org/protobuf v1.21.0/go.mod h1:47Nbq4nVaFHyn7ilMalzfO3qCViNmqZ2kzikPIcrTAo=
google.golang.org/protobuf v1.22.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
google.golang.org/protobuf v1.23.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
google.golang.org/protobuf v1.23.1-0.20200526195155-81db48ad09cc/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
google.golang.org/protobuf v1.25.0 h1:Ejskq+SyPohKW+1uil0JJMtmHCgJPJ/qWTxr8qp+R4c=
google.golang.org/protobuf v1.25.0/go.mod h1:9JNX74DMeImyA3h4bdi1ymwjUzf21/xIlbajtzgsN7c=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20200227125254-8fa46927fb4f/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/errgo.v2 v2.1.0/go.mod h1:hNsd1EY+bozCKY1Ytp96fpM3vjJbqLJn88ws8XvfDNI=
gopkg.in/fsnotify.v1 v1.4.7/go.mod h1:Tz8NjZHkW78fSQdbUxIjBTcgA1z1m8ZHf0WmKUhAMys=
gopkg.in/inf.v0 v0.9.1 h1:73M5CoZyi3ZLMOyDlQh031Cx6N9NDJ2Vvfl76EDAgDc=
gopkg.in/inf.v0 v0.9.1/go.mod h1:cWUDdTG/fYaXco+Dcufb5Vnc6Gp2YChqWtbxRZE0mXw=
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7/go.mod h1:dt/ZhP58zS4L8KSrWDmTeBkI65Dw0HsyUHuEVlX15mw=
gopkg.in/yaml.v2 v2.2.1/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.8/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.4.0 h1:D8xgwECY7CYvx+Y2n4sBz93Jn9JRvxdiyyo8CTfuKaY=
gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190106161140-3f1c8253044a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190418001031-e561f6794a2a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.1-2019.2.3/go.mod h1:a3bituU0lyd329TUQxRnasdCoJDkEUEAqEt0JzvZhAg=
honnef.co/go/tools v0.0.1-2020.1.3/go.mod h1:X/FiERA/W4tHapMX5mGpAtMSVEeEUOyHaw9vFzvIQ3k=
k8s.io/api v0.21.0 h1:gu5iGF4V6tfVCQ/R+8Hc0h7H1JuEhzyEi9S4R5LM8+Y=
k8s.io/api v0.21.0/go.mod h1:+YbrhBBGgsxbF6o6Kj4KJPJnBmAKuXDeS3E18bgHNVU=
k8s.io/apimachinery v0.21.0 h1:3Fx+41if+IRavNcKOz09FwEXDBG6ORh6iMsTSelhkMA=
k8s.io/apimachinery v0.21.0/go.mod h1:jbreFvJo3ov9rj7eWT7+sYiRx+qZuCYXwWT1bcDswPY=
k8s.io/client-go v0.21.0 h1:n0zzzJsAQmJngpC0IhgFcApZyoGXPrDIAD601HD09ag=
k8s.io/client-go v0.21.0/go.mod h1:nNBytTF9qPFDEhoqgEPaarobC8QPae13bElIVHzIglA=
k8s.io/gengo v0.0.0-20200413195148-3a45101e95ac/go.mod h1:ezvh/TsK7cY6rbqRK0oQQ8IAqLxYwwyPxAX1Pzy0ii0=
k8s.io/klog/v2 v2.0.0/go.mod h1:PBfzABfn139FHAV07az/IF9Wp1bkk3vpT2XSJ76fSDE=
k8s.io/klog/v2 v2.8.0 h1:Q3gmuM9hKEjefWFFYF0Mat+YyFJvsUyYuwyNNJ5C9Ts=
k8s.io/klog/v2 v2.8.0/go.mod h1:hy9LJ/NvuK+iVyP4Ehqva4HxZG/oXyIS3n3Jmire4Ec=
k8s.io/kube-openapi v0.0.0-20210305001622-591a79e4bda7/go.mod h1:wXW5VT87nVfh/iLV8FpR2uDvrFyomxbtb1KivDbvPTE=
k8s.io/utils v0.0.0-20201110183641-67b214c5f920 h1:CbnUZsM497iRC5QMVkHwyl8s2tB3g7yaSHkYPkpgelw=
k8s.io/utils v0.0.0-20201110183641-67b214c5f920/go.mod h1:jPW/WVKK9YHAvNhRxK0md/EJ228hCsBRufyofKtW8HA=
rsc.io/binaryregexp v0.2.0/go.mod h1:qTv7/COck+e2FymRvadv62gMdZztPaShugOCi3I+8D8=
rsc.io/quote/v3 v3.1.0/go.mod h1:yEA65RcK8LyAZtP9Kv3t0HmxON59tX3rD+tICJqUlj0=
rsc.io/sampler v1.3.0/go.mod h1:T1hPZKmBbMNahiBKFy5HrXp6adAjACjK9JXDnKaTXpA=
sigs.k8s.io/structured-merge-diff/v4 v4.0.2/go.mod h1:bJZC9H9iH24zzfZ/41RGcq60oK1F7G282QMXDPYydCw=
sigs.k8s.io/structured-merge-diff/v4 v4.1.0 h1:C4r9BgJ98vrKnnVCjwCSXcWjWe0NKcUQkmzDXZXGwH8=
sigs.k8s.io/structured-merge-diff/v4 v4.1.0/go.mod h1:bJZC9H9iH24zzfZ/41RGcq60oK1F7G282QMXDPYydCw=
sigs.k8s.io/yaml v1.2.0 h1:kr/MCeFWJWTwyaHoR9c8EjH9OumOmoF9YGiZd7lFm/Q=
sigs.k8s.io/yaml v1.2.0/go.mod h1:yfXDCHCao9+ENCvLSE62v9VSji2MKu5jeNfTrofGhJc=

View File

@@ -0,0 +1,169 @@
package main

import (
    "encoding/json"
    "flag"
    "fmt"
    "io/ioutil"
    "log"
    "net/http"
    "os"
    "path/filepath"
    "strconv"

    "k8s.io/api/admission/v1beta1"
    apiv1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/runtime"
    "k8s.io/apimachinery/pkg/runtime/serializer"
    "k8s.io/client-go/kubernetes"
    rest "k8s.io/client-go/rest"
    "k8s.io/client-go/tools/clientcmd"
    "k8s.io/client-go/util/homedir"
)

type ServerParameters struct {
    port     int    // webhook server port
    certFile string // path to the x509 certificate for https
    keyFile  string // path to the x509 private key matching `CertFile`
}

type patchOperation struct {
    Op    string      `json:"op"`
    Path  string      `json:"path"`
    Value interface{} `json:"value,omitempty"`
}

var parameters ServerParameters

var (
    universalDeserializer = serializer.NewCodecFactory(runtime.NewScheme()).UniversalDeserializer()
)

var config *rest.Config
var clientSet *kubernetes.Clientset

func main() {
    useKubeConfig := os.Getenv("USE_KUBECONFIG")
    kubeConfigFilePath := os.Getenv("KUBECONFIG")

    flag.IntVar(&parameters.port, "port", 8443, "Webhook server port.")
    flag.StringVar(&parameters.certFile, "tlsCertFile", "/etc/webhook/certs/tls.crt", "File containing the x509 Certificate for HTTPS.")
    flag.StringVar(&parameters.keyFile, "tlsKeyFile", "/etc/webhook/certs/tls.key", "File containing the x509 private key to --tlsCertFile.")
    flag.Parse()

    if len(useKubeConfig) == 0 {
        // default to the in-cluster service account token
        c, err := rest.InClusterConfig()
        if err != nil {
            panic(err.Error())
        }
        config = c
    } else {
        // load from a kube config
        var kubeconfig string

        if kubeConfigFilePath == "" {
            if home := homedir.HomeDir(); home != "" {
                kubeconfig = filepath.Join(home, ".kube", "config")
            }
        } else {
            kubeconfig = kubeConfigFilePath
        }

        fmt.Println("kubeconfig: " + kubeconfig)

        c, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            panic(err.Error())
        }
        config = c
    }

    cs, err := kubernetes.NewForConfig(config)
    if err != nil {
        panic(err.Error())
    }
    clientSet = cs

    test()

    http.HandleFunc("/", HandleRoot)
    http.HandleFunc("/mutate", HandleMutate)
    log.Fatal(http.ListenAndServeTLS(":"+strconv.Itoa(parameters.port), parameters.certFile, parameters.keyFile, nil))
}

func HandleRoot(w http.ResponseWriter, r *http.Request) {
    w.Write([]byte("HandleRoot!"))
}

func HandleMutate(w http.ResponseWriter, r *http.Request) {
    body, err := ioutil.ReadAll(r.Body)
    if err != nil {
        http.Error(w, fmt.Sprintf("could not read request body: %v", err), http.StatusBadRequest)
        return
    }

    // dump the raw admission request to disk so we can inspect it while developing
    if err := ioutil.WriteFile("/tmp/request", body, 0644); err != nil {
        panic(err.Error())
    }

    var admissionReviewReq v1beta1.AdmissionReview

    if _, _, err := universalDeserializer.Decode(body, nil, &admissionReviewReq); err != nil {
        http.Error(w, fmt.Sprintf("could not deserialize request: %v", err), http.StatusBadRequest)
        return
    } else if admissionReviewReq.Request == nil {
        http.Error(w, "malformed admission review: request is nil", http.StatusBadRequest)
        return
    }

    fmt.Printf("Type: %v \t Event: %v \t Name: %v \n",
        admissionReviewReq.Request.Kind,
        admissionReviewReq.Request.Operation,
        admissionReviewReq.Request.Name,
    )

    var pod apiv1.Pod

    err = json.Unmarshal(admissionReviewReq.Request.Object.Raw, &pod)
    if err != nil {
        http.Error(w, fmt.Sprintf("could not unmarshal pod on admission request: %v", err), http.StatusBadRequest)
        return
    }

    var patches []patchOperation

    // the labels map is nil when the incoming pod has no labels at all
    labels := pod.ObjectMeta.Labels
    if labels == nil {
        labels = map[string]string{}
    }
    labels["example-webhook"] = "it-worked"

    patches = append(patches, patchOperation{
        Op:    "add",
        Path:  "/metadata/labels",
        Value: labels,
    })

    patchBytes, err := json.Marshal(patches)
    if err != nil {
        http.Error(w, fmt.Sprintf("could not marshal JSON patch: %v", err), http.StatusInternalServerError)
        return
    }

    // admission v1 requires patchType to be set whenever a patch is returned
    patchType := v1beta1.PatchTypeJSONPatch

    admissionReviewResponse := v1beta1.AdmissionReview{
        Response: &v1beta1.AdmissionResponse{
            UID:       admissionReviewReq.Request.UID,
            Allowed:   true,
            PatchType: &patchType,
            Patch:     patchBytes,
        },
    }

    bytes, err := json.Marshal(&admissionReviewResponse)
    if err != nil {
        http.Error(w, fmt.Sprintf("could not marshal response: %v", err), http.StatusInternalServerError)
        return
    }

    w.Write(bytes)
}

View File

@@ -0,0 +1,18 @@
package main

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// test lists all pods in the cluster to verify that our client config and credentials work
func test() {
    pods, err := clientSet.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{})
    if err != nil {
        panic(err.Error())
    }

    fmt.Printf("There are %d pods in the cluster\n", len(pods.Items))
}

View File

@@ -0,0 +1,13 @@
{
"signing": {
"default": {
"expiry": "175200h"
},
"profiles": {
"default": {
"usages": ["signing", "key encipherment", "server auth", "client auth"],
"expiry": "175200h"
}
}
}
}

View File

@@ -0,0 +1,18 @@
{
"hosts": [
"cluster.local"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "AU",
"L": "Melbourne",
"O": "Example",
"OU": "CA",
"ST": "Example"
}
]
}

View File

@@ -0,0 +1,8 @@
apiVersion: v1
kind: Secret
metadata:
name: example-webhook-tls
type: Opaque
data:
tls.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUVCekNDQXUrZ0F3SUJBZ0lVZmxqeWRGNkFCSEtNUTVvdUxlWnhGY0FyY3Y0d0RRWUpLb1pJaHZjTkFRRUwKQlFBd1VqRUxNQWtHQTFVRUJoTUNRVlV4RURBT0JnTlZCQWdUQjBWNFlXMXdiR1V4RWpBUUJnTlZCQWNUQ1UxbApiR0p2ZFhKdVpURVFNQTRHQTFVRUNoTUhSWGhoYlhCc1pURUxNQWtHQTFVRUN4TUNRMEV3SGhjTk1qRXdOREU0Ck1qTTBOekF3V2hjTk5ERXdOREV6TWpNME56QXdXakJTTVFzd0NRWURWUVFHRXdKQlZURVFNQTRHQTFVRUNCTUgKUlhoaGJYQnNaVEVTTUJBR0ExVUVCeE1KVFdWc1ltOTFjbTVsTVJBd0RnWURWUVFLRXdkRmVHRnRjR3hsTVFzdwpDUVlEVlFRTEV3SkRRVENDQVNJd0RRWUpLb1pJaHZjTkFRRUJCUUFEZ2dFUEFEQ0NBUW9DZ2dFQkFLK3EvQ0hTCk1lYzJ4YWdlMTZCcUU0Ni9nQnR1ejRUY1dEd1l3U2FYWmRVaEdBNTBLSU9tWGtCWkN5OVRlci9QakhIc0dSd0UKUnhsWXZvYkdOZlNMZTdFOEhxVjIxRkxtQ0Vjbk5ucldtRDFJZDU1MENYaEVRc2pDSGJXbFhLZVRIbFUrck10cQpsZnd0MHQxUVNzTDdhaFZxQnF3V0FHVGdXcTZoMkY4ODdocDdXK0pqN3FwVUFmamxObXdPYUxGK2U2dW9uNjRHCkNGRWNjbm9XV2hnWDM4M2I3bWQrUFVQejF0aWVIQ242OUFqQUF2UThMdVFxZDQwdVZJbmM0YVNRNXB4OXkxQWQKZ3J5eHE4R3AvUXBqZ2M2MWJPWHFBbFhFQmZsOUVUM0kyaC9Ya2Nwb2R4N1E2b2Fkalo4Z2Q1N3BpRjVkb05MQwpKcHJoTE1UN3hrSG41WDBDQXdFQUFhT0IxRENCMFRBT0JnTlZIUThCQWY4RUJBTUNCYUF3SFFZRFZSMGxCQll3CkZBWUlLd1lCQlFVSEF3RUdDQ3NHQVFVRkJ3TUNNQXdHQTFVZEV3RUIvd1FDTUFBd0hRWURWUjBPQkJZRUZBZUQKOUZsUUNtOHBTby8wQTlKUkUwUXNRUFFnTUhNR0ExVWRFUVJzTUdxQ0QyVjRZVzF3YkdVdGQyVmlhRzl2YTRJcApaWGhoYlhCc1pTMTNaV0pvYjI5ckxtUmxabUYxYkhRdWMzWmpMbU5zZFhOMFpYSXViRzlqWVd5Q0cyVjRZVzF3CmJHVXRkMlZpYUc5dmF5NWtaV1poZFd4MExuTjJZNElKYkc5allXeG9iM04waHdSL0FBQUJNQTBHQ1NxR1NJYjMKRFFFQkN3VUFBNElCQVFDQUZNVlpGdEd6SDNZU051eFlzU3pGVm9LTlE5SmtuZkFRZnRhdjlBNnVBRDF5VThFaQo4QS9kUVFEVGVaakdUSUxUZTF0b0hIQzJxamxObkUyeUJLV3dTS2pqT3RNRDljQVRiYmF4VVdqa01lUGh3azMrClFCV3NhaTJMeWNkRnRVZ0hqTDNNcG8wZlY0S1c3QmFSK2pZOHpLNEdaUWVJaVBrdE9wL0w5ZWhBWmEwUWl6MWYKd1V1K1czVTN4S3hoZHMyTXpNb1U4K0tGd2NWcEk5R0w0OExwVHNsS0NJejI4SUhzRU9NTzFhcXd5cUJNNm9EOAo5ZUwyMjJDbjVhUEVqUXcxV2xiWk9CaFMzdkx5eWx5UzdMVUJ1bDRob3JlaXIrZHF1QmNtTC9xeXUvS3ZlbmhlCkZQaktHelkvRGZJaHdJc2NWUFh6bjltdi8wNnA1QnlzeExVVgotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
tls.key: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb2dJQkFBS0NBUUVBcjZyOElkSXg1emJGcUI3WG9Hb1RqcitBRzI3UGhOeFlQQmpCSnBkbDFTRVlEblFvCmc2WmVRRmtMTDFONnY4K01jZXdaSEFSSEdWaStoc1kxOUl0N3NUd2VwWGJVVXVZSVJ5YzJldGFZUFVoM25uUUoKZUVSQ3lNSWR0YVZjcDVNZVZUNnN5MnFWL0MzUzNWQkt3dnRxRldvR3JCWUFaT0JhcnFIWVh6enVHbnRiNG1QdQpxbFFCK09VMmJBNW9zWDU3cTZpZnJnWUlVUnh5ZWhaYUdCZmZ6ZHZ1WjM0OVEvUFcySjRjS2ZyMENNQUM5RHd1CjVDcDNqUzVVaWR6aHBKRG1uSDNMVUIyQ3ZMR3J3YW45Q21PQnpyVnM1ZW9DVmNRRitYMFJQY2phSDllUnltaDMKSHREcWhwMk5ueUIzbnVtSVhsMmcwc0ltbXVFc3hQdkdRZWZsZlFJREFRQUJBb0lCQUFZNkJsUGdrbnBDbThENAp6dVhWdkxtN21mdmU4cVlmOVZTei8reXhReC9KMjROdnBKditBcXMvUE1GQnNVRXBSeTRtazBGRitZc3hkUmRyCjRTKzQzZnFMU2Y3TmRuczF3aWRiZ1hmYk1XeENyRkxHaEN0cUovL2J1WmZkczZvUThldE5uR3hkYTlHVGdqenMKQXFwa3BQNzdVaDg1Ykd3bTg2L3E5ck54Z25NWHl6K2dHSVFCSGhvQWFaSlhNQmsvWWpVT3FpMTdWaG93RlpZMwpjK21XSU5zZzNTTWQwT1FNYXNhVEpoMU5uTnloU3p4MU5nSGxVNXo0WmpIbU9WZ1NLcmRkeEJDSnU5NFhReDU3CkxoK2x4ZDgyNFNiOGVLQTY5bWVXazZxd2hoWXo5bEk5NHlKdnlIN0kwQnlvdUJMaTEvdXBmSWxuSFF0c09MUloKcWU2dFZLRUNnWUVBM0RZbmdlaGtxbTVJNk9jOU5wNjRwKzhrdXhmbXltV0IrSTI1dWxtTGRoYzhZUFVrZUQybApnZ0lLelZURWlTYk5qUUZzK1MyMUxEWVdSY09ObzVacjFCUFVLcEVycnBUVUN4N3c3SFowMklVbzEybDAwbzhsCm1QWEhRVlVwenZoUmJYNjFsaEtYcGVzUFV5WTQ1MS9mSUhRckVVc1NjNFY4QWRXYVlZekg2eVVDZ1lFQXpEZWEKb2QweUFMQmFjdWVWdjRadW0xL1pZaytsWjJpcG1Vdzd4UDBIV1poMWJReWptcmpNd0xvbXl2c20rSk4zUk8rUQpkQk5TYWczTzNYczNNUVRVM2xseFlKTWJPRmxWRUV5Qy83eGpUdXc4Tkw1S1FCNWlvRTdleDRFQ2xMZU5rK2grCllaNUFOOTh2azNhNEdILzFlMTFxMmlZeUlUb1FNQTAwWmxLdmJYa0NnWUJkQ3B5Q3JOZnJrcEZIcG53Y21jOVgKVlJsbDIyRnQzcG1kbFBRR0lsTmtYOGpwQm1xVVN5ZWsySXdMMldiNHMrWmhUMXJscFVSSkc4a3BUTWlKZDhLegpablZjVHQzdjgzM3IvUFM2VkFwbWVVeWFSenBPeEtDVUVqUlFERldQMXlkQVppcis3M2dYYUV1ZlRDVDZ6VzBPCjMwWmJGaWNEbkVDYTNjOU9yQmJENlFLQmdCY0F0R1JESEJ6RHdJeHMxWXRMUXk0eEw3VkpMMkprZ2FZSTFqcXMKSGFYVDdIWXFGRXViUVVUOE10NXVSOGQ4Sk5VWS92WjBMclpQYzl1eXcxYThLcFlaRVJKRnY2MHJNcyt4THBoTAp5Z3ZieERSVXN0eGlEODNxMUdFNGdPZnJmUUVLRVNKQnh3NEVEOEhXZjRvUzc3M0RtZ09VaGRVRVMwcCtVa2FzClRhSlJBb0dBU0M1UXdzZTRVQXh3eFl4UDU3dWNOaUNZbUp4MVZZUUdmR003R2lqaFRiUzJHOCtOL3Q3clBZUXYKRFJBUTRFS1ZBWW4xS0lPY1ljWk85QTBVcE15cWhJYTVTM1VkTlpidWx6Tk0rclBQMXJXVEQ4TURDeGh1U2xMVwpLV1RGRFRkVnpXU3loYk1jYWhwRy9OemVBaVF5MUdhYnRrcm4weHBQOW9tZmtjN0NGQ1U9Ci0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==

View File

@@ -0,0 +1,46 @@
# Use CFSSL to generate certificates
More about [CFSSL here]("https://github.com/cloudflare/cfssl")
```
cd kubernetes\admissioncontrollers\introduction
docker run -it --rm -v ${PWD}:/work -w /work debian bash
apt-get update && apt-get install -y curl && \
curl -L https://github.com/cloudflare/cfssl/releases/download/v1.5.0/cfssl_1.5.0_linux_amd64 -o /usr/local/bin/cfssl && \
curl -L https://github.com/cloudflare/cfssl/releases/download/v1.5.0/cfssljson_1.5.0_linux_amd64 -o /usr/local/bin/cfssljson && \
chmod +x /usr/local/bin/cfssl && \
chmod +x /usr/local/bin/cfssljson
#generate ca in /tmp
cfssl gencert -initca ./tls/ca-csr.json | cfssljson -bare /tmp/ca
#generate certificate in /tmp
cfssl gencert \
-ca=/tmp/ca.pem \
-ca-key=/tmp/ca-key.pem \
-config=./tls/ca-config.json \
-hostname="example-webhook,example-webhook.default.svc.cluster.local,example-webhook.default.svc,localhost,127.0.0.1" \
-profile=default \
./tls/ca-csr.json | cfssljson -bare /tmp/example-webhook
#make a secret
cat <<EOF > ./tls/example-webhook-tls.yaml
apiVersion: v1
kind: Secret
metadata:
name: example-webhook-tls
type: Opaque
data:
tls.crt: $(cat /tmp/example-webhook.pem | base64 | tr -d '\n')
tls.key: $(cat /tmp/example-webhook-key.pem | base64 | tr -d '\n')
EOF
#generate CA Bundle + inject into template
ca_pem_b64="$(openssl base64 -A <"/tmp/ca.pem")"
sed -e 's@${CA_PEM_B64}@'"$ca_pem_b64"'@g' <"webhook-template.yaml" \
> webhook.yaml
```
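To sanity-check that the certificate covers the hostnames the Kubernetes API server will use to reach the webhook, you can inspect its SAN list (assuming `openssl` is available in the container):
```
openssl x509 -in /tmp/example-webhook.pem -noout -text | grep -A1 "Subject Alternative Name"
```
You should see the service DNS names passed via the `-hostname` flag above.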

View File

@@ -0,0 +1,24 @@
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
name: example-webhook
webhooks:
- name: example-webhook.default.svc.cluster.local
admissionReviewVersions:
- "v1beta1"
sideEffects: "None"
timeoutSeconds: 30
objectSelector:
matchLabels:
example-webhook-enabled: "true"
clientConfig:
service:
name: example-webhook
namespace: default
path: "/mutate"
caBundle: "${CA_PEM_B64}"
rules:
- operations: [ "CREATE" ]
apiGroups: [""]
apiVersions: ["v1"]
resources: ["pods"]

View File

@@ -0,0 +1,24 @@
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
name: example-webhook
webhooks:
- name: example-webhook.default.svc.cluster.local
admissionReviewVersions:
- "v1beta1"
sideEffects: "None"
timeoutSeconds: 30
objectSelector:
matchLabels:
example-webhook-enabled: "true"
clientConfig:
service:
name: example-webhook
namespace: default
path: "/mutate"
caBundle: "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURtRENDQW9DZ0F3SUJBZ0lVUmE5YzR5NXVmUXNEWHFrZFo1cFhmOEFGcjNBd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1VqRUxNQWtHQTFVRUJoTUNRVlV4RURBT0JnTlZCQWdUQjBWNFlXMXdiR1V4RWpBUUJnTlZCQWNUQ1UxbApiR0p2ZFhKdVpURVFNQTRHQTFVRUNoTUhSWGhoYlhCc1pURUxNQWtHQTFVRUN4TUNRMEV3SGhjTk1qRXdOREUwCk1EUTBOekF3V2hjTk1qWXdOREV6TURRME56QXdXakJTTVFzd0NRWURWUVFHRXdKQlZURVFNQTRHQTFVRUNCTUgKUlhoaGJYQnNaVEVTTUJBR0ExVUVCeE1KVFdWc1ltOTFjbTVsTVJBd0RnWURWUVFLRXdkRmVHRnRjR3hsTVFzdwpDUVlEVlFRTEV3SkRRVENDQVNJd0RRWUpLb1pJaHZjTkFRRUJCUUFEZ2dFUEFEQ0NBUW9DZ2dFQkFMR1Q2NW9SClVWcTdDM2RKNkZYSXQwOVdFVkN0MUZuWDRZL3Vtc0wySGpudkdNdGdsU096a0p5cjVTYk5uRytZVzQ4UmhzVEQKZEMxTzFmbWFTWmpRYUNjQ2ZJRFZSTDFyVDFxNis0Smo1QTJnN1VPOS9NbThySnlFeFFMaXVJTVJoc1lUaFR6OQpmTDZucXBKWDZaelpPRFhIcUR0SU1wM2tkdGJ1dGhZRW01WW5RQjd3dTRPUnZJbmtkQk0wNWVHZGw3MFkwVHF1CmlZY3ZBL0MzRndUZjErOTBvTHdIdDcyakw3THBBL0pZSGwvWHNuMVhMd1dic3J0ZUZEVUtsWFl2c1lhKytrSVQKbnloeTF3MkZTeWNtSUdDU2hPdjJsVm1sbVVOUWI4b1lZaDNzTUtEZnhZMmwwak5mVnRibituZ0p6bzFqcklYSAptcTJkdzRiWC9WV3F1ZnNDQXdFQUFhTm1NR1F3RGdZRFZSMFBBUUgvQkFRREFnRUdNQklHQTFVZEV3RUIvd1FJCk1BWUJBZjhDQVFJd0hRWURWUjBPQkJZRUZJSnh4VXIyUzlhd0ptM1Nqd3QvRCt1cElaUXNNQjhHQTFVZEl3UVkKTUJhQUZJSnh4VXIyUzlhd0ptM1Nqd3QvRCt1cElaUXNNQTBHQ1NxR1NJYjNEUUVCQ3dVQUE0SUJBUUNWdDFaOQo4aWMzNVloL3JITktiMGtOZmFPTUNWaWQzamZoNUR0V2tIVG0rVlRkMk1oUDRHb1lGNXNYSmhMN280dzRFTXV2ClNJTGlPMDJjRnVON0NLVno2NUNLQXQwNkI5ekxOb1NUWVlzbWVGUERnME0xUWMreW1FeGlWYU5CelZtRUdHbHMKNXdyUHhXMzE1L1h4VWRnc25kcWRQUmFQM0NBZGM5dEl0Q1BSL0ZoRys0ZHlvd0NiYmZ3NGhiekhDRmRyenJ1cwpLc1Z6NFFWekZkeWhiUFlFQ0hlYVp0UUxzalNQVng4U1kwZWtPcWZyTkZxcHMwRzNEL3pMSmFGVE8xRTVwUkkvClAwTk9JKzRCcnVoQXRZY2d0djhVWTNHc3VQNzE2QmxwY2taZ3FMTm5Zci8vTllZY0NGcEs1enhmaXdYc3VXNDEKWHhuY1VJUG9BSXlXY1cyWQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg=="
rules:
- operations: [ "CREATE" ]
apiGroups: [""]
apiVersions: ["v1"]
resources: ["pods"]

115
affinity/README.md Normal file
View File

@@ -0,0 +1,115 @@
# Kubernetes Concept: Affinity \ Anti-Affinity
## Create a kubernetes cluster
In this guide we'll need a Kubernetes cluster for testing. Let's create one using [kind](https://kind.sigs.k8s.io/) </br>
```
cd kubernetes/affinity
kind create cluster --name demo --image kindest/node:v1.28.0 --config kind.yaml
```
Test the cluster:
```
kubectl get nodes
NAME STATUS ROLES AGE VERSION
demo-control-plane Ready control-plane 59s v1.28.0
demo-worker Ready <none> 36s v1.28.0
demo-worker2 Ready <none> 35s v1.28.0
demo-worker3 Ready <none> 35s v1.28.0
```
## Node Affinity
[Node Affinity](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity) is similar to `nodeSelector`, but it lets you define more complex expressions, like "my pods must run on SSD nodes" or "my pods should prefer SSD nodes".
For example:
* A node selector is a hard and fast rule, meaning a pod will not be scheduled if the selection is not satisfied
* For example, when setting the `os` selector to `linux`, a pod can only be scheduled if there is a node available where the `os` label is `linux`
Node Affinity, by contrast, allows an expression.
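Below is the affinity block from the `node-affinity.yaml` we are about to apply: the pod is required to run in `us-east`, and prefers (with weight 1) nodes labelled `type: ssd`:
```
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: zone
          operator: In
          values:
          - us-east
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 1
      preference:
        matchExpressions:
        - key: type
          operator: In
          values:
          - ssd
```
Let's apply it: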
```
kubectl apply -f node-affinity.yaml
```
We can see our pods are preferring SSD and are always going to `us-east`
```
kubectl get pods -owide
#introduce more pods
kubectl scale deploy app-disk --replicas 10
#observe all pods on demo-worker
```
If there is some trouble with our `ssd` disks, we can simulate it with `kubectl taint nodes demo-worker type=ssd:NoSchedule` and watch pods go to the non-SSD nodes in `us-east` </br>
This is because our pods prefer SSD, but when no SSD node is available they will still go to non-SSD nodes, as long as there are nodes available in `us-east` </br>
If something also goes wrong on our last `us-east` node, `kubectl taint nodes demo-worker3 type=ssd:NoSchedule`, and we roll out more pods with `kubectl scale deploy app-disk --replicas 20`,
notice that our new pods are stuck in `Pending` status because no nodes satisfy our node affinity rules </br>
Fix our nodes.
```
kubectl taint nodes demo-worker type=ssd:NoSchedule-
kubectl taint nodes demo-worker3 type=ssd:NoSchedule-
```
Scale back down to 0
```
kubectl scale deploy app-disk --replicas 0
kubectl scale deploy app-disk --replicas 1
# pod should go back to demo-worker , node 1
kubectl get pods -owide
```
## Pod Affinity
[Pod Affinity](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity) is an expression that lets us state that pods should gravitate towards other pods.
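Here is the relevant snippet from the `web-disk` deployment in `pod-affinity.yaml`: `web-disk` pods must be scheduled onto the same node (`kubernetes.io/hostname`) as a pod labelled `app: app-disk`:
```
affinity:
  podAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchExpressions:
        - key: app
          operator: In
          values:
          - app-disk
      topologyKey: "kubernetes.io/hostname"
```
Let's apply it and watch the scheduling: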
```
kubectl apply -f pod-affinity.yaml
# observe where pods get deployed
kubectl get pods -owide
kubectl scale deploy app-disk --replicas 3
kubectl scale deploy web-disk --replicas 3
```
## Pod Anti-Affinity
Let's say we observe that our `app-disk` application's disk usage is quite intense, and we would like to prevent `app-disk` pods from running together. </br>
This is where anti-affinity comes in:
```
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
operator: In
values:
- app-disk
topologyKey: "kubernetes.io/hostname"
```
After applying the above, we can roll it out and observe scheduling:
```
kubectl scale deploy app-disk --replicas 0
kubectl scale deploy web-disk --replicas 0
kubectl apply -f node-affinity.yaml
kubectl get pods -owide
kubectl scale deploy app-disk --replicas 2 #notice pending pods when scaling to 3
kubectl get pods -owide
kubectl scale deploy web-disk --replicas 2
kubectl get pods -owide
```

18
affinity/kind.yaml Normal file
View File

@@ -0,0 +1,18 @@
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker #demo-worker
labels:
zone: us-east
type: ssd
- role: worker #demo-worker2
labels:
zone: us-west
type: ssd
- role: worker #demo-worker3
labels:
zone: us-east
- role: worker #demo-worker4
labels:
zone: us-west

View File

@@ -0,0 +1,46 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: app-disk
labels:
app: app-disk
spec:
selector:
matchLabels:
app: app-disk
replicas: 1
template:
metadata:
labels:
app: app-disk
spec:
containers:
- name: app-disk
image: nginx:latest
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
operator: In
values:
- app-disk
topologyKey: "kubernetes.io/hostname"
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: zone
operator: In
values:
- us-east
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 1
preference:
matchExpressions:
- key: type
operator: In
values:
- ssd

View File

@@ -0,0 +1,30 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: web-disk
labels:
app: web-disk
spec:
selector:
matchLabels:
app: web-disk
replicas: 1
template:
metadata:
labels:
app: web-disk
spec:
containers:
- name: web-disk
image: nginx:latest
affinity:
podAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
operator: In
values:
- app-disk
topologyKey: "kubernetes.io/hostname"

View File

@@ -0,0 +1,22 @@
package main

import (
    "fmt"
    "net/http"
    "strconv"
)

func main() {
    http.HandleFunc("/", useCPU)
    http.ListenAndServe(":80", nil)
}

// useCPU burns a little CPU on every request so we can trigger autoscaling
func useCPU(w http.ResponseWriter, r *http.Request) {
    count := 1
    for i := 1; i <= 1000000; i++ {
        count = i
    }
    fmt.Printf("count: %d", count)
    // strconv.Itoa gives the decimal string; string(count) would yield a single rune
    w.Write([]byte(strconv.Itoa(count)))
}

View File

@@ -0,0 +1,50 @@
apiVersion: v1
kind: Service
metadata:
name: application-cpu
labels:
app: application-cpu
spec:
type: ClusterIP
selector:
app: application-cpu
ports:
- protocol: TCP
name: http
port: 80
targetPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: application-cpu
labels:
app: application-cpu
spec:
selector:
matchLabels:
app: application-cpu
replicas: 1
strategy:
type: RollingUpdate
rollingUpdate:
maxSurge: 1
maxUnavailable: 0
template:
metadata:
labels:
app: application-cpu
spec:
containers:
- name: application-cpu
image: aimvector/application-cpu:v1.0.2
imagePullPolicy: Always
ports:
- containerPort: 80
resources:
requests:
memory: "50Mi"
cpu: "500m"
limits:
memory: "500Mi"
cpu: "2000m"

View File

@@ -0,0 +1,15 @@
FROM golang:1.14-alpine as build
RUN apk add --no-cache git curl
WORKDIR /src
COPY app.go /src
RUN go build app.go
FROM alpine as runtime
COPY --from=build /src/app /app/app
CMD [ "/app/app" ]

View File

@@ -0,0 +1,11 @@
apiVersion: v1
kind: Pod
metadata:
name: traffic-generator
spec:
containers:
- name: alpine
image: alpine
args:
- sleep
- "100000000"

View File

@@ -0,0 +1,202 @@
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
k8s-app: metrics-server
name: metrics-server
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
k8s-app: metrics-server
rbac.authorization.k8s.io/aggregate-to-admin: "true"
rbac.authorization.k8s.io/aggregate-to-edit: "true"
rbac.authorization.k8s.io/aggregate-to-view: "true"
name: system:aggregated-metrics-reader
rules:
- apiGroups:
- metrics.k8s.io
resources:
- pods
- nodes
verbs:
- get
- list
- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
k8s-app: metrics-server
name: system:metrics-server
rules:
- apiGroups:
- ""
resources:
- nodes/metrics
verbs:
- get
- apiGroups:
- ""
resources:
- pods
- nodes
verbs:
- get
- list
- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
labels:
k8s-app: metrics-server
name: metrics-server-auth-reader
namespace: kube-system
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
name: metrics-server
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
k8s-app: metrics-server
name: metrics-server:system:auth-delegator
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:auth-delegator
subjects:
- kind: ServiceAccount
name: metrics-server
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
k8s-app: metrics-server
name: system:metrics-server
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:metrics-server
subjects:
- kind: ServiceAccount
name: metrics-server
namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
labels:
k8s-app: metrics-server
name: metrics-server
namespace: kube-system
spec:
ports:
- name: https
port: 443
protocol: TCP
targetPort: https
selector:
k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
k8s-app: metrics-server
name: metrics-server
namespace: kube-system
spec:
selector:
matchLabels:
k8s-app: metrics-server
strategy:
rollingUpdate:
maxUnavailable: 0
template:
metadata:
labels:
k8s-app: metrics-server
spec:
containers:
- args:
- --cert-dir=/tmp
- --secure-port=10250
- --kubelet-insecure-tls #remove these for production: only used for kind
- --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
- --kubelet-use-node-status-port
- --metric-resolution=15s
image: registry.k8s.io/metrics-server/metrics-server:v0.7.2
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 3
httpGet:
path: /livez
port: https
scheme: HTTPS
periodSeconds: 10
name: metrics-server
ports:
- containerPort: 10250
name: https
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /readyz
port: https
scheme: HTTPS
initialDelaySeconds: 20
periodSeconds: 10
resources:
requests:
cpu: 100m
memory: 200Mi
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 1000
seccompProfile:
type: RuntimeDefault
volumeMounts:
- mountPath: /tmp
name: tmp-dir
nodeSelector:
kubernetes.io/os: linux
priorityClassName: system-cluster-critical
serviceAccountName: metrics-server
volumes:
- emptyDir: {}
name: tmp-dir
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
labels:
k8s-app: metrics-server
name: v1beta1.metrics.k8s.io
spec:
group: metrics.k8s.io
groupPriorityMinimum: 100
insecureSkipTLSVerify: true
service:
name: metrics-server
namespace: kube-system
version: v1beta1
versionPriority: 100

180
autoscaling/readme.md Normal file
View File

@@ -0,0 +1,180 @@
# Kubernetes Autoscaling Guide
## Cluster Autoscaling
Cluster autoscaler allows us to scale cluster nodes when they become full <br/>
I would recommend learning about scaling your cluster nodes before scaling pods. <br/>
## Horizontal Pod Autoscaling
HPA allows us to scale pods when their resource utilisation goes over a threshold <br/>
## Requirements
### A Cluster
* For both autoscaling guides, we'll need a cluster. <br/>
* For `Cluster Autoscaler`, you need a cloud-based cluster that supports the cluster autoscaler <br/>
* For `HPA`, we'll use [kind](http://kind.sigs.k8s.io/)
### Cluster Autoscaling - Creating an AKS Cluster
```
# azure example
NAME=aks-getting-started
RESOURCEGROUP=aks-getting-started
SERVICE_PRINCIPAL=
SERVICE_PRINCIPAL_SECRET=
az aks create -n $NAME \
--resource-group $RESOURCEGROUP \
--location australiaeast \
--kubernetes-version 1.16.10 \
--nodepool-name default \
--node-count 1 \
--node-vm-size Standard_F4s_v2 \
--node-osdisk-size 250 \
--service-principal $SERVICE_PRINCIPAL \
--client-secret $SERVICE_PRINCIPAL_SECRET \
--output none \
--enable-cluster-autoscaler \
--min-count 1 \
--max-count 5
```
### Horizontal Pod Autoscaling - Creating a Kind Cluster
My node has 6 CPU cores for this demo <br/>
```
kind create cluster --name hpa --image kindest/node:v1.18.4
```
### Metric Server
* For `Cluster Autoscaler` - On cloud-based clusters, Metric server may already be installed. <br/>
* For `HPA` - We're using kind
[Metric Server](https://github.com/kubernetes-sigs/metrics-server) provides container resource metrics for use in autoscaling pipelines <br/>
Because I run K8s `1.18` in `kind`, the Metric Server version I need is `0.3.7` <br/>
We will need to deploy Metric Server [0.3.7](https://github.com/kubernetes-sigs/metrics-server/releases/tag/v0.3.7) <br/>
I used `components.yaml` from the release page link above. <br/>
<b>Important Note</b> : For Demo clusters (like `kind`), you will need to disable TLS <br/>
You can disable TLS by adding the following to the metrics-server container args <br/>
<b>For production, make sure you remove the following :</b> <br/>
```
- --kubelet-insecure-tls
- --kubelet-preferred-address-types="InternalIP"
```
Deployment: <br/>
```
cd kubernetes\autoscaling
kubectl -n kube-system apply -f .\components\metric-server\metricserver-0.3.7.yaml
#test
kubectl -n kube-system get pods
#note: wait for metrics to populate!
kubectl top nodes
```
## Example Application
For all autoscaling guides, we'll need a simple app, that generates some CPU load <br/>
* Build the app
* Push it to a registry
* Ensure resource requirements are set
* Deploy it to Kubernetes
* Ensure metrics are visible for the app
```
# build
cd kubernetes\autoscaling\components\application
docker build . -t aimvector/application-cpu:v1.0.0
# push
docker push aimvector/application-cpu:v1.0.0
# resource requirements
resources:
requests:
memory: "50Mi"
cpu: "500m"
limits:
memory: "500Mi"
cpu: "2000m"
# deploy
kubectl apply -f deployment.yaml
# metrics
kubectl top pods
```
## Cluster Autoscaler
For cluster autoscaling, you should be able to scale the pods manually and watch the cluster scale. </br>
Cluster autoscaling stops here. </br>
For Pod Autoscaling (HPA), continue below </br>
## Generate some traffic
Let's deploy a simple traffic generator pod
```
cd kubernetes\autoscaling\components\application
kubectl apply -f .\traffic-generator.yaml
# get a terminal to the traffic-generator
kubectl exec -it traffic-generator sh
# install wrk
apk add --no-cache wrk
# simulate some load
wrk -c 5 -t 5 -d 99999 -H "Connection: Close" http://application-cpu
#you can scale the pods manually and see that roughly 6-7 pods will satisfy resource requests.
kubectl scale deploy/application-cpu --replicas 2
```
## Deploy an autoscaler
```
# scale the deployment back down to 2
kubectl scale deploy/application-cpu --replicas 2
# deploy the autoscaler
kubectl autoscale deploy/application-cpu --cpu-percent=95 --min=1 --max=10
# pods should scale to roughly 6-7 to match criteria of 95% of resource requests
kubectl get pods
kubectl top pods
kubectl get hpa/application-cpu -owide
kubectl describe hpa/application-cpu
```
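The `kubectl autoscale` command above is imperative. If you'd rather keep the autoscaler in source control, the equivalent declarative manifest looks roughly like this (a sketch using the `autoscaling/v1` API):
```
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: application-cpu
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: application-cpu
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 95
```
Applying it with `kubectl apply -f` gives the same result as the `kubectl autoscale` command.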
## Vertical Pod Autoscaling
The vertical pod autoscaler allows us to automatically set request values on our pods <br/>
based on recommendations.
This helps us tune the request values based on actual CPU and Memory usage.<br/>
More [here](./vertical-pod-autoscaling/readme.md)

View File

@@ -0,0 +1,146 @@
# Vertical Pod Autoscaling
## We need a Kubernetes cluster
Let's create a Kubernetes cluster to play with using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/)
```
kind create cluster --name vpa --image kindest/node:v1.30.4
```
<hr/>
## Metric Server
<br/>
* For `Cluster Autoscaler` - On cloud-based clusters, Metric server may already be installed. <br/>
* For `HPA` - We're using kind
[Metric Server](https://github.com/kubernetes-sigs/metrics-server) provides container resource metrics for use in autoscaling pipelines <br/>
Because I run K8s `1.30` in `kind`, the Metric Server version I need is `0.7.2` <br/>
We will need to deploy Metric Server [0.7.2](https://github.com/kubernetes-sigs/metrics-server/releases/tag/v0.7.2) <br/>
I used `components.yaml` from the release page link above. <br/>
<b>Important Note</b> : For Demo clusters (like `kind`), you will need to disable TLS <br/>
You can disable TLS by adding the following to the metrics-server container args <br/>
<b>For production, make sure you remove the following :</b> <br/>
```
- --kubelet-insecure-tls
```
Deployment: <br/>
```
cd kubernetes\autoscaling
kubectl -n kube-system apply -f .\components\metric-server\components.yaml
#test
kubectl -n kube-system get pods
#note: wait for metrics to populate!
kubectl top nodes
```
## VPA
VPA docs [here](https://github.com/kubernetes/autoscaler/tree/master/vertical-pod-autoscaler#install-command) <br/>
Let's install the VPA from a container that can access our cluster
```
cd kubernetes/autoscaling/vertical-pod-autoscaling
docker run -it --rm -v ${HOME}:/root/ -v ${PWD}:/work -w /work --net host debian:bookworm bash
# install git
apt-get update && apt-get install -y git curl nano
# install kubectl
curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl
chmod +x ./kubectl
mv ./kubectl /usr/local/bin/kubectl
cd /tmp
git clone https://github.com/kubernetes/autoscaler.git
cd autoscaler/vertical-pod-autoscaler/
# you may need to generate VPA certificates
bash ./pkg/admission-controller/gencerts.sh
# deploy the VPA
./hack/vpa-up.sh
# after few seconds, we can see the VPA components in:
kubectl -n kube-system get pods
```
## Build and deploy example app
```
# build
cd kubernetes\autoscaling\components\application
docker build . -t aimvector/application-cpu:v1.0.0
# push
docker push aimvector/application-cpu:v1.0.0
# deploy
kubectl apply -f deployment.yaml
# metrics
kubectl top pods
```
## Generate some traffic
Let's deploy a simple traffic generator pod
```
cd kubernetes\autoscaling\components\application
kubectl apply -f .\traffic-generator.yaml
# get a terminal to the traffic-generator
kubectl exec -it traffic-generator -- sh
# install wrk
apk add --no-cache wrk
# simulate some load
wrk -c 5 -t 5 -d 99999 -H "Connection: Close" http://application-cpu
```
# Deploy an example VPA
```
kubectl apply -f .\vertical-pod-autoscaling\vpa.yaml
kubectl describe vpa application-cpu
```
# Deploy Goldilocks
```
cd /tmp
git clone https://github.com/FairwindsOps/goldilocks.git
cd goldilocks/hack/manifests/
kubectl create namespace goldilocks
kubectl -n goldilocks apply -f ./controller
kubectl -n goldilocks apply -f ./dashboard
kubectl label ns default goldilocks.fairwinds.com/enabled=true
kubectl label ns default goldilocks.fairwinds.com/vpa-update-mode="off"
kubectl -n goldilocks port-forward svc/goldilocks-dashboard 80
```

View File

@@ -0,0 +1,11 @@
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
name: application-cpu
spec:
targetRef:
apiVersion: "apps/v1"
kind: Deployment
name: application-cpu
updatePolicy:
updateMode: "Off" # Auto for automatic updates, Off for manual updates

230
daemonsets/README.md Normal file
View File

@@ -0,0 +1,230 @@
# Kubernetes Daemonsets
## We need a Kubernetes cluster
Let's create a Kubernetes cluster to play with using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/) </br>
Because a Daemonset is all about running pods on every node, let's create a cluster with 3 worker nodes:
```
cd kubernetes/daemonsets
kind create cluster --name daemonsets --image kindest/node:v1.20.2 --config kind.yaml
```
Test our cluster:
```
kubectl get nodes
NAME STATUS ROLES AGE VERSION
daemonsets-control-plane Ready control-plane,master 65s v1.20.2
daemonsets-worker Ready <none> 31s v1.20.2
daemonsets-worker2 Ready <none> 31s v1.20.2
daemonsets-worker3 NotReady <none> 31s v1.20.2
```
# Introduction
Kubernetes provides [documentation](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/) on what a Daemonset is, with examples.
## Basic Daemonset
Let's deploy a daemonset that runs a pod on each node, where each pod prints the name of the node it runs on
```
kubectl apply -f daemonset.yaml
kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
example-daemonset-8lcr5 1/1 Running 0 3m21s 10.244.2.4 daemonsets-worker <none> <none>
example-daemonset-9jhgx 1/1 Running 0 81s 10.244.3.4 daemonsets-worker2 <none> <none>
example-daemonset-lvvsd 1/1 Running 0 2m41s 10.244.1.4 daemonsets-worker3 <none> <none>
example-daemonset-xxcv9 1/1 Running 0 119s 10.244.0.7 daemonsets-control-plane <none> <none>
```
We can see the logs of any pod
```
kubectl logs <example-daemonset-xxxx>
```
Cleanup:
```
kubectl delete ds example-daemonset
```
## Basic Daemonset: Exposing HTTP
Let's deploy a daemonset that runs a pod on each node and exposes an HTTP endpoint on each node. </br>
In this demo we'll use a simple NGINX on port 80
```
kubectl apply -f daemonset-communication.yaml
kubectl get pods
```
## Communicating with Daemonset Pods
[Communicating with Daemon Pods](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/#communicating-with-daemon-pods)
Let's deploy a pod that can talk to our daemonset
```
kubectl apply -f pod.yaml
kubectl exec -it pod -- bash
```
### Service Type: ClusterIP or LoadBalancer
Let's deploy a service of type ClusterIP
```
kubectl apply -f ./services/clusterip-service.yaml
while true; do curl http://daemonset-svc-clusterip; sleep 1s; done
Hello from daemonsets-worker2
Hello from daemonsets-control-plane
Hello from daemonsets-worker2
Hello from daemonsets-worker3
Hello from daemonsets-worker
Hello from daemonsets-worker
```
### Node IP and Node Port
We can add the `hostPort` field to the pod's port section to expose a port on the node. </br>
Let's expose the host port in the pod spec:
```
ports:
- containerPort: 80
hostPort: 80
name: "http"
```
This means we can contact the daemonset pod using the Node IP and port:
```
# get the node ips
kubectl get nodes -owide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP KERNEL-VERSION CONTAINER-RUNTIME
daemonsets-control-plane Ready control-plane,master 112m v1.20.2 172.18.0.4 <none>
daemonsets-worker Ready <none> 111m v1.20.2 172.18.0.3 <none>
daemonsets-worker2 Ready <none> 111m v1.20.2 172.18.0.2 <none>
daemonsets-worker3 Ready <none> 111m v1.20.2 172.18.0.6 <none>
#example:
bash-5.1# curl http://172.18.0.4:80
Hello from daemonsets-control-plane
bash-5.1# curl http://172.18.0.2:80
Hello from daemonsets-worker2
```
### Service: Headless service
Let's deploy a headless service where `clusterIP: None`
```
kubectl apply -f ./services/headless-service.yaml
```
There are a few ways to discover our pods:
1) Discover the [DNS](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#services) records via the "Headless" services
```
apk add --no-cache bind-tools
dig daemonset-svc-headless.default.svc.cluster.local
```
Notice it resolves to multiple DNS records for each pod:
```
;; ANSWER SECTION:
daemonset-svc-headless.default.svc.cluster.local. 30 IN A 10.244.0.5
daemonset-svc-headless.default.svc.cluster.local. 30 IN A 10.244.2.2
daemonset-svc-headless.default.svc.cluster.local. 30 IN A 10.244.3.2
daemonset-svc-headless.default.svc.cluster.local. 30 IN A 10.244.1.2
```
2) Discover pods endpoints by retrieving the endpoints for the headless service
```
kubectl describe endpoints daemonset-svc-headless
```
Example:
```
Addresses: 10.244.0.5,10.244.1.2,10.244.2.2,10.244.3.2
```
Get A records for each pod by using the following format (dots in the pod IP are replaced with dashes): </br>
`<pod-ip-address>.<my-namespace>.pod.<cluster-domain.example>.` </br>
```
#examples:
10-244-0-5.default.pod.cluster.local
10-244-1-2.default.pod.cluster.local
10-244-2-2.default.pod.cluster.local
10-244-3-2.default.pod.cluster.local
```
Communicate with the pods over DNS:
```
curl http://10-244-0-5.default.pod.cluster.local
Hello from daemonsets-control-plane
```
# Real world Examples:
## Monitoring Nodes: Node-Exporter Daemonset
<br/>
We clone the official kube-prometheus repo to get monitoring manifests for Kubernetes.
```
git clone https://github.com/prometheus-operator/kube-prometheus.git
```
Check the compatibility matrix [here](https://github.com/prometheus-operator/kube-prometheus/tree/v0.8.0#kubernetes-compatibility-matrix)
For this demo, we will use the compatible version tag 0.8
```
git checkout v0.8.0
```
Deploy Prometheus Operator and CRDs
```
cd .\manifests\
kubectl create -f .\setup\
```
Deploy remaining resources including node exporter daemonset
```
kubectl create -f .
# wait for pods to be up
kubectl get pods -n monitoring
#access prometheus in the browser
kubectl -n monitoring port-forward svc/prometheus-k8s 9090
```
See the Daemonset communications on the Prometheus [targets](http://localhost:9090/targets) page
Checkout my [monitoring guide for kubernetes](../../monitoring/prometheus/kubernetes/README.md) for more in depth info
## Monitoring: Logging via Fluentd
Take a look at my monitoring guide for [Fluentd](../../monitoring/logging/fluentd/kubernetes/README.md)

View File

@@ -0,0 +1,55 @@
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: daemonset-communicate
namespace: default
labels:
app: daemonset-communicate
spec:
selector:
matchLabels:
name: daemonset-communicate
template:
metadata:
labels:
name: daemonset-communicate
spec:
tolerations:
# this toleration is to have the daemonset runnable on master nodes
# remove it if your masters can't run pods
- key: node-role.kubernetes.io/master
effect: NoSchedule
initContainers:
- name: create-file
image: alpine
env:
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
command:
- "bin/sh"
- "-c"
- "echo 'Hello from '$NODE_NAME > /usr/share/nginx/html/index.html"
volumeMounts:
- name: nginx-page
mountPath: /usr/share/nginx/html/
containers:
- name: daemonset-communicate
image: nginx:1.20.0-alpine
volumeMounts:
- name: nginx-page
mountPath: /usr/share/nginx/html/
resources:
limits:
memory: 500Mi
requests:
cpu: 10m
memory: 100Mi
ports:
- containerPort: 80
name: "http"
terminationGracePeriodSeconds: 30
volumes:
- name: nginx-page
emptyDir: {}

40
daemonsets/daemonset.yaml Normal file
View File

@@ -0,0 +1,40 @@
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: example-daemonset
namespace: default
labels:
app: example-daemonset
spec:
selector:
matchLabels:
name: example-daemonset
template:
metadata:
labels:
name: example-daemonset
spec:
tolerations:
# this toleration is to have the daemonset runnable on master nodes
# remove it if your masters can't run pods
- key: node-role.kubernetes.io/master
effect: NoSchedule
containers:
- name: example-daemonset
image: alpine:latest
env:
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
command:
- "bin/sh"
- "-c"
- "echo 'Hello! I am running on '$NODE_NAME; while true; do sleep 300s ; done;"
resources:
limits:
memory: 200Mi
requests:
cpu: 100m
memory: 200Mi
terminationGracePeriodSeconds: 30

7
daemonsets/kind.yaml Normal file
View File

@@ -0,0 +1,7 @@
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
- role: worker

13
daemonsets/pod.yaml Normal file
View File

@@ -0,0 +1,13 @@
apiVersion: v1
kind: Pod
metadata:
name: pod
spec:
containers:
- name: pod
image: alpine
command:
- "/bin/sh"
- "-c"
- "apk add --no-cache curl bash && sleep 60m"

View File

@@ -0,0 +1,13 @@
apiVersion: v1
kind: Service
metadata:
name: daemonset-svc-clusterip
spec:
type: ClusterIP
selector:
name: daemonset-communicate
ports:
- protocol: TCP
name: "http"
port: 80
targetPort: 80

View File

@@ -0,0 +1,13 @@
apiVersion: v1
kind: Service
metadata:
name: daemonset-svc-headless
spec:
clusterIP: None
selector:
name: daemonset-communicate
ports:
- protocol: TCP
name: "http"
port: 80
targetPort: 80

View File

@@ -0,0 +1,67 @@
# kind create cluster --name deployments --image kindest/node:v1.31.1
# kubectl apply -f deployment.yaml
# deployment.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: example-deploy
labels:
app: example-app
test: test
annotations:
fluxcd.io/tag.example-app: semver:~1.0
fluxcd.io/automated: 'true'
spec:
selector:
matchLabels:
app: example-app
replicas: 2
strategy:
type: RollingUpdate
rollingUpdate:
maxSurge: 1
maxUnavailable: 0
template:
metadata:
labels:
app: example-app
spec:
containers:
- name: example-app
image: aimvector/python:1.0.4
imagePullPolicy: Always
ports:
- containerPort: 5000
# livenessProbe:
# httpGet:
# path: /status
# port: 5000
# initialDelaySeconds: 3
# periodSeconds: 3
resources:
requests:
memory: "64Mi"
cpu: "50m"
limits:
memory: "256Mi"
cpu: "500m"
tolerations:
- key: "cattle.io/os"
value: "linux"
effect: "NoSchedule"
#NOTE: comment out `volumeMounts` section for configmap and\or secret guide
# volumeMounts:
# - name: secret-volume
# mountPath: /secrets/
# - name: config-volume
# mountPath: /configs/
#NOTE: comment out `volumes` section for configmap and\or secret guide
# volumes:
# - name: secret-volume
# secret:
# secretName: mysecret
# - name: config-volume
# configMap:
# name: example-config #name of our configmap object

28
deployments/readme.md Normal file
View File

@@ -0,0 +1,28 @@
# Introduction to Kubernetes: Deployments
Build an example app:
```
# Important!
# make sure you are at root of the repository
# in your terminal
# you can choose which app you want to build!
# aimvector/golang:1.0.0
docker-compose build golang
# aimvector/csharp:1.0.0
docker-compose build csharp
# aimvector/nodejs:1.0.0
docker-compose build nodejs
# aimvector/python:1.0.0
docker-compose build python
```
Take a look at example [deployment yaml](./deployment.yaml)
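To try it out on a local cluster, the comments at the top of [deployment.yaml](./deployment.yaml) show the steps (run from this folder):
```
kind create cluster --name deployments --image kindest/node:v1.31.1
kubectl apply -f deployment.yaml
kubectl get pods
```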

View File

@@ -0,0 +1,174 @@
kind: ConfigMap
apiVersion: v1
metadata:
name: traefik-config
namespace: kube-system
data:
config.toml: |-
[metrics]
[metrics.prometheus]
entryPoint = "traefik"
buckets = [0.1,0.3,1.2,5.0]
[entryPoints]
[entryPoints.http]
address = ":80"
[entryPoints.http.redirect]
entryPoint = "https"
[entryPoints.https]
address = ":443"
[entryPoints.https.tls]
[acme]
email = "your-email-here@my-awesome-app.org"
storage = "acme.json"
entryPoint = "https"
onHostRule = true
[[acme.domains]]
main = "local1.com"
[acme.httpChallenge]
entryPoint = "http"
################################################################
# Global configuration
################################################################
# Enable debug mode
#
# Optional
# Default: false
#
# debug = true
# Log level
#
# Optional
# Default: "ERROR"
#
# logLevel = "DEBUG"
################################################################
# Entrypoints configuration
################################################################
# Entrypoints definition
#
# Optional
# Default:
#[entrypoints]
# [entrypoints.web]
# address = ":80"
################################################################
# Traefik logs configuration
################################################################
# Traefik logs
# Enabled by default and log to stdout
#
# Optional
#
# [traefikLog]
# Sets the filepath for the traefik log. If not specified, stdout will be used.
# Intermediate directories are created if necessary.
#
# Optional
# Default: os.Stdout
#
# filePath = "log/traefik.log"
# Format is either "json" or "common".
#
# Optional
# Default: "common"
#
# format = "common"
################################################################
# Access logs configuration
################################################################
# Enable access logs
# By default it will write to stdout and produce logs in the textual
# Common Log Format (CLF), extended with additional fields.
#
# Optional
#
# [accessLog]
# Sets the file path for the access log. If not specified, stdout will be used.
# Intermediate directories are created if necessary.
#
# Optional
# Default: os.Stdout
#
# filePath = "/path/to/log/log.txt"
# Format is either "json" or "common".
#
# Optional
# Default: "common"
#
# format = "common"
################################################################
# API and dashboard configuration
################################################################
# Enable API and dashboard
#[api]
# Name of the related entry point
#
# Optional
# Default: "traefik"
#
# entryPoint = "traefik"
# Enabled Dashboard
#
# Optional
# Default: true
#
# dashboard = false
################################################################
# Ping configuration
################################################################
# Enable ping
#[ping]
# Name of the related entry point
#
# Optional
# Default: "traefik"
#
# entryPoint = "traefik"
################################################################
# Docker configuration backend
################################################################
# Enable Docker configuration backend
#[docker]
# Docker server endpoint. Can be a tcp or a unix socket endpoint.
#
# Required
# Default: "unix:///var/run/docker.sock"
#
# endpoint = "tcp://10.10.10.10:2375"
# Default domain used.
# Can be overridden by setting the "traefik.domain" label on a container.
#
# Optional
# Default: ""
#
# domain = "docker.localhost"
# Expose containers by default in traefik
#
# Optional
# Default: true
#
# exposedByDefault = true

View File

@@ -0,0 +1,69 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: traefik-ingress-controller
namespace: kube-system
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
name: traefik-ingress-controller
namespace: kube-system
labels:
k8s-app: traefik-ingress-lb
spec:
replicas: 1
selector:
matchLabels:
k8s-app: traefik-ingress-lb
template:
metadata:
labels:
k8s-app: traefik-ingress-lb
name: traefik-ingress-lb
spec:
serviceAccountName: traefik-ingress-controller
terminationGracePeriodSeconds: 60
containers:
- image: traefik:v1.7.18-alpine
name: traefik-ingress-lb
ports:
- name: http
containerPort: 80
- name: https
containerPort: 443
- name: admin
containerPort: 8080
args:
- --api
- --kubernetes
- --logLevel=INFO
- --configFile=/etc/traefik/config.toml
volumeMounts:
- name: traefik-config
mountPath: /etc/traefik/
volumes:
- name: traefik-config
configMap:
name: traefik-config
---
kind: Service
apiVersion: v1
metadata:
name: traefik-ingress-service
namespace: kube-system
spec:
selector:
k8s-app: traefik-ingress-lb
type: LoadBalancer
ports:
- protocol: TCP
port: 80
name: web
- protocol: TCP
port: 443
name: https
- protocol: TCP
port: 8080
name: admin

View File

@@ -0,0 +1,43 @@
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: traefik-ingress-controller
rules:
- apiGroups:
- ""
resources:
- services
- endpoints
- secrets
verbs:
- get
- list
- watch
- apiGroups:
- extensions
resources:
- ingresses
verbs:
- get
- list
- watch
- apiGroups:
- extensions
resources:
- ingresses/status
verbs:
- update
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: traefik-ingress-controller
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: traefik-ingress-controller
subjects:
- kind: ServiceAccount
name: traefik-ingress-controller
namespace: kube-system

View File

@@ -0,0 +1,14 @@
---
apiVersion: v1
kind: Service
metadata:
name: traefik-web-ui
namespace: kube-system
spec:
selector:
k8s-app: traefik-ingress-lb
ports:
- name: web
port: 80
targetPort: 8080
---

18
ingress/ingress.yaml Normal file
View File

@@ -0,0 +1,18 @@
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: example-service
annotations:
kubernetes.io/ingress.class: "traefik"
traefik.ingress.kubernetes.io/frontend-entry-points: http,https
#traefik.ingress.kubernetes.io/redirect-entry-point: https
#traefik.ingress.kubernetes.io/redirect-permanent: "true"
spec:
rules:
- host: marko.test
http:
paths:
- path: /
backend:
serviceName: example-service
servicePort: 80

62
kubectl.md Normal file
View File

@@ -0,0 +1,62 @@
Kubectl Basics:
## Configs
```
kubectl config
#${HOME}/.kube/config
#kubectl config --kubeconfig="C:\someotherfolder\config"
#$KUBECONFIG
```
### contexts
```
#get the current context
kubectl config current-context
#get and set contexts
kubectl config get-contexts
kubectl config use-context
```
## GET commands
```
kubectl get <resource>
#examples
kubectl get pods
kubectl get deployments
kubectl get services
kubectl get configmaps
kubectl get secrets
kubectl get ingress
```
## Namespaces
```
kubectl get namespaces
kubectl create namespace test
kubectl get pods -n test
```
## Describe command
Used to troubleshoot states and statuses of objects
```
kubectl describe <resource> <name>
```
## Version
```
kubectl version
```

187
kubectl/README.md Normal file
View File

@@ -0,0 +1,187 @@
# Introduction to KUBECTL
To start off this tutorial, we will be using [kind](https://kind.sigs.k8s.io/) to create our test cluster. </br>
You can use `minikube` or any Kubernetes cluster. </br>
Kind is an amazing tool for running test clusters locally: each cluster runs in a container, which makes it lightweight and easy to spin up and throw away clusters for testing purposes. </br>
## Download KUBECTL
We can download `kubectl` from the [Official Docs](https://kubernetes.io/docs/tasks/tools/) </br>
## Create a kubernetes cluster
In this guide we will run two clusters side by side so we can demonstrate cluster access. </br>
Create two clusters:
```
kind create cluster --name dev --image kindest/node:v1.23.5
kind create cluster --name prod --image kindest/node:v1.23.5
```
See cluster up and running:
```
kubectl get nodes
NAME STATUS ROLES AGE VERSION
prod-control-plane Ready control-plane,master 2m12s v1.23.5
```
## Understanding the KUBECONFIG
The default location of the `kubeconfig` file is `<users-directory>/.kube/config`
```
kind: Config
apiVersion: v1
clusters:
- list of clusters (addresses \ endpoints)
users:
- list of users (thing that identifies us when accessing a cluster [certificate])
contexts:
- list of contexts ( which user and cluster to use when running commands)
```
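For reference, a concrete (heavily redacted) kubeconfig with a single cluster, user and context looks something like this — the names and server address here are illustrative:
```
kind: Config
apiVersion: v1
clusters:
- name: kind-dev
  cluster:
    server: https://127.0.0.1:6443
    certificate-authority-data: <base64-encoded CA certificate>
users:
- name: kind-dev
  user:
    client-certificate-data: <base64-encoded client certificate>
    client-key-data: <base64-encoded client key>
contexts:
- name: kind-dev
  context:
    cluster: kind-dev
    user: kind-dev
current-context: kind-dev
```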
Commands to interact with `kubeconfig` are `kubectl config`. </br>
Key commands are telling `kubectl` which context to use
```
kubectl config current-context
kubectl config get-contexts
kubectl config use-context <name>
```
You can also tell your `kubectl` to use different config files. </br>
This is useful to keep your production config separate from your development ones </br>
Set the `$KUBECONFIG` environment variable to a path:
```
#linux
export KUBECONFIG=<path>
#windows
$ENV:KUBECONFIG="C:\Users\aimve\.kube\config"
```
We can export separate configs using `kind` </br>
This is possible with cloud based clusters as well:
```
kind --name dev export kubeconfig --kubeconfig C:\Users\aimve\.kube\dev-config
kind --name prod export kubeconfig --kubeconfig C:\Users\aimve\.kube\prod-config
#switch to prod
$ENV:KUBECONFIG="C:\Users\aimve\.kube\prod-config"
kubectl get nodes
```
## Working with Kubernetes resources
Now that we have cluster access, we can read resources from the cluster
with the `kubectl get` command.
## Namespaces
Most kubernetes resources are namespace scoped:
```
kubectl get namespaces
```
By default, `kubectl` commands will run against the `default` namespace
## List resources in a namespace
```
kubectl get <resource>
kubectl get pods
kubectl get deployments
kubectl get services
kubectl get configmaps
kubectl get secrets
kubectl get ingress
```
## Create resources in a namespace
We can create a namespace with the `kubectl create` command:
```
kubectl create ns example-apps
```
Let's create a couple of resources:
```
kubectl -n example-apps create deployment webserver --image=nginx --port=80
kubectl -n example-apps get deploy
kubectl -n example-apps get pods
kubectl -n example-apps create service clusterip webserver --tcp 80:80
kubectl -n example-apps get service
kubectl -n example-apps port-forward svc/webserver 80
# we can access http://localhost/
kubectl -n example-apps create configmap webserver-config --from-file config.json=./kubernetes/kubectl/config.json
kubectl -n example-apps get cm
kubectl -n example-apps create secret generic webserver-secret --from-file secret.json=./kubernetes/kubectl/secret.json
kubectl -n example-apps get secret
```
## Working with YAML
As you can see, we can create resources with `kubectl`, but this is only for basic testing purposes.
Kubernetes is a declarative platform, meaning we should declare what we want it to create instead
of running imperative line-by-line commands. </br>
We can also get the YAML of pre-existing objects in our cluster with the `-o yaml` flag on the `get` command </br>
Let's output all our YAML to a `yaml` folder:
```
kubectl -n example-apps get cm webserver-config -o yaml > .\kubernetes\kubectl\yaml\config.yaml
kubectl -n example-apps get secret webserver-secret -o yaml > .\kubernetes\kubectl\yaml\secret.yaml
kubectl -n example-apps get deploy webserver -o yaml > .\kubernetes\kubectl\yaml\deployment.yaml
kubectl -n example-apps get svc webserver -o yaml > .\kubernetes\kubectl\yaml\service.yaml
```
## Create resources from YAML files
The most common and recommended way to create resources in Kubernetes is with the `kubectl apply` command. </br>
This command takes in declarative `YAML` files.
To show you how powerful it is, instead of creating things line-by-line, we can deploy all our infrastructure
with a single command. </br>
Let's deploy a Wordpress CMS site, with a back end MySQL database:
```
kubectl create ns wordpress-site
kubectl -n wordpress-site apply -f ./kubernetes/tutorials/basics/yaml/
```
We can checkout our site with the `port-forward` command:
```
kubectl -n wordpress-site get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
mysql ClusterIP 10.96.146.75 <none> 3306/TCP 17s
wordpress ClusterIP 10.96.157.6 <none> 80/TCP 17s
kubectl -n wordpress-site port-forward svc/wordpress 80
```
## Clean up
```
kind delete cluster --name dev
kind delete cluster --name prod
```

3
kubectl/config.json Normal file
View File

@@ -0,0 +1,3 @@
{
"config": "some-value"
}

3
kubectl/secret.json Normal file
View File

@@ -0,0 +1,3 @@
{
"secret": "some-secret-value"
}

BIN
kubectl/yaml/config.yaml Normal file

Binary file not shown.

Binary file not shown.

BIN
kubectl/yaml/secret.yaml Normal file

Binary file not shown.

BIN
kubectl/yaml/service.yaml Normal file

Binary file not shown.

View File

@@ -0,0 +1,10 @@
# apiVersion: v1
# kind: ConfigMap
# metadata:
# name: example-config
# namespace: example
# data:
# config.json: |
# {
# "environment" : "dev"
# }

View File

@@ -0,0 +1,36 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: example-deploy
namespace: example
labels:
app: example-app
annotations:
spec:
selector:
matchLabels:
app: example-app
replicas: 2
strategy:
type: RollingUpdate
rollingUpdate:
maxSurge: 1
maxUnavailable: 0
template:
metadata:
labels:
app: example-app
spec:
containers:
- name: example-app
image: aimvector/python:1.0.0
imagePullPolicy: Always
ports:
- containerPort: 5000
volumeMounts:
- name: config-volume
mountPath: /configs/
volumes:
- name: config-volume
configMap:
name: example-config

View File

@@ -0,0 +1,5 @@
resources:
- namespace.yaml
- deployment.yaml
- service.yaml
- configmap.yaml

View File

@@ -0,0 +1,4 @@
apiVersion: v1
kind: Namespace
metadata:
name: example

View File

@@ -0,0 +1,16 @@
apiVersion: v1
kind: Service
metadata:
name: example-service
namespace: example
labels:
app: example-app
spec:
type: LoadBalancer
selector:
app: example-app
ports:
- protocol: TCP
name: http
port: 80
targetPort: 5000

View File

@@ -0,0 +1,4 @@
bases:
- ../../application
patches:
- replica_count.yaml

View File

@@ -0,0 +1,7 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: example-deploy
namespace: example
spec:
replicas: 4

View File

@@ -0,0 +1,4 @@
{
"environment" : "prod",
"hello": "world"
}

View File

@@ -0,0 +1,13 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: example-deploy
namespace: example
spec:
template:
spec:
containers:
- name: example-app
env:
- name: ENVIRONMENT
value: Production

View File

@@ -0,0 +1,16 @@
bases:
- ../../application
patches:
- replica_count.yaml
- resource_limits.yaml
configMapGenerator:
- name: example-config
namespace: example
#behavior: replace
files:
- configs/config.json
patchesStrategicMerge:
- env.yaml
images:
- name: aimvector/python
newTag: 1.0.1

View File

@@ -0,0 +1,7 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: example-deploy
namespace: example
spec:
replicas: 6

View File

@@ -0,0 +1,17 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: example-deploy
namespace: example
spec:
template:
spec:
containers:
- name: example-app
resources:
requests:
memory: "64Mi"
cpu: "50m"
limits:
memory: "256Mi"
cpu: "500m"

40
kustomize/readme.md Normal file
View File

@@ -0,0 +1,40 @@
# The Basics
```
kubectl apply -f kubernetes/kustomize/application/namespace.yaml
kubectl apply -f kubernetes/kustomize/application/configmap.yaml
kubectl apply -f kubernetes/kustomize/application/deployment.yaml
kubectl apply -f kubernetes/kustomize/application/service.yaml
# OR
kubectl apply -f kubernetes/kustomize/application/
kubectl delete ns example
```
# Kustomize
## Build
```
kubectl kustomize .\kubernetes\kustomize\ | kubectl apply -f -
# OR
kubectl apply -k .\kubernetes\kustomize\
kubectl delete ns example
```
## Overlays
```
kubectl kustomize .\kubernetes\kustomize\environments\production | kubectl apply -f -
# OR
kubectl apply -k .\kubernetes\kustomize\environments\production
kubectl delete ns example
```
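To preview what an overlay changes before applying it, `kubectl diff` also accepts the `-k` flag (a handy sanity check against a running cluster):
```
kubectl diff -k .\kubernetes\kustomize\environments\production
```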

View File

@@ -0,0 +1,16 @@
apiVersion: v1
kind: PersistentVolume
metadata:
name: example-volume
labels:
type: local
spec:
#we use local node storage here!
#kubectl get storageclass
storageClassName: hostpath
capacity:
storage: 1Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/mnt/data"

View File

@@ -0,0 +1,11 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: example-claim
spec:
storageClassName: hostpath
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 50Mi

View File

@@ -0,0 +1,50 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: postgres-config
labels:
app: postgres
data:
POSTGRES_DB: postgresdb
POSTGRES_USER: admin
POSTGRES_PASSWORD: admin123
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: postgres
spec:
serviceName: postgres
selector:
matchLabels:
app: postgres
replicas: 1
template:
metadata:
labels:
app: postgres
spec:
containers:
- name: postgres
image: postgres:10.4
imagePullPolicy: "IfNotPresent"
ports:
- containerPort: 5432
envFrom:
- configMapRef:
name: postgres-config
---
apiVersion: v1
kind: Service
metadata:
name: postgres
labels:
app: postgres
spec:
selector:
app: postgres
ports:
- protocol: TCP
name: http
port: 5432
targetPort: 5432

View File

@@ -0,0 +1,57 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: postgres-config
labels:
app: postgres
data:
POSTGRES_DB: postgresdb
POSTGRES_USER: admin
POSTGRES_PASSWORD: admin123
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: postgres
spec:
serviceName: postgres
selector:
matchLabels:
app: postgres
replicas: 1
template:
metadata:
labels:
app: postgres
spec:
containers:
- name: postgres
image: postgres:10.4
imagePullPolicy: "IfNotPresent"
ports:
- containerPort: 5432
envFrom:
- configMapRef:
name: postgres-config
volumeMounts:
- name: data
mountPath: /var/lib/postgresql/data
volumes:
- name: data
persistentVolumeClaim:
claimName: example-claim
---
apiVersion: v1
kind: Service
metadata:
name: postgres
labels:
app: postgres
spec:
selector:
app: postgres
ports:
- protocol: TCP
name: http
port: 5432
targetPort: 5432

View File

@@ -0,0 +1,77 @@
# Persistent Volumes Demo
## Container Storage
By default, containers store their data on the container file system like any other process.
The container file system is temporary and does not persist across container restarts:
when a container is recreated, so is its file system.
```
# run postgres
docker run -d --rm -e POSTGRES_DB=postgresdb -e POSTGRES_USER=admin -e POSTGRES_PASSWORD=admin123 postgres:10.4
# enter the container
docker exec -it <container-id> bash
# login to postgres
psql --username=admin postgresdb
#create a table
CREATE TABLE COMPANY(
ID INT PRIMARY KEY NOT NULL,
NAME TEXT NOT NULL,
AGE INT NOT NULL,
ADDRESS CHAR(50),
SALARY REAL
);
#show table
\dt
# quit
\q
```
If you restart the above container and exec back in, you will notice `\dt` returns no tables,
since the data is lost.
The same can be demonstrated using Kubernetes:
```
cd .\kubernetes\persistentvolume\
kubectl create ns postgres
kubectl apply -n postgres -f ./postgres-no-pv.yaml
kubectl -n postgres get pods
kubectl -n postgres exec -it postgres-0 bash
# run the same above mentioned commands to create and list the database table
kubectl delete po -n postgres postgres-0
# exec back in and confirm table does not exist.
```
# Persist data Docker
```
docker volume create postgres
docker run -d --rm -v postgres:/var/lib/postgresql/data -e POSTGRES_DB=postgresdb -e POSTGRES_USER=admin -e POSTGRES_PASSWORD=admin123 postgres:10.4
# run the same tests as above and notice the table now survives container restarts
```
# Persist data Kubernetes
```
kubectl apply -f persistentvolume.yaml
kubectl apply -n postgres -f persistentvolumeclaim.yaml
kubectl apply -n postgres -f postgres-with-pv.yaml
kubectl -n postgres get pods
```
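To verify persistence, repeat the earlier test: create the table, delete the pod, and confirm the table survives once the StatefulSet recreates the pod:
```
kubectl -n postgres exec -it postgres-0 -- bash
# create the table with psql as before, then exit the pod
kubectl delete po -n postgres postgres-0
# exec back in and confirm the table still exists with \dt
```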

64
pods/pod-sidecar.yaml Normal file
View File

@@ -0,0 +1,64 @@
# kind create cluster --name pods --image kindest/node:v1.31.1
apiVersion: v1
kind: Pod
metadata:
name: example-pod
namespace: default
labels:
app: example-app
spec:
shareProcessNamespace: true
nodeSelector:
kubernetes.io/os: linux
containers:
- name: config-watcher
image: alpine:latest
command: ["/bin/sh", "-c"]
args:
- |
apk add --no-cache inotify-tools
while true; do
inotifywait -e modify /etc/nginx/nginx.conf
pkill -HUP nginx
done
volumeMounts:
- name: config-volume
mountPath: /etc/nginx/
- name: nginx
image: nginx:latest
imagePullPolicy: Always
ports:
- containerPort: 80
volumeMounts:
- name: config-volume
mountPath: /etc/nginx/
volumes:
- name: config-volume
configMap:
name: nginx-config
---
apiVersion: v1
kind: ConfigMap
metadata:
name: nginx-config
data:
nginx.conf: |
events {}
http {
server {
listen 80;
server_name localhost;
location / {
root /usr/share/nginx/html;
index index.html index.htm;
}
location = /health {
access_log off;
default_type text/plain;
add_header Content-Type text/plain;
return 200 "ok";
}
}
}

38
pods/pod.yaml Normal file
View File

@@ -0,0 +1,38 @@
# kind create cluster --name pods --image kindest/node:v1.31.1
# kubectl apply -f pod.yaml
apiVersion: v1
kind: Pod
metadata:
name: example-pod
namespace: default
labels:
app: example-app
test: test
spec:
nodeSelector:
kubernetes.io/os: linux
containers:
- name: example-app
image: aimvector/python:1.0.4
imagePullPolicy: Always
ports:
- containerPort: 5000
livenessProbe:
httpGet:
path: /
port: 5000
initialDelaySeconds: 3
periodSeconds: 3
readinessProbe:
httpGet:
path: /
port: 5000
initialDelaySeconds: 3
periodSeconds: 3
resources:
requests:
memory: "64Mi"
cpu: "50m"
limits:
memory: "256Mi"
cpu: "500m"

120
probes/README.md Normal file
View File

@@ -0,0 +1,120 @@
# Introduction to Kubernetes Probes
## Create a kubernetes cluster
In this guide we'll need a Kubernetes cluster for testing. Let's create one using [kind](https://kind.sigs.k8s.io/) </br>
```
cd kubernetes/probes
kind create cluster --name demo --image kindest/node:v1.28.0
```
Test the cluster:
```
kubectl get nodes
NAME STATUS ROLES AGE VERSION
demo-control-plane Ready control-plane 59s v1.28.0
```
## Applications
The client app acts as a client that sends web requests:
```
kubectl apply -f client.yaml
```
The server app is the app that will receive web requests:
```
kubectl apply -f server.yaml
```
Test making web requests constantly:
```
while true; do curl http://server; sleep 1s; done
```
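Note that the loop above is intended to run inside the client pod; the `alpine` image does not ship with `curl`, so it may need to be installed first:
```
kubectl exec -it deploy/client -- sh
apk add --no-cache curl
while true; do curl http://server; sleep 1s; done
```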
Bump the server `version` label up and apply to force a new deployment </br>
Notice the client throws an error, so traffic is interrupted. Not good! </br>
This is because, during the deployment, our new pod is not ready to take traffic!
## Readiness Probes
Let's add a readiness probe that tells Kubernetes when we are ready:
```
readinessProbe:
httpGet:
path: /
port: 5000
initialDelaySeconds: 3
periodSeconds: 3
failureThreshold: 3
```
### Automatic failover with Readiness probes
Let's pretend our application starts hanging and no longer returns responses </br>
This is common with some web servers, which may then need to be manually restarted
```
kubectl exec -it podname -- sh -c "rm /data.txt"
```
Now we will notice our client app starts getting errors. </br>
A few things to notice:
* Our readiness probe detected an issue and removed traffic from the faulty pod.
* We should be running more than one application so we would be highly available
```
kubectl scale deploy server --replicas 2
```
* Notice traffic comes back as it's routed to the healthy pod
Fix our old pod: `kubectl exec -it podname -- sh -c "echo 'ok' > /data.txt"` </br>
* If we do this again with 2 pods, notice we still get an interruption but our app automatically stabilises after some time
* This is because readinessProbe has `failureThreshold` and some failure will be expected before recovery
* Do not set this `failureThreshold` too low as you may remove traffic frequently. Tune accordingly!
Readiness probes help us automatically remove traffic when there are intermittent network issues </br>
## Liveness Probes
Liveness probe helps us when we cannot automatically recover. </br>
Let's use the same mechanism to create a faulty pod:
```
kubectl exec -it podname -- sh -c "rm /data.txt"
```
Our readiness probe has saved us from traffic issues. </br>
But we want the pod to recover automatically, so let's create livenessProbe:
```
livenessProbe:
httpGet:
path: /
port: 5000
initialDelaySeconds: 3
periodSeconds: 4
failureThreshold: 8
```
Scale back up: `kubectl scale deploy server --replicas 2`
Create a faulty pod: `kubectl exec -it podname -- sh -c "rm /data.txt"`
If we observe, we will notice the readinessProbe saves our traffic, and the livenessProbe will eventually replace the bad pod </br>
## Startup Probes
The [startup probe](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-startup-probes) is for slow starting applications </br>
It's important to understand the difference between startup and readiness probes. </br>
In our examples here, the readiness probe acts as a startup probe too, since our app is fairly slow to start! </br>
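As a rough sketch (values here are illustrative, assuming the same `/` endpoint on port 5000 used by our other probes), a startup probe could look like this; the other probes are disabled until it succeeds:
```
startupProbe:
  httpGet:
    path: /
    port: 5000
  periodSeconds: 5
  failureThreshold: 30 # allows up to ~150s of startup time before liveness takes over
```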

22
probes/client.yaml Normal file
View File

@@ -0,0 +1,22 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: client
labels:
app: client
spec:
selector:
matchLabels:
app: client
replicas: 1
template:
metadata:
labels:
app: client
spec:
containers:
- name: client
image: alpine:latest
command:
- sleep
- "9999"

83
probes/server.yaml Normal file
View File

@@ -0,0 +1,83 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: server
labels:
app: server
spec:
selector:
matchLabels:
app: server
replicas: 1
template:
metadata:
labels:
app: server
version: "1"
spec:
containers:
- name: server
image: python:alpine
workingDir: /app
command: ["/bin/sh"]
args:
- -c
- "pip3 install --disable-pip-version-check --root-user-action=ignore flask && echo 'ok' > /data.txt && flask run -h 0.0.0.0 -p 5000"
ports:
- containerPort: 5000
volumeMounts:
- name: app
mountPath: "/app"
readinessProbe:
httpGet:
path: /
port: 5000
initialDelaySeconds: 3
periodSeconds: 3
failureThreshold: 3
livenessProbe:
httpGet:
path: /
port: 5000
initialDelaySeconds: 3
periodSeconds: 4
failureThreshold: 8
volumes:
- name: app
configMap:
name: server-code
---
apiVersion: v1
kind: Service
metadata:
name: server
labels:
app: server
spec:
type: ClusterIP
selector:
app: server
ports:
- protocol: TCP
name: http
port: 80
targetPort: 5000
---
apiVersion: v1
kind: ConfigMap
metadata:
name: server-code
data:
app.py: |
import time
import logging
import os.path
logging.basicConfig(level=logging.DEBUG)
from flask import Flask
app = Flask(__name__)
@app.route("/")
def hello():
with open('/data.txt') as data:
return data.read()

209
rbac/README.md Normal file
View File

@@ -0,0 +1,209 @@
# Introduction to Kubernetes: RBAC
## Create Kubernetes cluster
```
kind create cluster --name rbac --image kindest/node:v1.20.2
```
## Kubernetes CA Certificate
Kubernetes does not have a concept of users; instead it relies on certificates and will only
trust certificates signed by its own CA. </br>
To get the CA certificates for our cluster, the easiest way is to access the master node. </br>
Because we run on `kind`, our master node is a docker container. </br>
The CA certificates exist in the `/etc/kubernetes/pki` folder by default. </br>
If you are using `minikube` you may find them under `~/.minikube/`.
Access the master node:
```
docker exec -it rbac-control-plane bash
ls -l /etc/kubernetes/pki
total 60
-rw-r--r-- 1 root root 1135 Sep 10 01:38 apiserver-etcd-client.crt
-rw------- 1 root root 1675 Sep 10 01:38 apiserver-etcd-client.key
-rw-r--r-- 1 root root 1143 Sep 10 01:38 apiserver-kubelet-client.crt
-rw------- 1 root root 1679 Sep 10 01:38 apiserver-kubelet-client.key
-rw-r--r-- 1 root root 1306 Sep 10 01:38 apiserver.crt
-rw------- 1 root root 1675 Sep 10 01:38 apiserver.key
-rw-r--r-- 1 root root 1066 Sep 10 01:38 ca.crt
-rw------- 1 root root 1675 Sep 10 01:38 ca.key
drwxr-xr-x 2 root root 4096 Sep 10 01:38 etcd
-rw-r--r-- 1 root root 1078 Sep 10 01:38 front-proxy-ca.crt
-rw------- 1 root root 1679 Sep 10 01:38 front-proxy-ca.key
-rw-r--r-- 1 root root 1103 Sep 10 01:38 front-proxy-client.crt
-rw------- 1 root root 1675 Sep 10 01:38 front-proxy-client.key
-rw------- 1 root root 1679 Sep 10 01:38 sa.key
-rw------- 1 root root 451 Sep 10 01:38 sa.pub
# exit the container
```
Copy the certs out of our master node:
```
cd kubernetes/rbac
docker cp rbac-control-plane:/etc/kubernetes/pki/ca.crt ca.crt
docker cp rbac-control-plane:/etc/kubernetes/pki/ca.key ca.key
```
# Kubernetes Users
As mentioned before, Kubernetes has no concept of users; it trusts certificates that are signed by its CA. <br/>
This allows a lot of flexibility, as Kubernetes lets you bring your own auth mechanisms, such as [OpenID Connect](https://kubernetes.io/docs/reference/access-authn-authz/authentication/#openid-connect-tokens) or OAuth. </br>
This also allows managed Kubernetes offerings to use their cloud logins to authenticate. </br>
So on Azure, I can use my Microsoft account, GKE my Google account and AWS EKS my Amazon account. </br>
You will need to consult your cloud provider to setup authentication. </br>
Example [Azure AKS](https://docs.microsoft.com/en-us/azure/aks/azure-ad-integration-cli)
## User Certificates
The first thing we need to do is create a certificate signed by our Kubernetes CA. </br>
We have the CA, so let's make a certificate. </br>
An easy way to create a cert is to use `openssl`, and the easiest way to get `openssl` is to simply run a container:
```
docker run -it -v ${PWD}:/work -w /work -v ${HOME}:/root/ --net host alpine sh
apk add openssl
```
Let's create a certificate for Bob Smith:
```
#start with a private key
openssl genrsa -out bob.key 2048
```
Now that we have a key, we need a certificate signing request (CSR). </br>
We also need to specify the groups that Bob belongs to. </br>
Let's pretend Bob is part of the `Shopping` team and will be developing
applications for the `Shopping` team:
```
openssl req -new -key bob.key -out bob.csr -subj "/CN=Bob Smith/O=Shopping"
```
Use the CA to generate our certificate by signing our CSR. </br>
We can set an expiry on our certificate as well:
```
openssl x509 -req -in bob.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out bob.crt -days 1
```
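Optionally, inspect the signed certificate to confirm the subject (our user and group) and the expiry:
```
openssl x509 -in bob.crt -noout -subject -enddate
```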
## Building a kube config
Let's install `kubectl` in our container to make things easier:
```
apk add curl nano
curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl
chmod +x ./kubectl
mv ./kubectl /usr/local/bin/kubectl
```
We'll be trying to avoid messing with our current kubernetes config. </br>
So let's tell `kubectl` to look at a new config that does not yet exist:
```
export KUBECONFIG=~/.kube/new-config
```
Create a cluster entry which points to the cluster and contains the details of the CA certificate (the API server port of a `kind` cluster is randomised, so check yours with `kubectl cluster-info`):
```
kubectl config set-cluster dev-cluster --server=https://127.0.0.1:52807 \
--certificate-authority=ca.crt \
--embed-certs=true
#see changes
nano ~/.kube/new-config
```
Create a user entry for Bob and a context that ties the user, cluster and namespace together:
```
kubectl config set-credentials bob --client-certificate=bob.crt --client-key=bob.key --embed-certs=true
kubectl config set-context dev --cluster=dev-cluster --namespace=shopping --user=bob
kubectl config use-context dev
```
Test access:
```
kubectl get pods
Error from server (Forbidden): pods is forbidden: User "Bob Smith" cannot list resource "pods" in API group "" in the namespace "shopping"
```
## Give Bob Smith Access
```
cd kubernetes/rbac
kubectl create ns shopping
kubectl -n shopping apply -f .\role.yaml
kubectl -n shopping apply -f .\rolebinding.yaml
```
## Test Access as Bob
```
kubectl get pods
No resources found in shopping namespace.
```
# Kubernetes Service Accounts
So we've covered users, but what about applications or services running in our cluster? </br>
Most business apps will not need to connect to the kubernetes API unless you are building something that integrates with your cluster, like a CI/CD tool, an autoscaler or a custom webhook. </br>
Generally applications will use a service account to connect. </br>
You can read more about [Kubernetes Service Accounts](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/).
Let's deploy a service account
```
kubectl -n shopping apply -f serviceaccount.yaml
```
Now we can deploy a pod that uses the service account
```
kubectl -n shopping apply -f pod.yaml
```
Now we can test the access from within that pod by trying to list pods:
```
kubectl -n shopping exec -it shopping-api -- bash
# Point to the internal API server hostname
APISERVER=https://kubernetes.default.svc
# Path to ServiceAccount token
SERVICEACCOUNT=/var/run/secrets/kubernetes.io/serviceaccount
# Read this Pod's namespace
NAMESPACE=$(cat ${SERVICEACCOUNT}/namespace)
# Read the ServiceAccount bearer token
TOKEN=$(cat ${SERVICEACCOUNT}/token)
# Reference the internal certificate authority (CA)
CACERT=${SERVICEACCOUNT}/ca.crt
# List pods through the API
curl --cacert ${CACERT} --header "Authorization: Bearer $TOKEN" -s ${APISERVER}/api/v1/namespaces/shopping/pods/
# we should see an error not having access
```
Now we can allow this pod to list pods in the shopping namespace
```
kubectl -n shopping apply -f serviceaccount-role.yaml
kubectl -n shopping apply -f serviceaccount-rolebinding.yaml
```
If we run the `curl` command again, we can see that we are now able to get a JSON
response with pod information

9
rbac/pod.yaml Normal file
View File

@@ -0,0 +1,9 @@
apiVersion: v1
kind: Pod
metadata:
name: shopping-api
spec:
containers:
- image: nginx
name: shopping-api
serviceAccountName: shopping-api

12
rbac/role.yaml Normal file
View File

@@ -0,0 +1,12 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
namespace: shopping
name: manage-pods
rules:
- apiGroups: [""]
resources: ["pods", "pods/exec"]
verbs: ["get", "watch", "list", "create", "delete"]
- apiGroups: ["apps"]
resources: ["deployments"]
verbs: ["get", "watch", "list", "delete", "create"]

13
rbac/rolebinding.yaml Normal file
View File

@@ -0,0 +1,13 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: manage-pods
namespace: shopping
subjects:
- kind: User
name: "Bob Smith"
apiGroup: rbac.authorization.k8s.io
roleRef:
kind: Role
name: manage-pods
apiGroup: rbac.authorization.k8s.io

View File

@@ -0,0 +1,9 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
namespace: shopping
name: shopping-api
rules:
- apiGroups: [""]
resources: ["pods"]
verbs: ["get", "watch", "list"]

View File

@@ -0,0 +1,12 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: shopping-api
namespace: shopping
subjects:
- kind: ServiceAccount
name: shopping-api
roleRef:
kind: Role
name: shopping-api
apiGroup: rbac.authorization.k8s.io

4
rbac/serviceaccount.yaml Normal file
View File

@@ -0,0 +1,4 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: shopping-api

68
secrets/README.md Normal file
View File

@@ -0,0 +1,68 @@
# Introduction to Kubernetes: Secrets
## Create a cluster with Kind
```
kind create cluster --name secrets --image kindest/node:v1.31.1
```
## Our Secret
We have a secret under `kubernetes/secrets/secret.json`
```
cat kubernetes/secrets/secret.json
```
## Using our secret in a container
As a file:
```
docker run -it -v $PWD/kubernetes/secrets/secret.json:/secrets/secret.json ubuntu:latest bash
cat /secrets/secret.json
```
As environment variables:
```
api_key="somesecretgoeshere"
docker run -it -e API_KEY=$api_key ubuntu:latest bash
echo $API_KEY
```
## Kubernetes Secret
Read more about [Kubernetes secrets](https://kubernetes.io/docs/concepts/configuration/secret/)
## Create our secret
There are two main ways we can create a Kubernetes secret. </br>
Either by creating the secret object with `kubectl create secret`, or by applying/creating it declaratively using YAML with `kubectl apply -f`
`kubectl create secret`:
```
kubectl create secret generic mysecret --from-file kubernetes/secrets/secret.json
```
`kubectl apply -f` or `kubectl create -f` allows us to define things declaratively using YAML files:
```
kubectl apply -f kubernetes/secrets/secret.yaml
```
## Use our secret
In order to use our secret, we add a `volume` to our pod spec and then mount that using a `volumeMount` </br>
We can also reference the secret in `env` variables </br>
```
kubectl apply -f kubernetes/secrets/pod.yaml
```
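For reference, the relevant parts of `pod.yaml` wire both patterns up to the `mysecret` object we created above:
```
env:
- name: API_KEY
  valueFrom:
    secretKeyRef:
      name: mysecret
      key: api_key
volumeMounts:
- name: secret-volume
  mountPath: /secrets/
volumes:
- name: secret-volume
  secret:
    secretName: mysecret
```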

30
secrets/pod.yaml Normal file
View File

@@ -0,0 +1,30 @@
apiVersion: v1
kind: Pod
metadata:
name: example-pod
namespace: default
labels:
app: example-app
test: test
spec:
nodeSelector:
kubernetes.io/os: linux
containers:
- name: example-app
image: aimvector/python:1.0.4
imagePullPolicy: Always
ports:
- containerPort: 5000
env:
- name: API_KEY
valueFrom:
secretKeyRef:
name: mysecret
key: api_key
volumeMounts:
- name: secret-volume
mountPath: /secrets/
volumes:
- name: secret-volume
secret:
secretName: mysecret

View File

@@ -0,0 +1,285 @@
# Introduction to Sealed Secrets
Checkout the [Sealed Secrets GitHub Repo](https://github.com/bitnami-labs/sealed-secrets) </br>
There are a number of use-cases where this is a really great concept. </br>
1) GitOps - Storing your YAML manifests in Git and using GitOps tools to sync the manifests to your clusters (For example Flux and ArgoCD!)
2) Giving a team access to secrets without revealing the secret material.
developer: "I want to confirm my deployed secret value is X in the cluster" </br>
developer can compare `sealedSecret` YAML in Git, with the `sealedSecret` in the cluster and confirm the value is the same. </br>
## Create a kubernetes cluster
In this guide we'll need a Kubernetes cluster for testing. Let's create one using [kind](https://kind.sigs.k8s.io/) </br>
```
kind create cluster --name sealedsecrets --image kindest/node:v1.23.5
```
See cluster up and running:
```
kubectl get nodes
NAME STATUS ROLES AGE VERSION
sealedsecrets-control-plane Ready control-plane,master 2m12s v1.23.5
```
## Run a container to work in
### run Alpine Linux:
```
docker run -it --rm -v ${HOME}:/root/ -v ${PWD}:/work -w /work --net host alpine sh
```
### install kubectl
```
apk add --no-cache curl
curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl
chmod +x ./kubectl
mv ./kubectl /usr/local/bin/kubectl
```
### install helm
```
curl -o /tmp/helm.tar.gz -LO https://get.helm.sh/helm-v3.10.1-linux-amd64.tar.gz
tar -C /tmp/ -zxvf /tmp/helm.tar.gz
mv /tmp/linux-amd64/helm /usr/local/bin/helm
chmod +x /usr/local/bin/helm
```
### test cluster access:
```
/work # kubectl get nodes
NAME STATUS ROLES AGE VERSION
sealedsecrets-control-plane Ready control-plane,master 3m26s v1.23.5
```
## Install Sealed Secret Controller
### download the YAML
In this demo we'll use version [0.19.1](https://github.com/bitnami-labs/sealed-secrets/releases/download/v0.19.1/controller.yaml) of the sealed secrets controller downloaded from the
[Github releases](https://github.com/bitnami-labs/sealed-secrets/releases) page
```
curl -L -o ./kubernetes/secrets/sealed-secrets/controller-v0.19.1.yaml https://github.com/bitnami-labs/sealed-secrets/releases/download/v0.19.1/controller.yaml
```
### install using Helm
You can also install the controller using `helm`
```
helm repo add sealed-secrets https://bitnami-labs.github.io/sealed-secrets
helm search repo sealed-secrets --versions
helm template sealed-secrets --version 2.7.0 -n kube-system sealed-secrets/sealed-secrets \
> ./kubernetes/secrets/sealed-secrets/controller-helm-v0.19.1.yaml
```
With `helm template` we can explore the YAML and then replace `helm template` with `helm install`
to install the chart.
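For example, the equivalent install (same chart and version as the template above) would look something like:
```
helm install sealed-secrets --version 2.7.0 -n kube-system sealed-secrets/sealed-secrets
```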
### install using YAML manifest
```
kubectl apply -f kubernetes/secrets/sealed-secrets/controller-v0.19.1.yaml
```
### Check the installation
The controller deploys to the `kube-system` namespace by default.
```
kubectl -n kube-system get pods
```
Check the logs of the sealed secret controller
```
kubectl -n kube-system logs -l name=sealed-secrets-controller --tail -1
```
From the logs we can see that it writes the encryption key it's going to use as a Kubernetes secret </br>
Example log:
```
2022/11/05 21:38:20 New key written to kube-system/sealed-secrets-keymwzn9
```
## Encryption keys
```
kubectl -n kube-system get secrets
kubectl -n kube-system get secret sealed-secrets-keygxlvg -o yaml
```
## Download KubeSeal
The same way we downloaded the sealed secrets controller from the [GitHub releases](https://github.com/bitnami-labs/sealed-secrets/releases) page,
we'll want to download kubeseal from the assets section
```
curl -L -o /tmp/kubeseal.tar.gz \
https://github.com/bitnami-labs/sealed-secrets/releases/download/v0.19.1/kubeseal-0.19.1-linux-amd64.tar.gz
tar -xzf /tmp/kubeseal.tar.gz -C /tmp/
chmod +x /tmp/kubeseal
mv /tmp/kubeseal /usr/local/bin/
```
We can now run `kubeseal --help`
## Sealing a basic Kubernetes Secret
Look at our existing Kubernetes secret YAML:
```
cat kubernetes/secrets/secret.yaml
```
If you run `kubeseal` you will see it pause and expect input from `stdin`. </br>
You can paste your secret YAML and press CTRL+D to terminate `stdin`. </br>
You will notice it writes a `sealedSecret` to `stdout`. </br>
We can then automate this using `|` characters. </br>
Create a sealed secret using `stdin` :
```
cat kubernetes/secrets/secret.yaml | kubeseal -o yaml > kubernetes/secrets/sealed-secrets/sealed-secret.yaml
```
Create a sealed secret using a YAML file:
```
kubeseal -f kubernetes/secrets/secret.yaml -o yaml > kubernetes/secrets/sealed-secrets/sealed-secret.yaml
```
Deploy the sealed secret
```
kubectl apply -f kubernetes/secrets/sealed-secrets/sealed-secret.yaml
```
Now, a few seconds later, see the secret:
```
kubectl -n default get secret
NAME TYPE DATA AGE
mysecret Opaque 1 25s
```
## How the encryption key is managed
By default the controller generates a key as we saw earlier and stores it in a Kubernetes secret. </br>
By default, the controller will generate a new active key every 30 days.
It keeps old keys so it can decrypt previously encrypted sealed secrets, and uses the active key for new encryption. </br>
It's important to keep these keys secured. </br>
When the controller starts, it consumes all of these key secrets and starts using them </br>
This means we can back up these keys in a vault and use them to migrate our clusters if we wanted to. </br>
We can also override the renewal period to increase or decrease it; `0` turns renewal off </br>
To showcase this, I can set `--key-renew-period=<value>` to 5 minutes and watch how it works. </br>
```
apk add nano
export KUBE_EDITOR=nano
```
Edit the deployment and set the flag on the controller command to add a new key every 5 min for testing:
```
kubectl edit deployment/sealed-secrets-controller --namespace=kube-system
```
```
spec:
  containers:
  - command:
    - controller
    - --key-renew-period=5m
```
You should see a new key created under secrets in the `kube-system` namespace
```
kubectl -n kube-system get secrets
```
## Backup your encryption keys
To get your keys out for backup purposes, it's as simple as grabbing the secrets by label using `kubectl`:
```
kubectl get secret -n kube-system \
-l sealedsecrets.bitnami.com/sealed-secrets-key \
-o yaml \
> kubernetes/secrets/sealed-secrets/sealed-secret-keys.key
```
This can be used when migrating from one cluster to another, or simply for keeping backups. </br>
## Migrate your encryption keys to a new cluster
To test this, let's delete our cluster and recreate it. </br>
```
kind delete cluster --name sealedsecrets
kind create cluster --name sealedsecrets --image kindest/node:v1.23.5
# check the cluster
kubectl get nodes
# redeploy sealed-secrets controller
kubectl apply -f kubernetes/secrets/sealed-secrets/controller-v0.19.1.yaml
kubectl -n kube-system get pods
```
### restore our encryption keys
```
kubectl apply -f kubernetes/secrets/sealed-secrets/sealed-secret-keys.key
```
### apply our old sealed secret
```
kubectl apply -f kubernetes/secrets/sealed-secrets/sealed-secret.yaml
```
### see sealed secret status
To troubleshoot the secret, you can use the popular `kubectl describe` command. </br>
Note that we're unable to decrypt the secret. </br>
Why is that? </br>
Well, this is because the encryption key secrets are only read when the controller starts. </br>
So we will need to restart the controller so that it can ingest the restored encryption keys:
```
kubectl delete pod -n kube-system -l name=sealed-secrets-controller
```
## Re-encrypting secrets with the latest key
We can also use `kubeseal --re-encrypt` to encrypt a secret again. </br>
Let's say we want to encrypt with the latest key. </br>
This will re-encrypt the sealed secret without having to pull the actual secret to the client </br>
```
cat ./kubernetes/secrets/sealed-secrets/sealed-secret.yaml \
| kubeseal --re-encrypt -o yaml
```
I can then save this output to overwrite the original local sealed secret file:
```
cat ./kubernetes/secrets/sealed-secrets/sealed-secret.yaml \
| kubeseal --re-encrypt -o yaml \
> tmp.yaml && mv tmp.yaml ./kubernetes/secrets/sealed-secrets/sealed-secret.yaml
```

View File

@@ -0,0 +1,354 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
annotations: {}
labels:
name: sealed-secrets-controller
name: sealed-secrets-controller
namespace: kube-system
spec:
minReadySeconds: 30
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
name: sealed-secrets-controller
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
type: RollingUpdate
template:
metadata:
annotations: {}
labels:
name: sealed-secrets-controller
spec:
containers:
- args: []
command:
- controller
env: []
image: docker.io/bitnami/sealed-secrets-controller:v0.19.1
imagePullPolicy: IfNotPresent
livenessProbe:
httpGet:
path: /healthz
port: http
name: sealed-secrets-controller
ports:
- containerPort: 8080
name: http
readinessProbe:
httpGet:
path: /healthz
port: http
securityContext:
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 1001
stdin: false
tty: false
volumeMounts:
- mountPath: /tmp
name: tmp
imagePullSecrets: []
initContainers: []
securityContext:
fsGroup: 65534
serviceAccountName: sealed-secrets-controller
terminationGracePeriodSeconds: 30
volumes:
- emptyDir: {}
name: tmp
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
name: sealedsecrets.bitnami.com
spec:
group: bitnami.com
names:
kind: SealedSecret
listKind: SealedSecretList
plural: sealedsecrets
singular: sealedsecret
scope: Namespaced
versions:
- name: v1alpha1
schema:
openAPIV3Schema:
description: SealedSecret is the K8s representation of a "sealed Secret" -
a regular k8s Secret that has been sealed (encrypted) using the controller's
key.
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
spec:
description: SealedSecretSpec is the specification of a SealedSecret
properties:
data:
description: Data is deprecated and will be removed eventually. Use
per-value EncryptedData instead.
format: byte
type: string
encryptedData:
additionalProperties:
type: string
type: object
x-kubernetes-preserve-unknown-fields: true
template:
description: Template defines the structure of the Secret that will
be created from this sealed secret.
properties:
data:
additionalProperties:
type: string
description: Keys that should be templated using decrypted data
nullable: true
type: object
metadata:
description: 'Standard object''s metadata. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata'
nullable: true
type: object
x-kubernetes-preserve-unknown-fields: true
type:
description: Used to facilitate programmatic handling of secret
data.
type: string
type: object
required:
- encryptedData
type: object
status:
description: SealedSecretStatus is the most recently observed status of
the SealedSecret.
properties:
conditions:
description: Represents the latest available observations of a sealed
secret's current state.
items:
description: SealedSecretCondition describes the state of a sealed
secret at a certain point.
properties:
lastTransitionTime:
description: Last time the condition transitioned from one status
to another.
format: date-time
type: string
lastUpdateTime:
description: The last time this condition was updated.
format: date-time
type: string
message:
description: A human readable message indicating details about
the transition.
type: string
reason:
description: The reason for the condition's last transition.
type: string
status:
description: 'Status of the condition for a sealed secret. Valid
values for "Synced": "True", "False", or "Unknown".'
type: string
type:
description: 'Type of condition for a sealed secret. Valid value:
"Synced"'
type: string
required:
- status
- type
type: object
type: array
observedGeneration:
description: ObservedGeneration reflects the generation most recently
observed by the sealed-secrets controller.
format: int64
type: integer
type: object
required:
- spec
type: object
served: true
storage: true
subresources:
status: {}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
annotations: {}
labels:
name: sealed-secrets-controller
name: sealed-secrets-controller
namespace: kube-system
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: sealed-secrets-key-admin
subjects:
- kind: ServiceAccount
name: sealed-secrets-controller
namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
annotations: {}
labels:
name: sealed-secrets-controller
name: sealed-secrets-controller
namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
annotations: {}
labels:
name: sealed-secrets-controller
name: sealed-secrets-controller
namespace: kube-system
spec:
ports:
- port: 8080
targetPort: 8080
selector:
name: sealed-secrets-controller
type: ClusterIP
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
annotations: {}
labels:
name: sealed-secrets-service-proxier
name: sealed-secrets-service-proxier
namespace: kube-system
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: sealed-secrets-service-proxier
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: Group
name: system:authenticated
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
annotations: {}
labels:
name: sealed-secrets-service-proxier
name: sealed-secrets-service-proxier
namespace: kube-system
rules:
- apiGroups:
- ""
resourceNames:
- sealed-secrets-controller
resources:
- services
verbs:
- get
- apiGroups:
- ""
resourceNames:
- 'http:sealed-secrets-controller:'
- http:sealed-secrets-controller:http
- sealed-secrets-controller
resources:
- services/proxy
verbs:
- create
- get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
annotations: {}
labels:
name: sealed-secrets-key-admin
name: sealed-secrets-key-admin
namespace: kube-system
rules:
- apiGroups:
- ""
resources:
- secrets
verbs:
- create
- list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
annotations: {}
labels:
name: sealed-secrets-controller
name: sealed-secrets-controller
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: secrets-unsealer
subjects:
- kind: ServiceAccount
name: sealed-secrets-controller
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
annotations: {}
labels:
name: secrets-unsealer
name: secrets-unsealer
rules:
- apiGroups:
- bitnami.com
resources:
- sealedsecrets
verbs:
- get
- list
- watch
- apiGroups:
- bitnami.com
resources:
- sealedsecrets/status
verbs:
- update
- apiGroups:
- ""
resources:
- secrets
verbs:
- get
- list
- create
- update
- delete
- watch
- apiGroups:
- ""
resources:
- events
verbs:
- create
- patch
- apiGroups:
- ""
resources:
- namespaces
verbs:
- get

View File

@@ -0,0 +1,16 @@
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
creationTimestamp: null
name: mysecret
namespace: default
spec:
encryptedData:
secret.json: AgBGAxpwLeMnCxvuGAY0tipvPVexoqucs3qKUNwD8MNy9PXw+SwLnUzMGSCX1Q/I8/EvNcDOY7vkenHts4Rzexva6m0MNMn60223QEm1394HRQQS5xELnCyAsjpSeHuN4TizxmWU/O8IBPvAMYINFapA0Hc3VADArKaToECyJJxAxBmUFUuQTVdWS8E/yX1ZMibpwYdPQt0PK7i/8eA6Qln7KfvNxp4KaWz4enL9uScIztXLfeHaDgE7CazaINL/vmkUIB0J7tbAuhLwPpy86pcL+MioeXTk4Kze7ikPn8dXSdYgSzVR85CaaFwueXWFujqUDEgu6ROGH0CyLZ8aK/Z8X3NAnIxupQLl26xbRDONXwjLsEFAOHLmX+J/gpkDJBU6F+Nt/Unc7q2KBPPuQgdozr3e7ZtCRnUO7kU75UccwgqHF9o/8n4j4+/WjkepTuKy6hBP16sFRTdTWg+iP79xBPLAKyJ1urRVYRhLkzgRf15Wyk28XeGMw0Ip8hEJnDIzTc9IepsMdXDHQppHIUUfp8LIgeWZUxsb//Z7+28qwuffEr436cgKI6jf4g8eIaIFNyk1JRU2cmcDgEQEhTYaskqsEedaDfBWzjxhtqGYh2SJIDvpZkL43rZQLM68EhApFkefo/6goNRupXAEUY8xb9XP3Z6nMiIlGZMU5pl4A6NPpRSEgyipHaM8E8MqN9dyQVMgcnRBWYjk8WZIQHWrKiw0BtNTgVuVBxKkpQkqQHO7FXLdQw==
template:
metadata:
creationTimestamp: null
name: mysecret
namespace: default
type: Opaque

3
secrets/secret.json Normal file
View File

@@ -0,0 +1,3 @@
{
"api_key" : "somesecretgoeshere"
}

19
secrets/secret.yaml Normal file
View File

@@ -0,0 +1,19 @@
apiVersion: v1
kind: Secret
metadata:
name: mysecret
namespace: default
labels:
app: example-app
type: Opaque
data:
api_key: c29tZXNlY3JldGdvZXNoZXJlCg==
secret.json: ew0KICAiYXBpX2tleSIgOiAic29tZXNlY3JldGdvZXNoZXJlIg0KfQ==
# stringData:
# secret.json: |-
# {
# "api_key" : "somesecretgoeshere"
# }
#kubectl create secret generic mysecret --from-file .\golang\secrets\secret.json

34
services/README.md Normal file
View File

@@ -0,0 +1,34 @@
# Introduction to Kubernetes: Services
## Create a Kubernetes cluster
Firstly, we will need a Kubernetes cluster and will create one using [kind](https://kind.sigs.k8s.io/)
```
kind create cluster --name services --image kindest/node:v1.31.1
```
Test the cluster
```
kubectl get nodes
NAME STATUS ROLES AGE VERSION
services-control-plane Ready control-plane 5m48s v1.31.1
```
## Deploy a few pods
Let's deploy a few pods using a Kubernetes Deployment. To understand services, we need to deploy some pods that become our "upstream" or "endpoint" that we want to access. </br>
```
kubectl apply -f kubernetes/deployments/deployment.yaml
```
## Deploy a service
We can now deploy a Kubernetes service that targets our deployment:
```
kubectl apply -f kubernetes/services/service.yaml
```
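We can then confirm the service has endpoints for our pods and test it with a port-forward (assuming the service name `example-service` from `service.yaml`):
```
kubectl get service example-service
kubectl get endpoints example-service
kubectl port-forward svc/example-service 80
# we can access http://localhost/
```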

15
services/service.yaml Normal file
View File

@@ -0,0 +1,15 @@
apiVersion: v1
kind: Service
metadata:
name: example-service
labels:
app: example-app
spec:
type: ClusterIP
selector:
app: example-app
ports:
- protocol: TCP
name: http
port: 80
targetPort: 5000

View File

@@ -0,0 +1,32 @@
apiVersion: v1
kind: Service
metadata:
name: hit-counter-lb
spec:
type: LoadBalancer
ports:
- port: 80
protocol: TCP
targetPort: 5000
selector:
app: myapp
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: hit-counter-app
spec:
replicas: 1
selector:
matchLabels:
app: myapp
template:
metadata:
labels:
app: myapp
spec:
containers:
- name: myapp
image: aimvector/api-redis-ha:1.0
ports:
- containerPort: 5000

30
statefulsets/notes.md Normal file
View File

@@ -0,0 +1,30 @@
# Create a namespace
```
kubectl create ns example
```
# Check the storageclass for host path provisioner
```
kubectl get storageclass
```
# Deploy our statefulset
```
kubectl -n example apply -f .\kubernetes\statefulsets\statefulset.yaml
kubectl -n example apply -f .\kubernetes\statefulsets\example-app.yaml
```
# Enable Redis Cluster
```
$IPs = $(kubectl -n example get pods -l app=redis-cluster -o jsonpath='{range.items[*]}{.status.podIP}:6379 ')
kubectl -n example exec -it redis-cluster-0 -- /bin/sh -c "redis-cli -h 127.0.0.1 -p 6379 --cluster create ${IPs}"
kubectl -n example exec -it redis-cluster-0 -- /bin/sh -c "redis-cli -h 127.0.0.1 -p 6379 cluster info"
```
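As a quick sanity check, we can write and read a key through any node; the `-c` flag enables cluster redirects:
```
kubectl -n example exec -it redis-cluster-0 -- redis-cli -c set hello world
kubectl -n example exec -it redis-cluster-0 -- redis-cli -c get hello
```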
# More info
https://rancher.com/blog/2019/deploying-redis-cluster

View File

@@ -0,0 +1,86 @@
---
apiVersion: v1
kind: ConfigMap
metadata:
name: redis-cluster
data:
update-node.sh: |
#!/bin/sh
REDIS_NODES="/data/nodes.conf"
sed -i -e "/myself/ s/[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}/${POD_IP}/" ${REDIS_NODES}
exec "$@"
redis.conf: |+
cluster-enabled yes
cluster-require-full-coverage no
cluster-node-timeout 15000
cluster-config-file /data/nodes.conf
cluster-migration-barrier 1
appendonly yes
protected-mode no
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: redis-cluster
spec:
serviceName: redis-cluster
replicas: 6
selector:
matchLabels:
app: redis-cluster
template:
metadata:
labels:
app: redis-cluster
spec:
containers:
- name: redis
image: redis:5.0.1-alpine
ports:
- containerPort: 6379
name: client
- containerPort: 16379
name: gossip
command: ["/conf/update-node.sh", "redis-server", "/conf/redis.conf"]
env:
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
volumeMounts:
- name: conf
mountPath: /conf
readOnly: false
- name: data
mountPath: /data
readOnly: false
volumes:
- name: conf
configMap:
name: redis-cluster
defaultMode: 0755
volumeClaimTemplates:
- metadata:
name: data
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: "hostpath"
resources:
requests:
storage: 50Mi
---
apiVersion: v1
kind: Service
metadata:
name: redis-cluster
spec:
clusterIP: None
ports:
- port: 6379
targetPort: 6379
name: client
- port: 16379
targetPort: 16379
name: gossip
selector:
app: redis-cluster