Merge branch 'master' into k8s

marcel-dempers committed 2022-04-15 15:54:26 +10:00
commit 71a1fcb7cd
17 changed files with 1295 additions and 2 deletions

.gitignore

@@ -11,3 +11,5 @@ __pycache__/
 security/letsencrypt/introduction/certs/**
 kubernetes/shipa/installs/shipa-helm-chart-1.1.1/
 messaging/kafka/data/*
+kubernetes/portainer/volume*
+kubernetes/rancher/volume/*

@@ -17,13 +17,13 @@ This is important for learning Go, however there are a few challenges for using
 Follow my Redis clustering Tutorial </br>
-<a href="https://youtube.com/playlist?list=PLHq1uqvAteVtlgFkmOlIqWro3XP26y_oW" title="Redis"><img src="https://i.ytimg.com/vi/L3zp347cWNw/hqdefault.jpg" width="50%" alt="Redis Guide" /></a>
+<a href="https://youtube.com/playlist?list=PLHq1uqvAteVtlgFkmOlIqWro3XP26y_oW" title="Redis"><img src="https://i.ytimg.com/vi/L3zp347cWNw/hqdefault.jpg" width="30%" alt="Redis Guide" /></a>
 Code is over [here](../../../storage/redis/clustering/readme.md)
 ## Go Dev Environment
-The same as Part 1+2+3, we start with a [dockerfile](./dockerfile) where we declare our version of `go`.
+The same as Part 1+2+3+4, we start with a [dockerfile](./dockerfile) where we declare our version of `go`.
 The `dockerfile`:

@@ -43,6 +43,10 @@ spec:
         limits:
           memory: "256Mi"
           cpu: "500m"
+      tolerations:
+      - key: "cattle.io/os"
+        value: "linux"
+        effect: "NoSchedule"
       #NOTE: comment out `volumeMounts` section for configmap and\or secret guide
       # volumeMounts:
       # - name: secret-volume

kubernetes/portainer/README.md

@@ -0,0 +1,124 @@
# Introduction to Portainer
Start here 👉🏽[https://www.portainer.io/](https://www.portainer.io/) </br>
Documentation 👉🏽[https://docs.portainer.io/](https://docs.portainer.io/)
## Portainer installation
In this demo, I will be running Kubernetes 1.22 using `kind`, </br>
which is compatible with portainer 2.11.1. </br>
Let's go ahead with a local docker install:
```
cd kubernetes\portainer
mkdir volume-ce
docker run -d -p 9443:9443 -p 8000:8000 --name portainer-ce `
--restart=always `
-v /var/run/docker.sock:/var/run/docker.sock `
-v ${PWD}/volume-ce:/data `
portainer/portainer-ce:2.11.1
```
## SSL & DOMAIN
We can also upload SSL certificates for our portainer server.</br>
In this demo, portainer will issue self-signed certificates. </br>
We will need a domain for our portainer server so our clusters can contact it. </br>
Let's use [nip.io](https://nip.io/) to create a public endpoint for portainer.
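For example, with a made-up machine IP of `192.168.0.16`, nip.io resolves any hostname that embeds an IP back to that IP:
```
# hypothetical IP - replace with your own machine's IP
nslookup portainer.192.168.0.16.nip.io
# resolves to 192.168.0.16, so https://portainer.192.168.0.16.nip.io:9443 reaches portainer
```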
## Create Kubernetes Cluster
Let's start by creating a local `kind` [cluster](https://kind.sigs.k8s.io/). </br>
For local clusters, we can use the public endpoint Agent. </br>
We can get a public endpoint for the portainer agent via one of: </br>
* Ingress
* LoadBalancer
* NodePort

So we'll deploy the portainer agent with `NodePort` for the local cluster. </br>
For production environments, I would recommend not exposing the portainer agent. </br>
In that case, for production, we'll use the portainer edge agent instead. </br>
To get the `NodePort` exposed in `kind`, we'll open a host port with a [kind.yaml](./kind.yaml) config:
```
kind create cluster --name local --config kind.yaml
```
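Before connecting anything, we can confirm the cluster is up (`kind` prefixes the kubectl context with `kind-`):
```
kubectl cluster-info --context kind-local
kubectl get nodes
```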
## Manage Kubernetes Environments
The portainer UI gives us a one-line command to deploy the portainer agent. </br>
Note that in the video, we pick the `node port` option.
## Local: Portainer Agent
I download the YAML from [here](https://downloads.portainer.io/portainer-agent-ce211-k8s-nodeport.yaml) to take a closer look at what it deploys. </br>
Deploy the portainer agent in my `kind` cluster:
```
kubectl apply -f portainer-agent-ce211-k8s-nodeport.yaml
```
See the agent:
```
kubectl -n portainer get pods
```
See the service with the endpoint it exposes:
```
kubectl -n portainer get svc
```
Now since we don't have a public load balancer and are using a NodePort, our service will be exposed on the node IP. </br>
Since the Kubernetes node is our local machine, we should be able to access the portainer agent on `<computer-IP>:30778` </br>
We can obtain our local IP with `ipconfig` </br>
The IP and NodePort will be used to connect our portainer server to the new agent. </br>
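To double-check which NodePort the agent service got (service name and namespace as defined in the downloaded manifest):
```
kubectl -n portainer get svc portainer-agent -o jsonpath='{.spec.ports[0].nodePort}'
```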
## Production: Portainer Edge Agent
For the Edge agent, we get the command in the portainer UI. </br>
Once deployed, we can see the edge agent in our AKS cluster:
```
kubectl -n portainer get pods
```
## Helm
Let's showcase how to deploy helm charts. </br>
Most folks would have helm charts for their ingress controllers, monitoring, logging and other
platform dependencies.</br>
Let's add the Kubernetes NGINX Ingress repo:
```
https://kubernetes.github.io/ingress-nginx
```
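For reference, a rough CLI equivalent of what portainer does with this repo, assuming you have `helm` installed locally:
```
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm search repo ingress-nginx
```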
## GitOps
So from the Application menu, we can add an application from a `git` repository. </br>
Let's add this repo:
```
https://github.com/marcel-dempers/docker-development-youtube-series
```
We also specify the paths of all the manifests that portainer needs to deploy:
* kubernetes/portainer/example-application/deployment.yaml
* kubernetes/portainer/example-application/configmap.yaml
* kubernetes/portainer/example-application/service.yaml
* kubernetes/portainer/example-application/ingress.yaml
Portainer will now poll our repo and deploy any updates, GitOps style!
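For comparison, deploying the same manifests by hand, without the GitOps polling, would be a plain `kubectl apply` from the repo root:
```
kubectl apply -f kubernetes/portainer/example-application/configmap.yaml
kubectl apply -f kubernetes/portainer/example-application/deployment.yaml
kubectl apply -f kubernetes/portainer/example-application/service.yaml
kubectl apply -f kubernetes/portainer/example-application/ingress.yaml
```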

kubernetes/portainer/example-application/configmap.yaml

@@ -0,0 +1,10 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: example-config
data:
config.json: |
{
"environment" : "dev"
}
# kubectl create configmap example-config --from-file ./golang/configs/config.json

kubernetes/portainer/example-application/deployment.yaml

@@ -0,0 +1,43 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: example-deploy
labels:
app: example-app
test: test
spec:
selector:
matchLabels:
app: example-app
replicas: 2
strategy:
type: RollingUpdate
rollingUpdate:
maxSurge: 1
maxUnavailable: 0
template:
metadata:
labels:
app: example-app
spec:
containers:
- name: example-app
image: aimvector/python:1.0.4
imagePullPolicy: Always
ports:
- containerPort: 5000
resources:
requests:
memory: "64Mi"
cpu: "50m"
limits:
memory: "256Mi"
cpu: "500m"
volumeMounts:
- name: config-volume
mountPath: /configs/
volumes:
- name: config-volume
configMap:
name: example-config

kubernetes/portainer/example-application/ingress.yaml

@@ -0,0 +1,19 @@
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: example-ingress
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
ingressClassName: nginx
rules:
- host: marcel.test
http:
paths:
- path: /hello(/|$)(.*)
pathType: Prefix
backend:
service:
name: example-service
port:
number: 80

kubernetes/portainer/example-application/service.yaml

@@ -0,0 +1,15 @@
apiVersion: v1
kind: Service
metadata:
name: example-service
labels:
app: example-app
spec:
type: ClusterIP
selector:
app: example-app
ports:
- protocol: TCP
name: http
port: 80
targetPort: 5000

kubernetes/portainer/kind.yaml

@@ -0,0 +1,12 @@
apiVersion: kind.x-k8s.io/v1alpha4
kind: Cluster
nodes:
- role: control-plane
image: kindest/node:v1.22.0@sha256:b8bda84bb3a190e6e028b1760d277454a72267a5454b57db34437c34a588d047
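  # map the portainer agent's NodePort (30778) onto the host so the server can reach it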
extraPortMappings:
- containerPort: 30778
hostPort: 30778
listenAddress: "0.0.0.0"
protocol: tcp
- role: worker
image: kindest/node:v1.22.0@sha256:b8bda84bb3a190e6e028b1760d277454a72267a5454b57db34437c34a588d047

kubernetes/portainer/portainer-agent-ce211-k8s-nodeport.yaml

@@ -0,0 +1,81 @@
apiVersion: v1
kind: Namespace
metadata:
name: portainer
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: portainer-sa-clusteradmin
namespace: portainer
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: portainer-crb-clusteradmin
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: portainer-sa-clusteradmin
namespace: portainer
---
apiVersion: v1
kind: Service
metadata:
name: portainer-agent
namespace: portainer
spec:
type: NodePort
selector:
app: portainer-agent
ports:
- name: http
protocol: TCP
port: 9001
targetPort: 9001
nodePort: 30778
---
apiVersion: v1
kind: Service
metadata:
name: portainer-agent-headless
namespace: portainer
spec:
clusterIP: None
selector:
app: portainer-agent
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: portainer-agent
namespace: portainer
spec:
selector:
matchLabels:
app: portainer-agent
template:
metadata:
labels:
app: portainer-agent
spec:
serviceAccountName: portainer-sa-clusteradmin
containers:
- name: portainer-agent
image: portainer/agent:2.11.1
imagePullPolicy: Always
env:
- name: LOG_LEVEL
value: DEBUG
- name: AGENT_CLUSTER_ADDR
value: "portainer-agent-headless"
- name: KUBERNETES_POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
ports:
- containerPort: 9001
protocol: TCP

@@ -0,0 +1,100 @@
apiVersion: v1
kind: Namespace
metadata:
name: portainer
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: portainer-sa-clusteradmin
namespace: portainer
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: portainer-crb-clusteradmin
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: portainer-sa-clusteradmin
namespace: portainer
# Optional: can be added to expose the agent port 80 to associate an Edge key.
# ---
# apiVersion: v1
# kind: Service
# metadata:
# name: portainer-agent
# namespace: portainer
# spec:
# type: LoadBalancer
# selector:
# app: portainer-agent
# ports:
# - name: http
# protocol: TCP
# port: 80
# targetPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: portainer-agent
namespace: portainer
spec:
clusterIP: None
selector:
app: portainer-agent
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: portainer-agent
namespace: portainer
spec:
selector:
matchLabels:
app: portainer-agent
template:
metadata:
labels:
app: portainer-agent
spec:
serviceAccountName: portainer-sa-clusteradmin
containers:
- name: portainer-agent
image: portainer/agent:2.11.1
imagePullPolicy: Always
env:
- name: LOG_LEVEL
value: INFO
- name: KUBERNETES_POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: EDGE
value: "1"
- name: AGENT_CLUSTER_ADDR
value: "portainer-agent"
- name: EDGE_ID
valueFrom:
configMapKeyRef:
name: portainer-agent-edge
key: edge.id
- name: EDGE_INSECURE_POLL
valueFrom:
configMapKeyRef:
name: portainer-agent-edge
key: edge.insecure_poll
- name: EDGE_KEY
valueFrom:
secretKeyRef:
name: portainer-agent-edge-key
key: edge.key
ports:
- containerPort: 9001
protocol: TCP
- containerPort: 80
protocol: TCP

kubernetes/rancher/README.md

@@ -0,0 +1,230 @@
# Introduction to Rancher: On-prem Kubernetes
This guide follows the general instructions for a [manual rancher install](https://rancher.com/docs/rancher/v2.5/en/quick-start-guide/deployment/quickstart-manual-setup/), running our own infrastructure on Hyper-V.
# Hyper-V : Prepare our infrastructure
In this demo, we will use Hyper-V to create our infrastructure. </br>
For on-premises environments, many companies use Hyper-V, VMware vSphere or other technologies to create virtual infrastructure on bare metal. </br>
A few points to note here:
* The benefit of virtual infrastructure is that it's immutable.
  a) We can add and throw away virtual machines at will.
  b) This makes maintenance easier, as we can roll out updated virtual machines instead of
  patching existing machines and turning them into long-living snowflakes.
  c) It reduces the lifespan of machines.
* Bare metal provides the compute.
  a) We don't want Kubernetes directly on bare metal, as we want machines to be immutable.
  b) This goes back to the previous point on immutability.
* Every virtual machine needs to be able to reach the others on the network.
  a) This is a Kubernetes networking requirement: all nodes must be able to communicate with one another.
# Hyper-V : Create our network
In order for us to create virtual machines all on the same network, I am going to create a virtual switch in Hyper-V. </br>
Open PowerShell as administrator:
```
# get our network adapter where all virtual machines will run on
# grab the name we want to use
Get-NetAdapter
Import-Module Hyper-V
$ethernet = Get-NetAdapter -Name "Ethernet 2"
New-VMSwitch -Name "virtual-network" -NetAdapterName $ethernet.Name -AllowManagementOS $true -Notes "shared virtual network interface"
```
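We can verify the switch was created:
```
Get-VMSwitch -Name "virtual-network"
```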
# Hyper-V : Create our machines
We first need hard drives for every VM. </br>
Let's create three:
```
mkdir c:\temp\vms\linux-0\
mkdir c:\temp\vms\linux-1\
mkdir c:\temp\vms\linux-2\
New-VHD -Path c:\temp\vms\linux-0\linux-0.vhdx -SizeBytes 20GB
New-VHD -Path c:\temp\vms\linux-1\linux-1.vhdx -SizeBytes 20GB
New-VHD -Path c:\temp\vms\linux-2\linux-2.vhdx -SizeBytes 20GB
```
```
New-VM `
-Name "linux-0" `
-Generation 1 `
-MemoryStartupBytes 2048MB `
-SwitchName "virtual-network" `
-VHDPath "c:\temp\vms\linux-0\linux-0.vhdx" `
-Path "c:\temp\vms\linux-0\"
New-VM `
-Name "linux-1" `
-Generation 1 `
-MemoryStartupBytes 2048MB `
-SwitchName "virtual-network" `
-VHDPath "c:\temp\vms\linux-1\linux-1.vhdx" `
-Path "c:\temp\vms\linux-1\"
New-VM `
-Name "linux-2" `
-Generation 1 `
-MemoryStartupBytes 2048MB `
-SwitchName "virtual-network" `
-VHDPath "c:\temp\vms\linux-2\linux-2.vhdx" `
-Path "c:\temp\vms\linux-2\"
```
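Optionally, give each VM more than the default single vCPU, since Kubernetes nodes generally want at least 2. A sketch using the standard Hyper-V cmdlet, run while the VMs are still off:
```
Set-VMProcessor -VMName "linux-0" -Count 2
Set-VMProcessor -VMName "linux-1" -Count 2
Set-VMProcessor -VMName "linux-2" -Count 2
```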
Set up a DVD drive that holds the `iso` file for Ubuntu Server:
```
Set-VMDvdDrive -VMName "linux-0" -ControllerNumber 1 -Path "C:\temp\ubuntu-20.04.3-live-server-amd64.iso"
Set-VMDvdDrive -VMName "linux-1" -ControllerNumber 1 -Path "C:\temp\ubuntu-20.04.3-live-server-amd64.iso"
Set-VMDvdDrive -VMName "linux-2" -ControllerNumber 1 -Path "C:\temp\ubuntu-20.04.3-live-server-amd64.iso"
```
Start our VMs:
```
Start-VM -Name "linux-0"
Start-VM -Name "linux-1"
Start-VM -Name "linux-2"
```
Now we can open up Hyper-V Manager and see our infrastructure. </br>
In this video we'll connect to each server and run through the initial Ubuntu setup. </br>
Once finished, select the option to reboot; when it starts up, you will notice an `unmount` error on the CD-ROM. </br>
This is OK; just shut down the server and start it up again.
# Hyper-V : Setup SSH for our machines
In this demo, because I need to copy rancher bootstrap commands to each VM, it is easier to do so
over SSH. So let's connect to each VM in Hyper-V and set up SSH. </br>
This is because `copy+paste` does not work without `Enhanced Session` mode in Ubuntu Server. </br>
Let's temporarily turn on SSH on each server:
```
sudo apt update
sudo apt install -y nano net-tools openssh-server
sudo systemctl enable ssh
sudo ufw allow ssh
sudo systemctl start ssh
```
Record the IP address of each VM so we can SSH to it:
```
sudo ifconfig
# record eth0
linux-0 IP=192.168.0.16
linux-1 IP=192.168.0.17
linux-2 IP=192.168.0.18
```
In new PowerShell windows, let's SSH to our VMs:
```
ssh linux-0@192.168.0.16
ssh linux-1@192.168.0.17
ssh linux-2@192.168.0.18
```
# Setup Docker
Every machine that needs to join our cluster must have docker running on it. </br>
Rancher will use docker to run its agent, as well as to bootstrap the cluster.</br>
Install docker on each VM:
```
curl -sSL https://get.docker.com/ | sh
sudo usermod -aG docker $(whoami)
sudo service docker start
```
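To confirm docker is up on each VM before continuing (note the group change only takes effect after logging out and back in):
```
docker version
```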
# Running Rancher in Docker
Rancher can be [deployed](https://rancher.com/docs/rancher/v2.5/en/quick-start-guide/deployment/) almost anywhere. </br>
We can run it in Kubernetes on-prem or in the cloud. </br>
Because we want Rancher to manage Kubernetes clusters, we don't want it running in the clusters it is managing. </br>
I would like to keep my Rancher server outside of, and separate from, my Kubernetes clusters.</br>
So let's set up a single server with [docker](https://rancher.com/docs/rancher/v2.5/en/quick-start-guide/deployment/quickstart-manual-setup/):
## Persist data
We will want to persist Rancher's data across reboots. </br>
Rancher stores its data under `/var/lib/rancher` </br>
In this repo, let's create a space to persist data:
```
cd kubernetes/rancher
mkdir volume
```
## Run Rancher
```
docker run -d --name rancher-server -v ${PWD}/volume:/var/lib/rancher --restart=unless-stopped -p 80:80 -p 443:443 --privileged rancher/rancher
```
## Unlock Rancher
Once it's up and running, we can extract the Rancher initial bootstrap password from the logs:
```
docker logs rancher-server > rancher.log
```
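The password sits on a single log line, so we can pull it out with `grep`; the `Bootstrap Password:` label assumes a recent (v2.6+) rancher/rancher image:
```
grep "Bootstrap Password:" rancher.log
```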
## Get Rancher IP
It's important that our servers can reach the Rancher server. </br>
As all the VMs and my machine are on the same network, we can use my machine's IP as the server IP so the VMs can reach it. </br>
Let's grab the IP:
```
ipconfig
```
We can now access Rancher on [localhost](https://localhost)
## Deploy Sample Workloads
To deploy some sample basic workloads, let's get the `kubeconfig` for our cluster </br>
Set kubeconfig:
```
$ENV:KUBECONFIG="<path-to-kubeconfig>"
```
Deploy 2 pods, and a service:
```
kubectl create ns marcel
kubectl -n marcel apply -f .\kubernetes\configmaps\configmap.yaml
kubectl -n marcel apply -f .\kubernetes\secrets\secret.yaml
kubectl -n marcel apply -f .\kubernetes\deployments\deployment.yaml
kubectl -n marcel apply -f .\kubernetes\services\service.yaml
```
One caveat: because we are not running with a cloud provider, Kubernetes cannot provision our service of `type=LoadBalancer`. </br>
For that, we would need something like `metallb`. </br>
However, we can `port-forward`:
```
kubectl -n marcel get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
example-service LoadBalancer 10.43.235.240 <pending> 80:31310/TCP 13s
kubectl -n marcel port-forward svc/example-service 81:80
```
We can access our example-app on port 81.
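A quick check from another terminal, while the port-forward is running:
```
curl -i http://localhost:81
```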

python/introduction/part-5.database.redis/README.md

@@ -0,0 +1,499 @@
# Introduction to Python: Storing data in Redis Database
So far, we've learnt Python fundamentals and worked with data, files, HTTP and, more importantly, basic data structures like `csv` and `json`.
Be sure to checkout: </br>
[Part 1: Intro to Python](../README.md) </br>
[Part 2: Files](../part-2.files/README.md) </br>
[Part 3: JSON](../part-3.json/README.md) </br>
## Start up a Redis Cluster
Follow my Redis clustering Tutorial </br>
<a href="https://youtube.com/playlist?list=PLHq1uqvAteVtlgFkmOlIqWro3XP26y_oW" title="Redis"><img src="https://i.ytimg.com/vi/L3zp347cWNw/hqdefault.jpg" width="30%" alt="Redis Guide" /></a>
Code is over [here](../../../storage/redis/clustering/readme.md)
## Python Dev Environment
The same as Part 1+2+3+4, we start with a [dockerfile](./dockerfile) where we declare our version of `python`.
```
FROM python:3.9.6-alpine3.13 as dev
WORKDIR /work
```
Let's build and start our container:
```
cd python\introduction\part-5.database.redis
docker build --target dev . -t python
docker run -it -v ${PWD}:/work -p 5000:5000 --net redis python sh
/work # python --version
Python 3.9.6
```
## Our application
We're going to use what we've learnt in parts 1, 2 & 3 to create
a customer app that handles customer data. </br>
Firstly we have to import our dependencies:
```
import os.path
import csv
import json
from flask import Flask
from flask import request
```
Then we have a class to define what a customer looks like:
```
class Customer:
def __init__(self, c="",f="",l=""):
self.customerID = c
self.firstName = f
self.lastName = l
def fullName(self):
return self.firstName + " " + self.lastName
```
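As a quick sanity check of the class, with made-up values:
```
c = Customer("a", "James", "Baker")
print(c.fullName()) # prints: James Baker
```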
And we also set a global variable for the location of our customers `json` file:
```
dataPath = "./customers.json"
```
Then we need a function which returns our customers:
```
def getCustomers():
if os.path.isfile(dataPath):
with open(dataPath, newline='') as customerFile:
data = customerFile.read()
customers = json.loads(data)
return customers
else:
return {}
```
Here is a function to return a specific customer:
```
def getCustomer(customerID):
customers = getCustomers()
if customerID in customers:
return customers[customerID]
else:
return {}
```
And finally a function for updating our customers:
```
def updateCustomers(customers):
with open(dataPath, 'w', newline='') as customerFile:
customerJSON = json.dumps(customers)
customerFile.write(customerJSON)
```
In the previous episode, we created a `json` file to hold all our customers. </br>
We learnt how to read and write to a file and temporarily use the file for our storage. </br>
Let's create a file called `customers.json`:
```
{
"a": {
"customerID": "a",
"firstName": "James",
"lastName": "Baker"
},
"b": {
"customerID": "b",
"firstName": "Jonathan",
"lastName": "D"
},
"c": {
"customerID": "c",
"firstName": "Aleem",
"lastName": "Janmohamed"
},
"d": {
"customerID": "d",
"firstName": "Ivo",
"lastName": "Galic"
},
"e": {
"customerID": "e",
"firstName": "Joel",
"lastName": "Griffiths"
},
"f": {
"customerID": "f",
"firstName": "Michael",
"lastName": "Spinks"
},
"g": {
"customerID": "g",
"firstName": "Victor",
"lastName": "Savkov"
}
}
```
Now that we have our customer data and functions to read and update it, let's define our `Flask` application:
```
app = Flask(__name__)
```
We create our route to get all customers:
```
@app.route("/", methods=['GET'])
def get_customers():
customers = getCustomers()
return json.dumps(customers)
```
A route to get one customer by ID:
```
@app.route("/get/<string:customerID>", methods=['GET'])
def get_customer(customerID):
customer = getCustomer(customerID)
if customer == {}:
return {}, 404
else:
return customer
```
And finally a route to update or add customers called `/set` :
```
@app.route("/set", methods=['POST'])
def add_customer():
jsonData = request.json
if "customerID" not in jsonData:
return "customerID required", 400
if "firstName" not in jsonData:
return "firstName required", 400
if "lastName" not in jsonData:
return "lastName required", 400
customers = getCustomers()
customers[jsonData["customerID"]] = Customer( jsonData["customerID"], jsonData["firstName"], jsonData["lastName"]).__dict__
updateCustomers(customers)
return "success", 200
```
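Once the app is running, we can exercise these routes with `curl`; customer `h` below is a made-up example:
```
curl -X POST http://localhost:5000/set -H "Content-Type: application/json" -d '{"customerID":"h","firstName":"Jane","lastName":"Doe"}'
curl http://localhost:5000/get/h
```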
Before we can be done, we need to declare the `Flask` dependency we covered in our Python HTTP video. </br>
Let's create a `requirements.txt` file:
```
Flask == 2.0.2
```
We can install our dependencies using:
```
pip install -r requirements.txt
```
This gives us a web application that handles customer data, using a file as its storage. </br>
To test it, we can start up Flask:
```
export FLASK_APP=src/app
flask run -h 0.0.0.0 -p 5000
```
Now we can confirm it's working by accessing our application in the browser on `http://localhost:5000`
## Redis
To connect to Redis, we'll use a popular library called `redis-py` which we can grab from [here](https://github.com/redis/redis-py) </br>
The pip install is over [here](https://pypi.org/project/redis/3.5.3/) </br>
Let's add that to our `requirements.txt` dependency file:
```
redis == 3.5.3
```
We can proceed to install it using `pip install`
```
pip install -r requirements.txt
```
Now, to connect to Redis in a highly available manner, we need to take a look at the
`Sentinel support` section of the guide. </br>
Let's test the library. The beauty of Python is that it's a scripting language, so we don't have to compile and keep restarting our application; we can test each line of code interactively. </br>
```
python
from redis.sentinel import Sentinel
sentinel = Sentinel([('sentinel-0', 5000),('sentinel-1', 5000),('sentinel-2', 5000)], socket_timeout=0.1)
sentinel.discover_master('mymaster')
sentinel.discover_slaves('mymaster')
master = sentinel.master_for('mymaster',password = "a-very-complex-password-here", socket_timeout=0.1)
slave = sentinel.slave_for('mymaster',password = "a-very-complex-password-here", socket_timeout=0.1)
master.set('foo', 'bar')
slave.get('foo')
```
We can demonstrate reading and writing a key value pair. </br>
We can also demonstrate failure: when we stop the current master, we'll get a connection error. It's important to implement retry logic. </br>
If we wait a moment and execute commands again, we will see that it starts to work.
```
# stop current master
docker rm -f redis-0
master.set('foo', 'bar2')
redis.exceptions.ConnectionError: Connection closed by server.
# retry moments later...
master.set('foo', 'bar2')
slave.get('foo')
sentinel.discover_master('mymaster')
sentinel.discover_slaves('mymaster')
```
We can find the current master by running `docker inspect` to see which container owns that IP address.
Start up `redis-0` again, to simulate a recovery from failure.
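Something like this maps the IP reported by `discover_master` back to a container; the container names `redis-0/1/2` and the `redis` network are assumed from the clustering tutorial:
```
docker inspect -f '{{ .NetworkSettings.Networks.redis.IPAddress }}' redis-1
docker inspect -f '{{ .NetworkSettings.Networks.redis.IPAddress }}' redis-2
```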
## Connecting our App to Redis
To connect to redis, we'll want to read the connection info from environment variables. Let's set some global variables.
```
import os
redis_sentinels = os.environ.get('REDIS_SENTINELS')
redis_master_name = os.environ.get('REDIS_MASTER_NAME')
redis_password = os.environ.get('REDIS_PASSWORD')
```
We will need to restart our container so we can inject these environment variables. Let's go ahead and do that:
```
docker run -it -p 5000:5000 `
--net redis `
-v ${PWD}:/work `
-e REDIS_SENTINELS="sentinel-0:5000,sentinel-1:5000,sentinel-2:5000" `
-e REDIS_MASTER_NAME="mymaster" `
-e REDIS_PASSWORD="a-very-complex-password-here" `
python sh
# re-install our dependencies
pip install -r requirements.txt
```
Now we can set up a client:
```
from redis.sentinel import Sentinel
sentinels = []
for s in redis_sentinels.split(","):
sentinels.append((s.split(":")[0], s.split(":")[1]))
redis_sentinel = Sentinel(sentinels, socket_timeout=5)
redis_master = redis_sentinel.master_for(redis_master_name,password = redis_password, socket_timeout=5)
```
## Retry logic
We noticed that when the master fails, the sentinels choose and assign a new master. We can see this by simply retrying our redis command. </br>
When talking to redis, we need some retry capability to be able to recover from this scenario. </br>
Let's build a retry function at the top of our application that runs a redis command:
```
import time
import redis  # needed for redis.exceptions below

def redis_command(command, *args):
    max_retries = 3
    count = 0
    backoffSeconds = 5
    while True:
        try:
            return command(*args)
        except (redis.exceptions.ConnectionError, redis.exceptions.TimeoutError):
            count += 1
            if count > max_retries:
                raise
            print('Retrying in {} seconds'.format(backoffSeconds))
            time.sleep(backoffSeconds)
```
We can test out our `redis_command` by calling it and printing the result
to the screen:
```
print(redis_command(redis_master.set, 'foo', 'bar'))
print(redis_command(redis_master.get, 'foo'))
```
We can simulate failure again, by finding and stopping the current master.
Once we're done with our tests, we can `exec` into the current master and run `FLUSHALL` to remove our test records from redis.
## Saving our data to Redis
Now let's change our customer functions to point to Redis instead of the file. </br>
Starting with `getCustomer`, to retrieve a single customer:
```
def getCustomer(customerID):
customer = redis_command(redis_master.get, customerID)
if customer == None:
return {}
else:
c = str(customer, 'utf-8')
return json.loads(c)
```
Now we can use that to return all our customers by updating the `getCustomers` function:
```
def getCustomers():
customers = {}
customerIDs = redis_command(redis_master.scan_iter, "*")
for customerID in customerIDs:
customer = getCustomer(customerID)
customers[customer["customerID"]] = customer
return customers
```
Let's improve our functions by adding a new function to update a single customer:
```
def updateCustomer(customer):
redis_command(redis_master.set, customer.customerID, json.dumps(customer.__dict__))
```
And finally we can use that function to update all customers by tweaking our `updateCustomers` function:
```
def updateCustomers(customers):
    # customers is a dict of customerID -> customer dict,
    # so look each entry up and rebuild a Customer before saving
    for customerID in customers:
        c = customers[customerID]
        updateCustomer(Customer(c["customerID"], c["firstName"], c["lastName"]))
```
Now that our simple functions are done, let's hook them up to our endpoints:
```
# firstly delete these test lines
print(redis_command(redis_master.set, 'foo', 'bar'))
print(redis_command(redis_master.get, 'foo'))
```
Our simple Get all
```
@app.route("/", methods=['GET'])
def get_customers():
customers = getCustomers()
return json.dumps(customers)
```
Our Get by ID
```
@app.route("/get/<string:customerID>", methods=['GET'])
def get_customer(customerID):
customer = getCustomer(customerID)
if customer == {}:
return {}, 404
else:
return customer
```
And our update endpoint to update a customer
```
@app.route("/set", methods=['POST'])
def add_customer():
jsonData = request.json
if "customerID" not in jsonData:
return "customerID required", 400
if "firstName" not in jsonData:
return "firstName required", 400
if "lastName" not in jsonData:
return "lastName required", 400
customer = Customer( jsonData["customerID"], jsonData["firstName"], jsonData["lastName"])
updateCustomer(customer)
return "success", 200
```
## Docker
Let's build our container image and run our production container. </br>
Our final `dockerfile`:
```
FROM python:3.9.6-alpine3.13 as dev
WORKDIR /work
FROM dev as runtime
WORKDIR /app
COPY ./requirements.txt /app/
RUN pip install -r /app/requirements.txt
COPY ./src/app.py /app/app.py
ENV FLASK_APP=app.py
CMD flask run -h 0.0.0.0 -p 5000
```
Build our container.
```
cd python\introduction\part-5.database.redis
docker build . -t customer-app
```
Now we can run our production container:
```
docker run -it -p 5000:5000 `
--net redis `
-e REDIS_SENTINELS="sentinel-0:5000,sentinel-1:5000,sentinel-2:5000" `
-e REDIS_MASTER_NAME="mymaster" `
-e REDIS_PASSWORD="a-very-complex-password-here" `
customer-app
```
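And a final smoke test to confirm the containerized app serves our customer data from Redis:
```
curl http://localhost:5000/
```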

python/introduction/part-5.database.redis/customers.json

@@ -0,0 +1,37 @@
{
"a": {
"customerID": "a",
"firstName": "James",
"lastName": "Baker"
},
"b": {
"customerID": "b",
"firstName": "Jonathan",
"lastName": "D"
},
"c": {
"customerID": "c",
"firstName": "Aleem",
"lastName": "Janmohamed"
},
"d": {
"customerID": "d",
"firstName": "Ivo",
"lastName": "Galic"
},
"e": {
"customerID": "e",
"firstName": "Joel",
"lastName": "Griffiths"
},
"f": {
"customerID": "f",
"firstName": "Michael",
"lastName": "Spinks"
},
"g": {
"customerID": "g",
"firstName": "Victor",
"lastName": "Savkov"
}
}

python/introduction/part-5.database.redis/dockerfile

@@ -0,0 +1,14 @@
FROM python:3.9.6-alpine3.13 as dev
WORKDIR /work
FROM dev as runtime
WORKDIR /app
COPY ./requirements.txt /app/
RUN pip install -r /app/requirements.txt
COPY ./src/app.py /app/app.py
ENV FLASK_APP=app.py
CMD flask run -h 0.0.0.0 -p 5000

python/introduction/part-5.database.redis/requirements.txt

@@ -0,0 +1,2 @@
Flask == 2.0.2
redis == 3.5.3

python/introduction/part-5.database.redis/src/app.py

@@ -0,0 +1,101 @@
import os.path
import csv
import json
import time
import os
import redis  # needed for redis.exceptions in redis_command
from flask import Flask
from flask import request
from redis.sentinel import Sentinel
dataPath = "./customers.json"
redis_sentinels = os.environ.get('REDIS_SENTINELS')
redis_master_name = os.environ.get('REDIS_MASTER_NAME')
redis_password = os.environ.get('REDIS_PASSWORD')
def redis_command(command, *args):
max_retries = 3
count = 0
backoffSeconds = 5
while True:
try:
return command(*args)
except (redis.exceptions.ConnectionError, redis.exceptions.TimeoutError):
count += 1
if count > max_retries:
raise
print('Retrying in {} seconds'.format(backoffSeconds))
time.sleep(backoffSeconds)
class Customer:
def __init__(self, c="",f="",l=""):
self.customerID = c
self.firstName = f
self.lastName = l
def fullName(self):
return self.firstName + " " + self.lastName
def getCustomers():
customers = {}
customerIDs = redis_command(redis_master.scan_iter, "*")
for customerID in customerIDs:
customer = getCustomer(customerID)
customers[customer["customerID"]] = customer
return customers
def getCustomer(customerID):
customer = redis_command(redis_master.get, customerID)
if customer == None:
return {}
else:
c = str(customer, 'utf-8')
return json.loads(c)
def updateCustomer(customer):
    redis_command(redis_master.set, customer.customerID, json.dumps(customer.__dict__))

def updateCustomers(customers):
    # customers is a dict of customerID -> customer dict,
    # so look each entry up and rebuild a Customer before saving
    for customerID in customers:
        c = customers[customerID]
        updateCustomer(Customer(c["customerID"], c["firstName"], c["lastName"]))
app = Flask(__name__)
sentinels = []
for s in redis_sentinels.split(","):
sentinels.append((s.split(":")[0], s.split(":")[1]))
redis_sentinel = Sentinel(sentinels, socket_timeout=5)
redis_master = redis_sentinel.master_for(redis_master_name,password = redis_password, socket_timeout=5)
@app.route("/", methods=['GET'])
def get_customers():
customers = getCustomers()
return json.dumps(customers)
@app.route("/get/<string:customerID>", methods=['GET'])
def get_customer(customerID):
customer = getCustomer(customerID)
if customer == {}:
return {}, 404
else:
return customer
@app.route("/set", methods=['POST'])
def add_customer():
jsonData = request.json
if "customerID" not in jsonData:
return "customerID required", 400
if "firstName" not in jsonData:
return "firstName required", 400
if "lastName" not in jsonData:
return "lastName required", 400
customer = Customer( jsonData["customerID"], jsonData["firstName"], jsonData["lastName"])
updateCustomer(customer)
return "success", 200