Mirror of https://github.com/marcel-dempers/docker-development-youtube-series.git (synced 2025-06-06 17:01:30 +00:00). Commit 816953d517 ("updates"), parent 0ba3a32a24.
We can then run it to explore the contents:

```
docker run --rm --name kafka -it aimvector/kafka:2.7.0 bash

ls -l /kafka/
ls -l /kafka/bin/
cat /kafka/config/server.properties
```

We can use the `docker cp` command to copy the files out of our container:

```
docker cp kafka:/kafka/config/server.properties ./server.properties
docker cp kafka:/kafka/config/zookeeper.properties ./zookeeper.properties
```

Note: We'll need the Kafka configuration to tune our server, and Kafka also requires
at least one Zookeeper instance in order to function. To achieve high availability, we'll run
multiple Kafka as well as multiple Zookeeper instances in the future.

Let's build a Zookeeper image. The Apache folks have made it easy to start a Zookeeper instance the same way as the Kafka instance, by simply running the `start-zookeeper.sh` script.

```
cd ./zookeeper
docker build . -t aimvector/zookeeper:2.7.0
cd ..
```

Let's create a kafka network and run 1 Zookeeper instance:

```
docker network create kafka

docker run -d `
--rm `
--name zookeeper-1 `
--net kafka `
-v ${PWD}/config/zookeeper-1/zookeeper.properties:/kafka/config/zookeeper.properties `
aimvector/zookeeper:2.7.0

docker logs zookeeper-1
```
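
To sanity-check the instance, we can use the `zookeeper-shell.sh` script that ships in the Kafka distribution's `bin` folder (container name and port as configured above):

```
docker exec -it zookeeper-1 \
  /kafka/bin/zookeeper-shell.sh localhost:2181 ls /
```

A healthy instance should list the root znodes, e.g. `[zookeeper]`.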

# Kafka - 1

```
docker run -d `
--rm `
--name kafka-1 `
--net kafka `
-v ${PWD}/config/kafka-1/server.properties:/kafka/config/server.properties `
aimvector/kafka:2.7.0

docker logs kafka-1
```

# Kafka - 2
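
The remaining brokers follow the kafka-1 pattern; a sketch, assuming each broker gets its own name and its own mounted config (kafka-3 is identical with the `3` suffix):

```
docker run -d `
--rm `
--name kafka-2 `
--net kafka `
-v ${PWD}/config/kafka-2/server.properties:/kafka/config/server.properties `
aimvector/kafka:2.7.0

docker logs kafka-2
```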

To create a topic, Kafka and Zookeeper have scripts with the installer that allow us to do so.

Access the container:
```
docker exec -it zookeeper-1 bash
```
Create the Topic:
```
/kafka/bin/kafka-topics.sh \
--create \
--zookeeper zookeeper-1:2181 \
--replication-factor 1 \
--partitions 3 \
--topic Orders
```
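
We asked for 3 partitions above. Kafka's default partitioner places a keyed message by hashing the key (murmur2) modulo the partition count; messages produced without a key are spread across partitions instead. A rough local illustration of the mod-N idea, using `cksum` as a stand-in hash rather than Kafka's real murmur2:

```shell
# Illustration only: map some keys onto 3 partitions with a stand-in hash.
# Kafka itself uses murmur2(keyBytes) % numPartitions.
partitions=3
for key in order-1001 order-1002 order-1003 order-1004; do
  hash=$(printf '%s' "$key" | cksum | cut -d' ' -f1)
  echo "$key -> partition $((hash % partitions))"
done
```

The same key always hashes to the same partition, which is what gives Kafka its per-key ordering guarantee.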

Describe our Topic:
```
/kafka/bin/kafka-topics.sh \
--describe \
--topic Orders \
--zookeeper zookeeper-1:2181
```

# Simple Producer & Consumer

The Kafka installation also ships with scripts that allow us to produce
and consume messages on our Kafka network. <br/>

We can run a consumer that will receive messages on the Orders topic:

```
docker exec -it zookeeper-1 bash

/kafka/bin/kafka-console-consumer.sh \
--bootstrap-server kafka-1:9092,kafka-2:9092,kafka-3:9092 \
--topic Orders --from-beginning
```

With a consumer in place, we can start producing messages:

```
docker exec -it zookeeper-1 bash

echo "New Order: 1" | \
/kafka/bin/kafka-console-producer.sh \
--broker-list kafka-1:9092,kafka-2:9092,kafka-3:9092 \
--topic Orders > /dev/null
```

Once we have a message in Kafka, we can explore where it got stored and in which partition:

```
docker exec -it kafka-1 bash

apt install -y tree
tree /tmp/kafka-logs/

ls -lh /tmp/kafka-logs/Orders-*

/tmp/kafka-logs/Orders-0:
```
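
To peek inside a segment file, the distribution also ships a dump tool runnable via `kafka-run-class.sh` (run inside the kafka-1 container; the segment filename below is the conventional first-segment name and may differ on your machine):

```
/kafka/bin/kafka-run-class.sh kafka.tools.DumpLogSegments \
  --files /tmp/kafka-logs/Orders-0/00000000000000000000.log \
  --print-data-log
```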

Running the `kafka-consumer` application image against the Orders topic:

```
docker run -it `
-e KAFKA_TOPIC="Orders" `
kafka-consumer
```

# High Availability + Replication

Next up, we'll take a look at achieving high availability using replication techniques
and taking advantage of Kafka's distributed architecture.

Each broker needs its own copy of `server.properties`. The file mounted into kafka-1 above (presumably `config/kafka-1/server.properties`) sets a unique broker id:

```
############################# Server Basics #############################

# The id of the broker. This must be set to a unique integer for each broker.
broker.id=1

############################# Socket Server Settings #############################
```

and, further down in the same file, points at our zookeeper-1 instance:

```
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=zookeeper-1:2181

# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=18000
```

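Rather than hand-editing three copies, the per-broker files can be derived from the template we copied out with `docker cp` earlier; a minimal sketch (assumes the `config/kafka-N` layout above; a stand-in template is only created if `./server.properties` is missing):

```shell
# Create a stand-in template only if the real copied file is absent.
[ -f server.properties ] || \
  printf 'broker.id=0\nzookeeper.connect=zookeeper:2181\n' > server.properties

# Derive config/kafka-N/server.properties with a unique broker.id,
# all pointing at the zookeeper-1 instance.
for i in 1 2 3; do
  mkdir -p "config/kafka-$i"
  sed -e "s/^broker\.id=.*/broker.id=$i/" \
      -e "s/^zookeeper\.connect=.*/zookeeper.connect=zookeeper-1:2181/" \
      server.properties > "config/kafka-$i/server.properties"
done
```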
The second broker's file (presumably `config/kafka-2/server.properties`) differs only in its id:

```
# The id of the broker. This must be set to a unique integer for each broker.
broker.id=2

zookeeper.connect=zookeeper-1:2181
```

And the third (presumably `config/kafka-3/server.properties`):

```
# The id of the broker. This must be set to a unique integer for each broker.
broker.id=3

zookeeper.connect=zookeeper-1:2181
```

Finally, the image's `Dockerfile` installs curl, then downloads and unpacks the Kafka 2.7.0 distribution (which also ships the Zookeeper scripts):

```
FROM openjdk:11.0.10-jre-buster

RUN apt-get update && \
    apt-get install -y curl

ENV KAFKA_VERSION 2.7.0
ENV SCALA_VERSION 2.13

RUN mkdir /tmp/kafka && \
    curl "https://archive.apache.org/dist/kafka/${KAFKA_VERSION}/kafka_${SCALA_VERSION}-${KAFKA_VERSION}.tgz" \
    -o /tmp/kafka/kafka.tgz && \
    mkdir /kafka && cd /kafka && \
    tar -xvzf /tmp/kafka/kafka.tgz --strip 1
```