mirror of https://github.com/marcel-dempers/docker-development-youtube-series.git
synced 2025-06-02 16:53:58 +00:00

producer+consumer and walkthrough

This commit is contained in:
parent 7ec37e4826
commit 8a7a1e3699

apache/kafka/README.md (new file, 4 lines)
@@ -0,0 +1,4 @@
# Introduction to Kafka

This guide is under the messaging section alongside other message brokers like `RabbitMQ` etc. </br>
Check out the guide under the [messaging/kafka](../../messaging/kafka/README.md) folder
messaging/kafka/README.md
@@ -1,59 +1,207 @@
# Notes

https://hub.docker.com/_/openjdk?tab=description&page=1&ordering=last_updated&name=alpine
https://www.digitalocean.com/community/tutorials/how-to-install-apache-kafka-on-debian-10

# Introduction to Kafka

Official [Docs](https://kafka.apache.org/)

## Building a Docker file

As always, we start with a `dockerfile`. </br>
We can build our `dockerfile`:
```
cd .\messaging\kafka\
docker build . -t aimvector/kafka:2.7.0
```
## Exploring the Kafka Install

We can then run it to explore the contents:

```
docker run --rm --name kafka -it aimvector/kafka:2.7.0 bash

ls -l /kafka/
cat /kafka/config/server.properties
ls -l /kafka/bin
```
We can use the `docker cp` command to copy these files out of our container:

```
docker cp kafka:/kafka/config/server.properties ./server.properties
docker cp kafka:/kafka/config/zookeeper.properties ./zookeeper/zookeeper.properties
```
# Kafka

We'll need the Kafka configuration to tune our server, and Kafka also requires
at least one Zookeeper instance in order to function. To achieve high availability, we'll run
multiple Kafka as well as multiple Zookeeper instances in the future.
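The parts of `server.properties` we will touch most in this walkthrough are the broker id, the data directory and the Zookeeper connection string; a minimal sketch (values as they appear in the config files later in this commit, with `broker.id=0` inferred for kafka-1 since kafka-2 and kafka-3 below use 1 and 2):

```
# each broker needs a unique id in the cluster
broker.id=0
# where Kafka stores partition data on disk
log.dirs=/tmp/kafka-logs
# where to reach Zookeeper on our docker network
zookeeper.connect=zookeeper:2181
```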

# Zookeeper

Let's build a Zookeeper image. The Apache folks have made it easy to start a Zookeeper instance the same way as the Kafka instance: we simply run the `start-zookeeper.sh` script.

```
cd .\messaging\kafka\zookeeper
docker build . -t aimvector/zookeeper:2.7.0
```

Let's create a kafka network and run 1 Zookeeper instance:

```
docker network create kafka
docker run -d --rm --name zookeeper --net kafka aimvector/zookeeper:2.7.0
```
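We can check its logs to confirm it is up; ZooKeeper should report that it is serving on port 2181, roughly like this (illustrative):

```
docker logs zookeeper
# expect a line similar to: "binding to port 0.0.0.0/0.0.0.0:2181"
```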

# Kafka - 1

```
docker run -d `
--rm `
--name kafka-1 `
--net kafka `
-v ${PWD}/config/kafka-1/server.properties:/kafka/config/server.properties `
aimvector/kafka:2.7.0
```

# Kafka - 2

```
docker run -d `
--rm `
--name kafka-2 `
--net kafka `
-v ${PWD}/config/kafka-2/server.properties:/kafka/config/server.properties `
aimvector/kafka:2.7.0
```

# Kafka - 3

```
docker run -d `
--rm `
--name kafka-3 `
--net kafka `
-v ${PWD}/config/kafka-3/server.properties:/kafka/config/server.properties `
aimvector/kafka:2.7.0
```
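All three brokers should now be up on the `kafka` network; a quick check (illustrative output):

```
docker ps --format "{{.Names}}"
# kafka-3
# kafka-2
# kafka-1
# zookeeper
```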

# Build an Application: Producer

https://docs.confluent.io/clients-confluent-kafka-go/current/overview.html#go-installation

We can develop our producer in a `golang` container:

```
cd messaging/kafka/applications/producer

docker run -it --rm -v ${PWD}:/app -w /app golang:1.15-alpine

apk -U add ca-certificates && \
apk update && apk upgrade && apk add pkgconf git bash build-base && \
cd /tmp && \
git clone https://github.com/edenhill/librdkafka.git && \
cd librdkafka && \
git checkout v1.6.1 && \
./configure --prefix /usr && make && make install

#apk add --no-cache git make librdkafka-dev gcc musl-dev librdkafka

go mod init producer
go get gopkg.in/confluentinc/confluent-kafka-go.v1/kafka
```

Let's create a Topic that allows us to store `Order` information. </br>
To create a topic, Kafka and Zookeeper ship with scripts in the install that allow us to do so. </br>

Access the container:

```
docker exec -it zookeeper bash
```
Create the Topic:

```
/kafka/bin/kafka-topics.sh \
--create \
--zookeeper zookeeper:2181 \
--replication-factor 1 \
--partitions 3 \
--topic Orders
```
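We create the topic with 3 partitions so messages can be spread across our brokers, but keep `--replication-factor 1` for now. With three brokers running we could just as well ask for replicas; a sketch using the same script, with a hypothetical `OrdersReplicated` topic:

```
/kafka/bin/kafka-topics.sh \
--create \
--zookeeper zookeeper:2181 \
--replication-factor 3 \
--partitions 3 \
--topic OrdersReplicated
```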

Describe our Topic:

```
/kafka/bin/kafka-topics.sh \
--describe \
--topic Orders \
--zookeeper zookeeper:2181
```
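The describe output will look roughly like the following (illustrative; leader assignments differ per run):

```
Topic: Orders   PartitionCount: 3   ReplicationFactor: 1   Configs:
    Topic: Orders   Partition: 0   Leader: 2   Replicas: 2   Isr: 2
    Topic: Orders   Partition: 1   Leader: 0   Replicas: 0   Isr: 0
    Topic: Orders   Partition: 2   Leader: 1   Replicas: 1   Isr: 1
```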

We can take a look at how Kafka stores data:

```
apt install -y tree
tree /tmp/kafka-logs/
```

# Simple Producer & Consumer

The Kafka installation also ships with scripts that allow us to produce
and consume messages on our Kafka network:

```
echo "New Order: 1" | \
/kafka/bin/kafka-console-producer.sh \
--broker-list kafka-1:9092 \
--topic Orders > /dev/null
```

We can then run the consumer that will receive that message on the Orders topic:

```
/kafka/bin/kafka-console-consumer.sh \
--bootstrap-server kafka-1:9092 \
--topic Orders --from-beginning
```

Once we have a message in Kafka, we can explore where it got stored and in which partition:

```
ls -lh /tmp/kafka-logs/Orders-*

/tmp/kafka-logs/Orders-0:
total 4.0K
-rw-r--r-- 1 root root 10M May 4 06:54 00000000000000000000.index
-rw-r--r-- 1 root root 0 May 4 06:54 00000000000000000000.log
-rw-r--r-- 1 root root 10M May 4 06:54 00000000000000000000.timeindex
-rw-r--r-- 1 root root 8 May 4 06:54 leader-epoch-checkpoint

/tmp/kafka-logs/Orders-1:
total 4.0K
-rw-r--r-- 1 root root 10M May 4 06:54 00000000000000000000.index
-rw-r--r-- 1 root root 0 May 4 06:54 00000000000000000000.log
-rw-r--r-- 1 root root 10M May 4 06:54 00000000000000000000.timeindex
-rw-r--r-- 1 root root 8 May 4 06:54 leader-epoch-checkpoint

/tmp/kafka-logs/Orders-2:
total 8.0K
-rw-r--r-- 1 root root 10M May 4 06:54 00000000000000000000.index
-rw-r--r-- 1 root root 80 May 4 06:57 00000000000000000000.log
-rw-r--r-- 1 root root 10M May 4 06:54 00000000000000000000.timeindex
-rw-r--r-- 1 root root 8 May 4 06:54 leader-epoch-checkpoint
```

Since partitions 0 and 1 hold 0 bytes, we know the message is sitting in partition 2, whose log holds 80 bytes. </br>
We can check the message with:

```
cat /tmp/kafka-logs/Orders-2/*.log
```
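The `.log` file is a binary segment file, so `cat` shows the payload wrapped in record-batch framing. Kafka also ships a tool for decoding segments properly; a sketch, assuming the script is present in this install's `bin` folder:

```
/kafka/bin/kafka-dump-log.sh \
--files /tmp/kafka-logs/Orders-2/00000000000000000000.log \
--print-data-log
```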

## Building a Producer: Go

```
cd messaging\kafka\applications\producer
docker build . -t kafka-producer

docker run -it `
--net kafka `
-e KAFKA_PEERS="kafka-1:9092,kafka-2:9092,kafka-3:9092" `
-e KAFKA_TOPIC="Orders" `
-e KAFKA_PARTITION=1 `
-p 80:80 `
kafka-producer
```
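Once the producer is running, we can publish a test message over HTTP (it exposes `POST /publish/:message`, see `producer.go` later in this commit; port 80 is published as above):

```
curl -X POST http://localhost/publish/hello-kafka
```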

## Building a Consumer: Go

```
cd messaging\kafka\applications\consumer
docker build . -t kafka-consumer

docker run -it `
--net kafka `
-e KAFKA_PEERS="kafka-1:9092,kafka-2:9092,kafka-3:9092" `
-e KAFKA_TOPIC="Orders" `
kafka-consumer
```
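The consumer prints the partition, offset, key and value of every message it receives, so after publishing through the producer we should see output along these lines (illustrative):

```
Partition:      2
Offset: 0
Key:
Value:  hello-kafka
```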

# High Availability + Replication

Next up, we'll take a look at achieving high availability by using replication techniques and taking advantage of Kafka's distributed architecture.
messaging/kafka/applications/consumer/consumer.go
@@ -1,34 +1,92 @@
package main

import (
	"fmt"
	"os"
	"os/signal"
	"strings"
	"sync"
	"syscall"

	"gopkg.in/Shopify/sarama.v1"
)

var kafkaBrokers = os.Getenv("KAFKA_PEERS")
var kafkaTopic = os.Getenv("KAFKA_TOPIC")

func main() {
	consume()
}

func consume() {
	config := sarama.NewConfig()

	consumer, err := sarama.NewConsumer(strings.Split(kafkaBrokers, ","), config)
	if err != nil {
		fmt.Printf("Failed to open Kafka consumer: %s", err)
		panic(err)
	}

	// ask the cluster which partitions the topic has so we can consume them all
	partitionList, err := consumer.Partitions(kafkaTopic)
	if err != nil {
		fmt.Printf("Failed to get the list of partitions: %s", err)
		panic(err)
	}

	var bufferSize = 256
	var (
		messages = make(chan *sarama.ConsumerMessage, bufferSize)
		closing  = make(chan struct{})
		wg       sync.WaitGroup
	)

	// close the `closing` channel on SIGTERM/interrupt to shut the partition consumers down
	go func() {
		signals := make(chan os.Signal, 1)
		signal.Notify(signals, syscall.SIGTERM, os.Interrupt)
		<-signals
		fmt.Println("Initiating shutdown of consumer...")
		close(closing)
	}()

	// start one partition consumer per partition, all feeding the shared messages channel
	for _, partition := range partitionList {
		pc, err := consumer.ConsumePartition(kafkaTopic, partition, sarama.OffsetOldest)
		if err != nil {
			fmt.Printf("Failed to start consumer for partition %d: %s\n", partition, err)
			panic(err)
		}

		go func(pc sarama.PartitionConsumer) {
			<-closing
			pc.AsyncClose()
		}(pc)

		wg.Add(1)
		go func(pc sarama.PartitionConsumer) {
			defer wg.Done()
			for message := range pc.Messages() {
				messages <- message
			}
		}(pc)
	}

	// print every message we receive
	go func() {
		for msg := range messages {
			fmt.Printf("Partition:\t%d\n", msg.Partition)
			fmt.Printf("Offset:\t%d\n", msg.Offset)
			fmt.Printf("Key:\t%s\n", string(msg.Key))
			fmt.Printf("Value:\t%s\n", string(msg.Value))
			fmt.Println()
		}
	}()

	wg.Wait()
	fmt.Println("Done consuming topic", kafkaTopic)
	close(messages)

	if err := consumer.Close(); err != nil {
		fmt.Printf("Failed to close consumer: %s", err)
		panic(err)
	}
}
messaging/kafka/applications/consumer/dockerfile
@@ -1,15 +1,20 @@
FROM golang:1.16-alpine as dev-env

RUN apk add --no-cache git gcc musl-dev

WORKDIR /app

FROM dev-env as build-env
# copy go.mod and go.sum first so the dependency download is cached as its own layer
COPY go.mod go.sum /app/
RUN go mod download

COPY . /app/

RUN CGO_ENABLED=0 go build -o /consumer

FROM alpine:3.10 as runtime

COPY --from=build-env /consumer /usr/local/bin/consumer
RUN chmod +x /usr/local/bin/consumer

ENTRYPOINT ["consumer"]
messaging/kafka/applications/consumer/go.mod (new file, 17 lines)
@@ -0,0 +1,17 @@
module consumer

go 1.16

require (
	github.com/DataDog/zstd v1.4.8 // indirect
	github.com/davecgh/go-spew v1.1.1 // indirect
	github.com/eapache/go-resiliency v1.2.0 // indirect
	github.com/eapache/go-xerial-snappy v0.0.0-20180814174437-776d5712da21 // indirect
	github.com/eapache/queue v1.1.0 // indirect
	github.com/golang/snappy v0.0.3 // indirect
	github.com/julienschmidt/httprouter v1.3.0 // indirect
	github.com/pierrec/lz4 v2.6.0+incompatible // indirect
	github.com/rcrowley/go-metrics v0.0.0-20201227073835-cf1acfcdf475 // indirect
	github.com/sirupsen/logrus v1.8.1 // indirect
	gopkg.in/Shopify/sarama.v1 v1.20.1 // indirect
)
messaging/kafka/applications/consumer/go.sum (new file, 26 lines)
@@ -0,0 +1,26 @@
github.com/DataDog/zstd v1.4.8 h1:Rpmta4xZ/MgZnriKNd24iZMhGpP5dvUcs/uqfBapKZY=
github.com/DataDog/zstd v1.4.8/go.mod h1:g4AWEaM3yOg3HYfnJ3YIawPnVdXJh9QME85blwSAmyw=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/eapache/go-resiliency v1.2.0 h1:v7g92e/KSN71Rq7vSThKaWIq68fL4YHvWyiUKorFR1Q=
github.com/eapache/go-resiliency v1.2.0/go.mod h1:kFI+JgMyC7bLPUVY133qvEBtVayf5mFgVsvEsIPBvNs=
github.com/eapache/go-xerial-snappy v0.0.0-20180814174437-776d5712da21 h1:YEetp8/yCZMuEPMUDHG0CW/brkkEp8mzqk2+ODEitlw=
github.com/eapache/go-xerial-snappy v0.0.0-20180814174437-776d5712da21/go.mod h1:+020luEh2TKB4/GOp8oxxtq0Daoen/Cii55CzbTV6DU=
github.com/eapache/queue v1.1.0 h1:YOEu7KNc61ntiQlcEeUIoDTJ2o8mQznoNvUhiigpIqc=
github.com/eapache/queue v1.1.0/go.mod h1:6eCeP0CKFpHLu8blIFXhExK/dRa7WDZfr6jVFPTqq+I=
github.com/golang/snappy v0.0.3 h1:fHPg5GQYlCeLIPB9BZqMVR5nR9A+IM5zcgeTdjMYmLA=
github.com/golang/snappy v0.0.3/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
github.com/julienschmidt/httprouter v1.3.0 h1:U0609e9tgbseu3rBINet9P48AI/D3oJs4dN7jwJOQ1U=
github.com/julienschmidt/httprouter v1.3.0/go.mod h1:JR6WtHb+2LUe8TCKY3cZOxFyyO8IZAc4RVcycCCAKdM=
github.com/pierrec/lz4 v2.6.0+incompatible h1:Ix9yFKn1nSPBLFl/yZknTp8TU5G4Ps0JDmguYK6iH1A=
github.com/pierrec/lz4 v2.6.0+incompatible/go.mod h1:pdkljMzZIN41W+lC3N2tnIh5sFi+IEE17M5jbnwPHcY=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/rcrowley/go-metrics v0.0.0-20201227073835-cf1acfcdf475 h1:N/ElC8H3+5XpJzTSTfLsJV/mx9Q9g7kxmchpfZyxgzM=
github.com/rcrowley/go-metrics v0.0.0-20201227073835-cf1acfcdf475/go.mod h1:bCqnVzQkZxMG4s8nGwiZ5l3QUCyqpo9Y+/ZMZ9VjZe4=
github.com/sirupsen/logrus v1.8.1 h1:dJKuHgqk1NNQlqoA6BTlM1Wf9DOH3NBjQyu0h9+AZZE=
github.com/sirupsen/logrus v1.8.1/go.mod h1:yWOB1SBYBC5VeMP7gHvWumXLIWorT60ONWic61uBYv0=
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
golang.org/x/sys v0.0.0-20191026070338-33540a1f6037 h1:YyJpGZS1sBuBCzLAR1VEpK193GlqGZbnPFnPV/5Rsb4=
golang.org/x/sys v0.0.0-20191026070338-33540a1f6037/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
gopkg.in/Shopify/sarama.v1 v1.20.1 h1:Gi09A3fJXm0Jgt8kuKZ8YK+r60GfYn7MQuEmI3oq6hE=
gopkg.in/Shopify/sarama.v1 v1.20.1/go.mod h1:AxnvoaevB2nBjNK17cG61A3LleFcWFwVBHBt+cot4Oc=
messaging/kafka/applications/producer/dockerfile
@@ -1,15 +1,20 @@
FROM golang:1.16-alpine as dev-env

RUN apk add --no-cache git gcc musl-dev

WORKDIR /app

FROM dev-env as build-env
# copy go.mod and go.sum first so the dependency download is cached as its own layer
COPY go.mod go.sum /app/
RUN go mod download

COPY . /app/

RUN CGO_ENABLED=0 go build -o /producer

FROM alpine:3.10 as runtime

COPY --from=build-env /producer /usr/local/bin/producer
RUN chmod +x /usr/local/bin/producer

ENTRYPOINT ["producer"]
messaging/kafka/applications/producer/go.mod
@@ -1,8 +1,17 @@
module producer

go 1.16

require (
	github.com/DataDog/zstd v1.4.8 // indirect
	github.com/davecgh/go-spew v1.1.1 // indirect
	github.com/eapache/go-resiliency v1.2.0 // indirect
	github.com/eapache/go-xerial-snappy v0.0.0-20180814174437-776d5712da21 // indirect
	github.com/eapache/queue v1.1.0 // indirect
	github.com/golang/snappy v0.0.3 // indirect
	github.com/julienschmidt/httprouter v1.3.0 // indirect
	github.com/pierrec/lz4 v2.6.0+incompatible // indirect
	github.com/rcrowley/go-metrics v0.0.0-20201227073835-cf1acfcdf475 // indirect
	github.com/sirupsen/logrus v1.8.1 // indirect
	gopkg.in/Shopify/sarama.v1 v1.20.1 // indirect
)
messaging/kafka/applications/producer/go.sum
@@ -1,4 +1,26 @@
github.com/DataDog/zstd v1.4.8 h1:Rpmta4xZ/MgZnriKNd24iZMhGpP5dvUcs/uqfBapKZY=
github.com/DataDog/zstd v1.4.8/go.mod h1:g4AWEaM3yOg3HYfnJ3YIawPnVdXJh9QME85blwSAmyw=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/eapache/go-resiliency v1.2.0 h1:v7g92e/KSN71Rq7vSThKaWIq68fL4YHvWyiUKorFR1Q=
github.com/eapache/go-resiliency v1.2.0/go.mod h1:kFI+JgMyC7bLPUVY133qvEBtVayf5mFgVsvEsIPBvNs=
github.com/eapache/go-xerial-snappy v0.0.0-20180814174437-776d5712da21 h1:YEetp8/yCZMuEPMUDHG0CW/brkkEp8mzqk2+ODEitlw=
github.com/eapache/go-xerial-snappy v0.0.0-20180814174437-776d5712da21/go.mod h1:+020luEh2TKB4/GOp8oxxtq0Daoen/Cii55CzbTV6DU=
github.com/eapache/queue v1.1.0 h1:YOEu7KNc61ntiQlcEeUIoDTJ2o8mQznoNvUhiigpIqc=
github.com/eapache/queue v1.1.0/go.mod h1:6eCeP0CKFpHLu8blIFXhExK/dRa7WDZfr6jVFPTqq+I=
github.com/golang/snappy v0.0.3 h1:fHPg5GQYlCeLIPB9BZqMVR5nR9A+IM5zcgeTdjMYmLA=
github.com/golang/snappy v0.0.3/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
github.com/julienschmidt/httprouter v1.3.0 h1:U0609e9tgbseu3rBINet9P48AI/D3oJs4dN7jwJOQ1U=
github.com/julienschmidt/httprouter v1.3.0/go.mod h1:JR6WtHb+2LUe8TCKY3cZOxFyyO8IZAc4RVcycCCAKdM=
github.com/pierrec/lz4 v2.6.0+incompatible h1:Ix9yFKn1nSPBLFl/yZknTp8TU5G4Ps0JDmguYK6iH1A=
github.com/pierrec/lz4 v2.6.0+incompatible/go.mod h1:pdkljMzZIN41W+lC3N2tnIh5sFi+IEE17M5jbnwPHcY=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/rcrowley/go-metrics v0.0.0-20201227073835-cf1acfcdf475 h1:N/ElC8H3+5XpJzTSTfLsJV/mx9Q9g7kxmchpfZyxgzM=
github.com/rcrowley/go-metrics v0.0.0-20201227073835-cf1acfcdf475/go.mod h1:bCqnVzQkZxMG4s8nGwiZ5l3QUCyqpo9Y+/ZMZ9VjZe4=
github.com/sirupsen/logrus v1.8.1 h1:dJKuHgqk1NNQlqoA6BTlM1Wf9DOH3NBjQyu0h9+AZZE=
github.com/sirupsen/logrus v1.8.1/go.mod h1:yWOB1SBYBC5VeMP7gHvWumXLIWorT60ONWic61uBYv0=
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
golang.org/x/sys v0.0.0-20191026070338-33540a1f6037 h1:YyJpGZS1sBuBCzLAR1VEpK193GlqGZbnPFnPV/5Rsb4=
golang.org/x/sys v0.0.0-20191026070338-33540a1f6037/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
gopkg.in/Shopify/sarama.v1 v1.20.1 h1:Gi09A3fJXm0Jgt8kuKZ8YK+r60GfYn7MQuEmI3oq6hE=
gopkg.in/Shopify/sarama.v1 v1.20.1/go.mod h1:AxnvoaevB2nBjNK17cG61A3LleFcWFwVBHBt+cot4Oc=
messaging/kafka/applications/producer/producer.go (new file, 77 lines)
@@ -0,0 +1,77 @@
package main

import (
	"fmt"
	"net/http"
	"os"
	"strconv"
	"strings"

	"github.com/julienschmidt/httprouter"
	log "github.com/sirupsen/logrus"
	"gopkg.in/Shopify/sarama.v1"
)

var kafkaBrokers = os.Getenv("KAFKA_PEERS")
var kafkaTopic = os.Getenv("KAFKA_TOPIC")
var kafkaPartition = os.Getenv("KAFKA_PARTITION")
var partition int32 = -1
var globalProducer sarama.SyncProducer

func main() {

	config := sarama.NewConfig()
	config.Producer.RequiredAcks = sarama.WaitForAll
	config.Producer.Return.Successes = true
	// note: with the random partitioner, the Partition field we set on each
	// message is ignored and sarama picks a random partition per message
	config.Producer.Partitioner = sarama.NewRandomPartitioner

	p, err := strconv.Atoi(kafkaPartition)
	if err != nil {
		fmt.Println("Failed to convert KAFKA_PARTITION to Int32")
		panic(err)
	}

	partition = int32(p)

	producer, err := sarama.NewSyncProducer(strings.Split(kafkaBrokers, ","), config)
	if err != nil {
		fmt.Printf("Failed to open Kafka producer: %s", err)
		panic(err)
	}

	globalProducer = producer

	defer func() {
		fmt.Println("Closing Kafka producer...")
		if err := globalProducer.Close(); err != nil {
			fmt.Printf("Failed to close Kafka producer cleanly: %s", err)
			panic(err)
		}
	}()

	router := httprouter.New()

	router.POST("/publish/:message", func(w http.ResponseWriter, r *http.Request, p httprouter.Params) {
		submit(w, r, p)
	})

	fmt.Println("Running...")
	log.Fatal(http.ListenAndServe(":80", router))
}

func submit(writer http.ResponseWriter, request *http.Request, p httprouter.Params) {

	messageValue := p.ByName("message")
	message := &sarama.ProducerMessage{Topic: kafkaTopic, Partition: partition}
	message.Value = sarama.StringEncoder(messageValue)

	fmt.Println("Received message: " + messageValue)

	// SendMessage returns the partition and offset the message was written to
	partition, offset, err := globalProducer.SendMessage(message)
	if err != nil {
		log.Fatalf("%s: %s", "Failed to send message to Kafka", err)
	}

	fmt.Printf("publish success! topic=%s\tpartition=%d\toffset=%d\n", kafkaTopic, partition, offset)
}
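A note on KAFKA_PARTITION: because the config above selects sarama.NewRandomPartitioner, the Partition field set on each message is ignored and sarama spreads messages randomly. If we wanted messages pinned to the configured partition, swapping in sarama's manual partitioner would be the usual approach (a sketch against the same sarama v1 API):

// honor message.Partition instead of choosing a random partition
config.Producer.Partitioner = sarama.NewManualPartitioner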
messaging/kafka/applications/producer/publisher.go (deleted, superseded by producer.go above)
messaging/kafka/config/kafka-2/server.properties (new file, 136 lines)
@@ -0,0 +1,136 @@
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# see kafka.server.KafkaConfig for additional details and defaults

############################# Server Basics #############################

# The id of the broker. This must be set to a unique integer for each broker.
broker.id=1

############################# Socket Server Settings #############################

# The address the socket server listens on. It will get the value returned from
# java.net.InetAddress.getCanonicalHostName() if not configured.
#   FORMAT:
#     listeners = listener_name://host_name:port
#   EXAMPLE:
#     listeners = PLAINTEXT://your.host.name:9092
#listeners=PLAINTEXT://:9092

# Hostname and port the broker will advertise to producers and consumers. If not set,
# it uses the value for "listeners" if configured. Otherwise, it will use the value
# returned from java.net.InetAddress.getCanonicalHostName().
#advertised.listeners=PLAINTEXT://your.host.name:9092

# Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details
#listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL

# The number of threads that the server uses for receiving requests from the network and sending responses to the network
num.network.threads=3

# The number of threads that the server uses for processing requests, which may include disk I/O
num.io.threads=8

# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400

# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400

# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600


############################# Log Basics #############################

# A comma separated list of directories under which to store log files
log.dirs=/tmp/kafka-logs

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=1

# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1

############################# Internal Topic Settings #############################
# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
# For anything other than development testing, a value greater than 1 is recommended to ensure availability such as 3.
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1

############################# Log Flush Policy #############################

# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
#    1. Durability: Unflushed data may be lost if you are not using replication.
#    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
#    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.

# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000

# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000

############################# Log Retention Policy #############################

# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.

# The minimum age of a log file to be eligible for deletion due to age
log.retention.hours=168

# A size-based retention policy for logs. Segments are pruned from the log unless the remaining
# segments drop below log.retention.bytes. Functions independently of log.retention.hours.
#log.retention.bytes=1073741824

# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824

# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000

############################# Zookeeper #############################

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=zookeeper:2181

# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=18000


############################# Group Coordinator Settings #############################

# The following configuration specifies the time, in milliseconds, that the GroupCoordinator will delay the initial consumer rebalance.
# The rebalance will be further delayed by the value of group.initial.rebalance.delay.ms as new members join the group, up to a maximum of max.poll.interval.ms.
# The default value for this is 3 seconds.
# We override this to 0 here as it makes for a better out-of-the-box experience for development and testing.
# However, in production environments the default value of 3 seconds is more suitable as this will help to avoid unnecessary, and potentially expensive, rebalances during application startup.
group.initial.rebalance.delay.ms=0
messaging/kafka/config/kafka-3/server.properties (new file, 136 lines)
@@ -0,0 +1,136 @@
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# see kafka.server.KafkaConfig for additional details and defaults

############################# Server Basics #############################

# The id of the broker. This must be set to a unique integer for each broker.
broker.id=2

############################# Socket Server Settings #############################

# The address the socket server listens on. It will get the value returned from
# java.net.InetAddress.getCanonicalHostName() if not configured.
#   FORMAT:
#     listeners = listener_name://host_name:port
#   EXAMPLE:
#     listeners = PLAINTEXT://your.host.name:9092
#listeners=PLAINTEXT://:9092

# Hostname and port the broker will advertise to producers and consumers. If not set,
# it uses the value for "listeners" if configured. Otherwise, it will use the value
# returned from java.net.InetAddress.getCanonicalHostName().
#advertised.listeners=PLAINTEXT://your.host.name:9092

# Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details
#listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL

# The number of threads that the server uses for receiving requests from the network and sending responses to the network
num.network.threads=3

# The number of threads that the server uses for processing requests, which may include disk I/O
num.io.threads=8

# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400

# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400

# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600


############################# Log Basics #############################

# A comma separated list of directories under which to store log files
log.dirs=/tmp/kafka-logs

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=1

# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1

############################# Internal Topic Settings #############################
# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
# For anything other than development testing, a value greater than 1 is recommended to ensure availability such as 3.
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1

############################# Log Flush Policy #############################

# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
#    1. Durability: Unflushed data may be lost if you are not using replication.
#    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
#    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.

# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000

# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000

############################# Log Retention Policy #############################

# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.

# The minimum age of a log file to be eligible for deletion due to age
log.retention.hours=168

# A size-based retention policy for logs. Segments are pruned from the log unless the remaining
# segments drop below log.retention.bytes. Functions independently of log.retention.hours.
#log.retention.bytes=1073741824

# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824

# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000

############################# Zookeeper #############################

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=zookeeper:2181

# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=18000


############################# Group Coordinator Settings #############################

# The following configuration specifies the time, in milliseconds, that the GroupCoordinator will delay the initial consumer rebalance.
# The rebalance will be further delayed by the value of group.initial.rebalance.delay.ms as new members join the group, up to a maximum of max.poll.interval.ms.
# The default value for this is 3 seconds.
# We override this to 0 here as it makes for a better out-of-the-box experience for development and testing.
# However, in production environments the default value of 3 seconds is more suitable as this will help to avoid unnecessary, and potentially expensive, rebalances during application startup.
group.initial.rebalance.delay.ms=0
messaging/kafka/zookeeper/dockerfile
@@ -14,5 +14,4 @@ RUN curl "https://archive.apache.org/dist/kafka/${KAFKA_VERSION}/kafka_${SCALA_V
COPY start-zookeeper.sh /usr/bin
RUN chmod +x /usr/bin/start-zookeeper.sh

CMD ["start-zookeeper.sh"]