Getting: In the middle of a leadership election, there is currently no leader for this partition and hence it is unavailable for writes. #272


Closed
luck02 opened this issue Feb 15, 2015 · 16 comments · Fixed by #277


@luck02

luck02 commented Feb 15, 2015

Sha: 888a760
Kafka Version: kafka_2.9.2-0.8.2.0

I've created a kafka server with 3 nodes, completely based on the quickstart in the kafka documentation.

Everything works fine with the quickstart tutorial: I can create topics, list them, publish, and consume, including the topics listed in the code below. I.e. the CLI works for 'NewTopic' but the code errors out.

However, when I attempt to run the producer / simple producer code locally, I get this failure:

➜  LegacyProducer  go run main.go
> connected
panic: kafka server: In the middle of a leadership election, there is currently no leader for this partition and hence it is unavailable for writes.

goroutine 16 [running]:
runtime.panic(0x563240, 0x5)
        /usr/lib/golang/src/pkg/runtime/panic.c:279 +0xf5
main.main()
        /home/glucas/dev/goproj/src/github.com/luck02/LegacyProducer/main.go:70 +0x36f

goroutine 19 [finalizer wait]:
runtime.park(0x4133b0, 0x699808, 0x697ac9)
        /usr/lib/golang/src/pkg/runtime/proc.c:1369 +0x89
runtime.parkunlock(0x699808, 0x697ac9)
        /usr/lib/golang/src/pkg/runtime/proc.c:1385 +0x3b
runfinq()
        /usr/lib/golang/src/pkg/runtime/mgc0.c:2644 +0xcf
runtime.goexit()
        /usr/lib/golang/src/pkg/runtime/proc.c:1445

goroutine 23 [chan receive]:
github.com/shopify/sarama.(*Broker).responseReceiver(0xc208004180)
        /home/glucas/dev/goproj/src/github.com/shopify/sarama/broker.go:359 +0xdd
github.com/shopify/sarama.*Broker.(github.com/shopify/sarama.responseReceiver)·fm()
        /home/glucas/dev/goproj/src/github.com/shopify/sarama/broker.go:113 +0x26
github.com/shopify/sarama.withRecover(0xc208000510)
        /home/glucas/dev/goproj/src/github.com/shopify/sarama/utils.go:42 +0x37
created by github.com/shopify/sarama.func·001
        /home/glucas/dev/goproj/src/github.com/shopify/sarama/broker.go:113 +0x381

goroutine 27 [runnable]:
github.com/shopify/sarama.(*Client).backgroundMetadataUpdater(0xc208048090)
        /home/glucas/dev/goproj/src/github.com/shopify/sarama/client.go:494 +0x1e0
github.com/shopify/sarama.*Client.(github.com/shopify/sarama.backgroundMetadataUpdater)·fm()
        /home/glucas/dev/goproj/src/github.com/shopify/sarama/client.go:80 +0x26
github.com/shopify/sarama.withRecover(0xc208000680)
        /home/glucas/dev/goproj/src/github.com/shopify/sarama/utils.go:42 +0x37
created by github.com/shopify/sarama.NewClient
        /home/glucas/dev/goproj/src/github.com/shopify/sarama/client.go:80 +0x4b3

goroutine 44 [runnable]:
github.com/shopify/sarama.withRecover(0xc208001b70)
        /home/glucas/dev/goproj/src/github.com/shopify/sarama/utils.go:32
created by github.com/shopify/sarama.safeAsyncClose
        /home/glucas/dev/goproj/src/github.com/shopify/sarama/utils.go:51 +0x6c

goroutine 45 [runnable]:
github.com/shopify/sarama.withRecover(0xc208001b80)
        /home/glucas/dev/goproj/src/github.com/shopify/sarama/utils.go:32
created by github.com/shopify/sarama.safeAsyncClose
        /home/glucas/dev/goproj/src/github.com/shopify/sarama/utils.go:51 +0x6c

goroutine 34 [sleep]:
time.Sleep(0x12a05f200)
        /usr/lib/golang/src/pkg/runtime/time.goc:39 +0x31
net.func·019()
        /usr/lib/golang/src/pkg/net/dnsclient_unix.go:183 +0x56
created by net.loadConfig
        /usr/lib/golang/src/pkg/net/dnsclient_unix.go:212 +0x153

goroutine 47 [runnable]:
github.com/shopify/sarama.withRecover(0xc208001ba0)
        /home/glucas/dev/goproj/src/github.com/shopify/sarama/utils.go:32
created by github.com/shopify/sarama.safeAsyncClose
        /home/glucas/dev/goproj/src/github.com/shopify/sarama/utils.go:51 +0x6c

goroutine 46 [runnable]:
github.com/shopify/sarama.withRecover(0xc208001b90)
        /home/glucas/dev/goproj/src/github.com/shopify/sarama/utils.go:32
created by github.com/shopify/sarama.safeAsyncClose
        /home/glucas/dev/goproj/src/github.com/shopify/sarama/utils.go:51 +0x6c

goroutine 43 [runnable]:
github.com/shopify/sarama.func·003()
        /home/glucas/dev/goproj/src/github.com/shopify/sarama/producer.go:235 +0x53
github.com/shopify/sarama.withRecover(0xc208001230)
        /home/glucas/dev/goproj/src/github.com/shopify/sarama/utils.go:42 +0x37
created by github.com/shopify/sarama.(*Producer).Close
        /home/glucas/dev/goproj/src/github.com/shopify/sarama/producer.go:237 +0xae
exit status 2

Code:

package main

import (
        "fmt"

        "github.com/Shopify/sarama"
)

func main() {
        client, err := sarama.NewClient("UniqueClientID", []string{"192.168.1.76:9092"}, sarama.NewClientConfig())
        if err != nil {
                panic(err)
        } else {
                fmt.Println("> connected")
        }
        defer client.Close()

        producer, err := sarama.NewSimpleProducer(client, nil)
        if err != nil {
                panic(err)
        }
        defer producer.Close()

        for {
                err = producer.SendMessage("my_topic", nil, sarama.StringEncoder("testing 123"))
                if err != nil {
                        panic(err)
                } else {
                        fmt.Println("> message sent")
                }
        }
}

@wvanbergen
Contributor

Can you try: kafka/bin/kafka-topics.sh --describe --topic my_topic --zookeeper localhost, and paste the output here?

@luck02
Author

luck02 commented Feb 15, 2015

Hi, @wvanbergen

➜  kafka_2.9.2-0.8.2.0  bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic my_topic --from-beginning
herp
derp
terp
^CConsumed 3 messages
➜  kafka_2.9.2-0.8.2.0  bin/kafka-topics.sh --describe --topic my_topic --zookeeper localhost:2181            
Topic:my_topic  PartitionCount:1        ReplicationFactor:1     Configs:
        Topic: my_topic Partition: 0    Leader: 0       Replicas: 0     Isr: 0

code:

func main() {
        client, err := sarama.NewClient("herp23432", []string{"192.168.1.76:9092"}, sarama.NewClientConfig())
        if err != nil {
                panic(err)
        } else {
                fmt.Println("> connected")
        }
        defer client.Close()

        producer, err := sarama.NewSimpleProducer(client, nil)
        if err != nil {
                panic(err)
        }
        defer producer.Close()

        for {
                err = producer.SendMessage("my_topic", nil, sarama.StringEncoder("testing 123"))
                if err != nil {
                        panic(err)
                } else {
                        fmt.Println("> message sent")
                }
        }
}

code execution:

➜  LegacyProducer  go run main.go 
> connected
panic: kafka server: In the middle of a leadership election, there is currently no leader for this partition and hence it is unavailable for writes.

goroutine 16 [running]:
runtime.panic(0x563240, 0x5)
        /usr/lib/golang/src/pkg/runtime/panic.c:279 +0xf5
main.main()
        /home/glucas/dev/goproj/src/github.com/luck02/LegacyProducer/main.go:70 +0x36f

etc.

Thanks :)
G

@eapache
Contributor

eapache commented Feb 15, 2015

Please check that the broker is actually reachable from where you are running your go program (since it doesn't look like you're running it on the same machine) - could there be a firewall or routing problem in the way? I have a suspicion we are simply reporting the wrong error, and the right error is a lower-level networking problem.

If that doesn't help, please set sarama.Logger to a useful value and provide that output as well.

@eapache eapache added the bug label Feb 15, 2015
@luck02
Author

luck02 commented Feb 16, 2015

It's not a simple network issue: 192.168.1.76 == localhost; it's all on my laptop (Fedora 21, more or less stock).

output of sarama debug:

➜  LegacyProducer  cat sarama.debug.log 
PREFIX: 2015/02/15 19:59:38 client.go:42: Initializing new client
PREFIX: 2015/02/15 19:59:38 client.go:353: Fetching metadata for [] from broker localhost:9092
PREFIX: 2015/02/15 19:59:38 broker.go:112: Connected to broker localhost:9092
PREFIX: 2015/02/15 19:59:38 client.go:522: Registered new broker #0 at gclaptop:9092
PREFIX: 2015/02/15 19:59:38 client.go:82: Successfully initialized new client
PREFIX: 2015/02/15 19:59:38 producer.go:508: producer/flusher/0 starting up
PREFIX: 2015/02/15 19:59:38 broker.go:104: Failed to connect to broker gclaptop:9092: dial tcp: lookup gclaptop: no such host
PREFIX: 2015/02/15 19:59:38 client.go:310: Disconnecting Broker 0
PREFIX: 2015/02/15 19:59:38 producer.go:555: producer/flusher/0 state change to [closing] because dial tcp: lookup gclaptop: no such host
PREFIX: 2015/02/15 19:59:38 broker.go:135: Failed to close connection to broker gclaptop:9092: kafka: broker: not connected
PREFIX: 2015/02/15 19:59:38 utils.go:49: Error closing broker 0 : kafka: broker: not connected
PREFIX: 2015/02/15 19:59:38 producer.go:397: producer/leader state change to [retrying] on my_topic/0
PREFIX: 2015/02/15 19:59:38 producer.go:398: producer/leader abandoning broker 0 on my_topic/0
PREFIX: 2015/02/15 19:59:38 producer.go:603: producer/flusher/0 shut down
PREFIX: 2015/02/15 19:59:38 client.go:353: Fetching metadata for [my_topic] from broker localhost:9092
PREFIX: 2015/02/15 19:59:38 client.go:522: Registered new broker #0 at gclaptop:9092
PREFIX: 2015/02/15 19:59:39 broker.go:104: Failed to connect to broker gclaptop:9092: dial tcp: lookup gclaptop: no such host
PREFIX: 2015/02/15 19:59:39 client.go:366: Some partitions are leaderless, waiting 250ms for election... (3 retries remaining)
PREFIX: 2015/02/15 19:59:39 client.go:353: Fetching metadata for [my_topic] from broker localhost:9092
PREFIX: 2015/02/15 19:59:39 client.go:366: Some partitions are leaderless, waiting 250ms for election... (2 retries remaining)
PREFIX: 2015/02/15 19:59:39 client.go:353: Fetching metadata for [my_topic] from broker localhost:9092
PREFIX: 2015/02/15 19:59:39 client.go:366: Some partitions are leaderless, waiting 250ms for election... (1 retries remaining)
PREFIX: 2015/02/15 19:59:39 client.go:353: Fetching metadata for [my_topic] from broker localhost:9092
PREFIX: 2015/02/15 19:59:39 client.go:363: Some partitions are leaderless, but we're out of retries
PREFIX: 2015/02/15 19:59:39 client.go:353: Fetching metadata for [my_topic] from broker localhost:9092
PREFIX: 2015/02/15 19:59:39 client.go:366: Some partitions are leaderless, waiting 250ms for election... (3 retries remaining)
PREFIX: 2015/02/15 19:59:40 client.go:353: Fetching metadata for [my_topic] from broker localhost:9092
PREFIX: 2015/02/15 19:59:40 client.go:366: Some partitions are leaderless, waiting 250ms for election... (2 retries remaining)
PREFIX: 2015/02/15 19:59:40 client.go:353: Fetching metadata for [my_topic] from broker localhost:9092
PREFIX: 2015/02/15 19:59:40 client.go:366: Some partitions are leaderless, waiting 250ms for election... (1 retries remaining)
PREFIX: 2015/02/15 19:59:40 client.go:353: Fetching metadata for [my_topic] from broker localhost:9092
PREFIX: 2015/02/15 19:59:40 client.go:363: Some partitions are leaderless, but we're out of retries
PREFIX: 2015/02/15 19:59:40 producer.go:407: producer/leader state change to [flushing] on my_topic/0
PREFIX: 2015/02/15 19:59:40 client.go:353: Fetching metadata for [my_topic] from broker localhost:9092
PREFIX: 2015/02/15 19:59:40 producer.go:280: Producer shutting down.
PREFIX: 2015/02/15 19:59:40 client.go:366: Some partitions are leaderless, waiting 250ms for election... (3 retries remaining)
PREFIX: 2015/02/15 19:59:40 client.go:353: Fetching metadata for [my_topic] from broker localhost:9092
PREFIX: 2015/02/15 19:59:40 client.go:366: Some partitions are leaderless, waiting 250ms for election... (2 retries remaining)
PREFIX: 2015/02/15 19:59:41 client.go:353: Fetching metadata for [my_topic] from broker localhost:9092
PREFIX: 2015/02/15 19:59:41 client.go:366: Some partitions are leaderless, waiting 250ms for election... (1 retries remaining)
PREFIX: 2015/02/15 19:59:41 client.go:353: Fetching metadata for [my_topic] from broker localhost:9092
PREFIX: 2015/02/15 19:59:41 client.go:363: Some partitions are leaderless, but we're out of retries
PREFIX: 2015/02/15 19:59:41 client.go:353: Fetching metadata for [my_topic] from broker localhost:9092
PREFIX: 2015/02/15 19:59:41 client.go:366: Some partitions are leaderless, waiting 250ms for election... (3 retries remaining)
PREFIX: 2015/02/15 19:59:41 client.go:353: Fetching metadata for [my_topic] from broker localhost:9092
PREFIX: 2015/02/15 19:59:41 client.go:366: Some partitions are leaderless, waiting 250ms for election... (2 retries remaining)
PREFIX: 2015/02/15 19:59:41 client.go:353: Fetching metadata for [my_topic] from broker localhost:9092
PREFIX: 2015/02/15 19:59:41 client.go:366: Some partitions are leaderless, waiting 250ms for election... (1 retries remaining)
PREFIX: 2015/02/15 19:59:42 client.go:353: Fetching metadata for [my_topic] from broker localhost:9092
PREFIX: 2015/02/15 19:59:42 client.go:363: Some partitions are leaderless, but we're out of retries
PREFIX: 2015/02/15 19:59:42 producer.go:410: producer/leader state change to [normal] after "kafka server: In the middle of a leadership election, there is currently no leader for this partition and hence it is unavailable for writes." on my_topic/0
PREFIX: 2015/02/15 19:59:42 client.go:101: Closing Client

And now it looks like the client is doing some sort of host lookup to complete the connection:
"PREFIX: 2015/02/15 19:59:38 broker.go:104: Failed to connect to broker gclaptop:9092: dial tcp: lookup gclaptop: no such host"

gclaptop is my machine name. Going to add that to hosts...

And that solved the problem, thanks @eapache and @wvanbergen

@luck02 luck02 closed this as completed Feb 16, 2015
@eapache
Contributor

eapache commented Feb 17, 2015

Kafka advertises itself by hostname through zookeeper, which can lead to some funny behaviours like this. Sarama can do a better job here by returning the right error in the first place, though; I will try to fix that.

eapache added a commit that referenced this issue Feb 17, 2015
Previously, if the cluster metadata was giving us back a broker which we
suspected was unavailable (since it was already in our 'dead' set) then we would
wait for the connection, and mark it as unavailable if the connection failed
(otherwise, we simply do what the cluster tells us and let the
producers/consumers deal with the connection errors). This was handy since it
let us back off nicely if a broker crashed and came back, retrying metadata
until the cluster had caught up and moved the leader to a broker that was up.

I'm now of the opinion this was more trouble than it's worth, so scrap it. Among
other things:
 - it does network IO while holding an important mutex, which is a bad pattern
   to begin with (#263)
 - it can mask real network errors behind "LeaderNotAvailable" (#272)

The unfortunate side-effect of scrapping it is that in the producer and consumer
we are more likely to fail if we don't wait long enough for the cluster to fail
over leadership. The real solution if that occurs is to wait longer in the
correct spot (`RetryBackoff` in the producer, currently hard-coded to 10 seconds
in the consumer) instead of this hack.
@luck02
Author

luck02 commented Feb 17, 2015

Oh interesting, so Kafka registers itself via hostname; makes some sort of sense. I wonder how the console commands work?

In any case, thanks a lot.

@tsuna

tsuna commented Nov 7, 2016

The issue is that macOS laptops use a hostname that doesn't resolve by default, and the error message is very misleading (it doesn't contain the real error, which is Failed to connect to broker gclaptop:9092: dial tcp: lookup <laptop-hostname>: no such host).

@krisnova

So I am running into the same error, although it doesn't look like my MacBook trying to resolve its own hostname is the culprit... hrmm

Sarama logs:

[sarama] 2016/11/13 10:45:40 Initializing new client
[sarama] 2016/11/13 10:45:40 ClientID is the default of 'sarama', you should consider setting it to something application-specific.
[sarama] 2016/11/13 10:45:40 ClientID is the default of 'sarama', you should consider setting it to something application-specific.
[sarama] 2016/11/13 10:45:40 client/metadata fetching metadata for all topics from broker 10.0.0.11:32780
[sarama] 2016/11/13 10:45:40 Connected to broker at 10.0.0.11:32780 (unregistered)
[sarama] 2016/11/13 10:45:40 client/brokers registered new broker #1007 at 10.0.0.11:32780
[sarama] 2016/11/13 10:45:40 client/brokers registered new broker #1006 at 10.0.0.11:32779
[sarama] 2016/11/13 10:45:40 client/brokers registered new broker #1005 at 10.0.0.11:32778
[sarama] 2016/11/13 10:45:40 client/metadata found some partitions to be leaderless
[sarama] 2016/11/13 10:45:40 client/metadata retrying after 250ms... (3 attempts remaining)
[sarama] 2016/11/13 10:45:40 client/metadata fetching metadata for all topics from broker 10.0.0.11:32780
[sarama] 2016/11/13 10:45:40 client/metadata found some partitions to be leaderless
[sarama] 2016/11/13 10:45:40 client/metadata retrying after 250ms... (2 attempts remaining)
[sarama] 2016/11/13 10:45:40 client/metadata fetching metadata for all topics from broker 10.0.0.11:32780
[sarama] 2016/11/13 10:45:40 client/metadata found some partitions to be leaderless
[sarama] 2016/11/13 10:45:40 client/metadata retrying after 250ms... (1 attempts remaining)
[sarama] 2016/11/13 10:45:40 client/metadata fetching metadata for all topics from broker 10.0.0.11:32780
[sarama] 2016/11/13 10:45:40 client/metadata found some partitions to be leaderless
[sarama] 2016/11/13 10:45:40 Successfully initialized new client
[sarama] 2016/11/13 10:45:40 client/metadata fetching metadata for [hermione] from broker 10.0.0.11:32780
[sarama] 2016/11/13 10:45:40 client/metadata found some partitions to be leaderless
[sarama] 2016/11/13 10:45:40 client/metadata retrying after 250ms... (3 attempts remaining)
[sarama] 2016/11/13 10:45:41 client/metadata fetching metadata for [hermione] from broker 10.0.0.11:32780
[sarama] 2016/11/13 10:45:41 client/metadata found some partitions to be leaderless
[sarama] 2016/11/13 10:45:41 client/metadata retrying after 250ms... (2 attempts remaining)
[sarama] 2016/11/13 10:45:41 client/metadata fetching metadata for [hermione] from broker 10.0.0.11:32780
[sarama] 2016/11/13 10:45:41 client/metadata found some partitions to be leaderless
[sarama] 2016/11/13 10:45:41 client/metadata retrying after 250ms... (1 attempts remaining)
[sarama] 2016/11/13 10:45:41 client/metadata fetching metadata for [hermione] from broker 10.0.0.11:32780
[sarama] 2016/11/13 10:45:41 client/metadata found some partitions to be leaderless
[sarama] 2016/11/13 10:45:41 client/metadata fetching metadata for [hermione] from broker 10.0.0.11:32780
[sarama] 2016/11/13 10:45:41 client/metadata found some partitions to be leaderless
[sarama] 2016/11/13 10:45:41 client/metadata retrying after 250ms... (3 attempts remaining)
[sarama] 2016/11/13 10:45:42 client/metadata fetching metadata for [hermione] from broker 10.0.0.11:32780
[sarama] 2016/11/13 10:45:42 client/metadata found some partitions to be leaderless
[sarama] 2016/11/13 10:45:42 client/metadata retrying after 250ms... (2 attempts remaining)
[sarama] 2016/11/13 10:45:42 client/metadata fetching metadata for [hermione] from broker 10.0.0.11:32780
[sarama] 2016/11/13 10:45:42 client/metadata found some partitions to be leaderless
[sarama] 2016/11/13 10:45:42 client/metadata retrying after 250ms... (1 attempts remaining)
[sarama] 2016/11/13 10:45:42 client/metadata fetching metadata for [hermione] from broker 10.0.0.11:32780
[sarama] 2016/11/13 10:45:42 client/metadata found some partitions to be leaderless
[sarama] 2016/11/13 10:45:42 client/metadata fetching metadata for [hermione] from broker 10.0.0.11:32780
[sarama] 2016/11/13 10:45:42 client/metadata found some partitions to be leaderless
[sarama] 2016/11/13 10:45:42 client/metadata retrying after 250ms... (3 attempts remaining)
[sarama] 2016/11/13 10:45:42 client/metadata fetching metadata for [hermione] from broker 10.0.0.11:32780
[sarama] 2016/11/13 10:45:42 client/metadata found some partitions to be leaderless
[sarama] 2016/11/13 10:45:42 client/metadata retrying after 250ms... (2 attempts remaining)
[sarama] 2016/11/13 10:45:43 client/metadata fetching metadata for [hermione] from broker 10.0.0.11:32780
[sarama] 2016/11/13 10:45:43 client/metadata found some partitions to be leaderless
[sarama] 2016/11/13 10:45:43 client/metadata retrying after 250ms... (1 attempts remaining)
[sarama] 2016/11/13 10:45:43 client/metadata fetching metadata for [hermione] from broker 10.0.0.11:32780
[sarama] 2016/11/13 10:45:43 client/metadata found some partitions to be leaderless
2016/11/13 10:45:43 kafka server: In the middle of a leadership election, there is currently no leader for this partition and hence it is unavailable for writes.

@wvanbergen
Contributor

The logs seem to indicate that one (or more) of the partitions is leaderless. Sarama can only consume from and produce to leaders of partitions. Normally this situation is temporary while the cluster elects a new leader, but that does not appear to be happening here. Sarama is handling this as designed: it will retry a number of times but eventually give up.

What does Kafka's kafka-topics.sh script tell you about this topic? Can you connect to your cluster with any other Kafka client?

@krisnova

@wvanbergen

I was able to resolve the problem by explicitly creating the Kafka topic after bringing up a new cluster.

It seems that the following did the trick; obviously plug in your own values.

$KAFKA_HOME/bin/kafka-topics.sh --create --topic mytopic --partitions 2 --zookeeper 1.2.3.4 --replication-factor 2

The script can be found in the Kafka download.

This issue is now resolved for me, although I'm looking forward to a better error message for the other case.

@guotie

guotie commented Apr 27, 2017

Has this problem been resolved? I'm encountering the same problem.

@fuzhq

fuzhq commented Mar 8, 2018

Has this problem some result? I encounter the same problem.

@skOak

skOak commented Apr 17, 2018

Add

[ip-address] [hostname]

to your /etc/hosts file. It worked for me.

@nobody4t

nobody4t commented Nov 5, 2018

I think this may be a bug in Kafka. I hit the same situation as @kris-nova: Kafka just fails to elect a new leader, so Sarama's connection is dropped. And this problem happened suddenly.

@ycq3

ycq3 commented Nov 16, 2021

Using 127.0.0.1 instead of localhost worked for me.

@zhuangxiaopi

Sometimes Kafka may still be electing a leader; you should add retries.

func (p *Producer) SendMessage(msgid string, msg []byte, trytime int) error {
        var err error
        for i := 0; i < trytime; i++ {
                err = p.writer.WriteMessages(context.Background(),
                        kafka.Message{
                                Key:   []byte(msgid),
                                Value: msg,
                        })
                if err == nil {
                        return nil
                }
                fmt.Println(i)
                time.Sleep(2 * time.Second)
        }
        return err
}
