Scrap the connection test in the client metadata update
Previously, if the cluster metadata gave us back a broker we suspected was
unavailable (because it was already in our 'dead' set), we would wait for the
connection and mark it as unavailable if the connection failed (otherwise, we
simply did what the cluster told us and let the producers/consumers deal with
any connection errors). This was handy since it let us back off nicely if a
broker crashed and came back, retrying metadata until the cluster had caught up
and moved the leader to a broker that was up.
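Roughly, the behaviour being scrapped looked like the simplified sketch below.
The names (`client`, `deadBrokers`, `metadataLeader`, `waitForConnection`) are
hypothetical stand-ins for illustration, not the actual internals:

```go
package sketch

import "errors"

var ErrLeaderNotAvailable = errors.New("leader not available")

// Hypothetical stand-ins for the real client internals.
type Broker struct{ id int32 }

func (b *Broker) ID() int32 { return b.id }

type client struct {
	deadBrokers map[int32]bool // brokers we previously failed to reach
}

// Placeholders for the real metadata lookup and connection wait.
func (c *client) metadataLeader(topic string, partition int32) *Broker { return &Broker{} }
func waitForConnection(b *Broker) error                                { return nil }

// leaderFor shows the old pattern: when metadata points at a broker in the
// dead set, test the connection before trusting the metadata.
func (c *client) leaderFor(topic string, partition int32) (*Broker, error) {
	broker := c.metadataLeader(topic, partition)

	if c.deadBrokers[broker.ID()] {
		// Old behaviour: block on a connection attempt right here,
		// inside the metadata update.
		if err := waitForConnection(broker); err != nil {
			// Connection failed: report the leader as unavailable
			// so callers back off and retry metadata.
			return nil, ErrLeaderNotAvailable
		}
		delete(c.deadBrokers, broker.ID())
	}

	// After this change: just return whatever the cluster reported and
	// let the producer/consumer deal with any connection errors.
	return broker, nil
}
```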
I'm now of the opinion this was more trouble than it's worth, so scrap it. Among
other things:
- it does network IO while holding an important mutex, which is a bad pattern
  to begin with (#263); see the sketch after this list
- it can mask real network errors behind "LeaderNotAvailable" (#272)
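To illustrate the first point, the problematic shape is roughly the following
(a generic sketch, not the actual client code): any dial or read done under the
lock stalls every other goroutine that needs the same mutex for the full
network timeout, whereas copying what you need and releasing the lock first
does not.

```go
package sketch

import (
	"net"
	"sync"
	"time"
)

type registry struct {
	mu    sync.Mutex
	addrs map[string]string
}

// Bad: the dial happens while mu is held, so every other caller of the
// registry blocks for up to the full dial timeout.
func (r *registry) checkLocked(id string) error {
	r.mu.Lock()
	defer r.mu.Unlock()

	conn, err := net.DialTimeout("tcp", r.addrs[id], 10*time.Second)
	if err != nil {
		return err
	}
	return conn.Close()
}

// Better: copy what we need under the lock, then do the network IO without it.
func (r *registry) checkUnlocked(id string) error {
	r.mu.Lock()
	addr := r.addrs[id]
	r.mu.Unlock()

	conn, err := net.DialTimeout("tcp", addr, 10*time.Second)
	if err != nil {
		return err
	}
	return conn.Close()
}
```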
The unfortunate side-effect of scrapping it is that the producer and consumer
are now more likely to fail if we don't wait long enough for the cluster to
fail over leadership. The real solution, if that occurs, is to wait longer in
the correct spot (`RetryBackoff` in the producer, currently hard-coded to 10
seconds in the consumer) instead of this hack.
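In practice that means: if leader failover starts tripping you up after this
change, turn up the retry knobs rather than relying on the old connection test.
A rough example of doing that; the field names below follow the config layout
of later Sarama releases and may not match the config struct as it existed when
this commit landed, and the broker address is a placeholder:

```go
package main

import (
	"log"
	"time"

	"github.com/Shopify/sarama"
)

func main() {
	// Give the cluster more time to move leadership to a live broker
	// before giving up, instead of relying on the scrapped connection test.
	config := sarama.NewConfig()
	config.Producer.Return.Successes = true // required by the sync producer
	config.Producer.Retry.Max = 5
	config.Producer.Retry.Backoff = 2 * time.Second
	config.Metadata.Retry.Max = 5
	config.Metadata.Retry.Backoff = 2 * time.Second

	producer, err := sarama.NewSyncProducer([]string{"localhost:9092"}, config)
	if err != nil {
		log.Fatal(err)
	}
	defer producer.Close()
}
```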