Change connection pool idle expiration logic #19004
Conversation
Signed-off-by: Arthur Schreiber <[email protected]>
Codecov Report ✅ All modified and coverable lines are covered by tests.

Additional details and impacted files:

@@           Coverage Diff           @@
##             main    #19004   +/- ##
=======================================
  Coverage        ?    69.81%
=======================================
  Files           ?      1610
  Lines           ?    215356
  Branches        ?         0
=======================================
  Hits            ?    150343
  Misses          ?     65013
  Partials        ?         0

☔ View full report in Codecov by Sentry.
mattlord
left a comment
Makes sense to me. I only had the one question about how we handle the case where we fail to return the conn.
Thank you for jumping on this so quickly! ❤️
pool.closedConn()
// Return all the valid connections back to waiters or the stack
for _, conn := range validConnections {
	pool.tryReturnConn(conn)
We don't want to call pool.closedConn() if this returns false?
I don't think so. 🤔
tryReturnConn returns true if there was a direct connection hand-off via the waitlist, but it returns false if:
- the connection was closed because we have too many connections open. In this case, closeOnIdleLimitReached will manage the active connection count and there is no need to call closedConn
- the connection was added back to one of the stacks (no need to call closedConn because we didn't close anything)
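To make these semantics concrete, here is a minimal, self-contained sketch. It is not the real Vitess pool code: the types, the single-waiter channel, and the method bodies are assumptions that only mirror the behaviour described above.

```go
package main

// Hypothetical, simplified pool used only to illustrate the return-value
// semantics discussed above; names and signatures are assumptions, not the
// real Vitess connection pool API.
type conn struct{ id int }

type pool struct {
	waiter chan *conn // at most one pending get() call in this sketch
	stack  []*conn    // idle connections
	active int
}

// tryReturnConn returns true only on a direct hand-off to a waiter. Pushing
// the connection back onto the stack (or, in the real pool, closing it because
// the idle limit was reached) still returns false, and in those cases the
// active count is maintained elsewhere, so the caller must not call closedConn.
func (p *pool) tryReturnConn(c *conn) bool {
	select {
	case p.waiter <- c:
		return true // direct hand-off to a waiting get()
	default:
		p.stack = append(p.stack, c) // back onto the idle stack
		return false
	}
}

// closedConn records that a connection was actually closed.
func (p *pool) closedConn() { p.active-- }

func main() {
	p := &pool{waiter: make(chan *conn, 1), active: 2}
	for _, c := range []*conn{{1}, {2}} {
		// Note: no closedConn() here, even when tryReturnConn returns false.
		_ = p.tryReturnConn(c)
	}
}
```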
I'll add some tests as well so we have coverage for this behavior.
📝 Documentation updates detected! Updated existing suggestion: Add v24.0.0 changelog entries for VTTablet bug fixes
Signed-off-by: Arthur Schreiber <[email protected]>
Signed-off-by: Arthur Schreiber <[email protected]>
mhamza15
left a comment
If I'm not mistaken, this will cause the new connection stacks to be in reverse order. Is there a concern that new connections will be on the bottom of the stack?
I don't think this matters. The connection stack is not strictly ordered by the […]

I could reverse the order of putting the connections back if we have stronger feelings about it?
I definitely don't know if it will have a tangible impact, so no strong feelings here. If it's an easy switch, maybe maintaining the order will offer fewer surprises, but no concerns from me 👍
This change makes sense to me 🎉
I would suggest we benchmark this, but I suspect we wouldn't be accurately testing this code path, as the benchmark is likely to create very active connection pools, unless we added specific tests that mimic an occasionally-used pool. So I'm not suggesting we benchmark this, but I wish we were in a place where that would give us confidence. The area I'm most curious about is when we iterate over every connection while others are waiting.
I also wanted to call out that it would be nice to have an e2e test for this, but that could be a project on its own. Technically it should be possible to simulate an idle MySQL connection and, using tablet-stats, check that the pool acted properly, but this would be a large amount of work. cc'ing @arthurschreiber / @mattlord to validate whether or not this would be possible, and whether you agree the effort required is out of scope.
@arthurschreiber It was explicitly designed to be a LIFO queue, so it's probably worth doing: #14033
There is also the flip side, which is that theoretically, as you're adding back the connections, there are existing consumers trying to pull them out (either in the waitlist or directly from the stack), right? So if we add back the connections that used to be on top first, they'll get used first (so respecting the initial LIFO), but if we do maintain the initial ordering like I first suggested, the older connections will get placed onto the stack first and might end up getting pulled out first... Not sure if this is overthinking it 🤔
Good context: that's probably a behaviour worth keeping, or at least I'd like to understand (and make it clear) why we're switching the behaviour.
Description
This pull request changes how idle connection expiration is handled.
#18967 fixed a bug where connection pools could lose track of how many connections are actually active in the pool, but it introduced a new issue: on a connection pool with low usage frequency, idle connections would be closed but not removed from the connection stacks. As every connection struct also allocates a bufio.Reader with a 16MB buffer size, over time these connections would use a significant chunk of memory, causing the Go garbage collector to go into overdrive and consume a lot of CPU resources in turn.

This changes the connection pool expiration logic to not modify connections while they're still in a connection stack. Instead, we pop all the available connections off the stack, quickly filter out the ones that need to be expired, and return all the other connections back to the connection pool. Note that we're not interested in the actively used connections: the fact that they are in use and not on one of the stacks means they are not idling and don't need to be expired anyway.
Once we have collected the list of connections to expire, we reopen them one by one and return them to the pool if the reopen succeeds. If we encounter an error during a reopen operation, we treat that connection as having been closed (and update the pool to reflect this).
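As a rough illustration of the flow described above, here is a minimal sketch using hypothetical, simplified types; the real Vitess connection pool differs in structure, naming, and in how reopening and accounting actually happen.

```go
package main

import "time"

// Hypothetical, simplified types used only to illustrate the flow described
// above; the real Vitess connection pool differs in structure and naming.
type conn struct {
	lastUsed time.Time
}

// reopen stands in for re-dialing the backing MySQL connection.
func (c *conn) reopen() error { return nil }

type pool struct {
	stack  []*conn // idle connections
	active int     // connections the pool believes are open
}

// popAll takes every available connection off the stack in one step.
func (p *pool) popAll() []*conn {
	conns := p.stack
	p.stack = nil
	return conns
}

// returnConn puts a connection back into the pool (waiters are omitted here).
func (p *pool) returnConn(c *conn) { p.stack = append(p.stack, c) }

// closedConn records that a connection was closed and is no longer active.
func (p *pool) closedConn() { p.active-- }

// expireIdle pops all available connections, immediately returns the ones
// that are still fresh, and only then reopens the expired ones one by one.
// A failed reopen is accounted for as a closed connection.
func (p *pool) expireIdle(timeout time.Duration, now time.Time) {
	var expired, valid []*conn
	for _, c := range p.popAll() {
		if now.Sub(c.lastUsed) > timeout {
			expired = append(expired, c)
		} else {
			valid = append(valid, c)
		}
	}
	// Return fresh connections right away so the stack is only briefly empty.
	for _, c := range valid {
		p.returnConn(c)
	}
	// Reopen expired connections outside the stack; return them on success.
	for _, c := range expired {
		if err := c.reopen(); err != nil {
			p.closedConn() // treat a failed reopen as a closed connection
			continue
		}
		c.lastUsed = now
		p.returnConn(c)
	}
}

func main() {
	now := time.Now()
	p := &pool{
		stack:  []*conn{{lastUsed: now.Add(-time.Hour)}, {lastUsed: now}},
		active: 2,
	}
	p.expireIdle(30*time.Minute, now)
}
```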
There's some downside to this approach, namely that for very large pools there might be a short moment where all the connections have been popped off the stack and not yet returned, and incoming get calls will end up on the waitlist. I don't think this is an issue, because a lot of connections on the stack means low usage of the pool, so it's probably okay to not have any connection on the stack for a very short moment. On a connection pool that's used at a very high frequency, we'll probably only pop off a few connections from the stack, so checking for idle connections should barely be noticeable.

Related Issue(s)

PRS failure #18202

Checklist
Deployment Notes
AI Disclosure