
Conversation

@arthurschreiber
Member

@arthurschreiber arthurschreiber commented Dec 8, 2025

Description

This pull request changes how idle connection expiration is handled.

#18967 fixed a bug where connection pools could lose track of how many connections were actually active in the pool, but it introduced a new issue: on a low-traffic connection pool, idle connections would be closed but not removed from the connection stacks. Because every connection struct also allocates a bufio.Reader with a 16MB buffer, these dead connections would pin a significant chunk of memory over time, driving the Go garbage collector into overdrive and consuming a lot of CPU in turn.

This change reworks the connection pool expiration logic so that connections are never modified while they're still on a connection stack. Instead, we pop all the available connections off the stack, quickly filter out the ones that need to be expired, and return the rest to the connection pool. Note that we're not interested in the actively used connections: the fact that they are in use and not on one of the stacks means they are not idling and don't need to be expired anyway.

Once we have collected the list of connections to expire, we reopen them one by one and return each one to the pool if the reopen succeeds. If we encounter an error during a reopen operation, we treat that connection as having been closed (and update the pool's accounting to reflect this).

There is one downside to this approach: for very large pools there may be a brief moment where all the connections have been popped off the stack but not yet returned, and incoming get calls will land on the waitlist. I don't think this is an issue, because a large number of connections on the stack implies low usage of the pool, so having no connections on the stack for a very short moment is acceptable. On a connection pool that's used at a very high frequency, we'll probably only pop a few connections off the stack, so checking for idle connections should be barely noticeable.

Related Issue(s)

Checklist

  • "Backport to:" labels have been added if this change should be back-ported to release branches
  • If this change is to be back-ported to previous releases, a justification is included in the PR description
  • Tests were added or are not required
  • Did the new or modified tests pass consistently locally and on CI?
  • Documentation was added or is not required

Deployment Notes

AI Disclosure

@vitess-bot
Contributor

vitess-bot bot commented Dec 8, 2025

Review Checklist

Hello reviewers! 👋 Please follow this checklist when reviewing this Pull Request.

General

  • Ensure that the Pull Request has a descriptive title.
  • Ensure there is a link to an issue (except for internal cleanup and flaky test fixes); new features should have an RFC that documents use cases and test cases.

Tests

  • Bug fixes should have at least one unit or end-to-end test; enhancements and new features should have a sufficient number of tests.

Documentation

  • Apply the release notes (needs details) label if users need to know about this change.
  • New features should be documented.
  • There should be some code comments as to why things are implemented the way they are.
  • There should be a comment at the top of each new or modified test to explain what the test does.

New flags

  • Is this flag really necessary?
  • Flag names must be clear and intuitive, use dashes (-), and have a clear help text.

If a workflow is added or modified:

  • Each item in Jobs should be named in order to mark it as required.
  • If the workflow needs to be marked as required, the maintainer team must be notified.

Backward compatibility

  • Protobuf changes should be wire-compatible.
  • Changes to _vt tables and RPCs need to be backward compatible.
  • RPC changes should be compatible with vitess-operator
  • If a flag is removed, then it should also be removed from vitess-operator and arewefastyet, if used there.
  • vtctl command output order should be stable and awk-able.

@vitess-bot vitess-bot bot added NeedsBackportReason If backport labels have been applied to a PR, a justification is required NeedsDescriptionUpdate The description is not clear or comprehensive enough, and needs work NeedsIssue A linked issue is missing for this Pull Request NeedsWebsiteDocsUpdate What it says labels Dec 8, 2025
@github-actions github-actions bot added this to the v24.0.0 milestone Dec 8, 2025
@arthurschreiber arthurschreiber changed the title Change how we expire idle connections. Change connection pool idle expiration logic Dec 8, 2025
@codecov

codecov bot commented Dec 9, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.
⚠️ Please upload report for BASE (main@3dd1516). Learn more about missing BASE report.

Additional details and impacted files
@@           Coverage Diff           @@
##             main   #19004   +/-   ##
=======================================
  Coverage        ?   69.81%           
=======================================
  Files           ?     1610           
  Lines           ?   215356           
  Branches        ?        0           
=======================================
  Hits            ?   150343           
  Misses          ?    65013           
  Partials        ?        0           

☔ View full report in Codecov by Sentry.

Member

@mattlord mattlord left a comment


Makes sense to me. I only had the one question about how we handle the case where we fail to return the conn.

Thank you for jumping on this so quickly! ❤️

    pool.closedConn()
    // Return all the valid connections back to waiters or the stack
    for _, conn := range validConnections {
        pool.tryReturnConn(conn)
Member


We don't want to call pool.closedConn() if this returns false?

Member Author


I don't think so. 🤔

tryReturnConn returns true if there was a direct connection hand-off via the waitlist, but it returns false if:

  • the connection was closed because we have too many connections open; in this case, closeOnIdleLimitReached will manage the active connection count and there is no need to call closedConn
  • the connection was added back to one of the stacks (no need to call closedConn because we didn't close anything)
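The contract described above can be illustrated with a small hypothetical sketch. pool, tryReturnConn, and closeOnIdleLimitReached here are simplified stand-ins for the real Vitess implementation, with plain ints playing the role of connections:

```go
package main

import "fmt"

// pool is a toy stand-in for the real connection pool.
type pool struct {
	waiters chan int // parked get() calls waiting for a connection
	stack   []int    // idle connection stack
	open    int      // open-connection accounting
	limit   int      // max idle connections allowed on the stack
}

// closeOnIdleLimitReached closes the conn and updates the accounting itself.
func (p *pool) closeOnIdleLimitReached(id int) { p.open-- }

// tryReturnConn reports true only on a direct hand-off to a waiter.
// On false, the pool has already dealt with the connection, so the
// caller must not call closedConn.
func (p *pool) tryReturnConn(id int) bool {
	select {
	case p.waiters <- id:
		return true // handed off directly via the waitlist
	default:
	}
	if len(p.stack) >= p.limit {
		p.closeOnIdleLimitReached(id)
		return false // closed, but the accounting is already updated
	}
	p.stack = append(p.stack, id)
	return false // back on a stack; nothing was closed
}

func main() {
	p := &pool{waiters: make(chan int), open: 2, limit: 1}
	fmt.Println(p.tryReturnConn(1)) // false: no waiter, conn goes on the stack
	fmt.Println(p.tryReturnConn(2)) // false: idle limit hit, conn closed and counted
	fmt.Println(p.open)             // 1: closeOnIdleLimitReached did the accounting

	q := &pool{waiters: make(chan int, 1)} // buffered chan stands in for a parked waiter
	fmt.Println(q.tryReturnConn(3))        // true: direct hand-off via the waitlist
}
```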

@arthurschreiber arthurschreiber marked this pull request as ready for review December 9, 2025 10:04
@arthurschreiber arthurschreiber added Backport to: release-22.0 Needs to be backport to release-22.0 Backport to: release-23.0 Needs to be backport to release-23.0 Component: VTTablet Type: Bug and removed NeedsDescriptionUpdate The description is not clear or comprehensive enough, and needs work NeedsWebsiteDocsUpdate What it says labels Dec 9, 2025
@arthurschreiber
Member Author

I'll add some tests as well so we have coverage for this behavior.

@promptless

promptless bot commented Dec 9, 2025

📝 Documentation updates detected!

Updated existing suggestion: Add v24.0.0 changelog entries for VTTablet bug fixes

@arthurschreiber arthurschreiber removed NeedsIssue A linked issue is missing for this Pull Request NeedsBackportReason If backport labels have been applied to a PR, a justification is required labels Dec 9, 2025
Signed-off-by: Arthur Schreiber <[email protected]>
Contributor

@mhamza15 mhamza15 left a comment


If I'm not mistaken, this will cause the new connection stacks to be in reverse order. Is there a concern that new connections will be on the bottom of the stack?

@arthurschreiber
Copy link
Member Author

If I'm not mistaken, this will cause the new connection stacks to be in reverse order. Is there a concern that new connections will be on the bottom of the stack?

I don't think this matters. The connection stack is not strictly ordered by the timeUsed value. This will make the order deviate more, but I don't think anything depends on the exact ordering.

I could reverse the order of putting the connections back if we have stronger feelings about it?

@timvaillancourt timvaillancourt self-requested a review December 9, 2025 15:46
@mhamza15
Contributor

mhamza15 commented Dec 9, 2025

If I'm not mistaken, this will cause the new connection stacks to be in reverse order. Is there a concern that new connections will be on the bottom of the stack?

I don't think this matters. The connection stack is not strictly ordered by the timeUsed value. This will make the order deviate more, but I don't think anything depends on the exact ordering.

I could reverse the order of putting the connections back if we have stronger feelings about it?

I definitely don't know if it will have a tangible impact, so no strong feelings here. If it's an easy switch, maybe maintaining the order will offer fewer surprises, but no concerns from me 👍

Contributor

@timvaillancourt timvaillancourt left a comment


This change makes sense to me 🎉

I would suggest we benchmark this, but I suspect we wouldn't accurately exercise this code path: a benchmark is likely to create very active connection pools, unless we added specific tests that mimic an occasionally used pool. So I'm not actually suggesting we benchmark this, but I wish we were in a place where doing so would give us confidence. The area I'm most curious about is when we iterate over every connection while others are waiting.

I also wanted to call out that it would be nice to have an e2e test for this, but that could be a project on its own. Technically it should be possible to simulate an idle MySQL connection and, using tablet stats, check that the pool acted properly, but this would be a large amount of work. cc'ing @arthurschreiber / @mattlord to validate whether or not this would be possible, and whether you agree the effort required is out of scope.

@mattlord
Member

mattlord commented Dec 9, 2025

If I'm not mistaken, this will cause the new connection stacks to be in reverse order. Is there a concern that new connections will be on the bottom of the stack?

I don't think this matters. The connection stack is not strictly ordered by the timeUsed value. This will make the order deviate stronger, but I don't think anything depends on the exact values.

I could reverse the order of putting the connections back if we have stronger feelings about it?

@arthurschreiber It was explicitly designed to be a LIFO queue, so it's probably worth doing: #14033

@mhamza15
Contributor

mhamza15 commented Dec 9, 2025

There is also the flip side, which is that theoretically as you're adding back the connections, there are existing consumers trying to pull them out (either in the waitlist or directly from the stack), right? So if we add back the connections that used to be on top first, they'll get used first (so respecting the initial LIFO), but if we do maintain the initial ordering like I first suggested, the older connections will get placed onto stack first and might end up getting pulled out first...

Not sure if this is overthinking it 🤔

@timvaillancourt
Contributor

@arthurschreiber It was explicitly designed to be a LIFO queue

Good context: that's probably a behaviour worth keeping, or at least I'd like to understand (and make clear) why we're switching the behaviour.
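A minimal sketch of the ordering question under discussion, using a plain int slice as a stand-in for the connection stack (returnAll is a hypothetical helper, not pool code): appending popped connections back in pop order reverses the stack, while iterating the popped slice in reverse restores the original LIFO order.

```go
package main

import "fmt"

// returnAll pushes popped connections (newest-first pop order) back onto
// the stack; with reverse=true the stack's original LIFO order is kept.
func returnAll(stack, popped []int, reverse bool) []int {
	if !reverse {
		return append(stack, popped...) // oldest connection ends up on top
	}
	for i := len(popped) - 1; i >= 0; i-- {
		stack = append(stack, popped[i]) // newest connection ends up on top again
	}
	return stack
}

func main() {
	// Original stack was [1 2 3] with 3 (newest) on top, so pop order is 3, 2, 1.
	popped := []int{3, 2, 1}
	fmt.Println(returnAll(nil, popped, false)) // [3 2 1]: order reversed, 1 on top
	fmt.Println(returnAll(nil, popped, true))  // [1 2 3]: original order restored
}
```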
