
Conversation

@seanlinsley

At pganalyze we run a long-lived Postgres pool, and with the help of bytehound have discovered that a certain query with large bind params appears to leak memory. While it may be possible to fix this leak, there may be leaks elsewhere or future changes may introduce them, so it's probably best for deadpool to recommend that users with long-lived pools call retain to prune old database connections.

I've added an example to the README that's similar to our configuration, and updated the retain function docs to highlight this issue.
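
For context, the kind of pruning loop this recommends looks roughly like the following. This is a sketch, not our exact configuration: it assumes deadpool's `Pool::retain` and `Metrics::age`, and the 60-second interval and 30-minute cutoff are placeholder values.

```rust
use std::time::Duration;

use deadpool_postgres::Pool;

// Sketch only: interval and maximum connection age are placeholder values.
fn spawn_pruner(pool: Pool) {
    tokio::spawn(async move {
        let mut interval = tokio::time::interval(Duration::from_secs(60));
        loop {
            interval.tick().await;
            // Keep connections younger than 30 minutes (Vec::retain-like
            // semantics: objects for which the closure returns false are
            // dropped). Older connections are recreated on demand, which
            // bounds any per-connection memory growth.
            pool.retain(|_, metrics| metrics.age() < Duration::from_secs(30 * 60));
        }
    });
}
```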

@bikeshedder added the documentation (Improvements or additions to documentation) and A-postgres (Area: PostgreSQL support / deadpool-postgres) labels on Oct 4, 2023
@bikeshedder
Collaborator

I didn't know tokio-postgres leaked that much memory. Could this be related to the connection staying active even after the client is dropped while queries are still running? The latest version of deadpool-postgres does call abort when the client wrapper is dropped: https://github.com/bikeshedder/deadpool/blob/884629fc148a8e2933ceb8c307bdb541907903f8/postgres/src/lib.rs#L444-L448
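
For readers following along, the shape of that change is roughly the following. This is a simplified sketch, not the exact deadpool-postgres source linked above:

```rust
use tokio::task::JoinHandle;
use tokio_postgres::Client;

// Simplified sketch: the wrapper owns the handle of the spawned connection
// task and aborts it when the client is dropped, so a still-running query
// cannot keep the underlying connection (and its buffers) alive.
struct ClientWrapper {
    client: Client,
    conn_task: JoinHandle<()>,
}

impl Drop for ClientWrapper {
    fn drop(&mut self) {
        self.conn_task.abort();
    }
}
```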

@seanlinsley
Author

seanlinsley commented Oct 4, 2023

Our workload involves a high number of queries that complete quickly and we have a short statement_timeout set, so I don't think that #237 will help. That said, we haven't updated deadpool yet.
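
(For reference, a statement_timeout can be applied to every pooled connection through the tokio-postgres config. This is a generic sketch with placeholder connection details and a placeholder 5-second timeout, not our actual setup:)

```rust
use deadpool_postgres::{Manager, Pool};
use tokio_postgres::NoTls;

// Sketch with placeholder connection details and a placeholder 5s timeout.
fn build_pool() -> Pool {
    let mut cfg = tokio_postgres::Config::new();
    cfg.host("localhost")
        .user("postgres")
        .dbname("app")
        // Passed to the server for every connection the pool creates.
        .options("-c statement_timeout=5000");
    let manager = Manager::new(cfg, NoTls);
    Pool::builder(manager).max_size(16).build().unwrap()
}
```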

Even after making this change we are still seeing a leak, though it's slower now. I'll keep looking into this.

Do you have any idea why CI is failing? From the other examples in the README, I would've assumed `rust,no_run` would prevent CI from running the code.

@seanlinsley
Author

seanlinsley commented Nov 13, 2023

The remaining apparent memory leak was solved with aws/amazon-ecs-agent#3594 (comment)

After confirming the memory usage was stable, I then disabled the prune_db_connections code path (while still using malloc_trim(0)) and confirmed there is definitely a leak in the Postgres connections:

[Screenshot from 2023-11-11 showing memory usage]
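
(Aside for anyone reproducing this: glibc tends to keep freed memory inside the process, so pruning alone may not show up in container metrics. The malloc_trim(0) call mentioned above is available through the libc crate on glibc targets; a sketch:)

```rust
// glibc-specific: ask the allocator to return freed pages to the OS after
// pruning connections. Guarded with a cfg since malloc_trim only exists on
// glibc targets.
#[cfg(target_env = "gnu")]
fn trim_heap() {
    unsafe {
        libc::malloc_trim(0);
    }
}
```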

@inzanez

inzanez commented Jun 4, 2024

Is there any intention to merge this at some point?

@bikeshedder
Collaborator

I'm hesitant to merge this as it points a finger at tokio-postgres and nobody else has been able to reproduce it yet.

Do you use the statement cache a lot? I mean prepare_cached, which is added by deadpool-postgres. That cache doesn't currently have any logic to remove statements, so if you use it with dynamically generated queries it's going to bloat quite a lot.

A simple manager.statement_caches.clear() does clear the caches for all connections.
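
(A sketch of how that could be wired up, assuming the pool's manager() accessor exposes the statement_caches collection mentioned above:)

```rust
// Drops every cached prepared statement across all connections in the pool.
// Sketch only; assumes the statement_caches field mentioned above.
fn clear_statement_caches(pool: &deadpool_postgres::Pool) {
    pool.manager().statement_caches.clear();
}
```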

The statement cache currently has no way of removing statements automatically so you should never use it for dynamically generated queries. See:

@inzanez

inzanez commented Jun 4, 2024

I see. And I generally agree. I am still working on a way to reproduce the issue. But I can say that I never make any use of prepare_cached.

I am currently working with my Rocket web backend, as I can only reproduce it under a real workload; I am still not at a point where I can reproduce it in a minimal example.

What I can say so far: if my Rocket web server uses a single connection pool to Postgres, I see memory issues when hitting certain endpoints.
If I use a separate connection pool for the endpoint that receives huge data loads and stores them in the database, memory consumption goes down.

So I think there might be some interaction between different statements (SELECT and INSERT).

@inzanez

inzanez commented Jun 5, 2024

OK, I will stop for now. I have found that as long as no other DB operation takes place on any of the pool's connections, the large INSERT statements I run do not lead to any memory overhead.
However, as soon as SELECT statements run before or during the heavy INSERT operations, I get a huge memory overhead on the container.
Unfortunately I cannot reproduce the issue in a minimal example, and I am not sure whether this is linked to rust-postgres (rust-postgres/rust-postgres#1081) or deadpool.

@bikeshedder
Collaborator

@seanlinsley @inzanez Have you found the source of the memory leak by now?

@seanlinsley
Author

Per rust-postgres/rust-postgres#1081, the memory leaks I observed were from BytesMut usage in rust-postgres, which is there to avoid small allocations when constructing messages to send to the server or receiving data from the server. Given the discussion in tokio-rs/bytes#634, it seems like the design of BytesMut makes it difficult to avoid this problem. At one point I was experimenting with a branch of rust-postgres that removed BytesMut in favor of regular allocations, but I never finished it.
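
(A minimal illustration of the underlying behavior, not rust-postgres code: clearing a BytesMut resets its length but keeps whatever capacity it grew to, so a long-lived connection buffer holds on to its peak allocation.)

```rust
use bytes::BytesMut;

fn main() {
    // One large message grows the buffer to ~64 MiB...
    let mut buf = BytesMut::new();
    buf.extend_from_slice(&vec![0u8; 64 * 1024 * 1024]);

    // ...and clearing it afterwards does not shrink it, so the allocation
    // stays with the buffer for as long as the connection keeps it around.
    buf.clear();
    assert!(buf.capacity() >= 64 * 1024 * 1024);
}
```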

@bikeshedder force-pushed the main branch 3 times, most recently from 7a9da3d to f713c85 on September 2, 2025