Is your feature request related to a problem? Please describe.
Right now pgxpool provides MinConns, which addresses the situation where one wants their application to always have at least some connections ready for when a new request comes in. This matters if one cares about tail latencies and doesn't want to sometimes open a DB connection in the middle of serving a request (significant when the DB is remote and TLS is involved, where establishing a connection can take hundreds of milliseconds).
However, this does not help a fairly active service (one that may have a good number of connections open at peak time), because MinConns would not kick in there. If the request rate has been stable recently and all idle connections have closed due to timeout, there may be few idle connections available, leaving the pool with no connections ready to handle a burst of new requests.
Of course one could set MinConns so high that it exceeds both one's off-peak and on-peak usage, thereby guaranteeing that idle connections are always available, but that would waste resources during off-peak hours.
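For reference, keeping the pool warm today means raising MinConns, which pgxpool accepts as a connection-string pool parameter. A sketch (host, credentials, and values are placeholders):

```
postgres://app:secret@db.example.com:5432/appdb?pool_min_conns=8&pool_max_conns=32&pool_max_conn_idle_time=5m
```

The downside described above is that `pool_min_conns` is a floor on *total* connections, not on *idle* ones, so it must be set above peak usage to guarantee spare capacity.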
Describe the solution you'd like
I believe something like MinIdleConns would solve this. The pool could always try to maintain at least MinIdleConns extra connections on top of the connections currently in use. This would mean the application is always prepared to handle a burst of MinIdleConns simultaneous requests on top of whatever load it already has. Since this would be configurable, it would let application owners tune it for whatever amount of burst traffic they want to be ready for.
Additional context
I'm happy to try and implement this if I can get an OK from a maintainer that it seems like a reasonable architecture and I'm not missing something.