[PATCH 0 of 7 cubicweb] try to improve the database connections pool
ph at itsalwaysdns.eu
Tue Mar 31 17:18:30 CEST 2020
On 20/12/2019, Laurent Peuch wrote:
I thought that I had already replied to this series, but I cannot find my reply in https://lists.cubicweb.org/pipermail/cubicweb-devel/
So sorry if I'm repeating myself, and of course sorry for the really long delay of 3 months...
I'm globally ok with the algorithm of this series, except for the delay
between closing connections, which depends on the workload and could be
addressed by making "min-size" configurable.
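To make the "min-size" idea concrete, here is a minimal sketch (names and structure are mine, not CubicWeb's actual pooler) of a pool that grows on demand but, when trimmed, only closes connections that have been idle longer than a delay, and never shrinks below `min_size`:

```python
import sqlite3
import threading
import time

class IdleTrimmingPool:
    """Hypothetical sketch of an idle-trimming pool: grows on demand,
    closes connections idle longer than `idle_delay`, but keeps at
    least `min_size` idle connections around."""

    def __init__(self, connect, min_size=2, idle_delay=30.0):
        self._connect = connect          # factory returning a new DB connection
        self._min_size = min_size
        self._idle_delay = idle_delay
        self._lock = threading.Lock()
        self._idle = []                  # list of (connection, released_at)

    def acquire(self):
        with self._lock:
            if self._idle:
                conn, _ = self._idle.pop()
                return conn
        return self._connect()           # grow on demand, no hard max here

    def release(self, conn):
        with self._lock:
            self._idle.append((conn, time.monotonic()))

    def trim(self):
        """Close connections idle longer than `idle_delay`, without
        dropping below `min_size` idle connections."""
        now = time.monotonic()
        with self._lock:
            keep, close = [], []
            for conn, released_at in self._idle:
                too_old = now - released_at > self._idle_delay
                if too_old and len(self._idle) - len(close) > self._min_size:
                    close.append(conn)
                else:
                    keep.append((conn, released_at))
            self._idle = keep
        for conn in close:
            conn.close()
```

A workload-dependent `idle_delay` is exactly what this thread argues against hard-coding: with `min_size` configurable, trimming can be aggressive without starving a busy instance.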
Also, I don't think we should go too far with such optimizations, since the
pooler can be disabled and we can use pgbouncer or another external pooler if needed.
I saw that even psycopg2 has a more naive pooler than the one we're
trying to write: https://github.com/psycopg/psycopg2/blob/master/lib/pool.py :)
I wrote the algorithm in a test to run some benchmarks, in case you want to
try various scenarios: https://paste.debian.net/plain/1137590 (though
it's hard to simulate a real workload...)
About the patch series, I think this code needs prior refactoring and simplification.
I'll submit a patch series inspired by yours.
> # This situation is really annoying because we don't want to break connections, but for
> # now it's needed. Why? Because in some situations CubicWeb likes to exhaust all
> # available connections in the pool without releasing any, and thus gets stuck.
> # Raising an exception here seems to break the faulty thread, thus releasing its
> # connections and letting CubicWeb get unstuck.
> # A better solution would be to fix this bug but I haven't been able to find it yet, my
> # best bet is that we have either:
> # * hanging connections without a timeout somewhere
> # * a thread that opens a connection then tries to open ANOTHER one without closing the
> # first, thus ending up exhausting the pool
> # The way I've found to reproduce that is to limit the pool size to a max of something
> # like 3 connections, the db can be sqlite/postgresql, that doesn't change anything,
> # and then you hammer a fresh cubicweb instance with something like:
> # > for i in $(seq 50); do firefox http://localhost:8080; done
> # and you can enjoy watching this exception being raised all the time
> # When the max pool size is at least 5, this problem disappears for the home page.
> # On another subject, when benchmarking the server with 100 max connections, this
> # pooler never ended up opening more than 9 connections.
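The second hypothesis above (a thread requesting a second connection while holding the first) can be reproduced in miniature. This sketch (illustrative only, using a plain `queue.Queue` as the bounded pool rather than CubicWeb's code) shows how a pool of size 1 self-deadlocks, surfaced here as a timeout instead of a hang:

```python
import queue
import sqlite3

# A bounded "pool" of max size 1, modelled as a queue of connections.
pool = queue.Queue()
pool.put(sqlite3.connect(":memory:"))

def handler():
    conn1 = pool.get(timeout=0.1)        # first connection: fine
    try:
        # Requesting a second connection without releasing the first:
        # with a pool of 1 this can never succeed. The timeout turns
        # the would-be deadlock into queue.Empty.
        pool.get(timeout=0.1)
    except queue.Empty:
        return "exhausted"
    finally:
        pool.put(conn1)                  # always give the first one back
    return "ok"

# handler() returns "exhausted"
```

Without the timeout, `pool.get()` would block forever, which matches the "gets stuck" behaviour described in the quoted comment.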
Maybe cubicweb stores some session stuff when requesting "/", or starts a
looping-task thread? IIRC creating entities requires 2 database
connections, because entities.eid isn't a serial: it's an int which is
incremented in a dedicated transaction. Also I wouldn't be surprised if
some web pages required more than one transaction.
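To illustrate why this pattern consumes two connections (this is a sketch of the general technique, not CubicWeb's actual implementation; table and function names are made up), eid allocation runs on its own connection and commits immediately, while the entity insert happens in a second transaction on another connection:

```python
import os
import sqlite3
import tempfile

# Two connections to the same database: one dedicated to eid
# allocation, one for the actual entity insert.
path = os.path.join(tempfile.mkdtemp(), "eid_demo.sqlite")
eid_conn = sqlite3.connect(path)
data_conn = sqlite3.connect(path)

eid_conn.execute("CREATE TABLE eid_seq (last INTEGER)")
eid_conn.execute("INSERT INTO eid_seq VALUES (0)")
eid_conn.commit()
data_conn.execute("CREATE TABLE entities (eid INTEGER PRIMARY KEY)")
data_conn.commit()

def new_eid():
    # Transaction 1: increment the counter and commit right away so
    # concurrent writers see the new value.
    eid_conn.execute("UPDATE eid_seq SET last = last + 1")
    eid = eid_conn.execute("SELECT last FROM eid_seq").fetchone()[0]
    eid_conn.commit()
    return eid

def create_entity():
    eid = new_eid()                                   # connection/transaction 1
    data_conn.execute("INSERT INTO entities VALUES (?)", (eid,))
    data_conn.commit()                                # connection/transaction 2
    return eid
```

So a single entity creation ties up two pool slots at once, which is exactly how a small pool gets exhausted under concurrent requests.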
We could get rid of this by using a higher, or even unlimited,
max connection pool size by default, so we can handle various
numbers of processes/threads.