[PATCH 1 of 7 cubicweb] [database/pool] write a new connections pool that only open additional connections when needed

Philippe Pepiot philippe.pepiot at logilab.fr
Tue Jan 14 10:21:28 CET 2020


On 20/12/2019, Laurent Peuch wrote:
> # HG changeset patch
> +        if self.current_size < self.size:
> +            try:
> +                return self._queue.get(block=True, timeout=self.min_timeout)
> +            except queue.Empty:
> +                # size could have increased during waiting
> +                if self.current_size < self.size:
> +                    # we have load, open another connection
> +                    cnxset = self.source.wrapped_connection()
> +                    self._cnxsets.append(cnxset)
> +                    self.current_size += 1
> +                    return cnxset

I just read https://julien.danjou.info/atomic-lock-free-counters-in-python/ :) And I think the "current_size" handling is not thread safe here: two threads can both pass the `current_size < self.size` check before either has incremented it, and overshoot the limit.
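For reference, a minimal sketch of how the resize check could be guarded with a lock. The names (`PoolSketch`, `_open_connection`) are illustrative stand-ins, not the actual CubicWeb code; only `current_size`, `size`, `min_timeout` and `_queue` follow the patch:

```python
import queue
import threading

class PoolSketch:
    """Illustrative pool fragment: guard current_size with a lock."""

    def __init__(self, size, min_timeout=0.05):
        self.size = size
        self.min_timeout = min_timeout
        self.current_size = 0
        self._lock = threading.Lock()
        self._queue = queue.Queue()

    def _open_connection(self):
        # Stand-in for source.wrapped_connection() in the patch.
        return object()

    def _get(self):
        try:
            return self._queue.get(block=True, timeout=self.min_timeout)
        except queue.Empty:
            # Check and increment under the lock so two threads cannot
            # both observe current_size < size and overshoot the limit.
            with self._lock:
                if self.current_size < self.size:
                    self.current_size += 1
                    grow = True
                else:
                    grow = False
            if grow:
                return self._open_connection()
            raise
```

The point is only that the check and the increment must happen atomically; where the new connection is opened (inside or outside the lock) is a separate trade-off.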

> +            # This situation is really annoying because we don't want to break connections but for
> +            # now it's needed. Why? Because in some situations CubicWeb likes to exhaust all
> +            # available connections in the pool while not releasing any and thus get stuck.
> +            #
> +            # Raising an exception here seems to break the faulty thread, thus releasing its
> +            # connections and letting all of CubicWeb get unstuck.
> +            #
> +            # A better solution would be to fix this bug but I haven't been able to find it yet, my
> +            # best bet is that we have either:
> +            # * hanging connections without a timeout somewhere
> +            # * a thread that opens a connection then tries to open ANOTHER one without closing
> +            #   the first, thus exhausting the pool
> +            #
> +            # The way I've found to reproduce that is to limit the pool size to a max of something
> +            # like 3 connections, the db can be sqlite/postgresql, that doesn't change anything,
> +            # and then you hammer a fresh cubicweb instance with something like:
> +            # > for i in $(seq 50); do firefox http://localhost:8080; done
> +            # and you can enjoy watching this exception being raised all the time
> +            #
> +            # When the max pool size is at least 5, this problem disappears for the home page.
> +            #
> +            # On another subject, when benchmarking the server with 100 max connections, this
> +            # pooler never ended up opening more than 9 connections.

I think this directly depends on the number of threads and processes that are started to handle http requests. Also, I wouldn't be surprised if cubicweb opened multiple connections for a single http request; the API allows this.
I usually set connections-pool-size equal to processes * (nb_threads + 1)
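As a worked example of that rule of thumb (the deployment values below are hypothetical):

```python
# connections-pool-size = processes * (nb_threads + 1)
processes = 4   # hypothetical: 4 worker processes
nb_threads = 5  # hypothetical: 5 request-handling threads each
pool_size = processes * (nb_threads + 1)
print(pool_size)  # 24
```

The "+ 1" leaves each process one spare connection beyond its request threads, which matches the observation that a single http request may open more than one connection.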



More information about the cubicweb-devel mailing list