[PATCH 1 of 7 cubicweb] [database/pool] write a new connections pool that only open additional connections when needed

cortex at worlddomination.be cortex at worlddomination.be
Fri Jan 17 13:33:55 CET 2020


On 2020-01-14 10:21, Philippe Pepiot wrote:
> On 20/12/2019, Laurent Peuch wrote:
>> # HG changeset patch
>> +        if self.current_size < self.size:
>> +            try:
>> +                return self._queue.get(block=True, timeout=self.min_timeout)
>> +            except queue.Empty:
>> +                # size could have increased during waiting
>> +                if self.current_size < self.size:
>> +                    # we have load, open another connection
>> +                    cnxset = self.source.wrapped_connection()
>> +                    self._cnxsets.append(cnxset)
>> +                    self.current_size += 1
>> +                    return cnxset
> 
> I just read
> https://julien.danjou.info/atomic-lock-free-counters-in-python/ :) And
> I think "current_size" handling is not thread safe here.

Thanks, I'm implementing this :)!
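For what it's worth, here is a minimal sketch of the kind of fix I have in mind: the check-and-increment on `current_size` has to happen atomically under a lock, otherwise two threads can both pass the `current_size < self.size` test and overshoot the limit. Class and method names below are hypothetical, not the patch's actual API:

```python
import threading

class PoolSketch:
    """Sketch only: guard the size counter with a lock so concurrent
    callers cannot both pass the size check and open too many
    connections."""

    def __init__(self, max_size):
        self.size = max_size
        self.current_size = 0
        self._size_lock = threading.Lock()

    def _try_reserve_slot(self):
        # The check and the increment must be one atomic step: without
        # the lock, two threads can both observe current_size < size,
        # both increment, and the pool grows past its maximum.
        with self._size_lock:
            if self.current_size < self.size:
                self.current_size += 1
                return True
            return False
```

The actual pool would call `_try_reserve_slot()` and only open a new connection when it returns True, decrementing under the same lock when a connection is discarded.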

>> +            # This situation is really annoying because we don't want to break
>> +            # connections, but for now it's needed. Why? Because in some situations
>> +            # CubicWeb likes to exhaust all available connections in the pool while
>> +            # not releasing any, and thus gets stuck.
>> +            #
>> +            # Raising an exception here seems to break the faulty thread, thus
>> +            # releasing its connections and allowing CubicWeb to get unstuck.
>> +            #
>> +            # A better solution would be to fix this bug, but I haven't been able to
>> +            # find it yet; my best bet is that we have either:
>> +            # * hanging connections without a timeout somewhere
>> +            # * a thread that opens a connection then tries to open ANOTHER one
>> +            #   without closing the first, thus ending up exhausting the pool
>> +            #
>> +            # The way I've found to reproduce this is to limit the pool size to a
>> +            # max of something like 3 connections (the db can be sqlite/postgresql,
>> +            # that doesn't change anything), and then hammer a fresh cubicweb
>> +            # instance with something like:
>> +            # > for i in $(seq 50); do firefox http://localhost:8080; done
>> +            # and you can enjoy watching this exception being raised all the time.
>> +            #
>> +            # When the max pool size is at least 5, this problem disappears for the
>> +            # home page.
>> +            #
>> +            # On another subject, when benchmarking the server with 100 max
>> +            # connections, this pooler never ended up opening more than 9
>> +            # connections.
> 
> I think this directly depends on the number of threads and processes
> that are started to handle http requests. Also I wouldn't be surprised
> if cubicweb opened multiple connections for a single http request; the
> API allows this.
> I usually set connections-pool-size equal to processes * (nb_threads + 1)

Shouldn't we dynamically set that then?
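Something like the following, as a sketch of Philippe's rule of thumb turned into a computed default (the function name and parameters are mine, not an existing CubicWeb API):

```python
def default_pool_size(nb_processes, nb_threads):
    """Hypothetical helper: derive the connections-pool-size default
    from the worker configuration instead of hard-coding it.

    One connection per worker thread, plus one spare per process, per
    Philippe's rule: processes * (nb_threads + 1).
    """
    return nb_processes * (nb_threads + 1)
```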

I'm really concerned by this "I end up deadlocking myself and then 
raising" behaviour, and I'm wondering if introducing a "burst mode" 
wouldn't be better than raising. Like, when you don't have any more 
connections available, you still open a new one instead of raising, and 
you immediately close it after having used it.
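To make the idea concrete, a rough sketch of such a burst mode, assuming a simple queue-backed pool (the `BurstPool` class and `connection_factory` are stand-ins I made up; in the patch the factory role is played by `source.wrapped_connection()`):

```python
import queue

class BurstPool:
    """Sketch of the proposed "burst mode": once the bounded pool is
    exhausted, open a temporary connection instead of raising, and
    close it immediately when it is released."""

    def __init__(self, size, connection_factory):
        self._factory = connection_factory
        self._queue = queue.Queue()
        for _ in range(size):
            self._queue.put(connection_factory())

    def acquire(self):
        try:
            # Normal path: reuse a pooled connection.
            return self._queue.get(block=False), False
        except queue.Empty:
            # Burst path: the pool is empty, so open an extra
            # connection rather than raising; flag it as ephemeral.
            return self._factory(), True

    def release(self, cnx, ephemeral):
        if ephemeral:
            # Burst connections are closed right away instead of
            # being returned to the pool.
            cnx.close()
        else:
            self._queue.put(cnx)
```

This keeps the steady-state pool bounded while turning the exhaustion case into a (slower) extra open/close instead of an exception, at the cost of briefly exceeding the configured maximum under load.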



More information about the cubicweb-devel mailing list