ZODB Client-Server framework

Overview

ZEO - Single-server client-server database server for ZODB

ZEO is a client-server storage for ZODB that lets many clients share a single storage. When you use ZEO, a lower-level storage, typically a file storage, is opened in the ZEO server process. Client programs connect to this process using a ZEO ClientStorage. ZEO provides a consistent view of the database to all clients. The ZEO client and server communicate using a custom protocol layered on top of TCP.
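
For illustration, a minimal sketch of this setup (the address, port, and file name here are assumptions, not part of any particular deployment):

    # Server side: serve a file storage over ZEO, e.g. with the runzeo script
    # that ships with ZEO:
    #
    #     $ runzeo -a localhost:8100 -f /tmp/data.fs
    #
    # Client side: connect with a ZEO ClientStorage and open a ZODB database.
    from ZEO.ClientStorage import ClientStorage
    from ZODB import DB

    storage = ClientStorage(('localhost', 8100))  # address of the ZEO server
    db = DB(storage)
    conn = db.open()
    root = conn.root()   # consistent view of the database shared by clients
    conn.close()
    db.close()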

Some alternatives to ZEO:

  • NEO is a distributed-server client-server storage.
  • RelStorage leverages RDBMS servers to provide a client-server storage.

The documentation is available on readthedocs.

Comments
  • move `setLastTID` call after connection cache invalidation processing

    … as required by ZODB>=5.6.0

    This fixes #166.

    The PR no longer actually moves the call but instead uses locking to ensure that foreign threads see the effect of setLastTid only after the callback call.
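
    A minimal sketch of the locking idea (hypothetical names, not the actual ZEO code): the new last TID must become visible to foreign threads only after the invalidation callback has run.

    import threading

    class ClientSketch:
        def __init__(self):
            self._lock = threading.Lock()
            self._last_tid = None

        def invalidateTransaction(self, tid, oids):
            with self._lock:
                self._process_invalidations(oids)  # run the callback first
                self._last_tid = tid               # only then publish the TID

        def lastTransaction(self):
            with self._lock:  # foreign threads see a consistent state
                return self._last_tid

        def _process_invalidations(self, oids):
            pass  # connection cache invalidation would go here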

    opened by d-maurer 24
  • *: Documentation, Cosmetics

    kirr:

    Extract from https://github.com/zopefoundation/ZEO/pull/195 the bits that add documentation to existing code without changing semantics, and fix typos.

    The only things I added myself are:

    • documentation for server_sync in ClientStorage;
    • stub documentation for credentials in ClientStorage.

    Even though we agreed to deprecate credentials in favour of peer-to-peer TLS, removing their support should be done as a separate step.

    As for the server_sync feature, which https://github.com/zopefoundation/ZEO/pull/195 currently removes, we actually use it for correctness:

    https://lab.nexedi.com/nexedi/erp5/blob/eaae74a082a0/product/ERP5Type/tests/custom_zodb.py#L175-179 https://lab.nexedi.com/nexedi/erp5/commit/c663257fa477 https://github.com/zopefoundation/ZODB/commit/9821696f584f

    So document it with the intent to preserve it.
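
    For reference, a hedged usage sketch of the option being documented (the server address here is an assumption):

    from ZEO.ClientStorage import ClientStorage

    # With server_sync=True, ZODB.Connection.sync() makes a round-trip to
    # the server, so after sync() the connection is guaranteed to see all
    # transactions the server had committed before the call.
    storage = ClientStorage(('localhost', 8100), server_sync=True)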

    opened by navytux 18
  • client: Fix cache corruption on loadBefore and prefetch

    Currently loadBefore and prefetch spawn an async protocol.load_before task and, after waking up on its completion, populate the cache with the received data. But between the time the protocol.load_before handler runs and the time the protocol.load_before caller wakes up, there is a window in which the event loop might run some other code, including the code that handles invalidateTransaction messages from the server.

    This means that cache updates and cache invalidations can be processed on the client in a different order than the server sent them. Such a difference in ordering can lead to data corruption if, e.g., the server sent

    <- loadBefore oid serial=tid1 next_serial=None
    <- invalidateTransaction tid2 oid
    

    and the client processed it as

    invalidateTransaction tid2 oid
    cache.store(oid, tid1, next_serial=None)
    

    because here the end effect is that the invalidation for oid@tid2 is not applied to the cache.

    The fix is simple: perform cache updates right after the loadBefore reply is received.
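
    A minimal, self-contained asyncio sketch of that idea (hypothetical names, not the actual ZEO code): the cache is populated in the same event-loop callback that receives the reply, so an invalidateTransaction queued behind it can never be reordered before the cache update.

    import asyncio

    class Cache(dict):
        def store(self, oid, serial, next_serial, data):
            self[oid] = (serial, next_serial, data)

    class ClientProtocol:
        def __init__(self):
            self.cache = Cache()

        def reply_received(self, oid, data, serial, next_serial, fut):
            # Runs directly in the event loop: invalidations queued behind
            # this callback are processed only after the cache update below.
            self.cache.store(oid, serial, next_serial, data)
            fut.set_result((data, serial, next_serial))

        async def load_before(self, oid, tid):
            loop = asyncio.get_running_loop()
            fut = loop.create_future()
            # Simulate the server reply arriving on a later loop iteration.
            loop.call_soon(self.reply_received, oid, b'data', tid, None, fut)
            return await fut  # by the time we wake up, the cache is updated

    asyncio.run(ClientProtocol().load_before(b'\0' * 8, b'\1' * 8))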

    Fixes: https://github.com/zopefoundation/ZEO/issues/155

    The fix is based on the analysis and an initial patch by @jmuchemb:

    https://github.com/zopefoundation/ZEO/issues/155#issuecomment-581046248

    ~~For tests, similarly to https://github.com/zopefoundation/ZODB/pull/345, I wanted to include a general test for this issue into ZODB, so that all storages - not just ZEO - are exercised for this race scenario. However in ZODB test infrastructure there is currently no established general way to open several client storage connections to one storage server. This way the test for this issue currently lives in wendelin.core repository (and exercises both NEO and ZEO there):~~

    ~~https://lab.nexedi.com/nexedi/wendelin.core/commit/c37a989d~~

    ~~I understand there is a room for improvement. For the reference, my original ZEO-specific data corruption reproducer is here:~~

    ~~https://github.com/zopefoundation/ZEO/issues/155#issuecomment-577602842 https://lab.nexedi.com/kirr/wendelin.core/blob/ecd0e7f0/zloadrace5.py~~

    EDIT: this fix now has a corresponding test that should be coming in via https://github.com/zopefoundation/ZEO/pull/170 and https://github.com/zopefoundation/ZODB/pull/345.

    /cc @d-maurer, @jamadden, @dataflake, @jimfulton

    opened by navytux 18
  • Include both modified and just created objects into invalidations

    Starting from 1999 (b3805a2f "just getting started"), only modified - not just created - objects were included in ZEO invalidation messages:

    https://github.com/zopefoundation/ZEO/commit/b3805a2f#diff-52fb76aaf08a1643cdb8fdaf69e37802R126-R127

    In 2000 this behaviour was further changed to not send an invalidation message at all if the only objects in a transaction were the created ones:

    https://github.com/zopefoundation/ZEO/commit/230ffbe8#diff-52fb76aaf08a1643cdb8fdaf69e37802L163-R163

    In 2016 the latter was reconsidered as a bug and fixed in ZEO5, because ZODB5 relies more heavily on MVCC semantics and needs to be notified about every transaction committed to the storage in order to properly update the ZODB.Connection view:

    https://github.com/zopefoundation/ZEO/commit/02943acd#diff-52fb76aaf08a1643cdb8fdaf69e37802L889-R834 https://github.com/zopefoundation/ZEO/commit/9613f09b

    In 2020, with this patch, I'm proposing to also reconsider the initial "send only modified, not created objects" behaviour as a bug, and to include both modified and just-created objects in invalidation messages, at least for the following reasons:

    • a ZODB client (not necessarily the native ZODB/py client) can maintain a raw cache for the storage. If such a client tries to load an oid at a database view where that object did not yet exist, it gets a "no object" reply and stores that information in the raw cache; to properly invalidate the cache, it then needs an invalidation message from the ZODB server that includes the created object.

    • tools like zodb watch [1,2,3] don't work properly (give incorrect output) if not all objects modified/created by a transaction are included in invalidation messages.

    • similarly to zodb watch, a monitoring tool that wants to be notified of all created/modified objects won't see the full database-change picture, and so won't work properly, without knowing which objects were created.

    • wendelin.core 2 - which builds data from ZODB BTrees and data objects into a virtual filesystem - needs to get invalidation messages with both modified and created objects to properly implement its own lazy invalidation and isolation protocol for file blocks in the OS cache: when a block of a file is accessed, all clients that have this block mmapped need to be notified and asked to remmap that block into a particular revision of the file, depending on the client's view of the filesystem and database [4,5].

      To compute where a client needs to remmap the block, the WCFS server (which in turn acts as a ZODB client wrt the ZEO/NEO server) needs to be able to see whether the client's view of the filesystem is from before the object's creation (and then ask that client to pin the block to a hole), or from after the creation (and then ask the client to pin the block to the corresponding revision).

      This computation needs the ZODB server to send invalidation messages in full: with both modified and just-created objects.

    The patch is simple - it removes the if serial != b"\0\0\0\0\0\0\0\0" check before queuing an oid into ZEOStorage.invalidated, and adjusts the tests correspondingly. From my point of view and experience, in practice this patch should not cause any compatibility break or performance regression.
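
    In sketch form (hypothetical class and method names, following the description above), the change amounts to:

    z64 = b'\0' * 8  # serial reported for a just-created object

    class ZEOStorageSketch:
        def __init__(self):
            self.invalidated = []

        def queue_invalidation(self, oid, serial):
            # Before the patch, just-created objects were filtered out:
            #     if serial != z64:
            #         self.invalidated.append(oid)
            # After the patch, modified and created objects are both queued:
            self.invalidated.append(oid)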

    Thanks beforehand, Kirill

    /cc @jimfulton

    [1] https://lab.nexedi.com/kirr/neo/blob/ea53a795/go/zodb/zodbtools/watch.go [2] https://lab.nexedi.com/kirr/neo/commit/e0d59f5d [3] https://lab.nexedi.com/kirr/neo/commit/c41c2907

    [4] https://lab.nexedi.com/kirr/wendelin.core/blob/1efb5876/wcfs/wcfs.go#L94-182 [5] https://lab.nexedi.com/kirr/wendelin.core/blob/1efb5876/wcfs/client/wcfs.h#L20-71

    opened by navytux 18
  • `trollius` has been removed from PyPI; ZEO cannot be installed on 2.7

    That means ZEO can no longer be installed on Python 2.7:

    Collecting trollius; python_version == "2.7" (from ZEO>=5.2->RelStorage==3.0a6.dev0)
      ERROR: Could not find a version that satisfies the requirement trollius; python_version == "2.7" (from ZEO>=5.2->RelStorage==3.0a6.dev0) (from versions: none)
    ERROR: No matching distribution found for trollius; python_version == "2.7" (from ZEO>=5.2->RelStorage==3.0a6.dev0)
    
    Command exited with code 1
    

    It should be noted that Plone 5.2 depends on trollius 2.2, and Plone 5.1 depends on 2.1.

    Luckily I had some pre-built wheels of 2.2 that I re-uploaded, which might solve the problem in some cases (it seems to have fixed my CI), but I cannot upload the trollius-2.2.tar.gz file (because that name has already been used).

    Perhaps we can coordinate with @vstinner to find a happy resolution.

    opened by jamadden 18
  • Cannot run tests locally

    I tried to run the tests for ZEO locally, both via tox and directly via bin/test after a manual buildout, and I get a lot of failures and errors.

    At first glance, there are two categories:

    • assertion failures
    • timeout errors

    Examples:

    (ZEO) jugmac00@jugmac00-XPS-13-9370:~/Projects/ZEO$ bin/test -vv
    /home/jugmac00/Projects/ZEO/src/ZEO/tests/testZEO.py:1247: DeprecationWarning: invalid escape sequence \d
      """
    Running tests at all levels
    Running zope.testrunner.layer.UnitTests tests:
      Set up zope.testrunner.layer.UnitTests in 0.000 seconds.
      Running:
     testClientBasics (ZEO.asyncio.tests.ClientTests)
    
    Failure in test testClientBasics (ZEO.asyncio.tests.ClientTests)
    Traceback (most recent call last):
      File "/usr/lib/python3.6/unittest/case.py", line 59, in testPartExecutor
        yield
      File "/usr/lib/python3.6/unittest/case.py", line 605, in run
        testMethod()
      File "/home/jugmac00/Projects/ZEO/src/ZEO/asyncio/tests.py", line 311, in testClientBasics
        self.assertEqual(self.pop(), (2, False, 'get_info', ()))
      File "/usr/lib/python3.6/unittest/case.py", line 829, in assertEqual
        assertion_func(first, second, msg=msg)
      File "/usr/lib/python3.6/unittest/case.py", line 822, in _baseAssertEqual
        raise self.failureException(msg)
    AssertionError: [] != (2, False, 'get_info', ())
    
    Error in test checkQuickVerificationWith2Clients (ZEO.tests.testConnection.FileStorageInvqTests)
    Traceback (most recent call last):
      File "/usr/lib/python3.6/unittest/case.py", line 59, in testPartExecutor
        yield
      File "/usr/lib/python3.6/unittest/case.py", line 605, in run
        testMethod()
      File "/home/jugmac00/Projects/ZEO/src/ZEO/tests/ConnectionTests.py", line 598, in checkQuickVerificationWith2Clients
        perstorage = self.openClientStorage(cache="test")
      File "/home/jugmac00/Projects/ZEO/src/ZEO/tests/ConnectionTests.py", line 151, in openClientStorage
        **self._client_options())
      File "/home/jugmac00/Projects/ZEO/src/ZEO/ClientStorage.py", line 281, in __init__
        self._wait()
      File "/home/jugmac00/Projects/ZEO/src/ZEO/asyncio/client.py", line 825, in wait
        self.wait_for_result(self.client.connected, timeout)
      File "/home/jugmac00/Projects/ZEO/src/ZEO/asyncio/client.py", line 759, in wait_for_result
        raise ClientDisconnected("timed out waiting for connection")
    ZEO.Exceptions.ClientDisconnected: timed out waiting for connection
    

    Complete log -> https://gist.github.com/jugmac00/40b78aee5e0f4c5d64fdab5841937e48

    I tried to run the tests on Ubuntu 18.04 with Python 3.6 / 2.7 and on Fedora 28 with Python 3.6 / 2.7 - no luck.

    Does it need something special (like some C extensions or whatever) to get the test suite to run?

    I just wonder, as Travis seems to run fine (with an occasional re-run: https://github.com/zopefoundation/ZEO/pull/139#issuecomment-486284582).

    question help wanted 
    opened by jugmac00 17
  • fix ZopeUndo.Prefix decode on python 2

    On Python 2, when undoLog is called via ZEO with a ZopeUndo.Prefix filter, the ZEO server raises an exception:

    File ".../src/ZEO/asyncio/marshal.py", line 114, in pickle_server_decode
        return unpickler.load() # msgid, flags, name, args
    File ".../src/ZEO/asyncio/marshal.py", line 164, in server_find_global
        raise ImportError("import error %s: %s" % (module, msg))
    ImportError: import error copy_reg:
    

    The problem was introduced when ZopeUndo.Prefix was changed to subclass object.
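
    The mechanism can be demonstrated with plain pickle (a hedged illustration; on Python 3 the module is spelled copyreg): instances of new-style classes pickled with protocol <= 1 reference copy_reg._reconstructor, a global which ZEO's restricted server_find_global refuses to import.

    import pickle
    import pickletools

    class P(object):  # new-style, like the changed ZopeUndo.Prefix
        pass

    # The protocol-1 pickle carries a GLOBAL opcode for copyreg._reconstructor
    # ('copy_reg' on Python 2), which the server-side unpickler rejects with
    # the ImportError shown above.
    pickletools.dis(pickle.dumps(P(), protocol=1))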

    This PR tries to fix the problem.

    bug 
    opened by mamico 16
  • Remove testing for mtacceptor and dead Python versions

    Fixes #188

    This PR removes GitHub Actions tests for the asyncio/mtacceptor module because it appears to be unused. The module itself remains in place and will be removed in ZEO version 6.

    I also removed some other GitHub Actions tests, like those for Python 3.5, which is dead.

    opened by dataflake 15
  • Load uses load before

    Changed loadBefore to behave more like load did, especially with regard to the load lock. This allows ZEO to work with the upcoming ZODB 5, which uses loadBefore rather than load.

    Reimplemented load using loadBefore, thus exercising loadBefore extensively via the existing tests.
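
    A rough sketch of the idea (not the exact ZEO code): load(oid) becomes loadBefore(oid, maxtid), so the existing load tests exercise loadBefore.

    from ZODB.POSException import POSKeyError

    m64 = b'\xff' * 8  # maximum possible tid: "load before the end of time"

    def load(self, oid, version=''):  # sketch of a storage method
        result = self.loadBefore(oid, m64)
        if result is None:
            raise POSKeyError(oid)
        data, serial, _next_serial = result
        return data, serial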

    opened by jimfulton 15
  • Multiple ZEO data corruptions due to concurrency bugs

    Hello up there. In https://github.com/zopefoundation/ZEO/pull/207 it was noted that current ZEO5 master sometimes fails check_race_load_vs_external_invalidate against a ZEO4 server. I dug in to find out why and discovered multiple concurrency bugs - in both ZEO4 and ZEO5 - that lead to data corruption. Please find the details about those bugs below:

    Bug1: ZEO5.client - ZEO4.server : race between load / invalidate

    The first bug is that ZEO5.client - contrary to ZEO4.client - does not account for simultaneous invalidations when working against a ZEO4.server. It shows up as e.g.

    (z-dev) kirr@deca:~/src/wendelin/z/ZEO5$ ZEO4_SERVER=1 zope-testrunner -fvvvx --test-path=src -t check_race_load_vs_external
    ...
    
    AssertionError: T1: obj1 (24)  !=  obj2 (23)
    obj1._p_serial: 0x03ea4b6fb05b52cc  obj2._p_serial: 0x03ea4b6faf253855
    zconn_at: 0x03ea4b6fb05b52cc  # approximated as max(serials)
    zstor.loadBefore(obj1, @zconn.at)       ->  serial: 0x03ea4b6fb05b52cc  next_serial: None
    zstor.loadBefore(obj2, @zconn.at)       ->  serial: 0x03ea4b6faf253855  next_serial: 0x03ea4b6fb07def66
    zstor._cache.clear()
    zstor.loadBefore(obj1, @zconn.at)       ->  serial: 0x03ea4b6fb05b52cc  next_serial: 0x03ea4b6fb07def66
    zstor.loadBefore(obj2, @zconn.at)       ->  serial: 0x03ea4b6fb05b52cc  next_serial: 0x03ea4b6fb07def66
    

    indicating that obj2 was provided to the user from a cache entry that had erroneously become stale.

    An IO trace showing the message exchange between client and server makes this look as follows:

    # loadBefore issued
    tx (('\x00\x00\x00\x00\x00\x00\x00\x02', '\x03\xeaKo\xaf%8V'), False, 'loadBefore', ('\x00\x00\x00\x00\x00\x00\x00\x02', '\x03\xeaKo\xaf%8V'))
    
    # received invalidateTransaction
    rx (0, 1, 'invalidateTransaction', ('\x03\xeaKo\xb0[R\xcc', ['\x00\x00\x00\x00\x00\x00\x00\x01', '\x00\x00\x00\x00\x00\x00\x00\x02']))
    
    # received loadBefore reply but with end_tid=None !!!
    rx (('\x00\x00\x00\x00\x00\x00\x00\x02', '\x03\xeaKo\xaf%8V'), 0, '.reply', ('\x80\x03cZODB.tests.MinPO\nMinPO\nq\x01.\x80\x03}q\x02U\x05valueq\x03K\x17s.', '\x03\xeaKo\xaf%8U', None))
    

    which:

    1. contradicts what @jimfulton wrote about ZEO4 - that invalidations there are sent in a callback called while the storage lock is held, blocking loads while committing - and
    2. points out what the bug is:

    Since in ZEO4 loads can be handled while a commit is in progress, ZEO4.client takes special care to detect whether an invalidateTransaction came in between a load request and the corresponding .reply response, and does not update the cache if an invalidateTransaction sneaked in between:

    https://github.com/zopefoundation/ZEO/blob/47d3fbe8cbf24cad91b183483df069ef20708874/src/ZEO/ClientStorage.py#L367-L374 https://github.com/zopefoundation/ZEO/blob/47d3fbe8cbf24cad91b183483df069ef20708874/src/ZEO/ClientStorage.py#L841-L852 https://github.com/zopefoundation/ZEO/blob/47d3fbe8cbf24cad91b183483df069ef20708874/src/ZEO/ClientStorage.py#L1473-L1476

    but in ZEO5.client, loadBefore does not have anything like that:

    https://github.com/zopefoundation/ZEO/blob/fc0729b3cc754bda02c7f54319260b5527dd42a3/src/ZEO/ClientStorage.py#L603-L608 https://github.com/zopefoundation/ZEO/blob/fc0729b3cc754bda02c7f54319260b5527dd42a3/src/ZEO/asyncio/client.py#L289-L309

    and thus an invalidateTransaction sneaking in between a loadBefore request and the corresponding .reply causes ZEO client cache corruption.

    In the original check_race_load_vs_external_invalidate the problem appears only sometimes, but the bug happens with ~100% probability once the following delay is injected on the server after loadBefore:

    --- a/src/ZEO/tests/ZEO4/StorageServer.py
    +++ b/src/ZEO/tests/ZEO4/StorageServer.py
    @@ -285,7 +285,9 @@ def loadEx(self, oid):
     
         def loadBefore(self, oid, tid):
             self.stats.loads += 1
    -        return self.storage.loadBefore(oid, tid)
    +        x = self.storage.loadBefore(oid, tid)
    +        time.sleep(0.1)
    +        return x
     
         def getInvalidations(self, tid):
             invtid, invlist = self.server.get_invalidations(self.storage_id, tid)
    

    so maybe, in addition to normal test runs, it would also be a good idea to run the whole ZEO test suite against the so-amended storage backend. That would be similar to how, e.g., races are detected by my tracetest package.


    This particular race does not happen for ZEO5.client - ZEO5.server, because there is a difference in how invalidations are emitted by the server:

    ZEO4 schedules sending invalidations via callAsyncNoSend:

    https://github.com/zopefoundation/ZEO/blob/47d3fbe8cbf24cad91b183483df069ef20708874/src/ZEO/StorageServer.py#L1453-L1467

    which immediately puts the message into the output queue and arms the trigger to wake up the IO thread:

    https://github.com/zopefoundation/ZEO/blob/47d3fbe8cbf24cad91b183483df069ef20708874/src/ZEO/zrpc/connection.py#L574-L581 https://github.com/zopefoundation/ZEO/blob/47d3fbe8cbf24cad91b183483df069ef20708874/src/ZEO/zrpc/connection.py#L546-L558

    The bug, thus, can happen because the invalidateTransaction message is put into the output queue to the client immediately on tpc_finish.

    ZEO5, however, queues both actions to the IO thread via call_soon_threadsafe:

    https://github.com/zopefoundation/ZEO/blob/fc0729b3cc754bda02c7f54319260b5527dd42a3/src/ZEO/StorageServer.py#L827-L845 https://github.com/zopefoundation/ZEO/blob/fc0729b3cc754bda02c7f54319260b5527dd42a3/src/ZEO/asyncio/server.py#L153-L154

    which results in the invalidateTransaction message being appended to the output queue at the end of the current event loop cycle, not immediately after tpc_finish on the server. In https://github.com/zopefoundation/ZEO/blob/master/docs/ordering.rst#zeo-5 @jimfulton wrote the following about this:

    The server-side handling of invalidations is a bit trickier in ZEO 5 because there isn't a thread-safe queue of outgoing messages in ZEO 5 as there was in ZEO 4. The natural approach in ZEO 5 would be to use asyncio's call_soon_threadsafe to send invalidations in a client's thread. This could easily cause invalidations to be sent after loads. As shown above, this isn't a problem for ZODB 5, at least assuming that invalidations arrive in order. This would be a problem for ZODB 4. For this reason, we require ZODB 5 for ZEO 5.

    Anyway, this particular bug does not happen for ZEO5.client - ZEO5.server, because the server does not inject an invalidateTransaction message between a loadBefore request and the corresponding .reply response.

    The fix is either 1) to abandon support for ZEO4 servers completely, or 2) to implement load-tracking similar to the ZEO4 client.

    If we go with option "1", I suggest explicitly rejecting the Z4 protocol from both the ZEO5 client and the ZEO5 server.
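
    For option "2", a rough sketch of ZEO4-style load tracking (hypothetical names): remember which oids have a load in flight, and refuse to cache a reply if an invalidation for its oid arrived in the meantime.

    class LoadTracker:
        def __init__(self):
            self._loading = {}  # oid -> "invalidated while loading" flag

        def load_started(self, oid):
            self._loading[oid] = False

        def invalidated(self, oid):
            if oid in self._loading:
                self._loading[oid] = True  # the pending reply is now stale

        def may_cache(self, oid):
            # Called when the .reply arrives; cache only if no invalidation
            # for this oid sneaked in between request and reply.
            return not self._loading.pop(oid)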

    Bug2: ZEO5.client - ZEO5.server, ZEO4.client - ZEO4.server : race between invalidate / disconnect

    While investigating Bug1 and verifying whether it also shows itself on ZEO4, I found out that, even though ZEO4.client has explicit support for sneaked-in invalidateTransaction messages, check_race_load_vs_external_invalidate also fails sometimes on ZEO4.client - ZEO4.server, but for another reason: the failing assertion was similar to the one in Bug1, but the IO trace showed that in the failing case one intermediate invalidateTransaction message was not delivered to the client at all. The missed invalidation makes the client think that both objects were unchanged and retrieve them either from the ZODB.Connection or the ZEO cache; but if, for some reason, one of the objects is evicted from the cache, the result is the inconsistency detected by the broken test invariant.

    The bug here is due to a race on the server: when sending out invalidateTransaction messages to all connected clients, both ZEO4 and ZEO5 iterate over the list of connected clients without any locking:

    https://github.com/zopefoundation/ZEO/blob/47d3fbe8cbf24cad91b183483df069ef20708874/src/ZEO/StorageServer.py#L1108-L1115

    https://github.com/zopefoundation/ZEO/blob/fc0729b3cc754bda02c7f54319260b5527dd42a3/src/ZEO/StorageServer.py#L843-L845

    So, if a client disconnects during such an iteration, some other client will be skipped during the iteration and thus will not receive the currently-processed invalidateTransaction message. The following code illustrates this:

    In [9]: l = [1,2,3,4]		# list of clients
    
    In [10]: i = iter(l)		# start iterating over it
    
    In [11]: next(i)		# first client yielded
    Out[11]: 1
    
    In [12]: next(i)		# second client yielded
    Out[12]: 2
    
    In [13]: del l[0]		# first client disconnects
    
    In [14]: l
    Out[14]: [2, 3, 4]
    
    In [15]: next(i)		# `3` is skipped wrongly
    Out[15]: 4
    
    In [16]: next(i)
    ---------------------------------------------------------------------------
    StopIteration                             Traceback (most recent call last)
    

    This bug affects both servers: ZEO4, and ZEO5 in multi-threaded mode. ZEO5 is affected only in multi-threaded mode because, when the ZEO5 server runs single-threaded, a disconnection should not be handled simultaneously with invalidateTransaction processing. ZEO5's multi-threaded mode is activated by the ZEO_MTACCEPTOR=1 environment variable. This mode is scheduled to be removed in ZEO6, but still, my understanding is that the bug does not show itself in single-threaded mode only by chance, not by design - there are neither comments nor explicit care to avoid this race.

    The fix is to iterate over a snapshot copy of the list of clients when doing this kind of iteration.
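
    In sketch form, mirroring the illustration above:

    # Iterating over a snapshot copy makes concurrent removals harmless.
    clients = [1, 2, 3, 4]

    for c in list(clients):    # snapshot: removals below cannot skip anyone
        if c == 1:
            clients.remove(1)  # simulate the first client disconnecting
        print(c)               # prints 1, 2, 3, 4 - client `3` is not skipped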

    opened by navytux 13
  • Asyncio-based ZEO client and server

    The main things in this change:

    • Change from one asynchronous networking framework (asyncore) to another (asyncio).
    • Make connection logic much simpler.
    • Simplify the client threading model, eliminating many locks.
    • Reduce extra levels of indirection that made the code harder to reason about.

    How to approach reviewing this?

    I would recommend starting with ZEO.asyncio, noting that the cache is now managed by the networking thread and doesn't require locks; then look at the changes to ClientStorage, StorageServer, ZEO.runzeo, and ZEO.tests.forker. I would focus on one side at a time, client or server.

    Ask questions in the PR or on IRC.

    If you're feeling especially thorough, you could review the test changes.

    opened by jimfulton 13
  • Drop Python2 support  (WIP: do not merge before #208 or #195)

    kirr:

    ZEO6 is going to support only Python3 - see the discussion around https://github.com/zopefoundation/ZEO/pull/195#issuecomment-1093837395.

    This patch was extracted from https://github.com/zopefoundation/ZEO/pull/195 .

    opened by navytux 22
  • `ZEO.asyncio`: switch to `async/await` style

    This PR tries to make the ZEO client interface implementation easier to understand; the result is likely slightly less efficient than the current implementation.

    The PR switches to the standard async/await style for the asyncio part of the client interface implementation; therefore, it must drop Python 2 support. Throughout, it uses standard asyncio Futures (with scheduled callbacks) instead of the special futures (with immediate callbacks) used in some places. It significantly extends (and in some places corrects) the source code documentation.
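
    The difference between the two future styles can be seen with a small self-contained example: a standard asyncio Future schedules its callbacks via call_soon, whereas an "immediate callback" future would run them inside set_result.

    import asyncio

    async def main():
        fut = asyncio.get_running_loop().create_future()
        fut.add_done_callback(lambda f: print('callback runs'))
        fut.set_result(1)
        print('set_result returned')  # printed first: callback is scheduled
        await asyncio.sleep(0)        # give the loop a chance to run it

    asyncio.run(main())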

    @navytux The PR drops "credentials" support. Maybe, this is a mistake. Two aspects made me drop it:

    1. the parameters have been in a section with heading "mostly ignored"
    2. a comment stated that ZEO 5 dropped "credentials" in favor of SSL.

    Thus, I deliberately ignored the parameters. Only later did I notice that they might still be useful for the "ZEO 6 client + ZEO 4 server" use case. If we want continued support for this use case, the "credentials" support can easily be restored.

    The PR removes the testing dependency on mock and Random2. Random2 was there to get the same randomization on Python 2 and Python 3; with Python 2 support dropped, Random2 was no longer necessary. However, replacing it with Python 3's random caused a huge diff for test_cache.

    Note to reviewers: due to the significant changes, it is likely easier to look at the files directly rather than at the diffs.

    opened by d-maurer 142
  • (Commit-)`LockManager` is not fair

    The ZEO server associates a commit lock with each of its storages, managed by a LockManager.

    The lock manager uses a dict to remember which connections are waiting for the commit lock. When the lock is released, the lock manager gives each of them the chance to obtain the lock - at most one will succeed; the remaining ones are put back into the waiting dict. The chance of success mostly depends on the order in which connections are asked whether they can use the lock. The current implementation uses Python's dict order. This is not a fair order: a connection in first place has a persistently higher chance to get the lock than the following connections (because when they come back, the dict order remains the same). Theoretically, it is possible that a pair of high-volume writers starves the other writers.

    Fairness is only important if there is significant contention - in this particular case, a significant transaction frequency from multiple clients. Because ZODB does not perform well in such situations, they are likely rare. Fairness, then, may not be a big issue.
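
    For comparison, a minimal sketch (not the ZEO implementation) of a fair hand-off: waiters are kept in FIFO order, so no connection is persistently favoured.

    from collections import deque

    class FairCommitLock:
        def __init__(self):
            self.held = False
            self.waiting = deque()  # FIFO instead of dict order

        def acquire(self, connection):
            if not self.held:
                self.held = True
                return True
            self.waiting.append(connection)  # first come, first served
            return False

        def release(self):
            if self.waiting:
                # The oldest waiter gets the lock; lock_granted() is an
                # assumed notification method on the connection object.
                self.waiting.popleft().lock_granted()
            else:
                self.held = False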

    enhancement 
    opened by d-maurer 0
  • ClientStorage: race condition when used with multiple addresses

    ClientStorage supports several server addresses. Those addresses specify servers which essentially serve the same data, e.g. a replicated storage. When ClientStorage connects, it tries to open protocol connections to all those servers. The first connection with the required capabilities (writable/readable) wins and is used until it is lost; then a reconnection (to all servers) is attempted.

    The implementation is spread across two classes: Protocol and Client. Protocol objects represent the connection to a concrete server; there is a single Client object implementing the coordination of the server connections.

    From some point during the establishment of a protocol connection, the server can start to asynchronously send (invalidation) messages. Logically, those messages belong to the protocol (because the protocol connection has not yet been fully established); however, they are stored in the client object. When several protocol connection attempts arrive at this point, they concurrently update a shared client data structure - which may cause a race condition.

    I am not sure whether this is a real problem: all servers are (implicitly) assumed to serve essentially the same data (if they don't, nothing good will come of it). Therefore, the messages stored in the shared data structure should be comparable; they call for invalidations, so processing them twice may not be a problem. Nevertheless, it would be clearer and safer if the protocol objects did not use a shared data structure.
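
    A sketch of the suggested direction (hypothetical names): each Protocol buffers early invalidation messages locally, and only the winning protocol's buffer is handed over to the Client.

    class Protocol:
        def __init__(self):
            self.pending_invalidations = []  # per-connection, not shared

        def invalidateTransaction(self, tid, oids):
            # Buffer locally while the connection is being established.
            self.pending_invalidations.append((tid, oids))

    class Client:
        def use_protocol(self, protocol):
            # Replay buffered messages only from the connection that won.
            for tid, oids in protocol.pending_invalidations:
                self.process_invalidation(tid, oids)

        def process_invalidation(self, tid, oids):
            pass  # cache invalidation would happen here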

    bug enhancement 
    opened by d-maurer 4
  • tests hard depend on deprecated mock

    Could you please consider using something like

    -        import mock
    +        try:
    +            from unittest import mock
    +        except ImportError:
    +            import mock
    

    in the code?

    opened by pgajdos 5