Backport of the concurrent.futures package to Python 2.6 and 2.7

Overview

This is a backport of the concurrent.futures standard library module to Python 2.

It does not work on Python 3 due to Python 2 syntax being used in the codebase. Python 3 users should not attempt to install it, since the package is already included in the standard library.
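The backport exposes the same API as the Python 3 standard-library module; a minimal usage sketch (on Python 2 the import resolves to this package, on Python 3 to the standard library):

```python
from concurrent.futures import ThreadPoolExecutor

# Submit a callable to a small thread pool and block on its result.
with ThreadPoolExecutor(max_workers=2) as executor:
    future = executor.submit(pow, 2, 10)
    print(future.result())  # 1024
```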

To conditionally require this library only on Python 2, you can do this in your setup.py:

setup(
    ...
    extras_require={
        ':python_version == "2.7"': ['futures']
    }
)

Or, using the newer syntax:

setup(
    ...
    install_requires=[
        'futures; python_version == "2.7"'
    ]
)

Warning

The ProcessPoolExecutor class has known (unfixable) problems on Python 2 and should not be relied on for mission critical work. Please see Issue 29 and upstream bug report for more details.

Comments
  • syntax error with  raise type(self._exception), self._exception, self._traceback (_base.py)


    I installed futures 3.0.3 and it raised the following exception:

    from IPython.kernel import KernelManager
    

    File "c:\apythonensae\python\lib\site-packages\IPython\kernel\__init__.py", line 4, in <module>
      from . import zmq
    File "c:\apythonensae\python\lib\site-packages\IPython\kernel\zmq\__init__.py", line 10, in <module>
      from .session import Session
    File "c:\apythonensae\python\lib\site-packages\IPython\kernel\zmq\session.py", line 41, in <module>
      from zmq.eventloop.ioloop import IOLoop
    File "c:\apythonensae\python\lib\site-packages\zmq\eventloop\__init__.py", line 3, in <module>
      from zmq.eventloop.ioloop import IOLoop
    File "c:\apythonensae\python\lib\site-packages\zmq\eventloop\ioloop.py", line 35, in <module>
      from tornado.ioloop import PollIOLoop, PeriodicCallback
    File "c:\apythonensae\python\lib\site-packages\tornado\ioloop.py", line 46, in <module>
      from tornado.concurrent import TracebackFuture, is_future
    File "c:\apythonensae\python\lib\site-packages\tornado\concurrent.py", line 37, in <module>
      from concurrent import futures
    File "c:\apythonensae\python\lib\site-packages\concurrent\futures\__init__.py", line 8, in <module>
      from concurrent.futures._base import (FIRST_COMPLETED,
    File "c:\apythonensae\python\lib\site-packages\concurrent\futures\_base.py", line 355
      raise type(self._exception), self._exception, self._traceback
           ^
    SyntaxError: invalid syntax

    invalid 
    opened by sdpython 24
  • futures 3.1.2 breaks the world :-)


    If you have a Python 2/3 compatible package whose third-party dependencies use concurrent.futures, then futures 3.1.2 causes an error on install for Python 3. That's unfortunate, as it's a common dependency for things that are 2/3 compatible. I have a half-dozen third-party libraries in my venv that depend on futures.

    Collecting futures==3.1.2 (from -r requirements.txt (line 8))
      Using cached https://files.pythonhosted.org/packages/4a/f4/418e844d868e34638486732417fb82b05031910059d88b86aaea9c70f699/futures-3.1.2.tar.gz
        Complete output from command python setup.py egg_info:
        This backport is meant only for Python 2.
        It does not work on Python 3, and Python 3 users do not need it as the concurrent.futures package is available in the standard library.
        For projects that work on both Python 2 and 3, the dependency needs to be conditional on the Python version, like so:
        extras_require={':python_version == "2.7"': ['futures']}
        
    
    opened by kapilt 20
  • Create better error info in the case that PY3 was installed


    This closes #78. There is more context in that issue.

    The crux of the issue is that installing this library on Python 3 breaks Python 3's __get_result method. This breaks pytest.

    This is particularly problematic if you are relying on a library that itself has not updated their dependencies to have the newer 'futures; python_version == "2.7"' dependency specifier in the install_requires.

    In general, it is better to stop people from installing something if doing so will break a common testing framework.

    That means we need to provide an error to let people know why it didn't work.

    At the same time, it also tells library writers where to go to learn how to change their install_requires in setuptools.setup, for when their users run into the error and copy-paste it into issues.

    opened by mpacer 7
  • Implement exception-chaining for future.result()


    Chained exceptions are vital in debugging programs that use this library.
    
    They are going to be added to py3k with
    http://www.python.org/dev/peps/pep-3134/ but for people using python 2,
    I've attached a quick-and-dirty implementation that is specific to
    python-futures.
    
    I've tested it for ThreadPoolExecutor but not ProcessPoolExecutor.
    
    Sample output when future.result() is called:
    
    
    Traceback (most recent call last):
      File "src/scrape.py", line 287, in <module>
        sys.exit(main(*args, **opts.__dict__))
      File "src/scrape.py", line 65, in main
        ret = f(*args)
      File "src/scrape.py", line 187, in round_photo
        self.ff.commitUserPhotos(socgr.vs["id"], ppdb)
      File "/home/infinity0/0/work/compsci/ii-2009-project/tag-routing/src/tags/scrape/flickr.py", line 265, in commitUserPhotos
        self.execAllUnique(users, ppdb, "producer db (user)", run, post, conc_m)
      File "/home/infinity0/0/work/compsci/ii-2009-project/tag-routing/src/tags/scrape/flickr.py", line 217, in execAllUnique
        LOG.info, "%s: %%(i1)s/%s %%(it)s" % (name, total), expected_length=total):
      File "/home/infinity0/0/work/compsci/ii-2009-project/tag-routing/src/tags/scrape/util.py", line 188, in enumerate_cb
        for i, item in enumerate(iterable):
      File "/home/infinity0/0/work/compsci/ii-2009-project/tag-routing/src/futures/_base.py", line 602, in run_to_results
        raise e
    futures._base.ExecutionException: Caused by:
      File "/home/infinity0/0/work/compsci/ii-2009-project/tag-routing/src/futures/thread.py", line 87, in run
        result = self.call()
      File "/home/infinity0/0/work/compsci/ii-2009-project/tag-routing/src/tags/scrape/flickr.py", line 210, in <lambda>
        tasks = [partial(lambda it: (it, run(it)), it) for it in items if it not in done]
      File "/home/infinity0/0/work/compsci/ii-2009-project/tag-routing/src/tags/scrape/flickr.py", line 253, in run
        print a[3]
    IndexError('list index out of range',)
    
    

    Original issue reported on code.google.com by [email protected] on 3 Apr 2010 at 6:29
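In Python 3, PEP 3134 chaining provides this natively; a sketch of the behavior the attached patch emulates (the function name is hypothetical):

```python
def run_task():
    # Simulates a worker raising inside the pool.
    a = []
    try:
        return a[3]
    except IndexError as exc:
        # Python 3 syntax: chain the original cause onto the new exception,
        # so the worker's traceback survives into future.result().
        raise RuntimeError("task failed") from exc
```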

    Attachments:

    Type-Defect Priority-Medium auto-migrated 
    opened by GoogleCodeExporter 7
  • Backport py3 exit


    I would prefer to make this against a 3.1x backport branch, but I'll make it against master since that's the closest I can get.

    As of today if I am in a python 3 environment and I install futures, 3.1.1 is installed:

    (broken_python3) ~ $ pip3 install futures
    Collecting futures
      Downloading https://files.pythonhosted.org/packages/cc/26/b61e3a4eb50653e8a7339d84eeaa46d1e93b92951978873c220ae64d0733/futures-3.1.1.tar.gz
    Building wheels for collected packages: futures
      Running setup.py bdist_wheel for futures ... done
      Stored in directory: /Users/mpacer/Library/Caches/pip/wheels/f3/f9/c7/4fbf1faa6038faf183f6e3ea61f17a5f7eea5ab9a1dd7753fd
    Successfully built futures
    Installing collected packages: futures
    Successfully installed futures-3.1.1
    

    Unfortunately, this will still occur even after you release the next 3.2 version because pip3 will never download any version of futures that includes "python_requires>=2.6, <3.0". It will continue to download 3.1.1 — which will break anyone's python3 concurrent.futures packages.

    I don't think that you should leave out python_requires for any other versions of the package. There just needs to be some version of futures > 3.1.1 (e.g., 3.1.2) that has this explicit catch and exit.

    opened by mpacer 6
  • Issue: __get_result incorrectly overwrites the correct version when installed on python 3.


    https://github.com/agronholm/pythonfutures/blob/5edbc65401fd578e195c2f0e2e8d3a6b5404f6bc/concurrent/futures/_base.py#L405-L416

    I'm happy to make a PR to fix this, but I think the real solution is to release a new version that does not include the python_requires='>=2.6, <3', and raises an error instead of a warning when attempted to be installed on python 3.

    For example, the monkey patching will break pytest if installed on python3 with version 3.1.1. That means if someone does pip3 install -U futures, that python3 executable would install a futures==3.1.1 because it doesn't have a python_requires='>=2.6, <3', which is what stops pip3 from even seeing that futures 3.2.0 exists.

    If we were to raise an error in a way analogous to what we do in IPython: https://github.com/ipython/ipython/blob/01bd59ec7c184171df0cb0d933c5672e8c20b67e/setup.py#L25-L58:

    if sys.version_info < (3, 4):
        pip_message = 'This may be due to an out of date pip. Make sure you have pip >= 9.0.1.'
        try:
            import pip
            pip_version = tuple([int(x) for x in pip.__version__.split('.')[:3]])
            if pip_version < (9, 0, 1) :
                pip_message = 'Your pip version is out of date, please install pip >= 9.0.1. '\
                'pip {} detected.'.format(pip.__version__)
            else:
                # pip is new enough - it must be something else
                pip_message = ''
        except Exception:
            pass
    
    
        error = """
    IPython 7.0+ supports Python 3.4 and above.
    When using Python 2.7, please install IPython 5.x LTS Long Term Support version.
    Python 3.3 was supported up to IPython 6.x.
    See IPython `README.rst` file for more information:
        https://github.com/ipython/ipython/blob/master/README.rst
    Python {py} detected.
    {pip}
    """.format(py=sys.version_info, pip=pip_message )
    
        print(error, file=sys.stderr)
        sys.exit(1)
    

    then pip install -U futures would successfully avoid breaking people's python3 executables by not allowing itself to be installed. I'll happily make a PR for this.
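A trimmed version of that guard for this package might look like the following (a sketch; the function name, wording, and exit code are assumptions, not the project's actual setup.py):

```python
import sys

def refuse_python3(version_info=sys.version_info):
    """Abort installation with an explanation when run under Python 3."""
    if version_info[0] >= 3:
        sys.stderr.write(
            "This backport is meant only for Python 2; Python 3 already "
            "ships concurrent.futures in the standard library.\n")
        raise SystemExit(1)

# In setup.py this would run before setup() is called:
# refuse_python3()
```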

    wontfix 
    opened by mpacer 6
  • Cross-Compatibility of Python 2 and 3 in a single folder.


    Hi, I have a corporate project that requires maintaining compatibility between Python 2.7 and 3.X (mainly 3.6.Y).

    Due to firewall policy, I could not use pip install to install the various dependencies that require futures, such as apscheduler.

    The only way for me to install these modules is via the --install-lib option (more at https://docs.python.org/2/install/index.html), together with the PYTHONPATH environment variable and .tar/.zip source archives.

    So, I have installed Python 2.7 and 3.6 in separate folders on C:\ with a common module path at C:\Python_Modules. This causes apscheduler to crash under Python 3, because the Python 3 apscheduler tries to import futures from the Python 2 egg regardless of whether I compiled futures for Python 3 or not.

    Attempting to compile futures under Python 3 produced a redundancy warning.

    The first problem is raise exception_type, self._exception, self._traceback on line 381 of _base.py.

    The fix is to change the file (futures-3.1.1\build\lib\concurrent\futures_base.py) with these:

    1. Change the line 381 code to: raise_with_traceback(exception_type(self._exception))
    2. Add this line below line 9 (import types), to pull in the traceback helper: from future.utils import raise_with_traceback

    The changed code (Change the extension from txt to py) can be downloaded here: _base.txt
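For projects that cannot depend on the future package, six.reraise does the same job; a stdlib-only sketch of the Python 3 half (the Python 2 half needs the exec'd three-argument raise syntax and is omitted here):

```python
def reraise(exc_type, value, tb=None):
    # Python 3 re-raise preserving an explicit traceback; on Python 2 the
    # equivalent is the statement form `raise exc_type, value, tb`.
    if value is None:
        value = exc_type()
    if tb is not None and value.__traceback__ is not tb:
        raise value.with_traceback(tb)
    raise value
```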

    The second problem is the naming difference between the Python 2 Queue module and the Python 3 queue module.

    This is also faced by other modules such as https://github.com/docker/compose/issues/2108

    The fix is to change the file (futures-3.1.1\build\lib\concurrent\futures\thread.py) with this on line 8:

    try:
    	import Queue as queue
    except ImportError:
    	import queue
    

    The changed code (Change the extension from txt to py) can be downloaded here: thread.txt

    With these changes to both files, I was able to work around the issue of the futures module being unusable in Python 3.

    opened by slee047 6
  • Backport: Issue #27664: Allow specifying prefix for thread name in concurrent.futures.ThreadPoolExecutor


    Please consider including the changes for thread_name_prefix:

    http://bugs.python.org/issue27664, commit: https://github.com/python/cpython/commit/50abe877ee6f50ebd9cfe228d314220e071fa3c6
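For reference, the requested parameter works like this on Python 3.6+ (a usage sketch; the prefix string is arbitrary):

```python
from concurrent.futures import ThreadPoolExecutor
import threading

# thread_name_prefix (Python 3.6+) makes pool threads identifiable in logs
# and debuggers instead of the generic "Thread-N" names.
with ThreadPoolExecutor(max_workers=1, thread_name_prefix="backport-demo") as ex:
    worker_name = ex.submit(lambda: threading.current_thread().name).result()
print(worker_name)  # e.g. "backport-demo_0"
```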

    opened by normanr 6
  • ThreadPoolExecutor should fail when max_worker is 0


    I noticed a difference when instantiating the executor with ThreadPoolExecutor(max_workers=0). The backport does not raise, whereas Python 3.5 raises a ValueError:

      File "/opt/python/3.5.0/lib/python3.5/concurrent/futures/thread.py", line 96, in __init__
        raise ValueError("max_workers must be greater than 0")
    
    ValueError: max_workers must be greater than 0
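The Python 3 check is a one-line guard in ThreadPoolExecutor.__init__; a sketch of the same validation applied from outside the backport (the wrapper name is hypothetical):

```python
from concurrent.futures import ThreadPoolExecutor

def make_thread_pool(max_workers):
    # Mirror the Python 3.5 behavior the issue asks for: reject
    # non-positive worker counts up front instead of misbehaving later.
    if max_workers <= 0:
        raise ValueError("max_workers must be greater than 0")
    return ThreadPoolExecutor(max_workers=max_workers)
```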
    
    bug 
    opened by leplatrem 6
  • missing braces in concurrent/futures/_base.py


    Line 357, raise type(self._exception), self._exception, self._traceback, needs to be updated to raise (type(self._exception), self._exception, self._traceback).

    This issue is causing django-pipeline to break as it depends on this package.

    invalid 
    opened by lebeier 6
  • Sync with upstream changes


    The concurrent.futures module in cpython has had some changes since this 
    backport was released; it would be good to sync up with cpython 3.3.  I'm 
    particularly interested in this change:
      http://hg.python.org/cpython/annotate/4390d6939a56/Lib/concurrent/futures/thread.py#130
    which prevents a 100ms delay when shutting down a ThreadPoolExecutor.
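The upstream change, roughly, makes idle workers block on the work queue and wakes them with a None sentinel at shutdown instead of polling with a timeout; a simplified sketch of the pattern (not the actual thread.py code):

```python
import threading

try:
    import queue  # Python 3
except ImportError:
    import Queue as queue  # Python 2

q = queue.Queue()
results = []

def worker():
    while True:
        item = q.get()    # block with no timeout: no polling delay
        if item is None:  # sentinel posted at shutdown
            return
        results.append(item * 2)

t = threading.Thread(target=worker)
t.start()
for i in range(3):
    q.put(i)
q.put(None)  # wake the worker immediately instead of waiting out a timeout
t.join()
print(sorted(results))  # [0, 2, 4]
```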
    

    Original issue reported on code.google.com by [email protected] on 23 May 2013 at 3:28

    wontfix Type-Defect Priority-Medium auto-migrated 
    opened by GoogleCodeExporter 6
  • Add process initializer (from Python 3.7) to complete issue 21423 implementation


    Also updates the docs for thread and process pools.

    Unfortunately, the ProcessPoolExecutor is implemented very differently from upstream Python 3, so I couldn't stick to the upstream PR as closely as I did for thread pools: https://github.com/python/cpython/pull/4241/files

    Is there any interest in wholesale copying the Python 3.7 process pool executor instead of redoing the bpo's out of order?

    opened by fahhem 2
  • Explicit documentation of which version of concurrent.futures is backported


    Given the version number I assume this is a backport of concurrent.futures from Python 3.2. There have been improvements to the module in later releases of Python 3, and I was wondering if there are any plans to pull them in? I'm specifically interested in improvements to ProcessPoolExecutor, namely

    1. the improved behavior on abrupt worker termination (3.3)
    2. initializer / initargs support (3.7)

    Thanks!
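For reference, the Python 3.7 initializer/initargs API looks like this (a sketch shown with ThreadPoolExecutor to keep it lightweight; ProcessPoolExecutor takes the same parameters):

```python
from concurrent.futures import ThreadPoolExecutor
import threading

worker_state = threading.local()

def init_worker(tag):
    # Runs once in each worker before it accepts tasks (Python 3.7+).
    worker_state.tag = tag

def job(x):
    return (worker_state.tag, x * x)

with ThreadPoolExecutor(max_workers=2, initializer=init_worker,
                        initargs=("indexer",)) as ex:
    results = list(ex.map(job, range(3)))
```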

    opened by snakescott 2
  • Intermittent threading hangs with process pools


    I've been chasing this for a year or so (across various versions of Python and futures; currently Python 2.7.6 and futures 3.0.3) and finally went through the rigamarole of setting up the Python gdb tools to get some decent tracebacks out of it. In short, during large jobs with thousands of tasks, execution sometimes hangs. It runs for about an hour, getting somewhere between 11-17% done in the current reproduction; conveniently, I have a progress bar. The variation makes me think it's some kind of timing bug. The CPU use slowly falls to 0 as the worker processes complete and no new ones are scheduled to replace them. I end up with a process table like this:

      585 ?        00:01:58 dxr
      605 ?        01:03:30 dxr <defunct>
      606 ?        00:22:48 dxr <defunct>
      607 ?        00:01:50 dxr <defunct>
      609 ?        00:17:43 dxr <defunct>
    

    The defunct processes are the workers. Adding -L, we can see the threads futures spins up to coordinate the work distribution:

      585   585 ?        00:00:39 dxr
      585   603 ?        00:00:00 dxr
      585   604 ?        00:00:00 dxr
      585   608 ?        00:01:16 dxr
      605   605 ?        01:03:30 dxr <defunct>
      606   606 ?        00:22:48 dxr <defunct>
      607   607 ?        00:01:50 dxr <defunct>
      609   609 ?        00:17:43 dxr <defunct>
    

    I don't know why there are only 3 of them, when my process pool is of size 4. Maybe that's a clue?

    The Python traceback, from attaching with gdb and using its Python tools, looks like this:

    (gdb) py-bt
    #5 Frame 0x23c2e90, for file /usr/lib/python2.7/threading.py, line 339, in wait (self=<_Condition(_Verbose__verbose=False, _Condition__lock=<thread.lock at remote 0x7f2716b74ea0>, acquire=<built-in method acquire of thread.lock object at remote 0x7f2716b74ea0>, _Condition__waiters=[<thread.lock at remote 0x7f2716b74e00>], release=<built-in method release of thread.lock object at remote 0x7f2716b74ea0>) at remote 0x7f2716b5c920>, timeout=None, waiter=<thread.lock at remote 0x7f2716b74e00>, saved_state=None)
        waiter.acquire()
    #9 Frame 0x7f272708f460, for file /usr/lib/python2.7/threading.py, line 620, in wait (self=<_Event(_Verbose__verbose=False, _Event__flag=False, _Event__cond=<_Condition(_Verbose__verbose=False, _Condition__lock=<thread.lock at remote 0x7f2716b74ea0>, acquire=<built-in method acquire of thread.lock object at remote 0x7f2716b74ea0>, _Condition__waiters=[<thread.lock at remote 0x7f2716b74e00>], release=<built-in method release of thread.lock object at remote 0x7f2716b74ea0>) at remote 0x7f2716b5c920>) at remote 0x7f2716b5c840>, timeout=None)
        self.__cond.wait(timeout)
    #13 Frame 0x24189e0, for file /code/dbg-venv/local/lib/python2.7/site-packages/concurrent/futures/_base.py, line 217, in as_completed (fs=set([<...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <.....(truncated)
        waiter.event.wait(wait_timeout)
    #19 Frame 0x2424df0, for file /code/dbg-venv/local/lib/python2.7/site-packages/click/_termui_impl.py, line 240, in next (self=<ProgressBar(color=None, pos=47, length_known=True, max_width=62, file=<_NonClosingTextIOWrapper(_stream=<_FixupStream(_stream=<file at remote 0x7f272f2751c0>) at remote 0x7f27270a6fb0>) at remote 0x7f272b1edc60>, is_hidden=False, avg=[<float at remote 0x25d2558>, <float at remote 0x25d2530>, <float at remote 0x25d2508>, <float at remote 0x25d24e0>, <float at remote 0x25d24b8>, <float at remote 0x25d25a8>, <float at remote 0x25d25d0>], last_eta=<float at remote 0x25d2468>, width=36, info_sep='  ', bar_template='%(label)-18s [%(bar)s] %(info)s', label='Indexing files', empty_char='-', start=<float at remote 0x2360e88>, entered=True, item_show_func=None, autowidth=False, show_percent=None, show_pos=False, finished=False, fill_char='#', eta_known=True, show_eta=False, iter=<generator at remote 0x7f2716b4cb60>, length=273, current_item=<Future(_exception=None, _result=None, _condition=<_Condit...(truncated)
        rv = next(self.iter)
    #27 Frame 0x24187a0, for file /home/dxr/dxr/dxr/build.py, line 325, in show_progress (futures=[<Future(_exception=None, _result=None, _condition=<_Condition(_Condition__lock=<_RLock(_Verbose__verbose=False, _RLock__owner=None, _RLock__block=<thread.lock at remote 0x7f272bb3bc20>, _RLock__count=0) at remote 0x7f27270a6f40>, acquire=<instancemethod at remote 0x7f27270ee9e0>, _is_owned=<instancemethod at remote 0x7f27270ee4e0>, _release_save=<instancemethod at remote 0x7f27270ee8e0>, release=<instancemethod at remote 0x7f27270ee5e0>, _acquire_restore=<instancemethod at remote 0x7f27270eeae0>, _Verbose__verbose=False, _Condition__waiters=[]) at remote 0x7f272f13d4c0>, _state='RUNNING', _traceback=None, _waiters=[<_AsCompletedWaiter(lock=<thread.lock at remote 0x7f2716b74f40>, event=<_Event(_Verbose__verbose=False, _Event__flag=False, _Event__cond=<_Condition(_Verbose__verbose=False, _Condition__lock=<thread.lock at remote 0x7f2716b74ea0>, acquire=<built-in method acquire of thread.lock object at remote 0x7f2716b74ea0...(truncated)
        for future in bar:
    #30 Frame 0x2411980, for file /home/dxr/dxr/dxr/build.py, line 638, in index_files (tree=<TreeConfig(_section={'ignore_filenames': ['.hg', '.git', 'CVS', '.svn', '.bzr', '.deps', '.libs', '.DS_Store', '.nfs*', '*~', '._*'], 'build_command': 'cd $source_folder && ./mach clobber && make -f client.mk build MOZ_OBJDIR=$object_folder MOZ_MAKE_FLAGS="-s -j$jobs"', 'description': '', 'core': {}, 'enabled_plugins': [<Plugin(tree_to_index=<type at remote 0x2325170>, file_to_skim=None, filters=[<type at remote 0x232e220>, <type at remote 0x2324270>, <type at remote 0x2323e80>, <type at remote 0x2324d80>, <type at remote 0x233b890>, <type at remote 0x2324990>], analyzers={'tokenizer': {'trigram_tokenizer': {'max_gram': 3, 'type': 'nGram', 'min_gram': 3}}, 'analyzer': {'trigramalyzer': {'type': 'custom', 'tokenizer': 'trigram_tokenizer'}, 'lowercase': {'filter': ['lowercase'], 'type': 'custom', 'tokenizer': 'keyword'}, 'trigramalyzer_lower': {'filter': ['lowercase'], 'type': 'custom', 'tokenizer': 'trigram_tokenizer'}}}, ref...(truncated)
        for future in show_progress(futures, 'Indexing files'):
    #33 Frame 0x231ea40, for file /home/dxr/dxr/dxr/build.py, line 275, in index_tree (tree=<TreeConfig(_section={'ignore_filenames': ['.hg', '.git', 'CVS', '.svn', '.bzr', '.deps', '.libs', '.DS_Store', '.nfs*', '*~', '._*'], 'build_command': 'cd $source_folder && ./mach clobber && make -f client.mk build MOZ_OBJDIR=$object_folder MOZ_MAKE_FLAGS="-s -j$jobs"', 'description': '', 'core': {}, 'enabled_plugins': [<Plugin(tree_to_index=<type at remote 0x2325170>, file_to_skim=None, filters=[<type at remote 0x232e220>, <type at remote 0x2324270>, <type at remote 0x2323e80>, <type at remote 0x2324d80>, <type at remote 0x233b890>, <type at remote 0x2324990>], analyzers={'tokenizer': {'trigram_tokenizer': {'max_gram': 3, 'type': 'nGram', 'min_gram': 3}}, 'analyzer': {'trigramalyzer': {'type': 'custom', 'tokenizer': 'trigram_tokenizer'}, 'lowercase': {'filter': ['lowercase'], 'type': 'custom', 'tokenizer': 'keyword'}, 'trigramalyzer_lower': {'filter': ['lowercase'], 'type': 'custom', 'tokenizer': 'trigram_tokenizer'}}}, refs...(truncated)
        index_files(tree, tree_indexers, index, pool, es)
    #37 Frame 0x23ea910, for file /home/dxr/dxr/dxr/build.py, line 64, in index_and_deploy_tree (tree=<TreeConfig(_section={'ignore_filenames': ['.hg', '.git', 'CVS', '.svn', '.bzr', '.deps', '.libs', '.DS_Store', '.nfs*', '*~', '._*'], 'build_command': 'cd $source_folder && ./mach clobber && make -f client.mk build MOZ_OBJDIR=$object_folder MOZ_MAKE_FLAGS="-s -j$jobs"', 'description': '', 'core': {}, 'enabled_plugins': [<Plugin(tree_to_index=<type at remote 0x2325170>, file_to_skim=None, filters=[<type at remote 0x232e220>, <type at remote 0x2324270>, <type at remote 0x2323e80>, <type at remote 0x2324d80>, <type at remote 0x233b890>, <type at remote 0x2324990>], analyzers={'tokenizer': {'trigram_tokenizer': {'max_gram': 3, 'type': 'nGram', 'min_gram': 3}}, 'analyzer': {'trigramalyzer': {'type': 'custom', 'tokenizer': 'trigram_tokenizer'}, 'lowercase': {'filter': ['lowercase'], 'type': 'custom', 'tokenizer': 'keyword'}, 'trigramalyzer_lower': {'filter': ['lowercase'], 'type': 'custom', 'tokenizer': 'trigram_tokenizer...(truncated)
        index_name = index_tree(tree, es, verbose=verbose)
    #41 Frame 0x231f140, for file /home/dxr/dxr/dxr/cli/index.py, line 26, in index (config=<Config(_section={'es_hosts': 'http://12---Type <return> to continue, or q <return> to quit---
    7.0.0.1:9200/', 'default_tree': 'mozilla-central', 'max_thumbnail_size': 20000, 'es_alias': 'dxr_{format}_{tree}', 'es_catalog_index': 'dxr_catalog', 'workers': 4, 'log_folder': '/code/dxr-logs-{tree}', 'es_indexing_timeout': 60, 'temp_folder': '/code/dxr-temp-{tree}', 'google_analytics_key': '', 'es_refresh_interval': 60, 'es_catalog_replicas': 1, 'www_root': '', 'es_index': 'dxr_{format}_{tree}_{unique}', 'generated_date': 'Fri, 05 Feb 2016 18:27:38 +0000', 'skip_stages': ['build']}, trees=<OrderedDict(_OrderedDict__map={'mozilla-central': ['mozilla-central', [None, [...], [...]], [...]]}, _OrderedDict__end=[...]) at remote 0x7f272b1ae5c0>) at remote 0x7f272b5e9300>, verbose=False, tree_names=(), tree=<TreeConfig(_section={'ignore_filenames': ['.hg', '.git', 'CVS', '.svn', '.bzr', '.deps', '.libs', '.DS_Store', '.nfs*', '*~', '._*'], 'build_command': 'cd $source_folder && ./mach clobbe...(truncated)
    ...
    

    Here's the calling code.

    Here's the C traceback as well, in case it's helpful:

    #0  sem_wait () at ../nptl/sysdeps/unix/sysv/linux/x86_64/sem_wait.S:85
    #1  0x000000000056e2a4 in PyThread_acquire_lock (lock=0x22b12e0, waitflag=1) at ../Python/thread_pthread.h:324
    #2  0x000000000060fba9 in lock_PyThread_acquire_lock (self=0x7f2716b74e00, args=()) at ../Modules/threadmodule.c:52
    #3  0x00000000004877aa in PyCFunction_Call (func=<built-in method acquire of thread.lock object at remote 0x7f2716b74e00>,
        arg=(), kw=0x0) at ../Objects/methodobject.c:81
    #4  0x00000000005273b4 in call_function (pp_stack=0x7ffcb97d7450, oparg=0) at ../Python/ceval.c:4020
    #5  0x00000000005222e1 in PyEval_EvalFrameEx (
        f=Frame 0x23c2e90, for file /usr/lib/python2.7/threading.py, line 339, in wait (self=<_Condition(_Verbose__verbose=False, _Condition__lock=<thread.lock at remote 0x7f2716b74ea0>, acquire=<built-in method acquire of thread.lock object at remote 0x7f2716b74ea0>, _Condition__waiters=[<thread.lock at remote 0x7f2716b74e00>], release=<built-in method release of thread.lock object at remote 0x7f2716b74ea0>) at remote 0x7f2716b5c920>, timeout=None, waiter=<thread.lock at remote 0x7f2716b74e00>, saved_state=None), throwflag=0) at ../Python/ceval.c:2666
    #6  0x0000000000524b9a in PyEval_EvalCodeEx (co=0x7f272ce09720,
        globals={'current_thread': <function at remote 0x7f272ce27060>, '_BoundedSemaphore': <type at remote 0x1f88210>, 'currentThread': <function at remote 0x7f272ce27060>, '_Timer': <type at remote 0x1f8a6a0>, '_format_exc': <function at remote 0x7f272ce16c30>, 'Semaphore': <function at remote 0x7f272ce22a38>, '_deque': <type at remote 0x8c7c60>, 'activeCount': <function at remote 0x7f272ce27300>, '_profile_hook': None, '_sleep': <built-in function sleep>, '_trace_hook': None, 'ThreadError': <type at remote 0x1df77d0>, '_enumerate': <function at remote 0x7f272ce273a8>, '_start_new_thread': <built-in function start_new_thread>, 'BoundedSemaphore': <function at remote 0x7f272ce241b0>, '_shutdown': <instancemethod at remote 0x7f272d2048e0>, '__all__': ['activeCount', 'active_count', 'Condition', 'currentThread', 'current_thread', 'enumerate', 'Event', 'Lock', 'RLock', 'Semaphore', 'BoundedSemaphore', 'Thread', 'Timer', 'setprofile', 'settrace', 'local', 'stack_size'], '_Event': <type at remote 0x1f88930>, 'active_count': <fu...(truncated), locals=0x0, args=0x7f272708f5f8, argcount=2, kws=0x7f272708f608, kwcount=0, defs=0x7f272ce23168,
        defcount=1, closure=0x0) at ../Python/ceval.c:3252
    #7  0x000000000052799e in fast_function (func=<function at remote 0x7f272ce22f78>, pp_stack=0x7ffcb97d78f0, n=2, na=2, nk=0)
        at ../Python/ceval.c:4116
    #8  0x0000000000527588 in call_function (pp_stack=0x7ffcb97d78f0, oparg=1) at ../Python/ceval.c:4041
    #9  0x00000000005222e1 in PyEval_EvalFrameEx (
        f=Frame 0x7f272708f460, for file /usr/lib/python2.7/threading.py, line 620, in wait (self=<_Event(_Verbose__verbose=False, _Event__flag=False, _Event__cond=<_Condition(_Verbose__verbose=False, _Condition__lock=<thread.lock at remote 0x7f2716b74ea0>, acquire=<built-in method acquire of thread.lock object at remote 0x7f2716b74ea0>, _Condition__waiters=[<thread.lock at remote 0x7f2716b74e00>], release=<built-in method release of thread.lock object at remote 0x7f2716b74ea0>) at remote 0x7f2716b5c920>) at remote 0x7f2716b5c840>, timeout=None), throwflag=0) at ../Python/ceval.c:2666
    #10 0x0000000000524b9a in PyEval_EvalCodeEx (co=0x7f272ce0e5c0,
        globals={'current_thread': <function at remote 0x7f272ce27060>, '_BoundedSemaphore': <type at remote 0x1f88210>, 'currentThread': <function at remote 0x7f272ce27060>, '_Timer': <type at remote 0x1f8a6a0>, '_format_exc': <function at remote 0x7f272ce16c30>, 'Semaphore': <function at remote 0x7f272ce22a38>, '_deque': <type at remote 0x8c7c60>, 'activeCount': <function at remote 0x7f272ce27300>, '_profile_hook': None, '_sleep': <built-in function sleep>, '_trace_hook': None, 'ThreadError': <type at remote 0x1df77d0>, '_enumerate': <function at remote 0x7f272ce273a8>, '_start_new_thread': <built-in function start_new_thread>, 'BoundedSemaphore': <function at remote 0x7f272ce241b0>, '_shutdown': <instancemethod at remote 0x7f272d2048e0>, '__all__': ['activeCount', 'active_count', 'Condition', 'currentThread', 'current_thread', 'enumerate', 'Event', 'Lock', 'RLock', 'Semaphore', 'BoundedSemaphore', 'Thread', 'Timer', 'setprofile', 'settrace', 'local', 'stack_size'], '_Event': <type at remote 0x1f88930>, 'active_count': <fu...(truncated), locals=0x0, args=0x2418bb0, argcount=2, kws=0x2418bc0, kwcount=0, defs=0x7f272ce23478, defcount=1,
        closure=0x0) at ../Python/ceval.c:3252
    #11 0x000000000052799e in fast_function (func=<function at remote 0x7f272ce24ae0>, pp_stack=0x7ffcb97d7d90, n=2, na=2, nk=0)
        at ../Python/ceval.c:4116
    #12 0x0000000000527588 in call_function (pp_stack=0x7ffcb97d7d90, oparg=1) at ../Python/ceval.c:4041
    #13 0x00000000005222e1 in PyEval_EvalFrameEx (
        f=Frame 0x24189e0, for file /code/dbg-venv/local/lib/python2.7/site-packages/concurrent/futures/_base.py, line 217, in as_completed (fs=set([<...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <.....(truncated), throwflag=0) at ../Python/ceval.c:2666
    #14 0x0000000000452c14 in gen_send_ex (gen=0x7f2716b4cb60, arg=0x0, exc=0) at ../Objects/genobject.c:85
    #15 0x0000000000453581 in gen_iternext (gen=0x7f2716b4cb60) at ../Objects/genobject.c:283
    #16 0x0000000000513dce in builtin_next (self=0x0, args=(<generator at remote 0x7f2716b4cb60>,)) at ../Python/bltinmodule.c:1107
    #17 0x00000000004877aa in PyCFunction_Call (func=<built-in function next>, arg=(<generator at remote 0x7f2716b4cb60>,), kw=0x0)
        at ../Objects/methodobject.c:81
    #18 0x00000000005273b4 in call_function (pp_stack=0x7ffcb97d8120, oparg=1) at ../Python/ceval.c:4020
    #19 0x00000000005222e1 in PyEval_EvalFrameEx (
        f=Frame 0x2424df0, for file /code/dbg-venv/local/lib/python2.7/site-packages/click/_termui_impl.py, line 240, in next (self=<ProgressBar(color=None, pos=47, length_known=True, max_width=62, file=<_NonClosingTextIOWrapper(_stream=<_FixupStream(_stream=<file at remote 0x7f272f2751c0>) at remote 0x7f27270a6fb0>) at remote 0x7f272b1edc60>, is_hidden=False, avg=[<float at remote 0x25d2558>, <float at remote 0x25d2530>, <float at remote 0x25d2508>, <float at remote 0x25d24e0>, <float at remote 0x25d24b8>, <float at remote 0x25d25a8>, <float at remote 0x25d25d0>], last_eta=<float at remote 0x25d2468>, width=36, info_sep='  ', bar_template='%(label)-18s [%(bar)s] %(info)s', label='Indexing files', empty_char='-', start=<float at remote 0x2360e88>, entered=True, item_show_func=None, autowidth=False, show_percent=None, show_pos=False, finished=False, fill_char='#', eta_known=True, show_eta=False, iter=<generator at remote 0x7f2716b4cb60>, length=273, current_item=<Future(_exception=None, _result=None, _condition=<_Condit...(truncated), throwflag=0) at ../Python/ceval.c:2666
    ...
    

    Let me know if I can supply any more information. I'm also not sure if this is more properly filed with upstream, as my codebase isn't Python 3 clean. Thank you!

    opened by erikrose 10
  • Attempt to handle KeyboardInterrupt exceptions

    For #25 there are effectively two problems. (1) The worker process doesn't handle the keyboard interrupt, so the process pool cannot shut down properly; c2ae8dc fixes this so that the worker processes handle the exception and continue working. (2) There is no way to terminate a process pool. In the event that a keyboard interrupt is received and you do in fact want to exit, you have no choice but to wait for the tasks to complete (assuming you have the first fix mentioned above). The addition of a terminate() method (f5727bd) allows a more forcible shutdown of the pool, clearing all unstarted jobs and forcibly killing all in-flight processes.

    So to recap: c2ae8dc makes it possible to handle the keyboard interrupt from the caller, and f5727bd gives you a mechanism to stop the pool.

    Potential fix for #25

    opened by jacksontj 4
  • [Errno 32] Broken pipe When Mapping Too Many Values

    First of all, thanks for this great backport. I am using it for the xfork package to support the 2.7 branch.

    Unfortunately, something strange happens with this futures distribution, even though the same code works perfectly fine with Python 3.4:

    from concurrent.futures import ProcessPoolExecutor
    
    def calc(n):
        with ProcessPoolExecutor() as pool:
            results = pool.map(term, range(n))
            return sum(results)
    
    def term(x):
        return x
    
    print(calc(5000))
    
    /usr/bin/python2.7 calc.py
    12497500
    Traceback (most recent call last):
      File "/usr/lib/python2.7/multiprocessing/queues.py", line 266, in _feed
        send(obj)
    IOError: [Errno 32] Broken pipe
    
    awaiting input 
    opened by srkunze 6
  • Raising an exception that is unable to be unpickled causes hang in ProcessPoolExecutor

    What steps will reproduce the problem?
    1. In the function submitted to a ProcessPoolExecutor, raise a custom exception 
    class that takes more than one argument to __init__.
    
    What is the expected output? What do you see instead?
    I expect a call to future.result() to not hang.
    
    What version of the product are you using? On what operating system?
    I'm using ver 2.1.6 on python 2.7 on Gentoo Linux.
    
    Please provide any additional information below.
    I have attached a patch to address the issue and a test case for it.  Without 
    the patch, the new test case hangs.  With the patch, it passes.
    
    This is needed because of the issue raised in 
    http://bugs.python.org/issue1692335.  An exception class that takes multiple 
    arguments to __init__ can be pickled but it raises a TypeError when being 
    unpickled:
    
    In [1]: class MyError(Exception):
       ...:     def __init__(self, arg1, arg2):
       ...:         super(MyError, self).__init__(
       ...:             'arg1 = {}, arg2 = {}'.format(arg1, arg2))
       ...: 
    
    In [2]: import pickle
    
    In [3]: p = pickle.dumps(MyError('arg1val', 'arg2val'))
    
    In [4]: pickle.loads(p)
    ---------------------------------------------------------------------------
    <snip>
    TypeError: __init__() takes exactly 3 arguments (2 given)
    
    So if a child process raises an exception like this, it gets pickled and put in 
    the result queue just fine.  However, in _queue_management_worker, the call to 
    result_queue.get(block=True) will raise an uncaught TypeError when it tries to 
    unpickle the exception.  So then the queue management just breaks.
    
    My proposed patch attempts to catch this condition before putting the exception 
    in the result queue and create a new exception that will be able to be 
    unpickled but still contains information from the original exception.
    

    Original issue reported on code.google.com by [email protected] on 30 Sep 2014 at 2:23

    Attachments:

    Type-Defect Priority-Medium auto-migrated awaiting input 
    opened by GoogleCodeExporter 9
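
The round-trip failure described in the last issue, and the spirit of the attached patch, can be sketched as follows. `make_picklable` is a hypothetical stand-in for the patch's approach, not the actual code from the attachment; the same failure also reproduces on Python 3:

```python
import pickle

class MyError(Exception):
    # Calls super() with a single formatted string, so self.args no
    # longer matches __init__'s signature -- pickling succeeds, but
    # unpickling calls MyError(args[0]) and raises TypeError.
    def __init__(self, arg1, arg2):
        super(MyError, self).__init__(
            'arg1 = {}, arg2 = {}'.format(arg1, arg2))

def make_picklable(exc):
    # Hypothetical helper mirroring the proposed fix: verify the
    # exception survives a pickle round trip before it is queued,
    # and otherwise replace it with a plain Exception carrying the
    # original's repr.
    try:
        pickle.loads(pickle.dumps(exc))
        return exc
    except Exception:
        return Exception(repr(exc))

safe = make_picklable(MyError('arg1val', 'arg2val'))
restored = pickle.loads(pickle.dumps(safe))  # no longer raises
```

Wrapping loses the original exception type, but it preserves the message and, crucially, keeps the queue-management thread from choking on an unpicklable result.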
Owner
Alex Grönholm