LiuAlgoTrader is a scalable, multi-process ML-ready framework for effective algorithmic trading





LiuAlgoTrader is a scalable, multi-process, ML-ready framework for effective algorithmic trading. The framework simplifies the development, testing, deployment, analysis, and training of algo trading strategies. It automatically analyzes trading sessions, and the analysis may be used to train predictive models.

LiuAlgoTrader can run on a laptop for hedging on the go, or on a multi-core hosted Linux server, and it automatically optimizes for the best performance on either.

LiuAlgoTrader uses the Alpaca.Markets brokerage APIs for trading, with Alpaca among the supported sources for stock data. The framework is evolving to support additional brokers and data providers.

See LiuAlgoTrader in Action

LiuAlgoTrader comes equipped with a powerful & user-friendly back-testing tool.

Watch a $4,000 profit session using LiuAlgoTrader's out-of-the-box tools.



Install & Configure

Step 1: To install LiuAlgoTrader just type:

pip install liualgotrader

Step 2: To configure the framework, type:

liu quickstart

and follow the installation wizard instructions. The wizard will walk you through the configuration of environment variables, the setup of a local dockerized PostgreSQL database, and pre-populating it with test data.

Try the samples

The LiuAlgoTrader quickstart wizard installs samples allowing a first-time experience of the framework. Follow the post-installation instructions and try back-testing a specific day.

Additional samples can be found in the examples directory.


While Liu is first and foremost a trading platform, it comes equipped with full back-testing capabilities: a command-line tool, Jupyter notebooks for analysis, and a browser-based UI covering both.

Machine Learning

These features are still a work in progress.

Analysis & Analytics

The framework includes a wide range of analysis Jupyter notebooks, as well as Streamlit applications for analyzing both trading and back-testing sessions. Visual analytical tools include:

  • tear-sheet analysis,
  • gain&loss analysis,
  • anchored-VWAPs,
  • indicators & distributions
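
As a point of reference for the anchored-VWAP tool above: an anchored VWAP is the volume-weighted average price accumulated from a chosen anchor bar onward. A minimal sketch (a hypothetical helper, not the LiuAlgoTrader API):

```python
def anchored_vwap(prices, volumes, anchor: int):
    """Volume-weighted average price from the anchor index onward.

    Hypothetical illustration, not part of the LiuAlgoTrader API:
    running sum(price * volume) / sum(volume), starting at `anchor`.
    """
    out = []
    pv_sum = 0.0
    vol_sum = 0.0
    for p, v in zip(prices[anchor:], volumes[anchor:]):
        pv_sum += p * v
        vol_sum += v
        out.append(pv_sum / vol_sum)
    return out
```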

What's Next?

Read the documentation and learn how to use LiuAlgoTrader to develop, deploy & test money-making strategies.

Watch the Evolution

LiuAlgoTrader is an ever-evolving platform. To glimpse the concepts, thoughts, and ideas behind it, visit the design folder and feel free to comment.


Would you like to help improve & evolve LiuAlgoTrader? Do you have a suggestion, comment, idea for improvement, or a wish-list item? Please read our Contribution Document or email me at [email protected]


Special thanks to the below individuals for their comments, reviews and suggestions:

  • Unable to start LiuAlgoTrader in Firefox


    Describe the bug Unable to start LiuAlgoTrader in Firefox on Linux Debian

    To Reproduce Installation tutorial was followed exactly and all prerequisites were met

    Screenshots Screenshot from 2021-09-24 07-35-13

    Desktop (please complete the following information):

    • Linux Debian 10
    • Firefox ESR
    • 78.14.0esr (64-bit)
    opened by ksilo 18
  • ModuleNotFoundError: No module named '' when running backtester


    Hi! I think I might have found a bug (probably fell between versions) liualgotrader.backtester tries to import TradierTrader but that is currently missing from the package.

    To Reproduce Steps to reproduce the behavior:

    1. install liualgotrader
    2. download sample data

    Expected behavior A clear and concise description of what you expected to happen.

    Screenshots image

    opened by AleFestante 11
  • Unable to launch the backtester_ui due to Streamlit error


After setting up the tool and sourcing the environment, the next step as per the documentation is:

    streamlit run

    Below is the output of the above command:

      You can now view your Streamlit app in your browser.
      Local URL: http://localhost:8501
      Network URL:
2021-05-31 12:40:05.617 Compute Engine Metadata server unavailable on attempt 1 of 3. Reason: timed out
2021-05-31 12:40:05.685 Compute Engine Metadata server unavailable on attempt 2 of 3. Reason: [Errno 113] No route to host
2021-05-31 12:40:08.688 Compute Engine Metadata server unavailable on attempt 3 of 3. Reason: timed out
    2021-05-31 12:40:08.688 Authentication failed using Compute Engine authentication due to unavailable metadata server.

    Going to the web browser, below is the error on the webpage:

    Below is the error that I receive: RuntimeError: There is no current event loop in thread 'ScriptRunner.scriptThread'.


    File "/home/metacu/pythons/liu_env/lib/python3.8/site-packages/streamlit/", line 338, in _run_script
        exec(code, module.__dict__)
    File "/tmp/tmp8h71d3lx/", line 17, in <module>
        from import (calc_batch_revenue, count_trades,
    File "/home/metacu/pythons/liu_env/lib/python3.8/site-packages/liualgotrader/analytics/", line 10, in <module>
        from liualgotrader.common.data_loader import DataLoader  # type: ignore
    File "/home/metacu/pythons/liu_env/lib/python3.8/site-packages/liualgotrader/common/", line 21, in <module>
    File "/home/metacu/pythons/liu_env/lib/python3.8/site-packages/", line 12, in apply
        loop = loop or asyncio.get_event_loop()
    File "/usr/lib/python3.8/asyncio/", line 639, in get_event_loop
        raise RuntimeError('There is no current event loop in thread %r.'

    Screenshot: liu_algo

    opened by akgpt13 9
  • ML support


Is your feature request related to a problem? Please describe. Some strategies seem to work well on certain stocks and certain setups, and less well on others.

Describe the solution you'd like Extend market miner for off-market calculation of a NN that will predict the affinity of a stock to a specific strategy.

Describe alternatives you've considered Direct calculation.

    enhancement in-process no-issue-activity 
    opened by amor71 8
  • code improvement


A merge between _fetch_data_range() and fetch_data_range() would improve code quality (need to make sure all flows are covered by unit tests).

    help wanted good first issue 
    opened by amor71 7
  • [ENH] Throttle API requests to support polygon's free plan for market miners?


    Sorry for the spam of tickets, but I noticed this while running some tests.

Is your feature request related to a problem? Please describe. Essentially it currently seems that I have set up everything correctly (YAY). I started the example setup by running the miner.toml under the examples section and the swing-momentum/ like such:

            filename = "swing-momentum/"
            portfolio_size = 2000
            debug = true
            index = 'SP500'
            rank_days = 90
            atr_days = 20
            risk_factor = 0.002
            indicators = ['SMA100']

    initially I was getting errors like the following:

    [load_data()][9942]2021-01-21 22:43:05.755232:loading 200 days for symbol MMM (1/505)
    [main()][9942]2021-01-21 22:43:07.067853:[ERROR] aborted w/ exception object of type 'NoneType' has no len()
    Traceback (most recent call last):
      File "/.venv/lib/python3.8/site-packages/liualgotrader-0.0.86-py3.8.egg/EGG-INFO/scripts/market_miner", line 73, in main
        await asyncio.gather(*task_list)
      File "swing-momentum/", line 178, in run
        await self.load_data(symbols)
      File "swing-momentum/", line 67, in load_data
        tlog(f"loaded {len(self.data_bars[symbol])} data-points")
    TypeError: object of type 'NoneType' has no len()

which I now believe I have traced down to how polygon behaves under the free plan. In the dashboard I can see that 5 requests per minute are answered correctly, but the rest are refused due to the rate limitation of the free plan (see


From my end it seems that if one is rate limited, the call to daily_bars will return None (and not throw an Exception), causing all sorts of issues.

Describe the solution you'd like The 'perfect' solution I could imagine is either some rate limitation built into LiuAlgoTrader, or using another free endpoint without rate limitation that allows fetching daily data (e.g. Yahoo Finance).

Certainly, if my strategies returned enough to pay for the polygon premium this problem would not exist... but I'm not there yet ;-)

    bug in-process 
    opened by TheSnoozer 5
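
The client-side throttling this issue requests can be sketched as a simple sliding-window limiter (a hypothetical example, not LiuAlgoTrader code; the 5-requests-per-minute figure comes from the issue's description of polygon's free plan):

```python
import time


class RateLimiter:
    """Allow at most max_calls per period seconds by sleeping before
    a call when the window is full. Hypothetical sketch only."""

    def __init__(self, max_calls: int, period: float):
        self.max_calls = max_calls
        self.period = period
        self._stamps = []  # monotonic timestamps of recent calls

    def wait(self) -> None:
        now = time.monotonic()
        # drop timestamps that have fallen out of the window
        self._stamps = [t for t in self._stamps if now - t < self.period]
        if len(self._stamps) >= self.max_calls:
            # sleep until the oldest call leaves the window
            time.sleep(self.period - (now - self._stamps[0]))
        self._stamps.append(time.monotonic())
```

Usage would be along the lines of `limiter = RateLimiter(5, 60.0)` and calling `limiter.wait()` before each data request, so a rate-limited endpoint is never hit faster than its plan allows.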
  • `python install` fails under python 3.9.5


Describe the bug Hello, I'm currently having trouble installing the project with python install. Something along the lines of

    rm -rf .venv
    python3.9 -m virtualenv .venv
    source .venv/bin/activate
    python -m pip install -U pip setuptools
    python install

fails with either a) error: protobuf 4.21.2 is installed but protobuf<4.0.0dev,>=3.12.0 is required by {'google-api-core'} or b) error: grpcio 1.44.0 is installed but grpcio>=1.47.0 is required by {'grpcio-status'}, depending on which commit is checked out.

    To Reproduce Either: git checkout upstream/pyup/scheduled-update-2022-06-27 or git checkout v0.0.82 or git checkout d00ab72f6a41c487c177b2a11b68ab4623301439 or git checkout 9ebd4564ddef13aff718d95d18596b5e3954cef1 and run the python install as mentioned above.

    Anything that I'm missing here? Thanks for the help!

    Desktop (please complete the following information):

    $ python --version
    Python 3.9.5
    $ cat /etc/*release*
    DISTRIB_DESCRIPTION="Ubuntu 20.04.4 LTS"
    VERSION="20.04.4 LTS (Focal Fossa)"
    PRETTY_NAME="Ubuntu 20.04.4 LTS"
    opened by TheSnoozer 4
  • os.getloadavg not working on Windows.


Describe the bug Attempting to run the trader program once the market starts, the system crashes when it tries to work out how many workers to spawn. My understanding is that os.getloadavg() doesn't work on Windows. The exact error message:

    Traceback (most recent call last):
      File "C:\Users\holy_\Projects\liualgotrader\.venv\Scripts\trader", line 220, in <module>
        num_consumer_processes = calc_num_consumer_processes()
      File "C:\Users\Projects\liualgotrader\.venv\Scripts\trader", line 126, in calc_num_consumer_processes
        load_avg = sum(os.getloadavg()) / 3
    AttributeError: module 'os' has no attribute 'getloadavg'

    It could entirely be my poor setup, I've been struggling a bit to get things working

    To Reproduce Steps to reproduce the behavior:

    1. On Windows Run "python ...venv\Scripts\trader"
    2. Wait for market to open
    3. Checks open positions then crashes.
    4. See error

    Desktop (please complete the following information):

    • OS: Windows 10 64-bit

    Additional context The psutil package offers cross platform alternatives to the os system package. I have tested on Windows and confirm it is working but further tests would be needed to make sure it doesn't break anything on Linux or Mac. I will send through a PR for review.

    opened by sigmantium 4
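
The psutil route proposed in the issue is one fix; purely for illustration, a stdlib-only guard (a hypothetical sketch, not the actual patch submitted) would also avoid the crash:

```python
import os


def calc_load_avg() -> float:
    """Mean of the 1-/5-/15-minute load averages, or 0.0 on platforms
    (such as Windows) where os.getloadavg() does not exist.

    Hypothetical sketch of the guard described in the issue."""
    if hasattr(os, "getloadavg"):
        return sum(os.getloadavg()) / 3
    return 0.0
```

The psutil alternative mentioned in the issue has the advantage of returning a real (emulated) value on Windows rather than a constant fallback.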
  • In what areas would you suggest to start off my contributions?


This project looks super impressive! I rarely see an end-to-end algo trading framework like this get open-sourced. In addition, the tools you use (e.g. Streamlit) are pretty up-to-date from a Data Science perspective, and it has a solid automation flow to facilitate the dev pipeline. The whole framework shows a nice mix of data science, quant, and software engineering.

I would love to contribute to this project. I particularly like that you have a pretty cool interface for analysis such as backtesting. However, my algo trading experience is very limited. Is there any way I could help contribute to this project? Looking forward to your reply!


    opened by riven314 4
  • Project depends on TA-LIB


Describe the bug If I see it correctly, the project has removed the dependency on TA-LIB. However, it seems that the MAMA strategy is not yet replaced (here and here). It is also referenced as a comment in my_strategy.

    To Reproduce

    • Call market_miner:
    • See error:
    Traceback (most recent call last):
      File "/.venv/bin/market_miner", line 16, in <module>
        from liualgotrader.miners.daily_ohlc import DailyOHLC
      File "/.venv/lib/python3.8/site-packages/liualgotrader/miners/", line 6, in <module>
        from talib import MAMA
    ModuleNotFoundError: No module named 'talib'

    Expected behavior Either depend on talib, or find a suitable replacement for the MAMA strategy.

    Desktop (please complete the following information):

    $ python --version
    Python 3.8.0
    $ pip freeze | grep liu
    opened by TheSnoozer 4
  • Wont take keys


Hello, I made an .env file with my keys filled out, but it keeps giving me this error after running liu quickstart:

    The Framework expects two environment variables to be set: APCA_API_KEY_ID and APCA_API_SECRET_KEY reflecting the funded account's API key and secret respectively. Please set the two environment and re-run the wizard.

    opened by slayer8197 4
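
A possible workaround for the report above, assuming the wizard reads process environment variables rather than the .env file (the key values below are placeholders, not real credentials):

```shell
# Export the two variables in the current shell before re-running the wizard
export APCA_API_KEY_ID="your-key-id"          # placeholder value
export APCA_API_SECRET_KEY="your-secret-key"  # placeholder value
# then re-run: liu quickstart
```

Exported variables survive only for the current shell session; tools that should pick them up must be started from that same shell.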
  • UnboundLocalError on quickstart


Describe the bug After running the "liu quickstart" setup, you get an "UnboundLocalError: local variable 'user_name' referenced before assignment" error.

    To Reproduce Steps to reproduce the behavior:

    1. Follow the QuickStart steps
    2. set all the envs
    3. See the error after pressing [ENTER] when "Ready to go?? Press [ENTER] to start the installation.."

    Expected behavior Complete the quickstart process

    Screenshots image

    Desktop (please complete the following information):

    • OS: Apple MacOS Ventura 13.0.1 (22A400)
    • Python 3.10.6

Additional context The DSN looks like driver://username:password@host:port/database

    opened by savex83 1
  • PYUP Scheduled weekly dependency update for week 52

    PYUP Scheduled weekly dependency update for week 52

    Update asttokens from 2.1.0 to 2.2.1.

    The bot wasn't able to find a changelog for this release. Got an idea?

    • PyPI:
    • Repo:

    Update attrs from 22.1.0 to 22.2.0.

    The bot wasn't able to find a changelog for this release. Got an idea?

    • PyPI:
    • Homepage:

    Update black from 22.10.0 to 22.12.0.



    Preview style
    &lt;!-- Changes that affect Black&#x27;s preview style --&gt;
    - Enforce empty lines before classes and functions with sticky leading comments (3302)
    - Reformat empty and whitespace-only files as either an empty file (if no newline is
    present) or as a single newline character (if a newline is present) (3348)
    - Implicitly concatenated strings used as function args are now wrapped inside
    parentheses (3307)
    - For assignment statements, prefer splitting the right hand side if the left hand side
    fits on a single line (3368)
    - Correctly handle trailing commas that are inside a line&#x27;s leading non-nested parens
    &lt;!-- Changes to how Black can be configured --&gt;
    - Fix incorrectly applied `.gitignore` rules by considering the `.gitignore` location
    and the relative path to the target file (3338)
    - Fix incorrectly ignoring `.gitignore` presence when more than one source directory is
    specified (3336)
    &lt;!-- Changes to the parser or to version autodetection --&gt;
    - Parsing support has been added for walruses inside generator expression that are
    passed as function args (for example,
    `any(match := my_re.match(text) for text in texts)`) (3327).
    &lt;!-- For example, Docker, GitHub Actions, pre-commit, editors --&gt;
    - Vim plugin: Optionally allow using the system installation of Black via
    `let g:black_use_virtualenv = 0`(3309)
    • PyPI:
    • Changelog:

    Update certifi from 2022.9.24 to 2022.12.7.

    The bot wasn't able to find a changelog for this release. Got an idea?

    • PyPI:
    • Repo:

    Update chardet from 5.0.0 to 5.1.0.



    - Add `should_rename_legacy` argument to most functions, which will rename older encodings to their more modern equivalents (e.g., `GB2312` becomes `GB18030`) (264, dan-blanchard)
    - Add capital letter sharp S and ISO-8859-15 support (222, SimonWaldherr)
    - Add a prober for MacRoman encoding (5 updated as c292b52a97e57c95429ef559af36845019b88b33, Rob Speer and dan-blanchard )
    - Add `--minimal` flag to `chardetect` command (214, dan-blanchard)
    - Add type annotations to the project and run mypy on CI (261, jdufresne)
    - Add support for Python 3.11 (274, hugovk)
    - Clarify LGPL version in License trove classifier (255, musicinmybrain)
    - Remove support for EOL Python 3.6 (260, jdufresne)
    - Remove unnecessary guards for non-falsey values (259, jdufresne)
    Misc changes
    - Switch to Python 3.10 release in GitHub actions (257, jdufresne)
    - Remove in favor of build package (262, jdufresne)
    - Run tests on macos, Windows, and 3.11-dev (267, dan-blanchard)
    • PyPI:
    • Changelog:
    • Repo:

    Update coverage from 6.5.0 to 7.0.1.



    - When checking if a file mapping resolved to a file that exists, we weren&#x27;t
    considering files in .whl files.  This is now fixed, closing `issue 1511`_.
    - File pattern rules were too strict, forbidding plus signs and curly braces in
    directory and file names.  This is now fixed, closing `issue 1513`_.
    - Unusual Unicode or control characters in source files could prevent
    reporting.  This is now fixed, closing `issue 1512`_.
    - The PyPy wheel now installs on PyPy 3.7, 3.8, and 3.9, closing `issue 1510`_.
    .. _issue 1510:
    .. _issue 1511:
    .. _issue 1512:
    .. _issue 1513:
    .. _changes_7-0-0:


    Nothing new beyond 7.0.0b1.
    .. _changes_7-0-0b1:


    - Changes to file pattern matching, which might require updating your
    - Previously, ``*`` would incorrectly match directory separators, making
     precise matching difficult.  This is now fixed, closing `issue 1407`_.
    - Now ``**`` matches any number of nested directories, including none.
    - Improvements to combining data files when using the
    :ref:`config_run_relative_files` setting:
    - During ``coverage combine``, relative file paths are implicitly combined
     without needing a ``[paths]`` configuration setting.  This also fixed
     `issue 991`_.
    - A ``[paths]`` setting like ``*/foo`` will now match ``foo/`` so that
     relative file paths can be combined more easily.
    - The setting is properly interpreted in more places, fixing `issue 1280`_.
    - Fixed environment variable expansion in pyproject.toml files.  It was overly
    broad, causing errors outside of settings, as described in `issue
    1481`_ and `issue 1345`_.  This is now fixed, but in rare cases will require
    changing your pyproject.toml to quote non-string values that use environment
    - Fixed internal logic that prevented from running on
    implementations other than CPython or PyPy (`issue 1474`_).
    .. _issue 991:
    .. _issue 1280:
    .. _issue 1345:
    .. _issue 1407:
    .. _issue 1474:
    .. _issue 1481:
    .. _changes_6-5-0:


    - Changes to file pattern matching, which might require updating your
    - Previously, ``*`` would incorrectly match directory separators, making
     precise matching difficult.  This is now fixed, closing `issue 1407`_.
    - Now ``**`` matches any number of nested directories, including none.
    - Improvements to combining data files when using the
    :ref:`config_run_relative_files` setting, which might require updating your
    - During ``coverage combine``, relative file paths are implicitly combined
     without needing a ``[paths]`` configuration setting.  This also fixed
     `issue 991`_.
    - A ``[paths]`` setting like ``*/foo`` will now match ``foo/`` so that
     relative file paths can be combined more easily.
    - The :ref:`config_run_relative_files` setting is properly interpreted in
     more places, fixing `issue 1280`_.
    - When remapping file paths with ``[paths]``, a path will be remapped only if
    the resulting path exists.  The documentation has long said the prefix had to
    exist, but it was never enforced.  This fixes `issue 608`_, improves `issue
    649`_, and closes `issue 757`_.
    - Reporting operations now implicitly use the ``[paths]`` setting to remap file
    paths within a single data file.  Combining multiple files still requires the
    ``coverage combine`` step, but this simplifies some single-file situations.
    Closes `issue 1212`_ and `issue 713`_.
    - The ``coverage report`` command now has a ``--format=`` option.  The original
    style is now ``--format=text``, and is the default.
    - Using ``--format=markdown`` will write the table in Markdown format, thanks
     to `Steve Oswald &lt;pull 1479_&gt;`_, closing `issue 1418`_.
    - Using ``--format=total`` will write a single total number to the
     output.  This can be useful for making badges or writing status updates.
    - Combining data files with ``coverage combine`` now hashes the data files to
    skip files that add no new information.  This can reduce the time needed.
    Many details affect the speed-up, but for;s own test suite,
    combining is about 40% faster. Closes `issue 1483`_.
    - When searching for completely un-executed files, uses the
    presence of ```` files to determine which directories have source
    that could have been imported.  However, `implicit namespace packages`_ don&#x27;t
    require ````.  A new setting ``[report]
    include_namespace_packages`` tells to consider these directories
    during reporting.  Thanks to `Felix Horvat &lt;pull 1387_&gt;`_ for the
    contribution.  Closes `issue 1383`_ and `issue 1024`_.
    - Fixed environment variable expansion in pyproject.toml files.  It was overly
    broad, causing errors outside of settings, as described in `issue
    1481`_ and `issue 1345`_.  This is now fixed, but in rare cases will require
    changing your pyproject.toml to quote non-string values that use environment
    - An empty file has a coverage total of 100%, but used to fail with
    ``--fail-under``.  This has been fixed, closing `issue 1470`_.
    - The text report table no longer writes out two separator lines if there are
    no files listed in the table.  One is plenty.
    - Fixed a mis-measurement of a strange use of wildcard alternatives in
    match/case statements, closing `issue 1421`_.
    - Fixed internal logic that prevented from running on
    implementations other than CPython or PyPy (`issue 1474`_).
    - The deprecated ``[run] note`` setting has been completely removed.
    .. _implicit namespace packages:
    .. _issue 608:
    .. _issue 649:
    .. _issue 713:
    .. _issue 757:
    .. _issue 991:
    .. _issue 1024:
    .. _issue 1212:
    .. _issue 1280:
    .. _issue 1345:
    .. _issue 1383:
    .. _issue 1407:
    .. _issue 1418:
    .. _issue 1421:
    .. _issue 1470:
    .. _issue 1474:
    .. _issue 1481:
    .. _issue 1483:
    .. _pull 1387:
    .. _pull 1479:
    .. _changes_6-6-0b1:


    • PyPI:
    • Changelog:
    • Repo:

    Update debugpy from 1.6.3 to 1.6.4.



    Fixes: 985, 1003, 1005, 1018, 1024, 1025, 1030, 1031, 1042, 1064, 1081, 1100, 1104, 1111, 1126 
    Improvements: 532, 989, 1022, 1056, 1099
    • PyPI:
    • Changelog:
    • Homepage:

    Update filelock from 3.8.0 to 3.8.2.



    - Fix mypy does not accept ``filelock.FileLock`` as a valid type
    • PyPI:
    • Changelog:
    • Repo:

    Update fire from 0.4.0 to 0.5.0.



    * Support for custom serializers with fire.Fire(serializer=your_serializer) 345 
    * Auto-generated help text now shows short arguments (e.g. -a) when appropriate 318 
    * Documentation improvements (334, 399, 372, 383, 387)
    * Default values are now shown in help for kwonly arguments 414 
    * Completion script fix where previously completions might not show at all 336 
    Highlighted change: `fire.Fire(serialize=custom_serialize_fn)` 345
    You can now pass a custom serialization function to fire to control how the output is serialized.
    Your serialize function should accept an object as input, and may return a string as output. If it returns a string, Fire will display that string. If it returns None, Fire will display nothing. If it returns something else, Fire will use the default serialization method to convert it to text.
    The default serialization remains unchanged from previous versions. Primitives and collections of primitives are serialized one item per line. Objects that define a custom `__str__` function are serialized using that. Complex objects that don&#x27;t define `__str__` trigger their help screen rather than being serialized and displayed.
    • PyPI:
    • Changelog:
    • Repo:

    Update hypothesis from 6.58.1 to 6.61.0.



    This release improves our treatment of database keys, which based on (among other things)
    the source code of your test function.  We now post-process this source to ignore
    decorators, comments, trailing whitespace, and blank lines - so that you can add
    :obj:`example() &lt;hypothesis.example&gt;`\ s or make some small no-op edits to your code
    without preventing replay of any known failing or covering examples.


    This patch updates our vendored `list of top-level domains &lt;;`__,
    which is used by the provisional :func:`` strategy.


    This release improves Hypothesis&#x27; ability to resolve forward references in
    type annotations. It fixes a bug that prevented
    :func:`~hypothesis.strategies.builds` from being used with `pydantic models that
    possess updated forward references &lt;;`__. See :issue:`3519`.


    The :obj:`example(...) &lt;hypothesis.example&gt;` decorator now has a ``.via()``
    method, which future tools will use to track automatically-added covering
    examples (:issue:`3506`).


    This patch updates our vendored `list of top-level domains &lt;;`__,
    which is used by the provisional :func:`` strategy.
    • PyPI:
    • Changelog:
    • Homepage:

    Update identify from 2.5.9 to 2.5.11.

    The bot wasn't able to find a changelog for this release. Got an idea?

    • PyPI:
    • Repo:

    Update importlib-metadata from 5.1.0 to 5.2.0.

    The bot wasn't able to find a changelog for this release. Got an idea?

    • PyPI:
    • Changelog:
    • Repo:

    Update isort from 5.10.1 to 5.11.4.



    - Fixed 2038 (again): stop installing documentation files to top-level site-packages (2057) mgorny
    - CI: only run release workflows for upstream (2052) hugovk
    - Tests: remove obsolete toml import from the test suite (1978) mgorny
    - CI: bump Poetry 1.3.1 (2058) staticdev


    - Fixed 2007: settings for py3.11 (2040) staticdev
    - Fixed 2038: packaging pypoetry (2042) staticdev
    - Docs: renable portray (2043) timothycrosley
    - Ci: add minimum GitHub token permissions for workflows (1969) varunsh-coder
    - Ci: general CI improvements (2041) staticdev
    - Ci: add release workflow (2026) staticdev


    - Hotfix 2034: isort --version is not accurate on 5.11.x releases (2034) gschaffner


    - Hotfix 2031: only call `colorama.init` if `colorama` is available (2032) tomaarsen


    - Added official support for Python 3.11 (1996, 2008, 2011) staticdev
    - Dropped support for Python 3.6 (2019) barrelful
    - Fixed problematic tests (2021, 2022) staticdev
    - Fixed 1960: Rich compatibility (1961) ofek
    - Fixed 1945, 1986: Python 4.0 upper bound dependency resolving issues staticdev
    - Fixed Pyodide CDN URL (1991) andersk
    - Docs: clarify description of use_parentheses (1941) mgedmin
    - Fixed 1976: `black` compatibility for `.pyi` files XuehaiPan
    - Implemented 1683: magic trailing comma option (1876) legau
    - Add missing space in unrecoverable exception message (1933) andersk
    - Fixed 1895: skip-gitignore: use allow list, not deny list bmalehorn
    - Fixed 1917: infinite loop for unmatched parenthesis (1919) anirudnits
    - Docs: shared profiles (1896) matthewhughes934
    - Fixed build-backend values in the example plugins (1892) mgorny
    - Remove reference to jamescurtin/isort-action (1885) AndrewLane
    - Split long cython import lines (1931) davidcollins001
    - Update plone profile: copy of `black`, plus three settings. (1926) mauritsvanrees
    - Fixed 1815, 1862: Add a command-line flag to sort all re-exports (1863) parafoxia
    - Fixed 1854: `lines_before_imports` appending lines after comments (1861) legau
    - Remove redundant `multi_line_output = 3` from &quot;Compatibility with black&quot; (1858) jdufresne
    - Add tox config example (1856) umonaca
    - Docs: add examples for frozenset and tuple settings (1822) sgaist
    - Docs: add multiple config documentation (1850) anirudnits
    • PyPI:
    • Changelog:
    • Repo:

    Update jsonschema from 4.17.1 to 4.17.3.



    * Fix instantiating validators with cached refs to boolean schemas
    rather than objects (1018).


    * Empty strings are not valid relative JSON Pointers (aren&#x27;t valid under the
    RJP format).
    * Durations without (trailing) units are not valid durations (aren&#x27;t
    valid under the duration format). This involves changing the dependency
    used for validating durations (from ``isoduration`` to ``isodate``).
    • PyPI:
    • Changelog:

    Update keyring from 23.11.0 to 23.13.1.



    * 573: Fixed failure in macOS backend when attempting to set a
    password after previously setting a blank password, including a
    test applying to all backends.


    * 608: Added support for tab completion on the ``keyring`` command
    if the ``completion`` extra is installed (``keyring[completion]``).


    * 612: Prevent installation of ``pywin32-ctypes 0.1.2`` with broken
    ``use2to3`` directive.


    * 607: Removed PSF license as it was unused and confusing. Project
    remains MIT licensed as always.
    • PyPI:
    • Changelog:
    • Repo:

    Update lxml from 4.9.1 to 4.9.2.



    Bugs fixed
    * CVE-2022-2309: A Bug in libxml2 2.9.1[0-4] could let namespace declarations
    from a failed parser run leak into later parser runs.  This bug was worked around
    in lxml and resolved in libxml2 2.10.0.
    Other changes
    * LP1981760: ``Element.attrib`` now registers as ````.
    * lxml now has a static build setup for macOS on ARM64 machines (not used for building wheels).
    Patch by Quentin Leffray.
    • PyPI:
    • Changelog:
    • Homepage:

    Update multidict from 6.0.2 to 6.0.4.



    - Declared the official support for Python 3.11 — by :user:`mlegner`. (:issue:`872`)
    • PyPI:
    • Changelog:
    • Repo:

    Update nbclient from 0.7.0 to 0.7.2.



    ([Full Changelog](
    Merged PRs
    - Allow space after In [264]( ([davidbrochart](
    - Fix jupyter_core pinning [263]( ([davidbrochart](
    - Update README, add Python 3.11 [260]( ([davidbrochart](
    Contributors to this release
    ([GitHub contributors page for this release](;to=2022-11-29&amp;type=c))
    &lt;!-- &lt;END NEW CHANGELOG ENTRY&gt; --&gt;


    Maintenance and upkeep improvements
    - CI Refactor [257] ([blink1073])
    Other merged PRs
    - Remove nest-asyncio [259] ([davidbrochart])
    - Add upper bound to dependencies [258] ([davidbrochart])
    Contributors to this release
    blink1073 | davidbrochart | pre-commit-ci

    Update nbconvert from 7.2.5 to 7.2.7.



    Bugs fixed
    - Fix Hanging Tests on Linux [1924] ([blink1073])
    Maintenance and upkeep improvements
    - Adopt ruff and handle lint [1925] ([blink1073])
    Contributors to this release
    blink1073 | pre-commit-ci


    Maintenance and upkeep improvements
    - Include all templates in sdist [1916] ([blink1073])
    - Clean up workflows [1911] ([blink1073])
    - CI Cleanup [1910] ([blink1073])
    Documentation improvements
    - Fix docs build and switch to PyData Sphinx Theme [1912] ([blink1073])

    Update nbformat from 5.7.0 to 5.7.1.

    The bot wasn't able to find a changelog for this release.


    Update numpy from 1.23.5 to 1.24.1.



    NumPy 1.24.1 is a maintenance release that fixes bugs and regressions
    discovered after the 1.24.0 release. The Python versions supported by
    this release are 3.8-3.11.
    A total of 12 people contributed to this release. People with a "+" by
    their names contributed a patch for the first time.
    -   Andrew Nelson
    -   Ben Greiner +
    -   Charles Harris
    -   Clément Robert
    -   Matteo Raso
    -   Matti Picus
    -   Melissa Weber Mendonça
    -   Miles Cranmer
    -   Ralf Gommers
    -   Rohit Goswami
    -   Sayed Adel
    -   Sebastian Berg
    Pull requests merged
    A total of 18 pull requests were merged for this release.
    -   [22820] BLD: add workaround in for newer setuptools
    -   [22830] BLD: CIRRUS_TAG redux
    -   [22831] DOC: fix a couple typos in 1.23 notes
    -   [22832] BUG: Fix refcounting errors found using pytest-leaks
    -   [22834] BUG, SIMD: Fix invalid value encountered in several ufuncs
    -   [22837] TST: ignore more np.distutils.log imports
    -   [22839] BUG: Do not use getdata() in
    -   [22847] BUG: Ensure correct behavior for rows ending in delimiter in...
    -   [22848] BUG, SIMD: Fix the bitmask of the boolean comparison
    -   [22857] BLD: Help raspian arm + clang 13 about __builtin_mul_overflow
    -   [22858] API: Ensure a full mask is returned for masked_invalid
    -   [22866] BUG: Polynomials now copy properly (#22669)
    -   [22867] BUG, SIMD: Fix memory overlap in ufunc comparison loops
    -   [22868] BUG: Fortify string casts against floating point warnings
    -   [22875] TST: Ignore nan-warnings in randomized out tests
    -   [22883] MAINT: restore npymath implementations needed for freebsd
    -   [22884] BUG: Fix integer overflow in in1d for mixed integer dtypes #22877
    -   [22887] BUG: Use whole file for encoding checks with `charset_normalizer`.


    The NumPy 1.24.0 release continues the ongoing work to improve the
    handling and promotion of dtypes, increase the execution speed, and
    clarify the documentation. There are also a large number of new and
    expired deprecations due to changes in promotion and cleanups. This
    might be called a deprecation release. Highlights are:
    -   Many new deprecations, check them out.
    -   Many expired deprecations.
    -   New F2PY features and fixes.
    -   New "dtype" and "casting" keywords for stacking functions.
    See below for the details.
    Deprecate fastCopyAndTranspose and PyArray_CopyAndTranspose
    The `numpy.fastCopyAndTranspose` function has been deprecated. Use the
    corresponding copy and transpose methods directly. The underlying C
    function `PyArray_CopyAndTranspose` has also been deprecated from the
    NumPy C-API.
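A minimal sketch of the replacement using plain array methods (the transposed view materialized as a contiguous copy):

```python
import numpy as np

arr = np.arange(6).reshape(2, 3)

# previously: np.fastCopyAndTranspose(arr)
# now: take the transposed view and materialize it with .copy()
out = arr.T.copy()

print(out.shape)  # (3, 2)
```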
    Conversion of out-of-bound Python integers
    Attempting a conversion from a Python integer to a NumPy value will now
    always check whether the result can be represented by NumPy. This means
    the following examples will fail in the future and give a
    `DeprecationWarning` now:
     np.array([3000], dtype=np.int8)
    Many of these did succeed before. Such code was mainly useful for
    unsigned integers with negative values such as `np.uint8(-1)` giving
    `np.uint8(255)`.
    Note that conversion between NumPy integers is unaffected, so that
    `np.array(-1).astype(np.uint8)` continues to work and use C integer
    overflow logic.
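A short illustration of the distinction (the deprecation warning assumes NumPy ≥ 1.24; in later releases it may have become an error):

```python
import numpy as np

# Converting an out-of-bound Python integer now warns (and will
# eventually fail), e.g.: np.array([3000], dtype=np.int8)

# Conversion *between* NumPy integers is unaffected and keeps using
# C integer overflow logic:
print(np.array(-1).astype(np.uint8))  # 255
```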
    Deprecate `msort`
    The `numpy.msort` function is deprecated. Use `np.sort(a, axis=0)` instead.
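For example, the suggested replacement sorts along the first axis:

```python
import numpy as np

a = np.array([[3, 1],
              [2, 4]])

# np.sort along axis 0 is the documented replacement for np.msort(a)
print(np.sort(a, axis=0))
```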
    `np.str0` and similar are now deprecated
    The scalar type aliases ending in a 0 bit size: `np.object0`, `np.str0`,
    `np.bytes0`, `np.void0`, `np.int0`, `np.uint0` as well as `np.bool8` are
    now deprecated and will eventually be removed.
    Expired deprecations
    -   The `normed` keyword argument has been removed from
     [np.histogram]{.title-ref}, [np.histogram2d]{.title-ref}, and
     [np.histogramdd]{.title-ref}. Use `density` instead. If `normed` was
     passed by position, `density` is now used.
    -   Ragged array creation will now always raise a `ValueError` unless
     `dtype=object` is passed. This includes very deeply nested sequences.
    -   Support for Visual Studio 2015 and earlier has been removed.
    -   Support for the Windows Interix POSIX interop layer has been
     removed.
    -   Support for cygwin < 3.3 has been removed.
    -   The mini() method of `` has been removed. Use
     either `` or ``.
    -   The single-argument form of `` and `` has
     been removed. Use `` or
     `` instead.
    -   Passing dtype instances other than the canonical (mainly native
     byte-order) ones to `dtype=` or `signature=` in ufuncs will now
     raise a `TypeError`. We recommend passing the strings `"int8"` or
     scalar types `np.int8` since the byte-order, datetime/timedelta
     unit, etc. are never enforced. (Initially deprecated in NumPy 1.21.)
    -   The `dtype=` argument to comparison ufuncs is now applied correctly.
     That means that only `bool` and `object` are valid values and
     `dtype=object` is enforced.
    -   The deprecation for the aliases `np.object`, `np.bool`, `np.float`,
     `np.complex`, `np.str`, and `` is expired (introduced in NumPy
     1.20). Some of these will now give a FutureWarning in addition to
     raising an error since they will be mapped to the NumPy scalars in
     the future.
    Compatibility notes
    `array.fill(scalar)` may behave slightly differently
    `numpy.ndarray.fill` may in some cases behave slightly differently now due
    to the fact that the logic is aligned with item assignment:
     arr = np.array([1])   # with any dtype/value
     arr.fill(scalar)
    is now identical to:
     arr[0] = scalar
    Previously casting may have produced slightly different answers when
    using values that could not be represented in the target `dtype` or when
    the target had `object` dtype.
    Subarray to object cast now copies
    Casting a dtype that includes a subarray to an object will now ensure a
    copy of the subarray. Previously an unsafe view was returned:
     arr = np.ones(3, dtype=[("f", "i", 3)])
     subarray_fields = arr.astype(object)[0]
     subarray = subarray_fields[0]   # the "f" field
     np.may_share_memory(subarray, arr)
    is now always False, while previously it was True for this specific cast.
    Returned arrays respect uniqueness of dtype kwarg objects
    When the `dtype` keyword argument is used with `np.array()` or
    `np.asarray()`, the dtype of the returned array now always exactly
    matches the dtype provided by the caller.
    In some cases this change means that a *view* rather than the input
    array is returned. The following is an example for this on 64-bit Linux,
    where `long` and `longlong` are the same precision but different dtypes:
     >>> arr = np.array([1, 2, 3], dtype="long")
     >>> new_dtype = np.dtype("longlong")
     >>> new = np.asarray(arr, dtype=new_dtype)
     >>> new.dtype is new_dtype
     True
     >>> new is arr
     False
    Before the change, the `dtype` did not match because `new is arr` was `True`.
    DLPack export raises `BufferError`
    When an array buffer cannot be exported via DLPack a `BufferError` is
    now always raised where previously `TypeError` or `RuntimeError` was
    raised. This allows falling back to the buffer protocol or
    `__array_interface__` when DLPack was tried first.
    NumPy builds are no longer tested on GCC-6
    Ubuntu 18.04 is deprecated for GitHub actions and GCC-6 is not available
    on Ubuntu 20.04, so builds using that compiler are no longer tested. We
    still test builds using GCC-7 and GCC-8.
    New Features
    New attribute `symbol` added to polynomial classes
    The polynomial classes in the `numpy.polynomial` package have a new
    `symbol` attribute which is used to represent the indeterminate of the
    polynomial. This can be used to change the value of the variable when
    printing:
     >>> P_y = np.polynomial.Polynomial([1, 0, -1], symbol="y")
     >>> print(P_y)
     1.0 + 0.0·y¹ - 1.0·y²
    Note that the polynomial classes only support 1D polynomials, so
    operations that involve polynomials with different symbols are
    disallowed when the result would be multivariate:
     >>> P = np.polynomial.Polynomial([1, -1])   # default symbol is "x"
     >>> P_z = np.polynomial.Polynomial([1, 1], symbol="z")
     >>> P * P_z
     Traceback (most recent call last):
     ValueError: Polynomial symbols differ
    The symbol can be any valid Python identifier. The default is
    `symbol="x"`, consistent with existing behavior.
    F2PY support for Fortran `character` strings
    F2PY now supports wrapping Fortran functions with:
    -   character (e.g. `character x`)
    -   character array (e.g. `character, dimension(n) :: x`)
    -   character string (e.g. `character(len=10) x`)
    -   and character string array (e.g.
     `character(len=10), dimension(n, m) :: x`)
    arguments, including passing Python unicode strings as Fortran character
    string arguments.
    New function `np.show_runtime`
    A new function `numpy.show_runtime` has been added to display the
    runtime information of the machine in addition to `numpy.show_config`
    which displays the build-related information.
    `strict` option for `testing.assert_array_equal`
    The `strict` option is now available for `testing.assert_array_equal`.
    Setting `strict=True` will disable the broadcasting behaviour for
    scalars and ensure that input arrays have the same data type.
    New parameter `equal_nan` added to `np.unique`
    `np.unique` was changed in 1.21 to treat all `NaN` values as equal and
    return a single `NaN`. Setting `equal_nan=False` will restore pre-1.21
    behavior to treat `NaNs` as unique. Defaults to `True`.
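An illustration of both modes (assumes NumPy ≥ 1.24 for the `equal_nan` parameter):

```python
import numpy as np

a = np.array([1.0, np.nan, np.nan])

print(np.unique(a))                   # NaNs collapsed to a single NaN
print(np.unique(a, equal_nan=False))  # NaNs kept distinct (pre-1.21 behavior)
```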
    `casting` and `dtype` keyword arguments for `numpy.stack`
    The `casting` and `dtype` keyword arguments are now available for
    `numpy.stack`. To use them, write
    `np.stack(..., dtype=None, casting='same_kind')`.
    `casting` and `dtype` keyword arguments for `numpy.vstack`
    The `casting` and `dtype` keyword arguments are now available for
    `numpy.vstack`. To use them, write
    `np.vstack(..., dtype=None, casting='same_kind')`.
    `casting` and `dtype` keyword arguments for `numpy.hstack`
    The `casting` and `dtype` keyword arguments are now available for
    `numpy.hstack`. To use them, write
    `np.hstack(..., dtype=None, casting='same_kind')`.
    The bit generator underlying the singleton RandomState can be changed
    The singleton `RandomState` instance exposed in the `numpy.random`
    module is initialized at startup with the `MT19937` bit generator. The
    new function `set_bit_generator` allows the default bit generator to be
    replaced with a user-provided bit generator. This function has been
    introduced to provide a method allowing seamless integration of a
    high-quality, modern bit generator in new code with existing code that
    makes use of the singleton-provided random variate generating functions.
    The companion function `get_bit_generator` returns the current bit
    generator being used by the singleton `RandomState`. This is provided to
    simplify restoring the original source of randomness if required.
    The preferred method to generate reproducible random numbers is to use a
    modern bit generator in an instance of `Generator`. The function
    `default_rng` simplifies instantiation:
     >>> rg = np.random.default_rng(3728973198)
     >>> rg.random()
    The same bit generator can then be shared with the singleton instance so
    that calling functions in the `random` module will use the same bit
    generator:
     >>> orig_bit_gen = np.random.get_bit_generator()
     >>> np.random.set_bit_generator(rg.bit_generator)
     >>> np.random.normal()
    The swap is permanent (until reversed) and so any call to functions in
    the `random` module will use the new bit generator. The original can be
    restored if required for code to run correctly:
     >>> np.random.set_bit_generator(orig_bit_gen)
    `np.void` now has a `dtype` argument
    NumPy now allows constructing structured void scalars directly by
    passing the `dtype` argument to `np.void`.
    F2PY Improvements
    -   The generated extension modules don't use the deprecated NumPy-C
     API anymore
    -   Improved `f2py` generated exception messages
    -   Numerous bug and `flake8` warning fixes
    -   various CPP macros that one can use within C-expressions of
     signature files are prefixed with `f2py_`. For example, one should
     use `f2py_len(x)` instead of `len(x)`
    -   A new construct `character(f2py_len=...)` is introduced to support
     returning assumed length character strings (e.g. `character(len=*)`)
     from wrapper functions
    A hook to support rewriting `f2py` internal data structures after
    reading all its input files is introduced. This is required, for
    instance, for backward compatibility of SciPy support, where character
    arguments are treated as character string arguments in `C` expressions.
    IBM zSystems Vector Extension Facility (SIMD)
    Added support for SIMD extensions of zSystem (z13, z14, z15), through
    the universal intrinsics interface. This support leads to performance
    improvements for all SIMD kernels implemented using the universal
    intrinsics, including the following operations: rint, floor, trunc,
    ceil, sqrt, absolute, square, reciprocal, tanh, sin, cos, equal,
    not_equal, greater, greater_equal, less, less_equal, maximum, minimum,
    fmax, fmin, argmax, argmin, add, subtract, multiply, divide.
    NumPy now gives floating point errors in casts
    In most cases, NumPy previously did not give floating point warnings or
    errors when these happened during casts. For example, casts like:
     np.array([2e300]).astype(np.float32)   # overflow for float32
    Should now generally give floating point warnings. These warnings should
    warn that floating point overflow occurred. For errors when converting
    floating point values to integers, users should expect invalid value
    warnings.
    Users can modify the behavior of these warnings using `np.errstate`.
    Note that for float to int casts, the exact warnings that are given may
    be platform dependent. For example:
     arr = np.full(100, value=1000, dtype=np.float64)
     arr.astype(np.int8)
    may give a result equivalent to (the intermediate cast means no warning
    is given):
     arr.astype(np.int64).astype(np.int8)
    or may return an undefined result, with a warning set:
     RuntimeWarning: invalid value encountered in cast
    The precise behavior is subject to the C99 standard and its
    implementation in both software and hardware.
    F2PY supports the value attribute
    The Fortran standard requires that variables declared with the `value`
    attribute must be passed by value instead of reference. F2PY now
    supports this use pattern correctly. So
    `integer, intent(in), value :: x` in Fortran codes will have correct
    wrappers generated.
    Added pickle support for third-party BitGenerators
    The pickle format for bit generators was extended to allow each bit
    generator to supply its own constructor during pickling. Previous
    versions of NumPy only supported unpickling `Generator` instances
    created with one of the core set of bit generators supplied with NumPy.
    Attempting to unpickle a `Generator` that used a third-party bit
    generator would fail since the constructor used during the unpickling
    was only aware of the bit generators included in NumPy.
    arange() now explicitly fails with dtype=str
    Previously, the `np.arange(n, dtype=str)` function worked for `n=1` and
    `n=2`, but would raise a non-specific exception message for other values
    of `n`. Now, it raises a `TypeError` informing that `arange`
    does not support string dtypes:
     >>> np.arange(2, dtype=str)
     Traceback (most recent call last):
     TypeError: arange() not supported for inputs with DType <class 'numpy.dtype[str_]'>.
    `numpy.typing` protocols are now runtime checkable
    The protocols used in `numpy.typing.ArrayLike` and
    `numpy.typing.DTypeLike` are now properly marked as runtime checkable,
    making them easier to use for runtime type checkers.
    Performance improvements and changes
    Faster version of `np.isin` and `np.in1d` for integer arrays
    `np.in1d` (used by `np.isin`) can now switch to a faster algorithm (up
    to >10x faster) when it is passed two integer arrays. This is often
    used automatically, but you can use `kind="sort"` or `kind="table"` to
    force the old or new method, respectively.
    Faster comparison operators
    The comparison functions (`numpy.equal`, `numpy.not_equal`,
    `numpy.less`, `numpy.less_equal`, `numpy.greater` and
    `numpy.greater_equal`) are now much faster as they are now vectorized
    with universal intrinsics. For a CPU with SIMD extension AVX512BW, the
    performance gain is up to 2.57x, 1.65x and 19.15x for integer, float and
    boolean data types, respectively (with N=50000).
    Better reporting of integer division overflow
    Integer division overflow of scalars and arrays used to provide a
    `RuntimeWarning` and the return value was undefined, leading to crashes
    on rare occasions:
     >>> np.array([np.iinfo(np.int32).min]*10, dtype=np.int32) // np.int32(-1)
     <stdin>:1: RuntimeWarning: divide by zero encountered in floor_divide
     array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0], dtype=int32)
    Integer division overflow now returns the input dtype's minimum value
    and raises the following `RuntimeWarning`:
     >>> np.array([np.iinfo(np.int32).min]*10, dtype=np.int32) // np.int32(-1)
     <stdin>:1: RuntimeWarning: overflow encountered in floor_divide
     array([-2147483648, -2147483648, -2147483648, -2147483648, -2147483648,
            -2147483648, -2147483648, -2147483648, -2147483648, -2147483648],
           dtype=int32)
    `masked_invalid` now modifies the mask in-place
    When used with `copy=False`, `masked_invalid` now modifies the
    input masked array in-place. This makes it behave identically to
    `masked_where` and better matches the documentation.
    `nditer`/`NpyIter` allows allocating all operands
    The NumPy iterator available through `np.nditer` in Python and as
    `NpyIter` in C now supports allocating all arrays. The iterator shape
    defaults to `()` in this case. The operands' dtype must be provided,
    since a "common dtype" cannot be inferred from the other inputs.

    Update packaging from 21.3 to 22.0.



    * Explicitly declare support for Python 3.11 (:issue:`587`)
    * Remove support for Python 3.6 (:issue:`500`)
    * Remove ``LegacySpecifier`` and ``LegacyVersion`` (:issue:`407`)
    * Add ``__hash__`` and ``__eq__`` to ``Requirement`` (:issue:`499`)
    * Add a ``cpNNN-none-any`` tag (:issue:`541`)
    * Adhere to :pep:`685` when evaluating markers with extras (:issue:`545`)
    * Allow accepting locally installed prereleases with ``SpecifierSet``  (:issue:`515`)
    * Allow pre-release versions in marker evaluation (:issue:`523`)
    * Correctly parse ELF for musllinux on Big Endian (:issue:`538`)
    * Document ``packaging.utils.NormalizedName`` (:issue:`565`)
    * Document exceptions raised by functions in ``packaging.utils`` (:issue:`544`)
    * Fix compatible version specifiers incorrectly stripping trailing ``0`` (:issue:`493`)
    * Fix macOS platform tags with old macOS SDK (:issue:`513`)
    * Forbid prefix version matching on pre-release/post-release segments (:issue:`563`)
    * Normalize specifier version for prefix matching (:issue:`561`)
    * Improve documentation for ``packaging.specifiers`` and ``packaging.version``. (:issue:`572`)
    * ``Marker.evaluate`` will now assume an evaluation environment with an empty ``extra``.
    Evaluating markers like ``"extra == 'xyz'"`` without passing any extra in the
    ``environment`` will no longer raise an exception (:issue:`550`)
    * Remove dependency on ``pyparsing``, by replacing it with a hand-written parser.
    This package now has no runtime dependencies (:issue:`468`)
    * Update return type hint for ``Specifier.filter`` and ``SpecifierSet.filter``
    to use ``Iterator`` instead of ``Iterable`` (:issue:`584`)
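The entries above are quoted from the upstream packaging changelog. As a quick illustration of the pre-release handling mentioned in :issue:`515`/:issue:`523`, a minimal sketch (assuming the packaging library is installed):

```python
from packaging.specifiers import SpecifierSet
from packaging.version import Version

# By default, a pre-release version does not satisfy a plain specifier set...
assert Version("1.1rc1") not in SpecifierSet(">=1.0")

# ...but it does once pre-releases are explicitly allowed.
assert Version("1.1rc1") in SpecifierSet(">=1.0", prereleases=True)
```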

    Update pandas-market-calendars from 4.1.1 to 4.1.2.



    - Added 2023 holidays to BSE calendar

    Update pathspec from 0.10.2 to 0.10.3.



    New features:
    - Added utility function `pathspec.util.append_dir_sep()` to aid in distinguishing between directories and files on the file-system. See `Issue 65`_.
    Bug fixes:
    - `Issue 66`_/`Pull 67`_: Package not marked as py.typed.
    - `Issue 68`_: Exports are considered private.
    - `Issue 70`_/`Pull 71`_: 'Self' string literal type is Unknown in pyright.
    - `Issue 65`_: Checking directories via match_file() does not work.
  • Avoid a `ModuleNotFoundError` in `streamlit`


    Running `streamlit run` on a script in `../analysis/` raises a `ModuleNotFoundError`:

    File "/LiuAlgoTrader/.venv/lib/python3.10/site-packages/streamlit-1.15.1-py3.10.egg/streamlit/runtime/scriptrunner/", line 564, in _run_script
        exec(code, module.__dict__)
      File "/LiuAlgoTrader/analysis/", line 15, in <module>
        from streamlit.uploaded_file_manager import UploadedFile
    ModuleNotFoundError: No module named 'streamlit.uploaded_file_manager'

    The location of UploadedFile was changed from streamlit.uploaded_file_manager to streamlit.runtime.uploaded_file_manager.
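A version-tolerant import helper can paper over such relocations. This is a sketch, not part of LiuAlgoTrader, and the helper name is hypothetical:

```python
import importlib


def import_from_either(new_path: str, old_path: str, name: str):
    """Try importing `name` from `new_path` first (newer library
    versions), falling back to `old_path` (older versions)."""
    for module_path in (new_path, old_path):
        try:
            return getattr(importlib.import_module(module_path), name)
        except (ModuleNotFoundError, AttributeError):
            continue
    raise ImportError(f"{name!r} not found in {new_path!r} or {old_path!r}")


# Usage for the relocation described above (requires streamlit installed):
# UploadedFile = import_from_either(
#     "streamlit.runtime.uploaded_file_manager",
#     "streamlit.uploaded_file_manager",
#     "UploadedFile",
# )
```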

    opened by TheSnoozer 1
  • Bump certifi from 2022.9.24 to 2022.12.7 in /liualgotrader/requirements


    Bumps certifi from 2022.9.24 to 2022.12.7.


  • v0.4.29(Aug 8, 2022)

  • v0.4.28(Jul 26, 2022)

    1. Notes on environment variables for non-PowerShell users,
    2. Changed the sample default from 'sip' to 'iex' for freemium Alpaca users,
    3. Various bug fixes in the trader.
  • v0.4.27(Jul 10, 2022)

  • v0.4.26(Jul 2, 2022)

  • v0.4.24(Jun 6, 2022)

  • v0.4.22(Jun 5, 2022)

  • v0.4.21(Jun 5, 2022)

    1. Enhanced support for Python 3.10 while keeping backward compatibility,
    2. Resolved release issues reported by users,
    3. Minor bug fixes,
    4. Updated packages.


  • v0.4.20(Mar 27, 2022)

  • v0.4.13(Feb 18, 2022)

    1. New momentum scanner running on top of Alpaca and Polygon,
    2. DataLoader class now transparently takes symbol changes into account,
    3. Updated stock-symbol change data,
    4. Updated dependent libraries and improved tests,
    5. Bug fixes.
  • v0.4.12(Jan 22, 2022)

    This release includes:

    1. Improvements to Crypto data loading.
    2. Adding list function to portfolio CLI.
    3. Adding tutorial links to README and documentation.
    4. Bug-fixes
  • v0.4.11(Jan 12, 2022)

    • Added support for database-based tradeplans.
    • Infrastructure for supporting multiple.
    • Complete rewrite of the DataLoader.
    • Significant performance improvements.
  • v0.4.10(Jan 12, 2022)

    • Added support for database-based tradeplans.
    • Infrastructure for supporting multiple.
    • Complete rewrite of the DataLoader.
    • Significant performance improvements.
  • v0.4.03(Dec 17, 2021)

  • v0.4.02(Dec 17, 2021)

  • v0.4.00(Nov 2, 2021)

    1. Add support for Crypto assets,
    2. Add support for Alpaca BTC/ETH data loading,
    3. Add support for Gemini Bitcoin Exchange (trading, data & streaming) sandbox
  • v0.3.25(Sep 15, 2021)

  • v0.3.10(Jun 24, 2021)

  • v0.3.0(May 28, 2021)

  • v0.2.00(May 7, 2021)

    1. Add support for Account management,
    2. Add support for Portfolio management,
    3. Add support for a key-value store,
    4. Extend Strategy to better support 'swing' trades: strategies can now run on all symbols in one go, instead of receiving real-time updates and acting per symbol,
    5. Improve analytics for swing trading, including Sharpe Ratio and SP-500 comparison,
    6. Added test-automation,
    7. Improve documentation.
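The Sharpe-ratio analytics mentioned above boil down to a standard formula; a minimal sketch (not the framework's implementation):

```python
import numpy as np


def sharpe_ratio(daily_returns, risk_free_rate=0.0, periods=252):
    """Annualized Sharpe ratio: mean excess return over its sample
    standard deviation, scaled by sqrt(trading periods per year)."""
    excess = np.asarray(daily_returns, dtype=float) - risk_free_rate / periods
    return float(np.sqrt(periods) * excess.mean() / excess.std(ddof=1))
```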
  • v0.1.05(Apr 9, 2021)

    1. Default data provider switched to Alpaca, tested and ready for production,
    2. Polygon as data-provider available in all subscription levels (features obviously depend on selected level),
    3. Finnhub as a data provider: work in progress.
  • 0.1.00(Mar 20, 2021)

    1. Full list of changes in the documentation's What's New page,
    2. Rearchitecting due to changes in both Alpaca and Polygon APIs, reducing dependencies and making it easier to migrate or adopt new brokers & data-providers,
    3. Adding full backtesting support.
  • v0.0.86(Jan 11, 2021)

  • v0.0.85(Dec 31, 2020)

  • v0.0.82(Dec 22, 2020)

  • v0.0.80(Dec 15, 2020)

    1. Added new DB tables to track P&L and R units per symbol per trade,
    2. Added a miner to recalculate performance for past trading sessions,
    3. Improved tear-sheet notebook,
    4. Bug fixes.
  • v0.0.76(Nov 21, 2020)

    1. Added anchored-VWAP calculations to the fincalcs.vwap module,
    2. Added an analytics notebook with an interactive candlestick-and-volume chart showing anchored VWAP(s).
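Anchored VWAP itself is a simple cumulative computation measured from a chosen anchor bar onward; a sketch of the idea (not the fincalcs.vwap code):

```python
import numpy as np


def anchored_vwap(prices, volumes, anchor=0):
    """Volume-weighted average price cumulated from the anchor bar:
    cumsum(price * volume) / cumsum(volume), starting at `anchor`."""
    p = np.asarray(prices, dtype=float)[anchor:]
    v = np.asarray(volumes, dtype=float)[anchor:]
    return np.cumsum(p * v) / np.cumsum(v)
```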
  • v0.0.74(Nov 18, 2020)

  • v0.0.68(Nov 2, 2020)

  • v0.0.67(Oct 29, 2020)

    1. Documentation aligned with the framework enhancements,
    2. Collection of bug fixes and improvements,
    3. Improvements to the notebooks release cycle.
  • v0.0.61(Oct 18, 2020)

    • added an installation & configuration wizard,
    • reduced dependencies to streamline installation, especially for Windows users,
    • improved analytics and adding a streamlit UI,
    • additional configuration parameters,
    • various bug fixes
Amichay Oren
Coder and Cat Herder