Trio – a friendly Python library for async concurrency and I/O

Overview

The Trio project aims to produce a production-quality, permissively licensed, async/await-native I/O library for Python. Like all async libraries, its main purpose is to help you write programs that do multiple things at the same time with parallelized I/O. A web spider that wants to fetch lots of pages in parallel, a web server that needs to juggle lots of downloads and websocket connections simultaneously, a process supervisor monitoring multiple subprocesses... that sort of thing. Compared to other libraries, Trio attempts to distinguish itself with an obsessive focus on usability and correctness. Concurrency is complicated; we try to make it easy to get things right.

Trio was built from the ground up to take advantage of the latest Python features, and draws inspiration from many sources, in particular Dave Beazley's Curio. The resulting design is radically simpler than older competitors like asyncio and Twisted, yet just as capable. Trio is the Python I/O library I always wanted; I find it makes building I/O-oriented programs easier, less error-prone, and just plain more fun. Perhaps you'll find the same.

This project is young and still somewhat experimental: the overall design is solid, and the existing features are fully tested and documented, but you may encounter missing functionality or rough edges. We do encourage you to use it, but you should read and subscribe to issue #1 to get a warning and a chance to give feedback about any compatibility-breaking changes.

Where to next?

I want to try it out! Awesome! We have a friendly tutorial to get you started; no prior experience with async coding is required.

Ugh, I don't want to read all that – show me some code! If you're impatient, then here's a simple concurrency example, an echo client, and an echo server.
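
For a quick taste before you click through, here's a minimal sketch (not one of the linked examples, just an illustration) of what Trio's nursery-based concurrency looks like:

import trio

async def child(name, seconds):
    print(f"{name}: started")
    await trio.sleep(seconds)  # a checkpoint: other tasks run while we wait
    print(f"{name}: finished")

async def main():
    # A nursery supervises child tasks; the async with block exits only
    # after every task started inside it has finished.
    async with trio.open_nursery() as nursery:
        nursery.start_soon(child, "task1", 1)
        nursery.start_soon(child, "task2", 2)

trio.run(main)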

How does Trio make programs easier to read and reason about than competing approaches? Trio is based on a new way of thinking that we call "structured concurrency". The best theoretical introduction is the article Notes on structured concurrency, or: Go statement considered harmful. Or, check out this talk at PyCon 2018 to see a demonstration of implementing the "Happy Eyeballs" algorithm in an older library versus Trio.

Cool, but will it work on my system? Probably! As long as you have some kind of Python 3.6-or-better (CPython or the latest PyPy3 are both fine), and are using Linux, macOS, Windows, or FreeBSD, then Trio will work. Other environments might work too, but those are the ones we test on. And all of our dependencies are pure Python, except for CFFI on Windows, which has wheels available, so installation should be easy (no C compiler needed).

I tried it, but it's not working. Sorry to hear that! You can try asking for help in our chat room or forum, filing a bug, or posting a question on StackOverflow, and we'll do our best to help you out.

Trio is awesome, and I want to help make it more awesome! You're the best! There's tons of work to do – filling in missing functionality, building up an ecosystem of Trio-using libraries, usability testing (e.g., maybe try teaching yourself or a friend to use Trio and make a list of every error message you hit and place where you got confused?), improving the docs, ... check out our guide for contributors!

I don't have any immediate plans to use it, but I love geeking out about I/O library design! That's a little weird? But let's be honest, you'll fit in great around here. We have a whole sub-forum for discussing structured concurrency (developers of other systems welcome!). Or check out our discussion of design choices, reading list, and issues tagged design-discussion.

I want to make sure my company's lawyers won't get angry at me! No worries, Trio is permissively licensed under your choice of MIT or Apache 2. See LICENSE for details.

Code of conduct

Contributors are requested to follow our code of conduct in all project spaces.

Comments
  • Make trio/_core/_run less magical and more understandable for static code analysis (issue #542)

    This is the first step toward the exec replacement. The manual code works (at least on macOS). Please take a look and add any comments. The old code is still present as dead, commented-out code for reference.

    Addresses part of #542

    opened by jmfrank63 67
  • Simplify imports

    As advised, I replaced all import * statements with explicit imports in the __init__.py of the trio root.

    I tested against VS Code's code completion and it seems to work. I have not run any tests yet.

    Also, pylint no longer complains about missing attributes.

    The issue is #542.

    opened by jmfrank63 44
  • Should we rename trio.hazmat?

    On the one hand, the name hazmat is definitely effective at getting people to pay attention and be careful. OTOH, I'm not sure it's quite communicating what we want: people seem scared of it, and uncertain whether they can actually use it. A bit of that is fine, but it may be scaring people off more than we intend. The block of text at the beginning helps of course, but you can't really fix up a confusing name with good docs.

    Maybe trio.lowlevel? Though trio.socket is kind of low-level too. Does lowlevel have the kind of "if you see this during a code review, ask for a comment justifying its usage" implication that we're going for?

    trio.foundation? trio.core?

    design discussion potential API breaker 
    opened by njsmith 41
  • Raise at next checkpoint if a non-awaited coroutine is found.

    Right now this works only if the task is not finished. I'm not sure what to do if the task is done, as you have no occasion to throw into it. Should we still return the final_result as a Result, or make it an Error()?

    Debugging is still a bit weird, as you need to find the checkpoint in the middle of the stack trace (I guess we can improve that). Another question: is there a way to get the previous checkpoint of the current task, to narrow things down?

    It's still hard to debug when the non-awaited coroutine is not at the same stack level as the schedule point. We may be able to do better by inspecting unawaited coroutine frames.

    The await_later and the context managers to relax/enforce await are not exposed, and I'm unsure whether we want to have custom CoroProtectors (likely yes, for testing). We may also want to list all the unawaited coroutines in the error message. So far I have not tried it with many tasks, and the internals of Trio are still unfamiliar to me.

    Docs and Tests are still missing.

    opened by Carreau 39
  • MultiError v2

    MultiError is the one part of trio's core design that I'm really not satisfied with. Trying to support multiple exceptions on top of the language's firm one-exception-at-a-time stance raises (heh) all kinds of issues. Some of them can probably be fixed by changing the design. But the hardest problem is that there are lots of third-party packages that want to do custom handling of exception tracebacks (e.g. ipython, pytest, raven/sentry). And right now we have to monkeypatch all of them to work with MultiError, with more or less pain and success.

    Now @1st1 wants to add nurseries to asyncio, and as part of that he'll need to add something like MultiError to the stdlib. We talked a bunch about this at PyCon, and the plan is to figure out how to do this in a way that works for both asyncio and trio. Probably we'll start by splitting MultiError off into a standalone library, that trio and uvloop can both consume, and then add that library to the stdlib in 3.8 (and then the library will remain as a backport library for those using trio on 3.7 and earlier). This way asyncio can build on our experience, and trio can get out of the monkeypatching business (because if MultiErrors are in the stdlib, then it becomes ipython/pytest/sentry's job to figure out how to cope with them).

    But before we can do that we need to fix the design, and do it in writing so we (and Yury) can all see that the new design is right :-).

    Current design

    [If this were a PEP I'd talk more about the basic assumptions underlying the design: multiple errors can happen concurrently, you need to preserve that fact, you need to be able to catch some-but-not-all of those errors, you need to make sure that you don't accidentally throw away any errors that you didn't explicitly try to catch, etc. But this is a working doc so I'm just going to dive into the details...]

    Currently, trio thinks of MultiError objects as being ephemeral things. It tries as much as possible to simulate a system where multiple exceptions just happen to propagate next to each other. So it's important to keep track of the individual errors and their tracebacks, but the MultiError objects themselves are just a detail needed to accomplish this.

    So, we only create MultiErrors when there are actually multiple errors – if a MultiError has only a single exception, we "collapse" it, so MultiError([single_exc]) is single_exc. The basic primitive for working with a MultiError is the filter function, which is really a weird flat-map-ish kind of thing: it runs a function over each of the "real" exceptions inside a MultiError, and can replace or remove any of them. If this results in any MultiError object that has zero or one child, then filter collapses it. And the catch helper, MultiError.catch, is a thin wrapper for filter: it catches an exception, then runs a filter over it, and then reraises whatever is left (if anything).
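
    To make the filter/catch relationship concrete, here's a sketch of current-style usage (drop_keyerrors and do_something are hypothetical stand-ins; MultiError.catch and filter are as described above):

    def drop_keyerrors(exc):
        # filter-style handler: runs on each "real" exception inside the
        # MultiError; returning None removes it, returning an exception
        # keeps (or replaces) it
        if isinstance(exc, KeyError):
            return None
        return exc

    with MultiError.catch(drop_keyerrors):
        do_something()  # may raise MultiError([KeyError(), ValueError()])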

    One more important detail: traceback handling. When you have a nested collection of MultiErrors, e.g. MultiError([RuntimeError(), MultiError([KeyError(), ValueError()])]), then the leaf exceptions' __traceback__ attr holds the traceback for the frames where they traveled independently before meeting up to become a MultiError, and then each MultiError object's __traceback__ attr holds the frames that that particular MultiError traversed. This is just how Python's __traceback__ handling works; there's no way to avoid it. But that's OK, it's actually quite convenient – when we display a traceback, we don't want to say "exception 1 went through frames A, B, C, D, and independently, exception 2 went through frames A', B, C, D" – it's more meaningful, and less cluttered, to say "exception 1 went through frame A, and exception 2 went through frame A', and then they met up and together they went through frames B, C, D". The way __traceback__ data ends up distributed over the MultiError structure makes this structure really easy to extract.

    Proposal for new design

    Never collapse MultiErrors. Example:

    async def some_func():
        async with trio.open_nursery() as outer_nursery:
            async with trio.open_nursery() as inner_nursery:
                raise RuntimeError
    

    If you do await some_func() then currently you get a RuntimeError; in this proposal, you'll instead get a MultiError([MultiError([RuntimeError()])]).

    Get rid of filter, and replace it with a new primitive split. Given an exception and a predicate, split splits the exception into one half representing all the parts of the exception that match the predicate, and another half representing all the parts that don't match. Example:

    match, rest = MultiError.split(
      # simple predicate: isinstance(exc, RuntimeError)
      RuntimeError,
      # The exception being split:
      MultiError([
        RuntimeError("a"),
        MultiError([
          RuntimeError("b"),
          ValueError(),
        ]),
      ])
    )
    
    # produces:
    
    match = MultiError([RuntimeError("a"), MultiError([RuntimeError("b")])])
    rest = MultiError([MultiError([ValueError()])])
    
    

    The split operation always takes an exception type (or tuple of types) to match, just like an except clause. It should also take an optional arbitrary function predicate, like match=lambda exc: ....

    If either match or rest is empty, it gets set to None. It's a classmethod rather than a regular method so that you can still use it in cases where you have an exception but don't know whether it's a MultiError or a regular exception, without having to check.
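
    For example, combining both under the proposed API might look like this (a sketch of the proposal; caught_exc and handle_connection_resets are stand-ins):

    import errno

    match, rest = MultiError.split(
        OSError,
        caught_exc,  # whatever exception we caught
        match=lambda exc: exc.errno == errno.ECONNRESET,
    )
    if match is not None:
        handle_connection_resets(match)  # hypothetical helper
    if rest is not None:
        raise rest  # propagate everything that didn't match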

    Catching MultiErrors is still done with a context manager, like with MultiError.catch(RuntimeError, handler). But now, catch takes a predicate + a handler (as opposed to filter, which combines these into one thing), uses the predicate to split any caught error, and then if there is a match it calls the handler exactly once, passing it the matched object.

    Also, support async with MultiError.acatch(...) so you can write async handlers.

    Limitations of the current design, and how this fixes them

    Collapsing is not as helpful to users as you might think

    A "nice" thing about collapsing out MultiErrors is that most of the time, when only one thing goes wrong, you get a nice regular exception and don't need to think about this MultiError stuff. I say "nice", but really this is... bad. When you write error handling code, you want to be prepared for everything that could happen, and this design makes it very easy to forget that MultiError is a possibility, and hard to figure out where MultiError handling is actually required. If the language made handling MultiErrors more natural/ergonomic, this might not be as big an issue, but that's just not how Python works. So Yury is strongly against the collapsing design, and he has a point.

    Basically, seeing MultiError([RuntimeError()]) tells you "ah, this time it was a single exception, but it could have been multiple exceptions, so I'd better be prepared to handle that".

    This also has the nice effect that it becomes much easier to teach people about MultiError, because it shows up front-and-center the first time you have an error inside a nursery.

    One of my original motivations for collapsing was that trio.run has a hidden nursery (the "system nursery") that the main task runs inside, and if you do trio.run(main) and main raises RuntimeError, I wanted trio.run to raise RuntimeError as well, not MultiError([RuntimeError()]). But actually this is OK, because the way things have worked out, we never raise errors through the system nursery anyway: either we re-raise whatever main raised, or we raise TrioInternalError. So my concern was unfounded.

    Collapsing makes traceback handling more complicated and fragile

    Collapsing also makes the traceback handling code substantially more complicated. When filter simplifies a MultiError tree by removing intermediate nodes, it has to preserve the traceback data those nodes held, which it does by patching it into the remaining exceptions. (In our example above, if exception 2 gets caught, then we patch exception 1's __traceback__ so that it shows frames A, B, C, D after all.) This all works, but it makes the implementation much more complex. If we don't collapse, then we can throw away all the traceback patching code: the tracebacks can just continue to live on whichever object they started out on.

    Collapsing also means that filter is a destructive operation: it has to mutate the underlying exception objects' __traceback__ attributes in place, so you can't like, speculatively run a filter and then change your mind and go back to using the original MultiError. That object still exists but after the filter operation it's now in an inconsistent state. Fine if you're careful, but it'd be nicer if users didn't have to be careful. If we don't collapse, then this isn't an issue: split doesn't have to mutate its input (and neither would filter, if we were still using filter).

    Collapsing loses __context__ for intermediate MultiError nodes

    Currently, Trio basically ignores the __context__ and __cause__ attributes on MultiError objects. They don't get assigned any semantics, they get freely discarded when collapsing, and they often end up containing garbage data. (In particular, if you catch a MultiError, handle part of it, and re-raise the remainder... the interpreter doesn't know that this is semantically a "re-raise", and insists on sticking the old MultiError object onto the new one's __context__. We have countermeasures, but it's all super annoying and messy.)

    It turns out though that we do actually have a good use for __context__ on MultiErrors. It's super not obvious, but consider this scenario: you have two tasks, A and B, executing concurrently in the same nursery. They both crash. But! Task A's exception propagates faster, and reaches the nursery first. So the nursery sees this, and cancels task B. Meanwhile, task B has blocked somewhere – maybe it's trying to send a graceful shutdown message from a finally: block or something. The cancellation interrupts this, so now task B has a Cancelled exception propagating, and that exception's __context__ is set to the original exception in task B. Eventually, the Cancelled exception reaches the nursery, which catches it. What happens to task B's original exception?

    Currently, in Trio, it gets discarded. But it turns out that this is not so great – in particular, it's very possible that task A and B were working together on something, task B hit an error, and then task A crashed because task B suddenly stopped talking to it. So here task B's exception is the actual root cause, and task A's exception is detritus. At least two people have hit this in Trio (see #416 and https://github.com/python-trio/pytest-trio/issues/30).

    In the new design, we should declare that a MultiError object's __context__ holds any exceptions that were preempted by the creation of that MultiError, i.e., by the nursery getting cancelled. We'd basically just look at the Cancelled objects, and move their __context__ attributes onto the MultiError that the nursery was raising. But this only works if we avoid collapsing.

    It would be nice if tracebacks could show where exceptions jumped across task boundaries

    This has been on our todo list forever. It'd be nice if we could like... annotate tracebacks somehow?

    If we stopped collapsing MultiErrors, then there's a natural place to put this information: each MultiError corresponds to a jump across task boundaries, so we can put it in the exception string or something. (Oh yeah, maybe we should switch MultiErrors to having associated message strings? Currently they don't have that.)

    Filtering is just an annoying abstraction to use

    If I wanted to express exception catching using a weird flat-map-ish thing, I'd be writing Haskell. In Python it's awkward and unidiomatic. But with filter, it's necessary, because you could have any number of exceptions you need to call it on.

    With split, there's always exactly 2 outputs, so you can perform basic MultiError manipulations in straight-line code without callbacks.

    Tracebacks make filtering even more annoying to use than it would otherwise be

    When filter maps a function over a MultiError tree, the exceptions passed in are not really complete, standalone exceptions: they only have partial tracebacks attached. So you have to handle them carefully. You can't raise or catch them – if you did, the interpreter would start inserting new tracebacks and make a mess of things.

    You might think it would be natural to write a filter function using a generator, like how @contextmanager works:

    def catch_and_log_handler():
        try:
            yield
        except Exception as exc:
            log.exception(exc)
    
    def normalize_errors_handler():
        try:
            yield
        except OtherLibraryError as exc:
            raise MyLibraryError(...) from exc
    

    But this can't work, because the tracebacks would get all corrupted. Instead, handlers take exceptions as arguments, and return either that exception object, or a new exception object (like MyLibraryError).
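
    Concretely, under the current design the two handlers above have to be written as plain functions over exception objects, something like this sketch:

    def catch_and_log_handler(exc):
        if isinstance(exc, Exception):
            log.exception(exc)
            return None  # returning None removes the exception
        return exc       # leave other BaseExceptions alone

    def normalize_errors_handler(exc):
        if isinstance(exc, OtherLibraryError):
            new_exc = MyLibraryError("wrapped")
            new_exc.__cause__ = exc  # can't use "raise ... from" here
            return new_exc           # returning an exception replaces it
        return exc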

    If a handler function does raise an exception (e.g., b/c of a typo), then there's no good way to cope with that. Currently Trio's MultiError code doesn't even try to handle this.

    In the proposed design, all of these issues go away. The exceptions returned by split are always complete and self-contained. Probably for MultiError.catch we will still pass in the exception as an argument instead of using a generator and .throwing it – the only advantage of .throw is that it lets you use an except block to say which exceptions you want to catch, and with the new MultiError.catch we've already done that before we call the handler. But we can totally allow raise as a way to replace the exception, or handle accidental exceptions. (See the code below for details.)

    Async catching

    Currently we don't have an async version of filter or catch (i.e., one where the user-specified handler can be async). Partly this is because when I was first implementing this I hit an interpreter bug that made it not work, but it's also because filter's implementation is extremely complicated, and maintaining two copies makes it that much worse.

    With the new design, there's no need for an async split, and I think the new catch logic makes supporting both sync and async easy (see below).

    Details

    # A classic "catch-all" handler, very common in servers
    with MultiError.catch(Exception, logger.exception):
        ...
    
    # is equivalent to:
    
    try:
        ...
    except BaseException as exc:
        caught, rest = MultiError.split(Exception, exc)
        if caught is None:
            raise
        try:
            logger.exception(caught)
        except BaseException as exc:
            # The way we set exc.__context__ here isn't quite right...
            # Ideally we should stash it in the interpreter's implicit
            # exception-handling context state before calling the handler.
            if rest is None:
                try:
                    raise
                finally:
                    exc.__context__ = caught
            else:
                exc.__context__ = caught
                new_exc = MultiError([exc, rest])
                try:
                    raise new_exc
                finally:
                    new_exc.__context__ = None
                    # IIRC this traceback fixup doesn't actually work b/c
                    # of interpreter details, but maybe we can fix that in 3.8
                    new_exc.__traceback__ = new_exc.__traceback__.tb_next
        else:
            if rest is not None:
                orig_context = rest.__context__
                try:
                    raise rest
                finally:
                    rest.__context__ = orig_context
                    # See above re: traceback fixup
                    rest.__traceback__ = rest.__traceback__.tb_next
    

    Notes:

    As noted in comments, __context__ and __traceback__ handling is super finicky and has subtle bugs. Interpreter help would be very... helpful.

    Notice that all the logic around the logger.exception call is always synchronous and can be factored into a context manager, so we can do something like:

    def catch(type, handler, match=None):
        with _catch_impl(type, match=match) as caught:
            handler(caught)

    async def acatch(type, handler, match=None):
        with _catch_impl(type, match=match) as caught:
            await handler(caught)
    

    Other notes

    Subclassing

    Do we want to support subclassing of MultiError, like class NurseryError(MultiError)? Should we encourage it?

    If so, we need to think about handling subclasses when cloning MultiErrors in .split.

    I think we should not support this, though. Because we don't have, and don't want, a way to distinguish between a MultiError([MultiError([...])]) and a MultiError([NurseryError([...])]) – we preserve the structure, and it carries metadata, but it's still structural. split and catch still only allow you to address the leaf nodes. And that's important, because if we made it easy to match on structure, then people would do things like try to catch a MultiError([MultiError([RuntimeError()])]), when what they should be doing is trying to catch one-or-more-RuntimeErrors. The point of keeping the MultiErrors around instead of collapsing is to push you to handle this case, not continue to hard-code assumptions about there being only a single error.

    Naming

    MultiError isn't bad, but might as well consider other options while we're redoing everything. AggregateError? CombinedError? NurseryError?

    Relevant open issues

    #408, #285, #204, #56, https://github.com/python-trio/pytest-trio/issues/30

    design discussion potential API breaker exception handling 
    opened by njsmith 38
  • Add contextvars support.

    This adds PEP 567 contextvars support to the core of Trio.

    Todo:

    • [x] Tests
    • [x] Deprecate TaskLocal
    • [x] Wait for a backport for 3.5 and 3.6.

    Closes #420 and closes #417, see also: #178.

    opened by Fuyukai 38
  • Should we deprecate trio.Event.clear?

    It occurs to me that I've seen 4 different pieces of code try to use Event.clear lately, and they were all buggy:

    • My first suggestion for solving #591 used an Event that we called clear on, and this created a race condition. (This was using threading.Event, but the principle is the same.) Details: https://github.com/python-trio/trio/pull/596#issuecomment-415270221

    • #619 uses it correctly (I think), but it's only safe because we enforce that only one task can call SignalReceiver.__anext__, which is subtle enough that I originally forgot to enforce it

    • @belm0 tried to use Event to track a value that toggles between true and false, but then found it wasn't appropriate for what he needed after all. (Not exactly Event's fault, but if we didn't have the clear method then I'm sure he'd have realized more quickly that it wasn't what he was looking for.)

    • @HyperionGray's websocket library tries to use Event objects to pass control back and forth between calls to send_message (in the main task) and a background writer task. Here's the writer task:

      https://github.com/HyperionGray/trio-websocket/blob/b787bf1a8a026ef1d9ca995d044bc97d42e7f727/trio_websocket/init.py#L300-L305

      If another task calls send_message while the writer task is blocked in send_all, then the send_message call will set() the event again, and then when send_all completes, it gets unconditionally cleared, so we end up in the invalid state where there is data pending, but self._data_pending is not set.

    Now, maybe this isn't Event.clear's fault, but it makes me wonder :-). (And this is partly coming out of my general reassessment of the APIs we inherited from the stdlib threading module, see also #322 and #573.)

    The core use case for Event is tracking whether an event has happened, and broadcasting that to an arbitrary number of listeners. For this purpose, clear isn't meaningful: once an event has happened, it can't unhappen. And if you stick to this core use case, Event seems very robust and difficult to mis-use.
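
    A minimal sketch of that core use case – a one-shot broadcast where clear is never needed:

    import trio

    async def main():
        started = trio.Event()

        async def waiter(n):
            await started.wait()
            print(f"waiter {n} woke up")

        async with trio.open_nursery() as nursery:
            for n in range(3):
                nursery.start_soon(waiter, n)
            await trio.sleep(1)
            started.set()  # wakes every waiter, now and forever after

    trio.run(main)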

    All of the trouble above came when someone tried to use it for something outside of this core use case. Some of these patterns do make sense:

    • If you have a periodic event, you might want to have the semantics of "wait for the next event". That can be done with an Event, where waiters call await ev.wait() and wakers call ev.set(); ev.clear(). But it can also be done with a Condition or a ParkingLot, or we could have a PeriodicEvent type if it comes up enough... for a dedicated PeriodicEvent it might also make sense to have a close method of some kind to avoid race conditions at shutdown, where tasks call wait after the last event has happened and deadlock.

      • Another option in many cases is to model a periodic event by creating one Event object per period. This is nice because it allows you to have overlapping periods. For example, consider a batching API, where tasks submit requests, and then every once in a while they get gathered up and submitted together. The submitting tasks want to wait until their request has been submitted. One way to do it would be to have an Event for each submission period. When a batch is gathered up for submission, the Event gets replaced, but the old Event doesn't get set until after the submission finishes. Maybe this is a pattern we should be nudging people towards, because it's more general/powerful (see the sketch after this list).
    • The websocket example above could be made correct by moving the clear so that it's right after the wait, and before the call that consumes the data (data = self._wsproto.bytes_to_send()). (It might be more complicated if the consuming call wasn't itself synchronous.) So ev.wait(); ev.clear() can make sense... IF we know there is exactly one task listening. Which is way outside Event's core use case. In this case, it's basically a way of "passing control" from one task to another, which is often a mistake anyway – Python already has a very easy way to express sequential control like this, just execute the two things in the same task :-). Here I think a Lock would be better in any case: https://github.com/HyperionGray/trio-websocket/issues/3
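
    Here's a sketch of that one-Event-per-period batching pattern (Batcher is a hypothetical class, not a trio API):

    import trio

    class Batcher:
        def __init__(self):
            self._pending = []
            self._event = trio.Event()

        async def submit(self, request):
            self._pending.append(request)
            event = self._event  # grab this period's Event
            await event.wait()   # wait until *our* batch has been submitted

        async def flush_loop(self, send_batch):
            while True:
                await trio.sleep(1)
                if not self._pending:
                    continue
                batch, self._pending = self._pending, []
                # Replace the Event first, so new submitters wait on the next
                # period; set the old one only after submission finishes.
                done, self._event = self._event, trio.Event()
                await send_batch(batch)
                done.set()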

    Are there any other use cases where Event.clear is really the right choice? Or where maybe it's not the right choice, but it's currently the best available choice?

    design discussion potential API breaker 
    opened by njsmith 37
  • Subprocess support

    Edit: if you're just coming to this issue, then this comment has a good overview of what there is to do: https://github.com/python-trio/trio/issues/4#issuecomment-398967572


    Original description

    Lots of annoying and fiddly details, but important!

    I think for waiting, the best plan is to just give up on SIGCHLD (seriously, SIGCHLD is the worst) and park a thread in waitpid for each child process. Threads are lighter weight than processes so one thread-per-process shouldn't be a big deal. At least on Linux - if we're feeling ambitious we can do better on kqueue platforms. On Windows, it depends on what the state of our WaitFor{Multiple,Single}Object-fu is.
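
    A sketch of the thread-per-child idea in today's terms (using trio.to_thread, which postdates this issue; an illustration only, not how trio's real subprocess support landed):

    import os
    import trio

    async def wait_for_child(pid):
        # Park a worker thread in a blocking waitpid until the child exits.
        _, status = await trio.to_thread.run_sync(os.waitpid, pid, 0)
        return status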

    design discussion missing piece asyncio feature parity 
    opened by njsmith 37
  • Windows event notification

    Problem

    Windows has 3 incompatible families of event notification APIs: IOCP, select/WSAPoll, and WaitForMultipleEvents-and-variants. They each have unique capabilities. This means: if you want to be able to react to all the different possible events that Windows can signal, then you must use all 3 of these. Needless to say, this creates a challenge for event loop design. There are a number of potentially viable ways to arrange these pieces; the question is which one we should use.

    (Actually, all 3 together still isn't sufficient, b/c there are some things that still require threads – like console IO – and I'm ignoring GUI events entirely because Trio isn't a GUI library. But never mind. Just remember that when someone tells you that Windows' I/O subsystem is great, their statement isn't wrong, but it does require taking a certain narrow perspective...)

    Considerations

    The WaitFor*Event family

    The Event-related APIs are necessary to, for example, wait for a notification that a child process has exited. (The job object API provides a way to request IOCP notifications about process death, but the docs warn that the notifications are lossy and therefore useless...) Otherwise though they're very limited – in particular they have both O(n) behavior and max 64 objects in an interest set – so you definitely don't want to use these as your primary blocking call. We're going to be calling these in a background thread of some kind. The two natural architectures are to use WaitForSingleObject(Ex) and allocate one-thread-per-event, or else use WaitForMultipleObjects(Ex) and try and coalesce up to 64 events into each thread (substantially more complicated to implement but with 64x less memory overhead for thread stacks, if it matters). This is orthogonal to the rest of this issue, so it gets its own thread: #233

    IOCP

    IOCP is the crown jewel of the Windows I/O subsystem, and what you generally hear recommended. It follows a natively asynchronous model where you just go ahead and issue a read or write or whatever, and it runs in the background until eventually the kernel tells you it's done. It provides an O(1) notification mechanism. It's pretty slick. But... it's not as obvious a choice as everyone makes it sound. (Did you know the Chrome team has mostly given up on trying to make it work?)

    Issues:

    • When doing a UDP send, the send is only notified as complete once the packet hits the wire; i.e., using IOCP for UDP totally removes in-kernel buffering/flow-control. So to get decent throughput you must implement your own buffering system allowing multiple UDP sends to be in flight at once (but not too many because you don't want to introduce arbitrary latency). Or you could just use the non-blocking API and the kernel worries about this for you. (This hit Chrome hard; they switched to using non-blocking IO for UDP on Windows. ref1, ref2.)

    • When doing a TCP receive with a large buffer, apparently the kernel does a Nagle-like thing where it tries to hang onto the data for a while before delivering it to the application, thus introducing pointless latency. (This also bit Chrome hard; they switched to using non-blocking IO for TCP receive on Windows. ref1, ref2)

    • Sometimes you really do want to check whether a socket is readable before issuing a read: in particular, apparently outstanding IOCP receive buffers get pinned into kernel memory or some such nonsense, so it's possible to exhaust system resources by trying to listen to a large number of mostly-idle sockets.

    • Sometimes you really do want to check whether a socket is writable before issuing a write: in particular, because it allows adaptive protocols to provide lower latency if they can delay deciding what bytes to write until the last moment.

    • Python provides a complete non-blocking API out-of-the-box, and we use this API on other platforms, so using non-blocking IO on Windows as well is much MUCH simpler for us to implement than IOCP, which requires us to pretty much build our own wrappers from scratch.

    On the other hand, IOCP is the only way to do a number of things like: non-blocking IO to the filesystem, or monitoring the filesystem for changes, or non-blocking IO on named pipes. (Named pipes are popular for talking to subprocesses – though it's also possible to use a socket if you set it up right.)

    select/WSAPoll

    You can also use select/WSAPoll. This is the only documented way to check if a socket is readable/writable. However:

    • As is well known, these are O(n) APIs, which sucks if you have lots of sockets. It's not clear how much it sucks exactly -- just copying the buffer into kernel-space probably isn't a big deal for realistic interest set sizes -- but clearly it's not as nice as O(1). On my laptop, select.select on 3 sets of 512 idle sockets takes <200 microseconds, so I don't think this will, like, immediately kill us. Especially since people mostly don't run big servers on Windows? OTOH an empty epoll on the same laptop returns in ~0.6 microseconds, so there is some difference...

    • select.select is limited to 512 sockets, but this is trivially overcome; the Windows fd_set structure is just an array of SOCKETs + a length field, which you can allocate in any size you like (#3). (This is a nice side-effect of Windows never having had a dense fd space. This also means WSAPoll doesn't have much reason to exist. Unlike other platforms where poll beats select because poll uses an array and select uses a bitmap, WSAPoll is not really any more efficient than select. Its only advantage is that it's similar to how poll works on other platforms... but it's gratuitously incompatible. The one other interesting feature is that you can do an alertable wait with it, which gives a way to cancel it from another thread without using an explicit wakeup socket, via QueueUserAPC.)

    • Non-blocking IO on windows is apparently a bit inefficient because it adds an extra copy. (I guess they don't have zero-copy enqueueing of data to receive buffers? And on send I guess it makes sense that you can do that legitimately zero-copy with IOCP but not with nonblocking, which is nice.) Again I'm not sure how much this matters given that we don't have zero-copy byte buffers in Python to start with, but it's a thing.

    • select only works for sockets; you still need IOCP etc. for responding to other kinds of notifications.

    Options

    Given all of the above, our current design is a hybrid that uses select and non-blocking IO for sockets, with IOCP available when needed. We run select in the main thread, and IOCP in a worker thread, with a wakeup socket to notify when IOCP events occur. This is vastly simpler than doing it the other way around, because you can trivially queue work to an IOCP from any thread, while if you want to modify select's interest set from another thread it's a mess. As an initial design, this makes a lot of sense, because it allows us to provide full features (including e.g. wait_writable for adaptive protocols), avoid the tricky issues that IOCP creates for sockets, and requires a minimum of special code.

    The other attractive option would be if we could solve the issues with IOCP and switch to using it alone – this would be simpler and get rid of the O(n) select. However, as we can see above, there are a whole list of challenges that would need to be overcome first.

    Working around IOCP's limitations

    UDP sends

    I'm not really sure what the best approach here is. One option is just to limit the amount of outstanding UDP data to some fixed threshold (maybe tunable through a "virtual" (i.e. implemented by us) sockopt), and drop packets or return errors if we exceed that. This is clearly solvable in principle, it's just a bit annoying to figure out the details.

    Spurious extra latency in TCP receives

    I think that using the MSG_PUSH_IMMEDIATE flag should solve this.

    Checking readability / writability

    It turns out that IOCP actually can check readability! It's not mentioned on MSDN at all, but there's a well-known bit of folklore about the "zero-byte read". If you issue a zero-byte read, it won't complete until there's data ready to read. ref1 (← official MS docs! also note this is ch. 6 of "NPfMW", referenced below), ref2, ref3.

    That's for SOCK_STREAM sockets. What about SOCK_DGRAM? libuv does zero-byte reads with MSG_PEEK set (to avoid consuming the packet, truncating it to zero bytes in the process). MSDN explicitly says that this doesn't work (MSG_PEEK and overlapped IO supposedly don't work together), but I guess I trust libuv more than MSDN? I don't 100% trust either – this would need to be verified.

    What about writability? Empirically, if you have a non-blocking socket on windows with a full send buffer and you do a zero-byte send, it returns EWOULDBLOCK. (This is weird; other platforms don't do this.) If this behavior also translates to IOCP sends, then this zero-byte send trick would give us a way to use IOCP to check writability on SOCK_STREAM sockets.

    For writability of SOCK_DGRAM I don't think there's any trick, but it's not clear how meaningful SOCK_DGRAM writability is anyway. If we do our own buffering then presumably we can implement it there.

    Alternatively, there is a remarkable piece of undocumented sorcery, where you reach down directly to make syscalls, bypassing the Winsock userland, and apparently can get OVERLAPPED notifications when a socket is readable/writable: ref1, ref2, ref3, ref4, ref5. I guess this is how select is implemented? The problem with this is that it only works if your sockets are implemented directly in the kernel, which is apparently not always the case (because of like... antivirus tools and other horrible things that can interpose themselves into your socket API). So I'm inclined to discount this as unreliable. [Edit: or maybe not, see below]

    Implementing all this junk

    I actually got a ways into this. Then I ripped it out when I realized how many nasty issues there were beyond just typing in long and annoying API calls. But it could easily be resurrected; see 7e7a809c51d05729011506bc9de38cd97a35be44 and its parent.

    TODO

    If we do want to switch to using IOCP in general, then the sequence would go something like:

    • [ ] ~~check whether zero-byte sends give a way to check TCP writability via IOCP – this is probably the biggest determinant of whether going to IOCP-only is even possible (might be worth checking what doing UDP sends with MSG_PARTIAL does too while we're at it)~~
    • [ ] ~~check whether you really can do zero-byte reads on UDP sockets like libuv claims~~
    • [ ] ~~figure out what kind of UDP send buffering strategy makes sense (or if we decide that UDP sends can just drop packets instead of blocking then I guess the non-blocking APIs remain viable even if we can't do wait_socket_writable on UDP sockets)~~

    ~~At this point we'd have the information to decide whether we can/should go ahead. If so, then the plan would look something like:~~

    • [ ] ~~migrate away from select for the cases that can't use IOCP readable/writable checking:~~ [Not necessary, AFD-based select should work for these too]
      • [ ] ~~connect~~
      • [ ] ~~accept~~
    • [ ] ~~implement wait_socket_readable and wait_socket_writable on top of IOCP and get rid of select (but at this point we're still doing non-blocking I/O on sockets, just using IOCP as a select replacement)~~
    • [ ] ~~(optional / someday) switch to using IOCP for everything instead of non-blocking I/O~~

    New plan:

    • [ ] Use the tricks from the thread below to reimplement wait_socket_{readable,writable} using AFD, and confirm it works
    • [ ] Add LSP testing to our Windows CI
    • [ ] Consider whether we want to switch to using IOCP in more cases, e.g. send/recv. Not sure it's worth bothering.
    design discussion todo soon Windows low-level 
    opened by njsmith 30
  • add asynchronous file io and path wrappers

    This is an attempt to implement #20.

    Todo:

    _file_io:

    • [x] make duck-file definition more restrictive https://github.com/python-trio/trio/pull/180#discussion_r121320844

    _path:

    • [x] improve test_path_wraps_path https://github.com/python-trio/trio/pull/180#discussion_r121321347
    • [x] add tests for non-rewrapping forwards https://github.com/python-trio/trio/pull/180#discussion_r121323894
    • [x] add __div__
    • [x] py35 https://github.com/python-trio/trio/pull/180#discussion_r121320712
    • [x] ~improve Path docstring~ I was going to write some glossary entry on asynchronous path object, then I realized there is no upstream concept of path object, only PathLike
    • [x] properties can return new pathlib.Path instances too https://github.com/python-trio/trio/pull/180#discussion_r121582161

    Currently supported usages:

    from io import StringIO

    import trio

    # files
    
    async with await trio.open_file(filename) as f:
        await f.read()
    
    async_string = trio.wrap_file(StringIO('test'))
    assert await async_string.read() == 'test'
    
    # paths
    
    path = trio.Path('foo')
    await path.resolve()
    
    opened by buhman 29
  • Migrate hazmat to faked explicit importing and reexporting, to aid static analysis (issue #542)

    Continuing towards finalisation of #542. We do a try/except import of our symbols to help static analysis tools pick them up, then we update the list dynamically so that Python gets the correct symbols.

    opened by jmfrank63 27
  • Bump types-pyopenssl from 22.1.0.2 to 23.0.0.0

    Bumps types-pyopenssl from 22.1.0.2 to 23.0.0.0.

    dependencies 
    opened by dependabot[bot] 1
  • Bump cryptography from 38.0.4 to 39.0.0

    Bumps cryptography from 38.0.4 to 39.0.0.

    Changelog

    Sourced from cryptography's changelog.

    39.0.0 - 2023-01-01

    
    * **BACKWARDS INCOMPATIBLE:** Support for OpenSSL 1.1.0 has been removed.
      Users on older version of OpenSSL will need to upgrade.
    * **BACKWARDS INCOMPATIBLE:** Dropped support for LibreSSL < 3.5. The new
      minimum LibreSSL version is 3.5.0. Going forward our policy is to support
      versions of LibreSSL that are available in versions of OpenBSD that are
      still receiving security support.
    * **BACKWARDS INCOMPATIBLE:** Removed the ``encode_point`` and
      ``from_encoded_point`` methods on
      :class:`~cryptography.hazmat.primitives.asymmetric.ec.EllipticCurvePublicNumbers`,
      which had been deprecated for several years.
      :meth:`~cryptography.hazmat.primitives.asymmetric.ec.EllipticCurvePublicKey.public_bytes`
      and
      :meth:`~cryptography.hazmat.primitives.asymmetric.ec.EllipticCurvePublicKey.from_encoded_point`
      should be used instead.
    * **BACKWARDS INCOMPATIBLE:** Support for using MD5 or SHA1 in
      :class:`~cryptography.x509.CertificateBuilder`, other X.509 builders, and
      PKCS7 has been removed.
    * **BACKWARDS INCOMPATIBLE:** Dropped support for macOS 10.10 and 10.11, macOS
      users must upgrade to 10.12 or newer.
    * **ANNOUNCEMENT:** The next version of ``cryptography`` (40.0) will change
      the way we link OpenSSL. This will only impact users who build
      ``cryptography`` from source (i.e., not from a ``wheel``), and specify their
      own version of OpenSSL. For those users, the ``CFLAGS``, ``LDFLAGS``,
      ``INCLUDE``, ``LIB``, and ``CRYPTOGRAPHY_SUPPRESS_LINK_FLAGS`` environment
      variables will no longer be respected. Instead, users will need to
      configure their builds `as documented here`_.
    * Added support for
      :ref:`disabling the legacy provider in OpenSSL 3.0.x<legacy-provider>`.
    * Added support for disabling RSA key validation checks when loading RSA
      keys via
      :func:`~cryptography.hazmat.primitives.serialization.load_pem_private_key`,
      :func:`~cryptography.hazmat.primitives.serialization.load_der_private_key`,
      and
      :meth:`~cryptography.hazmat.primitives.asymmetric.rsa.RSAPrivateNumbers.private_key`.
      This speeds up key loading but is :term:`unsafe` if you are loading potentially
      attacker supplied keys.
    * Significantly improved performance for
      :class:`~cryptography.hazmat.primitives.ciphers.aead.ChaCha20Poly1305`
      when repeatedly calling ``encrypt`` or ``decrypt`` with the same key.
    * Added support for creating OCSP requests with precomputed hashes using
      :meth:`~cryptography.x509.ocsp.OCSPRequestBuilder.add_certificate_by_hash`.
    * Added support for loading multiple PEM-encoded X.509 certificates from
      a single input via :func:`~cryptography.x509.load_pem_x509_certificates`.
    

    dependencies 
    opened by dependabot[bot] 1
  • Bump platformdirs from 2.6.0 to 2.6.2

    Bumps platformdirs from 2.6.0 to 2.6.2.

    Release notes

    Sourced from platformdirs's releases.

    2.6.2

    Full Changelog: https://github.com/platformdirs/platformdirs/compare/2.6.1...2.6.2

    2.6.1

    Full Changelog: https://github.com/platformdirs/platformdirs/compare/2.6.0...2.6.1

    Changelog

    Sourced from platformdirs's changelog.

    platformdirs 2.6.2 (2022-12-28)

    • Fix missing typing-extensions dependency.

    platformdirs 2.6.1 (2022-12-28)

    • Add detection of $PREFIX for android.
    dependencies 
    opened by dependabot[bot] 1
  • `fail_after` deadline is set on initialization not context entry

    This is a duplicate of https://github.com/agronholm/anyio/issues/514 — they are following the convention established here in trio.

    When using a fail_after context, the deadline is set at "initialization" time rather than __enter__. This behavior can result in unexpected bugs if the context is declared before it is entered. This is not particularly intuitive for a Python context manager — I'd expect the timer to start when the context is entered. I do not think changing it would be complex, but it could be considered breaking.

    import trio
    
    
    async def main():
        ctx = trio.fail_after(5)
        await trio.sleep(5)
        with ctx:
            for i in range(1, 6):
                print(i)
                await trio.sleep(1)
    
    
    trio.run(main)
    
    ❯ python example.py  
    1
    Traceback (most recent call last):
      File "/opt/homebrew/Caskroom/miniconda/base/envs/orion-dev-39/lib/python3.9/site-packages/trio/_timeouts.py", line 106, in fail_at
        yield scope
      File "/Users/mz/dev/prefect/example.py", line 168, in main
        await trio.sleep(1)
      File "/opt/homebrew/Caskroom/miniconda/base/envs/orion-dev-39/lib/python3.9/site-packages/trio/_timeouts.py", line 76, in sleep
        await sleep_until(trio.current_time() + seconds)
      File "/opt/homebrew/Caskroom/miniconda/base/envs/orion-dev-39/lib/python3.9/site-packages/trio/_timeouts.py", line 57, in sleep_until
        await sleep_forever()
      File "/opt/homebrew/Caskroom/miniconda/base/envs/orion-dev-39/lib/python3.9/site-packages/trio/_timeouts.py", line 40, in sleep_forever
        await trio.lowlevel.wait_task_rescheduled(lambda _: trio.lowlevel.Abort.SUCCEEDED)
      File "/opt/homebrew/Caskroom/miniconda/base/envs/orion-dev-39/lib/python3.9/site-packages/trio/_core/_traps.py", line 166, in wait_task_rescheduled
        return (await _async_yield(WaitTaskRescheduled(abort_func))).unwrap()
      File "/opt/homebrew/Caskroom/miniconda/base/envs/orion-dev-39/lib/python3.9/site-packages/outcome/_impl.py", line 138, in unwrap
        raise captured_error
      File "/opt/homebrew/Caskroom/miniconda/base/envs/orion-dev-39/lib/python3.9/site-packages/trio/_core/_run.py", line 1222, in raise_cancel
        raise Cancelled._create()
    trio.Cancelled: Cancelled
    
    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
      File "/Users/mz/dev/prefect/example.py", line 171, in <module>
        trio.run(main)
      File "/opt/homebrew/Caskroom/miniconda/base/envs/orion-dev-39/lib/python3.9/site-packages/trio/_core/_run.py", line 2010, in run
        raise runner.main_task_outcome.error
      File "/Users/mz/dev/prefect/example.py", line 168, in main
        await trio.sleep(1)
      File "/opt/homebrew/Caskroom/miniconda/base/envs/orion-dev-39/lib/python3.9/contextlib.py", line 137, in __exit__
        self.gen.throw(typ, value, traceback)
      File "/opt/homebrew/Caskroom/miniconda/base/envs/orion-dev-39/lib/python3.9/site-packages/trio/_timeouts.py", line 108, in fail_at
        raise TooSlowError
    trio.TooSlowError
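
    For reference, the behavior can be avoided today by constructing the scope at the point where it's entered (or by using trio.fail_at with a deadline computed at entry time). A sketch:

    import trio

    async def main():
        await trio.sleep(5)
        with trio.fail_after(5):  # constructed and entered together, so the deadline starts here
            for i in range(1, 6):
                print(i)
                await trio.sleep(1)

    trio.run(main)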
    
    opened by madkinsz 15
  • Bump towncrier from 22.8.0 to 22.12.0

    Bumps towncrier from 22.8.0 to 22.12.0.

    Release notes

    Sourced from towncrier's releases.

    Towncrier 22.12.0

    towncrier 22.12.0 (2022-12-21)

    Features

    • Added --keep option to the build command that allows generating a newsfile, but keeps the newsfragments in place. This option can not be used together with --yes. ([#129](https://github.com/twisted/towncrier/issues/129))
    
    • Python 3.11 is now officially supported. ([#427](https://github.com/twisted/towncrier/issues/427))
    
    • You can now create fragments that are not associated with issues. Start the name of the fragment with + (e.g. +anything.feature). The content of these orphan news fragments will be included in the release notes, at the end of the category corresponding to the file extension.
    
      To help quickly create a unique orphan news fragment, towncrier create +.feature will append a random string to the base name of the file, to avoid name collisions. ([#428](https://github.com/twisted/towncrier/issues/428))
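
    As a quick console sketch of the two additions described above, using the commands quoted in these notes (the feature fragment type is just an example):

    ❯ towncrier create +.feature   # orphan fragment; a random suffix is appended to the file name
    ❯ towncrier build --keep       # generate the newsfile but keep the fragments in place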

    Improved Documentation

    • Improved contribution documentation. ([#415](https://github.com/twisted/towncrier/issues/415))
    • Correct a typo in the readme that incorrectly documented custom fragments in a format that does not work. ([#424](https://github.com/twisted/towncrier/issues/424))
    • The documentation has been restructured and (hopefully) improved. ([#435](https://github.com/twisted/towncrier/issues/435))
    • Added a Markdown-based how-to guide. ([#436](https://github.com/twisted/towncrier/issues/436))
    • Defining custom fragments using a TOML array is not deprecated anymore. ([#438](https://github.com/twisted/towncrier/issues/438))

    Deprecations and Removals

    • Default branch for towncrier check is now "origin/main" instead of "origin/master". If "origin/main" does not exist, fallback to "origin/master" with a deprecation warning. ([#400](https://github.com/twisted/towncrier/issues/400))

    22.12.0rc1

    towncrier 22.12.0rc1 (2022-12-20)

    Features

    • Added --keep option to the build command that allows generating a newsfile, but keeps the newsfragments in place. This option can not be used together with --yes. ([#129](https://github.com/twisted/towncrier/issues/129))
    
    • Python 3.11 is now officially supported. ([#427](https://github.com/twisted/towncrier/issues/427))
    
    • You can now create fragments that are not associated with issues. Start the name of the fragment with + (e.g. +anything.feature). The content of these orphan news fragments will be included in the release notes, at the end of the category corresponding to the file extension.
    
      To help quickly create a unique orphan news fragment, towncrier create +.feature will append a random string to the base name of the file, to avoid name collisions. ([#428](https://github.com/twisted/towncrier/issues/428))

    Improved Documentation

    ... (truncated)

    Changelog

    Sourced from towncrier's changelog.

    towncrier 22.12.0 (2022-12-21)

    No changes since the previous release candidate.

    towncrier 22.12.0rc1 (2022-12-20)

    Features

    • Added --keep option to the build command that allows generating a newsfile, but keeps the newsfragments in place. This option can not be used together with --yes. ([#129](https://github.com/twisted/towncrier/issues/129))
    
    • Python 3.11 is now officially supported. ([#427](https://github.com/twisted/towncrier/issues/427))
    
    • You can now create fragments that are not associated with issues. Start the name of the fragment with + (e.g. +anything.feature). The content of these orphan news fragments will be included in the release notes, at the end of the category corresponding to the file extension.
    
      To help quickly create a unique orphan news fragment, towncrier create +.feature will append a random string to the base name of the file, to avoid name collisions. ([#428](https://github.com/twisted/towncrier/issues/428))

    Improved Documentation

    • Improved contribution documentation. ([#415](https://github.com/twisted/towncrier/issues/415))
    • Correct a typo in the readme that incorrectly documented custom fragments in a format that does not work. ([#424](https://github.com/twisted/towncrier/issues/424))
    • The documentation has been restructured and (hopefully) improved. ([#435](https://github.com/twisted/towncrier/issues/435))
    • Added a Markdown-based how-to guide. ([#436](https://github.com/twisted/towncrier/issues/436))
    • Defining custom fragments using a TOML array is not deprecated anymore. ([#438](https://github.com/twisted/towncrier/issues/438))

    Deprecations and Removals

    • Default branch for towncrier check is now "origin/main" instead of "origin/master". If "origin/main" does not exist, fallback to "origin/master" with a deprecation warning. ([#400](https://github.com/twisted/towncrier/issues/400))

    Misc

    • [#406](https://github.com/twisted/towncrier/issues/406), [#408](https://github.com/twisted/towncrier/issues/408), [#411](https://github.com/twisted/towncrier/issues/411), [#412](https://github.com/twisted/towncrier/issues/412), [#413](https://github.com/twisted/towncrier/issues/413), [#414](https://github.com/twisted/towncrier/issues/414), [#416](https://github.com/twisted/towncrier/issues/416), [#418](https://github.com/twisted/towncrier/issues/418), [#419](https://github.com/twisted/towncrier/issues/419), [#421](https://github.com/twisted/towncrier/issues/421), [#429](https://github.com/twisted/towncrier/issues/429), [#430](https://github.com/twisted/towncrier/issues/430), [#431](https://github.com/twisted/towncrier/issues/431), [#434](https://github.com/twisted/towncrier/issues/434), [#446](https://github.com/twisted/towncrier/issues/446), [#447](https://github.com/twisted/towncrier/issues/447)
    Commits
    • b0e201f RST is hard.
    • 2c611be Fix rst format.
    • 76a2007 Fix typo.
    • 62feaf6 Update version for final release.
    • fbc4f1f Quick fix for incremental rc version normalization.
    • 26a5eba Try latest incremental since a test is failing.
    • 7cbec75 Create rc1.
    • 3859e58 [pre-commit.ci] pre-commit autoupdate (#452)
    • 3418975 Revert Generate coverage reports using only GitHub Actions (#455)
    • 24f65a0 Add --keep option to allow to generate newsfile, but keep newsfragmen… (#453)
    • Additional commits viewable in compare view

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    dependencies 
    opened by dependabot[bot] 1
A curated list of awesome Python asyncio frameworks, libraries, software and resources

Awesome asyncio A carefully curated list of awesome Python asyncio frameworks, libraries, software and resources. The Python asyncio module introduced

Timo Furrer 3.8k Jan 8, 2023
A lightweight (serverless) native python parallel processing framework based on simple decorators and call graphs.

A lightweight (serverless) native python parallel processing framework based on simple decorators and call graphs, supporting both control flow and dataflow execution paradigms as well as de-centralized CPU & GPU scheduling.

null 102 Jan 6, 2023
SCOOP (Scalable COncurrent Operations in Python)

SCOOP (Scalable COncurrent Operations in Python) is a distributed task module allowing concurrent parallel programming on various environments, from h

Yannick Hold 573 Dec 27, 2022
A Python package for easy multiprocessing, but faster than multiprocessing

MPIRE, short for MultiProcessing Is Really Easy, is a Python package for multiprocessing, but faster and more user-friendly than the default multiprocessing package.

null 753 Dec 29, 2022
Simple package to enhance Python's concurrent.futures for memory efficiency

future-map is a Python library to use together with the official concurrent.futures module.

Arai Hiroki 2 Nov 15, 2022
A concurrent sync tool which works with multiple sources and targets.

Concurrent Sync A concurrent sync tool which works similar to rsync. It supports syncing given sources with multiple targets concurrently. Requirement

Halit Şimşek 2 Jan 11, 2022
A Python concurrency scheduling library, compatible with asyncio and trio.

aiometer aiometer is a Python 3.6+ concurrency scheduling library compatible with asyncio and trio and inspired by Trimeter. It makes it easier to exe

Florimond Manca 182 Dec 26, 2022
Async (trio) KuCoin minimal REST API + Websocket

Minimal Async KuCoin REST API + WebSocket using trio Coded by π ([email protected] TG: @pipad) 22 January 2022 KuCoin needs an async Python client This cod

Pi 2 Oct 23, 2022
This is a survey of python's async concurrency features by example.

Survey of Python's Async Features This is a survey of python's async concurrency features by example. The purpose of this survey is to demonstrate tha

Tyler Lovely 4 Feb 10, 2022
Async timeit - Async version of python's timeit

Async Timeit Replica of default python timeit module with small changes to allow

Raghava G Dhanya 3 Apr 13, 2022
Coroutine-based concurrency library for Python

gevent Read the documentation online at http://www.gevent.org. Post issues on the bug tracker, discuss and ask open ended questions on the mailing lis

gevent 5.9k Dec 28, 2022
TriOTP, the OTP framework for Python Trio

TriOTP, the OTP framework for Python Trio See documentation for more informations. Introduction This project is a simplified implementation of the Erl

David Delassus 7 Nov 21, 2022
SNV calling pipeline developed explicitly to process individual or trio vcf files obtained from Illumina based pipeline (grch37/grch38).

SNV Pipeline SNV calling pipeline developed explicitly to process individual or trio vcf files obtained from Illumina based pipeline (grch37/grch38).

East Genomics 1 Nov 2, 2021
Trio Assembly Snakemake Workflow

Trio Assembly Snakemake Workflow Input HiFi reads for child in bam format Either

Juniper A. Lake 1 Jan 28, 2022
A SOCKS proxy server implemented with the powerful python cooperative concurrency framework asyncio.

asyncio-socks-server A SOCKS proxy server implemented with the powerful python cooperative concurrency framework asyncio. Features Supports both TCP a

Amaindex 164 Dec 30, 2022
The deployment framework aims to provide a simple, lightweight, fast integrated, pipelined deployment framework that ensures reliability, high concurrency and scalability of services.

savior is a lightweight development framework for quickly integrating algorithm modules and deploying them with high performance. It helps teams validate ideas quickly (PoC) without repeatedly hunting down and re-implementing models from GitHub; it helps break functionality into pipeline stages, making it easy to improve distributed execution efficiency; and it effectively reduces code redundancy and unnecessary overhead.

Tao Luo 125 Dec 22, 2022
An esoteric programming language that supports concurrency, regex, and web requests.

The Hofstadter Esoteric Programming Language Hofstadter's Law: It always takes longer than you expect, even when you take into account Hofstadter's La

Austin Henley 19 Dec 27, 2022
API for concurrency connections

Multi-connection-server-API API for concurrency connections difference between this server and the echo server is the call to lsock.setblocking(False)

Muziwandile Nkomo 1 Jan 4, 2022
Aio-binance-library - Async library for connecting to the Binance API on Python

aio-binance-library Async library for connecting to the Binance API on Python Th

GRinvest 10 Nov 21, 2022
Async-first dependency injection library based on python type hints

Dependency Depression Async-first dependency injection library based on python type hints Quickstart First let's create a class we would be injecting:

Doctor 8 Oct 10, 2022