While there are ideas about adding features to uasyncio which aren't present in upstream asyncio (e.g. https://github.com/micropython/micropython/issues/2989), it has more mundane problems: missing support for features which upstream does offer. One such feature is the ability to cancel coroutine execution on timeout (asyncio.wait_for(), https://docs.python.org/3/library/asyncio-task.html#asyncio.wait_for). For uasyncio's original usecase, writing webapps, it's largely unneeded, but it's certainly required for generic applications, or even for UDP networking (e.g. implementing a DNS resolver).
Before continuing, I'd like to remind readers that uasyncio's goal has always been to implement an async scheduler which is both runtime- and memory-efficient. One of the means to achieve memory efficiency was basing uasyncio solely on native coroutines, avoiding the intermediate objects which upstream has, like Future or Task.
So, let's consider how wait_for() can be implemented. First of all, we somehow need to track timeout expiration. There's little choice but to use the standard task queue for that, and actually, that's just the right choice - the task queue is intended to execute tasks at a specified time, and we don't want to invent/use another mechanism to track time specifically for timeouts. So, we'd need to schedule some task to run at the timeout's deadline. The simplest such task would be a callback which cancels the target coro. And we'd need to wrap the original coro, and cancel the timeout callback if the coro finishes earlier. So, depending on what happens first - the timeout or coro completion - it would cancel the other thing, as sketched below.
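Here's a rough sketch of that structure. This is NOT actual uasyncio code: call_later(), cancel_task() and handle.cancel() are hypothetical primitives here (and, as discussed next, uasyncio doesn't actually provide anything like them):

    def wait_for(coro, timeout):
        def timeout_cb():
            # Fires if the deadline comes first: cancel the wrapped coro.
            cancel_task(coro)

        # Hypothetical: schedule the cancellation callback at the deadline,
        # getting back a handle to it.
        handle = call_later(timeout, timeout_cb)
        try:
            res = yield from coro       # run the wrapped coro to completion
        finally:
            # The coro finished (or got cancelled) first: the pending
            # timeout callback must itself be cancelled, or it would
            # still fire later.
            handle.cancel()
        return res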
So far so good, and while this adds a bunch of overhead, it's apparently the most low-profile way to implement timeout support without adhoc features. But there's a problem already: the description above talks about cancelling tasks, and uasyncio doesn't actually support that. Upstream asyncio returns a handle from functions which schedule a task for execution, but uasyncio doesn't. Suppose it did - then the operation of removing a task from the queue by handle would still be inefficient, requiring a scan through the queue, i.e. O(n).
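To illustrate why it's O(n): with a plain queue there's no index from a handle to a queue position, so removal means a linear scan. A sketch, assuming a simple list of (time, coro) entries:

    def remove_task(queue, coro):
        # Linear scan - only way to find an arbitrary entry in a plain queue.
        for i in range(len(queue)):
            if queue[i][1] is coro:
                del queue[i]
                return True
        return False  # already ran, or was never queued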
But the problems only start there. What does it really mean to cancel a coroutine? Per the wait_for() description, it raises a TimeoutError exception on timeout, and a natural way to achieve that would be to inject TimeoutError into the coroutine, to give it a chance to handle it, and then let it propagate upwards to wait_for() and its caller. There's a .throw() method on coroutines which does exactly that - injects an exception into a coro - but it doesn't work as required here. Per the scheme above, the injection would happen in the timeout callback. And .throw() works by injecting an exception and immediately resuming the coro. So if the timeout callback calls .throw(), the TimeoutError immediately bubbles back up into the callback itself, and the whole application terminates, because nothing there handles it.
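This is easy to check with a generator-based coro (the kind uasyncio is built on):

    def coro():
        while True:
            yield  # pretend to do some work, yielding back to the scheduler

    c = coro()
    c.send(None)  # advance to the first yield

    # If a timeout callback did this, TimeoutError would propagate right
    # here, out of .throw(), and terminate the application. The try/except
    # below stands in for "nobody handles it":
    try:
        c.throw(TimeoutError)
    except TimeoutError:
        print("exception bubbled up to .throw()'s caller, not to any scheduler")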
What's needed is not calling .throw() on the coroutine right away, but recording the fact that the coroutine should receive TimeoutError, and calling .throw() in the future, in the mainloop context. And that "future" really should be "soon" (as the timeout has already expired), so the coro needs to be rescheduled to the top of the queue.
That "future" work should give a hint - the object which has the needed behavior is exactly called Future (and upstream wraps coros in Task, which is subclass of Future).
But as said above, uasyncio isn't going to acquire bloaty Future/Task wrappers. So the question is how to emulate that behavior with pure coroutines. One possible way would be to store the "overriding exception" in the task queue entry, and .throw() it into the coro (instead of .send()ing it a normal value) when the main loop is about to execute it. That means adding a new field to each task queue entry, unused in the majority of cases. Another approach would be to mark the coro itself as "throw something on next run", i.e. move the Future functionality into the coroutine; a sketch of the first approach follows.
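A minimal sketch of the first approach, again not actual uasyncio code: a simple FIFO run queue where each entry carries an optional exception to deliver (the real queue is time-ordered):

    runq = []  # entries: [coro, exception_to_throw_or_None]

    def schedule(coro, exc=None):
        runq.append([coro, exc])

    def run():
        while runq:
            coro, exc = runq.pop(0)
            try:
                if exc is not None:
                    # Deliver the recorded exception in mainloop context,
                    # instead of resuming the coro with a normal value.
                    coro.throw(exc)
                else:
                    coro.send(None)
            except StopIteration:
                continue  # coro finished normally
            except TimeoutError:
                # Coro didn't handle the cancellation; in real wait_for()
                # this would propagate to its caller, here we just drop it.
                continue
            schedule(coro)  # coro yielded; run it again later

With this, the timeout callback doesn't .throw() anything itself - it just re-queues the target coro with schedule(coro, TimeoutError()), and the exception is delivered on the coro's next scheduled run.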
And all this only covers cancelling a CPU-bound coroutine; it doesn't touch cancelling I/O-bound coroutines, which aren't even in the task queue, but in the I/O poll queue instead.
rfc