Core Python libraries ported to MicroPython

Overview

micropython-lib

This is a repository of libraries designed to be useful for writing MicroPython applications.

The libraries here fall into four categories corresponding to the four top-level directories:

  • python-stdlib: Compatible versions of modules from the Python Standard Library. These should be drop-in replacements for the Python libraries, although many have reduced functionality or missing methods or classes (which may not be an issue in most cases).

  • python-ecosys: Compatible, but reduced-functionality versions of modules from the larger Python ecosystem, for example modules that might be found in the Python Package Index (PyPI).

  • micropython: MicroPython-specific modules that do not have equivalents in other Python environments. These are typically hardware drivers or highly-optimised alternative implementations of functionality available in other Python modules.

  • unix-ffi: These modules are specifically for the MicroPython Unix port and provide access to operating-system and third-party libraries via FFI.

Usage

Many libraries are self-contained modules, and you can quickly get started by copying the relevant Python file to your device. For example, to add the base64 library, you can copy python-stdlib/base64/base64.py directly to the lib directory on your device.
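
After copying, the module can be used on the device just like its CPython counterpart — a quick check at the REPL (assuming the port mirrors CPython's base64 API and that /lib is on sys.path, as it is by default on most ports):

    import base64

    print(base64.b64encode(b"hello"))  # b'aGVsbG8='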

Other libraries are packages, in which case you'll need to copy the directory instead. For example, to add collections.defaultdict, copy collections/collections/__init__.py and collections.defaultdict/collections/defaultdict.py to a directory named lib/collections on your device.
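
Once the package directory is in place it can be imported as usual — a minimal sketch, assuming the copied package behaves like CPython's collections.defaultdict:

    from collections import defaultdict

    counts = defaultdict(int)          # missing keys start at int() == 0
    for word in ("spam", "eggs", "spam"):
        counts[word] += 1
    print(counts["spam"])              # 2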

Future plans (and new contributor ideas)

  • Provide compiled .mpy distributions.
  • Develop a set of example programs using these libraries.
  • Develop more MicroPython libraries for common tasks.
  • Provide a replacement for the previous upip tool.

Comments
  • logging: Improve the logging module.

    • Add support for all format specifiers, support for datefmt using strftime, and support for Stream and File handlers.
    • Ports/boards that need to use FileHandlers should enable MICROPY_PY_SYS_ATEXIT, and enable MICROPY_PY_SYS_EXC_INFO if using logging.exception().
    • Uses https://github.com/micropython/micropython-lib/pull/508 if available.
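
    A minimal usage sketch of the improved module, assuming it keeps CPython's logging API (handler and method names as in CPython; the log file name is illustrative):

        import logging

        logging.basicConfig(
            level=logging.INFO,
            format="%(asctime)s %(levelname)s:%(name)s: %(message)s",
            datefmt="%H:%M:%S",                         # datefmt handled via strftime
        )
        log = logging.getLogger("app")
        log.addHandler(logging.FileHandler("app.log"))  # needs MICROPY_PY_SYS_ATEXIT
        try:
            1 / 0
        except ZeroDivisionError:
            log.exception("division failed")            # needs MICROPY_PY_SYS_EXC_INFO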
    opened by iabdalkader 35
  • Pathlib Support

    Here's my first stab at adding most of the common functionality of pathlib.Path. I'd say that 99% of the common use-case support is there. The glob functionality could use some work; currently it only supports a single "*" wildcard. However, this covers the vast majority of common use-cases, and it won't fail silently if unsupported glob patterns are provided.
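
    A short sketch of the intended usage (method names assumed to follow CPython's pathlib.Path; the paths are illustrative):

        from upathlib import Path        # module name is tentative, see below

        data = Path("/data")
        (data / "log.txt").write_text("hello")
        for p in data.glob("*.txt"):     # only a single "*" wildcard is supported
            print(p.name, p.read_text())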

    Currently the module is named upathlib; this was just so that I could get it working with pytest. If someone could help me import it in tests without colliding with the builtin pathlib, that would be greatly appreciated!

    Unit tests require pytest-mock to be installed.

    opened by BrianPugh 30
  • binascii.unhexlify, base64, hashlib.sha224/256/384/512

    My first contribution to micropython-lib:

    • SHA-X ported from PyPy
    • hmac ported from CPython 3.3
    • base64 ported from CPython 3.3
    • unhexlify is custom, as PyPy's version was even slower
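
    A quick smoke test of the ported modules (assuming they keep the CPython/PyPy APIs they were ported from):

        import binascii, base64, hashlib, hmac

        raw = binascii.unhexlify("48656c6c6f")          # b'Hello'
        print(base64.b64encode(raw))                    # b'SGVsbG8='
        print(hashlib.sha256(raw).hexdigest())
        print(hmac.new(b"key", raw, hashlib.sha256).hexdigest())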

    opened by slush0 26
  • [solved] MQTT: c.connect() -> IndexError: bytes index out of range

    On a Raspbian machine (IP = 10.0.0.32) I installed mosquitto; service mosquitto status gives me: [ ok ] mosquitto is running.

    An ESP8266 (FW: esp8266-20161017-v1.8.5.bin) is in the same network:

        >>> sta_if.ifconfig()
        ('10.0.0.105', ...)

    From the Raspbian terminal I can also ping the ESP8266; ping 10.0.0.105 gives me: 64 bytes from 10.0.0.105: icmp_req=1 ttl=255 time=2.62 ms ...

    But I am failing to connect with MQTT:

        >>> from umqtt.simple import MQTTClient
        >>> c = MQTTClient("client", "10.0.0.32")
        >>> c.connect()
        Traceback (most recent call last):
          File "<stdin>", line 1, in <module>
          File "umqtt/simple.py", line 84, in connect
        IndexError: bytes index out of range

    opened by BigMan200 25
  • uasyncio timeout functionality (wait_for) fiasco

    Despite ideas about adding features to uasyncio that are not present in upstream asyncio (e.g. https://github.com/micropython/micropython/issues/2989), it has more mundane problems, like missing support for features which upstream does offer. One such feature is the ability to cancel coroutine execution on timeout (asyncio.wait_for(), https://docs.python.org/3/library/asyncio-task.html#asyncio.wait_for). For uasyncio's original use-case, writing webapps, it's kind of not needed, but of course it's required for generic applications, or even for UDP networking (e.g. implementing a DNS resolver).

    Before continuing, I'd like to remind that uasyncio's goal has always been to implement both runtime- and memory-efficient async scheduler. One of the means to achieve memory efficiency was basing uasyncio solely on the native coroutines, and avoiding intermediate objects which upstream has, like Future or Task.

    So, let's consider how wait_for() can be implemented. First of all, we somehow need to track timeout expiration. Well, there's little choice but to use the standard task queue for that, and actually, that's just the right choice - the task queue is intended to execute tasks at a specified time, and we don't want to invent/use another mechanism to track times specifically for timeouts. So, we'd need to schedule some task to run at the timeout's time. The simplest such task would be a callback which cancels the target coro. And we would need to wrap the original coro, and cancel the timeout callback if the coro finishes earlier. So, depending on what happens first - the timeout or coro completion - it would cancel the other thing.

    So far so good, and while this adds a bunch of overhead, that's apparently the most low-profile way to implement timeout support without ad-hoc features. But there's a problem already - the description above talks about cancelling tasks, but uasyncio doesn't actually support that. The upstream asyncio returns a handle from functions which schedule a task for execution, but uasyncio doesn't. Suppose it did; even then, removing a task from the queue by handle would be inefficient, requiring a scan through the queue, O(n).

    But the problems only start there. What does it really mean to cancel a coroutine? Per the wait_for() description, it raises a TimeoutError exception on timeout, and a natural way to achieve that would be to inject TimeoutError into the coroutine, to give it a chance to handle it, and then let it propagate upwards to wait_for() and its caller. There's a .throw() method on coroutines which does exactly that - injects an exception into a coro - but it doesn't work as required here. From the above, the injection would happen in the timeout callback, and .throw() works by injecting an exception and immediately resuming the coro. So if the timeout callback calls .throw(), the TimeoutError immediately bubbles up out of the callback and the whole application terminates, because it's not handled there.

    What's needed is not calling .throw() on the coroutine right away, but recording the fact that the coroutine should receive TimeoutError and calling .throw() in the future, in the mainloop context. And that "future" really should be "soon" (as the timeout has already expired), so the coro needs to be rescheduled to the top of the queue.

    That "future" work should give a hint - the object which has the needed behavior is exactly called Future (and upstream wraps coros in Task, which is subclass of Future).

    So, uasyncio isn't going to acquire bloaty Future/Task wrappers. The question then is how to emulate that behaviour with pure coroutines. One possible way would be to store the "overriding exception" in the task queue, and .throw() it into the coro (instead of .send()ing a normal value) when the main loop is about to execute it. That means adding a new field to each task queue entry, unused in the majority of cases. Another approach would be to mark the coro itself as "throw something on next run", i.e. move Future functionality into it.
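
    A conceptual sketch of that second approach, using hypothetical loop helpers (call_later_ms, pend_throw, cancel) that are not actual uasyncio API, just to make the control flow concrete:

        class TimeoutError(Exception):
            pass

        def wait_for(coro, timeout_ms, loop):
            # Hypothetical: at timeout, mark the coro so the main loop .throw()s
            # TimeoutError into it on its next run instead of .send()ing a value.
            handle = loop.call_later_ms(
                timeout_ms, lambda: loop.pend_throw(coro, TimeoutError()))
            try:
                return (yield from coro)   # coro finished (or TimeoutError propagated)
            finally:
                loop.cancel(handle)        # drop the timeout callback if still queued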

    And all this only covers cancelling a CPU-bound coroutine; it doesn't cover cancelling I/O-bound coroutines, which aren't even in the task queue, but in the I/O poll queue instead.

    rfc 
    opened by pfalcon 23
  • uasyncio: Finalizing naming of a function to schedule a coroutine after a millisecond delay

    It's convenient to be able to schedule a coroutine to run, but after some delay. Convenient for what? Well, at least for demonstration purposes - you run a coroutine against one LED, then the same coro against another LED, but with a delay, etc. Is it useful for much beyond that? Dunno.

    Normally, this would be done by creating another (perhaps lambda) coro which would delay and then await the original coro. Memory hog. So, how it's done currently is that there's a call_later_ms_() method. It comes from the upstream call_later() method, but, as the name makes clear, takes the delay in ms. The underscore at the end means the naming is tentative. Also, in upstream, call_later() applies only to callbacks, but due to the way uasyncio scheduling works, here it can apply to both callbacks and coros.
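
    For illustration, the demonstration use-case above would look roughly like this (blink() and the led objects are placeholders, and exact uasyncio names may differ between versions):

        import uasyncio

        async def blink(led):
            while True:
                led.value(not led.value())
                await uasyncio.sleep_ms(100)

        loop = uasyncio.get_event_loop()
        loop.create_task(blink(led1))
        loop.call_later_ms_(500, blink(led2))   # same coro, delayed by 500 ms
        loop.run_forever()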

    So, time to decide whether we're ok with (ab)using that function for this functionality (incompatible with upstream) and dropping the trailing underscore, or whether to invent something else. I thought about adding an optional arg to upstream's create_task(), but that's an even bigger abuse.

    @dpgeorge : Thoughts? (Now that I wrote everything down, sticking with call_later_ms() seems a natural choice).

    @peterhinch: Comments are welcome too.

    prio-high 
    opened by pfalcon 23
  • top: Reorganise micropython-lib

    One long-standing point of confusion around this repo is what it's actually for. The answer is that it's for making the Unix port more like CPython, but that is not at all clear without really diving into the repo.

    This PR aims to re-think this repo as a common collection of useful modules, with the unix/ffi bits out of the spotlight. I'd eventually like to see this become a centralised place for MicroPython users (on all ports) to find drivers and useful libraries, and possibly a staging ground for candidate libraries for freezing into some firmware builds.

    So this PR:

    • Removes any empty placeholder libraries (they're not helpful, and it's not clear that they're empty without some work)
    • Removes any libraries that are no longer useful (e.g. asyncio_slow)
    • Moves all libraries that require the Unix port into the unix-ffi directory.
    • Separates MicroPython-specific and CPython-compatibility libraries into the micropython and cpython-stdlib directories.
    • Adds new READMEs.

    I'd like to volunteer for a sort of curator/maintainer role, especially with the goal of fostering development of libraries that automate or simplify common tasks on MicroPython. Here's a random example off the top of my head: if I'd like to build a wifi-enabled widget, I'd love to come here and find a library for each of the tasks like captive portal, ap-setup-mode, save-credentials, etc. Similarly, some high-level wrappers around common tasks with ubluetooth. Or pin debouncing, or drivers, etc.

    The big outstanding TODO is what to do about upip / pypi / etc. Before I tackle that I'd like to see what people think of this approach.

    The other option is to just deprecate this repo (i.e. re-revert the "this repo is unmaintained" commit) -- let libraries be developed and distributed independently and perhaps consider only bringing really core useful libraries into the main micropython repo. I'd prefer to see a centralised repo, I think that's good for the community and good for discoverability.

    opened by jimmo 20
  • uasyncio: core.py frozen results in OverflowError: overflow converting long int to machine word

    I just created the following files within the modules folder:

        uasyncio/__init__.py
        uasyncio/core.py
        uasyncio/queues.py
        collections/deque.py

    When I then run my code I get the following error:

        Traceback (most recent call last):
          File "main.py", line 226, in <module>
          File "main1.py", line 92, in main
          File "uasyncio/core.py", line 133, in run_forever
          File "uasyncio/core.py", line 38, in call_later_ms_
        OverflowError: overflow converting long int to machine word

    This is the function:

        def call_later_ms_(self, delay, callback, args=()):
            self.call_at_(time.ticks_add(self.time(), delay), callback, args)
    
    opened by riegaz 19
  • Fix issue in umqtt when MQTT CONNECT packet is greater than 127 bytes

    using logic from http://docs.oasis-open.org/mqtt/mqtt/v3.1.1/os/mqtt-v3.1.1-os.html#_Toc385349213
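
    For reference, the MQTT "Remaining Length" field is a variable-length encoding: 7 bits of length per byte, with the high bit acting as a continuation flag, so values above 127 need additional bytes. A standalone sketch of that logic (not the exact code in this PR):

        def encode_remaining_length(n):
            out = bytearray()
            while True:
                byte = n % 128
                n //= 128
                if n > 0:
                    byte |= 0x80        # more length bytes follow
                out.append(byte)
                if n == 0:
                    return bytes(out)

        print(encode_remaining_length(127))   # b'\x7f'
        print(encode_remaining_length(321))   # b'\xc1\x02'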

    Please see sample code which works (with this code fix) connecting to Azure IoT:

    from umqtt.simple import MQTTClient
    import utime
    import ubinascii
    import uhashlib
    import network
    
    try:
        import usocket as socket
    except:
        import socket
    try:
        import ustruct as struct
    except:
        import struct
    
    # (date(2000, 1, 1) - date(1900, 1, 1)).days * 24*60*60
    NTP_DELTA = 3155673600 
    UTIME_DELTA = 946684800
    
    host = "pool.ntp.org"
    wifi_ssid = "<ssid>"
    wifi_password = "<password>"
    deviceId = '<Azure IoT device Id>'
    url = '<Azure IoT Hub Hostname>.azure-devices.net'
    deviceKey = '<Azure IoT device key>'
    
    sha256_blocksize = 64
    
    def time():
        NTP_QUERY = bytearray(48)
        NTP_QUERY[0] = 0x1b
        addr = socket.getaddrinfo(host, 123)[0][-1]
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.settimeout(1)
        res = s.sendto(NTP_QUERY, addr)
        msg = s.recv(48)
        s.close()
        val = struct.unpack("!I", msg[40:44])[0]
        return val - NTP_DELTA
    
    # There's currently no timezone support in MicroPython, so
    # utime.localtime() will return UTC time (as if it was .gmtime())
    def settime():
        t = time()
        import machine
        import utime
        tm = utime.localtime(t)
        tm = tm[0:3] + (0,) + tm[3:6] + (0,)
        machine.RTC().datetime(tm)
        #print(utime.time())
        print(utime.localtime())
    
    
    def strxor(a, b):
    	c = bytearray(len(a))
    	for i in range(len(a)):
    		c[i] = a[i] ^ b
    	return c
    	
    def hmac_sha256(key, message):
    	if (len(key) > sha256_blocksize):
    		key = uhashlib.sha256(key).digest() # keys longer than blocksize are hashed down to the digest
    	if (len(key) < sha256_blocksize):
    		# keys shorter than blocksize are zero-padded (where ∥ is concatenation)
    		key = key + (b'\x00' * (sha256_blocksize - len(key))) # Where * is repetition.
    	
    	o_key_pad = strxor(key, 0x5C) # Where blocksize is that of the underlying hash function
    	i_key_pad = strxor(key, 0x36) # Where ⊕ is exclusive or (XOR)
    	
    	hasher = uhashlib.sha256(i_key_pad)
    	hasher.update(message)
    	internalhash = hasher.digest()
    	
    	hasher2 = uhashlib.sha256(o_key_pad)
    	hasher2.update(internalhash)
    
    	return hasher2.digest()
    	
    # sas generator from https://github.com/bechynsky/AzureIoTDeviceClientPY/blob/master/DeviceClient.py
    def generate_sas_token(uri, deviceid, key, ttl):
    	urlToSign = uri.replace("\'","%27").replace("/","%2F").replace("=","%3D").replace("+","%2B")
    	sign_key = "%s\n%d" % (urlToSign, int(ttl))
    	
    	h = hmac_sha256(ubinascii.a2b_base64(key), message = "{0}\n{1}".format(urlToSign, ttl).encode('utf-8'))
    	signature=ubinascii.b2a_base64(h).decode("utf-8").replace("\'","%27").replace("/","%2F").replace("=","%3D").replace("+","%2B").replace("\n","")
    	
    	return_sas_token =  "SharedAccessSignature sr={0}&sig={1}&se={2}".format(urlToSign, signature, ttl)
    	return return_sas_token
    	
    # Test reception e.g. with:
    # mosquitto_sub -t foo_topic
    
    def do_connect():
        import network
        sta_if = network.WLAN(network.STA_IF)
        if not sta_if.isconnected():
            print('connecting to network...')
            sta_if.active(True)
            sta_if.connect(wifi_ssid, wifi_password)
            while not sta_if.isconnected():
                pass
        print('network config:', sta_if.ifconfig())
    
    def main():
        do_connect() 
        sta_if = network.WLAN(network.STA_IF)
        while not sta_if.isconnected():
            utime.sleep(1)
        # ensure that epoch is correct
        settime()
        
        ttl = int(utime.time()) + 36000 + UTIME_DELTA
        p = generate_sas_token(url,deviceId,deviceKey,ttl)
        c = MQTTClient(client_id=deviceId, server=url, port=8883, ssl=True, keepalive=10000, user=(url + '/' + deviceId), password=generate_sas_token(url,deviceId,deviceKey,ttl))
        c.connect()
        c.publish(b"foo_topic", b"hello")
        c.disconnect()
    
    if __name__ == "__main__":
        main()
    
    
    opened by dmascord 19
  • argparse: reproduce CPython's behaviour more closely

    • Make argparse compile with CPython.
    • Make the test work with CPython.
    • Make description a kwarg for ArgumentParser().
    • Make add_argument() accept multiple option names (i.e. long and short options).
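
    The kind of CPython-style usage this is aiming for (illustrative only):

        import argparse

        parser = argparse.ArgumentParser(description="demo tool")
        parser.add_argument("-o", "--output", default="out.txt")
        parser.add_argument("-v", "--verbose", action="store_true")
        args = parser.parse_args(["-v", "-o", "result.txt"])
        print(args.output, args.verbose)    # result.txt True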

    opened by dxxb 18
  • python-stdlib/unittest: Move back from unix-ffi.

    This was mistakenly moved to unix-ffi because it was marked as having a dependency on unix-ffi/argparse, but this is actually only a dependency for the "unitest_discover.py" script.

    opened by jimmo 16
  • Added duration_ms to example

    I've added duration_ms to the example scanning code, since it's not an optional parameter. I've also added some more aggressive parameters so that it has a better chance of working for someone just trying it for the first time, like myself :) And I've removed the if statement for the name, since removing it does not cause runtime problems and allows more data to be shown. Hope it's useful!
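
    Assuming this refers to the aioble scanning example, the patched call would look roughly like this (the interval/window values are just the "more aggressive" kind, not taken from the PR):

        import uasyncio as asyncio
        import aioble

        async def main():
            # duration_ms is the required first argument
            async with aioble.scan(5000, interval_us=30000, window_us=30000, active=True) as scanner:
                async for result in scanner:
                    print(result, result.name())

        asyncio.run(main())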

    opened by ivan-galinskiy 0
  • urequests support for user-provided http versions

    About

    Hi there! :) This pull request implements a simple update to urequests's request function. It adds an argument that allows the HTTP version written to the socket to be modified.
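
    A hypothetical illustration (the keyword name below is assumed, since it isn't given above):

        import urequests

        # assumed keyword: make the request line use HTTP/1.0 instead of 1.1
        r = urequests.post("http://192.168.1.10:8000/items",
                           json={"id": 1}, version="1.0")
        print(r.status_code)
        r.close()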

    Why this was done?

    I made this change in my own fork because I had put together an API with FastAPI, and it did not accept my microcontroller's HTTP 1.0 POST requests.

    Please let me know if there is a reason the HTTP 1.1 version remains in the code, despite the possible need for it to be HTTP 1.0. I am new to learning about HTTP and related topics :)

    Cheers!

    opened by coldenate 0
  • Add "Content-Type: application/octet-stream" to header

    This is for when sending binary data.

    example usage on my ESP32-CAM module:

    # capture and upload
    import camera
    import urequests
    
    camera.init(0, format=camera.JPEG, fb_location=camera.PSRAM)
    buf = camera.capture()
    response = urequests.post("https://.............", data=buf)
    print(response.text)
    response.close()
    
    opened by trungdq88 0
  • umqtt.simple and umqtt.robust improvements

    • Keep track of MQTT ping responses (PINGRESP) via the MQTTClient.pingresp_pending attribute.
    • An attempt to send a PINGREQ (by calling ping()) when we haven't yet received a PINGRESP for the previous one will raise an exception.
    • MQTTClient.ping is overridden in umqtt.robust (which subclasses it) to make use of the pingresp_pending attribute; a reconnect is performed on the 'No response for previous ping' exception.
    • MQTTClient.connect() will always try to close the previous socket (if any) first; otherwise the reconnect() function from umqtt.robust might fail when trying to re-instantiate the socket with OSError 23.
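
    A usage sketch based on the points above (broker address and client id are placeholders):

        from umqtt.robust import MQTTClient

        c = MQTTClient("sensor-1", "10.0.0.32", keepalive=30)
        c.connect()
        if not c.pingresp_pending:   # attribute added by this PR
            c.ping()                 # would raise if a PINGRESP were still outstanding
        c.check_msg()
        c.disconnect()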
    opened by zbig-t 0