Ready-to-run Docker images containing Jupyter applications

Overview


Jupyter Docker Stacks

Jupyter Docker Stacks are a set of ready-to-run Docker images containing Jupyter applications and interactive computing tools.

Maintainer Help Wanted

We value all positive contributions to the Docker stacks project, from bug reports to pull requests to translations to helping answer questions. We'd also like to invite members of the community to help with two maintainer activities:

  • Issue triage: Reading and providing a first response to issues, labeling issues appropriately, redirecting cross-project questions to Jupyter Discourse
  • Pull request reviews: Reading proposed documentation and code changes, working with the submitter to improve the contribution, deciding if the contribution should take another form (e.g., a recipe instead of a permanent change to the images)

Anyone in the community can jump in and help with these activities at any time. We will happily grant additional permissions (e.g., the ability to merge PRs) to anyone who shows an ongoing interest in working on the project.

Jupyter Notebook Deprecation Notice

Following the Jupyter Notebook deprecation notice, we encourage users to transition to JupyterLab. This can be done by passing the environment variable JUPYTER_ENABLE_LAB=yes at container startup; more information is available in the documentation.
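For example, a minimal invocation that starts JupyterLab instead of the classic Notebook (the image and host port here are illustrative; any of the stack images works the same way):

```shell
# Start JupyterLab instead of the classic Notebook interface.
# jupyter/base-notebook and port 8888 are illustrative choices.
docker run -it --rm -p 8888:8888 -e JUPYTER_ENABLE_LAB=yes jupyter/base-notebook
```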

At some point, JupyterLab will become the default for all of the Jupyter Docker stack images; however, a new environment variable will be introduced to switch back to Jupyter Notebook if needed.

After the change of default, and depending on the Jupyter Notebook project status and its compatibility with JupyterLab, these Docker images may remove the classic Jupyter Notebook interface altogether in favor of another classic-like UI built atop JupyterLab.

This change is tracked in issue #1217; please check it for more information.

Quick Start

You can try a relatively recent build of the jupyter/base-notebook image on mybinder.org. The image used in binder was last updated on 19 Jan 2021. Otherwise, the examples below may help you get started if you have Docker installed, know which Docker image you want to use, and want to launch a single Jupyter Notebook server in a container.

The User Guide on ReadTheDocs describes additional uses and features in detail.

Example 1: This command pulls the jupyter/scipy-notebook image tagged 17aba6048f44 from Docker Hub if it is not already present on the local host. It then starts a container running a Jupyter Notebook server and exposes the server on host port 8888. The server logs appear in the terminal. Visiting http://<hostname>:8888/?token=<token> in a browser loads the Jupyter Notebook dashboard page, where <hostname> is the name of the computer running Docker and <token> is the secret token printed in the console. The container remains intact for restart after the notebook server exits.

docker run -p 8888:8888 jupyter/scipy-notebook:17aba6048f44

Example 2: This command performs the same operations as Example 1, but it exposes the server on host port 10000 instead of port 8888. Visiting http://<hostname>:10000/?token=<token> in a browser loads JupyterLab, where <hostname> is the name of the computer running Docker and <token> is the secret token printed in the console.

docker run -p 10000:8888 jupyter/scipy-notebook:17aba6048f44

Example 3: This command pulls the jupyter/datascience-notebook image tagged 9b06df75e445 from Docker Hub if it is not already present on the local host. It then starts an ephemeral container running a Jupyter Notebook server and exposes the server on host port 10000. The command mounts the current working directory on the host as /home/jovyan/work in the container. The server logs appear in the terminal. Visiting http://<hostname>:10000/?token=<token> in a browser loads JupyterLab, where <hostname> is the name of the computer running Docker and <token> is the secret token printed in the console. Docker destroys the container after the notebook server exits, but any files written to ~/work in the container remain intact on the host.

docker run --rm -p 10000:8888 -e JUPYTER_ENABLE_LAB=yes -v "$PWD":/home/jovyan/work jupyter/datascience-notebook:9b06df75e445

Contributing

Please see the Contributor Guide on ReadTheDocs for information about how to contribute package updates, recipes, features, tests, and community maintained stacks.

Alternatives

Resources

Comments
  • Use micromamba to bootstrap mamba

    Use micromamba to bootstrap mamba

    If Micromamba is as mature as I think, then there is no longer any need to use Miniforge.

    In case there are significant bugs in Micromamba, then maybe we find them here.

    :eyes:

    opened by maresb 66
  • unable to plot in jupyter notebook using R kernel

    unable to plot in jupyter notebook using R kernel

    I am using default configuration of jupyter/datascience-notebook.

    By accessing http://docker.local:8888/, I could start a notebook with R kernel, but I could not generate plots inside the notebook.

    The simple official example:

    require(stats) # for lowess, rpois, rnorm
    plot(cars)
    lines(lowess(cars))
    

    (screenshot attached in the original issue)

    Issue #54 might be relevant, but I am using the most up-to-date image (20160519).

    type:Bug tag:Upstream 
    opened by fyears 53
  • Implement build system using self-hosted aarch64 runners, GitHub `needs` jobs feature and reusable workflows

    Implement build system using self-hosted aarch64 runners, GitHub `needs` jobs feature and reusable workflows

    I've decided to implement our build system from scratch.

    Ideas that came to mind:

    • each job in a workflow does only one essential task, for only one platform
    • so we will have lots of jobs, like build-x86-minimal-notebook, but we can easily express dependencies between them
    • share everything as GitHub artifacts between these jobs
    • amd64 and aarch64 are almost independent
    • do not even try to merge tags between amd64 and aarch64; let's add the aarch64- prefix for all arm builds

    Implementation details:

    • I rely heavily on reusable workflows. This way there is almost zero code duplication
    • The duplication is only needed in the main docker.yml workflow. This file can be seen as a simple config file of build dependencies
    • No build system is actually needed because we will be relying on the needs feature of GitHub
    • Heavily rely on well-made GitHub workflows actions/upload-artifact and actions/download-artifact to pass the image between jobs

    TODO:

    • [x] aarch64: self-hosted runners
    • [x] self-hosted runners: easy setup docs
    • [x] adapt taggers to support aarch64 prefix
    • [x] adapt manifest creation for the new build system
    • [x] delete buildx Makefile parts (or even the Makefile itself)
    • [x] add all images to the main workflow
    • [x] fix documentation where it mentions aarch64 or make commands
    • [x] test that only fresh images are used: add an empty file to the image and check that it persists in every image built

    Fix: https://github.com/jupyter/docker-stacks/issues/1407 Fix: https://github.com/jupyter/docker-stacks/issues/1530 Fix: https://github.com/jupyter/docker-stacks/issues/1402 Fix: https://github.com/jupyter/docker-stacks/issues/1401 Fix: https://github.com/jupyter/docker-stacks/issues/1203 Supersedes: https://github.com/jupyter/docker-stacks/pull/1631 Fix is no longer needed for: https://github.com/jupyter/docker-stacks/issues/1539

    Upsides:

    • (all the issues above)
    • no need to use the Makefile
    • no sudo rm -rf to maximise space, because we will have a small number of images in the local cache (hopefully)
    • the summary of jobs shows a nice dependency graph
    • aarch64 images are first-class now; there are no special hacks for building them (except for native runners and sharing state in the VM)
    • adding new platforms is theoretically possible now, but we would have to have native runners for them as well (in practice, do not expect new platforms)
    • no need for a separate amd64 workflow
    • easily adaptable to GitHub-native aarch64 runners when they become available
    • no need to manually create sections to make GitHub steps look good; I think it will be easier to jump right to the error if one occurs
    • removal of the QEMU aarch64 bug workaround (no longer needed), which will allow faster mamba execution on multi-CPU machines

    Downsides:

    • Actions on self-hosted runners run in a VM environment: this means worse security and some work to clean up the environment.
    • in theory the builds should be extremely fast, because everything that can run in parallel will, and the overhead of using a clean environment is small; but docker save/load and uploading/downloading artifacts are quite slow
    • maintenance of self-hosted GitHub runners
    • we will have an aarch64- prefix for every arm tag
    • the local experience will be a bit different from the remote one; I don't want to support docker buildx and multi-platform builds in the Makefile
    • adding a new core image will require more lines of changes. But since we haven't added new images for many years, this is not a problem.
    type:Maintenance 
    opened by mathbunnyru 43
  • Jupyter Lab Has (Almost) Blank Login Screen

    Jupyter Lab Has (Almost) Blank Login Screen

    What docker image are you using?

    Example: jupyter/datascience-notebook

    What complete docker command do you run to launch the container (omitting sensitive values)?

    version: "3"
    services:
      jupyterlab:
        image: jupyter/datascience-notebook
        restart: always
        ports:
          - 8888:8888
        volumes:
          - ./jupyter-data:/home/jovyan
          - ./notebooks:/home/jovyan/notebooks
        environment:
          - NB_UID=1000
          - NB_GUID=1000
          - JUPYTER_ENABLE_LAB=yes
          - RESTARTABLE=yes
    

    What steps do you take once the container is running to reproduce the issue?

    Example:

    1. Visit https://jupyter.domain.com

    What do you expect to happen?

    Expect to see a login screen, as usual.

    What actually happens?

    This rarely works, for no apparent reason; it usually results in a nearly blank screen with a message and textbox to "Enter password or token:". My normal password does not work, and I cannot do anything else. Once I remove the JUPYTER_ENABLE_LAB variable, it goes back to being a classic notebook, which does work. The same happens through an SSH tunnel, so I do not believe it has anything to do with my reverse proxy setup.

    opened by TheCatster 41
  • Use Travis CI

    Use Travis CI

    Related: https://github.com/jupyter/docker-stacks/issues/7

    This doesn't include releases or anything else at this point. All this does is try to use Travis CI to do the builds. It also makes sure all images are refreshed so that, hopefully, Dockerfiles that were left unchanged and have no changed dependencies won't be rebuilt.

    opened by jakirkham 41
  • Build is stuck for jupyter/scipy-notebook arm64 image

    Build is stuck for jupyter/scipy-notebook arm64 image

    There is a timeout and it happened several times already: https://github.com/jupyter/docker-stacks/runs/4292319380?check_suite_focus=true

    I think the reason might be in QEMU :(

    type:Bug 
    opened by mathbunnyru 37
  • Spark hanging on docker-machine 0.5.1

    Spark hanging on docker-machine 0.5.1

    I'm having some trouble with both pyspark-notebook and all-spark-notebook on version 0.5.1 of docker-machine. They work fine for me on an earlier version, 0.4.1, and that's the only system difference.

    When I run the test code to check that it's all up and running:

    import pyspark
    sc = pyspark.SparkContext('local[*]')
    

    It hangs on trying to load the context. This also eventually brings down the whole machine.

    Anyone else seen this on 0.5.1?

    opened by dougmet 37
  • Add nvidia based notebooks to stack

    Add nvidia based notebooks to stack

    This PR adds NVIDIA-based images to the stack that allow TensorFlow to use GPUs. It uses Ubuntu 18.04 due to an issue with installing libcudnn8, libnvinfer, and libnvinfer-plugin on Focal. I based the apt packages on the official TF Dockerfile. The code below might still need to be added to the tensorflow Dockerfile, as it is missing compared to the official TensorFlow Dockerfile, but I haven't run into issues without it while using PyTorch in the resulting TensorFlow notebook.

    # Install TensorRT if not building for PowerPC
    RUN [[ "${ARCH}" = "ppc64le" ]] || { apt-get update && \
            apt-get install -y --no-install-recommends libnvinfer${LIBNVINFER_MAJOR_VERSION}=${LIBNVINFER}+cuda${CUDA} \
            libnvinfer-plugin${LIBNVINFER_MAJOR_VERSION}=${LIBNVINFER}+cuda${CUDA} \
            && apt-get clean \
            && rm -rf /var/lib/apt/lists/*; }
    
    # For CUDA profiling, TensorFlow requires CUPTI.
    ENV LD_LIBRARY_PATH /usr/local/cuda/extras/CUPTI/lib64:/usr/local/cuda/lib64:$LD_LIBRARY_PATH
    
    # Link the libcuda stub to the location where tensorflow is searching for it and reconfigure
    # dynamic linker run-time bindings
    RUN ln -s /usr/local/cuda/lib64/stubs/libcuda.so /usr/local/cuda/lib64/stubs/libcuda.so.1 \
        && echo "/usr/local/cuda/lib64/stubs" > /etc/ld.so.conf.d/z-cuda-stubs.conf \
        && ldconfig
    
    tag:Community Stack 
    opened by DavidSpek 34
  • just installed the docker image-requesting sudo password for jovyan

    just installed the docker image-requesting sudo password for jovyan

    Hi, I had just installed the jupyter/datascience-notebook image.

    Command: docker pull jupyter/datascience-notebook

    When I executed the commands docker run -d -p 8888:8888 -v /home/Paula/dockerNotebooks:/home/ds/notebooks and docker exec -it containerName /bin/bash

    I get 'jovyan' instead of 'ds'. When I execute a sudo command it requires a password, and I don't know what it is for jovyan. Can you advise? For example, I tried to change file permissions on a file ('sudo chmod 777 filename') and I was asked to enter a password.

    Thanks. paula

    type:Question 
    opened by PaulaLaurenA219 33
  • Improve performance of CI system

    Improve performance of CI system

    We have crippled our CI system's performance after introducing support for arm64-based images. A key reason is that emulation of arm64 images on the amd64-based runners GitHub provides is far slower, besides the fact that we now end up building base-notebook and minimal-notebook for arm64 in sequence alongside the other images.

    I'm not fully sure how we should optimize this in the long run, but I assume we will have high-performance self-hosted arm64-based GitHub Actions runners that can work in parallel with the amd64 runners. Below is an overview of a highly optimized system, where several parts can be done separately.

    1. Nightly builds: We have nightly builds with :nightly-amd64 and :nightly-arm64 tags

    2. amd64 / arm64 in parallel: All tests for amd64 and arm64 run in parallel, relying on the nightly-amd64 and nightly-arm64 caches

    3. Images in parallel where possible: All tests for an individual image run in a dedicated job that needs its base-image job to complete.

      Some images can run in parallel:

      • base
      • minimal
      • scipy | r
      • tensorflow | datascience | pyspark
      • all-spark
    4. Avoid rebuilds when merging: Tests finish by updating a GitHub container registry associated with the PR. By doing so, our publishing job on merge to master can opt to use the images as they were built during tests if they are considered fresh enough.

    5. Parallel manifest creation: Merge to the default branch triggers manifest-creation jobs on both amd64 and arm64. If we opt not to optimize using step 4, this could also build fresh images using the nightly cache first.

    6. Combine manifests into one before pushing to the official registry: Merge to the default branch triggers a job that pulls both the amd64 and arm64 images and defines a combined docker manifest, which is then pushed to our official container registry. I think this could be done with something like docker manifest create <name of combined image> <amd64 only image> <arm64 only image>, but @manics knows more and I lack experience with this.
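    As a sketch of step 6 (the per-arch tag names here are hypothetical and unverified; the real tag scheme may differ):

    ```shell
    # Hypothetical per-arch tags, for illustration only.
    docker pull jupyter/base-notebook:amd64-latest
    docker pull jupyter/base-notebook:aarch64-latest

    # Combine both single-arch images into one multi-arch manifest and push it.
    docker manifest create jupyter/base-notebook:latest \
        jupyter/base-notebook:amd64-latest \
        jupyter/base-notebook:aarch64-latest
    docker manifest push jupyter/base-notebook:latest
    ```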

    Standalone performance issue

    This standalone issue will go away if we adopt better strategies like those above, so it isn't critical to fix on its own. But currently we rebuild minimal-notebook without using the cache during push-multi, assuming push-multi for base-notebook has already run. I think it is because we re-tag jupyter/base-notebook:latest.

    type:Enhancement type:Arm 
    opened by consideRatio 32
  • Make the base & minimal notebook containers not amd specific (e.g. support building for arm64)

    Make the base & minimal notebook containers not amd specific (e.g. support building for arm64)

    I run a small Kubernetes cluster with a mix of arm64 & x86_64 hosts, so to be able to use Jupyter I changed the Dockerfile to support being cross-built (for additional context see https://scalingpythonml.com/2020/12/12/deploying-jupyter-lab-notebook-for-dask-on-arm-on-k8s.html ).

    Even without the cross-build, this is still useful for anyone who wants to build the Jupyter container on any other non-x86_64 platform.

    Right now this has two lists of containers: those that support cross-building and those whose Dockerfiles need to be updated to support it. make build-all "does the right thing" in that it cross-builds the containers that support cross-building and builds single-arch containers for those that are not yet ported.
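    For reference, a cross-build of one of the ported images might look roughly like this (a sketch; it assumes a buildx builder with QEMU emulation is configured, and the build context path is illustrative):

    ```shell
    # One-time setup: register QEMU emulators and create a buildx builder.
    docker run --privileged --rm tonistiigi/binfmt --install all
    docker buildx create --use

    # Cross-build a single image for both architectures.
    docker buildx build --platform linux/amd64,linux/arm64 \
        -t jupyter/base-notebook:latest base-notebook/
    ```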

    opened by holdenk 32
  • Check healthcheck in all jupyter applications

    Check healthcheck in all jupyter applications

    Describe your changes

    Issue ticket if applicable

    Checklist (especially for first-time contributors)

    • [ ] I have performed a self-review of my code
    • [ ] If it is a core feature, I have added thorough tests
    • [ ] I will try not to use force-push to make the review process easier for reviewers
    • [ ] I have updated the documentation for significant changes
    opened by mathbunnyru 0
  • [BUG] start by singleuser doesn't have .bashrc properly set in user folder

    [BUG] start by singleuser doesn't have .bashrc properly set in user folder

    What docker image(s) are you using?

    minimal-notebook

    OS system and architecture running docker image

    Ubuntu 18.04 amd64

    What Docker command are you running?

    ---
    singleuser:
        image:
            name: "docker.io/jupyter/minimal-notebook"      
            tag: "python-3.9.13"
            pullPolicy: "Always"
        extraEnv:
            DOCKER_STACKS_JUPYTER_CMD: "notebook"
            #JUPYTERHUB_SINGLEUSER_APP: "notebook.notebookapp.NotebookApp"
        storage:
            dynamic:
                storageClass: aiidalab-storage
    ...
    

    How to Reproduce the problem?

    1. Open the spawned container (I use microk8s in my deployment).
    2. It is still running in lab mode, whereas I expect notebook mode since I set DOCKER_STACKS_JUPYTER_CMD=notebook
    3. Also, running ls -al shows there is no .bashrc in the user home folder.

    Command output

    No response

    Expected behavior

    The server starts in notebook mode and the .bashrc setting is not wiped out.

    Actual behavior

    Everything is fine if I run the container without JUPYTERHUB_API_TOKEN via docker run on my local machine. But when the user is spawned by the spawner, JUPYTERHUB_API_TOKEN makes startup shift to start-singleuser.sh (as https://github.com/jupyter/docker-stacks/blob/f60eda163f9164e5264335933dfeddd04323450c/base-notebook/start-notebook.sh#L11-L14 indicates).
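    The handoff can be modeled roughly like this (a simplified sketch of the dispatch, not the actual script; see the linked start-notebook.sh lines for the real logic):

    ```shell
    # Simplified model of the start-notebook.sh dispatch (an approximation):
    # when JupyterHub injects JUPYTERHUB_API_TOKEN, startup is handed off to
    # start-singleuser.sh instead of the plain notebook entrypoint.
    choose_startup_script() {
        if [ -n "${JUPYTERHUB_API_TOKEN:-}" ]; then
            echo "start-singleuser.sh"
        else
            echo "start-notebook.sh"
        fi
    }
    ```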

    Is there anything wrong in my singleuser setting?

    Anything else?

    No response

    type:Bug 
    opened by unkcpz 0
  • Upgrade `Python` to 3.11

    Upgrade `Python` to 3.11

    Describe your changes

    Upgrade python from 3.10 to 3.11

    Release notes

    Issue ticket if applicable

    Checklist (especially for first-time contributors)

    • [ ] I have performed a self-review of my code
    • [ ] If it is a core feature, I have added thorough tests
    • [x] I will try not to use force-push to make the review process easier for reviewers
    • [ ] I have updated the documentation for significant changes
    opened by bjornjorgensen 7
  • [ENH] - Release pyspark-notebook with spark 3.2.3 containing fix for CVE-2022-42889

    [ENH] - Release pyspark-notebook with spark 3.2.3 containing fix for CVE-2022-42889

    What docker image(s) is this feature applicable to?

    pyspark-notebook

    What changes are you proposing?

    Spark 3.2.3 has been released which contains a fix for CVE-2022-42889. https://github.com/apache/spark/pull/38352

    The fix has also been applied to 3.3.1 but 3.3.2 will not be released until Feb/March. https://github.com/apache/spark/pull/38262#issuecomment-1319552102

    Even though this CVE is not harmful in our context, internal alarms/notifications are a continuous nuisance. Is it possible to release an image of pyspark-notebook with Spark 3.2.3? That would let me switch to it and clear all the alarms.

    How does this affect the user?

    Continuous alert notifications bother multiple users, and a fix would allow them to be silenced.

    Anything else?

    No response

    type:Enhancement tag:Upstream 
    opened by bsikander 4
  • JupyterCon talk

    JupyterCon talk

    Hi everyone,

    I'm thinking about giving a talk for JupyterCon about this project.

    I would like to cover 2 main topics:

    1. What our project is and the ways you can use it (run it as a quick try-and-forget container / as a single user on a single machine / with JupyterHub / on k8s). This part should be useful for people who would like to use our images.
    2. How we do what we do (the internals), and what has changed in the past two years. This part will be mostly for developers who would like to contribute. I will talk about our build system, GitHub actions/workflows, Docker multi-arch images, and start scripts.

    Please let me know your thoughts on this, and if you have other ideas for a talk I would be really happy to hear them.

    opened by mathbunnyru 8
  • [BUG] - Health Check fails if you change the port jupyter runs on

    [BUG] - Health Check fails if you change the port jupyter runs on

    What docker image(s) are you using?

    minimal-notebook

    OS system and architecture running docker image

    RHEL7 docker swarm

    What Docker command are you running?

    Not really relevant, but I need to run it in a docker swarm, with a generalised 'ingress service'.

    For this, the internal port jupyter runs on needs to be changed for integration into the 'ingress proxy'. To change the port I made a slight modification to the docker image (see below).

    The problem is that the docker container dies unexpectedly after running for 46 seconds. During that time the service is visible within the container, but not external to it. This is because the built-in healthcheck never succeeds, and it eventually kills the container with little logged reporting (see below).

    How to Reproduce the problem?

    Dockerfile, to set port

    FROM "jupyter/minimal-notebook:latest"
    # Update Jupyter configuration to set port
    RUN set -eux; \
        sed -i 's/port = 8888/port = 8080/' /etc/jupyter/jupyter_notebook_config.py ;\
        sed -i 's/port = 8888/port = 8080/' /etc/jupyter/jupyter_server_config.py ;\
        :;
    

    You can also change the port in other ways, such as:

    creating a ~jovyan/.jupyter/jupyter_server_config.py file (which can also set a password)

    or setting a JUPYTER_PORT environment variable (IF the settings in the /etc/jupyter configs are removed)
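    For example (a sketch, assuming JUPYTER_PORT is honored once the /etc/jupyter defaults no longer hard-code the port):

    ```shell
    # Run with a non-default internal port; this only works if the /etc/jupyter
    # config files do not override JUPYTER_PORT (an assumption, per the note above).
    docker run -it --rm -p 8080:8080 -e JUPYTER_PORT=8080 jupyter/minimal-notebook:latest
    ```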

    Command output

    When you build and then run the modified docker image, docker ps reports Up 9 seconds (health: starting) 8888/tcp, despite the fact that jupyter is now running on port 8080.

    46 seconds after starting, the container dies with an unhelpful (Signal 15).

    Log output...

    [I 2022-10-28 05:20:00.393 ServerApp] Jupyter Server 1.21.0 is running at:
    [I 2022-10-28 05:20:00.393 ServerApp] http://jupyter_service:8080/lab?token=929eaaf2e60f8a947761cb1f2741d745fd46dde62f6fef7c
    or http://127.0.0.1:8080/lab?token=929eaaf2e60f8a947761cb1f2741d745fd46dde62f6fef7c
    [I 2022-10-28 05:20:00.393 ServerApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
    [C 2022-10-28 05:20:00.397 ServerApp] 
    
    To access the server, open this file in a browser:
    file:///home/jovyan/.local/share/jupyter/runtime/jpserver-8-open.html
    Or copy and paste one of these URLs:
    http://jupyter_service:8080/lab?token=929eaaf2e60f8a947761cb1f2741d745fd46dde62f6fef7c
    or http://127.0.0.1:8080/lab?token=929eaaf2e60f8a947761cb1f2741d745fd46dde62f6fef7c
    Entered start.sh with args: jupyter lab
    Executing the command: jupyter lab
    [C 2022-10-28 05:20:46.261 ServerApp] received signal 15, stopping
    [I 2022-10-28 05:20:46.262 ServerApp] Shutting down 2 extensions
    [I 2022-10-28 05:20:46.263 ServerApp] Shutting down 0 terminals
    

    Expected behavior

    Changing the internal port should not take days of work to track down; it should be straightforward and documented.

    The healthcheck should also be properly documented in the jupyter-stacks documentation. This would make the images more 'swarm friendly' and allow others to integrate them better when port 8888 is NOT available.

    Yes, you can map the port when doing a docker run, but that is NOT always possible.

    Actual behavior

    Changing the internal port is undocumented in stacks.

    The healthcheck kills the container without notice (signal 15 hardly makes it clear) when the port is different.

    Days of work were lost trying to figure out what should be a straightforward and simple task.

    Anything else?

    There is an existing environment variable JUPYTER_PORT that defines the default port, but any such setting is currently overridden by the configuration files in /etc/jupyter.

    This may be usable to set the healthcheck, especially if the config-file default is removed or allowed to be overridden by the env var.

    in Dockerfile....

    HEALTHCHECK  --interval=15s --timeout=3s --start-period=5s --retries=3 \
        CMD wget -O- --no-verbose --tries=1 --no-check-certificate \
        http${GEN_CERT:+s}://localhost:${JUPYTER_PORT:-8888}${JUPYTERHUB_SERVICE_PREFIX:-/}api || exit 1
    

    That Environment variable also needs to be documented in the jupyter-stacks documentation, with the health check.

    type:Bug status:Need Info 
    opened by antofthy 6
Owner: Project Jupyter (Interactive Computing)