Python process launching

Overview

sh is a full-fledged subprocess replacement for Python 2.6 - 3.8, PyPy and PyPy3 that allows you to call any program as if it were a function:

from sh import ifconfig
print(ifconfig("eth0"))

sh is not a collection of system commands implemented in Python.

Complete documentation here

Installation

$> pip install sh

Support

Developers

Updating the docs

Check out the gh-pages branch and follow the README.rst there.

Testing

I've included a Docker test suite in the docker_test_suit/ folder. To build the image, cd into that directory and run:

$> ./build.sh

This builds an Ubuntu 18.04 LTS image with all Python versions from 2.6 to 3.8 installed. Once it's done, stay in that directory and run:

$> ./run.sh

This will mount your local code directory into the container and start the test suite, which will take a long time to run. If you wish to run a single test, you may pass that test to ./run.sh:

$> ./run.sh FunctionalTests.test_unicode_arg

To run a single test for a single environment:

$> ./run.sh -e 3.4 FunctionalTests.test_unicode_arg

Coverage

First run all of the tests:

$> python sh.py test

This will aggregate coverage data into a .coverage file. You may then view the report with:

$> coverage report

Or generate an HTML report with:

$> coverage html

This creates ./htmlcov/index.html, which you can open in a web browser.

Comments
  • Standard file descriptors should work as generators

    I've implemented a test at https://github.com/pcn/pbs in the generator branch. I think it's clumsy, but I'm still feeling my way around.

    Use case:

    Do something useful with vmstat, tcpdump, etc. These commands are most useful when producing output constantly, and pbs should be able to consume these line by line.

    The implementation avoids calling subprocess.communicate and selects directly from the pbs_object.process object's stdout.

    I'm looking for thoughts on how to do this "right" (e.g. should it be a subclass or a separate module that inherits from pbs?). Currently it changes how it's used, but it is convenient because if only the last command in a pipeline has _generator=True set, then everything should work as normal (once I hook stdin back up :) except the last command will yield lines that can be worked with.

    feature 
    opened by pcn 32
  • sh gets zero exit code for nonzero exit code

    Consider the following bundle of code:

    http://inversethought.com/jordi/wtf.zip

    Run ./wtf and notice how there is no Python exception. Now look at what sh.py is running, and it's ss/bin/scansetup, which does error out. If you edit wtf to remove the sh.cd("..") call, you'll see that now you do get a pretty Python stack trace.

    I have spent some time trying to debug this, and I am utterly baffled. I have managed to reproduce this on a few systems, including Mac OS X. The Python version has been 2.6.x on all systems I've tested. I haven't tried with a different Python version.

    opened by jordigh 28
  • Suggest pbs into the Standard Library

    Who could write a PEP for putting pbs into the Standard Library? I know it might be a bit early, and pbs needs to be a bit more mature for that...

    But I was banging my head against subprocess so many times until I got it right. I need to head to the documentation every time I want to use subprocess.

    With pbs it's much, much simpler...

    docs 
    opened by fruch 25
  • Does not work when imported from compiled-only modules

    Because pbs assumes that the module that first imports it comes from a readable .py file, it won't work when imported from modules for which only a .pyc exists. For example:

    test_a.py:

    import test_b
    test_b.main()
    

    test_b.py:

    from pbs import echo
    def main():
        echo('Hello world!')
    

    Run python test_a.py once, then delete test_b.py (leaving only test_b.pyc). Now, running test_a.py will crash:

    Traceback (most recent call last):
    File "test_a.py", line 2, in <module>
        import test_b
    File "/home/me/pbs/test_b.py", line 2, in <module>
    File "/home/me/pbs/pbs.py", line 419, in <module>
        with open(script, "r") as h: source = h.readlines()
    IOError: [Errno 2] No such file or directory: '/home/me/pbs/test_b.py'
    

    This could come up when a program is packaged somehow (e.g. frozen in a zip file).


    All in all, the magic here is pretty fragile and will undoubtedly fail in other mysterious ways. Maybe disallowing import * and allowing the above instead (and also import from the REPL) would be the lesser evil?

    bug low priority 
    opened by encukou 24
  • Setting default redirections for all commands

    Hello,

    First, a big THANK YOU for your work on this project. I find it extremely useful, simple and pythonic. And it's going to be extremely useful to convince my coworkers to migrate their shell scripts to Python :)

    Now, would you be open to allowing some kind of global configuration, specifically for the default redirection of stderr?

    Currently, some "hardcoded" defaults are defined here: https://github.com/amoffat/sh/blob/master/sh.py#L669 And by default each command's stderr is simply discarded.

    Could that default behaviour be made configurable? I would like commands' stderr to be written to the parent Python script's stderr by default. And in the same spirit, it could sometimes be handy to forward stdout the same way.

    I'd love to work on a pull request if you are ok with this feature request.

    Regards

    opened by Lucas-C 23
  • Pipeline failure

    Hi,

    Given the code:

        import sh
        sh.dd(sh.dd('if=/dev/zero', 'bs=1M', "count=1024"), 'of=/dev/null')

    sh dies with a MemoryError as seen in http://pastebin.ca/2306288

    It looks as though sh is trying to buffer the input rather than connecting the stdout of the inner dd process to the stdin of the outer dd process, and as such is blowing up.

    To give you an example of the behaviour I was expecting, see the example below.

    import subprocess
    
    s = subprocess.Popen(['dd', 'if=/dev/zero', 'bs=1M', 'count=10240'], stdout=subprocess.PIPE)
    r = subprocess.Popen(['dd', 'of=/dev/null', 'bs=1M'], stdin=s.stdout)
    
    r.wait()
    

    I've taken a look through sh.py but I can't seem to work out what exactly I'd need to do in order to patch this. Would someone mind lending a hand?

    opened by asharp 19
  • Fix: catching IOError errno=35 on mac os

    Using sh in a python script that is running under jenkins, we got the following Exception :

    Exception in thread Thread-330:
    Traceback (most recent call last):
      File "/usr/local/Cellar/python/2.7.10_2/Frameworks/Python.framework/Versions/2.7/lib/python2.7/threading.py", line 810, in __bootstrap_inner
        self.run()
      File "/usr/local/Cellar/python/2.7.10_2/Frameworks/Python.framework/Versions/2.7/lib/python2.7/threading.py", line 763, in run
        self.__target(*self.__args, **self.__kwargs)
      File "/usr/local/lib/python2.7/site-packages/sh.py", line 1484, in output_thread
        done = stream.read()
      File "/usr/local/lib/python2.7/site-packages/sh.py", line 1974, in read
        self.write_chunk(chunk)
      File "/usr/local/lib/python2.7/site-packages/sh.py", line 1949, in write_chunk
        self.should_quit = self.process_chunk(chunk)
      File "/usr/local/lib/python2.7/site-packages/sh.py", line 1850, in process
        handler.flush()
    IOError: [Errno 35] Resource temporarily unavailable

    We identified that this issue may happen under OSX only, and the proposed pull request fixes it. OSX handles non-blocking IO differently: the non-fatal IOError 35 may occur, in which case you just have to retry the operation.

    cant reproduce 
    opened by madlag 18
  • long line truncated in stdout ?

    It looks like long lines are truncated, which makes the command fail.

    sh.grep(sh.ps('aux'), "-ie", "java.*Cassandra")
    

    while this shell command works well:

     ps aux | grep -ie "java.*Cassandra"
    

    Is there anything I should know about lines being truncated?

    opened by aboudreault 17
  • Better support for Windows platform

    • reading PATHEXT to search for executable files
    • building internal commands from cmd.exe, and adding them as well
    • added support for uppercase internal commands
    • added a section to the README.md
    • support for non-ascii windows console (thanks to @stania)
    feature 
    opened by fruch 17
  • Lack of _fg option?

    The switch from pbs to sh seems to have removed the _fg option. I've scanned the documentation and can't see a recommended solution for throwing commands to the foreground. I find this useful if I'm running an intense command and want to watch its output. What's the recommended fix for this?

    For completeness, I'm specifically using _fg from pbs for the following:

    • make menuconfig on Linux kernels
    • tar extractions for those tarballs
    • wget for lurking the downloads of those tarballs
    • make for making those kernels
    feature 
    opened by akerl 15
  • Command execution returns success on segfault

    Commands silently succeed when the underlying process segfaults. e.g.:

    sh.Command('sh')('-c', 'kill -SEGV $$')
    

    returns without error, where:

    sh.Command('false')
    

    raises an exception for its non-zero exit code, as expected.

    Testing from the shell, the segfaulting processes all do set a non-zero exit code (139), but it is silently dropped by sh. (The same is not true of subprocess.check_call.)

    (This is on OS X 10.8.1, with the latest sh.py from git.)
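    For comparison, the standard library reports death-by-signal distinctly from a normal exit. A sketch (POSIX only):

```python
import signal
import subprocess

# subprocess reports a signal death as a negative returncode: -SIGSEGV
# (-11 on Linux) for a segfault. A shell reports the same event as
# 128 + 11 = 139.
proc = subprocess.run(["sh", "-c", "kill -SEGV $$"])
print(proc.returncode)  # -11 on Linux
```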

    opened by jrk 15
  • parsing default argument for `_env` does not work for `dict` literals spanning multiple lines

    The following

    import sh
    
    
    def test1():
        sh2 = sh(_env=dict(
            PATH='/usr/bin'
            ))
        output = sh2.env()
        print(f'test1 env: {output.stdout}')
    
    
    def test2():
        sh2 = sh(_env={'PATH': '/usr/bin'})
        output = sh2.env()
        print(f'test2 env: {output.stdout}')
    
    
    def test3():
        sh2 = sh(_env={
            'PATH': '/usr/bin'
            })
        output = sh2.env()
        print(f'test3 env: {output.stdout}')
    
    
    test1()
    test2()
    test3()
    
    

    gives

    test1 env: b'PATH=/usr/bin\n'
    test2 env: b'PATH=/usr/bin\n'
    Traceback (most recent call last):
      File "test.py", line 28, in <module>
        test3()
      File "test.py", line 20, in test3
        'PATH': '/usr/bin'
      File "/home/jimustafa/.pyenv/versions/cookiecutter-voila-app/lib/python3.7/site-packages/sh.py", line 3581, in __call__
        parsed = ast.parse(code)
      File "/home/jimustafa/.pyenv/versions/3.7.13/lib/python3.7/ast.py", line 35, in parse
        return compile(source, filename, mode, PyCF_ONLY_AST)
      File "<unknown>", line 1
    SyntaxError: illegal target for annotation
    
    opened by jimustafa 2
  • Provide wheels for sh

    Hey, would it be possible to provide Python wheels for the latest sh version?

    Wheels seem to be available on pypi for older versions, but they are missing for version 1.14.3 https://pypi.org/project/sh/1.14.3/#files

    opened by philipp-sontag-by 0
  •  sh.Command: not enough values to unpack (expected 2, got 1)

    My code has:

        avahi_browse = sh.Command("avahi-browse")
        for line in avahi_browse("-rlpa", _iter=True):
            print(line)
    

    This results in:

    2022-09-25 19:38:47 T:1126 ERROR <general>: for line in avahi_browse("-rlpa", _iter=True):
      File "/storage/.kodi/addons/plugin.program.zeroconfbrowse/resources/lib/sh.py", line 1524, in __call__
        return self.__class__.RunningCommandCls(cmd, call_args, stdin, stdout, stderr)
      File "/storage/.kodi/addons/plugin.program.zeroconfbrowse/resources/lib/sh.py", line 779, in __init__
        self.process = OProc(self, self.log, cmd, stdin, stdout, stderr,
      File "/storage/.kodi/addons/plugin.program.zeroconfbrowse/resources/lib/sh.py", line 2128, in __init__
        sid, pgid = os.read(session_pipe_read, 1024).decode(DEFAULT_ENCODING).split(",")
    ValueError: not enough values to unpack (expected 2, got 1)
    

    Am I doing it wrong? Why am I getting not enough values to unpack (expected 2, got 1)?

    (Note to self: plugin.program.zeroconfbrowse on Kodi 20)

    opened by probonopd 0
  • explicitly set 'exit_code' on 'ErrorReturnCode'

    This commit explicitly defines 'exit_code' on the ErrorReturnCode class to prevent false positives from linters such as pylint.

    Example:

    import sh
    
    try:
        sh.bash("-c", "exit 1")
    except sh.ErrorReturnCode as error:
        print(error.exit_code)
    
    $ pylint ./example.py
    ...
    E1101: Instance of 'ErrorReturnCode' has no 'exit_code' member (no-member)
    

    This happens because 'exit_code' is defined during metaclass initialization (see get_rc_exc function), and pylint does not handle such members correctly.

    Explicitly assigning 'exit_code' during 'ErrorReturnCode' initialization gives pylint enough information to deduce that the class actually has an 'exit_code' member.

    opened by kotborealis 0
  • POC: Add lazy resolving of command paths

    Defer resolving of the actual command path to when the command is called, not when it's imported. This allows providing a custom PATH via _env to customize resolving of the command path.

    Remove test case that only tested if the mocker works.

    NOTE: this is just a proof of concept of how lazy resolving could work. See discussions in https://github.com/amoffat/sh/pull/602

    opened by ecederstrand 1
  • sh deadlocking in child fork process due to logging

    I recently had an issue where some tests I have which were invoking sh with _bg=True were deadlocking.

    After investigating, I noticed this was happening inside the sh library, and related to logging inside of an at-fork handler. This is a minimal reproducible example of the issue:

    # Run using python3.9 -mpytest --full-trace -o log_cli=true -s -vv --log-cli-level=DEBUG repro.py
    import logging
    import os
    
    import sh
    
    
    def test_sh_log_in_child_fork():
        logger = logging.getLogger()
    
        os.register_at_fork(after_in_child=lambda: logger.debug("in child"))
    
        procs = [sh.Command("true")(_bg=True) for _ in range(10)]
        for proc in procs:
            proc.wait()
    

    Hitting this issue does require a bit of scheduling bad luck: the background thread needs to be interrupted while the logging lock is held (my guess is while the log line is being formatted), and then fork() needs to be called from the main thread. In practice this doesn't seem to be that rare: in my testing the reproduction above hits it every time, with only 10 iterations. I'm guessing some of the objects being logged take some time to format? Also, instead of os.register_at_fork() you could shoot yourself in the foot via the preexec_fn argument.

    I am not an expert on Python internals, but generally calling fork() from a multi-threaded application has some pretty concerning risks. Realistically the issue can't be avoided entirely, since users will always be able to cause it themselves, but I think the sh library's background thread makes the problem far more likely than it needs to be.

    Can this thread avoid logging / other lock-holding-operations?

    I think it's also worth noting that at-fork handlers were only added in Python 3.7, so they may not be that common yet.

    opened by adamncasey 2
Owner
Andrew Moffat
Tech Generalist