Run a subprocess in a pseudo terminal

Overview

Launch a subprocess in a pseudo terminal (pty), and interact with both the process and its pty.

Sometimes, piping stdin and stdout is not enough. There might be a password prompt that doesn't read from stdin, output that changes when it's going to a pipe rather than a terminal, or curses-style interfaces that rely on a terminal. If you need to automate these things, running the process in a pseudo terminal (pty) is the answer.

Interface:

from ptyprocess import PtyProcessUnicode

p = PtyProcessUnicode.spawn(['python'])
p.read(20)
p.write('6+6\n')
p.read(20)
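Under the hood, spawning in a pty means forking a child whose stdin/stdout/stderr are a pseudo-terminal slave, while the parent talks to the master side. A minimal stdlib-only sketch of that idea (illustrative only, not ptyprocess's actual implementation):

```python
import os
import pty

# Fork a child attached to a new pseudo terminal; the parent talks to
# it through the pty master file descriptor.
pid, master_fd = pty.fork()
if pid == 0:
    # Child: stdin/stdout/stderr are now the pty slave.
    os.execvp('echo', ['echo', 'hello from a pty'])

chunks = []
while True:
    try:
        data = os.read(master_fd, 1024)
    except OSError:  # on Linux, EIO signals EOF on a pty master
        break
    if not data:
        break
    chunks.append(data)
os.close(master_fd)
os.waitpid(pid, 0)
output = b''.join(chunks).decode()
print(output)
```

ptyprocess wraps this kind of loop behind read()/write(), plus process bookkeeping and terminal-mode control.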
Issues
  • ptyprocess 0.5.1

    ptyprocess 0.5.1

    Hello,

    I was wondering if we could make ptyprocess 0.5.1 available for [legacy] easy-install installations

    Thanks

    Carlos

    opened by papachoco 12
  • Use stdin in child process

    Use stdin in child process

    Is it possible?

    I'd like to spawn a process that reads stdin.

    cat requirements.txt | ./script_that_spawns.py safety --check --stdin
    

    When doing so and trying to read safety output, it blocks, and I have to interrupt it with control-c:

    Traceback (most recent call last):
      File "/home/pawamoy/.cache/pypoetry/virtualenvs/mkdocstrings-ytlBmpdO-py3.8/bin/failprint", line 8, in <module>
        sys.exit(main())
      File "/home/pawamoy/.cache/pypoetry/virtualenvs/mkdocstrings-ytlBmpdO-py3.8/lib/python3.8/site-packages/failprint/cli.py", line 125, in main
        return run(
      File "/home/pawamoy/.cache/pypoetry/virtualenvs/mkdocstrings-ytlBmpdO-py3.8/lib/python3.8/site-packages/failprint/cli.py", line 54, in run
        output.append(process.read())
      File "/home/pawamoy/.cache/pypoetry/virtualenvs/mkdocstrings-ytlBmpdO-py3.8/lib/python3.8/site-packages/ptyprocess/ptyprocess.py", line 818, in read
        b = super(PtyProcessUnicode, self).read(size)
      File "/home/pawamoy/.cache/pypoetry/virtualenvs/mkdocstrings-ytlBmpdO-py3.8/lib/python3.8/site-packages/ptyprocess/ptyprocess.py", line 516, in read
        s = self.fileobj.read1(size)
    KeyboardInterrupt
    

    Here is the actual Python code I'm using:

    process = PtyProcessUnicode.spawn(cmd)
    
    output = []
    
    while True:
        try:
            output.append(process.read())
        except EOFError:
            break
    
    process.close()
    
    opened by pawamoy 12
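A note on what may be happening here: a pty child has no separate stdin pipe - its stdin is the pty slave - so input piped to the parent has to be forwarded by writing it to the pty master, and read() blocks until the child produces output or exits. A stdlib-only sketch of forwarding input (using cat as a stand-in child; illustrative, not the failprint code above):

```python
import os
import pty

pid, master_fd = pty.fork()
if pid == 0:
    # Child: stdin is the pty slave, so the child can read "stdin".
    os.execvp('cat', ['cat'])

# Forward what would have been piped stdin by writing to the master,
# then send the terminal EOF character (^D) so cat sees end-of-input.
os.write(master_fd, b'forwarded line\n')
os.write(master_fd, b'\x04')

chunks = []
while True:
    try:
        data = os.read(master_fd, 1024)
    except OSError:  # EIO on Linux once the child has exited
        break
    if not data:
        break
    chunks.append(data)
os.close(master_fd)
os.waitpid(pid, 0)
output = b''.join(chunks).decode()
```

Note that the tty echoes the forwarded input back, so the captured output contains both the echo and cat's own output.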
  • Potential fix for 'exec' failure case

    Potential fix for 'exec' failure case

    • Adding more robust code to handle the case where the exec call within spawn fails. Now, spawn will raise an exception if this happens.
    • Adding a test to ensure this exception is raised when an invalid binary is run
    opened by anwilli5 9
  • Potential performance issue with unbuffered IO and the PtyProcess readline method

    Potential performance issue with unbuffered IO and the PtyProcess readline method

    Calls to self.fileobj.readline() from the PtyProcess readline() method read data one byte at a time (most likely because fileobj is opened with buffering=0). Thus, this program:

    from ptyprocess import PtyProcess
    p = PtyProcess.spawn(['perl', '-e', '''use 5.010; foreach my $letter ('a'..'z'){ say $letter x 1000; }'''])
    while True:
        try:
            print(p.readline())
        except EOFError:
            break
    p.close()
    

    has pretty poor performance (output from strace):

    % time     seconds  usecs/call     calls    errors syscall
    ------ ----------- ----------- --------- --------- ----------------
     93.48    0.465020          18     26214         1 read
      2.28    0.011353          23       489       381 open
      0.84    0.004197        4197         1           clone
      0.61    0.003037          19       160       113 stat
    

    Is there a compelling reason to specify that the fileobj should have unbuffered IO?

    The PtyProcess read() method does not experience this behavior because it uses a default buffer size of 1024.

    enhancement help wanted needs-tests 
    opened by anwilli5 7
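One workaround in the direction this issue suggests is to wrap the pty master fd in a buffered reader yourself, so each readline() costs one large read() syscall instead of one per byte. A stdlib-only sketch (illustrative, not ptyprocess's API):

```python
import os
import pty

pid, master_fd = pty.fork()
if pid == 0:
    os.execvp('sh', ['sh', '-c', r'printf "a\nb\nc\n"'])

# Buffered wrapper: readline() now pulls up to 4096 bytes per syscall
# instead of one byte at a time.
fobj = os.fdopen(master_fd, 'rb', buffering=4096)
lines = []
while True:
    try:
        line = fobj.readline()
    except OSError:  # EIO on Linux at pty EOF
        break
    if not line:
        break
    lines.append(line)
fobj.close()
os.waitpid(pid, 0)
```

The pty converts the child's \n to \r\n on output, so the captured lines carry \r\n endings.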
  • Flit packaging

    Flit packaging

    @jquast Flit is my packaging tool for building wheels without involving setuptools. With this branch I can build a wheel by running flit wheel.

    We can use this:

    1. Standalone, by getting rid of setup.py and MANIFEST.in. This means that future releases would only have wheels on PyPI, not sdist tarballs. I'm already doing this for a number of my other projects - pip has been able to install wheels for about 2½ years - but it may surprise some people.
    2. In parallel, using flit to build wheels and setup.py for sdists. There's a risk of duplicated information getting out of date, but the main thing we update is the version number, and flit takes that from __version__, which we need to update anyway.
    3. I have a shim called flituptools so setup.py can use the flit information. But that would limit sdists to use with Python 3.
    opened by takluyver 6
  • Use setuptools, not distutils

    Use setuptools, not distutils

    To make wheel-building easier (see this).

    opened by njwhite 6
  • Logging of stdout and stderr

    Logging of stdout and stderr

    Hi :)

    I would like to subprocess any given command and log its stdout and stderr separately. This would seem like an easy thing to do, but I'm having no luck because:

    1. Processes which detect that their stdout/stderr are not ttys will modify their output.
    2. Processes which detect that their stdout/stderr are not the same file path will assume there's stdout redirection and modify their output.

    So in jump ptys and ptyprocess to the rescue. Tie the stdout and stderr to a pty, you get one file path like /dev/tty0021, and isatty() returns true for both. The problem is that now we can't distinguish stdout from stderr. OK, no problem, just make two ptys - one for stdout and one for stderr. But now, although both pass isatty(), their file paths will look like /dev/tty0021 and /dev/tty0022 (for example). The subprocess reacts as if you weren't using a pty in the first place, and you log nothing.

    I have been trying for four months to figure out a way around this, and because you are the expert in ptys I thought I might ask you directly - could you think of a way to log stdout and stderr separately in your program, and still fool a script like this: https://bpaste.net/show/000d6f70ef41

    THANK YOU :D

    opened by JohnLonginotto 6
  • fixed typo

    fixed typo

    opened by Nystrex 5
  • FreeBSD fails fork_pty: OSError: [Errno 6] Device not configured: '/dev/tty'

    FreeBSD fails fork_pty: OSError: [Errno 6] Device not configured: '/dev/tty'

    Got a FreeBSD (digital ocean droplet, freebsd.pexpect.org) build agent prepared. It raises an exception very early in a critical code path, causing the test runner to fork repeatedly until the build agent is eventually killed by the kernel due to an OOM condition.

    Error is in method pty_make_controlling_tty at:

            # Verify we now have a controlling tty.
            fd = os.open("/dev/tty", os.O_WRONLY)
    
    [[email protected] ~]$ sudo -u teamcity -s
    $ cd /opt/TeamCity/work/210ae16cc3f30c30/ptyprocess
    $ . `which virtualenvwrapper.sh`
    $ mkvirtualenv pexpect27 --python=`which python2.7`
    $ pip install -e .
    $ cd ../pexpect
    $ python
    Python 2.7.9 (default, Jan  8 2015, 21:47:19)
    [GCC 4.2.1 Compatible FreeBSD Clang 3.3 (tags/RELEASE_33/final 183502)] on freebsd10
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import pexpect
    >>> bash = pexpect.spawn('/bin/bash')
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "pexpect/pty_spawn.py", line 189, in __init__
        self._spawn(command, args, preexec_fn)
      File "pexpect/pty_spawn.py", line 281, in _spawn
        cwd=self.cwd, **kwargs)
      File "/opt/TeamCity/work/210ae16cc3f30c30/ptyprocess/ptyprocess/ptyprocess.py", line 220, in spawn
        pid, fd = _fork_pty.fork_pty()
      File "/opt/TeamCity/work/210ae16cc3f30c30/ptyprocess/ptyprocess/_fork_pty.py", line 30, in fork_pty
        pty_make_controlling_tty(child_fd)
      File "/opt/TeamCity/work/210ae16cc3f30c30/ptyprocess/ptyprocess/_fork_pty.py", line 76, in pty_make_controlling_tty
        fd = os.open("/dev/tty", os.O_WRONLY)
    OSError: [Errno 6] Device not configured: '/dev/tty'
    

    /dev/tty may be opened under normal conditions.

    bug 
    opened by jquast 5
  • Integrate unicode support into PtyProcess class

    Integrate unicode support into PtyProcess class

    Deprecating PtyProcessUnicode.

    This branch will be required for similar experimentation I'm about to do in Pexpect.

    opened by takluyver 5
  • Add **kwargs to PtyProcess.spawn

    Add **kwargs to PtyProcess.spawn

    The PtyProcessUnicode class accepts keyword arguments like encoding and codec_errors. However, it is not easy to set these arguments using just PtyProcess.spawn.

    I think we could add **kwargs to spawn and create the class instance using cls(pid, fd, **kwargs). It would be neat and improve extensibility.

    opened by dong-zeyu 0
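The proposed shape of the change, sketched with stand-in classes (names and values are hypothetical, not the real ptyprocess code):

```python
class Base:
    def __init__(self, pid, fd):
        self.pid, self.fd = pid, fd

    @classmethod
    def spawn(cls, argv, **kwargs):
        pid, fd = 1234, 5               # stand-ins for the real fork_pty()
        return cls(pid, fd, **kwargs)   # forward extra keywords to __init__

class UnicodeVariant(Base):
    def __init__(self, pid, fd, encoding='utf-8', codec_errors='strict'):
        super().__init__(pid, fd)
        self.encoding = encoding
        self.codec_errors = codec_errors

# The subclass can now receive its keyword arguments through spawn().
p = UnicodeVariant.spawn(['python'], encoding='latin-1')
```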
  • PtyProcess.read() returns a different value every call

    PtyProcess.read() returns a different value every call

    This is a very severe bug. When calling Ptyprocess.read() the value returned is different almost every time:

    ptyprocess.PtyProcess.spawn(['openssl', "ec", '-noout', '-text', '-in', '/opt/key/s128r1.key']).read()

    The output is different on each run with the same parameters (screenshots omitted).

    I don't know what is causing this but this is very weird.

    opened by gggal123 1
  • The preexec_fn should be executed before closing the file descriptors.

    The preexec_fn should be executed before closing the file descriptors.

    Currently, the preexec_fn is executed after the file descriptors are closed. This has some unwanted effects:

    • if preexec_fn opens a file descriptor, it will be inherited by the child process
    • if preexec_fn relies on having some file descriptor open, it will crash (see https://github.com/pexpect/pexpect/issues/368)

    The proposal is to move the "close the fds" section below the "execute the preexec_fn" code: https://github.com/pexpect/ptyprocess/blob/master/ptyprocess/ptyprocess.py#L266-L285

    For reference, this is how subprocess.Popen does it: https://github.com/python/cpython/blob/master/Modules/_posixsubprocess.c#L528-L549

    If it is okay, I can do a PR with the fix but I would like to hear your opinions about this.

    opened by eldipa 0
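A toy in-process demonstration of why the order matters when the hook relies on an open descriptor (names are hypothetical; the real code runs in the forked child):

```python
import os

r, w = os.pipe()

def preexec_fn():
    os.write(w, b'hook ran\n')   # relies on w still being open

# Current (problematic) order, simplified: descriptors are closed first...
os.close(w)
try:
    preexec_fn()
    hook_ok = True
except OSError:  # EBADF: the descriptor the hook needed is already gone
    hook_ok = False
os.close(r)
```

Running preexec_fn before the close loop, as proposed, avoids the OSError.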
  • Use and prefer os.posix_spawn() when available

    Use and prefer os.posix_spawn() when available

    Python 3.8 added os.posix_spawn(); this changes ptyprocess to use it when available.

    Since os.posix_spawn() completely bypasses os.fork() and pty.fork(), it avoids problems with logging locks and code needing to be thread-safe.

    opened by cagney 3
  • use os.posix_spawn when available (i.e, 3.8)

    use os.posix_spawn when available (i.e, 3.8)

    The attached patch is an experiment in using Python 3.8's os.posix_spawn() in ptyprocess. Since it doesn't use fork() it eliminates all those problems. A simple test using pexpect.interact() seemed to work.

    I know it has a race with inheritable when called in parallel; and I'm sure there's more.

    ptyprocess-posix-spawn.patch.gz

    The change looks bigger than it is because I indented all the old code. Below is what matters, which I've included so it is easier to pick it apart....

        if hasattr(os, 'posix_spawn'):
            print("using posix_spawn")
            # Issue 36603: Use os.openpty() (and try to avoid the
            # whole pty module) as that guarantees inheritable (if it
            # ever fails then just file a bug against os.openpty())
            fd, tty = os.openpty()
            # Try to set window size on TTY per below; but is this
            # needed?
            try:
                _setwinsize(tty, *dimensions)
            except IOError as err:
                if err.args[0] not in (errno.EINVAL, errno.ENOTTY):
                    raise
            # Try to disable echo if spawn argument echo was unset per
            # below; but does this work?
            if not echo:
                try:
                    _setecho(tty, False)
                except (IOError, termios.error) as err:
                    if err.args[0] not in (errno.EINVAL, errno.ENOTTY):
                        raise
            # Create the child: convert the tty into STDIO; use the
            # default ENV if needed; and try to make the child the
            # session head using SETSID.  Assume that all files have
            # inheritable (close-on-exec) correctly set.
            file_actions=[
                (os.POSIX_SPAWN_DUP2, tty, STDIN_FILENO),
                (os.POSIX_SPAWN_DUP2, tty, STDOUT_FILENO),
                (os.POSIX_SPAWN_DUP2, tty, STDERR_FILENO),
                (os.POSIX_SPAWN_CLOSE, tty),
                (os.POSIX_SPAWN_CLOSE, fd),
            ]
            spawn_env = env or os.environ
            pid = os.posix_spawn(command, argv, spawn_env,
                                 file_actions=file_actions,
                                 setsid=True)
            # Child started.  Now close tty and stop PTY(FD) being
            # inherited. Note that there's a race here: a parallel
            # fork/exec would unwittingly inherit this PTY(FD)/TTY
            # pair.  Probably need to wrap all this in a lock?
            os.close(tty)
            os.set_inheritable(fd, False)
    
    opened by cagney 4
  • PtyProcess.spawn just got slower and can trigger problems

    PtyProcess.spawn just got slower and can trigger problems

    This is somewhat related to #43.

    Python 3.7.1 contains the change https://bugs.python.org/issue6721, which tries to avoid a deadlock in the logging code. It works by having os.fork() grab all the logging locks before executing fork(). It's expensive - if the forked process is just going to exec, then os.spawn() is preferred. It's also creating, let's say, interesting problems (deadlocks) with code that was working.

    Since PtyProcess calls pty.fork() (which calls os.fork() ...), it triggers this code path.

    Using some equivalent of os.spawn() would eliminate this.

    opened by cagney 2
  • Rather than closing all the file descriptors, set close on exec on them

    Rather than closing all the file descriptors, set close on exec on them

    The code that closes all the file descriptors in the child process before exec breaks practically any and all efforts to debug exec errors; especially if they're not OSErrors.

    In my case, I was debugging under PyCharm. pydevd monkeypatches all the exec routines, and because of a behavioural change in pexpect - it now re-encodes all the arguments as utf-8 encoded bytes rather than strings - pydevd crashed out when it tried to detect the presence of python in its code: bytes.endswith() needs to be passed a b'' string, or else it bombs out.

    opened by petesh 3
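The alternative suggested in the title can be sketched with the stdlib: mark descriptors close-on-exec instead of closing them eagerly, so they remain usable for error reporting and vanish only when exec succeeds (illustrative sketch, not the ptyprocess code):

```python
import fcntl
import os

r, w = os.pipe()

# Instead of os.close(w) before exec, set FD_CLOEXEC: the descriptor
# stays usable for error reporting and is closed automatically by a
# successful exec. (Since Python 3.4, os.pipe() descriptors are already
# non-inheritable, so the flag may already be set here.)
flags = fcntl.fcntl(w, fcntl.F_GETFD)
fcntl.fcntl(w, fcntl.F_SETFD, flags | fcntl.FD_CLOEXEC)

cloexec_set = bool(fcntl.fcntl(w, fcntl.F_GETFD) & fcntl.FD_CLOEXEC)
os.close(r)
os.close(w)
```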
  • PtyProcess.spawn (and thus pexpect) slowdown in close() loop

    PtyProcess.spawn (and thus pexpect) slowdown in close() loop

    The following code in ptyprocess

    https://github.com/pexpect/ptyprocess/blob/3931cd45db50ee8533b8b0fef424b8d75f7ba1c2/ptyprocess/ptyprocess.py#L260-L269

    is looping through all possible file descriptors in order to close them (note that closerange() is implemented as a loop, at least on Linux). In case the limit of open fds (aka ulimit -n, aka RLIMIT_NOFILE, aka SC_OPEN_MAX) is set too high (for example, with recent docker it is 1024*1024), this loop takes considerable time (as it results in about a million close() syscalls).

    The solution (at least for Linux and Darwin) is to obtain the list of actually opened fds, and only close those. This is implemented in subprocess module in Python3, and there is a backport of it to Python2 called subprocess32.

    This issue was originally reported to docker: https://github.com/docker/for-linux/issues/502

    Another good reason for using subprocess (it is multithread-safe) is described in https://github.com/pexpect/ptyprocess/issues/43

    opened by kolyshkin 2
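The fix described above can be sketched for Linux by reading /proc/self/fd (illustrative only; the stdlib's subprocess does this more carefully, and Python 3.10 added os.close_range()):

```python
import os

def open_fds():
    # Linux-specific: list only the descriptors that are actually open,
    # instead of looping up to RLIMIT_NOFILE (possibly millions).
    return sorted(int(name) for name in os.listdir('/proc/self/fd'))

r, w = os.pipe()
fds = open_fds()   # includes r and w (and the fd used to list the dir)
os.close(r)
os.close(w)
```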
  • is it possible to PtyProcess.read() without blocking?

    is it possible to PtyProcess.read() without blocking?

    I'm running ptyprocess reads in a greenlet coroutine, so when I call .read() and there's no data, it blocks and stops all the other greenlets. Is it possible to read without blocking, or at least to check whether there's any new data?

    Sorry if this is a silly question.

    Thank you

    opened by sentriz 3
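Not a silly question - one common approach is to poll the pty's file descriptor with select before reading (PtyProcess exposes the master descriptor as p.fd). A stdlib-only sketch of the idea:

```python
import os
import pty
import select
import time

pid, master_fd = pty.fork()
if pid == 0:
    os.execvp('sh', ['sh', '-c', 'sleep 0.2; echo ready'])

def data_available(fd, timeout=0):
    # select() with a zero timeout says whether read() would block.
    readable, _, _ = select.select([fd], [], [], timeout)
    return bool(readable)

time.sleep(1.0)  # give the child time to produce output
ready = data_available(master_fd)
output = os.read(master_fd, 1024) if ready else b''
os.close(master_fd)
os.waitpid(pid, 0)
```

A positive timeout turns this into a bounded wait, which also plays better with cooperative schedulers than an indefinite blocking read.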
  • improve error handling robustness for os.execvpe

    improve error handling robustness for os.execvpe

    see: https://github.com/pexpect/pexpect/issues/512

    Improved behavior (with this PR): screenshot omitted.

    opened by ryanpetrello 12
Latest release: 0.7.0