Simple, Pythonic remote execution and deployment.

Overview

Welcome to Fabric!

Fabric is a high-level Python (2.7, 3.4+) library designed to execute shell commands remotely over SSH, yielding useful Python objects in return. It builds on top of Invoke (subprocess command execution and command-line features) and Paramiko (SSH protocol implementation), extending their APIs to complement one another and provide additional functionality.

For a high-level introduction, including example code, please see our main project website; or for detailed API docs, see the versioned API website.

Comments
  • Python 3.3 support

    Python 3.3 support

As I understand it, Fabric doesn't work with Python 3.3 yet. Maybe you can add this support now?

Dependencies:

• rudolf - rewritten for Python 3 - https://github.com/pashinin/rudolf
• paramiko - https://github.com/paramiko/paramiko/pull/236

Currently there are the following problems - https://travis-ci.org/pashinin/fabric/builds/17479924

Traceback (most recent call last):
  File "/home/travis/build/pashinin/fabric/fabric/operations.py", line 408, in put
    mode, local_is_path, temp_dir)
  File "/home/travis/build/pashinin/fabric/fabric/sftp.py", line 234, in put
    rattrs = putter(local_path, remote_path)
  File "/home/travis/virtualenv/python3.2/src/paramiko/paramiko/sftp_client.py", line 611, in put
    return self.putfo(fl, remotepath, os.stat(localpath).st_size, callback, confirm)
  File "/home/travis/virtualenv/python3.2/src/paramiko/paramiko/sftp_client.py", line 578, in putfo
    raise IOError('size mismatch in put! %d != %d' % (s.st_size, size))
IOError: size mismatch in put! 0 != 4
    
    opened by pashinin 61
  • Split out non-ssh-dependent features into separate lib

    Split out non-ssh-dependent features into separate lib

    Things are coming to a head and it'd be good to split out Fabric's task execution stuff into its own "third party" tool/library so it can be used/referenced independently of our SSH functionality.

    Right now, anybody wanting to use Fab-as-runner must still install ssh and PyCrypto, which sucks.

    And if we're splitting it between task running and SSH, having "Fabric" be "SSH + dependency on new runner tool" makes much more sense (both re: backwards compatibility, and overall usefulness) than vice versa.

    Speaking of backwards compat, I am marking this 2.0 because it makes more sense to do it at a 2.0 backwards incompat barrier (since at the very least it adds a new install dependency to Fabric), but doing the split in, say, 1.6 or 1.7 should also be quite possible if the timing is better.


    To be clear, this new tool would:

    • Maybe, possibly, but probably not just be us glomping onto an existing tool like Paver
      • Paver tries to do too much and I've never been a big fan of how its API feels
      • Really not aware of any other tools that are at all well known and fit the use case any better
      • EDIT: Baker actually looks half decent, though it's obviously not a perfect match (nothing would be, anything would require some tweaks.)
    • Have a distinct identity from Fabric, while probably remaining "affiliated"
      • Name brainstorm incoming.
    • Encompass the "run Python callables as tasks from the CLI with args" functionality that currently exists within Fabric
    • Likely entail some refactoring of how that machinery works, if only just to make post-ripout integration easier
    • Probably get some of the remaining big task-runner "missing features" implemented right off the bat (really just #452)
    Packaging Support Refactoring 
    opened by bitprophet 55
  • Implement parallelism/thread-safety

    Implement parallelism/thread-safety

    Description

    Fabric currently uses the simplest approach to the most common use case, but as a result is quite naive and not threadsafe, and cannot easily be run in parallel even by an outside agent.

    Rework the execution model and state sharing to be thread-safe (whether by using threadlocals or something else), and if possible go further and actually implement a parallel execution mode users can choose to activate, with threading or multiprocessing or similar.
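The threadlocal option mentioned above can be sketched in pure stdlib terms. Here `env` is a hypothetical stand-in for Fabric's shared state, not Fabric's actual implementation:

```python
import threading
import time

# Hypothetical per-thread "env", standing in for Fabric's shared state.
# Each thread sees its own copy, so concurrent tasks cannot clobber
# one another's host settings.
env = threading.local()

def run_task(host, results):
    env.host = host             # thread-local write
    time.sleep(0.01)            # let other threads interleave
    results[host] = env.host    # still reads what *this* thread set

results = {}
threads = [threading.Thread(target=run_task, args=(h, results))
           for h in ("web1", "web2", "web3")]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

With a plain module-level global instead of `threading.local()`, the interleaving sleep would let one thread's host assignment leak into another's read.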


Morgan Goose has been the lead on this feature and has an in-working-shape branch in his Github fork (link goes to the multiprocessing branch, which is the one you want to use). We hope to merge this into core Fabric for 1.1.


    Current TODO:

    • ~~Anal retentive renaming, e.g. s/runs_parallel/parallel/~~
    • ~~Code formatting cleanup/rearranging~~
    • ~~Mechanics/behavior/implementation double-check~~
    • ~~Linewise output added back in (may make sub-ticket)~~
    • ~~Paramiko situation examined re: dependency on 1.7.7.1+ and thus PyCrypto 2.1+~~
      • ~~Including documenting the change in the install docs if necessary~~
    • ~~Pull in anything useful that Morgan hadn't pushed at time of my merge~~
    • ~~(if not included in previous) Examine logging support and decide if it's worth bumping to next release~~
    • ~~Test, test, test~~

    Originally submitted by Jeff Forcier (bitprophet) on 2009-07-21 at 02:52pm EDT


    Relations

    • Related to #20: Rework output mechanisms
    • Related to #21: Make execution model more robust/flexible
    • Related to #197: Handle running without any controlling tty
    Wart Feature Core 
    opened by bitprophet 53
  • Move process/development/etc docs to static site

    Move process/development/etc docs to static site

    Most of the development and index pages of the current docs would work better as a dedicated static site, where changes can be made without having to copy them to each release branch, etc.

    This will also be a good excuse to rearrange a little bit.

    Specifically, move these to the new site:

    • Everything in index.rst which is not the Documentation section -- so the intro/about, installation, development, and getting help sections
    • The Installation and Development pages themselves
      • Installation is slightly more tied to the code itself than the rest, so it's a maybe -- e.g. we occasionally get patches that add to the install docs (such as the PyPM chunk).
    • Split out the Releases & Support of older releases sections from Development into their own page -- that info is both user and dev related.
    • Probably split existing intro material into front-page and "about the project" content...
    • ...so we can add more to the front-page, such as:
      • Latest stable versions & link to their changelogs
      • Up-front-and-personal links to various things ("Need help? ", "Want to download? ", "Want to contribute? " etc)

    This would then leave the in-code docs covering:

    • Some sort of basic intro, probably pointing to the main site
    • The notes about the docs themselves
    • Tutorial
    • Usage docs
    • FAQ
    • API docs
    • Changelog

    Finally, add this entirely new content to the new site:

    • Blog for announcements/releases - and then ML posts become links here
    • Roadmap page that gets updated periodically with what's ahead
    Support Docs 
    opened by bitprophet 50
  • Implement tunnelling

    Implement tunnelling

    Description

    It should be possible to tunnel all the commands through a single entry point in the network.


    Originally submitted by Anonymous () on 2009-07-27 at 05:22pm EDT


    Relations

    • Related to #275: Consider forking Paramiko
    • Duplicated by #344: Tunnelling SSH over HTTP Proxies
    • Related to #78: Add Tunneling Context to Fab
    • Related to #72: SSH key forwarding
    Feature Network 
    opened by bitprophet 42
  • SSH agent forwarding error

    SSH agent forwarding error

I enabled SSH agent forwarding with fab 1.4 via env.forward_agent = 'True' and all seems fine, except I receive an odd error during code shipment:

out: Authentication response too long: 3577407571
out: fatal: The remote end hung up unexpectedly

Fab does properly forward the agent when pulling code from GitHub, so I'm not sure what's going on.

    Before fab 1.4, I used this function to get around the key forwarding issue:

def sshagent_run(cmd):
    local('ssh -A %s@%s "%s"' % (env.user, env.host, cmd))

    Any idea what's going on?

    Network Bug 
    opened by jravetch 33
  • Improved prompt detection and passthrough

    Improved prompt detection and passthrough

    Description

    Pre-intro

    Apologies for ticket length; the issue at hand is not simple and has many overlapping factors/considerations. Consider skipping down to the bottom of the description, where there is a concise summary that should function as a tl;dr.

    Intro

This ticket used to be partly about prompt detection. We're now of the opinion that detecting prompts beforehand (in order to know when to present users with a Python-level prompt) will always be painful and will never cover 100% of possible use cases. Instead, we feel that actual live interaction with the remote end (i.e. sending local stdin to the other side) will not only sidestep this problem, but be more useful and more in line with user expectations. See #177 for more on the "expect" approach.

    The "live" approach itself has shortcomings, but none significantly worse than manually invoking ssh by hand, and anything in this space is certainly better than the "nothing" we have now.

    Investigation into SSH and terminal behavior

    Mostly because we can't really hope to offer "better" behavior than vanilla ssh does. Plus this presents a learning opportunity -- all of the below behaviors are reflected in Paramiko itself, as one might expect.

    There are basically two issues at stake when performing fully interactive command line calls remotely: the mixing of stdout and stderr, and how stdin is echoed.

    Stdout/stderr

    Stdout and stderr mixing were tested with the following program (which prints 0 through 9 alternating to stdout and stderr, unbuffered).

    #!/usr/bin/env python
    
    import sys
    from itertools import izip, cycle
    
    for pipe, num in izip(cycle([sys.stdout, sys.stderr]), range(10)):
        pipe.write("%s\n" % num)
        pipe.flush()
    

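The test program above is Python 2 (itertools.izip was removed in Python 3); an equivalent Python 3 version for anyone reproducing the experiment today, with the stream alternation factored into a helper:

```python
#!/usr/bin/env python3
import sys
from itertools import cycle

def interleave(n=10):
    """Pair each of 0..n-1 with alternating stream names."""
    return list(zip(cycle(["stdout", "stderr"]), range(n)))

if __name__ == "__main__":
    streams = {"stdout": sys.stdout, "stderr": sys.stderr}
    for name, num in interleave():
        streams[name].write("%s\n" % num)
        streams[name].flush()   # unbuffered, as in the original test
```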
    No pty

When invoked normally (without -t) ssh appears to separate stdout and stderr on at least a line-by-line basis, if not more so, insofar as we see all of stdout first, and then stderr. Printed normally:

    $ ssh localhost "~/test.py"
    0
    2
    4
    6
    8
    1
    3
    5
    7
    9
    

    With streams separated for examination:

    $ ssh localhost "~/test.py" >out 2>err
    $ cat out
    0
    2
    4
    6
    8
    $ cat err
    1
    3
    5
    7
    9
    

    Thus, pty-less SSH is going to look a bit different than the same program interacted with locally.

    With pty

    When invoked with a pty, we get the expected result of the numbers being in order, but the streams are now combined together before we get to them (since all we get is the output from the pseudo-terminal device on the remote end, just as if we were reading a real terminal window). Printed normally:

    $ ssh localhost -t "~/test.py"
    0
    1
    2
    3
    4
    5
    6
    7
    8
    9
    Connection to localhost closed.
    

    Examining the streams:

    $ ssh localhost -t "~/test.py" >out 2>err
    $ cat out
    0
    1
    2
    3
    4
    5
    6
    7
    8
    9
    $ cat err
    Connection to localhost closed.
    

    Thus, the tradeoff here is "correct"-looking output versus the ability to get a distinct stdout and stderr.

    Echoing of stdin

    No pty

    Without a pty, ssh must echo the user's stdin wholesale (or hide it entirely, though there do not appear to be options for this) and this means that password prompts become unsafe. Sudo without a pty:

    $ ssh localhost "sudo ls /"
    Password:mypassword
    
    .DS_Store
    .Spotlight-V100
    .Trashes
    .com.apple.timemachine.supported
    Applications
    Developer
    [...]
    

    Note that the user's password, typed to stdin, shows up in the output. For thoroughness, let's examine what went to which stream:

    $ ssh localhost "sudo ls /" >out 2>err
    mypassword
    $ cat out
    .DS_Store
    .Spotlight-V100
    .Trashes
    .com.apple.timemachine.supported
    Applications
    Developer
    [...]
    $ cat err
    Password:
    

    As expected, the user's stdin didn't end up in the streams from the remote end (ergo it is the local terminal echoing stdin, and not the remote end) and the password prompt showed up in stderr.

    With pty

    Here's the same sequence but with -t enabled, forcing a pty:

    $ ssh -t localhost "sudo ls /"
    Password:
    .DS_Store               Applications
    .Spotlight-V100         .Trashes
    Developer               [...]
    Connection to localhost closed.
    

    Note that in addition to not echoing the user's password, ls picked up on the terminal being present and altered its behavior. This is orthogonal to our research but is still a useful thing to keep in mind.

    As before, use of pty means that all output now goes into stdout, leaving stderr empty save for local output from the ssh program itself:

    $ ssh -t localhost "sudo ls /" >out 2>err
    $ cat out
    Password:
    .DS_Store               Applications
    .Spotlight-V100         .Trashes
    Developer               [...]
    $ cat err
    Connection to localhost closed.
    

    And as with the previous invocation, our password never shows up, even on our local terminal.

    Non-hidden output

    Finally, as a sanity test to ensure that non-password stdin is echoed by the remote pty when appropriate, we remove a (previously created) test file with rm's "are you sure" option enabled:

    $ ssh -t localhost "rm -i /tmp/testfile"
    remove /tmp/testfile? y
    Connection to localhost closed.
    

    And proof that it is the remote end doing the echoing -- our stdin shows up in the stdout from the remote end:

    $ ssh -t localhost "rm -i /tmp/testfile" >out 2>err
    $ cat out
    remove /tmp/testfile? y
    $ cat err
    Connection to localhost closed.
    

    Conclusion

    As seen above, there are a number of different behaviors one may encounter when using, or not using, a pty. The tradeoff being, essentially, access to distinct stdout and stderr streams (but garbled output and blanket echo of stdin) versus a more shell-like behavior (but without the ability to tell the remote stderr from stdout).

    In our experience, the ssh program defaults to not using a pty, but the average Fabric user is probably best served by enforcing one. New users are more likely to expect "shell-like" behavior (such as proper multiplexing of stdout and stderr, and hiding of password prompt stdin) and Fabric already defaults to a "shell-like" behavior insofar as it wraps commands in a login shell.

    Summation of early comments

    A summary of findings so far (contains up through comment 16):

    1. Python's default I/O buffering is typically line-by-line (linewise). I/O is not typically printed to the destination until a line ending is encountered. This applies both to input and output. (It's also why fabric.utils.fastprint was created -- one must manually flush output to e.g. stdout to get things like progress bars to show up reliably.)
    2. Fab's current mode of I/O is also linewise, partly because of point 1, and partly to allow printing of stdout and stderr streams independently. As a side effect, partial line output such as prompts will not be displayed to the Fabric user's console.
    3. As seen above, SSH's default buffering mode is mostly linewise, insofar as the default non-pty behavior mixes the two streams up but on a line by line basis, but it is still capable of presenting partial lines (prompts) when necessary.
    4. Because we cannot discern a reliable way of printing less-than-a-line output without moving to bytewise buffering, we'll need to switch to printing every byte as we receive it, in order for the user to see things such as prompts (or more complicated output, e.g. curses apps or things like top).
      • If/when the secret of ssh's print buffering is found, use that algorithm instead.
    5. Forcing Python's stdin to be bytewise requires the use of the Unix-only termios and tty libraries, but I believe there may be Windows alternatives. For now, we plan to focus on the best Unix-oriented approach and will implement Windows compatibility later if possible. (Sorry, Windows folks.)
    6. Obtaining remote data bytewise is a bit easier insofar as data from the client isn't linewise. However, shortening the size of the buffer throws a wrench in Fabric's current method of detecting whether there is no more output to be had, so we are currently experimenting with other approaches, specifically select.select (which, yes, is another Windows compatibility pain point.)
      • Any new solution should also hopefully obviate all the annoying, painful, error-prone issues with the current output_thread I/O loop, insofar as line remainders and such are concerned.
• Ideally, as with select, this should also remove the need for threads entirely, which will make it easier to fully parallelize Fabric in the future, and kill another entire class of occasional problems.
    7. With bytewise output, we run into problems where the remote stdout and stderr get mixed up character-by-character (e.g. the last line of regular output can become garbled up with a "following" line containing a prompt, since many prompts print to stderr). Until/unless we can figure out how the regular SSH client accomplishes its "linewise but not really" buffering, the only way to avoid this problem is to set set_combine_stderr to True.
      • We could, and probably should, offer this as a setting in case users have need for it.
    8. And without using a pty, we are forced to manually echo all stdin, just as how vanilla SSH does (see previous major section). This then presents issues with password prompts becoming insecure.
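The select.select idea from point 6 can be sketched with a local pipe standing in for the SSH channel (Unix-only; an illustration of the pattern, not Fabric's actual I/O loop):

```python
import os
import select

def drain(fd, timeout=0.1):
    """Read everything currently available on fd using select(),
    instead of waiting for a line ending -- so partial lines such
    as prompts are delivered too."""
    chunks = []
    while True:
        ready, _, _ = select.select([fd], [], [], timeout)
        if not ready:
            break                  # nothing more to read right now
        data = os.read(fd, 1024)
        if not data:
            break                  # EOF: writer closed its end
        chunks.append(data)
    return b"".join(chunks)

r, w = os.pipe()
os.write(w, b"Password:")          # a partial line -- no trailing newline
os.close(w)
output = drain(r)                  # still delivered, unlike linewise reads
```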

    Putting it all together

    So, here's the planned TODO for this issue, given all of the above and the current state of the feature branch (namely, hardcoded bytewise stdin, skipping out on the output threads in favor of select, and printing prefixes after each newline):

    1. Abstract out the currently-implemented stdin manipulation; it essentially requires a try/finally and I think it'd be handy to have as a context manager or similar.
      • Possibly also make it configurable, since bytewise stdin is not absolutely required much of the time. Still feel it should be enabled by default, though.
      • Offer an option to allow suppression of stdin echoing, just because.
    2. Expose set_combine_stderr as a user-facing option. Default should be on -- not too many people need the distinct stderr access, and with it off, output is very likely to be garbled unexpectedly. It's an advanced user sort of thing.
    3. Change the pty option to default to True (currently False). This will provide the smoothest user experience, and since we're combining the streams by default anyway, it's a no-brainer.
    4. Decide what to do with output_thread's password detection and response. This may become more difficult with bytewise buffering, and was originally implemented to get around the lack of stdin.
      • Drop the feature entirely, since users can now enter prompts interactively. Dropping features isn't great, though.
      • Repackage it as a "password memory" feature (it needs an overhaul anyways). Maybe as part of #177.
      • Keep it entirely as-is, and just use the output capturing as the read buffer in place of the current approach (checking the as-big-as-possible chunk from the remote end). Possibly quickest. We won't be able to hide the prompt itself from user eyes anymore (that's the biggest reason #80 can't work) but that's not required, just nice.
    5. Figure out if it's possible to omit printing the output prefix in lines where the user's input is being echoed by the remote end. Currently this results in said prefix showing up mid-line in some prompt situations (usually where the echoed stdin is the first data to show up in the stdout buffer, though it could also be a problem once the user hits Enter to submit the prompt too).
      • Might be able to conditionally hide prefix in cases where the byte coming in to stdout is the same as the last byte seen on stdin, but that is messy (e.g. output coming in long after the user is done typing -- do we add time memory? how much of one? etc)
      • Depending on exactly how it shakes out, this may not even be an issue for anything but the case where the typed input's echo is the first stdout. will have to see.
    6. Add an interact_with that makes use of invoke_shell, assuming it can work seamlessly with the final exec_command based solution without code duplication.
    7. Come up with Windows-compatible solutions, if possible, for all Unix-isms used in this effort.
    8. Note in the parallel-related ticket(s) that this solution will make it more difficult for a parallel execution setup to function, insofar as bytewise-vs-linewise output is concerned. A truly parallel execution would be incredibly confusing even on a line-by-line basis, however, so a better solution is likely to be needed anyways.
9. Reorganize operations.py and network.py -- nuke old outdated code, shuffle around new code; it should ideally live in another module that is neither network nor operations (?)
    10. Document all of the above changes thoroughly, and attend to related tickets re: tutorial etc.
      • Update changelog (the pty default is now backwards incompatible!)
      • Make sure users know they need to deactivate both pty and combine-streams options in order to get distinct streams.
      • Update skeleton usage docs re: interactivity
      • Search for mentions of use of the stderr attribute and update them since it's not populated by default anymore

    Originally submitted by Jeff Forcier (bitprophet) on 2009-07-20 at 05:24pm EDT

    Relations

    • Related to #73: Once Git can be used, update tutorial to use it.
    • Duplicated by #49: Fabric does not prompt for input when the host does.
    • Related to #80: See whether paramiko.SSHClient.invoke_shell + paramiko.Channel.send is feasible
    • Duplicated by #153: Hangs When Encountering an Invalid Security Certificate
    • Related to #177: Investigate pexpect/expect integration
    • Related to #20: Rework output mechanisms
    • Related to #182: New I/O mechanisms print "extra" blank lines on \r
    • Related to #183: Prompts appear to kill capturing (now with bonus test server!)
    • Related to #190: Sudo prompt mixed up a bit
    • Related to #192: Per-user/host password memory (was: Possible issue in password memory)
    • Related to #193: Terminal resizing support/detection
    • Related to #196: open_shell() doesn't do readline too well
    • Related to #163: Formattable output prefix.
    • Related to #197: Handle running without any controlling tty
    • Related to #204: Better in-thread exception handling
    • Related to #209: Some password prompts no longer specify the user
    • Related to #212: Hitting Ctrl-C during I/O still requires shell reset
    • Related to #219: Blank lines after silent commands
    • Related to #223: Full stack tests choking on passphrase-vs-password issue

    Closed as Done on 2010-08-06 at 11:22pm EDT

    Wart Feature Network 
    opened by bitprophet 33
  • Recursive put() and get()

    Recursive put() and get()

    Description

The current functionality of put and get is limited by only being able to place files in existing directories, and further (because of glob limitations), doesn't support deep recursion. This is fairly annoying for deployments, so I suggest using a module that implements scp-like behavior via Paramiko. Here is one I found: http://bazaar.launchpad.net/~jbardin/paramiko/paramiko_scp/annotate/500?file_id=scp.py-20081117202350-5q0ozjv6zz9ww66y-1

    put() and get() could just become thin wrappers for this module (with all the fabric goodness in state-keeping).
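The requested scp-style behavior splits into a local recursive walk plus per-file transfers. A sketch of the walk half, where `upload` and `mkdir` are hypothetical callbacks a real SFTP/SCP backend (such as the linked paramiko_scp module) would provide:

```python
import os

def put_recursive(local_dir, remote_dir, upload, mkdir):
    """Walk local_dir and mirror its tree under remote_dir.
    upload(local_path, remote_path) and mkdir(remote_path) are
    hypothetical transport callbacks, not Fabric API."""
    for dirpath, dirnames, filenames in os.walk(local_dir):
        rel = os.path.relpath(dirpath, local_dir)
        rdir = remote_dir if rel == "." else os.path.join(remote_dir, rel)
        mkdir(rdir)   # ensure the remote directory exists first
        for name in filenames:
            upload(os.path.join(dirpath, name), os.path.join(rdir, name))
```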


    Originally submitted by Erich Heine (sophacles) on 2010-02-06 at 01:59pm EST

    Relations

    • Related to #121: Make put() more flexible re: file modes
    • Related to #156: put() will try to chmod, even though this is not always possible. There is no way to turn this off
    • Related to #61: Extend cd to work for get/put
    • Related to #226: Update test server to handle sftp
    • Related to #79: Optional host prefix for get
    • Duplicated by #28: Allow put-style globbing with get
    • Related to #217: Allow file handling methods (put/get/etc) to handle file-like objects and/or StringIO
    • Related to #245: Consider breaking out the cd() behavior of local into lcd()
    • Related to #274: See if it makes sense to have get/put return values
    • Related to #279: Prune recursive kwarg from get()/put()
    • Related to #2: Make put sudo-able

    Closed as Done on 2011-02-25 at 10:08pm EST

    Feature 
    opened by bitprophet 31
  • Use decorator to define tasks

    Use decorator to define tasks

    Description

Consider using Python decorator(s) to define the tasks in a fabfile (instead of treating all the callable objects as tasks). While I prefer this explicit way, decorators are also commonly used in projects like Django (http://www.djangoproject.com) and Paver (http://www.blueskyonmars.com/projects/paver/) to define published/callable functions. I'm also aware of workarounds like importing only the non-callable objects into the namespace, or prefixing function names with an underscore, but found them less pythonic.

Using a decorator, it would also be possible to define tasks like 'default', à la Paver.

    Example definition:

    from fabric.api import local
    
    @task
    def mytask():
      helperfunc()
      local('foobar')
    
@default
@task
def mydefaulttask():
  # run this when no command is given
  pass

def helperfunc():
  # do this and that; not to be used directly
  pass
    
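The mechanics being requested can be sketched as a small registry (an illustration of the idea, not the implementation Fabric/Invoke eventually shipped, though they adopted an @task decorator of this general shape):

```python
# Minimal task registry: @task records a function by name, and
# @default marks which task runs when no command is given.
TASKS = {}
DEFAULT = None

def task(func):
    TASKS[func.__name__] = func
    return func

def default(func):
    global DEFAULT
    DEFAULT = func
    return func

@task
def deploy():
    return "deployed"

@default
@task
def status():
    return "ok"
```

A CLI front-end would then look up its positional argument in TASKS, falling back to DEFAULT when none is given.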

    Originally submitted by **** (jmu) on 2009-11-01 at 12:55pm EST

    Relations

    • Duplicated by #126: Allow developer to specify which functions are tasks
    • Duplicated by #248: fab --list should only show functions, not classes
    • Related to #286: Use call instead of init for classes that define one
    • Related to #297: Object-oriented hosts/roles/collections
    • Related to #4: Allow for storing/using metadata about hosts
    • Related to #21: Make execution model more robust/flexible
    • Related to #56: Add namespacing or dot notation

    Closed as Done on 2011-06-09 at 05:49pm EDT

    Wart Feature Core 
    opened by bitprophet 31
  • Support full logging to file

    Support full logging to file

    Description

    Outside of any further modifications to the stdout/stderr output controls, it would be very handy to log everything to a file; this would give users another alternate channel for debugging their fabfiles without having to wrestle with what they see at runtime.

    Main question is whether to output things as if debug were set, by default. Thinking to leave it off at first, and have a simple flag/option to turn it on, perhaps --log-debug.

    Where to log to? By default I'd say user's cwd, though that can get annoying (like "pipturds", i.e. pip-log.txt files that pip drops everywhere.) Unfortunately there's no other great standard location, so perhaps turn logging-to-file off by default? Either way, allow override, say via --log-location or similar.

    Unsure whether it makes any sense to utilize the logging module's concept of levels; the only place I can see it being useful at all is for the debug stuff, but we currently think of debugging as modifying output instead of adding to it -- which doesn't mesh with how logging works. So possibly best to just stick everything in, say, INFO for now, then tweak later if necessary.


    Potential modules:

    • stdlib logging
      • Pros: stdlib, well known/documented
      • Cons: kind of byzantine/complicated to use
    • Logbook
      • Pros: Armin
    • Twiggy
    • Others?
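With the stdlib logging option above, the sketch would amount to something like the following, where --log-debug simply flips the level; the function and file names are illustrative:

```python
import logging

def setup_file_log(path, debug=False):
    # Everything Fabric prints would also be handed to this logger;
    # per the ticket, everything lands at INFO unless debug is on.
    logging.basicConfig(
        filename=path,
        level=logging.DEBUG if debug else logging.INFO,
        format="%(asctime)s %(levelname)s %(message)s",
        force=True,  # Python 3.8+: replace any pre-existing handlers
    )
    return logging.getLogger("fab")
```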

    Originally submitted by Jeff Forcier (bitprophet) on 2009-09-06 at 10:57am EDT

    Relations

    • Related to #101: ANSI Color support
    • Duplicated by #135: All output should be sent through logging module
    • Related to #151: Make fabfile print() statements controllable via output controls
    • Related to #163: Formattable output prefix.
    • Related to #244: Add additional, verbose-only output when connecting
    • Related to #71: Consider adding "global"/per-task capture of run/sudo/local stdout/stderr
    • Duplicated by #333: Use logger() instead of print()
    Feature Core 
    opened by bitprophet 31
  • Make reconnection more robust (was re: reboot() specifically)

    Make reconnection more robust (was re: reboot() specifically)

    Description

    Use a more robust reconnection/sleep mechanism than "guess how long a reboot takes and sleep that long". Possibilities:

    • Try reconnecting after, say, 30 seconds, with a short timeout value, then loop every, say, 10 seconds until we reconnect
    • Just give user a prompt, within a loop, so they can manually whack Enter to try reconnecting
    • Stick with the manual sleep timer entry, and just ensure it is explicitly documented, i.e. "we highly recommend figuring out how long your system takes to reboot before using this function"
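The first option can be sketched with an injectable probe and clock; `probe` is a hypothetical callable that returns True once SSH answers again:

```python
import time

def wait_for_reboot(probe, initial=30, interval=10, attempts=20,
                    sleep=time.sleep):
    """Sleep `initial` seconds, then retry `probe` every `interval`
    seconds until it succeeds or `attempts` runs out."""
    sleep(initial)
    for _ in range(attempts):
        if probe():
            return True
        sleep(interval)
    return False
```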

    Originally submitted by Jeff Forcier (bitprophet) on 2009-07-21 at 11:38am EDT

    Relations

    • Related to #201: Ambiguous sudo call in reboot function
    Feature Contrib 
    opened by bitprophet 30
  • fabric2 connection doesn't support multi-thread

    fabric2 connection doesn't support multi-thread

The following code tries to submit 3 long-running commands to a remote server at the same time, using the same connection:

    from fabric import Connection, Result
    from concurrent.futures import ThreadPoolExecutor
    
    c = Connection('[email protected]')
    
    with ThreadPoolExecutor(max_workers=3) as executor:
      for i in range(3):
        executor.submit(lambda: c.run('sleep 5 && echo done'))
    

And the following error is thrown:

    Unknown exception: object of type 'NoneType' has no len()
    Traceback (most recent call last):
      File "/home/henry/.local/lib/python3.9/site-packages/paramiko/transport.py", line 2164, in run
        handler(self.auth_handler, m)
      File "/home/henry/.local/lib/python3.9/site-packages/paramiko/auth_handler.py", line 376, in _parse_service_accept
        m.add_string(self.username)
      File "/home/henry/.local/lib/python3.9/site-packages/paramiko/message.py", line 274, in add_string
        self.add_int(len(s))
    TypeError: object of type 'NoneType' has no len()
    

It works well when the 3 jobs are submitted one by one. It looks like fabric2 doesn't handle multi-threading properly. Is there some kind of built-in concurrency support in fabric2 that I am missing? Or do users have to handle it themselves? Thank you.
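Paramiko clients are generally not safe to share across concurrent authentications, so the usual workaround is one connection per worker. A stdlib sketch of that pattern, with a stand-in class instead of a real fabric.Connection:

```python
from concurrent.futures import ThreadPoolExecutor

class FakeConnection:
    """Stand-in for fabric.Connection, to show the one-per-task shape."""
    def __init__(self, host):
        self.host = host
    def run(self, cmd):
        return "%s: %s" % (self.host, cmd)

def job(host, cmd):
    # Create the connection *inside* the worker instead of sharing one
    # object across threads.
    conn = FakeConnection(host)
    return conn.run(cmd)

with ThreadPoolExecutor(max_workers=3) as executor:
    futures = [executor.submit(job, "host1", "sleep 5 && echo done")
               for _ in range(3)]
    results = [f.result() for f in futures]
```

For the many-hosts case, Fabric 2.x also ships fabric.group.ThreadingGroup, which runs a command across several hosts concurrently with one connection each.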

    opened by link89 0
  • fabric.api.sudo() returning empty stderr on error condition

    fabric.api.sudo() returning empty stderr on error condition

Issue

Hey guys, I am new to Fabric. I am running a command as res = fabric.api.sudo(f"pip install {something}", user=user). I expect the command to return stderr or abort when the package/version is not found, i.e. when pip install fails. However, I am getting res.return_code=0 and res.stderr empty on an error condition. I do get the ERROR message on stdout. Is this expected behavior? How can I make stderr carry the error and get the correct return_code?

Version

Using Fabric3, version 1.14.post1.

    Any help would be great, thanks.
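    Two things may be worth checking here: Fabric 1.x merges stderr into stdout by default (the `combine_stderr` env setting defaults to True, and pty use forces the merge), and recent pip versions often report failures on stdout anyway; `with settings(warn_only=True, combine_stderr=False):` is the usual first experiment. The stdout/stderr/return-code distinction itself can be sketched with the stdlib:

```python
import subprocess
import sys

# A child process that writes to stderr and exits non-zero, much like
# a failing `pip install`.
child = "import sys; sys.stderr.write('no matching distribution\\n'); sys.exit(1)"

res = subprocess.run(
    [sys.executable, "-c", child],
    capture_output=True,  # keep stdout and stderr separate
    text=True,
)
# res.returncode, res.stdout, and res.stderr are now all distinct;
# a merged-stream setup would instead fold the message into stdout.
```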

    opened by ICUMD 0
  • Connecting to a keyless linux remote

    Connecting to a keyless linux remote

    Hi, I have searched for similar issues and suggestions but with no luck so far, so I am submitting this to the dev community.

    I have an embedded device based on Linux but with NO password at all (ssh does not even show a prompt; the root user has no password set). This may seem crazy, but in industrial systems it is useful, at least before a system is deployed. I have an application based on Python + Fabric that can connect gracefully to a system with a password, but it gives an "Authentication failed" error when the remote has no password configured. Can this be solved somehow? SSH can handle this, and I suspect Fabric has some option for it, but I couldn't find one; setting password: '' does not work.
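    Fabric forwards connect_kwargs straight to Paramiko's SSHClient.connect, which has no first-class "no authentication" mode. A sketch of the usual knobs, with host and user as placeholders and the network-touching calls left commented out:

```python
# Disable key and agent lookups so Paramiko doesn't fail early while
# attempting key auth; "device.local" and "root" are placeholders.
connect_kwargs = {
    "look_for_keys": False,
    "allow_agent": False,
}

# from fabric import Connection
# c = Connection("root@device.local", connect_kwargs=connect_kwargs)
# c.run("uname -a")

# If the server really advertises the "none" auth method, the
# lower-level route is Paramiko's Transport.auth_none, which Fabric
# does not expose directly:
#
# import paramiko
# t = paramiko.Transport(("device.local", 22))
# t.start_client()
# t.auth_none("root")
```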

    Python 3.9 Fabric 2.7.1

    Thank you

    opened by LOGUNIVPM 0
  • Remote: pty-mode passes through TERM env-var

    Remote: pty-mode passes through TERM env-var

    Currently, setting TERM in Remote's env doesn't actually pass through TERM to the remote. Instead you have to hack around it by doing something like prepending the command with env TERM=<your-term>.

    This change passes through Runner's TERM env-var if given. If not given and TERM is in the local env-vars, it is used instead.
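    The prepend-`env` workaround described above can be wrapped in a small helper; `with_term` is a hypothetical name, not part of Fabric's API:

```python
import os
import shlex

def with_term(cmd, term=None):
    """Prefix cmd so the remote shell sees the desired TERM.

    Falls back to the local TERM env-var, then to "xterm".
    """
    term = term or os.environ.get("TERM", "xterm")
    return "env TERM={} {}".format(shlex.quote(term), cmd)

# Real usage would then be something like:
# c.run(with_term("htop"), pty=True)
```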

    opened by jevinskie 0
  • AttributeError 'inspect.getargspec' on Python 3.11

    AttributeError 'inspect.getargspec' on Python 3.11

    Simply declaring a fabric.task on Python 3.11 crashes with an AttributeError:

    $ py -3.11 -m pip-run fabric -- -c 'from fabric import task; task(lambda c: None)'
    Collecting fabric
      Using cached fabric-2.7.1-py2.py3-none-any.whl (53 kB)
    Collecting invoke<2.0,>=1.3
      Using cached invoke-1.7.3-py3-none-any.whl (216 kB)
    Collecting paramiko>=2.4
      Using cached paramiko-2.12.0-py2.py3-none-any.whl (213 kB)
    Collecting pathlib2
      Using cached pathlib2-2.3.7.post1-py2.py3-none-any.whl (18 kB)
    Collecting bcrypt>=3.1.3
      Using cached bcrypt-4.0.1-cp36-abi3-macosx_10_10_universal2.whl (473 kB)
    Collecting cryptography>=2.5
      Using cached cryptography-38.0.4-cp36-abi3-macosx_10_10_universal2.whl (5.4 MB)
    Collecting pynacl>=1.0.1
      Using cached PyNaCl-1.5.0-cp36-abi3-macosx_10_10_universal2.whl (349 kB)
    Collecting six
      Using cached six-1.16.0-py2.py3-none-any.whl (11 kB)
    Collecting cffi>=1.12
      Using cached cffi-1.15.1-cp311-cp311-macosx_11_0_arm64.whl (174 kB)
    Collecting pycparser
      Using cached pycparser-2.21-py2.py3-none-any.whl (118 kB)
    Installing collected packages: invoke, six, pycparser, bcrypt, pathlib2, cffi, pynacl, cryptography, paramiko, fabric
    Successfully installed bcrypt-4.0.1 cffi-1.15.1 cryptography-38.0.4 fabric-2.7.1 invoke-1.7.3 paramiko-2.12.0 pathlib2-2.3.7.post1 pycparser-2.21 pynacl-1.5.0 six-1.16.0
    Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "/var/folders/sx/n5gkrgfx6zd91ymxr2sr9wvw00n8zm/T/pip-run-88oqhs31/fabric/tasks.py", line 71, in task
        return invoke.task(*args, **kwargs)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/var/folders/sx/n5gkrgfx6zd91ymxr2sr9wvw00n8zm/T/pip-run-88oqhs31/invoke/tasks.py", line 331, in task
        return klass(args[0], **kwargs)
               ^^^^^^^^^^^^^^^^^^^^^^^^
      File "/var/folders/sx/n5gkrgfx6zd91ymxr2sr9wvw00n8zm/T/pip-run-88oqhs31/fabric/tasks.py", line 21, in __init__
        super(Task, self).__init__(*args, **kwargs)
      File "/var/folders/sx/n5gkrgfx6zd91ymxr2sr9wvw00n8zm/T/pip-run-88oqhs31/invoke/tasks.py", line 76, in __init__
        self.positional = self.fill_implicit_positionals(positional)
                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/var/folders/sx/n5gkrgfx6zd91ymxr2sr9wvw00n8zm/T/pip-run-88oqhs31/invoke/tasks.py", line 167, in fill_implicit_positionals
        args, spec_dict = self.argspec(self.body)
                          ^^^^^^^^^^^^^^^^^^^^^^^
      File "/var/folders/sx/n5gkrgfx6zd91ymxr2sr9wvw00n8zm/T/pip-run-88oqhs31/invoke/tasks.py", line 153, in argspec
        spec = inspect.getargspec(func)
               ^^^^^^^^^^^^^^^^^^
    AttributeError: module 'inspect' has no attribute 'getargspec'. Did you mean: 'getargs'?
    

    The issue may or may not have been fixed in invoke, but if so, it's broken on the released versions of invoke required by fabric.
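    The failing call is inspect.getargspec, deprecated since Python 3.0 and finally removed in 3.11, which invoke 1.x still uses. On modern Pythons the near-drop-in replacement is inspect.getfullargspec (or inspect.signature); a quick sketch:

```python
import inspect
import sys

def demo(c, name, count=1):
    pass

# getfullargspec covers everything getargspec did, plus keyword-only
# arguments and annotations.
spec = inspect.getfullargspec(demo)

# On 3.11+ the old function is simply gone from the module.
legacy_present = hasattr(inspect, "getargspec")
```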

    opened by jaraco 1
  • Option to use rsync_project with two local directories?

    Option to use rsync_project with two local directories?

    The rsync_project() function in project.py requires that the remote host be reached over ssh. My project requires two modes: one where paths are copied via ssh (working) and one where the source and destination are both local directories.

    It looks like the @needs_host decorator and the cmd = "rsync %s %s:%s %s" % (options, remote_prefix, remote_dir, local_dir) code force remote_dir to be on another machine via ssh.

    It would be great if I could pass another bool that would remove the : from the string so that local -> local could be supported.
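    The request amounts to making the remote host optional when the rsync command line is built; a hypothetical sketch of that idea (`build_rsync_cmd` is not rsync_project's real signature):

```python
def build_rsync_cmd(local_dir, remote_dir, host=None, options="-avz"):
    """Build an rsync command line.

    With host=None the ':' prefix is dropped, so both paths are
    treated as local directories.
    """
    dest = "{}:{}".format(host, remote_dir) if host else remote_dir
    return "rsync {} {} {}".format(options, local_dir, dest)
```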

    opened by sifive-benjamin-morse 0
Owner
Fabric
Pythonic SSH library & related projects
pyinfra automates infrastructure super fast at massive scale. It can be used for ad-hoc command execution, service deployment, configuration management and more.

pyinfra automates/provisions/manages/deploys infrastructure super fast at massive scale. It can be used for ad-hoc command execution, service deployme

Nick Barrett 2.1k Dec 29, 2022
Cobbler is a versatile Linux deployment server

Cobbler Cobbler is a Linux installation server that allows for rapid setup of network installation environments. It glues together and automates many

Cobbler 2.4k Dec 24, 2022
Get Response Of Container Deployment Kube with python

get-response-of-container-deployment-kube Overview: get-response-of-container-deployment-kube is intended for container deployment systems in, for example, edge computing environments, in which the deploy-source terminal … the deploy-target's container deploy

Latona, Inc. 3 Nov 5, 2021
CTF infrastructure deployment automation tool.

CTF infrastructure deployment automation tool. Focus on the challenges. Mirrored from

Fake News 1 Apr 12, 2022
MLops tools review for execution on multiple cluster types: slurm, kubernetes, dask...

MLops tools review focused on execution using multiple cluster types: slurm, kubernetes, dask...

null 4 Nov 30, 2022
Simple ssh overlay for easy, remote server management written in Python GTK with paramiko

Simple "ssh" overlay for easy, remote server management written in Python GTK with paramiko

kłapouch 3 May 1, 2022
SSH tunnels to remote server.

Author: Pahaz Repo: https://github.com/pahaz/sshtunnel/ Inspired by https://github.com/jmagnusson/bgtunnel, which doesn't work on Windows. See also: h

Pavel White 1k Dec 28, 2022
Remote Desktop Protocol in Twisted Python

RDPY Remote Desktop Protocol in twisted python. RDPY is a pure Python implementation of the Microsoft RDP (Remote Desktop Protocol) protocol (client a

Sylvain Peyrefitte 1.6k Dec 30, 2022
DAMPP (gui) is a Python based program to run simple webservers using MySQL, Php, Apache and PhpMyAdmin inside of Docker containers.

DAMPP (gui) is a Python based program to run simple webservers using MySQL, Php, Apache and PhpMyAdmin inside of Docker containers.

Sehan Weerasekara 1 Feb 19, 2022
A Simple script to hunt unused Kubernetes resources.

K8SPurger A Simple script to hunt unused Kubernetes resources. Release History Release 0.3 Added Ingress Added Services Account Adding RoleBindding Re

Yogesh Kunjir 202 Nov 19, 2022
Prometheus exporter for AWS Simple Queue Service (SQS)

Prometheus SQS Exporter Prometheus exporter for AWS Simple Queue Service (SQS) Metrics Metric Description ApproximateNumberOfMessages Returns the appr

Gabriel M. Dutra 0 Jan 31, 2022
A simple python application for running a CI pipeline locally This app currently supports GitLab CI scripts

Simple Local CI Runner A simple python application for running a CI pipeline locally This app currently supports GitLab CI scripts ⚙️ Setup Inst

Tom Stowe 0 Jan 11, 2022
Deploy a simple Multi-Node Clickhouse Cluster with docker-compose in minutes.

Simple Multi Node Clickhouse Cluster I hate those single-node clickhouse clusters and manually installation, I mean, why should we: Running multiple c

Nova Kwok 11 Nov 18, 2022
A tool to convert AWS EC2 instances back and forth between On-Demand and Spot billing models.

ec2-spot-converter This tool converts existing AWS EC2 instances back and forth between On-Demand and 'persistent' Spot billing models while preservin

jcjorel 152 Dec 29, 2022
Iris is a highly configurable and flexible service for paging and messaging.

Iris Iris core, API, UI and sender service. For third-party integration support, see iris-relay, a stateless proxy designed to sit at the edge of a pr

LinkedIn 715 Dec 28, 2022
Let's learn how to build, release and operate your containerized applications to Amazon ECS and AWS Fargate using AWS Copilot.

Welcome to AWS Copilot Workshop In this workshop, you'll learn how to build, release and operate your containerised applications to Amazon ECS and

Donnie Prakoso 15 Jul 14, 2022
Software to automate the management and configuration of any infrastructure or application at scale. Get access to the Salt software package repository here:

Latest Salt Documentation Open an issue (bug report, feature request, etc.) Salt is the world’s fastest, most intelligent and scalable automation engi

SaltStack 12.9k Jan 4, 2023