Cobalt Strike C2 reverse proxy that fends off Blue Teams, AVs, EDRs and scanners through packet inspection and Malleable profile correlation

Overview

RedWarden - Flexible CobaltStrike Malleable Redirector

(previously known as proxy2's malleable_redirector plugin)

Let's raise the bar of C2 redirectors' IR resiliency, shall we?


The Red Teaming business has seen several great ideas on how to combat incident responders and misdirect them, while offering a resilient C2 redirector network at the same time.

This work combines many of those great ideas into one lightweight utility that mimics Apache2 at its roots: a simple HTTP(S) reverse proxy.

Combining an understanding of Malleable C2 profiles, knowledge of known-bad IP address pools, and the flexibility to easily add new inspection and misrouting logic results in a crafty repellent for IR inspections.


Should any invalid inbound packet reach RedWarden, you can redirect, reset or just proxy it away!

Abstract

This program acts as an HTTP/HTTPS reverse proxy that imposes several restrictions upon inbound C2 HTTP requests, selecting which packets to direct to the Teamserver and which to drop, similarly to the .htaccess file restrictions mandated by Apache2's mod_rewrite.

RedWarden was created to solve the problem of IR/AV/EDRs/Sandboxes evasion on the C2 redirector layer. It's intended to supersede classical Apache2 + mod_rewrite setups used for that purpose.

Features:

  • Malleable C2 Profile parser able to validate inbound HTTP/S requests strictly against the malleable contract and drop non-conforming packets in case of violation (Malleable Profiles 4.0+ with variants are covered)
  • Ability to unfilter/repair unexpected and unwanted HTTP headers added by interim systems such as proxies and caches (think CloudFlare) in order to conform to a valid Malleable contract.
  • Integrated, curated, massive blacklist of IPv4 pools and ranges known to be associated with IT Security vendors
  • Grepable output log entries (in both Apache2 combined access log and custom RedWarden formats) useful for tracking peer connectivity events/issues
  • Ability to query a connecting peer's IPv4 address against IP Geolocation/whois information and confront that with predefined regular expressions, to rule out peers connecting from outside of trusted organizations/countries/cities etc.
  • Built-in replay-attack mitigation, enforced by logging accepted requests' MD5 hashsums into a locally stored SQLite database and rejecting requests seen previously.
  • Allows defining ProxyPass statements to pass requests matching a specific URL on to other hosts
  • Support for multiple Teamservers
  • Support for many reverse-proxying hosts/redirection sites served in a randomized order, which lets you load-balance traffic or build more versatile infrastructures
  • Can repair HTTP packets according to the expected malleable contract in case some of the headers were corrupted in transit
  • Sleepless nights spent troubleshooting "why my Beacon doesn't work over CloudFlare/CDN/Domain Fronting" are over now, thanks to detailed verbose HTTP(S) request/response logs
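The replay-attack mitigation bullet above (MD5 hashes of accepted requests recorded in SQLite) can be sketched roughly like this; the table layout and helper function are hypothetical illustrations, not RedWarden's actual code:

```python
import hashlib
import sqlite3

def is_replay(db: sqlite3.Connection, raw_request: bytes) -> bool:
    """Return True if this exact request was already accepted before."""
    digest = hashlib.md5(raw_request).hexdigest()
    cur = db.execute("SELECT 1 FROM seen_requests WHERE md5 = ?", (digest,))
    if cur.fetchone() is not None:
        return True  # previously accepted -> treat as a replay attempt
    db.execute("INSERT INTO seen_requests (md5) VALUES (?)", (digest,))
    db.commit()
    return False

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE seen_requests (md5 TEXT PRIMARY KEY)")

req = b"GET /jquery-3.3.1.min.js HTTP/1.1\r\nHost: code.jquery.com\r\n\r\n"
first = is_replay(db, req)   # False: first time seen, gets recorded
second = is_replay(db, req)  # True: identical request replayed
```

A byte-identical request therefore only passes once; any later copy hashes to a value already stored and gets refused.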

RedWarden takes a Malleable C2 profile and the teamserver's hostname:port as input. It then parses the supplied malleable profile's sections to understand the contract, and passes through only those inbound requests that satisfy it, while misdirecting the others.

Sections such as http-stager, http-get, http-post and their corresponding uris, headers, prepend/append patterns, and User-Agent are all used to distinguish between a legitimate Beacon request and unrelated Internet noise or out-of-band IR/AV/EDR packets.
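A rough sketch of what such contract validation boils down to; the contract dict and its field names are illustrative stand-ins, not RedWarden's internal structures:

```python
# Illustrative subset of a parsed http-get contract; values resemble a
# typical jQuery-themed profile, not RedWarden's real parser output.
contract = {
    "uri": "/jquery-3.3.1.min.js",
    "user_agent": "Mozilla/5.0 (Windows NT 10.0; WOW64; Trident/7.0; rv:11.0) like Gecko",
    "headers": {"Accept-Encoding": "gzip, deflate"},
}

def conforms(uri: str, headers: dict, contract: dict) -> bool:
    """Pass only requests matching the profile's URI, User-Agent and pinned headers."""
    if uri.split("?", 1)[0] != contract["uri"]:
        return False
    if headers.get("User-Agent") != contract["user_agent"]:
        return False
    return all(headers.get(k) == v for k, v in contract["headers"].items())

good = conforms(
    "/jquery-3.3.1.min.js",
    {"User-Agent": contract["user_agent"], "Accept-Encoding": "gzip, deflate"},
    contract,
)
noise = conforms("/", {"User-Agent": "curl/7.88"}, contract)
```

A Beacon shaped by the profile passes every check; a scanner hitting `/` with its own User-Agent fails immediately.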

The program benefits from the marvelous known-bad IP ranges contributed by curi0usJack and others: https://gist.github.com/curi0usJack/971385e8334e189d93a6cb4671238b10

IP address blacklisting, combined with known-bad keyword lookups via reverse-IP DNS queries and HTTP header inspection, considerably increases the redirector's resiliency against unauthorized peers wanting to examine the attacker's infrastructure.
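The reverse-IP keyword screening mentioned above could look roughly like this; the keyword list is a tiny illustrative subset, not the actual blacklist:

```python
import socket

# Tiny illustrative subset of banned keywords; the real list is far larger.
BANNED_WORDS = ("scanner", "censys", "shodan", "internet-census")

def hostname_banned(hostname: str) -> bool:
    """True if the reverse-DNS name contains a known IT-security keyword."""
    name = hostname.lower()
    return any(word in name for word in BANNED_WORDS)

def peer_banned_by_reverse_ip(ip: str) -> bool:
    """Resolve the peer's PTR record and screen it against the keyword list."""
    try:
        hostname = socket.gethostbyaddr(ip)[0]
    except OSError:
        return False  # no PTR record to judge by
    return hostname_banned(hostname)
```

For instance, a peer resolving to zl-dal-us-gp3-wk107.internet-census.org (a real drop seen in the logs further below) would be screened out, while a residential chello.pl host would not.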

Invalid packets may be misrouted according to three strategies:

  • redirect: Simply redirect the peer to another website, such as a Rick Roll.
  • reset: Kill the TCP connection straight away.
  • proxy: Fetch a response from another website, to mimic the cloned/hijacked website as closely as possible.

This strategy is set in the configuration file:

#
# What to do with the request originating not conforming to Beacon, whitelisting or 
# ProxyPass inclusive statements: 
#   - 'redirect' it to another host with (HTTP 301), 
#   - 'reset' a TCP connection with connecting client
#   - 'proxy' the request, acting as a reverse-proxy against specified action_url 
#       (may be dangerous if client fetches something it shouldn't supposed to see!)
#
# Valid values: 'reset', 'redirect', 'proxy'. 
#
# Default: redirect
#
drop_action: redirect
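A hedged sketch of how the three strategies might be dispatched; the function, its return convention, and the fetch stand-in are invented for illustration, while the real logic lives in RedWarden's proxy handler:

```python
# Hypothetical dispatch of the three drop_action strategies.
def misroute(drop_action: str, action_url: str,
             fetch=lambda url: b"<html>decoy</html>"):
    if drop_action == "redirect":
        # Misdirect the peer with an HTTP 301 pointing at the decoy site.
        return 301, {"Location": action_url}, b""
    if drop_action == "reset":
        # Signal the caller to kill the TCP connection straight away.
        return None
    if drop_action == "proxy":
        # Serve the decoy site's own response, mimicking a cloned website.
        return 200, {}, fetch(action_url)
    raise ValueError(f"unknown drop_action: {drop_action}")

redirected = misroute("redirect", "https://google.com")
reset = misroute("reset", "")
```

Note the proxy mode's caveat from the config comment: the decoy response is fetched live, so the client may see content it shouldn't.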

The example below shows the outcome of a redirect to https://googole.com:

(screenshot: redirect outcome)

Use wisely, stay safe.

Requirements

This program can run only on Linux systems as it uses fork to spawn multiple processes.

Also, the openssl system command is expected to be installed as it is used to generate SSL certificates.

Finally, install all of the Python3 PIP requirements easily with:

$ sudo pip3 install -r requirements.txt

Usage

Example usage

A minimal RedWarden config.yaml could contain:

port:
  - 80/http
  - 443/https

profile: jquery-c2.3.14.profile

ssl_cacert: /etc/letsencrypt/live/attacker.com/fullchain.pem
ssl_cakey: /etc/letsencrypt/live/attacker.com/privkey.pem

teamserver_url:
  - 1.2.3.4:8080

drop_action: reset

Then, the program can be launched by giving it a path to the config file:

$ sudo python3 RedWarden.py -c config.yaml

  [INFO] 19:21:42: Loading 1 plugin...
  [INFO] 19:21:42: Plugin "malleable_redirector" has been installed.
  [INFO] 19:21:42: Preparing SSL certificates and keys for https traffic interception...
  [INFO] 19:21:42: Using provided CA key file: ca-cert/ca.key
  [INFO] 19:21:42: Using provided CA certificate file: ca-cert/ca.crt
  [INFO] 19:21:42: Using provided Certificate key: ca-cert/cert.key
  [INFO] 19:21:42: Serving http proxy on: 0.0.0.0, port: 80...
  [INFO] 19:21:42: Serving https proxy on: 0.0.0.0, port: 443...
  [INFO] 19:21:42: [REQUEST] GET /jquery-3.3.1.min.js
  [INFO] 19:21:42: == Valid malleable http-get request inbound.
  [INFO] 19:21:42: Plugin redirected request from [code.jquery.com] to [1.2.3.4:8080]
  [INFO] 19:21:42: [RESPONSE] HTTP 200 OK, length: 5543
  [INFO] 19:21:45: [REQUEST] GET /jquery-3.3.1.min.js
  [INFO] 19:21:45: == Valid malleable http-get request inbound.
  [INFO] 19:21:45: Plugin redirected request from [code.jquery.com] to [1.2.3.4:8080]
  [INFO] 19:21:45: [RESPONSE] HTTP 200 OK, length: 5543
  [INFO] 19:21:46: [REQUEST] GET /
  [...]
  [ERROR] 19:24:46: [DROP, reason:1] inbound User-Agent differs from the one defined in C2 profile.
  [...]
  [INFO] 19:24:46: [RESPONSE] HTTP 301 Moved Permanently, length: 212
  [INFO] 19:24:48: [REQUEST] GET /jquery-3.3.1.min.js
  [INFO] 19:24:48: == Valid malleable http-get request inbound.
  [INFO] 19:24:48: Plugin redirected request from [code.jquery.com] to [1.2.3.4:8080]
  [...]

The above output contains a line pointing out that an unauthorized inbound request, not compliant with our C2 profile, got dropped due to the incompatible User-Agent string it presented:

  [...]
  [DROP, reason:1] inbound User-Agent differs from the one defined in C2 profile.
  [...]

Use Cases

Impose IP Geolocation on your Beacon traffic originators

You've done your Pre-Phish and OSINT very well. You now know where your targets live and have some clues about where their traffic should originate from, or at least how to detect completely extraneous traffic. So how do you impose IP Geolocation checks on Beacon requests at the redirector?

RedWarden comes to help!

Let's say you want to accept only traffic originating from Poland. Your Pre-Phish/OSINT results indicate that:

  • 89.64.64.150 is a legitimate IP of one of your targets, originating from Poland
  • 59.99.140.76, on the other hand, is not, and reached your systems as a regular Internet noise packet.

You can use RedWarden's utility lib/ipLookupHelper.py to collect IP Geo metadata about these two addresses:

$ python3 ipLookupHelper.py

Usage: ./ipLookupHelper.py <ipaddress> [malleable-redirector-config]

Use this small utility to collect IP Lookup details on your target IPv4 address and verify whether
your 'ip_geolocation_requirements' section of proxy2 malleable-redirector-config.yaml would match that
IP address. If second param is not given - no 

The former brings:

$ python3 ipLookupHelper.py 89.64.64.150
[dbg] Following IP Lookup providers will be used: ['ip_api_com', 'ipapi_co']
[.] Lookup of: 89.64.64.150
[dbg] Calling IP Lookup provider: ipapi_co
[dbg] Calling IP Lookup provider: ip_api_com
[dbg] New IP lookup entry cached: 89.64.64.150
[.] Output:
{
  "organization": [
    "UPC Polska Sp. z o.o.",
    "UPC.pl",
    "AS6830 Liberty Global B.V."
  ],
  "continent": "Europe",
  "continent_code": "EU",
  "country": "Poland",
  "country_code": "PL",
  "ip": "89.64.64.150",
  "city": "Warsaw",
  "timezone": "Europe/Warsaw",
  "fulldata": {
    "status": "success",
    "country": "Poland",
    "countryCode": "PL",
    "region": "14",
    "regionName": "Mazovia",
    "city": "Warsaw",
    "zip": "00-202",
    "lat": 52.2484,
    "lon": 21.0026,
    "timezone": "Europe/Warsaw",
    "isp": "UPC.pl",
    "org": "UPC Polska Sp. z o.o.",
    "as": "AS6830 Liberty Global B.V.",
    "query": "89.64.64.150"
  },
  "reverse_ip": "89-64-64-150.dynamic.chello.pl"
}

and the latter gives:

$ python3 ipLookupHelper.py 59.99.140.76
[dbg] Following IP Lookup providers will be used: ['ip_api_com', 'ipapi_co']
[dbg] Read 1 cached entries from file.
[.] Lookup of: 59.99.140.76
[dbg] Calling IP Lookup provider: ip_api_com
[dbg] New IP lookup entry cached: 59.99.140.76
[.] Output:
{
  "organization": [
    "",
    "BSNL Internet",
    "AS9829 National Internet Backbone"
  ],
  "continent": "Asia",
  "continent_code": "AS",
  "country": "India",
  "country_code": "IN",
  "ip": "59.99.140.76",
  "city": "Palakkad",
  "timezone": "Asia/Kolkata",
  "fulldata": {
    "status": "success",
    "country": "India",
    "countryCode": "IN",
    "region": "KL",
    "regionName": "Kerala",
    "city": "Palakkad",
    "zip": "678001",
    "lat": 10.7739,
    "lon": 76.6487,
    "timezone": "Asia/Kolkata",
    "isp": "BSNL Internet",
    "org": "",
    "as": "AS9829 National Internet Backbone",
    "query": "59.99.140.76"
  },
  "reverse_ip": ""
}

Now you see that the former had "country": "Poland" whereas the latter had "country": "India". With that knowledge, we are ready to devise our constraints in the form of a YAML dictionary:

ip_geolocation_requirements:
  organization:
  continent:
  continent_code:
  country:
     - Poland
     - PL
     - Polska
  country_code:
  city:
  timezone:

Each of the dictionary's entries accepts regular expressions to be matched against the IP Geo metadata determined for the inbound peer's IP address. We use three entries in the country property to allow requests having any of the specified values.
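Under these assumptions, the matching RedWarden performs could be approximated like this; a sketch, not the actual implementation:

```python
import re

# Constraints mirroring the YAML above; an empty list means "no restriction".
ip_geolocation_requirements = {
    "country": ["Poland", "PL", "Polska"],
    "city": [],
}

def geo_allowed(metadata: dict, requirements: dict) -> bool:
    """Each requirement entry is a regex; one match per populated field suffices."""
    for field, patterns in requirements.items():
        if not patterns:
            continue  # unset field imposes no constraint
        value = str(metadata.get(field, ""))
        if not any(re.search(p, value, re.I) for p in patterns):
            return False
    return True

pl_ok = geo_allowed({"country": "Poland", "city": "Warsaw"},
                    ip_geolocation_requirements)
in_blocked = geo_allowed({"country": "India", "city": "Palakkad"},
                         ip_geolocation_requirements)
```

Fed the two lookups from above, the Warsaw peer passes and the Palakkad peer is refused.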

Having that set in your configuration, you can verify whether another IP address would pass RedWarden's IP Geolocation discriminator by running the ipLookupHelper utility with a second parameter:

(screenshot: ipLookupHelper IP Geo discriminator output)

The very last line tells you whether the packet would be blocked or accepted.

And that's all! Configure your IP Geolocation constraints wisely and safely, carefully inspect RedWarden logs for any IP Geo-related DROP entries and keep your C2 traffic nice and tidy!

Repair tampered Beacon requests

If you happen to use interim systems such as AWS Lambda or CloudFlare as your Domain Fronting / redirectors, you have surely come across a situation where some of your packets couldn't get accepted by the Teamserver because they deviated from the agreed malleable contract. Whether it was a tampered or removed HTTP header, reordered cookies, or anything else - I bet it wasted plenty of hours of your life.

To combat C2 channel setup issues and tampering by interim systems, RedWarden offers functionality to repair Beacon packets.

It does so by checking what the Malleable profile expects a packet to look like, and can restore configured HTTP headers to their agreed values according to the profile's requirements.

Consider the following simple profile:

http-get {
    set uri "/api/abc";
    client {

        header "Accept-Encoding" "gzip, deflate";

        metadata {
            base64url;
            netbios;
            base64url;
            parameter "auth";
        }
    }
    ...

See that Accept-Encoding header? Every Beacon request has to carry it with exactly that value. What happens if your Beacon traffic passes through CloudFlare and the emitted request has that header stripped, or carries Accept-Encoding: gzip instead? The Teamserver will drop the request on the spot.

By setting this header in the RedWarden configuration section dubbed protect_these_headers_from_tampering, you can save your connection:

#
# If RedWarden validates inbound request's HTTP headers, according to policy drop_malleable_without_expected_header_value:
#   "[IP: DROP, reason:6] HTTP request did not contain expected header value:"
#
# and senses some header is missing or was overwritten along the wire, the request will be dropped. We can relax this policy
# a bit however, since there are situations in which Cache systems (such as Cloudflare) could tamper with our requests thus
# breaking Malleable contracts. What we can do is to specify list of headers, that should be overwritten back to their values
# defined in provided Malleable profile.
#
# So for example, if our profile expects:
#   header "Accept-Encoding" "gzip, deflate";
#
# but we receive a request having following header set instead:
#   Accept-Encoding: gzip
#
# Because it was tampered along the wire by some of the interim systems (such as web-proxies or caches), we can
# detect that and set that header's value back to what was expected in Malleable profile.
#
# In order to protect Accept-Encoding header, as an example, the following configuration could be used:
#   protect_these_headers_from_tampering:
#     - Accept-Encoding
#
#
# Default: 
#
protect_these_headers_from_tampering:
  - Accept-Encoding
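The repair step this setting enables can be sketched as follows, assuming the profile pins Accept-Encoding as in the http-get block above; names here are illustrative, not RedWarden's internals:

```python
# Headers the profile pins to exact values (mirroring the http-get client block).
PROFILE_HEADERS = {"Accept-Encoding": "gzip, deflate"}
# Headers listed under protect_these_headers_from_tampering.
PROTECTED = ["Accept-Encoding"]

def repair_headers(headers: dict) -> dict:
    """Overwrite protected headers back to their profile-mandated values."""
    fixed = dict(headers)
    for name in PROTECTED:
        if name in PROFILE_HEADERS:
            fixed[name] = PROFILE_HEADERS[name]  # undo tampering along the wire
    return fixed

# A request whose Accept-Encoding was rewritten to "gzip" by a cache:
repaired = repair_headers({"Accept-Encoding": "gzip", "Host": "attacker.com"})
```

Only protected headers are touched; everything else (here, Host) passes through unchanged, so the repaired request once again honors the malleable contract.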

Example outputs

Let's take a look at the output the proxy produces.

Under the verbose: True option, verbosity is set to INFO, at most telling accepted requests apart from dropped ones.

A request is accepted if it conforms to all of the criteria configured in RedWarden's configuration file. Such a situation is followed by an [ALLOW, ...] log entry:

[INFO] 2021-04-24/17:30:48: [REQUEST] GET /js/scripts.js
[INFO] 2021-04-24/17:30:48: == Valid malleable http-get (variant: default) request inbound.
[INFO] 2021-04-24/17:30:48: [ALLOW, 2021-04-24/19:30:48, 111.222.223.224] "/js/scripts.js" - UA: "Mozilla/5.0 (Windows NT 10.0; WOW64; Trident/7.0; rv:11.0) like Gecko"
[INFO] 2021-04-24/17:30:48: Connected peer sent 2 valid http-get and 0 valid http-post requests so far, out of 15/5 required to consider him temporarily trusted
[INFO] 2021-04-24/17:30:48: Plugin redirected request from [attacker.com] to [127.0.0.1:5555]

Should the request fail any of the checks RedWarden carries out on each request, the corresponding [DROP, ...] line will be emitted, containing information about the drop reason:

[INFO] 2021-04-24/16:48:28: [REQUEST] GET /
[ERROR] 2021-04-24/16:48:29: [DROP, 2021-04-24/18:48:28, reason:1, 128.14.211.186] inbound User-Agent differs from the one defined in C2 profile.
[INFO] 2021-04-24/16:48:29: [DROP, 2021-04-24/18:48:28, 128.14.211.186] "/" - UA: "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.113 Safari/537.36"
[ERROR] 2021-04-24/16:48:29: [REDIRECTING invalid request from 128.14.211.186 (zl-dal-us-gp3-wk107.internet-census.org)] GET /

Drop Policies Fine-Tuning

There are plenty of reasons a request may be dropped. Each of these checks can be independently turned on and off, according to requirements, during fine-tuning, or to fix erroneous decisions:

Excerpt from example-config.yaml:

#
# Fine-grained requests dropping policy - lets you decide which checks
# you want to have enforced and which to skip by setting them to False
#
# Default: all checks enabled
#
policy:
  # [IP: ALLOW, reason:0] Request conforms ProxyPass entry (url="..." host="..."). Passing request to specified host
  allow_proxy_pass: True
  # [IP: ALLOW, reason:2] Peer's IP was added dynamically to a whitelist based on a number of allowed requests
  allow_dynamic_peer_whitelisting: True
  # [IP: DROP, reason:1] inbound User-Agent differs from the one defined in C2 profile.
  drop_invalid_useragent: True
  # [IP: DROP, reason:2] HTTP header name contained banned word
  drop_http_banned_header_names: True
  # [IP: DROP, reason:3] HTTP header value contained banned word:
  drop_http_banned_header_value: True
  # [IP: DROP, reason:4b] peer's reverse-IP lookup contained banned word
  drop_dangerous_ip_reverse_lookup: True
  # [IP: DROP, reason:4e] Peer's IP geolocation metadata contained banned keyword! Peer banned in generic fashion.
  drop_ipgeo_metadata_containing_banned_keywords: True
  # [IP: DROP, reason:5] HTTP request did not contain expected header
  drop_malleable_without_expected_header: True
  # [IP: DROP, reason:6] HTTP request did not contain expected header value:
  drop_malleable_without_expected_header_value: True
  # [IP: DROP, reason:7] HTTP request did not contain expected (metadata|id|output) section header:
  drop_malleable_without_expected_request_section: True
  # [IP: DROP, reason:8] HTTP request was expected to contain (metadata|id|output) section with parameter in URI:
  drop_malleable_without_request_section_in_uri: True
  # [IP: DROP, reason:9] Did not found prepend pattern:
  drop_malleable_without_prepend_pattern: True
  # [IP: DROP, reason:10] Did not found append pattern:
  drop_malleable_without_apppend_pattern: True
  # [IP: DROP, reason:11] Requested URI does not aligns any of Malleable defined variants:
  drop_malleable_unknown_uris: True
  # [IP: DROP, reason:12] HTTP request was expected to contain <> section with URI-append containing prepend/append fragments
  drop_malleable_with_invalid_uri_append: True

By default all of these checks are enforced.

Turning on debug: True will swamp your console buffer with plenty of log lines describing each step RedWarden takes in its complex decision-making process. If you want to see your requests' and responses' full bodies - set debug and trace to true and get buried under a logging burden!

Known Issues

  • It may add a slight overhead to the interactive sleep throughput
  • ProxyPass processing logic is far from perfect and is really buggy (and oh boy, it's ugly!).
  • Weird forms of configuration files can derail RedWarden's parser and make it complain. The easiest way to overcome this is to copy example-config.yaml and work on that instead.

TODO

  • Research the possibility of using Threat Intelligence feeds for nefarious purposes - for instance, detecting Security Vendors based on IPs
  • Add support for MaxMind GeoIP database/API
  • Implement support for JA3 signatures, both in detection & blocking and in impersonation to fake nginx/Apache2/custom setups.
  • Add some unique-beacon tracking logic to offer the flexibility of refusing staging and communication processes at the proxy's own discretion
  • Introduce time-of-day constraints when offering redirection capabilities (proxy only during office hours)
  • Add Proxy authentication and authorization logic on CONNECT/relay.
  • Add Mobile users targeted redirection
  • Add configuration options to define custom HTTP headers to be injected, or ones to be removed
  • Add configuration options to require specific HTTP headers to be present in requests passing ProxyPass criteria.
  • Interactive interface allowing one to type simple characters to control output logging verbosity, similarly to Nmap's

Author

Mariusz B. / mgeeky, '19-'21

Comments
  • Could not proxy request


    I receive the following error when it tries to proxy a request back to my C2 server.

    Traceback (most recent call last):
      File "/usr/local/lib/python3.10/dist-packages/tornado/web.py", line 1713, in _execute
        result = await result
      File "/root/RedWarden/lib/proxyhandler.py", line 1176, in get
        self.my_handle_request()
      File "/root/RedWarden/lib/proxyhandler.py", line 289, in my_handle_request
        self._internal_my_handle_request(*args, **kwargs)
      File "/root/RedWarden/lib/proxyhandler.py", line 422, in _internal_my_handle_request
        output = handler()
      File "/root/RedWarden/lib/proxyhandler.py", line 632, in _my_handle_request
        assert scheme in ('http', 'https')
    AssertionError
    
    opened by brewballs 6
  • cannot receive outputs


    When I interact with a beacon, for example to spawn a new beacon or print the working directory, no output comes back.

    But when I use the beacon directly, without RedWarden, everything works perfectly. Please reply.

    opened by tionwayne2021 5
  • Outdated Auth protocol


    Great work with the project it really simplifies all mod_rewrite rules.

    • The "[Errno 17] File exists" Traceback error described in the previous complaint still persists, even after setting up a virtualenv for Python 3.6. This didn't ruin the overall functionality though.

    • I tried the default and the new random Malleable C2 profiles; they did work and successfully redirected the traffic to my teamserver (Cobalt Strike 4.3), however it errors out saying "Invalid auth protocol (old client?)" on my CS teamserver.

    This might work successfully with CS versions prior to 4.3. I checked the CS changelog, not sure what might have been updated in version 4.3.

    I tried setting this up manually with nginx and apache2, using scripts like cs2modrewrite for assistance, but the same issue persists.

    opened by m3rcer 5
  • Is it using python 3.6?


    1. first error

    Unexpected statement: prepend "PK.........080..W.3

    ----- Context -----

        output {
    
            netbios;
            	       
        prepend "PK.........080..W.3
        	     ...1.....InvoiceStatement.lnk.Z_.^G..m.j.....\".....f{...
        	     7..464.v7.6M..b.o.m..&.M6.
        	     ....\"..E..|..P.(R%.J..A.....'..9g...L>....;..;3g........B..1S..
        	     3.........V....v.......|.....>";
    

    [ERROR] Parsing failed.
    [ERROR] Could not parse specified Malleable C2 profile!

    Prepend data seems to cause problems; with c2lint it's OK and already tested.

    2. second error

    [INFO] 2021-06-02/18:14:30: Serving proxy on: http://0.0.0.0:80 ...
    [INFO] 2021-06-02/18:14:30: Serving proxy on: http://0.0.0.0:80 ...
    [INFO] 2021-06-02/18:14:30: Serving proxy on: https://0.0.0.0:443 ...
    Fatal error has occured. [Errno 17] File exists
    Traceback:

    Traceback (most recent call last):
      File "/usr/lib/python3.8/asyncio/selector_events.py", line 259, in _add_reader
        key = self._selector.get_key(fd)
      File "/usr/lib/python3.8/selectors.py", line 192, in get_key
        raise KeyError("{!r} is not registered".format(fileobj)) from None
    KeyError: '6 is not registered'

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "RedWarden.py", line 214, in main
        serve_proxy(srv[0], srv[1], srv[2], srv[3])
      File "RedWarden.py", line 145, in serve_proxy
        server.add_sockets(foosock)
      File "/usr/local/lib/python3.8/dist-packages/tornado/tcpserver.py", line 165, in add_sockets
        self._handlers[sock.fileno()] = add_accept_handler(
      File "/usr/local/lib/python3.8/dist-packages/tornado/netutil.py", line 282, in add_accept_handler
        io_loop.add_handler(sock, accept_handler, IOLoop.READ)
      File "/usr/local/lib/python3.8/dist-packages/tornado/platform/asyncio.py", line 150, in add_handler
        self.selector_loop.add_reader(fd, self._handle_events, fd, IOLoop.READ)
      File "/usr/lib/python3.8/asyncio/selector_events.py", line 332, in add_reader
        return self._add_reader(fd, callback, *args)
      File "/usr/lib/python3.8/asyncio/selector_events.py", line 261, in _add_reader
        self._selector.register(fd, selectors.EVENT_READ,
      File "/usr/lib/python3.8/selectors.py", line 359, in register
        self._selector.register(key.fd, poller_events)
    FileExistsError: [Errno 17] File exists

    [DEBUG] 2021-06-02/18:14:30: SSL interception files cleaned up.

    bug 
    opened by ghost 4
  • usage: Usage: %prog [options]%prog 0.9.1: error: Unhandled exception occured while parsing RedWarden config file: load() missing 1 required positional argument: 'Loader'


    I got an error while trying to start RedWarden with my config:

    usage: Usage: %prog [options]
    %prog 0.9.1: error: Unhandled exception occured while parsing RedWarden config file: load() missing 1 required positional argument: 'Loader'

    Any suggestions? Maybe someone can share a stable config file?

    My config file

    This is a sample config file for RedWarden.

    ====================================================

    General proxy related settings

    ====================================================

    Print verbose output. Implied if debug=True. Default: False

    verbose: True

    Print debugging output that includes HTTP request/response trace. Default: False

    debug: True

    Redirect RedWarden's output to file. Default: stdout.

    Creates a file in the same directory that this config file is situated.

    output: redwarden_redirector.log

    Write web server access attempts in Apache2 access.log format into this file.

    access_log: redwarden_access.log

    Switches between one of the following pre-defined log formats:

    - 'apache2' combined access_log

    - 'redelk' log format

    access_log_format: apache2

    ===================================

    RedELK Integration

    If RedWarden is to be integrated with RedElk, following three variables will have to be set

    according to this redirector server role.

    Label marking packets coming from this specific Proxy server.

    Can be anything, but nice candidates are:

    - http, http-proxy, http-trackingpixel1, phishingwebsite, etc

    redelk_frontend_name: http-proxy

    Label for packets that are passed to the C2 server.

    This value MUST start with "c2" and cannot contain spaces.

    redelk_backend_name_c2: c2

    Label for packets that are NOT passed to the C2 (they either dropped, redirected, proxied away).

    This value MUST start wtih "decoy" and cannot contain spaces.

    redelk_backend_name_decoy: decoy

    ===================================

    If 'output' is specified, tee program's output to file and stdout at the same time.

    Default: False

    tee: True

    Ports on which RedWarden should bind & listen

    port:

    • 80/http
    • 443/https

    SSL certificate CAcert (pem, crt, cert) and private key CAkey

    ssl_cacert: /root/main1.crt ssl_cakey: /root/file.key #ssl_cacert: /etc/letsencrypt/live/attacker.com/fullchain.pem #ssl_cakey: /etc/letsencrypt/live/attacker.com/privkey.pem

    Drop invalid HTTP requests

    If a stream that doesn't resemble valid HTTP protocol reaches RedWarden listener,

    should we drop it or process it? By default we drop it.

    Default: True

    drop_invalid_http_requests: True

    Path to the Malleable C2 profile file.

    If not given, most of the request-validation logic won't be used.

    #profile: malleable.profile /root/Redwarden/1.profile

    (Required) Address where to redirect legitimate inbound beacon requests.

    A.k.a. TeamServer's Listener bind address, in a form of:

    [inport:][http(s)://]host:port

    If RedWarden was configured to listen on more than one port, specifying "inport" will

    help the plugin decide to which teamserver's listener redirect inbound request.

    If 'inport' values are not specified in the below option (teamserver_url) the script

    will pick destination teamserver at random.

    Having RedWarden listening on only one port does not mandate to include the "inport" part.

    This field can be either string or list of strings.

    teamserver_url:

    • ipv4teamserver:55550

    Report only instead of actually dropping/blocking/proxying bad/invalid requests.

    If this is true, will notify that the request would be block if that option wouldn't be

    set.

    Default: False

    report_only: False

    Log full bodies of dropped requests.

    Default: False

    log_dropped: False

    Throttle down number of log entries emitted for single Peer to lower I/O overhead.

    When you operate your Beacon in interactive mode, the RedWarden can go crazy with logging

    all of the allowed requests. We can throttle that down to minimize I/O and CPU impact.

    This option specifies number of seconds to wait before adding next log entry for specific IP,

    regardless of whether it was allowed or dropped.

    Default:

    log_request_delay: 60

    requests_threshold: 3

    throttle_down_peer_logging: log_request_delay: 60 requests_threshold: 3

    What to do with the request originating not conforming to Beacon, whitelisting or

    ProxyPass inclusive statements:

    - 'redirect' it to another host with (HTTP 301),

    - 'reset' a TCP connection with connecting client

    - 'proxy' the request, acting as a reverse-proxy against specified action_url

    (may be dangerous if client fetches something it shouldn't supposed to see!)

    Valid values: 'reset', 'redirect', 'proxy'.

    Default: redirect

    drop_action: redirect

    If someone who is not a beacon hits the proxy, or the inbound proxy does not meet

    malleable profile's requirements - where we should proxy/redirect his requests.

    The protocol HTTP/HTTPS used for proxying will be the same as originating

    requests' protocol. Redirection in turn respects protocol given in action_url.

    This value may be a comma-separated list of hosts, or a YAML array to specify that

    target action_url should be picked at random:

    action_url: https://google.com, https://gmail.com, https://calendar.google.com

    Default: https://google.com

    action_url:

    • https://mydomainname.net

    ProxyPass alike functionality known from mod_proxy.

    If inbound request matches given conditions, proxy that request to specified host,

    fetch response from target host and return to the client. Useful when you want to

    pass some requests targeting for instance attacker-hosted files onto another host, but

    through the one protected with malleable_redirector.

    Protocol used for ProxyPass will match the one from originating request unless specified explicitely.

    If host part contains http:// or https:// schema - that schema will be used.

    Syntax:

    proxy_pass:

    - /url_to_be_passed example.com

    - /url_to_be_passed_onto_http http://example.com

    The first parameter 'url' is a regex (case-insensitive). Must start with '/'.

    The regex begin/end operators are implied and will constitute following regex to be

    matched against inbound request's URL:

    '^/' + url_to_be_passed + '$'

    Here are the URL rewriting rules:

    Example, inbound request:

    https://attacker.com/dl/file-to-be-served.txt

    Rules:

    a) Entire URL to be substituted for proxy pass:

    proxy_pass:

    - /dl/.+ https://localhost:8888/

    ====> will proxy the request to https://localhost:8888/

    b) Only host to be substituted for proxy pass:

    proxy_pass:

    - /dl/.+ localhost:8888

    ====> will proxy the request to https://localhost:8888/dl/file-to-be-served.txt

    The following options are supported:

    - nodrop - process this rule first, before evaluating any DROP logic.

    A matched request will never be dropped.

    Default: No proxy pass rules.

    proxy_pass:

    These are example proxy_pass definitions:

    #- /foobar\d* bing.com
    #- /myip http://ip-api.com/json/
    #- /alwayspass google.com nodrop
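The proxy_pass matching rules described above can be sketched in Python. This is a hypothetical helper, not RedWarden's actual implementation - the rule strings follow the documented '<uri-regex> <host> [options]' syntax:

```python
import re

def match_proxy_pass(rules, request_uri):
    """Return (host, options) for the first matching rule, or None.

    Each rule is '<uri-regex> <host> [options...]'; the regex must start
    with '/', begin/end anchors are implied, matching is case-insensitive.
    """
    for rule in rules:
        parts = rule.split()
        pattern, host, options = parts[0], parts[1], parts[2:]
        # The implied anchors: '^' + pattern + '$'
        if re.match('^' + pattern + '$', request_uri, re.IGNORECASE):
            return (host, options)
    return None

rules = [r'/dl/.+ localhost:8888', r'/alwayspass google.com nodrop']
print(match_proxy_pass(rules, '/dl/file-to-be-served.txt'))  # ('localhost:8888', [])
print(match_proxy_pass(rules, '/AlwaysPass'))                # ('google.com', ['nodrop'])
print(match_proxy_pass(rules, '/other'))                     # None
```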

    If set, removes all HTTP headers sent by the Client that are not expected by the Teamserver according

    to the supplied Malleable profile and its client { header ... } section statements. Some CDN/WebProxy

    providers such as CloudFlare may add plenty of their own metadata headers (like: CF-IPCountry, CF-RAY,

    CF-Visitor, CF-Request-ID, etc.) that can make the Teamserver unhappy about the inbound HTTP request,

    causing its refusal.

    We can strip all of these superfluous headers not expected by the Teamserver, delivering a vanilla plain

    request. This is the recommended setting in most scenarios.

    Do note, however, that the Teamserver by itself ignores superfluous headers it receives in requests, as long as they

    don't compromise the integrity of the malleable transaction.

    Default: True

    remove_superfluous_headers: True

    Every time malleable_redirector decides to pass a request to the Teamserver, because it conformed to the

    malleable profile's contract, an MD5 sum may be computed over that request and saved in an sqlite

    file. Should any subsequent request evaluate to a hash value that was seen & stored

    previously, that request is considered a Replay-Attack attempt and thus gets banned.

    CobaltStrike's Teamserver has built-in measures against replay-attacks, however malleable_redirector may

    assist in that activity as well.

    Default: False

    mitigate_replay_attack: False
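A minimal in-memory sketch of that replay check (the real plugin persists hashes in an sqlite file; the hashed fields and names here are illustrative, not RedWarden's actual code):

```python
import hashlib

# In-memory stand-in for the sqlite-backed hash store.
seen_hashes = set()

def is_replay(method, uri, headers, body):
    """MD5 the request's stable parts; True if the exact request was seen before."""
    material = method + uri + ''.join(
        f'{k}:{v}' for k, v in sorted(headers.items())) + body
    digest = hashlib.md5(material.encode()).hexdigest()
    if digest in seen_hashes:
        return True  # identical request seen before - possible replay
    seen_hashes.add(digest)
    return False

req = ('GET', '/jquery-3.3.1.min.js', {'Host': 'attacker.com'}, '')
print(is_replay(*req))  # False - first sighting, hash stored
print(is_replay(*req))  # True  - identical request, replay suspected
```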

    List of whitelisted IP addresses/CIDR ranges.

    Inbound packets from these IP address/ranges will always be passed towards specified TeamServer without

    any sort of verification or validation.

    whitelisted_ip_addresses:

    - 127.0.0.0/24

    Maintain a volatile, dynamic list of whitelisted Peers (IPv4 addresses) based on a number of requests

    they originate that were allowed and passed to Teamserver.

    This option cuts down request processing time since whenever a request coming from a previously whitelisted

    peer gets processed, it will be accepted right away, having observed that the peer was allowed to pass

    N requests to the Teamserver on previous occasions.

    This whitelist is cleared when RedWarden terminates. It is held only in the script's memory.

    Parameters:

    - number_of_valid_http_get_requests: defines number of successful http-get requests (polling Teamserver)

    that determine whether Peer can be trusted.

    - number_of_valid_http_post_requests: defines number of successful http-post requests (sending command

    results to the TS) that determine whether Peer can be trusted.

    Value of 0 denotes disabled counting of a corresponding type of requests.

    Function disabled if configuration option is missing.

    Default: (dynamic whitelist enabled)

    add_peers_to_whitelist_if_they_sent_valid_requests:
      number_of_valid_http_get_requests: 15
      number_of_valid_http_post_requests: 5
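The counting logic can be sketched as follows. This is a hypothetical illustration, assuming a peer is trusted once every non-zero threshold has been reached (a threshold of 0 disables that counter, as documented):

```python
from collections import defaultdict

GET_THRESHOLD, POST_THRESHOLD = 15, 5

counters = defaultdict(lambda: {'get': 0, 'post': 0})
whitelist = set()

def note_valid_request(peer_ip, kind):
    """Count an allowed http-get/http-post request; whitelist peers past thresholds."""
    counters[peer_ip][kind] += 1
    thresholds = {'get': GET_THRESHOLD, 'post': POST_THRESHOLD}
    # Only non-zero thresholds participate in the decision.
    if all(counters[peer_ip][k] >= t for k, t in thresholds.items() if t > 0):
        whitelist.add(peer_ip)

for _ in range(15):
    note_valid_request('198.51.100.7', 'get')
for _ in range(5):
    note_valid_request('198.51.100.7', 'post')
print('198.51.100.7' in whitelist)  # True - both thresholds reached
```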

    Ban peers based on their IPv4 address. The blacklist with IP address to check against is specified

    in 'ip_addresses_blacklist_file' option.

    Default: True

    ban_blacklisted_ip_addresses: True

    Specifies external list of CIDRs with IPv4 addresses to ban. Each entry in that file

    can contain a single IPv4 address, a CIDR, or a trailing comment, in the following format:

    1.2.3.4/24 # Super Security System

    Default: data/banned_ips.txt

    ip_addresses_blacklist_file: data/banned_ips.txt
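That file format can be parsed with Python's ipaddress module; a minimal sketch (not RedWarden's actual loader):

```python
import ipaddress

def load_blacklist(lines):
    """Parse entries like '1.2.3.4/24  # Super Security System'."""
    nets = []
    for line in lines:
        entry = line.split('#', 1)[0].strip()  # strip comments and whitespace
        if entry:
            # strict=False tolerates host bits, e.g. '1.2.3.4/24' -> 1.2.3.0/24
            nets.append(ipaddress.ip_network(entry, strict=False))
    return nets

def is_banned(ip, nets):
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in nets)

nets = load_blacklist(['1.2.3.4/24 # Super Security System', '# comment only', '8.8.8.8'])
print(is_banned('1.2.3.77', nets))  # True - inside 1.2.3.0/24
print(is_banned('9.9.9.9', nets))   # False
```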

    Specifies external list of keywords to ban during reverse-IP lookup, User-Agents or

    HTTP headers analysis stage. The file can contain lines beginning with '#' to mark comments.

    Default: data/banned_words.txt

    banned_agents_words_file: data/banned_words.txt

    Specifies an external list of phrases that override banned phrases in case of ambiguity.

    If a request was to be banned because of an ambiguous phrase, the override file can

    make the request pass the blocking logic, provided it contains an "allowed" phrase.

    Default: data/banned_words_override.txt

    override_banned_agents_file: data/banned_words_override.txt

    Ban peers based on their IPv4 address' resolved ISP/Organization value or other details.

    Whenever a peer connects to our proxy, we'll take its IPv4 address and use one of the specified

    APIs to collect all the available details about the address. Whenever a banned word

    (of a security product) is found in those details - peer will be banned.

    The list of API keys for supported platforms is specified in the 'ip_details_api_keys' option. If there are no keys specified,

    only providers that don't require API keys will be used (e.g. ip-api.com, ipapi.co)

    This setting affects execution of policy:

    - drop_ipgeo_metadata_containing_banned_keywords

    Default: True

    verify_peer_ip_details: True

    Specifies a list of API keys for supported API details collection platforms.

    If 'verify_peer_ip_details' is set to True and there is at least one API key given in this option, the

    proxy will collect details of the inbound peer's IPv4 address and check them for occurrences of banned words

    known from various security vendors. Do note that the various API details platforms have their own

    thresholds for the number of lookups per month. If more than one API key is given, the script will

    utilize them in a random order.

    To minimize number of IP lookups against each platform, the script will cache performed lookups in an

    external file named 'ip-lookups-cache.json'

    Supported IP Lookup providers:

    - ip-api.com: No API key needed, free plan: 45 requests / minute

    - ipapi.co: No API key needed, free plan: up to 30000 IP lookups/month and up to 1000/day.

    - ipgeolocation.io: requires an API key, up to 30000 IP lookups/month and up to 1000/day.

    Default: empty dictionary

    ip_details_api_keys:
      #ipgeolocation_io: 0123456789abcdef0123456789abcdef

    Restrict incoming peers based on their IP Geolocation information.

    Available only if 'verify_peer_ip_details' was set to True.

    IP Geolocation determination may happen based on the following supported characteristics:

    - organization,

    - continent,

    - continent_code,

    - country,

    - country_code,

    - city,

    - timezone

    The Peer will be served if at least one geolocation condition holds true for it

    (inclusive OR logic).

    If no determinants are specified, IP Geolocation will not be taken into consideration while accepting peers.

    If determinants are specified, only those peers whose IP address matched geolocation determinants will be accepted.

    Each of the requirement values may be a regular expression. Matching is case-insensitive.

    Following (continents_code, continent) pairs are supported:

    ('AF', 'Africa'),

    ('AN', 'Antarctica'),

    ('AS', 'Asia'),

    ('EU', 'Europe'),

    ('NA', 'North America'),

    ('OC', 'Oceania'),

    ('SA', 'South America')

    Proper IP Lookup details values can be established by issuing one of the following API calls:

    $ curl -s 'https://ipapi.co/TARGET-IP-ADDRESS/json/'

    $ curl -s 'http://ip-api.com/json/TARGET-IP-ADDRESS'

    The organization/isp/as/asn/org fields will be merged into a common organization list of values.

    ip_geolocation_requirements:
      organization:
      #- My\s+Target+Company(?: Inc.)?
      continent:
      continent_code:
      country:
      country_code:
      city:
      timezone:
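The inclusive-OR matching over geolocation determinants can be sketched like this (a hypothetical helper, not RedWarden's actual code; 'requirements' maps each characteristic to a list of case-insensitive regexes):

```python
import re

def geolocation_allowed(requirements, ipgeo):
    """True if no determinants are set, or if at least one regex matches."""
    if not any(requirements.values()):
        return True  # no determinants - geolocation not considered
    for key, patterns in requirements.items():
        for pattern in patterns or []:
            for value in ipgeo.get(key, []):
                if re.search(pattern, str(value), re.IGNORECASE):
                    return True  # inclusive OR: one match suffices
    return False

reqs = {'continent': [r'europe'], 'organization': []}
print(geolocation_allowed(reqs, {'continent': ['Europe'], 'organization': ['Acme ISP']}))  # True
print(geolocation_allowed(reqs, {'continent': ['Asia']}))                                  # False
print(geolocation_allowed({'continent': []}, {'continent': ['Asia']}))                     # True
```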

    Fine-grained requests dropping policy - lets you decide which checks

    you want to have enforced and which to skip by setting them to False

    Default: all checks enabled

    policy:

    [IP: ALLOW, reason:0] Request conforms ProxyPass entry (url="..." host="..."). Passing request to specified host

    allow_proxy_pass: True

    [IP: ALLOW, reason:2] Peer's IP was added dynamically to a whitelist based on a number of allowed requests

    allow_dynamic_peer_whitelisting: True

    [IP: DROP, reason:1] inbound User-Agent differs from the one defined in C2 profile.

    drop_invalid_useragent: True

    [IP: DROP, reason:2] HTTP header name contained banned word

    drop_http_banned_header_names: True

    [IP: DROP, reason:3] HTTP header value contained banned word:

    drop_http_banned_header_value: True

    [IP: DROP, reason:4b] peer's reverse-IP lookup contained banned word

    drop_dangerous_ip_reverse_lookup: True

    [IP: DROP, reason:4e] Peer's IP geolocation metadata contained banned keyword! Peer banned in generic fashion.

    drop_ipgeo_metadata_containing_banned_keywords: True

    [IP: DROP, reason:5] HTTP request did not contain expected header

    drop_malleable_without_expected_header: True

    [IP: DROP, reason:6] HTTP request did not contain expected header value:

    drop_malleable_without_expected_header_value: True

    [IP: DROP, reason:7] HTTP request did not contain expected (metadata|id|output) section header:

    drop_malleable_without_expected_request_section: True

    [IP: DROP, reason:8] HTTP request was expected to contain (metadata|id|output) section with parameter in URI:

    drop_malleable_without_request_section_in_uri: True

    [IP: DROP, reason:9] Did not find prepend pattern (this logic is known to cause troubles, use with caution):

    drop_malleable_without_prepend_pattern: False

    [IP: DROP, reason:10] Did not find append pattern (this logic is known to cause troubles, use with caution):

    drop_malleable_without_apppend_pattern: False

    [IP: DROP, reason:11] Requested URI does not align with any of the Malleable-defined variants:

    drop_malleable_unknown_uris: True

    [IP: DROP, reason:12] HTTP request was expected to contain <> section with URI-append containing prepend/append fragments

    drop_malleable_with_invalid_uri_append: True

    This option repairs a Beacon request's header values by restoring them to what was expected in the Malleable C2 profile.

    If RedWarden validates inbound request's HTTP headers, according to policy drop_malleable_without_expected_header_value:

    "[IP: DROP, reason:6] HTTP request did not contain expected header value:"

    and detects some header is missing or was overwritten along the wire, the request will be dropped.

    We can relax this policy a bit, however, since there are situations in which Cache systems (such as Cloudflare) could tamper with our

    requests, thus breaking Malleable contracts. What we can do is specify a list of headers that should be overwritten back to the values

    defined in the provided Malleable profile.

    So for example, if our profile expects:

    header "Accept-Encoding" "gzip, deflate";

    but we receive a request having following header set instead:

    Accept-Encoding: gzip

    Because it was tampered along the wire by some of the interim systems (such as web-proxies or caches), we can

    detect that and set that header's value back to what was expected in Malleable profile.

    In order to protect Accept-Encoding header, as an example, the following configuration could be used:

    repair_these_headers:

    - Accept-Encoding

    Default:

    #repair_these_headers:
    #  - Accept-Encoding
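The repair behaviour amounts to overwriting the listed headers with the profile-defined values; a minimal sketch (illustrative names, not RedWarden's actual code):

```python
def repair_headers(inbound_headers, profile_expected, repair_these):
    """Restore the listed headers to the values the Malleable profile expects."""
    repaired = dict(inbound_headers)
    for name in repair_these:
        if name in profile_expected:
            repaired[name] = profile_expected[name]
    return repaired

# Profile says: header "Accept-Encoding" "gzip, deflate";
profile_expected = {'Accept-Encoding': 'gzip, deflate'}
# An interim cache rewrote the header along the wire:
inbound = {'Accept-Encoding': 'gzip', 'Host': 'attacker.com'}

fixed = repair_headers(inbound, profile_expected, ['Accept-Encoding'])
print(fixed['Accept-Encoding'])  # gzip, deflate
```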

    Malleable Redirector plugin can act as a basic oracle API responding to calls

    containing full request contents with classification whether that request would be

    blocked or passed along. The API may be used by custom payload droppers, HTML Smuggling

    payloads or any other javascript-based landing pages.

    The way to invoke it is as follows:

    1. Issue a POST request to the RedWarden server with the below specified URI in path.

    2. Include following JSON in your POST request:

    POST /malleable_redirector_hidden_api_endpoint

    Content-Type: application/json

    {

    "peerIP" : "IP-of-connecting-Peer",

    "headers" : {

    "headerName1" : "headerValue1",

    ...

    "headerNameN" : "headerValueN",

    },

    }

    If "peerIP" is empty (or was not given), RedWarden will try to extract peer's IP from HTTP

    headers such as (X-Forwarded-For, CF-Connecting-IP, X-Real-IP, etc.). If no IP is present

    in the headers, an error will be returned:

    HTTP 404 Not Found

    {

    "error" : "number",

    "message" : "explanation"

    }

    RedWarden will take any non-empty field from a given JSON and evaluate it as it would do

    under currently provided configuration and all the knowledge it possesses.

    The response will contain following JSON:

    {

    "action": "allow|drop",

    "peerIP" : "returned-peerIP",

    "ipgeo" : {ip-geo-metadata-extracted}

    "message": "explanation",

    "reason": "reason",

    "drop_type": "proxy|reset|redirect",

    "action_url": ["proxy-URL-1|redirect-URL-1", ..., "proxy-URL-N|redirect-URL-N"]

    }

    Available Allow/Drop reasons for this endpoint:

    ALLOW:

    - Reason: 99 - Peer IP and HTTP headers did not contain anything suspicious

    - Reason: 1 - peer's IP address is whitelisted

    - Reason: 2 - Peer's IP was added dynamically to a whitelist based on a number of allowed requests

    DROP:

    - Reason: 2 - HTTP header name contained banned word

    - Reason: 3 - HTTP header value contained banned word

    - Reason: 4a - Peer's IP address is blacklisted

    - Reason: 4b - Peer's reverse-IP lookup contained banned word

    - Reason: 4c - Peer's IP lookup organization field contained banned word

    - Reason: 4d - Peer's IP geolocation DID NOT meet expected conditions

    - Reason: 4e - Peer's IP geolocation metadata contained banned keyword! Peer banned in generic fashion

    Sample curl to debug:

    $ curl -sD- --request POST --data '{"headers":{"Accept": "*/*", "Sec-Fetch-Site": "same-origin",
      "Sec-Fetch-Mode": "no-cors", "Sec-Fetch-Dest": "script", "Accept-Language": "en-US,en;q=0.9",
      "Cookie": "__cfduid2=cHux014r17SG3v4gPUrZ0BZjDabMTY2eWDj1tuYdREBg", "User-Agent":
      "Mozilla/5.0 (Windows NT 10.0; WOW64; Trident/7.0; rv:11.0) like Gecko"}}' \
      https://attacker.com/12345678-9abc-def0-1234-567890abcdef

    Default: Turned off / not available

    #malleable_redirector_hidden_api_endpoint: /12345678-9abc-def0-1234-567890abcdef
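The oracle can also be called from Python. This is a sketch using only the standard library; the endpoint URI below is the example value - substitute whatever you configured in 'malleable_redirector_hidden_api_endpoint':

```python
import json
import urllib.request

# Example endpoint path - replace with your configured hidden API URI.
API = 'https://attacker.com/12345678-9abc-def0-1234-567890abcdef'

def build_oracle_body(peer_ip, headers):
    """Build the JSON document the oracle endpoint expects."""
    return json.dumps({'peerIP': peer_ip, 'headers': headers})

def classify(peer_ip, headers):
    """POST the request description; return the parsed verdict dict,
    e.g. {'action': 'allow', 'reason': ..., 'message': ..., ...}."""
    req = urllib.request.Request(
        API,
        data=build_oracle_body(peer_ip, headers).encode(),
        headers={'Content-Type': 'application/json'})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

body = json.loads(build_oracle_body('203.0.113.10', {'User-Agent': 'curl/7.88'}))
print(body['peerIP'])  # 203.0.113.10
```

A dropper or HTML-smuggling landing page would call classify() before serving the payload and bail out on an 'action': 'drop' verdict.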

    opened by useragent23253 3
  • [Errno 17] File exists

    [Errno 17] File exists

    Sir, in your latest version, there is still the behavior of repeated port monitoring, which is undoubtedly a bug and I hope to fix it as soon as possible.

    [INFO] 2022-04-05/03:36:50: Loading 1 plugin...
    [INFO] 2022-04-05/03:36:50: Plugin "malleable_redirector" has been installed.
    [INFO] 2022-04-05/03:36:50: Preparing SSL certificates and keys for https traffic interception...
    [INFO] 2022-04-05/03:36:50: Using provided CA key file: ssl/bundle.key
    [INFO] 2022-04-05/03:36:50: Using provided CA certificate file: ssl/bundle.crt
    [INFO] 2022-04-05/03:36:50: Using provided Certificate key: /root/cobaltstrike4.4/RedWarden-master/ca-cert/cert.key
    [INFO] 2022-04-05/03:36:50: Teeing stdout output to redwarden_redirector.log log file.
    [INFO] 2022-04-05/03:36:50: Loaded 1890 blacklisted CIDRs.
    [INFO] 2022-04-05/03:36:50:

        ____           ___       __               __
       / __ \___  ____/ / |     / /___ __________/ /__  ____
      / /_/ / _ \/ __  /| | /| / / __ `/ ___/ __  / _ \/ __ \
     / _, _/  __/ /_/ / | |/ |/ / /_/ / /  / /_/ /  __/ / / /
    /_/ |_|\___/\__,_/  |__/|__/\__,_/_/   \__,_/\___/_/ /_/

    :: RedWarden - Keeps your malleable C2 packets slipping through AVs,
                   EDRs, Blue Teams and club bouncers like nothing else!
    
    by Mariusz Banach / mgeeky, '19-'21
    <mb [at] binary-offensive.com>
    
    v0.9.1
    

    [INFO] 2022-04-05/03:36:50: Serving proxy on: http://0.0.0.0:8888 ...
    [INFO] 2022-04-05/03:36:50: Serving proxy on: http://0.0.0.0:8888 ...
    [INFO] 2022-04-05/03:36:50: Serving proxy on: https://0.0.0.0:4444 ...
    Fatal error has occured.
        [Errno 17] File exists
    Traceback:

    Traceback (most recent call last):
      File "/usr/lib/python3.6/asyncio/selector_events.py", line 253, in _add_reader
        key = self._selector.get_key(fd)
      File "/usr/lib/python3.6/selectors.py", line 191, in get_key
        raise KeyError("{!r} is not registered".format(fileobj)) from None
    KeyError: '8 is not registered'

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "RedWarden.py", line 233, in main
        serve_proxy(srv[0], srv[1], srv[2], srv[3])
      File "RedWarden.py", line 152, in serve_proxy
        server.add_sockets(foosock)
      File "/usr/local/lib/python3.6/dist-packages/tornado/tcpserver.py", line 166, in add_sockets
        sock, self._handle_connection
      File "/usr/local/lib/python3.6/dist-packages/tornado/netutil.py", line 282, in add_accept_handler
        io_loop.add_handler(sock, accept_handler, IOLoop.READ)
      File "/usr/local/lib/python3.6/dist-packages/tornado/platform/asyncio.py", line 150, in add_handler
        self.selector_loop.add_reader(fd, self._handle_events, fd, IOLoop.READ)
      File "/usr/lib/python3.6/asyncio/selector_events.py", line 326, in add_reader
        return self._add_reader(fd, callback, *args)
      File "/usr/lib/python3.6/asyncio/selector_events.py", line 256, in _add_reader
        (handle, None))
      File "/usr/lib/python3.6/selectors.py", line 412, in register
        self._epoll.register(key.fd, epoll_events)
    FileExistsError: [Errno 17] File exists

    ^C Proxy serving interrupted by user.

    opened by wikiZ 3
  • [BUG] Malleable profile information not reconstructed completely

    [BUG] Malleable profile information not reconstructed completely

    When a malleable profile defines additional URI parameters in the client block, this information is not used in the URI allow-list filter that lets malleable traffic pass through.

    For example, if my malleable profile looks like this:

    http-get "default" {
    
            set uri "/abc.php";
    
            client {
    
                    metadata {
                            header "Cookie";
                    }
    
                    parameter "parametername" "value";
    
            }
    

    A legit beacon request would be:

    GET /abc.php?parametername=value
    

    But RedWarden will drop this request as it expects GET /abc.php only.

    As a workaround, one can supply the additional GET parameter values inside the malleable profile supplied to RedWarden like so:

    http-get "default" {
    
            set uri "/abc.php?parametername=value";
    
            client {
    
                    metadata {
                            header "Cookie";
                    }
            }
    
    opened by 0xShkk 3
  • [BUG] Error No.17 -> RedWarden tries to start listener twice

    [BUG] Error No.17 -> RedWarden tries to start listener twice

    Hi,

    thank you for this great tool and your support to the community!

    I played around with it today to see if it can be used in my next assessment. I have noticed two bugs while doing so.

    Lets start with the first here.

    When starting, RedWarden always tries to start 2 listeners even if only one listener is defined in the config file. Basically the same as https://github.com/mgeeky/RedWarden/issues/3#issue-935316665, where the tool otherwise works as expected. However, the core bug seems not to be solved, as I definitely have only one listener defined in my config file.

    opened by 0xShkk 3
  • dns beacon

    dns beacon

    It seems like RedWarden completely ignores the DNS listener config. It generates the default DNS Beacon config with 0.0.0.0 as idle and all default params. Is there any fix for that? Does RedWarden handle DNS? Thanks.

    opened by ghost 3
  • [Errno 17] File exists

    [Errno 17] File exists

    Hi, tool looks very cool, thank you for sharing it.

    I get similar error message, while used example-config.yaml provided and commenting the following lines from the config:

    #  - 443/https
    #profile: malleable.profile
    

    python setup

    # cat /etc/os-release
    NAME="Ubuntu"
    VERSION="20.04.2 LTS (Focal Fossa)"
    ID=ubuntu
    ID_LIKE=debian
    PRETTY_NAME="Ubuntu 20.04.2 LTS"
    VERSION_ID="20.04"
    HOME_URL="https://www.ubuntu.com/"
    SUPPORT_URL="https://help.ubuntu.com/"
    BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
    PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
    VERSION_CODENAME=focal
    UBUNTU_CODENAME=focal
    
    # python3 -V
    Python 3.8.5
    
    # sudo python3 -m pip install -r requirements.txt
    Requirement already satisfied: brotli in /usr/local/lib/python3.8/dist-packages (from -r requirements.txt (line 1)) (1.0.9)
    Requirement already satisfied: requests in /usr/lib/python3/dist-packages (from -r requirements.txt (line 2)) (2.22.0)
    Requirement already satisfied: PyYaml in /usr/lib/python3/dist-packages (from -r requirements.txt (line 3)) (5.3.1)
    Requirement already satisfied: sqlitedict in /usr/local/lib/python3.8/dist-packages (from -r requirements.txt (line 4)) (1.7.0)
    Requirement already satisfied: tornado in /usr/local/lib/python3.8/dist-packages (from -r requirements.txt (line 5)) (6.1)
    

    Error:

    # sudo python3 RedWarden.py -c example-config.yaml
    [INFO] 2021-06-17/00:44:02: Loading 1 plugin...
    [INFO] 2021-06-17/00:44:02: Plugin "malleable_redirector" has been installed.
    [INFO] 2021-06-17/00:44:02: Preparing SSL certificates and keys for https traffic interception...
    [INFO] 2021-06-17/00:44:02: Using provided CA key file: /etc/letsencrypt/live/attacker.com/privkey.pem
    [INFO] 2021-06-17/00:44:02: Using provided CA certificate file: /etc/letsencrypt/live/attacker.com/fullchain.pem
    [INFO] 2021-06-17/00:44:02: Using provided Certificate key: /opt/RedWarden/ca-cert/cert.key
    [INFO] 2021-06-17/00:44:02: Teeing stdout output to /opt/RedWarden/redwarden_redirector.log log file.
    [ERROR] 2021-06-17/00:44:02:
    
    =================================================================================================
     MALLEABLE C2 PROFILE PATH NOT SPECIFIED! LOGIC BASED ON PARSING HTTP REQUESTS WILL BE DISABLED!
    =================================================================================================
    
    [INFO] 2021-06-17/00:44:02: Loaded 1890 blacklisted CIDRs.
    [INFO] 2021-06-17/00:44:02:
    
        ____           ___       __               __
       / __ \___  ____/ / |     / /___ __________/ /__  ____
      / /_/ / _ \/ __  /| | /| / / __ `/ ___/ __  / _ \/ __ \
     / _, _/  __/ /_/ / | |/ |/ / /_/ / /  / /_/ /  __/ / / /
    /_/ |_|\___/\__,_/  |__/|__/\__,_/_/   \__,_/\___/_/ /_/
    
        :: RedWarden - Keeps your malleable C2 packets slipping through AVs,
                       EDRs, Blue Teams and club bouncers like nothing else!
    
        by Mariusz B. / mgeeky, '19-'21
        <mb [at] binary-offensive.com>
    
        v0.7
    
    
    [INFO] 2021-06-17/00:44:02: Serving proxy on: http://0.0.0.0:80 ...
    [INFO] 2021-06-17/00:44:02: Serving proxy on: http://0.0.0.0:80 ...
    Fatal error has occured.
    	[Errno 17] File exists
    Traceback:
    ------------------------------
    Traceback (most recent call last):
      File "/usr/lib/python3.8/asyncio/selector_events.py", line 259, in _add_reader
        key = self._selector.get_key(fd)
      File "/usr/lib/python3.8/selectors.py", line 192, in get_key
        raise KeyError("{!r} is not registered".format(fileobj)) from None
    KeyError: '6 is not registered'
    
    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
      File "RedWarden.py", line 214, in main
        serve_proxy(srv[0], srv[1], srv[2], srv[3])
      File "RedWarden.py", line 145, in serve_proxy
        server.add_sockets(foosock)
      File "/usr/local/lib/python3.8/dist-packages/tornado/tcpserver.py", line 165, in add_sockets
        self._handlers[sock.fileno()] = add_accept_handler(
      File "/usr/local/lib/python3.8/dist-packages/tornado/netutil.py", line 282, in add_accept_handler
        io_loop.add_handler(sock, accept_handler, IOLoop.READ)
      File "/usr/local/lib/python3.8/dist-packages/tornado/platform/asyncio.py", line 150, in add_handler
        self.selector_loop.add_reader(fd, self._handle_events, fd, IOLoop.READ)
      File "/usr/lib/python3.8/asyncio/selector_events.py", line 332, in add_reader
        return self._add_reader(fd, callback, *args)
      File "/usr/lib/python3.8/asyncio/selector_events.py", line 261, in _add_reader
        self._selector.register(fd, selectors.EVENT_READ,
      File "/usr/lib/python3.8/selectors.py", line 359, in register
        self._selector.register(key.fd, poller_events)
    FileExistsError: [Errno 17] File exists
    ------------------------------
    

    Originally posted by @superuser5 in https://github.com/mgeeky/RedWarden/issues/1#issuecomment-862830440

    enhancement 
    opened by superuser5 3
  • Adding support for latest tornado version

    Adding support for latest tornado version

    Modified RedWarden.py to work with latest tornado version. Fixes #19

    I'm not super knowledgeable with asyncio but followed the documentation and only needed a few lines of change thankfully.

    Starting a simple config it looks like everything works, except for some reason it looks like "serve_proxy" runs twice for each port. I noticed a similar issue #11 that I think is related to this.

    I think maybe this is some weird threading thing but will have to dig deeper. I used the following snippet for a simple check that solved the issue but is super ugly:

    counter = 0
    async def serve_proxy(bind, port, _ssl, foosock):
        global counter
        ProxyRequestHandler.protocol_version = "HTTP/1.1"
        scheme = None
        certpath = ''
        if counter == 0:
            counter+=1
            return
    

    With the above modification, the first time the code enters the loop it just exits.

    opened by ptr0x1 2
  • Issue with new 4.7 beacon (maybe my error)?

    Issue with new 4.7 beacon (maybe my error)?

    Hello, first of all awesome tool as usual!

    I'm having a strange issue when the Beacon sends its first POST data.

    The first connection seems OK, as it appears on my screen, but after that I get an error, and everything seems to be set up well (I think).

    This is my error:

    [DEBUG] 2022-09-30/17:51:49: Returning cached entry for IP address: REDACTED
    [DEBUG] 2022-09-30/17:51:49: Analysing IP Geo metadata keywords...
    [DEBUG] 2022-09-30/17:51:49: Extracted keywords from Peer's IP Geolocation metadata: REDACTED
    [DEBUG] 2022-09-30/17:51:49: Peer's IP Geolocation metadata didn't raise any suspicion.
    [DEBUG] 2022-09-30/17:51:49: Deep request inspection of URI (/jquery-3.3.1.min.js) parsed as section:http-get, variant:default
    [DEBUG] 2022-09-30/17:51:49: Requests's MD5 hash computed: REDACTED
    [ERROR] 2022-09-30/17:51:49: [DROP, 2022-09-30/19:51:49, reason:0, REDACTED] identical request seen before. Possible Replay-Attack attempt.
    [ERROR] 2022-09-30/17:51:49: Exception catched in request_handler: cannot unpack non-iterable bool object
    Uncaught exception GET /jquery-3.3.1.min.js (REDACTED)
    HTTPServerRequest(protocol='https', host='REDACTED, method='GET', uri='/jquery-3.3.1.min.js', version='HTTP/1.1', remote_ip='REDACTED')
    Traceback (most recent call last):
      File "/usr/local/lib/python3.9/dist-packages/tornado/web.py", line 1713, in _execute
        result = await result
      File "/home/n0t/RedWarden/lib/proxyhandler.py", line 1176, in get
        self.my_handle_request()
      File "/home/n0t/RedWarden/lib/proxyhandler.py", line 289, in my_handle_request
        self._internal_my_handle_request(*args, **kwargs)
      File "/home/n0t/RedWarden/lib/proxyhandler.py", line 422, in _internal_my_handle_request
        output = handler()
      File "/home/n0t/RedWarden/lib/proxyhandler.py", line 529, in _my_handle_request
        (modified, req_body_modified) = self.request_handler(self.request, self.request.body)
      File "/home/n0t/RedWarden/lib/proxyhandler.py", line 1104, in request_handler
        req_body_current = handler(req, req_body_current)
      File "/home/n0t/RedWarden/lib/../plugins/malleable_redirector.py", line 1042, in request_handler
        return self._request_handler(req, req_body)
      File "/home/n0t/RedWarden/lib/../plugins/malleable_redirector.py", line 1064, in _request_handler
        drop_request = self.drop_check(req, req_body, malleable_meta)
      File "/home/n0t/RedWarden/lib/../plugins/malleable_redirector.py", line 1701, in drop_check
        (ret, reason) = self._client_request_inspect(section, variant, req, req_body, malleable_meta, ts, peerIP)
    TypeError: cannot unpack non-iterable bool object
    

    this error immediately appears on beacon's second request.

    Maybe I did something wrong? Thanks in advance.

    opened by h0nus 13
  • JARM fingerprint aka JA3/S defense.

    JARM fingerprint aka JA3/S defense.

    Hello,

    I wonder if RedWarden can stop or obfuscate the JARM fingerprint (Client Hello packet: Version, Accepted Ciphers, List of Extensions, Elliptic Curves, and Elliptic Curve Formats), since for privacy reasons we don't want to share it in our infrastructure.


    The information they gather, in order, is as follows: TLSVersion, Ciphers, Extensions, EllipticCurves, EllipticCurvePointFormats

    So I wonder if there are some rules or configuration we could apply to TLS, SSL, ciphers or application security so as not to expose or share this information with them, or with "not trusted" sources,

    or whether we can obfuscate our outbound SSL connections against these fingerprints to pivot on them.

    Thanks

    enhancement 
    opened by b4sh1t1 6
  • Feature request : RedELK logging integration

    Feature request : RedELK logging integration

    It would be awesome to have integration with RedELK(https://github.com/outflanknl/RedELK)

    It seems all that is required is:

    • RedWarden to produce detailed logging
    • RedELK to have an connector/installer for RedWarden.

    If you can produce the first, we can do the second.

    Details on required logformat are detailed here: https://github.com/outflanknl/RedELK/wiki/Redirector-installation#Apache%20specifics.

    Maybe add a config.yml directive to enable detailed logging? For example:

    logging:
      - logging_type: redelk | basic
      - logging_file: $path
    
    opened by MarcOverIP 10
Owner
Mariusz B.
Wanna sip a sencha?
Mariusz B.
NetMiaou is an crossplatform hacking tool that can do reverse shells, send files, create an http server or send and receive tcp packet

NetMiaou is an crossplatform hacking tool that can do reverse shells, send files, create an http server or send and receive tcp packet

TRIKKSS 5 Oct 5, 2022
A simple python script that parses the MSFT Teams log file for the users current Teams status and then outputs the status color to a MQTT connected light.

Description A simple python script that parses the MSFT Teams log file for the users current Teams status and then outputs the status color to a MQTT

Lorentz Factr 8 Dec 16, 2022
Automatic Proxy scraper and Proxy-rotating Nitro Generator.


Tawren007 2 Nov 8, 2021
Python port of proxy-www (https://github.com/justjavac/proxy-www)

proxy-www.py Python port of proxy-www (https://github.com/justjavac/proxy-www). Implemented additional functionalities! How to install pip install pro

Minjun Kim (Lapis0875) 20 Dec 8, 2021
Best discord webhook spammer using proxy (support all proxy type)


Iтѕ_Ѵιcнч#1337 25 Nov 1, 2022
Azure-function-proxy - Basic proxy as an azure function serverless app

azure function proxy (for phishing) here are config files for using *[.]azureweb

null 17 Nov 9, 2022
Malcolm is a powerful, easily deployable network traffic analysis tool suite for full packet capture artifacts (PCAP files) and Zeek logs.


Cybersecurity and Infrastructure Security Agency 1.3k Jan 8, 2023
nettrace is a powerful tool to trace network packets and diagnose network problems inside the kernel.

nettrace is a powerful tool to trace network packets and diagnose network problems inside the kernel on TencentOS. It makes use of eBPF and BCC.

null 84 Jan 1, 2023
PcapXray - A Network Forensics Tool - To visualize a Packet Capture offline as a Network Diagram

PcapXray - A Network Forensics Tool - To visualize a Packet Capture offline as a Network Diagram including device identification, highlight important communication and file extraction

Srinivas P G 1.4k Dec 28, 2022
User-friendly packet captures

capture-packets: User-friendly packet captures Please read before using All network traffic occurring on your machine is captured (unless you specify

Seth Michael Larson 2 Feb 5, 2022
EV: IDS Evasion via Packet Manipulation

EV: IDS Evasion via TCP/IP Packet Manipulation (Chinese documentation available) Introduction EV is a tool that allows you to craft TCP packets and leverage some well-known TCP/

null 256 Dec 8, 2022
Utility for converting IP Fabric webhooks into a Teams format.

IP Fabric Webhook Integration for Microsoft Teams Setup IP Fabric Setup Go to Settings > Webhooks > Add webhook Provide a name URL will be: 'http://<Y

Community Fabric 1 Jan 26, 2022
Utility for converting IP Fabric webhooks into a Teams format.

IP Fabric Webhook Integration for Microsoft Teams and/or Slack Setup IP Fabric Setup Go to Settings > Webhooks > Add webhook Provide a name URL will b

Community Fabric 1 Jan 26, 2022
Fast and configurable script to get and check free HTTP, SOCKS4 and SOCKS5 proxy lists from different sources and save them to files

Fast and configurable script to get and check free HTTP, SOCKS4 and SOCKS5 proxy lists from different sources and save them to files. It can also get geolocation for each proxy and check if proxies are anonymous.

Almaz 385 Dec 31, 2022
Compare the contents of your hosted and proxy repositories for coordinate collisions

Nexus Repository Manager dependency/namespace confusion checker This repository contains a script to check if you have artifacts containing the same n

Sonatype Community 59 Mar 31, 2022
A Python library to utilize AWS API Gateway's large IP pool as a proxy to generate pseudo-infinite IPs for web scraping and brute forcing.


George O 929 Jan 1, 2023
A collection of domains, wildcards and substrings designed for dnscrypt-proxy filter method.


null 3 Oct 25, 2022
a safe proxy over tls

TlsProxys: an HTTP traffic proxy based on the TLS protocol. Installation (requires Python 3.7+): Linux: python3.9 -m pip install TlsProxys; Windows: pip install TlsProxys. Basic usage (server side): $ tpserver [command]

null 56 Nov 30, 2022
sshuttle: where transparent proxy meets VPN meets ssh

Transparent proxy server that works as a poor man's VPN. Forwards over ssh. Doesn't require admin. Works with Linux and MacOS. Supports DNS tunneling.

null 9.4k Jan 9, 2023