Modular, cohesive, transparent and fast web server template

Overview

kingdom-python-server 🐍

Modular, transparent, batteries (half) included, lightning fast web server. Features a functional, isolated business layer with an imperative decoupled shell.

Goal

This is intended both to serve as a scaffold for our internal projects and to give back to our community an efficient, bullet-proof backend design that leverages Python's expressiveness.

Features

  • Lightning fast ASGI server via uvicorn.
  • GraphQL support via ariadne.
  • Full GraphQL compliant query pagination support.
  • JWT authentication.
  • Resource-based authorization integrated using GraphQL directives.
  • Efficient dependency management via poetry.
  • Database migration systems using alembic.
  • Event-driven architecture:
    • Internal message bus that injects adapter dependencies into service-handler functions.
    • External message bus for background workers integrated w/ AWS Lambda.
  • Sober test pyramid: unit, integration and e2e tests.
  • Decoupled service layer that responds only to commands and events.
  • Aggregates' atomic service consistency guaranteed using Postgres isolation-level locks.
  • Isolated and pure domain layer that has no dependencies (no, not even on the ORM).
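To make the internal-bus idea above concrete, here is a minimal, self-contained sketch (all names are hypothetical, not this template's actual API): handlers declare the dependencies they need as parameters, and the bus injects matching adapters when dispatching a message.

```python
import inspect
from dataclasses import dataclass


@dataclass
class CreateUser:
    email: str


def handle_create_user(cmd: CreateUser, repo):
    # "repo" is injected by the bus; the handler never builds its own adapters
    repo.add(cmd.email)
    return cmd.email


class MessageBus:
    def __init__(self, handlers, dependencies):
        self.handlers = handlers          # {message type: handler function}
        self.dependencies = dependencies  # {parameter name: adapter instance}

    def handle(self, message):
        handler = self.handlers[type(message)]
        # Inject only the dependencies this handler's signature asks for
        wanted = inspect.signature(handler).parameters
        deps = {name: dep for name, dep in self.dependencies.items() if name in wanted}
        return handler(message, **deps)


class FakeRepo:
    def __init__(self):
        self.users = []

    def add(self, email):
        self.users.append(email)


repo = FakeRepo()
bus = MessageBus({CreateUser: handle_create_user}, {"repo": repo})
bus.handle(CreateUser(email="ada@example.com"))
```

Because the bus inspects handler signatures, adding a new dependency to a handler never requires touching call sites, only the bus's dependency map.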

Roadmap

This project is in its early stages and should carry a big WIP tag. We track progress using GitHub features:

  1. Discussions for brainstorming & prioritizing
  2. Milestones for planned features
  3. Issues for ongoing tasks

Instructions

Given the project's current status, as disclaimed above, running it for now means making sure the tests pass. We are working on improving the entire installation and usage experience. Hold tight.

Step 1: Dependencies & environment

This project uses poetry to manage dependencies. That said, how you instantiate your virtual environment is up to you. You can do that now.

Inside your blank python virtual environment:

pip install poetry && poetry install

Step 2: Prepare your database

As no containerization is being done for now, you'll need Postgres up and running on your local machine.

psql -c "create database template"

Step 3: Test it

Right now you should be able to run the entire test suite properly.

make test

Why?

Why not use django? Or flask? Or FastAPI? Even though these are great frameworks, they're (mostly heavily) opinionated. At T10, we need to implement and deliver maintainable software where we really know what's happening under the (at least conceptually Pythonic) hood. As a software house, we've also found that by using such frameworks, programmers are more likely to be inhibited from practicing and improving their software design skills.

We're (obviously) not alone here. pca touched base on this a few years ago.

Philosophy

We are committed to these non-negotiable principles:

  1. Modularity, high cohesion and low coupling
  2. Transparency, ensuring readable, debuggable, maintainable software
  3. Orthogonality, which keeps us sane by diligently avoiding code that emits side effects
  4. Testability, we need code that can be tested as easily as possible

Inspiration

We don't claim to have created everything from scratch. Quite the opposite: the work here is a direct fork of ideas we strongly identify with, hard-earned throughout the past two decades.

Specifically:

  1. Architecture Patterns with Python, by Bob Gregory & Harry Percival
  2. Python Clean Architecture, by pcah
  3. Functional Core, Imperative Shell, from Destroy All Software
  4. Hexagonal Architecture (aka Ports & Adapters), by Alistair Cockburn
  5. Domain-Driven Design, by Eric Evans & Martin Fowler
Comments
  • Authorization core module

    Authorization core module

    Contexts

    The entire context about research, design decisions and everything else is on #18.

    Relevant implementation details will be placed in the code through mindful comments.

    Checkers

    Work is being split at this coarser granularity:

    1. Tiny DSL parser for conditional attribute statements
    2. Policy enforcement checks
    3. Policy constraints
    4. Proper packaging under core module
    5. Compilation of docs to make mechanics clear

    TDD is being applied to find, within each of these tasks, the thinnest possible workable solution.

    enhancement review core 
    opened by ruiconti 7
  • Improvements on Authorization functionality

    Improvements on Authorization functionality

    Compliance of Permissions feature to ABAC specs

    Overview

    Authorization is a wide and broad area, and many forms of access control have emerged. Nowadays, implementations are permeated mainly by two kinds of access control, ABAC[3] and RBAC[1], plus mixtures of both. There is plenty of material available online that discusses both in depth, so I'm discussing only trade-offs and decisions that are relevant to this project.

    What should always guide design decisions is the input space. This is especially important when dealing with authorization, because some environments need overly restrictive policies, which call for sophisticated and articulated authorization systems. In our scenario, which comprises environments and businesses that do not enforce or implement strict policies (unlike a military force or a large enterprise, for instance), we've come to choose an RBAC with ABAC features, which covers the shortcomings of both RBAC and ABAC[4].

    Requirements

    High-level

    1. Fine-grained permission control on specific resource instances, e.g. a user can only list a subset of a product list
    2. Granting must be explicit; that is, for a given user it should be cheap to query which permissions they have
    3. Design should be flexible in order to leverage even richer attribute-based authorization

    Functional

    Administrative commands

    1. Add a Role
    2. Delete a Role
    3. Assign a role to a User
    4. Unassign a role from a User
    5. Add a Permission
    6. Grant a permission to a Role (creates a valid permission)
    7. Revoke a permission from a Role (deletes a valid permission)

    Administrative views

    1. Query all Roles and Permissions assigned to a given User
    2. Query all Roles and Permissions registered
    3. Query all Sessions (and with which activated Roles) that were created for a given User

    Supporting system commands

    1. Create a Session
    2. Drop a Session
    3. Check if a User is authorized to perform an operation on a given Resource
    4. Check if an added Permission is valid and holds valid conditional clauses

    Proposition

    Our proposition mostly resembles AERBAC from Rajpoot et al[5]. In simple terms, it is a RBAC model enhanced with attributes that enable context-aware and fine-grained authorization cases.

    Consider a subject trying to access a given resource; we consider as context only attributes of the subject and the resource:

    class Context(object):
        cm: ChainMap
        resource: Entity
    
        def __init__(self, subject: User, resource: Entity, operation: Operation):
            # Prefix attribute names so subject and resource keys don't collide
            map_subject = {f"subject.{k}": v for k, v in subject.__dict__.items()}
            map_resource = {f"resource.{k}": v for k, v in resource.__dict__.items()}
            self.resource = resource
            self.operation = operation
            self.cm = ChainMap(map_subject, map_resource)
        
        def __enter__(self) -> ChainMap:
            return self.cm
    
    requested_resource = Product
    logged_user = User("ee7851b8")
    ctx = Context(subject=logged_user, resource=requested_resource, operation=CREATE)
    logged_user.is_authorized(ctx)  # role association is dealt with within the domain
    

    It is a stricter kind of ABAC in a sense that if we're trying to prevent access with attributes that are both not from the subject nor from the resource, we find that it's better modeled as a business rule instead of an authorization directive.

    p1 = Permission(Product, CREATE, "resource.id=*")
    p2 = Permission(Product, CREATE, "resource.id=7fg6ab756d && resource.name='Foo'")
    p3 = Permission(Product, DELETE, "resource.id=7fg6ab756d")
    r1, r2 = Role(p1, p2), Role(p3)
    user.associate_roles([r1, r2])
    user.permissions  # Product.*.CREATE, Product.7fg6ab756d.DELETE
    

    Please note that these snippets are anecdotal and very likely to change.

    Limitations

    However, since it makes sense to avoid premature optimisation, our design has a few limitations when contrasted with AERBAC and RBAC:

    1. There are no SSD (static separation of duty) or DSD (dynamic separation of duty) constraints or conflicting roles. We shall enforce conflict-free role assignment at assignment time, i.e. analogous to a static separation of duties.
    2. Attribute filtering is only related to the resource or the subject (logged user) of a given active session.
    3. There is no need for role activation in sessions; every user has all of their assigned roles activated at every login.

    Premises

    1. Least-permission principle, i.e. a user defaults to a DEFAULT_ROLE, which has the minimum possible set of permissions in the system. It also means that a user can only do something if it is stated through a permission statement.
    2. Operations and Resources are static. Their management —inclusion/update/removal— is to be done exclusively through database migrations.
    3. Permissions are cumulative and additive. In the example in the section above, the user will be able to CREATE every instance of Product, even though p2's scope is limiting.
    4. Overlapping permissions have no effect.
    5. The CREATE operation can only be associated with a selector that is not a unique id.

    These premises aim to simplify conflicts that are major pain points of RBAC systems.
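The cumulative-and-additive premise can be sketched roughly as follows; the names and the tuple shape are simplifications for illustration, not the project's actual types:

```python
# Hypothetical sketch: permissions are additive; overlapping grants have no extra effect.
CREATE, DELETE = "CREATE", "DELETE"


def effective_permissions(roles):
    """Union of all (resource, operation, selector) grants across roles."""
    grants = set()
    for role in roles:
        for resource, operation, selector in role:
            grants.add((resource, operation, selector))
    return grants


def is_authorized(grants, resource, operation, instance_id):
    # A wildcard selector ("*") covers every instance; otherwise ids must match.
    return any(
        res == resource and op == operation and sel in ("*", instance_id)
        for res, op, sel in grants
    )


r1 = [("Product", CREATE, "*"), ("Product", CREATE, "7fg6ab756d")]  # overlapping grants
r2 = [("Product", DELETE, "7fg6ab756d")]
grants = effective_permissions([r1, r2])
```

Because grants live in a set, the overlapping CREATE permission in `r1` has no extra effect, matching premise 4.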

    Phasing Attribute Selection

    In order to make things more tangible, we'll split the Attribute Condition feature so it can gradually expand:

    1. first, support only resource.id selection;
    2. then, support selection on all of the resource's attributes;
    3. finally, support selection on all of the user's attributes.
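A phase-1/2 condition evaluator could start as small as this sketch; the grammar (only `=` and `&&`) and the function name are assumptions, not the final DSL:

```python
def eval_condition(expr: str, context: dict) -> bool:
    """Evaluate a tiny "key=value [&& key=value ...]" predicate against a context.

    Phase 1 only needs "resource.id=<selector>", where "*" matches anything.
    """
    for clause in expr.split("&&"):
        key, _, value = clause.strip().partition("=")
        value = value.strip("'")  # allow quoted literals like name='Foo'
        if value != "*" and str(context.get(key.strip())) != value:
            return False
    return True


# The context is the flattened "resource.*" / "subject.*" mapping described above
ctx = {"resource.id": "7fg6ab756d", "resource.name": "Foo"}
```

Each phase then only widens which keys are allowed in the context, without touching the evaluator itself.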

    References

    Glossary

    • Role: an organizational job function with a clear definition of inherent responsibility and authority (permissions)
    • Operation: an action that can be performed on an instance, e.g. CREATE, EDIT, REMOVE
    • Selector: an identifier that maps to one or many instances of a Resource
    • AttrCondition: a predicate expression that selects rows of a Context
    • Permission: a tuple of (Resource, Operation, Selector, AttrCondition)
    • Subject: a user that has an active session on the system
    • User: a registered user
    • Object: a domain's mapped entity

    External links

    [1]: Role Based Access Control – ANSI, https://profsandhu.com/journals/tissec/ANSI+INCITS+359-2004.pdf
    [2]: LI, Ninghui. Critique of the ANSI Standard on Role Based Access Control, https://www.cs.purdue.edu/homes/ninghui/papers/aboutRBACStandard.pdf
    [3]: Guide to Attribute Based Access Control (ABAC) Definition and Considerations — NIST, https://nvlpubs.nist.gov/nistpubs/specialpublications/NIST.sp.800-162.pdf
    [4]: Adding Attributes to Role-Based Access Control — NIST, https://csrc.nist.gov/publications/detail/journal-article/2010/adding-attributes-to-role-based-access-control
    [5]: Attributes Enhanced Role-Based Access Control Model, https://backend.orbit.dtu.dk/ws/files/110988163/AERBAC_TrustBus_20150618_.pdf

    enhancement help wanted refactor core 
    opened by ruiconti 1
  • Ahoy containers

    Ahoy containers

    This PR (partly) resolves #8.

    What this adds

    Mainly, it implements a few things:

    1. Local-development service execution can now be fully abstracted with docker-compose.
    2. Local testing can now be done without the need to deal with Python environments.
    3. The Makefile has been updated to help tooling.

    Memory footprint

    Using Debian's buster distro, here are a few checks that might be useful when we benchmark this app.

    1. Idle-mode, meaning handling 0 requests, 8 workers:

    image

    2. Idle-mode, meaning handling 0 requests, single worker:

    image

    Which is pretty straightforward: a memory footprint of ~44MiB per worker. IMO it's definitely worth investigating what is taking so much space, since this is supposed to be a lean framework.

    What is currently missing

    For this PR, a few things are missing; they are being added as work progresses:

    • [x] Proper volume mapping
    • [ ] Proper logging
    • [x] Usage of poetry for dep mgmt

    What needs to be done

    This work enables the service to be fully usable in local and development environments. Here is what is "missing" from this work in order for it to be used in production environments:

    1. Security concerns (TLS everywhere)
    2. Real infrastructure concerns (reversed proxies, load balancers)
    3. CI/CD pipelines
    4. Integration to cloud providers & image repositories
    5. Proper & standardized logging

    Those concerns would be addressed after we go public, IMO, but they are kept on record so we have a clear vision of next steps.

    @rafamelos @andreyrcdias care to review & complement?

    enhancement 
    opened by ruiconti 1
  • Update README env command

    Update README env command

    Update README fixing the poetry installation command, so that dependency installation happens only after poetry itself has been installed.

    opened by andreyrcdias 1
  • Setting-up containerization

    Setting-up containerization

    Issue

    As of now, there's no way to run the project in a container environment. This is critical for proper evaluation and project publication.

    Proposed solution

    Implement docker-compose files to enable local and development environment deployments.
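A docker-compose sketch of what such a local environment might look like (the service names, image tags, ports and the app entrypoint are assumptions, not the repository's actual files):

```yaml
# docker-compose.yml — local development sketch (hypothetical names/ports)
version: "3.8"
services:
  db:
    image: postgres:13
    environment:
      POSTGRES_DB: template
      POSTGRES_PASSWORD: postgres
    ports:
      - "5432:5432"
  app:
    build: .
    command: uvicorn src.main:app --host 0.0.0.0 --port 8000 --reload
    depends_on:
      - db
    ports:
      - "8000:8000"
```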

    enhancement 
    opened by ruiconti 0
  • Make tests work again

    Make tests work again

    We've ported this from an existing project and made minor changes; this task is about making the tests work again as expected.

    There is an urgent need to use this repository as a boilerplate take-home challenge for recruiting purposes. In that sense, no refactoring and/or improvements will be made.

    Hint: Use make to interface with it, as is commonly expected.

    bug 
    opened by ruiconti 0
  • Add inline policy capabilities to authorization

    Add inline policy capabilities to authorization

    Proposal

    In a discussion with @rafamelos about an internal project, we came to realize that we'd fall into the common trap of RBAC systems: the number of roles and policies increases exponentially as the number of users and resources increases.

    The root of this problem lies, primarily in this context, in having too many fine-grained role-policy associations.

    Solution

    A first solution would be to implement inline policies: a legal, direct relationship between a user and a policy. With that in mind, changes are bound to happen in

    • Authorization base classes
    • Access interfaces to enable inline-policy management

    One benefit of the current implementation is that the authorization flow would remain unchanged, meaning there is no need to alter how permissions are checked.

    refactor design change core 
    opened by ruiconti 0
  • Adapt authorization module to work with current entrypoints

    Adapt authorization module to work with current entrypoints

    Analogous to #20, but for entrypoint integrations.

    Goal

    One liner: Integrate authorization module services to current implementations of middleware & directives.

    1. Adapt authentication middleware and context propagation
    2. Adapt authorization middleware and proper scope results handling on query resolvers
    3. Proper error handling
    refactor design change crucial 
    opened by ruiconti 0
  • Adapt authorization module to work with access aggregate

    Adapt authorization module to work with access aggregate

    Now that we have working authorization and authentication interfaces, we need to plug them into our access identity service.

    Goal

    One liner: Adapt current access (now named auth) service to support core-authentication and authorization functionality.

    1. Adapt the domain and adapters (e.g. ORM) with the authorization module's base types
    2. Integrate services with proper interfaces
    refactor design change crucial 
    opened by ruiconti 0
  • WIP: adding observability

    WIP: adding observability

    This is still early in the process for tracing, but I think it is a good time to open a PR just to see if this makes sense. Some days ago I came across this repository, and since I wanted to do something related to observability, I started this.

    Proposal

    Since this project will be used as a template for other projects, and one of the available features is the event-driven architecture, I thought this could be a nice feature to have. When a message being processed traverses multiple services, failures are hard to debug and to trace back to a root cause. For this purpose, this (early) feature adds the possibility to trace distributed events using OpenTelemetry and to export the collected information to Jaeger. For exposing some metrics to monitor the service, a Prometheus exporter is also included.

    With the final implementation it will be possible to verify which services an event touched and the time spent in every step, and to easily identify failures. In a distributed environment with multiple services, this should be a nice feature to have. The tracing feature is the main focus of the implementation; the gathering of metrics using Prometheus can be enhanced (e.g. the possibility to add labels and expose more features), but I think it is already usable. Some examples are added at the end.

    Current state

    In the current state, I think some errors could arise when tracing a block that spawns multiple threads, since the span context must be propagated between the threads. Using coroutines could be another problem: since coroutines run on a single thread, a single span context will exist, and maybe this could lead to spans being associated with the wrong traces.

    Using the current implementation, if we have the following scenario:

    caller           parent           childA          childB
       |                |               |               |
       | --- Thread --->|               |               |
       |                | --Blocking--> |               |
       |                |               |               |
       |                |               | ---Thread---> |
       |                |               |               |
       |                |               | <----Done---- |
       |                |               |               |
       |                | <----Done---- |               |
       | <----Done----- |               |               |
       |                |               |               |
       V                V               V               V
    

    The traces will be collected correctly: even with the second thread spawned by an already-child thread, the context can be identified between threads, though I'm not really sure the second thread identifies the parent thread correctly. But in the following scenario, ~~everything explodes~~:

    caller           parent           childA          childB
       |                |               |               |
       | --- Thread --->|               |               |
       |                | ---Thread---> |               |
       |                |               |               |
       |                | -------------Thread---------> |
       |                |               |               |
       |                | <------------Done------------ |
       |                |               |               |
       |                | <----Done---- |               |
       | <----Done----- |               |               |
       |                |               |               |
       V                V               V               V
    

    Since the context changes on every call, the traces could be wrong. At the moment, a stack is being used to control each span, which represents a synchronous sequence of function calls well; with more development this structure may need to change. I did not try coroutines to verify the behavior, but I guess the same problem remains. I think the majority of use cases need this context propagation to work correctly.

    There probably exist more scenarios where the traces are not collected correctly, so the current state is not ready to be used and trusted. Also, the OpenTelemetry libraries available for Python are receiving commits frequently, some of the docs are not up to date, and some research is needed during development.

    Examples

    A simple example was written to verify how things are working. This example traces something similar to the first diagram above, where a block is spawned in a separate thread. This spawned thread will spawn a separate block that sleeps for 1 second, plus a blocking call to calculate the Fibonacci of n.

    To verify metric collection, a counter was added to the Fibonacci calls (since it is recursive, it is more interesting), and a time evaluation was added to the root block. At the end, the collected metrics are printed to stdout. The code is:

    import threading
    import time
    
    from src.observability.decorator import default_tracer, trace, count, default_measurer, observe
    
    
    @trace("identity")
    def identity_fibo():
        def lazy():
            default_tracer.add_property("identify", threading.get_ident())
            time.sleep(1)
        t = threading.Thread(target=lazy)
        t.start()
        t.join()
    
    
    @count(name="fibonacci", description="Count how many times Fibonacci was called")
    def fib(n):
        if n < 2:
            return n
        return fib(n-1) + fib(n-2)
    
    
    @trace("fibo")
    def middleware(n):
        default_tracer.add_property("nth", str(n))
        identity_fibo()
        return fib(n)
    
    
    @trace("base")
    @observe(name="fibonacci_caller", description="Time took by the caller to execute 10 fibonacci calls")
    def traced_function(n):
        for i in range(10):
            multiplier = i + 1
            default_tracer.add_property("idx-" + str(multiplier), str(middleware(n * multiplier)))
    
    
    if __name__ == '__main__':
        threads = [threading.Thread(target=traced_function, args=(3,)) for _ in range(2)]
        [t.start() for t in threads]
        [t.join() for t in threads]
    
        print(default_measurer.export().decode('utf-8'))
    
    

    The collected metrics printed to stdout are:

    # HELP fibonacci_caller Time took by the caller to execute 10 fibonacci calls
    # TYPE fibonacci_caller summary
    fibonacci_caller_count 2.0
    fibonacci_caller_sum 96.8708688889983
    # HELP fibonacci_caller_created Time took by the caller to execute 10 fibonacci calls
    # TYPE fibonacci_caller_created gauge
    fibonacci_caller_created 1.6168688051154459e+09
    # HELP fibonacci_total Count how many times Fibonacci was called
    # TYPE fibonacci_total counter
    fibonacci_total 7.049132e+06
    # HELP fibonacci_created Count how many times Fibonacci was called
    # TYPE fibonacci_created gauge
    fibonacci_created 1.6168688061167545e+09
    

    The collected traces can be seen on the Jaeger dashboard:

    image

    To use this, the Jaeger collector needs to be exposed. This can be done using the available docker image:

    $ docker run -p 16686:16686 -p 6831:6831/udp -p 14250:14250 jaegertracing/all-in-one
    

    Next steps

    To avoid the need for a Jaeger container running locally, create a NoopTracer where nothing is really traced, based on the configured environment. In this test version nothing is configurable, so adding a configurable client is also needed. Then, the focus is on solving context propagation between threads/coroutines and adding some tests to validate it.

    I started this just for fun, but if you think it is something nice to have, I could work on it in my spare time (classes and work consume most of my time) and try to fix everything needed for a working version. Since Python is not my native language, some things will probably need changes.

    Tools used

    I chose OpenTelemetry because they are part of the CNCF and are trying to set standards for observability and telemetry. Jaeger for collecting data was also chosen because it is an open-source tool and also part of the CNCF. Another option that works really well for this scenario is Datadog, but it is a paid solution.

    For metrics, the Prometheus option also seems natural, following the same logic as the other tools.

    opened by jabolina 2
  • Memory footprint analysis & optimisation

    Memory footprint analysis & optimisation

    Issue

    As noted in #15, the current memory footprint is actually rather large. It makes sense to conduct an investigation to find out how overall memory usage breaks down and whether there's any optimisation to be done.

    Proposed solution

    Investigation using available tools and frameworks for memory profiling.

    refactor 
    opened by ruiconti 0
  • Stricter static-type checking

    Stricter static-type checking

    Issue

    Right now, static type checking is being done half-heartedly. We should embrace it and be as strict as we can.

    Proposed solution

    Proper type facilities as well as a rigorous and unforgiving static type check using mypy.
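A strict baseline configuration might look roughly like this (the exact flag set is a suggestion, to be tuned per module):

```ini
# mypy.ini — a strict baseline (suggested flags; relax per-module as needed)
[mypy]
python_version = 3.9
disallow_untyped_defs = True
disallow_any_generics = True
no_implicit_optional = True
warn_return_any = True
warn_unused_ignores = True
strict_equality = True
```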

    enhancement refactor review 
    opened by ruiconti 0
Owner
T10
🛠 We make computers do what we want. Mostly on the web.