Blazingly-fast :rocket:, rock-solid, local application development :arrow_right: with Kubernetes.

Overview

Gefyra

Gefyra gives Kubernetes ("cloud-native") developers a completely new way of writing and testing their applications. Gone are the days of custom Docker Compose setups, Vagrant boxes, custom scripts and other workarounds for developing (micro-)services for Kubernetes.

Gefyra allows you to:

  • run services locally on a developer machine
  • operate feature branches in a production-like Kubernetes environment with all adjacent services
  • write code in the IDE you already love, be fast, be confident
  • leverage all the neat development features, such as debuggers, code hot-reloading and environment variable overrides
  • run high-level integration tests against all dependent services
  • keep peace of mind when pushing new code to the integration environment

Gefyra was architected to be fast and robust on an average developer machine and supports most platforms.

What is Gefyra?

Gefyra is a toolkit written in Python that arranges a local development setup in order to produce software for and with Kubernetes while having fun. It is installed on the development computer and starts its work on demand. Gefyra runs as a user-space application and controls the local Docker host and Kubernetes via the Kubernetes Python client.

Gefyra controls docker and kubeapi

(kubectl is not strictly required, but it makes sense to include it in this picture.)

In order for this to work, a few requirements have to be satisfied:

  • a Docker host must be available for the user on the development machine
  • there are a few container capabilities required on both sides, within the Kubernetes cluster and on the local computer
  • a node port must be opened up on the development cluster for the duration of the development work

Gefyra intercepts the target application running in the cluster and tunnels all traffic hitting that container to the one running locally. Developers can then add new code, fix bugs or simply introspect the traffic, and run it right away in the Kubernetes cluster. Gefyra provides the entire infrastructure to do so, along with a high level of developer convenience.

Did I hear developer convenience?

The idea is to relieve developers of the hassle of shipping containers back and forth to the integration system. Instead, bring the integration system closer to the developer and make the development cycles as short as possible. No more waiting for CI to complete just to see the service fail on the first request. Cloud-native (or Kubernetes-native) technologies have completely changed the developer experience: infrastructure is increasingly becoming part of the developer's business, with all the barriers and obstacles that entails.
Gefyra is here to provide a development workflow with the highest convenience possible. It brings low setup times, rapid development, a high release cadence and super-satisfied managers.

Installation

Todo

How does it work?

In order to write software for and with Kubernetes, a Kubernetes cluster is obviously required. There are already a number of Kubernetes distributions available to run everything locally. A cloud-based Kubernetes cluster can be connected as well in order to spare the development computer from blasting off. A working KUBECONFIG connection with appropriate permissions is required, which should always be the case for local clusters. Gefyra installs the required cluster-side components by itself once a development setup is about to be established.

Gefyra connects to a Kubernetes cluster

With these components, Gefyra is able to control the local development machine and the development cluster, too. Both sides are now in Gefyra's hands.
Once the developer's work is done, Gefyra well and truly removes all components from the cluster without leaving a trace.

A few things are required in order to achieve this:

  • a tunnel between the local development machine and the Kubernetes cluster
  • a local end of that tunnel to steer the traffic, DNS, and encrypt everything passing over the line
  • a cluster end of the tunnel, forwarding traffic, taking care of the encryption
  • a local DNS resolver that behaves like the cluster DNS
  • sophisticated IP routing mechanisms
  • a traffic interceptor for containers already running within the Kubernetes cluster
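
The local-DNS requirement from the list above can be illustrated with a short sketch (plain Python for illustration, not Gefyra code): Kubernetes pods resolve short service names by expanding them with search domains, and a local resolver that behaves like the cluster DNS must apply the same expansion. The namespace `default` and the `ndots` value of 5 are the usual Kubernetes defaults, assumed here for the example.

```python
def candidate_names(name: str, namespace: str = "default", ndots: int = 5):
    """Return the fully qualified names a cluster-style resolver tries, in order."""
    search = [
        f"{namespace}.svc.cluster.local",  # the pod's own namespace first
        "svc.cluster.local",
        "cluster.local",
    ]
    if name.endswith("."):                 # already fully qualified: use as-is
        return [name.rstrip(".")]
    candidates = [f"{name}.{domain}" for domain in search]
    if name.count(".") >= ndots:           # enough dots: try the bare name first
        candidates.insert(0, name)
    else:                                  # otherwise the bare name is tried last
        candidates.append(name)
    return candidates

# a short name like "backend" is first resolved within the pod's own namespace
print(candidate_names("backend"))
```

A local resolver that skips this expansion would break lookups such as `backend` or `backend.other-namespace` that work inside the cluster.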

Gefyra builds on top of the following popular open-source technologies:

Docker

Docker is currently used to manage the local container-based development setup, including the host, networking and container management procedures.

Wireguard

Wireguard is used to establish the connection tunnel between the two ends. It securely encrypts the UDP-based traffic and makes it possible to create a site-to-site network for Gefyra. That way, the development setup becomes part of the cluster and locally running containers are actually able to reach cluster-based resources, such as databases, other microservices and so on.
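
As a rough illustration, a Wireguard peer configuration for such a site-to-site setup could look as follows. All keys, addresses and the endpoint are placeholders, not values Gefyra actually generates; only the UDP node port 31820 matches the service visible in the logs quoted further down in this document.

```ini
[Interface]
PrivateKey = <cargo-private-key>      ; generated ad hoc per session
Address    = 192.168.99.2/32          ; placeholder tunnel address
DNS        = 192.168.99.1             ; placeholder, points at the tunnel DNS

[Peer]
PublicKey           = <stowaway-public-key>
Endpoint            = <cluster-node-ip>:31820   ; the opened node port
AllowedIPs          = 0.0.0.0/0
PersistentKeepalive = 21
```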

CoreDNS

CoreDNS provides local DNS functionality. It allows resolving resources running within the Kubernetes cluster.
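
A minimal sketch of what such a setup can look like in a Corefile; the upstream addresses are placeholders for the cluster DNS and a regular public resolver:

```
cluster.local:53 {
    # lookups for cluster resources are forwarded into the cluster
    forward . <cluster-dns-ip>
}
.:53 {
    # everything else goes to an ordinary upstream resolver
    forward . <upstream-resolver-ip>
}
```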

Nginx

Nginx is used for all kinds of proxying and reverse proxying of traffic, including the interception of containers already running in the cluster.

Architecture of the entire development system

Local development setup

The local development happens with a running container instance of the application in question on the developer machine. Gefyra takes care of the local Docker host setup, and hence needs access to it. It creates a dedicated Docker network to which the container is deployed. Next to the application under development, Gefyra places a sidecar container. This container, as a component of Gefyra, is called Cargo.
Cargo acts as a network gateway for the app container and, as such, takes care of the IP routing into and out of the cluster. In addition, Cargo provides a CoreDNS server which forwards all requests to the cluster. That way, the app container is able to resolve cluster resources, while domain names that are not supposed to be resolved remain unresolvable (think of isolated application scenarios). Cargo encrypts all passing traffic with Wireguard, using ad-hoc connection secrets.

Gefyra local development

This local setup allows developers to use their existing tooling, including their favorite code editor and debuggers. The application, if it supports it, can perform code hot-reloading upon changes and pipe logging output to a local shell (or other systems).
Of course, developers are able to mount local storage volumes into the container, override environment variables and modify everything as they'd like to.
In Gefyra this action is called bridge: from an architectural perspective, the application is bridged into the cluster. If the container is already running within a Kubernetes Pod, it gets replaced, and all traffic to the originally running container is proxied to the one on the developer machine.
During the container startup of the application, Gefyra modifies the container's networking from the outside and sets the default gateway to Cargo. That way, all of the container's traffic is passed to the cluster via Cargo's encrypted tunnel. The same procedure can be applied to multiple app containers at the same time.

The neat part is that with a debugger and two or more bridged containers, developers can introspect requests from the source to the target and back around while being attached to both ends.

The bridge operation in action

This chapter covers the important bridge operation by following an example.

Before the bridge operation

Think of a provisioned Kubernetes cluster running some workload. There is an Ingress, Kubernetes Services and Pods running containers. Some of them use the "sidecar" pattern.

Gefyra development workflow_step1

Preparing the bridge operation

Before the bridge can happen, Gefyra installs all required components into the cluster. A valid and privileged connection must be available on the developer machine to do so.
The main component is the cluster agent called Stowaway. Stowaway controls the cluster side of the tunnel connection. It is operated by Gefyra's Operator application.

Gefyra development workflow step 2

Stowaway boots up and dynamically creates Wireguard connection secrets (a private/public key pair) for itself and for Cargo. Gefyra copies these secrets to Cargo so it can establish a connection. This is a UDP connection, and it requires a Kubernetes Service of type NodePort to let the traffic pass through for the duration of an active bridge operation. Gefyra's Operator installs these components with the requested parameters and removes them after the session terminates.
By the way: Gefyra's Operator also removes all components, including itself, from the cluster in case the connection is disrupted for some time.
Once a connection has been established from Cargo to Stowaway, Gefyra spins up the app container on the local side for the developer to start working.
Another job of Gefyra's Operator is to rewrite the target Pods, i.e. to replace the running container with Gefyra's proxy, called Carrier.
For that, it creates a temporary Kubernetes Service that channels the Ingress traffic (or any other kind of cluster-internal traffic) to the container, through Stowaway and Cargo, to the locally running app container.
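
The NodePort Service mentioned above has roughly the following shape. The name and the port mapping (51820:31820/UDP) match the gefyra-stowaway-wireguard service visible in the issue logs quoted later in this document; the namespace and the selector label are assumptions for illustration, not the exact manifest the Operator creates.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: gefyra-stowaway-wireguard
  namespace: gefyra            # assumed namespace
spec:
  type: NodePort
  selector:
    app: gefyra-stowaway       # assumed label on the Stowaway pod
  ports:
    - name: wireguard
      protocol: UDP
      port: 51820              # Wireguard port on Stowaway
      targetPort: 51820
      nodePort: 31820          # opened on the cluster node for Cargo
```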

During the bridge operation

A bridge can robustly run for as long as it is required (provided the connection does not drop in the meantime). Looking at the example, Carrier was installed in the Pod on port XY. That port was previously occupied by the container originally running there. In most cases, the local app container represents the development version of that originally provisioned container. Traffic coming from the Ingress and passing on to the Service hits Carrier (the proxy). Carrier bends the request to flow through Gefyra's Service to the local app container via Stowaway's and Cargo's tunnel. This works because the app container's IP is routable from within the cluster.
The local app container does not simply return a response, but first fires a subsequent request of its own to the Service. This request roams from the local app container back into the cluster and hits the Pod's container via the Service. The response is awaited.
Once the local app container is done constructing its initial answer, the response travels back to Carrier, then to the Ingress and on to the client.

Gefyra development workflow step 3

With that, the local development container is reachable exactly the same way another container from within the cluster would be. That fact is a major advantage, especially for frontend applications or domain-sensitive services.
Developers now can run local integration tests with new software while having access to all interdependent services.
Once the development job is done, Gefyra properly removes everything, resets the Pod to its original configuration, and tears down the local environment (as if nothing ever happened).

Doge is excited about that.

Doge is excited

Credits

Todo

Comments
  • gefyra up: secrets "gefyra-cargo-connection" not found

    Tried setting up gefyra, but it errored out with the below error:

    ➜  ~ gefyra version
    [INFO] Gefyra client version: 0.8.1
    

    Logs:

    ➜  ~ gefyra -d up
    [INFO] There was no --endpoint argument provided. Connecting to a local Kubernetes node.
    [INFO] Installing Gefyra Operator
    [DEBUG] Creating Docker network
    [INFO] Created network 'gefyra' (63ea1b4a3c)
    [DEBUG] Network {'Name': 'gefyra', 'Id': '63ea1b4a3c7db6343d701f981c2ecef650db3800911de5c8d61517c51bac5', 'Created': '2022-07-13T20:05:14.75968771Z', 'Scope': 'local', 'Driver': 'bridge', 'EnableIPv6': False, 'IPAM': {'Driver': 'default', 'Options': None, 'Config': [{'Subnet': '172.22.0.0/16'}]}, 'Internal': False, 'Attachable': False, 'Ingress': False, 'ConfigFrom': {'Network': ''}, 'ConfigOnly': False, 'Containers': {}, 'Options': {}, 'Labels': {}}
    [INFO] Container image "quay.io/gefyra/operator:0.8.1" already present on machine
    [INFO] Operator became ready in 190.4024 seconds
    [ERROR] Not Found: {'kind': 'Status', 'apiVersion': 'v1', 'metadata': {}, 'status': 'Failure', 'message': 'secrets "gefyra-cargo-connection" not found', 'reason': 'NotFound', 'details': {'name': 'gefyra-cargo-connection', 'kind': 'secrets'}, 'code': 404}
    
     ~ oc get all
    NAME                                   READY   STATUS    RESTARTS   AGE
    pod/gefyra-operator-579fb7d567-s6qrp   1/1     Running   0          3m15s
    
    NAME                                TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)           AGE
    service/gefyra-stowaway-rsync       ClusterIP   None            <none>        10873/TCP         3m13s
    service/gefyra-stowaway-wireguard   NodePort    172.30.126.77   <none>        51820:31820/UDP   3m13s
    
    NAME                              READY   UP-TO-DATE   AVAILABLE   AGE
    deployment.apps/gefyra-operator   1/1     1            1           3m16s
    deployment.apps/gefyra-stowaway   0/1     0            0           3m13s
    
    NAME                                         DESIRED   CURRENT   READY   AGE
    replicaset.apps/gefyra-operator-579fb7d567   1         1         1       3m17s
    replicaset.apps/gefyra-stowaway-68886d4c9c   1         0         0       3m14s
    ➜  ~ oc get secrets
    NAME                              TYPE                                  DATA   AGE
    builder-dockercfg-hj9sl           kubernetes.io/dockercfg               1      87s
    builder-token-6jznc               kubernetes.io/service-account-token   4      87s
    builder-token-sjd5s               kubernetes.io/service-account-token   4      87s
    default-dockercfg-trn5j           kubernetes.io/dockercfg               1      87s
    default-token-8wqtw               kubernetes.io/service-account-token   4      87s
    default-token-hk2ww               kubernetes.io/service-account-token   4      87s
    deployer-dockercfg-zsmqk          kubernetes.io/dockercfg               1      87s
    deployer-token-kndfn              kubernetes.io/service-account-token   4      87s
    deployer-token-nwk2q              kubernetes.io/service-account-token   4      87s
    gefyra-operator-dockercfg-f9vv9   kubernetes.io/dockercfg               1      87s
    gefyra-operator-token-r5ptn       kubernetes.io/service-account-token   4      87s
    gefyra-operator-token-sl6g8       kubernetes.io/service-account-token   4      87s
    

    oc version:

    Client Version: v4.2.0-alpha.0-1420-gf1f09a3
    Server Version: 4.8.43
    Kubernetes Version: v1.21.11+6b3cbdd
    
    question 
    opened by ilovechai 20
  • "[Errno 8] nodename nor servname provided, or not known" error when running `gefyra up`

    Hi, I'm following the instruction to try gefyra but when I run gefyra up, it fails with this error.

    Here is the full log for gefyra --debug up:

    [INFO] There was no --endpoint argument provided. Connecting to a local Kubernetes node.
    [INFO] Installing Gefyra Operator
    [DEBUG] Creating Docker network
    [CRITICAL] There was an error running Gefyra: [Errno 8] nodename nor servname provided, or not known
    

    I've already some docker networks so maybe there is some conflict with existing networks:

    docker network ls
    
    NETWORK ID     NAME                     DRIVER    SCOPE
    5c4c368c011f   bridge                   bridge    local
    17e511573627   host                     host      local
    914dfb2d33b0   compose-local_default    bridge    local
    5dbca0682625   none                     null      local
    c16bf9515da4   takeout                  bridge    local
    

    System info: OS: macOs Big Sur 11.6 Docker desktop: 4.8.2 (engine: 20.10.14) Gefyra installed from homebrew

    bug 
    opened by cappuc 16
  • Wireguard Connection not established when using Colima

    What happened?

    Running gefyra up is a bit slow, waiting for the operator to be ready:

    [INFO] There was no --endpoint argument provided. Connecting to a local Kubernetes node.
    [INFO] Installing Gefyra Operator
    [INFO] Created network 'gefyra' (a2bbf4fae3)
    [INFO] Pulling image "quay.io/gefyra/operator:0.11.4"
    [INFO] Successfully pulled image "quay.io/gefyra/operator:0.11.4" in 8.311726046s
    [INFO] Pulling image "quay.io/gefyra/stowaway:0.11.4"
    [INFO] Successfully pulled image "quay.io/gefyra/stowaway:0.11.4" in 15.011779716s
    [INFO] Operator became ready in 87.6396 seconds
    [INFO] Deploying Cargo (network sidecar) with IP 172.18.0.149
    

    The operator has the following error in the log:

    [2022-10-18 21:01:34,545] kopf.activities.star [ERROR   ] Activity 'check_gefyra_components' failed with an exception. Will retry.
    Traceback (most recent call last):
      File "/usr/lib/python3.9/kopf/_core/actions/execution.py", line 279, in execute_handler_once
        result = await invoke_handler(
      File "/usr/lib/python3.9/kopf/_core/actions/execution.py", line 374, in invoke_handler
        result = await invocation.invoke(
      File "/usr/lib/python3.9/kopf/_core/actions/invocation.py", line 116, in invoke
        result = await fn(**kwargs)  # type: ignore
      File "/app/gefyra/handler/components.py", line 211, in check_gefyra_components
        await aw_wireguard_ready
      File "/app/gefyra/stowaway.py", line 80, in get_wireguard_connection_details
        stream_copy_from_pod(
      File "/app/gefyra/utils.py", line 112, in stream_copy_from_pod
        raise e
      File "/app/gefyra/utils.py", line 107, in stream_copy_from_pod
        member = tar.getmember(source_path.split("/", 1)[1])  
      File "/usr/lib/python3.9/tarfile.py", line 1790, in getmember
        raise KeyError("filename %r not found" % name)
    KeyError: "filename 'config/peer1/peer1.conf' not found"
    

    And after those, gefyra is not working and running gefyra status indicates it:

    {
      "summary": "Gefyra is not running properly",
      "cluster": {
        "connected": true,
        "operator": true,
        "operator_image": "quay.io/gefyra/operator:0.11.4",
        "stowaway": true,
        "stowaway_image": "quay.io/gefyra/stowaway:0.11.4",
        "namespace": true
      },
      "client": {
        "version": "0.11.4",
        "cargo": true,
        "cargo_image": "gefyra-cargo:20221019000235",
        "network": true,
        "connection": false,
        "containers": 0,
        "bridges": 0,
        "kubeconfig": "~/.kube/config",
        "context": "colima",
        "cargo_endpoint": "192.168.5.2:31820"
      }
    }
    

    What did you expect to happen?

    No errors

    How can we reproduce it (as minimally and precisely as possible)?

    on M1 mac:

    • brew install colima
    • colima start
    • gefyra up
    • gefyra status

    What Kubernetes setup are you working with?

    k3s ver: v1.23.6+k3s1

    OS version

    Darwin ventsislavg 22.1.0 Darwin Kernel Version 22.1.0: Tue Sep 27 22:08:45 PDT 2022; root:xnu-8792.41.6~5/RELEASE_ARM64_T6000 arm64

    Anything else we need to know?

    The current version doesn't support docker contexts. Check this issue for a workaround: https://github.com/gefyrahq/gefyra/issues/210

    bug 
    opened by ventsislav-georgiev 14
  • Gefyra Bridge: dict object cannot be interpreted as an integer

    I am trying to run gefyra bridge like this:

    gefyra bridge -N myspacecraft -n default --deployment spacecrafts --container-name spacecrafts -p 8000:8000
    [INFO] Creating bridge for Pod spacecrafts-64fd95475c-9gdzh
    [INFO] Creating bridge for Pod spacecrafts-postgresql-0
    [INFO] Waiting for the bridge(s) to become active
    [INFO] Bridge spacecrafts-ireq-20220705102421-0 established
    [CRITICAL] There was an error running Gefyra: 'dict' object cannot be interpreted as an integer
    
    bug triage 
    opened by georgkrause 8
  • Add support for Rancher desktop

    What is the new feature about?

    I want to use Gefyra with Rancher desktop (containerd+k3s) but they don't work nicely together. Basically gefyra up can't connect in any way I've tried to the running k8s. I've tried many different combinations of --endpoint and without endpoint with no luck. Here is a sample:

    $ gefyra up --endpoint 192.168.5.15:31820 --kubeconfig=~/.kube/config
    [INFO] Installing Gefyra Operator
    [INFO] Created network 'gefyra' (b28c846440)
    [INFO] Container image "quay.io/gefyra/operator:0.13.1" already present on machine
    [INFO] Pulling image "quay.io/gefyra/stowaway:0.13.1"
    [INFO] Successfully pulled image "quay.io/gefyra/stowaway:0.13.1" in 613.015103ms
    [INFO] Operator became ready in 3.4344 seconds
    [INFO] Deploying Cargo (network sidecar) with IP 192.168.96.149
    [ERROR] Gefyra could not successfully confirm the Wireguard connection working. Please make sure you are using the --endpoint argument for remote clusters and that 192.168.5.15:31820 can reach Kubernetes node port 31820 from this machine. Please check your firewall settings, too. If you are running a local Minikube cluster, please use the 'gefyra up --minikube' flag.
    [INFO] Removing running bridges
    [INFO] Uninstalling Operator
    ^[[A[INFO] Removing Cargo
    [INFO] Removing Docker network gefyra
    

    Why would such a feature be important to you?

    Gefyra + Rancher desktop will be a great stack

    Anything else we need to know?

    No response

    enhancement 
    opened by nforced 7
  • Improved DX for gefyra bridge action

    Implements changes suggested in #137. At the Kubernetes Community Days Munich @SteinRobert mentioned that this issue is waiting to be fixed, so here you go :-).

    @Schille said "... automatically name bridges following this scheme: /" This doesn't work as Kubernetes resource names can't contain a slash. Therefore I replaced the slash with a dot. I now removed the bridge name option completely as @Schille had the more convincing argument, but this decision is not mine to make.

    Another improvement would be to automatically select the container if there is only a single one inside the pod.

    This is only a draft. I am grateful for any criticism and feedback!

    opened by knorr3 7
  • Does this work on a remote cluster?

    Hi gefyra,

    Thanks for this great software. I did try this on a remote cluster but got a message I don't have rights to create a namespace. Some digging in the source indicated this only works within a 'gefyra' namespace.

    To me it seems this makes it (currently ?) unable to work inside a cloud-based cluster with namespaces already defined ? Is this correct ?

    Thanks !

    question 
    opened by sander76 7
  • docker context is not respected

    What happened?

    Running gefyra status:

    [CRITICAL] Docker init error: Error while fetching server API version: ('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))
    [ERROR] Could not create a valid configuration: Docker init error. Docker host not running?
    [INFO] There was no --endpoint argument provided. Connecting to a local Kubernetes node.
    [CRITICAL] Docker init error: Error while fetching server API version: ('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))
    [ERROR] Could not create a valid configuration: Docker init error. Docker host not running?
    [CRITICAL] Docker init error: Error while fetching server API version: ('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))
    [CRITICAL] There was an error running Gefyra: Docker init error. Docker host not running?
    

    What did you expect to happen?

    I am using a non-default docker context. Running docker context ls

    NAME              DESCRIPTION                               DOCKER ENDPOINT                                         KUBERNETES ENDPOINT                ORCHESTRATOR
    colima *          colima                                    unix:///Users/ventsislavg/.colima/default/docker.sock
    default           Current DOCKER_HOST based configuration   unix:///var/run/docker.sock                             https://127.0.0.1:6443 (default)   swarm
    

    How can we reproduce it (as minimally and precisely as possible)?

    on macOS:

    brew install colima
    colima start
    docker info
    gefyra status
    

    What Kubernetes setup are you working with?

    N/A

    OS version

    Darwin ventsislavg 22.1.0 Darwin Kernel Version 22.1.0: Tue Sep 27 22:08:45 PDT 2022; root:xnu-8792.41.6~5/RELEASE_ARM64_T6000 arm64

    Anything else we need to know?

    No response

    bug 
    opened by ventsislav-georgiev 6
  • Allow pulling docker image from a private docker registry

    We are running dev machines without access to the internet and gefyra up gets stuck pulling the quay.io/gefyra/operator:latest.

    In general for pulling docker images, we have an artifactory instance in which we set up a cache of the quay.io registry. So, we are able to pull the gefyra/operator:latest image via <artifactory-url>/quay.io-cache/gefyra/operator:latest.

    The question is, can adding support for pulling the docker images from a private registry other than quay.io be considered?

    documentation enhancement 
    opened by netrounds-guillaume 6
  • Gefyra error 109 on windows

    What happened?

    ran deck get deck.yaml with beiboot running ontop of EKS ran $env:KUBECONFIG=<beiboot cluster config> ran gefyra up

    (109, 'ReadFile', 'The pipe has been ended.')

    see debug log attached

    gefyra.log

    What did you expect to happen?

    I expected for the gefyra up command to complete

    How can we reproduce it (as minimally and precisely as possible)?

    get deck with beiboot running ontop of EKS
    $env:KUBECONFIG=<beiboot cluster config>
    gefyra up
    

    What Kubernetes setup are you working with?

    $ kubectl version
    WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short.  Use --output=yaml|json to get the full version. 
    
    Client Version: version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.2", GitCommit:"5835544ca568b757a8ecae5c153f317e5736700e", GitTreeState:"clean", BuildDate:"2022-09-21T14:33:49Z", GoVersion:"go1.19.1", Compiler:"gc", Platform:"windows/amd64"} 
    
    Kustomize Version: v4.5.7 
    
    Server Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.3+k3s1", GitCommit:"990ba0e88c90f8ed8b50e0ccd375937b841b176e", GitTreeState:"clean", BuildDate:"2022-07-19T01:10:03Z", GoVersion:"go1.18.1", Compiler:"gc", Platform:"linux/amd64"}

    OS version

    C:\> wmic os get Caption, Version, BuildNumber, OSArchitecture
    BuildNumber  Caption                                   OSArchitecture  Version
    17763        Microsoft Windows Server 2019 Datacenter  64-bit          10.0.17763

    Anything else we need to know?

    using docker desktop v4.13.1
    not using wsl

    version: "1" 
    cluster: 
      provider: beiboot 
      name: beka 
      nativeConfig: 
        timeouts: 
         api: 60  # in seconds, defaults to 30 
         cluster: 720  # in seconds, defaults to 180 
        context: arn:aws:eks:us-east-1:236287118453:cluster/crb-devops-devops 
        ports: 
          - port: 8080:80 
    decks: 
      - name: beka
        namespace: default 
        sources: 
          # ecr update 
          - type: file 
            ref: ./image_pull_secret_chron.yaml 
          # K8s Dashboard 
          - type: helm 
            ref: https://kubernetes.github.io/ingress-nginx 
            chart: ingress-nginx 
            releaseName: ingress-nginx 
            parameters: 
              - name: controller.admissionWebhooks.enabled 
                value: false 
              - name: controller.ingressClassResource.default 
                value: true 
          # Kubernetes dashboard 
          - type: helm 
            ref: https://kubernetes.github.io/dashboard/ 
            chart: kubernetes-dashboard 
            releaseName: dashboard 
            parameters: 
            - name: ingress.enabled 
              value: true 
            - name: ingress.hosts 
              value: '{dashboard.127.0.0.1.nip.io}' 
            - name: ingress.class 
              value: nginx 
            - name: protocolHttp 
              value: true 
            - name: service.externalPort 
              value: 8080 
            - name: serviceAccount.create 
              value: true 
            - name: serviceAccount.name 
              value: kubernetes-dashboard 
          - type: inline 
            content: 
              apiVersion: rbac.authorization.k8s.io/v1 
              kind: ClusterRoleBinding 
              metadata: 
                name: kubernetes-dashboard 
                namespace: default 
              roleRef: 
                apiGroup: rbac.authorization.k8s.io 
                kind: ClusterRole 
                name: cluster-admin 
              subjects: 
                - kind: ServiceAccount 
                  name: kubernetes-dashboard 
                  namespace: default 
                - kind: ServiceAccount 
                  name: default 
                  namespace: default 
          # Backend Resources 
          - type: helm 
            ref: ./helm-chart/rabbitmq/ 
            chart: rabbitmq 
            version: "0.1.0" 
            releaseName: rabbitmq 
            namespace: default 
            valueFiles: 
              - /sources/helm-chart/rabbitmq/inf.yaml 
          - type: helm 
            ref: ./helm-chart/redis 
            releaseName: redis 
            chart: redis 
            namespace: default 
            valueFiles: 
              - /sources/helm-chart/redis/inf.yaml 
          - type: helm 
            ref: ./helm-chart/sql-server 
            chart: sql-server 
            namespace: default 
            releaseName: sql 
            valueFiles: 
              - /sources/helm-chart/sql-server/inf.yaml 
          - type: helm 
            ref: ./helm-chart/orchestrator 
            chart: orchestrator 
            releaseName: orchestrator 
            namespace: default 
            valueFiles: 
              - /sources/helm-chart/orchestrator/values-local.yaml 
          - type: helm 
            ref: ./helm-chart/query 
            chart: query 
            releaseName: query-service 
            namespace: default 
            valueFiles: 
              - /sources/helm-chart/query/values-local.yaml  
          - type: helm 
            ref: ./helm-chart/monitor 
            chart: monitor 
            releaseName: monitor 
            namespace: default 
            valueFiles: 
              - /sources/helm-chart/monitor/values-local.yaml 
          - type: helm 
            ref: ./helm-chart/cos 
            chart: cos 
            releaseName: cos 
            namespace: default 
            valueFiles:
              - /sources/helm-chart/cos/values-local.yaml 
    
    bug 
    opened by eyammer 5
  • Add PodSecurity admission label to configure namespace as privileged

    This is required to run stowaway on PodSecurity Admission-enabled clusters and since it is only a label, it won't do any harm on other clusters.

    Signed-off-by: Mara Sophie Grosch [email protected]

    opened by LittleFox94 5
  • What's next?

    What's next?

    We have received some awesome feedback for Gefyra. We made it to v1 in 2022. Thank you to all the people who gave us feedback, opened issues and made valuable contributions.

    So what's next?

    Before building new features we'd like to increase the community and user base of Gefyra. Our hope is that with more users we will be able to better select which features are actually needed / requested.

    How do we grow the user base? We do appreciate anyone sharing Gefyra on any social networks or even among your fellow tech people. Blueshoe is sponsoring some ad-budget to help spread the word.

    On the development side we'd like to increase the accessibility of Gefyra. It potentially has a lot of flags you need to pass to make it work - constructing the commands can sometimes be cumbersome. We currently have 2 approaches on how this could be improved:

    1. An extension for Docker Desktop - a simple UI which lets you enter everything needed to run an image with Gefyra (in a certain namespace, with env variables...).
    2. A VSCode Extension - helping developers within their IDE - adding some UI elements to make the start/input easier. We're using VSCode a lot - if you have other suggestions feel free to comment here.

    If you have any ideas on how we could grow our community - we're very grateful for any help or suggestions!

    As soon as these things are done and stable we will look into further development of Gefyra itself. In the meantime we will still investigate any bugs or problems - so development is not stuck!

    Please let me know if you have questions, ideas or feedback for our team.


    Robert

    opened by SteinRobert 0
  • `gefyra up --minikube` not working when Minikube was created with a defined profile

    `gefyra up --minikube` not working when Minikube was created with a defined profile

    What happened?

    I started a Minikube cluster with the following parameters:

    minikube start -p beiboot --cpus=max --memory=4000 --driver=docker --addons=default-storageclass storage-provisioner
    

    Please mind the -p beiboot argument which creates a dedicated cluster for this particular profile. When up-ing Gefyra, it says:

    > gefyra up --minikube
    [CRITICAL] There was an error running Gefyra: Could not find the Minikube configuration at ~/.minikube/profiles/minikube/config.json. Did you start Minikube?
    

    What did you expect to happen?

    Well, I would expect Gefyra to tell me that no cluster is running for the default profile. However, it does not consider multiple profiles. In addition, there is no option to specify the Minikube profile I want to connect to.
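
    For context, Minikube keeps one config.json per profile under ~/.minikube/profiles/ (which is why the error above hard-codes the "minikube" default). A profile-aware lookup could be sketched like this (the function name and profile argument are made up for illustration, not Gefyra's API):

```python
from pathlib import Path

def minikube_config_path(profile: str = "minikube") -> Path:
    # Minikube stores per-profile configuration under
    # ~/.minikube/profiles/<profile>/config.json, so a cluster started with
    # `minikube start -p beiboot` lives in its own directory.
    return Path.home() / ".minikube" / "profiles" / profile / "config.json"

# Default profile vs. a named profile:
print(minikube_config_path())
print(minikube_config_path("beiboot"))
```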

    How can we reproduce it (as minimally and precisely as possible)?

    Please see above.

    What Kubernetes setup are you working with?

    $ kubectl version
    # paste output here
    

    OS version

    # On Linux:
    $ cat /etc/os-release
    # paste output here
    $ uname -a
    # paste output here
    
    # On Windows:
    C:\> wmic os get Caption, Version, BuildNumber, OSArchitecture
    # paste output here
    

    Anything else we need to know?

    No response

    bug enhancement 
    opened by Schille 1
  • unable to bridge - 'bool' object is not subscriptable

    unable to bridge - 'bool' object is not subscriptable

    What happened?

    I am testing out Gefyra as an alternative to Telepresence but hit an error in my initial testing.

    I have an EKS cluster that is already configured and heavily used. I can install gefyra via Homebrew without issue, and I can start gefyra without issue after opening UDP port 31820:

    gefyra up --endpoint <my node IP>:31820
    [INFO] Installing Gefyra Operator
    [INFO] Created network 'gefyra' (de529461f820)
    [INFO] Container image "quay.io/gefyra/operator:0.13.4" already present on machine
    [INFO] Pulling image "quay.io/gefyra/stowaway:0.13.4"
    [INFO] Successfully pulled image "quay.io/gefyra/stowaway:0.13.4" in 6.651035992s
    [INFO] Operator became ready in 15.2111 seconds
    [INFO] Deploying Cargo (network sidecar) with IP 172.25.0.149
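
    As a quick sanity check before running gefyra up, it can help to verify that the UDP port is not actively rejected. A best-effort sketch (UDP has no handshake, so only an ICMP port-unreachable error is conclusive; host and port here are placeholders):

```python
import socket

def udp_port_rejects(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if `host` actively rejects UDP traffic on `port`
    (an ICMP port-unreachable surfaces as ConnectionRefusedError).
    False is inconclusive: silence can mean open *or* filtered."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.connect((host, port))
        try:
            sock.send(b"\x00")
            sock.recv(1)  # a reply or the queued ICMP error surfaces here
        except ConnectionRefusedError:
            return True   # port actively closed
        except socket.timeout:
            return False  # no answer: open or filtered
    return False
```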
    

    I can start a local container using gefyra via:

    [INFO] Container image 'myapp:latest' started with name 'myLocalContainer' in Kubernetes namespace 'test-namespace' (from --namespace argument)
    /docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
    /docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
    /docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
    10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
    10-listen-on-ipv6-by-default.sh: info: /etc/nginx/conf.d/default.conf differs from the packaged version
    /docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
    /docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
    /docker-entrypoint.sh: Configuration complete; ready for start up
    2022/12/09 22:59:45 [notice] 17095#17095: using the "epoll" event method
    2022/12/09 22:59:45 [notice] 17095#17095: nginx/1.21.3
    2022/12/09 22:59:45 [notice] 17095#17095: built by gcc 8.3.0 (Debian 8.3.0-6)
    2022/12/09 22:59:45 [notice] 17095#17095: OS: Linux 5.10.76-linuxkit
    2022/12/09 22:59:45 [notice] 17095#17095: getrlimit(RLIMIT_NOFILE): 1048576:1048576
    2022/12/09 22:59:45 [notice] 17095#17095: start worker processes
    2022/12/09 22:59:45 [notice] 17095#17095: start worker process 17176
    2022/12/09 22:59:45 [notice] 17095#17095: start worker process 17177
    2022/12/09 22:59:45 [notice] 17095#17095: start worker process 17178
    2022/12/09 22:59:45 [notice] 17095#17095: start worker process 17179
    2022/12/09 22:59:45 [notice] 17095#17095: start worker process 17180
    2022/12/09 22:59:45 [notice] 17095#17095: start worker process 17181
    2022/12/09 22:59:45 [notice] 17095#17095: start worker process 17182
    2022/12/09 22:59:45 [notice] 17095#17095: start worker process 17183
    

    I can then docker exec into the running container in a separate shell: docker exec -it myLocalContainer bash. However, when I try to intercept traffic with a bridge via gefyra bridge -N myLocalContainer -n test-namespace --port 80:8000 --target deployment/test-deployment/myapp-container, it fails with:

    [INFO] Creating bridge for Pod test-namespace-myapp-659c598bbd-nptk5
    [CRITICAL] There was an error running Gefyra: 'bool' object is not subscriptable
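
    The --port value above is a plain "<source>:<target>" pair of port numbers. A minimal sketch of validating such a mapping (illustrative only, not Gefyra's actual parser):

```python
def parse_port_mapping(spec: str) -> tuple[int, int]:
    # Split a "<source>:<target>" pair such as the "80:8000" passed to
    # `gefyra bridge --port` and validate both sides are real port numbers.
    src, sep, dst = spec.partition(":")
    if sep != ":" or not (src.isdigit() and dst.isdigit()):
        raise ValueError(f"invalid port mapping: {spec!r}")
    src_port, dst_port = int(src), int(dst)
    if not all(0 < p < 65536 for p in (src_port, dst_port)):
        raise ValueError(f"port out of range: {spec!r}")
    return src_port, dst_port

print(parse_port_mapping("80:8000"))  # (80, 8000)
```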
    

    What did you expect to happen?

    I expected a bridge to start and be able to forward traffic to my local container.

    How can we reproduce it (as minimally and precisely as possible)?

    These are the steps I ran above.

    What Kubernetes setup are you working with?

    $ kubectl version
    WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short.  Use --output=yaml|json to get the full version.
    Client Version: version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.4", GitCommit:"872a965c6c6526caa949f0c6ac028ef7aff3fb78", GitTreeState:"clean", BuildDate:"2022-11-09T13:28:30Z", GoVersion:"go1.19.3", Compiler:"gc", Platform:"darwin/amd64"}
    Kustomize Version: v4.5.7
    Server Version: version.Info{Major:"1", Minor:"23+", GitVersion:"v1.23.13-eks-fb459a0", GitCommit:"55bd5d5cb7d32bc35e4e050f536181196fb8c6f7", GitTreeState:"clean", BuildDate:"2022-10-24T20:35:40Z", GoVersion:"go1.17.13", Compiler:"gc", Platform:"linux/amd64"}
    WARNING: version difference between client (1.25) and server (1.23) exceeds the supported minor version skew of +/-1
    

    OS version

    macOS Big Sur Version 11.6.8

    Anything else we need to know?

    I was just trying to go through https://gefyra.dev/getting-started/aws-eks/#running-gefyra any help would be appreciated!

    bug 
    opened by grant30mad 8
  • More portable tests / move to pytest

    More portable tests / move to pytest

    Taking a look at the tests they have become pretty difficult to read - and pretty much impossible to execute locally. Currently every test is based on executing the gefyra package through poetry/coverage on a Github Runner with several sleeps / workarounds to make the tests kind of stable.

    Some thoughts on what we should improve about the new tests:

    • tests should be readable
    • tests should be executable on the developer's machine
    • tests should be easily extendible
    • tests should not be flaky anymore
    • tests coverage should remain the same
    • tests should output more debuggable information[^1]
    • tests should check more than the exit code[^1]

    We could probably write "normal" python tests, which are then executed with pytest or something similar.

    [^1]: Currently our tests execute the whole command and we pretty much just check the exit code. It would probably only benefit us if we checked all the things that are actually changed during a command (is a container really running? is the port really available...). If a test fails we often just add more logging which leaves us still with a lot of guessing.

    CI/CD python 
    opened by SteinRobert 1
  • Running `gefyra up --endpoint` on nodes without public ip address

    Running `gefyra up --endpoint` on nodes without public ip address

    What is the new feature about?

    I have a cluster whose nodes don't have a public IP address. Therefore I can't use Gefyra. It would be nice to have a solution for that, as it is not an uncommon setup and might for example even be a security requirement for some companies.

    We also got the following feedback, which reports the same issue:

    Relying on NodePort is not yet viable for us as our nodes do not have public ip addresses. We would have to use LoadBalancer to make this work. In the future, developers might have VPN to gain access to NodePorts, but not yet. Could "port-forward" be used instead, or is that not performant enough?

    Port-forward can only do TCP, so that's out of the question. What about using a load balancer for Gefyra to connect to?

    Why would such a feature be important to you?

    It supports the case of clusters whose nodes don't have public IPs.

    Anything else we need to know?

    No response

    enhancement 
    opened by tschale 2
  • Gefyra equivalent of personal intercept

    Gefyra equivalent of personal intercept

    Does Gefyra have the equivalent of Ambassador Telepresence's personal intercept, where certain traffic can be forwarded to a developer's workstation based on something like an HTTP header passed from the client? If so, can someone point me to the documentation on how that's set up? If not, are there any plans to add this sort of functionality?

    Thanks! Brad

    question 
    opened by bab5470 7
Releases(1.0.0)
  • 1.0.0(Dec 16, 2022)

    1.0.0 is there! 🎉 Gefyra is now considered stable. This marks a big milestone in the development of Gefyra. We're very thankful for all the feedback we received and a special thanks to all our contributors!

    What's Changed

    • refactor(#217): --endpoint flag improvements by @SteinRobert in https://github.com/gefyrahq/gefyra/pull/234
    • feat(#211): add command check for pods by @SteinRobert in https://github.com/gefyrahq/gefyra/pull/257
    • chore(deps-dev): bump flake8-bugbear from 22.10.27 to 22.12.6 in /client by @dependabot in https://github.com/gefyrahq/gefyra/pull/297
    • chore(deps): bump certifi from 2022.6.15 to 2022.12.7 in /operator by @dependabot in https://github.com/gefyrahq/gefyra/pull/299
    • chore(deps): bump certifi from 2022.6.15 to 2022.12.7 in /client by @dependabot in https://github.com/gefyrahq/gefyra/pull/298
    • chore(deps-dev): bump black from 22.10.0 to 22.12.0 in /client by @dependabot in https://github.com/gefyrahq/gefyra/pull/300
    • chore(deps-dev): bump black from 22.10.0 to 22.12.0 in /operator by @dependabot in https://github.com/gefyrahq/gefyra/pull/301
    • fix(#302): missing exeception raise by @SteinRobert in https://github.com/gefyrahq/gefyra/pull/303
    • feat: add KeyboardInterrupt/EOF handling to gefyra by @SteinRobert in https://github.com/gefyrahq/gefyra/pull/295

    Full Changelog: https://github.com/gefyrahq/gefyra/compare/0.13.4...1.0.0

    Source code(tar.gz)
    Source code(zip)
    gefyra-1.0.0-darwin-universal.zip(29.15 MB)
    gefyra-1.0.0-linux-amd64.zip(29.74 MB)
    gefyra-1.0.0-windows-x86_64.zip(60.82 MB)
  • 0.13.4(Nov 26, 2022)

    What's Changed

    We made some minor refactorings and bumped some dependencies.

    • chore: fix track for binary checks by @SteinRobert in https://github.com/gefyrahq/gefyra/pull/287
    • fix: no track for pytest by @SteinRobert in https://github.com/gefyrahq/gefyra/pull/288
    • chore(deps-dev): bump flake8-black from 0.3.4 to 0.3.5 in /operator by @dependabot in https://github.com/gefyrahq/gefyra/pull/290
    • chore(deps-dev): bump flake8-black from 0.3.4 to 0.3.5 in /client by @dependabot in https://github.com/gefyrahq/gefyra/pull/289
    • chore(deps): bump kubernetes from 24.2.0 to 25.3.0 in /client by @dependabot in https://github.com/gefyrahq/gefyra/pull/227
    • chore(deps): bump kubernetes from 24.2.0 to 25.3.0 in /operator by @dependabot in https://github.com/gefyrahq/gefyra/pull/229
    • Remove dead code / more testing by @SteinRobert in https://github.com/gefyrahq/gefyra/pull/291
    • chore(deps-dev): bump flake8 from 5.0.4 to 6.0.0 in /operator by @dependabot in https://github.com/gefyrahq/gefyra/pull/293

    Full Changelog: https://github.com/gefyrahq/gefyra/compare/0.13.3...0.13.4

    Source code(tar.gz)
    Source code(zip)
    gefyra-0.13.4-darwin-universal.zip(29.13 MB)
    gefyra-0.13.4-linux-amd64.zip(29.73 MB)
    gefyra-0.13.4-windows-x86_64.zip(60.79 MB)
  • 0.13.3(Nov 18, 2022)

    What's Changed

    • fix: low timeout for gefyra up by @SteinRobert in https://github.com/gefyrahq/gefyra/pull/264
    • test: add tests for better coverage by @SteinRobert in https://github.com/gefyrahq/gefyra/pull/265
    • fix: pod spec fail by @SteinRobert in https://github.com/gefyrahq/gefyra/pull/266
    • chore: add human-friendly error message by @SteinRobert in https://github.com/gefyrahq/gefyra/pull/267
    • test: restructure test for minikube tests by @SteinRobert in https://github.com/gefyrahq/gefyra/pull/270
    • Resolve bridge issues by @SteinRobert in https://github.com/gefyrahq/gefyra/pull/246
    • chore(deps): bump kopf from 1.35.6 to 1.36.0 in /operator by @dependabot in https://github.com/gefyrahq/gefyra/pull/252
    • ci: add waits before bridging again by @SteinRobert in https://github.com/gefyrahq/gefyra/pull/272
    • fix: run mac codesign on publish only by @SteinRobert in https://github.com/gefyrahq/gefyra/pull/275
    • chore(deps): bump cli-tracker from 0.2.8 to 0.3.0 in /client by @dependabot in https://github.com/gefyrahq/gefyra/pull/273
    • Test for unique user id / telemetry by @SteinRobert in https://github.com/gefyrahq/gefyra/pull/276
    • chore: bump ubuntu ci/cd image by @SteinRobert in https://github.com/gefyrahq/gefyra/pull/277
    • chore(#278): do not fail fast on matrix by @SteinRobert in https://github.com/gefyrahq/gefyra/pull/279
    • chore(deps-dev): bump flake8-black from 0.3.3 to 0.3.4 in /client by @dependabot in https://github.com/gefyrahq/gefyra/pull/280
    • chore(deps-dev): bump flake8-black from 0.3.3 to 0.3.4 in /operator by @dependabot in https://github.com/gefyrahq/gefyra/pull/281
    • chore: check whether default kubeconfig exists by @SteinRobert in https://github.com/gefyrahq/gefyra/pull/283
    • feat: add blocking flag --wait for bridge by @SteinRobert in https://github.com/gefyrahq/gefyra/pull/285
    • CI Pipeline - Docker Image Build Optimization / PyTest singular execution by @SteinRobert in https://github.com/gefyrahq/gefyra/pull/286

    Full Changelog: https://github.com/gefyrahq/gefyra/compare/0.13.2...0.13.3

    Source code(tar.gz)
    Source code(zip)
    gefyra-0.13.3-darwin-universal.zip(29.14 MB)
    gefyra-0.13.3-linux-amd64.zip(29.73 MB)
    gefyra-0.13.3-windows-x86_64.zip(60.79 MB)
  • 0.13.2(Nov 11, 2022)

    This release adds the PodSecurity admission label to the Gefyra namespace. We also added a lot of improvements to our CI pipeline!

    What's Changed

    • fix(status-test): higher sleep before running status test by @SteinRobert in https://github.com/gefyrahq/gefyra/pull/237
    • fix(#241): pin python version for Github actions by @SteinRobert in https://github.com/gefyrahq/gefyra/pull/243
    • fix: dependabot github actions yaml format by @SteinRobert in https://github.com/gefyrahq/gefyra/pull/247
    • chore(deps): bump actions/setup-python from 2 to 4 by @dependabot in https://github.com/gefyrahq/gefyra/pull/248
    • chore(deps): bump actions/upload-artifact from 2 to 3 by @dependabot in https://github.com/gefyrahq/gefyra/pull/251
    • chore(deps): bump github/codeql-action from 1 to 2 by @dependabot in https://github.com/gefyrahq/gefyra/pull/250
    • chore(deps): bump actions/checkout from 2 to 3 by @dependabot in https://github.com/gefyrahq/gefyra/pull/249
    • feat(#253): run tests for multiple k8s versions by @SteinRobert in https://github.com/gefyrahq/gefyra/pull/254
    • Add PodSecurity admission label to configure namespace as privileged by @LittleFox94 in https://github.com/gefyrahq/gefyra/pull/258
    • test: add sanity check for mac binary by @SteinRobert in https://github.com/gefyrahq/gefyra/pull/260
    • chore: add binary checks by @SteinRobert in https://github.com/gefyrahq/gefyra/pull/261
    • chore(deps): bump docker from 6.0.0 to 6.0.1 in /client by @dependabot in https://github.com/gefyrahq/gefyra/pull/238
    • chore: add timeouts for python tests by @SteinRobert in https://github.com/gefyrahq/gefyra/pull/262

    New Contributors

    • @LittleFox94 made their first contribution in https://github.com/gefyrahq/gefyra/pull/258

    Full Changelog: https://github.com/gefyrahq/gefyra/compare/0.13.1...0.13.2

    Source code(tar.gz)
    Source code(zip)
    gefyra-0.13.2-darwin-universal.zip(29.13 MB)
    gefyra-0.13.2-linux-amd64.zip(29.72 MB)
    gefyra-0.13.2-windows-x86_64.zip(60.77 MB)
  • 0.13.1(Nov 2, 2022)

    In this release we fixed the misleading "missing endpoint" logging statement. Stowaway also got minor improvements to run more stably.

    What's Changed

    • chore: add debug output for homebrew release bot by @SteinRobert in https://github.com/gefyrahq/gefyra/pull/220
    • fix(#198): move logging missing endpoint logging statement to only ap… by @SteinRobert in https://github.com/gefyrahq/gefyra/pull/233
    • fix: add ipv4 forward on stowaway by @tschale in https://github.com/gefyrahq/gefyra/pull/236
    • chore(#221): add readiness probe for stowaway by @SteinRobert in https://github.com/gefyrahq/gefyra/pull/235

    New Contributors

    • @tschale made their first contribution in https://github.com/gefyrahq/gefyra/pull/236

    Full Changelog: https://github.com/gefyrahq/gefyra/compare/0.13.0...0.13.1

    Source code(tar.gz)
    Source code(zip)
    gefyra-0.13.1-darwin-universal.zip(28.82 MB)
    gefyra-0.13.1-linux-amd64.zip(29.42 MB)
    gefyra-0.13.1-windows-x86_64.zip(60.77 MB)
  • 0.13.0(Oct 31, 2022)

    What's Changed

    The install script now prints a warning in case any tooling is not available on your machine. The default behaviour for running containers is now attached mode; a -d flag was added for detached mode.

    • chore(deps-dev): bump flake8-bugbear from 22.9.23 to 22.10.25 in /client by @dependabot in https://github.com/gefyrahq/gefyra/pull/223
    • chore(deps-dev): bump pytest from 7.1.3 to 7.2.0 in /operator by @dependabot in https://github.com/gefyrahq/gefyra/pull/225
    • fix(#204): add wait time for status command by @SteinRobert in https://github.com/gefyrahq/gefyra/pull/219
    • chore(deps-dev): bump pytest from 7.1.3 to 7.2.0 in /client by @dependabot in https://github.com/gefyrahq/gefyra/pull/224
    • chore(deps-dev): bump flake8-bugbear from 22.10.25 to 22.10.27 in /client by @dependabot in https://github.com/gefyrahq/gefyra/pull/228
    • feat(installer): Check if all install dependencies are available by @georgkrause in https://github.com/gefyrahq/gefyra/pull/226
    • feat(installer): Check if install is available before running installer by @georgkrause in https://github.com/gefyrahq/gefyra/pull/230
    • feat: add detach flag to cli by @SteinRobert in https://github.com/gefyrahq/gefyra/pull/232

    Full Changelog: https://github.com/gefyrahq/gefyra/compare/0.12.0...0.13.0

    Source code(tar.gz)
    Source code(zip)
    gefyra-0.13.0-darwin-universal.zip(28.82 MB)
    gefyra-0.13.0-linux-amd64.zip(29.42 MB)
    gefyra-0.13.0-windows-x86_64.zip(60.77 MB)
  • 0.12.0(Oct 21, 2022)

    This release introduces support for docker contexts! Furthermore the bridge command now supports the --target flag which improves the usage a lot! Thanks to @knorr3 for this amazing contribution!

    What's Changed

    • chore(deps-dev): bump black from 22.8.0 to 22.10.0 in /client by @dependabot in https://github.com/gefyrahq/gefyra/pull/208
    • chore(deps-dev): bump black from 22.8.0 to 22.10.0 in /operator by @dependabot in https://github.com/gefyrahq/gefyra/pull/209
    • feat(GH): GH action tests to not require secrets by @Schille in https://github.com/gefyrahq/gefyra/pull/213
    • Improved DX for gefyra bridge action by @knorr3 in https://github.com/gefyrahq/gefyra/pull/212
    • feat(#210): add docker context handling by @SteinRobert in https://github.com/gefyrahq/gefyra/pull/215

    New Contributors

    • @knorr3 made their first contribution in https://github.com/gefyrahq/gefyra/pull/212

    Full Changelog: https://github.com/gefyrahq/gefyra/compare/0.11.4...0.12.0

    Source code(tar.gz)
    Source code(zip)
    gefyra-0.12.0-darwin-universal.zip(29.00 MB)
    gefyra-0.12.0-linux-amd64.zip(29.59 MB)
    gefyra-0.12.0-windows-x86_64.zip(61.14 MB)
  • 0.11.4(Oct 7, 2022)

    This release fixes a bug which caused the unbridge command to use the wrong kubeconfig (#199). Furthermore, a bug in Carrier was fixed which caused its startup to fail in certain scenarios (#200). Moreover, some dependencies have been bumped.

    What's Changed

    • chore(deps-dev): bump coverage from 6.4.4 to 6.5.0 in /client by @dependabot in https://github.com/gefyrahq/gefyra/pull/201
    • fix: return if tele cannot be initiated by @SteinRobert in https://github.com/gefyrahq/gefyra/pull/203
    • fix(#199): use correct kubeconfig for unbridge by @SteinRobert in https://github.com/gefyrahq/gefyra/pull/202
    • debug(status/cd): print gefyra status and try to fail by @SteinRobert in https://github.com/gefyrahq/gefyra/pull/205
    • fix(#200): change carrier.log to /tmp/error.log by @SteinRobert in https://github.com/gefyrahq/gefyra/pull/206
    • chore(deps): bump tabulate from 0.8.10 to 0.9.0 in /client by @dependabot in https://github.com/gefyrahq/gefyra/pull/207

    Full Changelog: https://github.com/gefyrahq/gefyra/compare/0.11.3...0.11.4

    Source code(tar.gz)
    Source code(zip)
    gefyra-0.11.4-darwin-universal.zip(28.99 MB)
    gefyra-0.11.4-linux-amd64.zip(29.57 MB)
    gefyra-0.11.4-windows-x86_64.zip(61.10 MB)
  • 0.11.3(Sep 29, 2022)

    What's Changed

    • chore(deps): bump oauthlib from 3.2.0 to 3.2.1 in /client by @dependabot in https://github.com/gefyrahq/gefyra/pull/193
    • chore(deps): bump oauthlib from 3.2.0 to 3.2.1 in /operator by @dependabot in https://github.com/gefyrahq/gefyra/pull/194
    • chore(deps-dev): bump flake8-bugbear from 22.9.11 to 22.9.23 in /client by @dependabot in https://github.com/gefyrahq/gefyra/pull/195
    • chore(deps): bump dependencies by @SteinRobert in https://github.com/gefyrahq/gefyra/pull/197

    Full Changelog: https://github.com/gefyrahq/gefyra/compare/0.11.2...0.11.3

    Source code(tar.gz)
    Source code(zip)
    gefyra-0.11.3-darwin-universal.zip(28.98 MB)
    gefyra-0.11.3-linux-amd64.zip(29.56 MB)
    gefyra-0.11.3-windows-x86_64.zip(61.08 MB)
  • 0.11.2(Sep 14, 2022)

  • 0.11.1(Sep 9, 2022)

    Gefyra Version 0.11.1


    Changed Commands

    gefyra run

    From this version on, gefyra run does not automatically remove containers anymore. Instead, gefyra run mimics the default behavior of docker run. If you want the old default behavior, you now have to add the gefyra run ... --rm option. This is particularly useful if you have a faulty container that exits immediately and you would like to inspect its output using docker logs <container>. That is possible now.
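
    The new default maps naturally onto docker-py's containers.run(...) keywords. A minimal sketch of the flag translation (illustrative only, not Gefyra's actual code; docker-py's remove and detach parameters do carry these semantics):

```python
def run_kwargs(rm: bool = False, detach: bool = False) -> dict:
    """Translate CLI flags into docker-py `containers.run` keyword arguments.

    remove=True auto-deletes the container on exit (the old --rm default);
    remove=False keeps it around so `docker logs <container>` still works.
    """
    return {"remove": rm, "detach": detach}

print(run_kwargs())  # {'remove': False, 'detach': False}
```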

    gefyra status

    A new API function was added: status(...). It is available as CLI command using gefyra status. It will return the current status of Gefyra's client and cluster side.

    Issues

    If you have issues or want to see another feature in Gefyra, head over to the issues section. We added new templates: https://github.com/gefyrahq/gefyra/issues/new/choose

    What's Changed

    • chore(deps): bump cli-tracker from 0.2.5 to 0.2.7 in /client by @dependabot in https://github.com/gefyrahq/gefyra/pull/165
    • chore(deps-dev): bump flake8 from 4.0.1 to 5.0.4 in /operator by @dependabot in https://github.com/gefyrahq/gefyra/pull/166
    • chore(deps-dev): bump flake8-black from 0.2.5 to 0.3.3 in /client by @dependabot in https://github.com/gefyrahq/gefyra/pull/167
    • chore(deps-dev): bump pytest from 5.4.3 to 7.1.2 in /operator by @dependabot in https://github.com/gefyrahq/gefyra/pull/168
    • chore(deps-dev): bump flake8-bugbear from 22.7.1 to 22.8.23 in /client by @dependabot in https://github.com/gefyrahq/gefyra/pull/169
    • chore(deps-dev): bump flake8 from 4.0.1 to 5.0.4 in /client by @dependabot in https://github.com/gefyrahq/gefyra/pull/173
    • chore(deps): bump kopf from 1.35.5 to 1.35.6 in /operator by @dependabot in https://github.com/gefyrahq/gefyra/pull/170
    • chore(deps-dev): bump coverage from 6.4.2 to 6.4.4 in /client by @dependabot in https://github.com/gefyrahq/gefyra/pull/171
    • chore(deps-dev): bump flake8-black from 0.2.5 to 0.3.3 in /operator by @dependabot in https://github.com/gefyrahq/gefyra/pull/174
    • chore(deps): bump kubernetes from 19.15.0 to 24.2.0 in /operator by @dependabot in https://github.com/gefyrahq/gefyra/pull/172
    • fix(#175): issue with multiple ports for port parser by @SteinRobert in https://github.com/gefyrahq/gefyra/pull/177
    • chore(deps-dev): bump black from 22.6.0 to 22.8.0 in /operator by @dependabot in https://github.com/gefyrahq/gefyra/pull/181
    • chore(deps-dev): bump black from 22.6.0 to 22.8.0 in /client by @dependabot in https://github.com/gefyrahq/gefyra/pull/180
    • feat: add bridge information after successful establish by @SteinRobert in https://github.com/gefyrahq/gefyra/pull/164
    • Improve workload types error message by @sokratisvas in https://github.com/gefyrahq/gefyra/pull/182
    • Fix typos by @kianmeng in https://github.com/gefyrahq/gefyra/pull/162
    • chore(deps-dev): bump pytest from 7.1.2 to 7.1.3 in /client by @dependabot in https://github.com/gefyrahq/gefyra/pull/183
    • chore(deps-dev): bump pytest from 7.1.2 to 7.1.3 in /operator by @dependabot in https://github.com/gefyrahq/gefyra/pull/184
    • feat: store kubeconfig as label on Cargo container by @Schille in https://github.com/gefyrahq/gefyra/pull/186
    • Add gefyra status API and command by @Schille in https://github.com/gefyrahq/gefyra/pull/187
    • feat: add --rm flag to gefyra run; now defaults to false by @Schille in https://github.com/gefyrahq/gefyra/pull/188
    • fix imports for PyOxidizer by @Schille in https://github.com/gefyrahq/gefyra/pull/189

    New Contributors

    • @dependabot made their first contribution in https://github.com/gefyrahq/gefyra/pull/165
    • @sokratisvas made their first contribution in https://github.com/gefyrahq/gefyra/pull/182
    • @kianmeng made their first contribution in https://github.com/gefyrahq/gefyra/pull/162

    Full Changelog: https://github.com/gefyrahq/gefyra/compare/0.10.2...0.11.1

    Source code(tar.gz)
    Source code(zip)
    gefyra-0.11.1-darwin-universal.zip(28.97 MB)
    gefyra-0.11.1-linux-amd64.zip(29.55 MB)
    gefyra-0.11.1-windows-x86_64.zip(60.40 MB)
  • 0.10.2(Aug 26, 2022)

    What's Changed

    • fix: missing version output by @SteinRobert in https://github.com/gefyrahq/gefyra/pull/157
    • chore: bump mac version in GitHub Actions by @SteinRobert in https://github.com/gefyrahq/gefyra/pull/158
    • fix: fix CLI argument parser, set correct mtu for gefyra network by @Schille in https://github.com/gefyrahq/gefyra/pull/163
    • bump docker-py by @SteinRobert in https://github.com/gefyrahq/gefyra/pull/160

    Full Changelog: https://github.com/gefyrahq/gefyra/compare/0.10.1...0.10.2

    Source code(tar.gz)
    Source code(zip)
    gefyra-0.10.2-darwin-universal.zip(28.95 MB)
    gefyra-0.10.2-linux-amd64.zip(29.53 MB)
    gefyra-0.10.2-windows-x86_64.zip(59.96 MB)
  • 0.10.1(Aug 19, 2022)

  • 0.10.0(Aug 19, 2022)

    What's Changed

    • chore: update docker actions / use build caching by @SteinRobert in https://github.com/gefyrahq/gefyra/pull/146
    • fix: operator cache build when not in PR by @SteinRobert in https://github.com/gefyrahq/gefyra/pull/152
    • feat: add kubeconfig / context flag by @SteinRobert in https://github.com/gefyrahq/gefyra/pull/153

    Full Changelog: https://github.com/gefyrahq/gefyra/compare/0.9.1...0.10.0

    Source code(tar.gz)
    Source code(zip)
    gefyra-0.10.0-darwin-universal.zip(28.96 MB)
    gefyra-0.10.0-linux-amd64.zip(29.54 MB)
    gefyra-0.10.0-windows-x86_64.zip(59.95 MB)
  • 0.9.1(Aug 12, 2022)

    What's Changed

    • feat(#79): more empathetic install feedback by @buschNT in https://github.com/gefyrahq/gefyra/pull/143
    • feat(client): add --wireguard-mtu argument; default to 1340 by @Schille in https://github.com/gefyrahq/gefyra/pull/144
    • feat(client): fallback 'gefyra run' namespace from kubeconfig by @Schille in https://github.com/gefyrahq/gefyra/pull/140
    • feat(client): read endpoint connection from kubeconfig by @Schille in https://github.com/gefyrahq/gefyra/pull/139

    New Contributors

    • @buschNT made their first contribution in https://github.com/gefyrahq/gefyra/pull/143

    Full Changelog: https://github.com/gefyrahq/gefyra/compare/0.9.0...0.9.1

    Source code(tar.gz)
    Source code(zip)
    gefyra-0.9.1-darwin-universal.zip(28.95 MB)
    gefyra-0.9.1-linux-amd64.zip(29.53 MB)
    gefyra-0.9.1-windows-x86_64.zip(59.97 MB)
  • 0.9.0(Aug 5, 2022)

    What's Changed

    This release introduces a telemetry integration which anonymously tracks the usage of the Gefyra CLI, when opted-in. Furthermore, the integration with Minikube has been improved. Please find an example here: https://gefyra.dev/getting-started/minikube-docker/

    • fix: naming for MacOS binary by @SteinRobert in https://github.com/gefyrahq/gefyra/pull/132
    • refactor: --port flag for portmapping by @SteinRobert in https://github.com/gefyrahq/gefyra/pull/127
    • Telemetry for cli usage by @SteinRobert in https://github.com/gefyrahq/gefyra/pull/124
    • feat: add kubernetes notation for --env-from flag by @SteinRobert in https://github.com/gefyrahq/gefyra/pull/131
    • feat: introduce --minikube switch with auto conf detection by @Schille in https://github.com/gefyrahq/gefyra/pull/136

    Full Changelog: https://github.com/gefyrahq/gefyra/compare/0.8.4...0.9.0

    Source code(tar.gz)
    Source code(zip)
    gefyra-0.9.0-darwin-universal.zip(28.90 MB)
    gefyra-0.9.0-linux-amd64.zip(29.49 MB)
    gefyra-0.9.0-windows-x86_64.zip(59.87 MB)
  • 0.8.4(Jul 29, 2022)

  • 0.8.3(Jul 29, 2022)

    What's Changed

    • test(bridge): add test for bridge via --pod flag by @SteinRobert in https://github.com/gefyrahq/gefyra/pull/121
    • fix: revert privileged stowaway pod by @Schille in https://github.com/gefyrahq/gefyra/pull/122
    • feat(#116): add namespace to container list by @SteinRobert in https://github.com/gefyrahq/gefyra/pull/120

    Full Changelog: https://github.com/gefyrahq/gefyra/compare/0.8.2...0.8.3

    Source code(tar.gz)
    Source code(zip)
    gefyra-0.8.3-darwin-amd64.zip(28.54 MB)
    gefyra-0.8.3-linux-amd64.zip(29.11 MB)
    gefyra-0.8.3-windows-x86_64.zip(59.07 MB)
  • 0.8.2(Jul 22, 2022)

    What's Changed

    • chore: update dependencies by @SteinRobert in https://github.com/gefyrahq/gefyra/pull/102
    • feat(#106): add port mapping flag / parser by @SteinRobert in https://github.com/gefyrahq/gefyra/pull/109
    • fix(#107): bug when multiple bridges are created by @SteinRobert in https://github.com/gefyrahq/gefyra/pull/110
    • feat: run Stowaway unprivileged by @Schille in https://github.com/gefyrahq/gefyra/pull/117
    • fix(client): add missing policyrule for serviceaccounts by @Schille in https://github.com/gefyrahq/gefyra/pull/118

    Full Changelog: https://github.com/gefyrahq/gefyra/compare/0.8.1...0.8.2

    Source code(tar.gz)
    Source code(zip)
    gefyra-0.8.2-darwin-amd64.zip(28.54 MB)
    gefyra-0.8.2-linux-amd64.zip(29.11 MB)
    gefyra-0.8.2-windows-x86_64.zip(59.07 MB)
  • 0.8.1(Jun 8, 2022)

    What's Changed

    • feat: add wireguard probe on gefyra run by @Schille in https://github.com/gefyrahq/gefyra/pull/94
    • fix(#70): exit process when no pod for bridging can be found by @SteinRobert in https://github.com/gefyrahq/gefyra/pull/88
    • fix stowaway docker image for arm64 by @cappuc in https://github.com/gefyrahq/gefyra/pull/95
    • fix(operator): make check for Carrier container status in pod more ro… by @Schille in https://github.com/gefyrahq/gefyra/pull/99
    • fix: add another solution for docker network creation by @Schille in https://github.com/gefyrahq/gefyra/pull/96
    • fix: check if container is running before deploying by @vvvityaaa in https://github.com/gefyrahq/gefyra/pull/91
    • feat: add workflow for windows binary by @SteinRobert in https://github.com/gefyrahq/gefyra/pull/82

    New Contributors

    • @cappuc made their first contribution in https://github.com/gefyrahq/gefyra/pull/95

    Full Changelog: https://github.com/gefyrahq/gefyra/compare/0.8.0...0.8.1

    Source code(tar.gz)
    Source code(zip)
    gefyra-0.8.1-darwin-amd64.zip(28.50 MB)
    gefyra-0.8.1-linux-amd64.zip(29.08 MB)
    gefyra-0.8.1-windows-x86_64.zip(59.01 MB)
  • 0.8.0(May 20, 2022)

    What's Changed

    • test: add code coverage to python-tester workflow by @SteinRobert in https://github.com/gefyrahq/gefyra/pull/66
    • Add installation script by @georgkrause in https://github.com/gefyrahq/gefyra/pull/73
    • Allow to configure the kubeconfig by @georgkrause in https://github.com/gefyrahq/gefyra/pull/71
    • Fix installation on MacOS caused by missing group root. by @georgkrause in https://github.com/gefyrahq/gefyra/pull/74
    • Add gefyra list --containers by @vvvityaaa in https://github.com/gefyrahq/gefyra/pull/78
    • feat: add windows build (WIP) by @SteinRobert in https://github.com/gefyrahq/gefyra/pull/81
    • chore: add hint when no flag is used for unbrige by @SteinRobert in https://github.com/gefyrahq/gefyra/pull/80
    • Stop pretending not to support darwin arm64 by @georgkrause in https://github.com/gefyrahq/gefyra/pull/83
    • feat: add check for new version by @SteinRobert in https://github.com/gefyrahq/gefyra/pull/87
    • Cleanup after installation by @georgkrause in https://github.com/gefyrahq/gefyra/pull/84

    New Contributors

    • @georgkrause made their first contribution in https://github.com/gefyrahq/gefyra/pull/73
    • @vvvityaaa made their first contribution in https://github.com/gefyrahq/gefyra/pull/78

    Full Changelog: https://github.com/gefyrahq/gefyra/compare/0.7.2...0.8.0

    Source code(tar.gz)
    Source code(zip)
    gefyra-0.8.0-darwin-amd64.zip(26.65 MB)
    gefyra-0.8.0-linux-amd64.zip(27.33 MB)
  • 0.7.2(Apr 25, 2022)

  • 0.7.1(Apr 25, 2022)

  • 0.7.0(Apr 25, 2022)

    What's Changed

    • Allow custom registry and image urls by @SteinRobert in https://github.com/gefyrahq/gefyra/pull/65
    • feat(client): adds probe for wireguard connection by @Schille in https://github.com/gefyrahq/gefyra/pull/69
    • fix(client): add gefyra network retry with tests by @Schille in https://github.com/gefyrahq/gefyra/pull/67

    From this release on, the client pulls all required container images matching the version of the client.

    Full Changelog: https://github.com/gefyrahq/gefyra/compare/0.6.16...0.7.0

    Source code(tar.gz)
    Source code(zip)
    gefyra-0.7.0-darwin-amd64.zip(26.47 MB)
    gefyra-0.7.0-linux-amd64.zip(27.16 MB)
  • 0.6.16(Apr 21, 2022)

  • 0.6.15(Mar 24, 2022)

  • 0.6.14(Mar 24, 2022)

Owner
Michael Schilonka