🦍 The Cloud-Native API Gateway

Overview


Kong (or Kong API Gateway) is a cloud-native, platform-agnostic, scalable API Gateway distinguished for its high performance and extensibility via plugins.

By providing functionality for proxying, routing, load balancing, health checking, authentication (and more), Kong serves as the central layer for orchestrating microservices or conventional API traffic with ease.

Kong runs natively on Kubernetes thanks to its official Kubernetes Ingress Controller.


Installation | Documentation | Forum | Blog | Builds


Getting Started

Let’s test drive Kong by adding authentication to an API in under 5 minutes.

We suggest using the docker-compose distribution via the instructions below, but there is also a docker installation procedure if you’d prefer to run the Kong API Gateway in DB-less mode.

Whether you’re running in the cloud, on bare metal or using containers, you can find every supported distribution on our official installation page.

  1. To start, clone the Docker repository and navigate to the compose folder:
  $ git clone https://github.com/Kong/docker-kong
  $ cd docker-kong/compose/
  2. Start the Gateway stack using:
  $ docker-compose up

The Gateway will be available on the following ports on localhost:

  • :8000, on which Kong listens for incoming HTTP traffic from your clients and forwards it to your upstream services.
  • :8001, on which the Admin API used to configure Kong listens.

Next, follow the quick start guide to tour the Gateway features.
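The kind of authentication setup promised above can also be expressed declaratively for DB-less mode. The sketch below is illustrative only: the service name, upstream URL, consumer, and API key are placeholders, not part of the official guide.

```yaml
# kong.yml - declarative configuration for DB-less mode (illustrative sketch)
_format_version: "2.1"

services:
  - name: example-service          # placeholder name
    url: http://httpbin.org        # placeholder upstream
    routes:
      - name: example-route
        paths:
          - /example
    plugins:
      - name: key-auth             # require an API key on this service

consumers:
  - username: demo-consumer        # placeholder consumer
    keyauth_credentials:
      - key: demo-secret-key       # placeholder key; use a real secret
```

With `KONG_DATABASE=off` and `KONG_DECLARATIVE_CONFIG` pointing at this file, the Gateway loads the whole configuration at startup; requests to `:8000/example` without a valid `apikey` would then be rejected.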

Features

By centralizing common API functionality across all your organization's services, the Kong API Gateway creates more freedom for engineering teams to focus on the challenges that matter most.

The top Kong features include:

  • Advanced routing, load balancing, health checking - all configurable via an admin API or declarative configuration.
  • Authentication and Authorization for APIs using methods like JWT, basic auth, ACLs and more.
  • Proxy, SSL/TLS termination, and connectivity support for L4 or L7 traffic.
  • Plugins for enforcing traffic controls, request/response transformations, logging, monitoring, and more, along with a plugin developer hub.
  • Sophisticated deployment models like declarative, database-less deployment and hybrid deployment (control plane/data plane separation) without any vendor lock-in.
  • Native ingress controller support for serving Kubernetes.
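As one example of the routing, load-balancing, and health-checking features above, an upstream with multiple targets and an active health check can be declared in Kong's declarative format. Names, addresses, and the health endpoint below are illustrative placeholders.

```yaml
# Illustrative upstream with two targets and an active health check
upstreams:
  - name: backend-upstream         # placeholder name
    targets:
      - target: 192.0.2.10:8080    # placeholder addresses (TEST-NET range)
        weight: 100
      - target: 192.0.2.11:8080
        weight: 100
    healthchecks:
      active:
        http_path: /health         # endpoint Kong probes on each target
        healthy:
          interval: 5              # seconds between probes
          successes: 2             # probes required to mark a target healthy
        unhealthy:
          interval: 5
          http_failures: 3         # failures required to mark a target unhealthy

services:
  - name: backend-service
    host: backend-upstream         # send traffic through the upstream's targets
    routes:
      - name: backend-route
        paths:
          - /backend
```

The same entities can equally be created at runtime through the Admin API on `:8001`; the declarative form is shown here because it is self-contained.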

Plugin Hub

Plugins provide advanced functionality that extends the use of the Gateway. Many of the Kong Inc. and community-developed plugins like AWS Lambda, Correlation ID, and Response Transformer are showcased at the Plugin Hub.

Contribute to the Plugin Hub and ensure your next innovative idea is published and available to the broader community!

Contributing

We ❤️ pull requests, and we’re continually working hard to make it as easy as possible for developers to contribute. Before beginning development with the Kong API Gateway, please familiarize yourself with the following developer resources:

Use the Plugin Development Guide for building new and creative plugins, or browse the online version of Kong's source code documentation in the Plugin Development Kit (PDK) Reference. Developers can build plugins in Lua, Go or JavaScript.

Releases

Please see the Changelog for details about a given release. Gateway releases are versioned according to the SemVer Specification.

Join the Community

Konnect

Kong Inc. offers commercial subscriptions that enhance the Kong API Gateway in a variety of ways. Customers of Kong's Konnect subscription take advantage of additional gateway functionality, commercial support, and access to Kong's managed (SaaS) control plane platform. The Konnect platform features include real-time analytics, a service catalog, developer portals, and so much more! Get started with Konnect.

License

Copyright 2016-2021 Kong Inc.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

   http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
Comments
  • PostgreSQL support


    We should support datastores other than Cassandra, which is primarily aimed at users who want to benefit from its distribution capabilities.

    Support for PostgreSQL seems the best candidate for the second datastore, due to its popularity, availability on Amazon RDS and reliability. It is also a very good fit for Kong due to its relational nature.

    opened by sonicaghi 118
  • Support load balancing with nginx dynamic upstreams


    Support for dynamic upstreams that will enable dynamic load balancing per API.

     # sample upstream block:
     upstream backend {
        server 127.0.0.1:12354;
        server 127.0.0.1:12355;
        server 127.0.0.1:12356 backup;
     }
    

    So we can proxy_pass like:

    proxy_pass http://backend;
    
    idea/new plugin 
    opened by subnetmarco 97
  • Kong 2.0.5 -> 2.1.4 migrations: C* issues still possible

    Summary

    Thought we were out of the woods here, but I just tested migrating a 2.0.5 DB to 2.1.4 with a clone of our prod keyspace (I decided dry runs of this migrations stuff are best done with real keyspace data too) and things went sideways again. I captured logs of each migrations command, from the up through having to run finish multiple times before reaching a conclusion, detailed below:

    First the Kong migrations up:

    / $ kong migrations up --db-timeout 120 --vv
    2020/09/28 17:17:57 [verbose] Kong: 2.1.4
    2020/09/28 17:17:57 [debug] ngx_lua: 10015
    2020/09/28 17:17:57 [debug] nginx: 1015008
    2020/09/28 17:17:57 [debug] Lua: LuaJIT 2.1.0-beta3
    2020/09/28 17:17:57 [verbose] no config file found at /etc/kong/kong.conf
    2020/09/28 17:17:57 [verbose] no config file found at /etc/kong.conf
    2020/09/28 17:17:57 [verbose] no config file, skip loading
    2020/09/28 17:17:57 [debug] reading environment variables
    2020/09/28 17:17:57 [debug] KONG_PLUGINS ENV found with "kong-siteminder-auth,kong-kafka-log,stargate-waf-error-log,mtls,stargate-oidc-token-revoke,kong-tx-debugger,kong-plugin-oauth,zipkin,kong-error-log,kong-oidc-implicit-token,kong-response-size-limiting,request-transformer,kong-service-virtualization,kong-cluster-drain,kong-upstream-jwt,kong-splunk-log,kong-spec-expose,kong-path-based-routing,kong-oidc-multi-idp,correlation-id,oauth2,statsd,jwt,rate-limiting,acl,request-size-limiting,request-termination,cors"
    2020/09/28 17:17:57 [debug] KONG_ADMIN_LISTEN ENV found with "0.0.0.0:8001 deferred reuseport"
    2020/09/28 17:17:57 [debug] KONG_PROXY_ACCESS_LOG ENV found with "off"
    2020/09/28 17:17:57 [debug] KONG_CASSANDRA_USERNAME ENV found with "kongdba"
    2020/09/28 17:17:57 [debug] KONG_CASSANDRA_PASSWORD ENV found with "******"
    2020/09/28 17:17:57 [debug] KONG_PROXY_LISTEN ENV found with "0.0.0.0:8000, 0.0.0.0:8443 ssl http2 deferred reuseport"
    2020/09/28 17:17:57 [debug] KONG_DNS_NO_SYNC ENV found with "off"
    2020/09/28 17:17:57 [debug] KONG_DB_UPDATE_PROPAGATION ENV found with "5"
    2020/09/28 17:17:57 [debug] KONG_CASSANDRA_PORT ENV found with "9042"
    2020/09/28 17:17:57 [debug] KONG_HEADERS ENV found with "latency_tokens"
    2020/09/28 17:17:57 [debug] KONG_DNS_STALE_TTL ENV found with "4"
    2020/09/28 17:17:57 [debug] KONG_WAF_DEBUG_LEVEL ENV found with "0"
    2020/09/28 17:17:57 [debug] KONG_WAF_PARANOIA_LEVEL ENV found with "1"
    2020/09/28 17:17:57 [debug] KONG_CASSANDRA_REFRESH_FREQUENCY ENV found with "0"
    2020/09/28 17:17:57 [debug] KONG_CASSANDRA_CONTACT_POINTS ENV found with "server05503,server05505,server05507,server05502,server05504,server05506"
    2020/09/28 17:17:57 [debug] KONG_DB_CACHE_WARMUP_ENTITIES ENV found with "services,plugins,consumers"
    2020/09/28 17:17:57 [debug] KONG_NGINX_HTTP_SSL_PROTOCOLS ENV found with "TLSv1.2 TLSv1.3"
    2020/09/28 17:17:57 [debug] KONG_CASSANDRA_LOCAL_DATACENTER ENV found with "DC1"
    2020/09/28 17:17:57 [debug] KONG_UPSTREAM_KEEPALIVE_IDLE_TIMEOUT ENV found with"30"
    2020/09/28 17:17:57 [debug] KONG_DB_CACHE_TTL ENV found with "0"
    2020/09/28 17:17:57 [debug] KONG_PG_SSL ENV found with "off"
    2020/09/28 17:17:57 [debug] KONG_WAF_REQUEST_NO_FILE_SIZE_LIMIT ENV found with "50000000"
    2020/09/28 17:17:57 [debug] KONG_WAF_PCRE_MATCH_LIMIT_RECURSION ENV found with "10000"
    2020/09/28 17:17:57 [debug] KONG_CASSANDRA_SCHEMA_CONSENSUS_TIMEOUT ENV found with "30000"
    2020/09/28 17:17:57 [debug] KONG_LOG_LEVEL ENV found with "notice"
    2020/09/28 17:17:57 [debug] KONG_CASSANDRA_TIMEOUT ENV found with "5000"
    2020/09/28 17:17:57 [debug] KONG_NGINX_MAIN_WORKER_PROCESSES ENV found with "6"
    2020/09/28 17:17:57 [debug] KONG_CASSANDRA_KEYSPACE ENV found with "kong_prod2"
    2020/09/28 17:17:57 [debug] KONG_WAF ENV found with "off"
    2020/09/28 17:17:57 [debug] KONG_ERROR_DEFAULT_TYPE ENV found with "text/plain"
    2020/09/28 17:17:57 [debug] KONG_UPSTREAM_KEEPALIVE_POOL_SIZE ENV found with "400"
    2020/09/28 17:17:57 [debug] KONG_WORKER_CONSISTENCY ENV found with "eventual"
    2020/09/28 17:17:57 [debug] KONG_CLIENT_SSL ENV found with "off"
    2020/09/28 17:17:57 [debug] KONG_TRUSTED_IPS ENV found with "0.0.0.0/0,::/0"
    2020/09/28 17:17:57 [debug] KONG_SSL_CERT_KEY ENV found with "/usr/local/kong/ssl/kongprivatekey.key"
    2020/09/28 17:17:57 [debug] KONG_MEM_CACHE_SIZE ENV found with "1024m"
    2020/09/28 17:17:57 [debug] KONG_NGINX_PROXY_REAL_IP_HEADER ENV found with "X-Forwarded-For"
    2020/09/28 17:17:57 [debug] KONG_DB_UPDATE_FREQUENCY ENV found with "5"
    2020/09/28 17:17:57 [debug] KONG_DNS_ORDER ENV found with "LAST,SRV,A,CNAME"
    2020/09/28 17:17:57 [debug] KONG_DNS_ERROR_TTL ENV found with "1"
    2020/09/28 17:17:57 [debug] KONG_DATABASE ENV found with "cassandra"
    2020/09/28 17:17:57 [debug] KONG_CASSANDRA_DATA_CENTERS ENV found with "DC1:3,DC2:3"
    2020/09/28 17:17:57 [debug] KONG_WORKER_STATE_UPDATE_FREQUENCY ENV found with "5"
    2020/09/28 17:17:57 [debug] KONG_LUA_SSL_VERIFY_DEPTH ENV found with "3"
    2020/09/28 17:17:57 [debug] KONG_LUA_SOCKET_POOL_SIZE ENV found with "30"
    2020/09/28 17:17:57 [debug] KONG_UPSTREAM_KEEPALIVE_MAX_REQUESTS ENV found with"50000"
    2020/09/28 17:17:57 [debug] KONG_CASSANDRA_CONSISTENCY ENV found with "LOCAL_QUORUM"
    2020/09/28 17:17:57 [debug] KONG_CLIENT_MAX_BODY_SIZE ENV found with "50m"
    2020/09/28 17:17:57 [debug] KONG_ADMIN_ERROR_LOG ENV found with "/dev/stderr"
    2020/09/28 17:17:57 [debug] KONG_DNS_NOT_FOUND_TTL ENV found with "30"
    2020/09/28 17:17:57 [debug] KONG_PROXY_ERROR_LOG ENV found with "/dev/stderr"
    2020/09/28 17:17:57 [debug] KONG_CASSANDRA_REPL_STRATEGY ENV found with "NetworkTopologyStrategy"
    2020/09/28 17:17:57 [debug] KONG_CASSANDRA_SSL_VERIFY ENV found with "on"
    2020/09/28 17:17:57 [debug] KONG_ADMIN_ACCESS_LOG ENV found with "off"
    2020/09/28 17:17:57 [debug] KONG_DNS_HOSTSFILE ENV found with "/etc/hosts"
    2020/09/28 17:17:57 [debug] KONG_WAF_REQUEST_FILE_SIZE_LIMIT ENV found with "50000000"
    2020/09/28 17:17:57 [debug] KONG_SSL_CERT ENV found with "/usr/local/kong/ssl/kongcert.crt"
    2020/09/28 17:17:57 [debug] KONG_NGINX_PROXY_REAL_IP_RECURSIVE ENV found with "on"
    2020/09/28 17:17:57 [debug] KONG_SSL_CIPHER_SUITE ENV found with "intermediate"
    2020/09/28 17:17:57 [debug] KONG_CASSANDRA_SSL ENV found with "on"
    2020/09/28 17:17:57 [debug] KONG_ANONYMOUS_REPORTS ENV found with "off"
    2020/09/28 17:17:57 [debug] KONG_WAF_MODE ENV found with "On"
    2020/09/28 17:17:57 [debug] KONG_CLIENT_BODY_BUFFER_SIZE ENV found with "50m"
    2020/09/28 17:17:57 [debug] KONG_WAF_PCRE_MATCH_LIMIT ENV found with "10000"
    2020/09/28 17:17:57 [debug] KONG_LUA_SSL_TRUSTED_CERTIFICATE ENV found with "/usr/local/kong/ssl/kongcert.pem"
    2020/09/28 17:17:57 [debug] KONG_CASSANDRA_LB_POLICY ENV found with "RequestDCAwareRoundRobin"
    2020/09/28 17:17:57 [debug] KONG_WAF_AUDIT ENV found with "RelevantOnly"
    2020/09/28 17:17:57 [debug] admin_access_log = "off"
    2020/09/28 17:17:57 [debug] admin_error_log = "/dev/stderr"
    2020/09/28 17:17:57 [debug] admin_listen = {"0.0.0.0:8001 deferred reuseport"}
    2020/09/28 17:17:57 [debug] anonymous_reports = false
    2020/09/28 17:17:57 [debug] cassandra_consistency = "LOCAL_QUORUM"
    2020/09/28 17:17:57 [debug] cassandra_contact_points = {"apsrp05503","apsrp05505","apsrp05507","apsrp05502","apsrp05504","apsrp05506"}
    2020/09/28 17:17:57 [debug] cassandra_data_centers = {"DC1:3","DC2:3"}
    2020/09/28 17:17:57 [debug] cassandra_keyspace = "kong_prod2"
    2020/09/28 17:17:57 [debug] cassandra_lb_policy = "RequestDCAwareRoundRobin"
    2020/09/28 17:17:57 [debug] cassandra_local_datacenter = "DC1"
    2020/09/28 17:17:57 [debug] cassandra_password = "******"
    2020/09/28 17:17:57 [debug] cassandra_port = 9042
    2020/09/28 17:17:57 [debug] cassandra_read_consistency = "LOCAL_QUORUM"
    2020/09/28 17:17:57 [debug] cassandra_refresh_frequency = 0
    2020/09/28 17:17:57 [debug] cassandra_repl_factor = 1
    2020/09/28 17:17:57 [debug] cassandra_repl_strategy = "NetworkTopologyStrategy"
    2020/09/28 17:17:57 [debug] cassandra_schema_consensus_timeout = 30000
    2020/09/28 17:17:57 [debug] cassandra_ssl = true
    2020/09/28 17:17:57 [debug] cassandra_ssl_verify = true
    2020/09/28 17:17:57 [debug] cassandra_timeout = 5000
    2020/09/28 17:17:57 [debug] cassandra_username = "kongdba"
    2020/09/28 17:17:57 [debug] cassandra_write_consistency = "LOCAL_QUORUM"
    2020/09/28 17:17:57 [debug] client_body_buffer_size = "50m"
    2020/09/28 17:17:57 [debug] client_max_body_size = "50m"
    2020/09/28 17:17:57 [debug] client_ssl = false
    2020/09/28 17:17:57 [debug] cluster_control_plane = "127.0.0.1:8005"
    2020/09/28 17:17:57 [debug] cluster_listen = {"0.0.0.0:8005"}
    2020/09/28 17:17:57 [debug] cluster_mtls = "shared"
    2020/09/28 17:17:57 [debug] database = "cassandra"
    2020/09/28 17:17:57 [debug] db_cache_ttl = 0
    2020/09/28 17:17:57 [debug] db_cache_warmup_entities = {"services","plugins","consumers"}
    2020/09/28 17:17:57 [debug] db_resurrect_ttl = 30
    2020/09/28 17:17:57 [debug] db_update_frequency = 5
    2020/09/28 17:17:57 [debug] db_update_propagation = 5
    2020/09/28 17:17:57 [debug] dns_error_ttl = 1
    2020/09/28 17:17:57 [debug] dns_hostsfile = "/etc/hosts"
    2020/09/28 17:17:57 [debug] dns_no_sync = false
    2020/09/28 17:17:57 [debug] dns_not_found_ttl = 30
    2020/09/28 17:17:57 [debug] dns_order = {"LAST","SRV","A","CNAME"}
    2020/09/28 17:17:57 [debug] dns_resolver = {}
    2020/09/28 17:17:57 [debug] dns_stale_ttl = 4
    2020/09/28 17:17:57 [debug] error_default_type = "text/plain"
    2020/09/28 17:17:57 [debug] go_plugins_dir = "off"
    2020/09/28 17:17:57 [debug] go_pluginserver_exe = "/usr/local/bin/go-pluginserver"
    2020/09/28 17:17:57 [debug] headers = {"latency_tokens"}
    2020/09/28 17:17:57 [debug] host_ports = {}
    2020/09/28 17:17:57 [debug] kic = false
    2020/09/28 17:17:57 [debug] log_level = "notice"
    2020/09/28 17:17:57 [debug] lua_package_cpath = ""
    2020/09/28 17:17:57 [debug] lua_package_path = "./?.lua;./?/init.lua;"
    2020/09/28 17:17:57 [debug] lua_socket_pool_size = 30
    2020/09/28 17:17:57 [debug] lua_ssl_trusted_certificate = "/usr/local/kong/ssl/kongcert.pem"
    2020/09/28 17:17:57 [debug] lua_ssl_verify_depth = 3
    2020/09/28 17:17:57 [debug] mem_cache_size = "1024m"
    2020/09/28 17:17:57 [debug] nginx_admin_directives = {}
    2020/09/28 17:17:57 [debug] nginx_daemon = "on"
    2020/09/28 17:17:57 [debug] nginx_events_directives = {{name="multi_accept",value="on"},{name="worker_connections",value="auto"}}
    2020/09/28 17:17:57 [debug] nginx_events_multi_accept = "on"
    2020/09/28 17:17:57 [debug] nginx_events_worker_connections = "auto"
    2020/09/28 17:17:57 [debug] nginx_http_client_body_buffer_size = "50m"
    2020/09/28 17:17:57 [debug] nginx_http_client_max_body_size = "50m"
    2020/09/28 17:17:57 [debug] nginx_http_directives = {{name="client_max_body_size",value="50m"},{name="ssl_prefer_server_ciphers",value="off"},{name="client_body_buffer_size",value="50m"},{name="ssl_protocols",value="TLSv1.2 TLSv1.3"},{name="ssl_session_timeout",value="1d"},{name="ssl_session_tickets",value="on"}}
    2020/09/28 17:17:57 [debug] nginx_http_ssl_prefer_server_ciphers = "off"
    2020/09/28 17:17:57 [debug] nginx_http_ssl_protocols = "TLSv1.2 TLSv1.3"
    2020/09/28 17:17:57 [debug] nginx_http_ssl_session_tickets = "on"
    2020/09/28 17:17:57 [debug] nginx_http_ssl_session_timeout = "1d"
    2020/09/28 17:17:57 [debug] nginx_http_status_directives = {}
    2020/09/28 17:17:57 [debug] nginx_http_upstream_directives = {{name="keepalive_requests",value="100"},{name="keepalive_timeout",value="60s"},{name="keepalive",value="60"}}
    2020/09/28 17:17:57 [debug] nginx_http_upstream_keepalive = "60"
    2020/09/28 17:17:57 [debug] nginx_http_upstream_keepalive_requests = "100"
    2020/09/28 17:17:57 [debug] nginx_http_upstream_keepalive_timeout = "60s"
    2020/09/28 17:17:57 [debug] nginx_main_daemon = "on"
    2020/09/28 17:17:57 [debug] nginx_main_directives = {{name="daemon",value="on"},{name="worker_rlimit_nofile",value="auto"},{name="worker_processes",value="6"}}
    2020/09/28 17:17:57 [debug] nginx_main_worker_processes = "6"
    2020/09/28 17:17:57 [debug] nginx_main_worker_rlimit_nofile = "auto"
    2020/09/28 17:17:57 [debug] nginx_optimizations = true
    2020/09/28 17:17:57 [debug] nginx_proxy_directives = {{name="real_ip_header",value="X-Forwarded-For"},{name="real_ip_recursive",value="on"}}
    2020/09/28 17:17:57 [debug] nginx_proxy_real_ip_header = "X-Forwarded-For"
    2020/09/28 17:17:57 [debug] nginx_proxy_real_ip_recursive = "on"
    2020/09/28 17:17:57 [debug] nginx_sproxy_directives = {}
    2020/09/28 17:17:57 [debug] nginx_status_directives = {}
    2020/09/28 17:17:57 [debug] nginx_stream_directives = {{name="ssl_session_tickets",value="on"},{name="ssl_protocols",value="TLSv1.2 TLSv1.3"},{name="ssl_session_timeout",value="1d"},{name="ssl_prefer_server_ciphers",value="off"}}
    2020/09/28 17:17:57 [debug] nginx_stream_ssl_prefer_server_ciphers = "off"
    2020/09/28 17:17:57 [debug] nginx_stream_ssl_protocols = "TLSv1.2 TLSv1.3"
    2020/09/28 17:17:57 [debug] nginx_stream_ssl_session_tickets = "on"
    2020/09/28 17:17:57 [debug] nginx_stream_ssl_session_timeout = "1d"
    2020/09/28 17:17:57 [debug] nginx_supstream_directives = {}
    2020/09/28 17:17:57 [debug] nginx_upstream_directives = {{name="keepalive_requests",value="100"},{name="keepalive_timeout",value="60s"},{name="keepalive",value="60"}}
    2020/09/28 17:17:57 [debug] nginx_upstream_keepalive = "60"
    2020/09/28 17:17:57 [debug] nginx_upstream_keepalive_requests = "100"
    2020/09/28 17:17:57 [debug] nginx_upstream_keepalive_timeout = "60s"
    2020/09/28 17:17:57 [debug] nginx_worker_processes = "auto"
    2020/09/28 17:17:57 [debug] pg_database = "kong"
    2020/09/28 17:17:57 [debug] pg_host = "127.0.0.1"
    2020/09/28 17:17:57 [debug] pg_max_concurrent_queries = 0
    2020/09/28 17:17:57 [debug] pg_port = 5432
    2020/09/28 17:17:57 [debug] pg_ro_ssl = false
    2020/09/28 17:17:57 [debug] pg_ro_ssl_verify = false
    2020/09/28 17:17:57 [debug] pg_semaphore_timeout = 60000
    2020/09/28 17:17:57 [debug] pg_ssl = false
    2020/09/28 17:17:57 [debug] pg_ssl_verify = false
    2020/09/28 17:17:57 [debug] pg_timeout = 5000
    2020/09/28 17:17:57 [debug] pg_user = "kong"
    2020/09/28 17:17:57 [debug] plugins = {"kong-siteminder-auth","kong-kafka-log","stargate-waf-error-log","mtls","stargate-oidc-token-revoke","kong-tx-debugger","kong-plugin-oauth","zipkin","kong-error-log","kong-oidc-implicit-token","kong-response-size-limiting","request-transformer","kong-service-virtualization","kong-cluster-drain","kong-upstream-jwt","kong-splunk-log","kong-spec-expose","kong-path-based-routing","kong-oidc-multi-idp","correlation-id","oauth2","statsd","jwt","rate-limiting","acl","request-size-limiting","request-termination","cors"}
    2020/09/28 17:17:57 [debug] port_maps = {}
    2020/09/28 17:17:57 [debug] prefix = "/usr/local/kong/"
    2020/09/28 17:17:57 [debug] proxy_access_log = "off"
    2020/09/28 17:17:57 [debug] proxy_error_log = "/dev/stderr"
    2020/09/28 17:17:57 [debug] proxy_listen = {"0.0.0.0:8000","0.0.0.0:8443 ssl http2 deferred reuseport"}
    2020/09/28 17:17:57 [debug] real_ip_header = "X-Real-IP"
    2020/09/28 17:17:57 [debug] real_ip_recursive = "off"
    2020/09/28 17:17:57 [debug] role = "traditional"
    2020/09/28 17:17:57 [debug] ssl_cert = "/usr/local/kong/ssl/kongcert.crt"
    2020/09/28 17:17:57 [debug] ssl_cert_key = "/usr/local/kong/ssl/kongprivatekey.key"
    2020/09/28 17:17:57 [debug] ssl_cipher_suite = "intermediate"
    2020/09/28 17:17:57 [debug] ssl_ciphers = "ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384"
    2020/09/28 17:17:57 [debug] ssl_prefer_server_ciphers = "on"
    2020/09/28 17:17:57 [debug] ssl_protocols = "TLSv1.1 TLSv1.2 TLSv1.3"
    2020/09/28 17:17:57 [debug] ssl_session_tickets = "on"
    2020/09/28 17:17:57 [debug] ssl_session_timeout = "1d"
    2020/09/28 17:17:57 [debug] status_access_log = "off"
    2020/09/28 17:17:57 [debug] status_error_log = "logs/status_error.log"
    2020/09/28 17:17:57 [debug] status_listen = {"off"}
    2020/09/28 17:17:57 [debug] stream_listen = {"off"}
    2020/09/28 17:17:57 [debug] trusted_ips = {"0.0.0.0/0","::/0"}
    2020/09/28 17:17:57 [debug] upstream_keepalive = 60
    2020/09/28 17:17:57 [debug] upstream_keepalive_idle_timeout = 30
    2020/09/28 17:17:57 [debug] upstream_keepalive_max_requests = 50000
    2020/09/28 17:17:57 [debug] upstream_keepalive_pool_size = 400
    2020/09/28 17:17:57 [debug] waf = "off"
    2020/09/28 17:17:57 [debug] waf_audit = "RelevantOnly"
    2020/09/28 17:17:57 [debug] waf_debug_level = "0"
    2020/09/28 17:17:57 [debug] waf_mode = "On"
    2020/09/28 17:17:57 [debug] waf_paranoia_level = "1"
    2020/09/28 17:17:57 [debug] waf_pcre_match_limit = "10000"
    2020/09/28 17:17:57 [debug] waf_pcre_match_limit_recursion = "10000"
    2020/09/28 17:17:57 [debug] waf_request_file_size_limit = "50000000"
    2020/09/28 17:17:57 [debug] waf_request_no_file_size_limit = "50000000"
    2020/09/28 17:17:57 [debug] worker_consistency = "eventual"
    2020/09/28 17:17:57 [debug] worker_state_update_frequency = 5
    2020/09/28 17:17:57 [verbose] prefix in use: /usr/local/kong
    2020/09/28 17:17:57 [debug] resolved Cassandra contact point 'server05503' to: 10.204.90.234
    2020/09/28 17:17:57 [debug] resolved Cassandra contact point 'server05505' to: 10.204.90.235
    2020/09/28 17:17:57 [debug] resolved Cassandra contact point 'server05507' to: 10.86.173.32
    2020/09/28 17:17:57 [debug] resolved Cassandra contact point 'server05502' to: 10.106.184.179
    2020/09/28 17:17:57 [debug] resolved Cassandra contact point 'server05504' to: 10.106.184.193
    2020/09/28 17:17:57 [debug] resolved Cassandra contact point 'server05506' to: 10.87.52.252
    2020/09/28 17:17:58 [debug] loading subsystems migrations...
    2020/09/28 17:17:58 [verbose] retrieving keyspace schema state...
    2020/09/28 17:17:58 [verbose] schema state retrieved
    2020/09/28 17:17:58 [debug] loading subsystems migrations...
    2020/09/28 17:17:58 [verbose] retrieving keyspace schema state...
    2020/09/28 17:17:58 [verbose] schema state retrieved
    2020/09/28 17:17:58 [debug] migrations to run:
             core: 009_200_to_210, 010_210_to_211, 011_212_to_213
              jwt: 003_200_to_210
              acl: 003_200_to_210, 004_212_to_213
    rate-limiting: 004_200_to_210
           oauth2: 004_200_to_210, 005_210_to_211
    2020/09/28 17:17:58 [info] migrating core on keyspace 'kong_prod2'...
    2020/09/28 17:17:58 [debug] running migration: 009_200_to_210
    2020/09/28 17:18:02 [info] core migrated up to: 009_200_to_210 (pending)
    2020/09/28 17:18:02 [debug] running migration: 010_210_to_211
    2020/09/28 17:18:02 [info] core migrated up to: 010_210_to_211 (pending)
    2020/09/28 17:18:02 [debug] running migration: 011_212_to_213
    2020/09/28 17:18:02 [info] core migrated up to: 011_212_to_213 (executed)
    2020/09/28 17:18:02 [info] migrating jwt on keyspace 'kong_prod2'...
    2020/09/28 17:18:02 [debug] running migration: 003_200_to_210
    2020/09/28 17:18:02 [info] jwt migrated up to: 003_200_to_210 (pending)
    2020/09/28 17:18:02 [info] migrating acl on keyspace 'kong_prod2'...
    2020/09/28 17:18:02 [debug] running migration: 003_200_to_210
    2020/09/28 17:18:02 [info] acl migrated up to: 003_200_to_210 (pending)
    2020/09/28 17:18:02 [debug] running migration: 004_212_to_213
    2020/09/28 17:18:02 [info] acl migrated up to: 004_212_to_213 (pending)
    2020/09/28 17:18:02 [info] migrating rate-limiting on keyspace 'kong_prod2'...
    2020/09/28 17:18:02 [debug] running migration: 004_200_to_210
    2020/09/28 17:18:02 [info] rate-limiting migrated up to: 004_200_to_210 (executed)
    2020/09/28 17:18:02 [info] migrating oauth2 on keyspace 'kong_prod2'...
    2020/09/28 17:18:02 [debug] running migration: 004_200_to_210
    2020/09/28 17:18:03 [info] oauth2 migrated up to: 004_200_to_210 (pending)
    2020/09/28 17:18:03 [debug] running migration: 005_210_to_211
    2020/09/28 17:18:03 [info] oauth2 migrated up to: 005_210_to_211 (pending)
    2020/09/28 17:18:03 [verbose] waiting for Cassandra schema consensus (120000ms timeout)...
    2020/09/28 17:18:09 [verbose] Cassandra schema consensus: reached
    2020/09/28 17:18:09 [info] 9 migrations processed
    2020/09/28 17:18:09 [info] 2 executed
    2020/09/28 17:18:09 [info] 7 pending
    2020/09/28 17:18:09 [debug] loading subsystems migrations...
    2020/09/28 17:18:09 [verbose] retrieving keyspace schema state...
    2020/09/28 17:18:09 [verbose] schema state retrieved
    2020/09/28 17:18:09 [info]
    Database has pending migrations; run 'kong migrations finish' when ready
    

    So far so good, right? Well, it's after the up that the finish starts to see problems (env variable dump removed from the verbose output for brevity):

    / $ kong migrations finish --db-timeout 120 --vv
    2020/09/28 17:19:57 [debug] loading subsystems migrations...
    2020/09/28 17:19:57 [verbose] retrieving keyspace schema state...
    2020/09/28 17:19:57 [verbose] schema state retrieved
    2020/09/28 17:19:57 [debug] loading subsystems migrations...
    2020/09/28 17:19:57 [verbose] retrieving keyspace schema state...
    2020/09/28 17:19:58 [verbose] schema state retrieved
    2020/09/28 17:19:58 [debug] pending migrations to finish:
      core: 009_200_to_210, 010_210_to_211
       jwt: 003_200_to_210
       acl: 003_200_to_210, 004_212_to_213
    oauth2: 004_200_to_210, 005_210_to_211
    2020/09/28 17:19:58 [info] migrating core on keyspace 'kong_prod2'...
    2020/09/28 17:19:58 [debug] running migration: 009_200_to_210
    2020/09/28 17:20:12 [verbose] waiting for Cassandra schema consensus (120000ms timeout)...
    2020/09/28 17:20:12 [verbose] Cassandra schema consensus: reached
    2020/09/28 17:20:12 [info] core migrated up to: 009_200_to_210 (executed)
    2020/09/28 17:20:12 [debug] running migration: 010_210_to_211
    2020/09/28 17:20:12 [verbose] waiting for Cassandra schema consensus (120000ms timeout)...
    2020/09/28 17:20:12 [verbose] Cassandra schema consensus: reached
    2020/09/28 17:20:12 [info] core migrated up to: 010_210_to_211 (executed)
    2020/09/28 17:20:12 [info] migrating jwt on keyspace 'kong_prod2'...
    2020/09/28 17:20:12 [debug] running migration: 003_200_to_210
    2020/09/28 17:20:14 [verbose] waiting for Cassandra schema consensus (120000ms timeout)...
    2020/09/28 17:20:14 [verbose] Cassandra schema consensus: not reached
    Error:
    ...ong/luarocks/share/lua/5.1/kong/cmd/utils/migrations.lua:161: [Cassandra error] cluster_mutex callback threw an error: ...ong/luarocks/share/lua/5.1/kong/cmd/utils/migrations.lua:142: [Cassandra error] failed to wait for schema consensus: [Read failure] Operation failed - received 0 responses and 1 failures
    stack traceback:
            [C]: in function 'assert'
            ...ong/luarocks/share/lua/5.1/kong/cmd/utils/migrations.lua:142: in function <...ong/luarocks/share/lua/5.1/kong/cmd/utils/migrations.lua:126>
            [C]: in function 'xpcall'
            /usr/local/kong/luarocks/share/lua/5.1/kong/db/init.lua:364: in function </usr/local/kong/luarocks/share/lua/5.1/kong/db/init.lua:314>
            [C]: in function 'pcall'
            /usr/local/kong/luarocks/share/lua/5.1/kong/concurrency.lua:45: in function 'cluster_mutex'
            ...ong/luarocks/share/lua/5.1/kong/cmd/utils/migrations.lua:126: in function 'finish'
            ...ocal/kong/luarocks/share/lua/5.1/kong/cmd/migrations.lua:184: in function 'cmd_exec'
            /usr/local/kong/luarocks/share/lua/5.1/kong/cmd/init.lua:88: in function </usr/local/kong/luarocks/share/lua/5.1/kong/cmd/init.lua:88>
            [C]: in function 'xpcall'
            /usr/local/kong/luarocks/share/lua/5.1/kong/cmd/init.lua:88: in function </usr/local/kong/luarocks/share/lua/5.1/kong/cmd/init.lua:45>
            /usr/bin/kong:9: in function 'file_gen'
            init_worker_by_lua:50: in function <init_worker_by_lua:48>
            [C]: in function 'xpcall'
            init_worker_by_lua:57: in function <init_worker_by_lua:55>
    stack traceback:
            [C]: in function 'error'
            ...ong/luarocks/share/lua/5.1/kong/cmd/utils/migrations.lua:161: in function 'finish'
            ...ocal/kong/luarocks/share/lua/5.1/kong/cmd/migrations.lua:184: in function 'cmd_exec'
            /usr/local/kong/luarocks/share/lua/5.1/kong/cmd/init.lua:88: in function </usr/local/kong/luarocks/share/lua/5.1/kong/cmd/init.lua:88>
            [C]: in function 'xpcall'
            /usr/local/kong/luarocks/share/lua/5.1/kong/cmd/init.lua:88: in function </usr/local/kong/luarocks/share/lua/5.1/kong/cmd/init.lua:45>
            /usr/bin/kong:9: in function 'file_gen'
            init_worker_by_lua:50: in function <init_worker_by_lua:48>
            [C]: in function 'xpcall'
            init_worker_by_lua:57: in function <init_worker_by_lua:55>
    

    Second run of finish to try again:

    2020/09/28 17:22:14 [debug] loading subsystems migrations...
    2020/09/28 17:22:14 [verbose] retrieving keyspace schema state...
    2020/09/28 17:22:14 [verbose] schema state retrieved
    2020/09/28 17:22:15 [debug] loading subsystems migrations...
    2020/09/28 17:22:15 [verbose] retrieving keyspace schema state...
    2020/09/28 17:22:15 [verbose] schema state retrieved
    2020/09/28 17:22:15 [debug] pending migrations to finish:
       acl: 003_200_to_210, 004_212_to_213
    oauth2: 004_200_to_210, 005_210_to_211
    2020/09/28 17:22:15 [info] migrating acl on keyspace 'kong_prod2'...
    2020/09/28 17:22:15 [debug] running migration: 003_200_to_210
    2020/09/28 17:22:28 [verbose] waiting for Cassandra schema consensus (120000ms timeout)...
    2020/09/28 17:22:28 [verbose] Cassandra schema consensus: not reached
    Error:
    ...ong/luarocks/share/lua/5.1/kong/cmd/utils/migrations.lua:161: [Cassandra error] cluster_mutex callback threw an error: ...ong/luarocks/share/lua/5.1/kong/cmd/utils/migrations.lua:142: [Cassandra error] failed to wait for schema consensus: [Read failure] Operation failed - received 0 responses and 1 failures
    stack traceback:
            [C]: in function 'assert'
            ...ong/luarocks/share/lua/5.1/kong/cmd/utils/migrations.lua:142: in function <...ong/luarocks/share/lua/5.1/kong/cmd/utils/migrations.lua:126>
            [C]: in function 'xpcall'
            /usr/local/kong/luarocks/share/lua/5.1/kong/db/init.lua:364: in function </usr/local/kong/luarocks/share/lua/5.1/kong/db/init.lua:314>
            [C]: in function 'pcall'
            /usr/local/kong/luarocks/share/lua/5.1/kong/concurrency.lua:45: in function 'cluster_mutex'
            ...ong/luarocks/share/lua/5.1/kong/cmd/utils/migrations.lua:126: in function 'finish'
            ...ocal/kong/luarocks/share/lua/5.1/kong/cmd/migrations.lua:184: in function 'cmd_exec'
            /usr/local/kong/luarocks/share/lua/5.1/kong/cmd/init.lua:88: in function </usr/local/kong/luarocks/share/lua/5.1/kong/cmd/init.lua:88>
            [C]: in function 'xpcall'
            /usr/local/kong/luarocks/share/lua/5.1/kong/cmd/init.lua:88: in function </usr/local/kong/luarocks/share/lua/5.1/kong/cmd/init.lua:45>
            /usr/bin/kong:9: in function 'file_gen'
            init_worker_by_lua:50: in function <init_worker_by_lua:48>
            [C]: in function 'xpcall'
            init_worker_by_lua:57: in function <init_worker_by_lua:55>
    stack traceback:
            [C]: in function 'error'
            ...ong/luarocks/share/lua/5.1/kong/cmd/utils/migrations.lua:161: in function 'finish'
            ...ocal/kong/luarocks/share/lua/5.1/kong/cmd/migrations.lua:184: in function 'cmd_exec'
            /usr/local/kong/luarocks/share/lua/5.1/kong/cmd/init.lua:88: in function </usr/local/kong/luarocks/share/lua/5.1/kong/cmd/init.lua:88>
            [C]: in function 'xpcall'
            /usr/local/kong/luarocks/share/lua/5.1/kong/cmd/init.lua:88: in function </usr/local/kong/luarocks/share/lua/5.1/kong/cmd/init.lua:45>
            /usr/bin/kong:9: in function 'file_gen'
            init_worker_by_lua:50: in function <init_worker_by_lua:48>
            [C]: in function 'xpcall'
            init_worker_by_lua:57: in function <init_worker_by_lua:55>
    

    Third run of finish:

    2020/09/28 17:23:28 [verbose] schema state retrieved
    2020/09/28 17:23:28 [debug] loading subsystems migrations...
    2020/09/28 17:23:28 [verbose] retrieving keyspace schema state...
    2020/09/28 17:23:29 [verbose] schema state retrieved
    2020/09/28 17:23:29 [debug] pending migrations to finish:
       acl: 004_212_to_213
    oauth2: 004_200_to_210, 005_210_to_211
    2020/09/28 17:23:29 [info] migrating acl on keyspace 'kong_prod2'...
    2020/09/28 17:23:29 [debug] running migration: 004_212_to_213
    2020/09/28 17:23:29 [verbose] waiting for Cassandra schema consensus (120000ms timeout)...
    2020/09/28 17:23:29 [verbose] Cassandra schema consensus: not reached
    Error:
    ...ong/luarocks/share/lua/5.1/kong/cmd/utils/migrations.lua:161: [Cassandra error] cluster_mutex callback threw an error: ...ong/luarocks/share/lua/5.1/kong/cmd/utils/migrations.lua:142: [Cassandra error] failed to wait for schema consensus: [Read failure] Operation failed - received 0 responses and 1 failures
    stack traceback:
            [C]: in function 'assert'
            ...ong/luarocks/share/lua/5.1/kong/cmd/utils/migrations.lua:142: in function <...ong/luarocks/share/lua/5.1/kong/cmd/utils/migrations.lua:126>
            [C]: in function 'xpcall'
            /usr/local/kong/luarocks/share/lua/5.1/kong/db/init.lua:364: in function </usr/local/kong/luarocks/share/lua/5.1/kong/db/init.lua:314>
            [C]: in function 'pcall'
            /usr/local/kong/luarocks/share/lua/5.1/kong/concurrency.lua:45: in function 'cluster_mutex'
            ...ong/luarocks/share/lua/5.1/kong/cmd/utils/migrations.lua:126: in function 'finish'
            ...ocal/kong/luarocks/share/lua/5.1/kong/cmd/migrations.lua:184: in function 'cmd_exec'
            /usr/local/kong/luarocks/share/lua/5.1/kong/cmd/init.lua:88: in function </usr/local/kong/luarocks/share/lua/5.1/kong/cmd/init.lua:88>
            [C]: in function 'xpcall'
            /usr/local/kong/luarocks/share/lua/5.1/kong/cmd/init.lua:88: in function </usr/local/kong/luarocks/share/lua/5.1/kong/cmd/init.lua:45>
            /usr/bin/kong:9: in function 'file_gen'
            init_worker_by_lua:50: in function <init_worker_by_lua:48>
            [C]: in function 'xpcall'
            init_worker_by_lua:57: in function <init_worker_by_lua:55>
    stack traceback:
            [C]: in function 'error'
            ...ong/luarocks/share/lua/5.1/kong/cmd/utils/migrations.lua:161: in function 'finish'
            ...ocal/kong/luarocks/share/lua/5.1/kong/cmd/migrations.lua:184: in function 'cmd_exec'
            /usr/local/kong/luarocks/share/lua/5.1/kong/cmd/init.lua:88: in function </usr/local/kong/luarocks/share/lua/5.1/kong/cmd/init.lua:88>
            [C]: in function 'xpcall'
            /usr/local/kong/luarocks/share/lua/5.1/kong/cmd/init.lua:88: in function </usr/local/kong/luarocks/share/lua/5.1/kong/cmd/init.lua:45>
            /usr/bin/kong:9: in function 'file_gen'
            init_worker_by_lua:50: in function <init_worker_by_lua:48>
            [C]: in function 'xpcall'
            init_worker_by_lua:57: in function <init_worker_by_lua:55>
    

    Fourth run of finish:

    2020/09/28 17:24:15 [debug] loading subsystems migrations...
    2020/09/28 17:24:15 [verbose] retrieving keyspace schema state...
    2020/09/28 17:24:15 [verbose] schema state retrieved
    2020/09/28 17:24:15 [debug] loading subsystems migrations...
    2020/09/28 17:24:15 [verbose] retrieving keyspace schema state...
    2020/09/28 17:24:16 [verbose] schema state retrieved
    2020/09/28 17:24:16 [debug] pending migrations to finish:
    oauth2: 004_200_to_210, 005_210_to_211
    2020/09/28 17:24:16 [info] migrating oauth2 on keyspace 'kong_prod2'...
    2020/09/28 17:24:16 [debug] running migration: 004_200_to_210
    2020/09/28 17:24:18 [verbose] waiting for Cassandra schema consensus (120000ms timeout)...
    2020/09/28 17:24:18 [verbose] Cassandra schema consensus: reached
    2020/09/28 17:24:18 [info] oauth2 migrated up to: 004_200_to_210 (executed)
    2020/09/28 17:24:18 [debug] running migration: 005_210_to_211
    2020/09/28 17:24:18 [verbose] waiting for Cassandra schema consensus (120000ms timeout)...
    2020/09/28 17:24:18 [verbose] Cassandra schema consensus: reached
    2020/09/28 17:24:18 [info] oauth2 migrated up to: 005_210_to_211 (executed)
    2020/09/28 17:24:18 [info] 2 migrations processed
    2020/09/28 17:24:18 [info] 2 executed
    2020/09/28 17:24:18 [debug] loading subsystems migrations...
    2020/09/28 17:24:18 [verbose] retrieving keyspace schema state...
    2020/09/28 17:24:18 [verbose] schema state retrieved
    2020/09/28 17:24:18 [info] No pending migrations to finish
    

    Lastly, I confirmed via list and another finish that there was nothing left to do:

    2020/09/28 17:25:35 [debug] loading subsystems migrations...
    2020/09/28 17:25:35 [verbose] retrieving keyspace schema state...
    2020/09/28 17:25:35 [verbose] schema state retrieved
    2020/09/28 17:25:35 [info] Executed migrations:
             core: 000_base, 003_100_to_110, 004_110_to_120, 005_120_to_130, 006_130_to_140, 007_140_to_150, 008_150_to_200, 009_200_to_210, 010_210_to_211, 011_212_to_213
              jwt: 000_base_jwt, 002_130_to_140, 003_200_to_210
              acl: 000_base_acl, 002_130_to_140, 003_200_to_210, 004_212_to_213
    rate-limiting: 000_base_rate_limiting, 003_10_to_112, 004_200_to_210
           oauth2: 000_base_oauth2, 003_130_to_140, 004_200_to_210, 005_210_to_211
    

    Looks like re-entrancy is still a problem with the workspace tables. The duplicates likely correlate with how many times I re-executed the finish command trying to get through this upgrade, since finish failed a few times in a row and seemed to progress one table at a time with each failure:

    (screenshot)

    I also find it interesting that even though I set the C* schema consensus timeout high, it comes back instantly saying consensus was not reached. For example:

    2020/09/28 17:23:29 [verbose] waiting for Cassandra schema consensus (120000ms timeout)...
    2020/09/28 17:23:29 [verbose] Cassandra schema consensus: not reached
    

    cc @bungle @locao @hishamhm
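
    The instant "not reached" result above can be spot-checked from the Cassandra side. A minimal sketch, assuming `nodetool` is available on a cluster node: count the distinct schema versions reported under `Schema versions:` in `nodetool describecluster`; more than one UUID there means the cluster genuinely disagrees on schema.

    ```shell
    # Count distinct schema version UUIDs reported by the cluster.
    # More than one means schema consensus has not settled.
    n=$(nodetool describecluster | grep -cE '[0-9a-f-]{36}:')
    if [ "$n" -le 1 ]; then
      echo "schema consensus reached ($n version)"
    else
      echo "no consensus: $n schema versions in flight"
    fi
    ```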

    Edit:

    Yeah, I see the tables disagreeing: oauth2_credentials uses ws_id 38321a07-1d68-4aa8-b01f-8d64d5bf665a while jwt_secrets uses ws_id 8f1d01e1-0609-4bda-a021-a01e619d1590.
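
    Those per-table ws_id values can be compared directly in cqlsh; a sketch assuming a reachable Cassandra contact point (the hostname is a placeholder; the keyspace is the one from the logs above). After a clean migration every row should reference the same default workspace.

    ```shell
    # Spot-check workspace ids across the credential tables.
    cqlsh cassandra-host -k kong_prod2 -e "SELECT ws_id FROM oauth2_credentials LIMIT 5;"
    cqlsh cassandra-host -k kong_prod2 -e "SELECT ws_id FROM jwt_secrets LIMIT 5;"
    ```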

    Additional Details & Logs

    • Kong version 2.1.4
    task/needs-investigation core/db/migrations 
    opened by jeremyjpj0916 82
  • Kong 1.0

    Kong 1.0 "5K resource challenge/Plugin+Router map build" findings/errors(Pending Bungle PR fixes this)

    Summary

    Made it my weekend mission to vet Kong 1.0 as hard as I could in our real sandbox environment. Many awesome PRs have brought great improvements based on earlier reports. I decided to crank it up for a final test and built out my sandbox Kong to:

    5k service/route pairs, each with oauth2 + acl with a whitelist entry (so 2 plugins per route), and 5k consumers, each with an acl whitelist group added and a pair of oauth2/jwt creds.

    Additional PRs I included in my Kong 1.0 patched version: 4102 4179 4178 4177 4182

    I did so with a little script looping over Admin API calls from a remote client. During the build-out, maybe 5 times total, a few 500-level error responses occurred, either directly from the Admin API calls or from my LB health-check pings while the Admin API calls were executing. I figured I would list some of them here in case they bring any insight or room for improvement:
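
    For reference, the build-out loop amounted to something like the following, a minimal sketch against the standard Admin API service/route endpoints (the host, upstream URL, and naming scheme are placeholders, and the real script also attached the oauth2/acl plugins and consumers):

    ```shell
    #!/bin/sh
    # Create N service/route pairs through the Admin API (assumed on :8001).
    ADMIN=${ADMIN:-http://localhost:8001}
    N=${N:-5000}
    i=1
    while [ "$i" -le "$N" ]; do
      name="svc-$i"
      # create the service, then a route bound to it by path
      curl -s -o /dev/null -d "name=$name" \
           -d "url=http://upstream.internal/$name" "$ADMIN/services"
      curl -s -o /dev/null -d "paths[]=/$name" "$ADMIN/services/$name/routes"
      i=$((i + 1))
    done
    ```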

    First error came from my standard 5 second health-check ping calls against the gateway while under resource creation load:

    2019/01/13 21:07:05 [crit] 35#0: *1415187 [lua] init.lua:639: rewrite(): could not ensure plugins map is up to date: failed to rebuild plugins map: [cassandra] could not execute page query: [Server error] java.lang.NullPointerException, client: 10.xxx.x.x, server: kong, request: "GET /F5/status HTTP/1.1", host: "gateway-dev-core-dc2.company.com"
    

    Any way to add retry/hardening logic around this? It's strange to see a NullPointerException being thrown (from C* itself, considering it is a logged Java error). It's easy to say C* caused it so it's not Kong's problem, but is it possible Kong could safely and effectively step in to remediate low-level db query failures between client and server?

    Another one looked like this:

    2019/01/13 22:15:06 [error] 38#0: *1518092 [kong] init.lua:86 could not execute page query: [Server error] java.lang.NullPointerException, client: 10.xxx.xx.x, server: kong_admin, request: "GET /acls?offset=EKEtFbKFb0ThvMhuC02YnisA8H%2F%2F%2Bb8A HTTP/1.1", host: "kong:8001"
    

    A worker-event failed to post due to no memory, and others timed out? Is the first an exhaustion of the Lua VM memory? My node as a whole is only using 1.2GB of 5GB of memory, so that seems a bit odd to me. Is there any potential for internal remediation around worker-events that failed or timed out? I don't see any Kong configuration available via the conf/env variables that lets us configure the timeout on worker event data (or whatever the seemingly baked-in wait for worker event data is):

    2019/01/13 22:50:10 [error] 37#0: *1569611 [lua] events.lua:254: post(): worker-events: failed posting event "healthy" by "lua-resty-healthcheck [test_upstream]"; no memory, context: ngx.timer
    2019/01/13 22:50:10 [error] 34#0: *1569615 [lua] events.lua:345: post(): worker-events: dropping event; waiting for event data timed out, id: 34317, client: 127.0.0.1, server: kong_admin, request: "PUT /routes/76bb1716-14d0-46fc-a4d3-9b6441e4fa6f HTTP/1.1", host: "localhost:8001"
    2019/01/13 22:50:10 [error] 38#0: *1569619 [lua] events.lua:345: poll(): worker-events: dropping event; waiting for event data timed out, id: 34317, context: ngx.timer
    2019/01/13 22:50:10 [error] 36#0: *1569621 [lua] events.lua:345: poll(): worker-events: dropping event; waiting for event data timed out, id: 34317, context: ngx.timer
    2019/01/13 22:50:10 [error] 35#0: *1569623 [lua] events.lua:345: poll(): worker-events: dropping event; waiting for event data timed out, id: 34317, context: ngx.timer
    2019/01/13 22:50:11 [error] 33#0: *1569632 [lua] events.lua:345: poll(): worker-events: dropping event; waiting for event data timed out, id: 34317, context: ngx.timer
    2019/01/13 22:50:11 [error] 37#0: *1569634 [lua] events.lua:345: poll(): worker-events: dropping event; waiting for event data timed out, id: 34317, context: ngx.timer
    

    Another interesting error; it almost seems like my C* node caused an SSL handshake failure one time:

    2019/01/14 00:16:07 [crit] 36#0: *1704447 [lua] init.lua:520: ssl_certificate(): could not ensure plugins map is up to date: failed to rebuild plugins map: [cassandra] could not execute page query: [Server error] java.lang.NullPointerException, context: ssl_certificate_by_lua*, client: 10.xxx.x.x, server: 0.0.0.0:8443
    2019/01/14 00:16:07 [crit] 36#0: *1704446 SSL_do_handshake() failed (SSL: error:1417A179:SSL routines:tls_post_process_client_hello:cert cb error) while SSL handshaking, client: 10.xxx.x.x, server: 0.0.0.0:8443
    

    Overall I have been impressed with Kong under this 5k stress challenge. I had half assumed all sorts of timers and rebuilding logic would fail due to resource growth, but I am not seeing that behavior with the PRs applied and with -1 set on the router perf PR (so I suppose I am partly leveraging async router building). Proxying still happens quickly, and my chaos test that kept adding and deleting a service/route pair while the node was under load worked fine.

    Feel free to close this whenever. I just thought to relay the behavior I saw. Not too shabby by any means! I think core Kong is getting really close to hitting all my checkboxes in recent weeks 👍. 5k resources with good perf against a real db cluster with network latency should meet 99% of the world's needs for a gateway, imo. Maybe one day in a year or two I will give the 10k challenge a spin 😆.

    Additional Details & Logs

    • Kong version 1.0 with pending/merged perf PRs
    opened by jeremyjpj0916 74
  • Kong throws 404 not found on route paths that exist at times.

    Kong throws 404 not found on route paths that exist at times.

    Summary

    We have noticed times on our Kong Gateway nodes when common endpoints the gateway exposes throw 404 route not found on a percentage of API calls. The specific proxy we focus on in this post ("/F5/status") does not route upstream, has no auth, and serves as an uptime ping endpoint that returns a static 200 success. We notice this on other endpoints that have auth and plugins as well, but the frequency with which our ping endpoint gets consumed is consistent and provides the best insight.

    Steps To Reproduce

    Reproducing this consistently seems impossible from our perspective at this time, but we will elaborate with as much detail and as many screenshots as we can.

    1. Create services and routes in 1-to-1 combination. We have 130 service/route pairs and 300 plugins, some global, some applied directly to routes (acl/auth).

    2. Add additional service + route pairs with standard plugins over an extended period.

    3. Our suspicion is that eventually the 404s will reveal themselves; we have a high degree of confidence the problem does not happen globally across all worker processes.

    An arbitrary new service+route pair was made on the gateway; the important thing to note is the created timestamp: (screenshot: service created_at timestamp)

    Converted to UTC: (screenshot: converted timestamp)

    The above time matches identically with when the existing "/F5/status" endpoint began throwing 404 route not found errors: (screenshot: /F5/status error rate)

    You can see a direct correlation between when that new service+route pair was created and when the gateway began to throw 404 not found errors on a route that exists and previously had no problems. Note the "/F5/status" endpoint takes consistent traffic at all times from health check monitors.

    The interesting bit is the percentage of errors on this individual Kong node: we run 6 worker processes, and the error rate almost perfectly matches 1 worker process showing impact: (screenshot: /F5/status 404 rate)
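
    For reference, exactly one stale worker out of six would account for roughly a sixth of requests, assuming the LB spreads requests evenly across workers:

    ```shell
    # expected 404 percentage if exactly 1 of 6 worker processes serves a stale router
    awk 'BEGIN { printf "%.1f%%\n", 100 / 6 }'   # prints 16.7%
    ```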

    To describe our architecture: we run Kong with Cassandra 3.x in 2 datacenters, 1 Kong node per datacenter, against a 6-node Cassandra cluster, 3 C* nodes per datacenter. The errors only occurred in a single datacenter, on the Kong node in our examples above, but both datacenters share identical settings. We redeploy Kong weekly, early every Monday AM, but the error presented above started on a Wednesday, so we can't correlate the problem to any sort of Kong startup issue.

    To us the behavior points to cache rebuilding during new resource creation, based on what we can correlate. Sadly, Kong's logging catches nothing of interest when we notice the issue presenting itself. Also note it obviously does not happen every time, so it is a very hard issue to nail down.

    We also notice the issue correcting itself at times. We have not traced the correction to anything specific yet, but I assume it is very likely when further services and routes are created after the errors are occurring, and what seems to be a problematic worker process has its router rebuilt again.

    Another point I can make is that production has not seen this issue with identical Kong configurations and architecture. But production has fewer proxies and has new services or routes added at a much lower frequency (1-2 per week vs 20+ in non-prod).

    I wonder if it may be safer for us to switch the cache TTL back from 0 (infinite) to some arbitrary number of hours to force resources to cycle. Though if it is indeed the cache, as we suspect, I suppose that could actually make this issue more frequent.

    I may also write a Kong health script that grabs all routes on the gateway and calls each one by one, as a daily sanity check that none return a 404. My biggest fear is that as production grows in size and/or services/routes are created at a higher daily frequency, we may begin to see the issue present itself there as well. That would cause dramatic impact to existing priority services if they start returning 404 because Kong does not recognize that the proxy route exists in the db and cache it appropriately.
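
    That daily sanity check could be sketched as follows, assuming the Admin API on :8001 and the proxy on :8000 (the crude string extraction stands in for proper JSON parsing with jq, and pagination of the route listing is ignored):

    ```shell
    #!/bin/sh
    # Probe every route path on the gateway and flag any that return 404.
    ADMIN=${ADMIN:-http://localhost:8001}
    PROXY=${PROXY:-http://localhost:8000}
    # crude extraction of "/path" strings from the /routes JSON response
    curl -s "$ADMIN/routes" | tr ',' '\n' | grep -o '"/[^"]*"' | tr -d '"' |
    while read -r path; do
      code=$(curl -s -o /dev/null -w '%{http_code}' "$PROXY$path")
      [ "$code" = "404" ] && echo "ROUTE MISSING: $path"
    done
    ```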

    Sorry I could not provide a 100% reproducible scenario for this situation; I can only go off the analytics we have. But if it leads to some underlying bug in how Kong currently manages services and routes, fixing it would bring huge stability to the core product.

    Additional Details & Logs

    • Kong version 0.14.1

    • Kong error logs - Kong error logs reveal nothing about the 404 not founds from Kong's perspective. Nothing gets logged during these events in terms of normal or debug execution.

    • Kong configuration (the output of a GET request to Kong's Admin port - see https://docs.konghq.com/latest/admin-api/#retrieve-node-information)

    {
      "plugins": {
        "enabled_in_cluster": [
          "correlation-id",
          "acl",
          "kong-oidc-multi-idp",
          "oauth2",
          "rate-limiting",
          "cors",
          "jwt",
          "kong-spec-expose",
          "request-size-limiting",
          "kong-response-size-limiting",
          "request-transformer",
          "request-termination",
          "kong-error-log",
          "kong-oidc-implicit-token",
          "kong-splunk-log",
          "kong-upstream-jwt",
          "kong-cluster-drain",
          "statsd"
        ],
        "available_on_server": {
          "kong-path-based-routing": true,
          "kong-spec-expose": true,
          "kong-cluster-drain": true,
          "correlation-id": true,
          "statsd": true,
          "jwt": true,
          "cors": true,
          "kong-oidc-multi-idp": true,
          "kong-response-size-limiting": true,
          "kong-oidc-auth": true,
          "kong-upstream-jwt": true,
          "kong-error-log": true,
          "request-termination": true,
          "request-size-limiting": true,
          "acl": true,
          "rate-limiting": true,
          "kong-service-virtualization": true,
          "request-transformer": true,
          "kong-oidc-implicit-token": true,
          "kong-splunk-log": true,
          "oauth2": true
        }
      },
      "tagline": "Welcome to kong",
      "configuration": {
        "plugins": [
          "kong-error-log",
          "kong-oidc-implicit-token",
          "kong-response-size-limiting",
          "cors",
          "request-transformer",
          "kong-service-virtualization",
          "kong-cluster-drain",
          "kong-upstream-jwt",
          "kong-splunk-log",
          "kong-spec-expose",
          "kong-oidc-auth",
          "kong-path-based-routing",
          "kong-oidc-multi-idp",
          "correlation-id",
          "oauth2",
          "statsd",
          "jwt",
          "rate-limiting",
          "acl",
          "request-size-limiting",
          "request-termination"
        ],
        "admin_ssl_enabled": false,
        "lua_ssl_verify_depth": 3,
        "trusted_ips": [
          "0.0.0.0/0",
          "::/0"
        ],
        "lua_ssl_trusted_certificate": "/usr/local/kong/ssl/kongcert.pem",
        "loaded_plugins": {
          "kong-path-based-routing": true,
          "kong-spec-expose": true,
          "kong-cluster-drain": true,
          "correlation-id": true,
          "statsd": true,
          "jwt": true,
          "cors": true,
          "rate-limiting": true,
          "kong-response-size-limiting": true,
          "kong-oidc-auth": true,
          "kong-upstream-jwt": true,
          "acl": true,
          "oauth2": true,
          "kong-splunk-log": true,
          "kong-oidc-implicit-token": true,
          "kong-error-log": true,
          "kong-service-virtualization": true,
          "request-transformer": true,
          "kong-oidc-multi-idp": true,
          "request-size-limiting": true,
          "request-termination": true
        },
        "cassandra_username": "****",
        "admin_ssl_cert_csr_default": "/usr/local/kong/ssl/admin-kong-default.csr",
        "ssl_cert_key": "/usr/local/kong/ssl/kongprivatekey.key",
        "dns_resolver": {},
        "pg_user": "kong",
        "mem_cache_size": "1024m",
        "cassandra_data_centers": [
          "dc1:2",
          "dc2:3"
        ],
        "nginx_admin_directives": {},
        "cassandra_password": "******",
        "custom_plugins": {},
        "pg_host": "127.0.0.1",
        "nginx_acc_logs": "/usr/local/kong/logs/access.log",
        "proxy_listen": [
          "0.0.0.0:8000",
          "0.0.0.0:8443 ssl http2"
        ],
        "client_ssl_cert_default": "/usr/local/kong/ssl/kong-default.crt",
        "ssl_cert_key_default": "/usr/local/kong/ssl/kong-default.key",
        "dns_no_sync": false,
        "db_update_propagation": 5,
        "nginx_err_logs": "/usr/local/kong/logs/error.log",
        "cassandra_port": 9042,
        "dns_order": [
          "LAST",
          "SRV",
          "A",
          "CNAME"
        ],
        "dns_error_ttl": 1,
        "headers": [
          "latency_tokens"
        ],
        "dns_stale_ttl": 4,
        "nginx_optimizations": true,
        "database": "cassandra",
        "pg_database": "kong",
        "nginx_worker_processes": "auto",
        "lua_package_cpath": "",
        "admin_acc_logs": "/usr/local/kong/logs/admin_access.log",
        "lua_package_path": "./?.lua;./?/init.lua;",
        "nginx_pid": "/usr/local/kong/pids/nginx.pid",
        "upstream_keepalive": 120,
        "cassandra_contact_points": [
          "server8429",
          "server8431",
          "server8432",
          "server8433",
          "server8445",
          "server8428"
        ],
        "admin_access_log": "off",
        "client_ssl_cert_csr_default": "/usr/local/kong/ssl/kong-default.csr",
        "proxy_listeners": [
          {
            "ssl": false,
            "ip": "0.0.0.0",
            "proxy_protocol": false,
            "port": 8000,
            "http2": false,
            "listener": "0.0.0.0:8000"
          },
          {
            "ssl": true,
            "ip": "0.0.0.0",
            "proxy_protocol": false,
            "port": 8443,
            "http2": true,
            "listener": "0.0.0.0:8443 ssl http2"
          }
        ],
        "proxy_ssl_enabled": true,
        "proxy_access_log": "off",
        "ssl_cert_csr_default": "/usr/local/kong/ssl/kong-default.csr",
        "enabled_headers": {
          "latency_tokens": true,
          "X-Upstream-Status": false,
          "X-Proxy-Latency": true,
          "server_tokens": false,
          "Server": false,
          "Via": false,
          "X-Upstream-Latency": true
        },
        "cassandra_ssl": true,
        "cassandra_local_datacenter": "DC2",
        "db_resurrect_ttl": 30,
        "db_update_frequency": 5,
        "cassandra_consistency": "LOCAL_QUORUM",
        "client_max_body_size": "100m",
        "admin_error_log": "/dev/stderr",
        "pg_ssl_verify": false,
        "dns_not_found_ttl": 30,
        "pg_ssl": false,
        "cassandra_repl_factor": 1,
        "cassandra_lb_policy": "RequestDCAwareRoundRobin",
        "cassandra_repl_strategy": "SimpleStrategy",
        "nginx_kong_conf": "/usr/local/kong/nginx-kong.conf",
        "error_default_type": "text/plain",
        "nginx_http_directives": {},
        "real_ip_header": "X-Forwarded-For",
        "kong_env": "/usr/local/kong/.kong_env",
        "cassandra_schema_consensus_timeout": 10000,
        "dns_hostsfile": "/etc/hosts",
        "admin_listeners": [
          {
            "ssl": false,
            "ip": "0.0.0.0",
            "proxy_protocol": false,
            "port": 8001,
            "http2": false,
            "listener": "0.0.0.0:8001"
          }
        ],
        "ssl_ciphers": "ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256",
        "ssl_cert": "/usr/local/kong/ssl/kongcert.crt",
        "prefix": "/usr/local/kong",
        "admin_ssl_cert_key_default": "/usr/local/kong/ssl/admin-kong-default.key",
        "cassandra_ssl_verify": true,
        "db_cache_ttl": 0,
        "ssl_cipher_suite": "modern",
        "real_ip_recursive": "on",
        "proxy_error_log": "/dev/stderr",
        "client_ssl_cert_key_default": "/usr/local/kong/ssl/kong-default.key",
        "nginx_daemon": "off",
        "anonymous_reports": false,
        "cassandra_timeout": 5000,
        "nginx_proxy_directives": {},
        "pg_port": 5432,
        "log_level": "debug",
        "client_body_buffer_size": "50m",
        "client_ssl": false,
        "lua_socket_pool_size": 30,
        "admin_ssl_cert_default": "/usr/local/kong/ssl/admin-kong-default.crt",
        "cassandra_keyspace": "kong_stage",
        "ssl_cert_default": "/usr/local/kong/ssl/kong-default.crt",
        "nginx_conf": "/usr/local/kong/nginx.conf",
        "admin_listen": [
          "0.0.0.0:8001"
        ]
      },
      "version": "0.14.1",
      "node_id": "cf0c92a9-724d-4972-baed-599785cff5ed",
      "lua_version": "LuaJIT 2.1.0-beta3",
      "prng_seeds": {
        "pid: 73": 183184219419,
        "pid: 71": 114224634222,
        "pid: 72": 213242339120,
        "pid: 70": 218221808514,
        "pid: 69": 233240145991,
        "pid: 68": 231238177547
      },
      "timers": {
        "pending": 4,
        "running": 0
      },
      "hostname": "kong-62-hxqnq"
    }
    
    • Operating system: Kong's Docker Alpine version 3.6 in github repo
    opened by jeremyjpj0916 65
  • Kong 2.0.5 to 2.X.X upgrade errors/problems

    Kong 2.0.5 to 2.X.X upgrade errors/problems

    Summary

    Seems we faced an issue attempting to upgrade in stage today (2.0.5 -> 2.1.1); we did not face issues, or make such a jump, in DEV. There we went 2.0.5 -> 2.1.0 -> 2.1.1, saw the issues during 2.1.0, opened git issues which Kong addressed, then migrated to 2.1.1 (and the dev region faced no such issues with the upgrades broken out like that):

    Now in stage, after running kong migrations up with the 2.1.1 nodes, we get this for a list check (from a 2.1.1 node):

    kong migrations list --vv 
    .....
    2020/08/11 22:25:38 [info]
    Pending migrations:
      core: 009_200_to_210, 010_210_to_211
       jwt: 003_200_to_210
       acl: 003_200_to_210
    oauth2: 004_200_to_210, 005_210_to_211
    2020/08/11 22:25:38 [info]
    Run 'kong migrations finish' when ready
    

    From 2.1.1 Nodes:

    kong migrations finish --vv
    2020/08/11 22:25:52 [debug] pending migrations to finish:
      core: 009_200_to_210, 010_210_to_211
       jwt: 003_200_to_210
       acl: 003_200_to_210
    oauth2: 004_200_to_210, 005_210_to_211
    2020/08/11 22:25:52 [info] migrating core on keyspace 'kong_stage'...
    2020/08/11 22:25:52 [debug] running migration: 009_200_to_210
    Error:
    ...ong/luarocks/share/lua/5.1/kong/cmd/utils/migrations.lua:161: [Cassandra error] cluster_mutex callback threw an error: ...ong/luarocks/share/lua/5.1/kong/cmd/utils/migrations.lua:142: [Cassandra error] failed to run migration '009_200_to_210' teardown: ...are/lua/5.1/kong/db/migrations/operations/200_to_210.lua:336: attempt to index a nil value
    stack traceback:
            ...are/lua/5.1/kong/db/migrations/operations/200_to_210.lua:336: in function 'ws_update_keys'
            ...are/lua/5.1/kong/db/migrations/operations/200_to_210.lua:446: in function 'ws_adjust_data'
            ...share/lua/5.1/kong/db/migrations/core/009_200_to_210.lua:125: in function <...share/lua/5.1/kong/db/migrations/core/009_200_to_210.lua:124>
            ...share/lua/5.1/kong/db/migrations/core/009_200_to_210.lua:211: in function <...share/lua/5.1/kong/db/migrations/core/009_200_to_210.lua:210>
            [C]: in function 'xpcall'
            /usr/local/kong/luarocks/share/lua/5.1/kong/db/init.lua:553: in function 'run_migrations'
            ...ong/luarocks/share/lua/5.1/kong/cmd/utils/migrations.lua:142: in function <...ong/luarocks/share/lua/5.1/kong/cmd/utils/migrations.lua:126>
            [C]: in function 'xpcall'
            /usr/local/kong/luarocks/share/lua/5.1/kong/db/init.lua:364: in function </usr/local/kong/luarocks/share/lua/5.1/kong/db/init.lua:314>
            [C]: in function 'pcall'
            /usr/local/kong/luarocks/share/lua/5.1/kong/concurrency.lua:45: in function 'cluster_mutex'
            ...ong/luarocks/share/lua/5.1/kong/cmd/utils/migrations.lua:126: in function 'finish'
            ...ocal/kong/luarocks/share/lua/5.1/kong/cmd/migrations.lua:184: in function 'cmd_exec'
            /usr/local/kong/luarocks/share/lua/5.1/kong/cmd/init.lua:88: in function </usr/local/kong/luarocks/share/lua/5.1/kong/cmd/init.lua:88>
            [C]: in function 'xpcall'
            /usr/local/kong/luarocks/share/lua/5.1/kong/cmd/init.lua:88: in function </usr/local/kong/luarocks/share/lua/5.1/kong/cmd/init.lua:45>
            /usr/bin/kong:9: in function 'file_gen'
            init_worker_by_lua:48: in function <init_worker_by_lua:46>
            [C]: in function 'xpcall'
            init_worker_by_lua:55: in function <init_worker_by_lua:53>
    stack traceback:
            [C]: in function 'assert'
            ...ong/luarocks/share/lua/5.1/kong/cmd/utils/migrations.lua:142: in function <...ong/luarocks/share/lua/5.1/kong/cmd/utils/migrations.lua:126>
            [C]: in function 'xpcall'
            /usr/local/kong/luarocks/share/lua/5.1/kong/db/init.lua:364: in function </usr/local/kong/luarocks/share/lua/5.1/kong/db/init.lua:314>
            [C]: in function 'pcall'
            /usr/local/kong/luarocks/share/lua/5.1/kong/concurrency.lua:45: in function 'cluster_mutex'
            ...ong/luarocks/share/lua/5.1/kong/cmd/utils/migrations.lua:126: in function 'finish'
            ...ocal/kong/luarocks/share/lua/5.1/kong/cmd/migrations.lua:184: in function 'cmd_exec'
            /usr/local/kong/luarocks/share/lua/5.1/kong/cmd/init.lua:88: in function </usr/local/kong/luarocks/share/lua/5.1/kong/cmd/init.lua:88>
            [C]: in function 'xpcall'
            /usr/local/kong/luarocks/share/lua/5.1/kong/cmd/init.lua:88: in function </usr/local/kong/luarocks/share/lua/5.1/kong/cmd/init.lua:45>
            /usr/bin/kong:9: in function 'file_gen'
            init_worker_by_lua:48: in function <init_worker_by_lua:46>
            [C]: in function 'xpcall'
            init_worker_by_lua:55: in function <init_worker_by_lua:53>
    stack traceback:
            [C]: in function 'error'
            ...ong/luarocks/share/lua/5.1/kong/cmd/utils/migrations.lua:161: in function 'finish'
            ...ocal/kong/luarocks/share/lua/5.1/kong/cmd/migrations.lua:184: in function 'cmd_exec'
            /usr/local/kong/luarocks/share/lua/5.1/kong/cmd/init.lua:88: in function </usr/local/kong/luarocks/share/lua/5.1/kong/cmd/init.lua:88>
            [C]: in function 'xpcall'
            /usr/local/kong/luarocks/share/lua/5.1/kong/cmd/init.lua:88: in function </usr/local/kong/luarocks/share/lua/5.1/kong/cmd/init.lua:45>
            /usr/bin/kong:9: in function 'file_gen'
            init_worker_by_lua:48: in function <init_worker_by_lua:46>
            [C]: in function 'xpcall'
            init_worker_by_lua:55: in function <init_worker_by_lua:53>
    

    Looks like Stage token gen for oauth2.0 started bombing as well; likely due to client_id going from a uuid to a string in the db.

    2020/08/11 22:27:39 [error] 92#0: *7954 lua coroutine: runtime error: ...ng/luarocks/share/lua/5.1/kong/plugins/oauth2/access.lua:576: failed to get from node cache: callback threw an error: /usr/local/kong/luarocks/share/lua/5.1/kong/db/dao/init.lua:738: client_id must be a string
    --
      | stack traceback:
      | [C]: in function 'error'
      | /usr/local/kong/luarocks/share/lua/5.1/kong/db/dao/init.lua:131: in function 'validate_unique_type'
      | /usr/local/kong/luarocks/share/lua/5.1/kong/db/dao/init.lua:347: in function 'validate_unique_row_method'
      | /usr/local/kong/luarocks/share/lua/5.1/kong/db/dao/init.lua:738: in function 'select_by_client_id'
      | ...ng/luarocks/share/lua/5.1/kong/plugins/oauth2/access.lua:191: in function <...ng/luarocks/share/lua/5.1/kong/plugins/oauth2/access.lua:190>
      | [C]: in function 'xpcall'
      | /usr/local/kong/luarocks/share/lua/5.1/resty/mlcache.lua:741: in function 'get'
      | /usr/local/kong/luarocks/share/lua/5.1/kong/cache.lua:251: in function 'get'
      | ...ng/luarocks/share/lua/5.1/kong/plugins/oauth2/access.lua:204: in function 'get_redirect_uris'
      | ...ng/luarocks/share/lua/5.1/kong/plugins/oauth2/access.lua:576: in function 'execute'
      | ...g/luarocks/share/lua/5.1/kong/plugins/oauth2/handler.lua:11: in function <...g/luarocks/share/lua/5.1/kong/plugins/oauth2/handler.lua:10>
      | stack traceback:
      | coroutine 0:
      | [C]: in function 'get_redirect_uris'
      | ...ng/luarocks/share/lua/5.1/kong/plugins/oauth2/access.lua:576: in function 'execute'
      | ...g/luarocks/share/lua/5.1/kong/plugins/oauth2/handler.lua:11: in function <...g/luarocks/share/lua/5.1/kong/plugins/oauth2/handler.lua:10>
      | coroutine 1:
      | [C]: in function 'resume'
      | coroutine.wrap:21: in function <coroutine.wrap:21>
      | /usr/local/kong/luarocks/share/lua/5.1/kong/init.lua:758: in function 'access'
      | access_by_lua(nginx.conf:146):2: in main chunk, client: 34.239.51.241, server: kong, request: "POST /auth/oauth2/token HTTP/1.1", host: "gateawy.company.com"
    

    Steps To Reproduce

    1. 2.0.5 instance to 2.1.1 upgrade

    Additional Details & Logs

    • Kong version 2.1.1
    opened by jeremyjpj0916 59
  • allow for parameter mapping in path based routing

    allow for parameter mapping in path based routing

    e.g. allow request_path to contain values such as /api/{foo}/bar which map to the upstream_url /upstream/my/api/bar/{foo}

    this could also be more globally addressed with regex matching groups e.g.

    request_url = '/api/(.+)/bar'
    upstream_url = '/upstream/my/api/bar/$1'
    
    task/feature 
    opened by ahmadnassri 59
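    The capture-group idea above can be sketched outside of Kong. This is a hypothetical stand-alone illustration (the `rewrite_path` helper and the `$1`-style substitution mirror the issue's example, not Kong's actual configuration schema):

```python
import re

def rewrite_path(request_path, pattern, upstream_template):
    """Map an incoming path to an upstream path using regex capture groups."""
    match = re.fullmatch(pattern, request_path)
    if match is None:
        return None  # route does not match
    # Substitute $1, $2, ... with the captured groups, as in the issue's example.
    result = upstream_template
    for i, group in enumerate(match.groups(), start=1):
        result = result.replace("$%d" % i, group)
    return result

print(rewrite_path("/api/foo/bar", r"/api/(.+)/bar", "/upstream/my/api/bar/$1"))
# -> /upstream/my/api/bar/foo
```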
  • Memory leak or overly-aggressive consumption

    Memory leak or overly-aggressive consumption

    Summary

    We are seeing Kong consume memory at about 1G/week and fail to release. We limit our instances over the

    Steps To Reproduce

    1. Run Kong (in our case, in docker)
    2. Watch memory usage graphs

    Here's what ours looks like

    The exponential spike at the far right was due to a flood of requests during a DoS-like event. The rate of climb appears to be related to traffic, meaning memory consumption is connected to something within the request cycle.

    Additional Details & Logs

    • Kong versions: 0.9.4 & 0.9.9
    • Kong debug-level startup logs
    • Kong configuration:
      • JWT
      • Datadog
      • Logstash
      • CORS
      • ACL
      • Basic Auth
    • Docker FROM kong:0.9.9
    task/needs-investigation pending author feedback 
    opened by plukevdh 52
  • [request] ACL

    [request] ACL

    Currently, we can create a list of APIs and a list of Consumers, and any consumer has access to any API. Instead, we would like to make an API accessible only to a set of consumers, something like a Premium plan for accessing an API. Do we have something already available, or some way to achieve this?

    We could do this by adding the list of consumers who have access to the API in the /apis resource. Alternatively, we could have a separate private-apis plugin and maintain the mapping of APIs and Consumers in a separate column family.

    Please let us know your suggestions. We would like to send a PR for the change.

    idea/new plugin 
    opened by tamizhgeek 51
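    The consumer-whitelist idea in this request boils down to a group membership check. A minimal hypothetical sketch (the names `consumer_groups` and `allow` are illustrative, not Kong's schema):

```python
def is_allowed(consumer_groups, allow):
    """Allow the request if the consumer belongs to at least one whitelisted group."""
    return bool(set(consumer_groups) & set(allow))

# A consumer in the "premium" group may call the premium-only API:
print(is_allowed(["premium", "beta"], allow=["premium"]))  # True
print(is_allowed(["free"], allow=["premium"]))             # False
```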
  • [request] support for DynamoDB

    [request] support for DynamoDB

    Cassandra is quite a beast to set up and maintain. If Kong runs in EC2, it would be great if it could also write its data to DynamoDB (very similar to Cassandra) and use that as a service. This would let engineers spend less time maintaining Cassandra and focus more on Kong.

    Great project!

    task/feature 
    opened by nickveenhof 47
  • Kong 2.4 Auth Bugs: OAuth2.0, JWT

    Kong 2.4 Auth Bugs: OAuth2.0, JWT

    Summary

    Seeing some unexplainable behavior out of my Kong dev nodes after upgrading to 2.4. Sometimes an attempt to generate an OAuth2.0 token fails with an error in one datacenter while the other DC succeeds in generating a token (and the DCs may flip-flop after redeployments; I think I have even seen moments where both DC deployments work at the same time, so it is very odd and not fully predictable). I may have also seen a similar issue with the JWT auth plugin, which reported the error (No credentials found for given 'iss') for a JWT the gateway received on a proxy request, but let's take these one at a time: if we can find what causes the OAuth2.0 behavior, fixing the common core component might also fix the apparent JWT auth failure. Nothing in the OAuth2.0 source changes in this version bump makes sense as the culprit for this behavior, and I remember testing the OAuth2.0 plugin changes on earlier versions of Kong to improve the edge cases, where OAuth2.0 worked fine, so I think something indirectly related has caused this.

    Example:

    Fail case:

    POST https://gateway-dev-network-zone-dc2.company.com/auth/oauth2/token
    
    {
     "client_id": "6RE9IXHWgBbfybOXCqxZkUZnCQPKEuIV",
     "client_secret": "C1nnQ0wmwgT2PkUunUhs8NfkMQLNM7CA",
     "grant_type": "client_credentials"
    }
    

    Response:

    Status: 400
    
    {
        "error": "invalid_client",
        "error_description": "Invalid client authentication"
    }
    

    And Success Case:

    POST https://gateway-dev-network-zone-dc1.company.com/auth/oauth2/token
    
    {
     "client_id": "6RE9IXHWgBbfybOXCqxZkUZnCQPKEuIV",
     "client_secret": "C1nnQ0wmwgT2PkUunUhs8NfkMQLNM7CA",
     "grant_type": "client_credentials"
    }
    

    Response:

    Status: 200
    
    {
        "expires_in": 3600,
        "access_token": "PcH8nuM7MSI1SHAHVv4QrK8psIq75c8X",
        "token_type": "bearer"
    }
    

    OAuth2.0 Plugin configured on the Gateway endpoint Kong 2.4:

    {
      "name": "oauth2",
      "route": {
        "id": "ab469dbf-3806-4b22-8a32-246d7b158360"
      },
      "created_at": 1528309696,
      "tags": null,
      "id": "50a3fcfa-a660-425b-b181-698e8d1a196d",
      "config": {
        "enable_client_credentials": true,
        "enable_password_grant": false,
        "enable_authorization_code": false,
        "auth_header_name": "authorization",
        "reuse_refresh_token": false,
        "enable_implicit_grant": false,
        "token_expiration": 3600,
        "accept_http_if_already_terminated": false,
        "hide_credentials": false,
        "anonymous": null,
        "scopes": null,
        "provision_key": "function",
        "refresh_token_ttl": 3600,
        "global_credentials": true,
        "pkce": "lax",
        "mandatory_scope": false
      },
      "service": null,
      "consumer": null,
      "enabled": true,
      "protocols": [
        "http",
        "https"
      ]
    }
    

    Data for the user above from the DB (C*) for further evidence; first, their credential pair as seen above:

    kongdba@cqlsh:kong_dev> select * from oauth2_credentials WHERE client_secret='C1nnQ0wmwgT2PkUunUhs8NfkMQLNM7CA';
    
     id                                   | client_id                                                             | client_secret                    | client_type  | consumer_id                          | created_at                      | hash_secret | name   | redirect_uris         | tags | ws_id
    --------------------------------------+-----------------------------------------------------------------------+----------------------------------+--------------+--------------------------------------+---------------------------------+-------------+--------+-----------------------+------+--------------------------------------
     16093f3c-c51d-4523-ab10-1fa91464d443 | 7e175ef7-c50d-49d1-a648-0091156e2b4c:6RE9IXHWgBbfybOXCqxZkUZnCQPKEuIV | C1nnQ0wmwgT2PkUunUhs8NfkMQLNM7CA | confidential | 092798fd-5055-4b14-a699-4f915d874695 | 2019-02-22 18:59:48.000000+0000 |       False | oauth2 | {'https://optum.com'} | null | 7e175ef7-c50d-49d1-a648-0091156e2b4c
    
    (1 rows)
    

    And their consumer table data:

    kongdba@cqlsh:kong_dev> select * from consumers WHERE id=092798fd-5055-4b14-a699-4f915d874695;
    
     id                                   | created_at                      | custom_id | tags | username                                                | ws_id
    --------------------------------------+---------------------------------+-----------+------+---------------------------------------------------------+--------------------------------------
     092798fd-5055-4b14-a699-4f915d874695 | 2019-02-22 18:59:48.000000+0000 |      null | null | 7e175ef7-c50d-49d1-a648-0091156e2b4c:stargate.testsuite | 7e175ef7-c50d-49d1-a648-0091156e2b4c
    
    (1 rows)
    

    Workspace table just for reference; I thought that could be the culprit, but no, there is just 1 entry and it is in sync with the above tables:

    kongdba@cqlsh:kong_dev> select * from workspaces;
    
     id                                   | comment | config | created_at                      | meta | name
    --------------------------------------+---------+--------+---------------------------------+------+---------
     7e175ef7-c50d-49d1-a648-0091156e2b4c |    null |   null | 2020-07-21 20:21:51.000000+0000 | null | default
    

    Note: just for sanity, I even queried the local 3-node cluster in the DC that is currently throwing the oauth2 token-gen error, to prove that the raw data is there and in place. Why Kong does not consistently resolve the DB data to recognize this user and their token-gen credentials, I am not sure.

    I am assuming, since this is the client credentials grant flow, that we are failing right here (but I will try to confirm where in the code it fails later today): https://github.com/Kong/kong/blob/master/kong/plugins/oauth2/access.lua#L680

    Worth noting that Kong's logs do no good here: no DB connectivity or error reports. I will likely start hacking in some debug log print lines to figure out which of the error blocks we fell into.

    Our functional test libs that regularly hit the gateway now fail flakily in dev due to poor token generation when calling the 2.4 gateways, while still succeeding every time in stage/prod against our Kong 2.1.4 nodes.

    Some ideas: some kind of poisoned cache or lookup issue. The bug is very persistent once it presents itself; the working datacenter continues to work properly, and calls to the dysfunctional datacenter continue to throw that 400 error as if the user's creds didn't exist.

    This is definitely going to be a blocker for deploying Kong 2.4 into customer-facing environments until we get to the bottom of it. Worth noting that I was testing Kong 2.3.3 in dev at one point (after upgrading from 2.1.4) and this behavior was not presenting itself, so if some component of Kong not directly related to the plugin's functional code changed, it likely changed somewhere between 2.3.3 and 2.4. I believe there were not even any db migration schema changes from 2.3 to 2.4 (looking at core schema_meta), so it is likely not a db schema/data issue either.

    Let me know if you have any debug code snippets you want me to drop into the plugins; I would like to get this fixed.

    Steps To Reproduce

    1. Stand up a multi-DC Kong (2 DCs) with a 6-node C* cluster, 3 nodes per DC
    2. Test the OAuth2.0 endpoint above with creds produced by a consumer
    3. See if you observe flaky behavior from the global token-gen endpoint too

    Additional Details & Logs

    • Kong version 2.4

    ENV:

            - env:
                - name: KONG_ADMIN_SSL_ENABLED
                  value: 'off'
                - name: KONG_CASSANDRA_DATA_CENTERS
                  value: 'DC1:3,DC2:3'
                - name: KONG_CASSANDRA_REPL_STRATEGY
                  value: NetworkTopologyStrategy
                - name: KONG_CASSANDRA_SCHEMA_CONSENSUS_TIMEOUT
                  value: '30000'
                - name: KONG_UPSTREAM_KEEPALIVE_IDLE_TIMEOUT
                  value: '30'
                - name: KONG_UPSTREAM_KEEPALIVE_MAX_REQUESTS
                  value: '50000'
                - name: KONG_UPSTREAM_KEEPALIVE_POOL_SIZE
                  value: '400'
                - name: KONG_WORKER_CONSISTENCY
                  value: eventual
                - name: KONG_WORKER_STATE_UPDATE_FREQUENCY
                  value: '5'
                - name: KONG_NGINX_PROXY_REAL_IP_HEADER
                  value: X-Forwarded-For
                - name: KONG_NGINX_PROXY_REAL_IP_RECURSIVE
                  value: 'on'
                - name: KONG_SSL_CIPHER_SUITE
                  value: intermediate
                - name: KONG_NGINX_MAIN_WORKER_PROCESSES
                  value: '6'
                - name: KONG_BOUNCE_REASON
                  value: 'Automated Bounce 20200511-06:58:13.262'
                - name: KONG_HEADERS
                  value: latency_tokens
                - name: KONG_DNS_ORDER
                  value: 'LAST,SRV,A,CNAME'
                - name: KONG_CASSANDRA_CONTACT_POINTS
                  value: 'server8171,server8176,server8180,server8172,server8173,server8175'
                - name: KONG_LOG_LEVEL
                  value: notice
                - name: KONG_PROXY_ACCESS_LOG
                  value: 'off'
                - name: KONG_ADMIN_ACCESS_LOG
                  value: 'off'
                - name: KONG_PROXY_ERROR_LOG
                  value: /dev/stderr
                - name: KONG_ADMIN_ERROR_LOG
                  value: /dev/stderr
                - name: KONG_ANONYMOUS_REPORTS
                  value: 'off'
                - name: KONG_PROXY_LISTEN
                  value: '0.0.0.0:8000, 0.0.0.0:8443 ssl http2 deferred reuseport'
                - name: KONG_ADMIN_LISTEN
                  value: '0.0.0.0:8001 deferred reuseport'
                - name: KONG_MEM_CACHE_SIZE
                  value: 1024m
                - name: KONG_SSL_CERT
                  value: /usr/local/kong/ssl/kongcert.crt
                - name: KONG_SSL_CERT_KEY
                  value: /usr/local/kong/ssl/kongprivatekey.key
                - name: KONG_SSL_CERT_DER
                  value: /usr/local/kong/ssl/kongcertder.der
                - name: KONG_CLIENT_SSL
                  value: 'off'
                - name: KONG_TRUSTED_IPS
                  value: '0.0.0.0/0,::/0'
                - name: KONG_CLIENT_MAX_BODY_SIZE
                  value: 50m
                - name: KONG_CLIENT_BODY_BUFFER_SIZE
                  value: 50m
                - name: KONG_ERROR_DEFAULT_TYPE
                  value: text/plain
                - name: KONG_DATABASE
                  value: cassandra
                - name: KONG_PG_SSL
                  value: 'off'
                - name: KONG_CASSANDRA_PORT
                  value: '9042'
                - name: KONG_CASSANDRA_KEYSPACE
                  value: kong_dev
                - name: LAST_AUTOMATED_RESTART
                  value: '20210426_0533'
                - name: KONG_CASSANDRA_TIMEOUT
                  value: '8000'
                - name: KONG_PLUGINS
                  value: >-
                    kong-siteminder-auth,kong-kafka-log,stargate-waf-error-log,mtls,stargate-oidc-token-revoke,kong-tx-debugger,kong-plugin-oauth,zipkin,kong-error-log,kong-oidc-implicit-token,kong-response-size-limiting,request-transformer,kong-service-virtualization,kong-cluster-drain,kong-upstream-jwt,kong-splunk-log,kong-spec-expose,kong-path-based-routing,kong-oidc-multi-idp,correlation-id,oauth2,statsd,jwt,rate-limiting,acl,request-size-limiting,request-termination,cors
                - name: KONG_CASSANDRA_SSL
                  value: 'on'
                - name: KONG_CASSANDRA_SSL_VERIFY
                  value: 'on'
                - name: KONG_CASSANDRA_PASSWORD
                  valueFrom:
                    secretKeyRef:
                      key: password
                      name: cassandra-secret
                - name: KONG_CASSANDRA_USERNAME
                  valueFrom:
                    secretKeyRef:
                      key: username
                      name: cassandra-secret
                - name: KONG_CASSANDRA_CONSISTENCY
                  value: LOCAL_QUORUM
                - name: KONG_CASSANDRA_LB_POLICY
                  value: RequestDCAwareRoundRobin
                - name: KONG_CASSANDRA_LOCAL_DATACENTER
                  value: CTC
                - name: KONG_DB_UPDATE_FREQUENCY
                  value: '5'
                - name: KONG_DB_UPDATE_PROPAGATION
                  value: '5'
                - name: KONG_DB_CACHE_TTL
                  value: '0'
                - name: KONG_DNS_HOSTSFILE
                  value: /etc/hosts
                - name: KONG_DNS_STALE_TTL
                  value: '4'
                - name: KONG_DNS_NOT_FOUND_TTL
                  value: '30'
                - name: KONG_DNS_ERROR_TTL
                  value: '1'
                - name: KONG_DNS_NO_SYNC
                  value: 'off'
                - name: KONG_LUA_SSL_TRUSTED_CERTIFICATE
                  value: /usr/local/kong/ssl/kongcert.pem
                - name: KONG_LUA_SSL_VERIFY_DEPTH
                  value: '3'
                - name: KONG_LUA_SOCKET_POOL_SIZE
                  value: '30'
                - name: SPLUNK_HOST
                  value: gateway-dev-dmz-dc1.company.com
                - name: KONG_DB_CACHE_WARMUP_ENTITIES
                  value: 'services,plugins,consumers'
                - name: LUA_PATH
                  value: >-
                    /usr/local/kong/luarocks/share/lua/5.1/?.lua;;/usr/local/kong/luarocks/share/lua/5.1/?/init.lua;
                - name: KONG_NGINX_HTTP_SSL_PROTOCOLS
                  value: TLSv1.2 TLSv1.3
                - name: KONG_CASSANDRA_REFRESH_FREQUENCY
                  value: '0'
    
    task/needs-investigation 
    opened by jeremyjpj0916 45
  • fix(*): prevent queues from growing without bounds (#10046)

    fix(*): prevent queues from growing without bounds (#10046)

    Backported from master.

    This commit implements an upper limit on the number of batches that may be waiting on a queue for processing. Once the limit has been reached, the oldest batch is dropped from the queue and an error message is logged. The maximum number of batches that can be waiting on a queue is configured through the max_queued_batches parameter of the queue, which defaults to 100 and can be globally overridden with the max_queued_batches parameter in kong.conf.

    KAG-303

    plugins/http-log core/configuration size/L changelog 
    opened by hanshuebner 0
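    The drop-oldest behavior described above can be sketched with a bounded deque. This is a minimal stand-alone illustration, not Kong's actual queue code (`max_queued_batches` is the real parameter name from the PR; the class and drop counter here are hypothetical):

```python
from collections import deque

class BoundedBatchQueue:
    """Queue that drops the oldest batch once max_queued_batches is reached."""
    def __init__(self, max_queued_batches=100):
        self.batches = deque(maxlen=max_queued_batches)
        self.dropped = 0

    def enqueue(self, batch):
        if len(self.batches) == self.batches.maxlen:
            self.dropped += 1  # deque discards the oldest entry; log an error here
        self.batches.append(batch)

q = BoundedBatchQueue(max_queued_batches=2)
for batch in ("a", "b", "c"):
    q.enqueue(batch)
print(list(q.batches), q.dropped)  # ['b', 'c'] 1
```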
  • fix(*): prevent queues from growing without bounds (#10046)

    fix(*): prevent queues from growing without bounds (#10046)

    Backported from master.

    This commit implements an upper limit on the number of batches that may be waiting on a queue for processing. Once the limit has been reached, the oldest batch is dropped from the queue and an error message is logged. The maximum number of batches that can be waiting on a queue is configured through the max_queued_batches parameter of the queue, which defaults to 100 and can be globally overridden with the max_queued_batches parameter in kong.conf.

    KAG-303

    plugins/datadog plugins/statsd plugins/http-log core/configuration size/L changelog plugins/opentelemetry 
    opened by hanshuebner 0
  • fix(*): prevent queues from growing without bounds (#10046)

    fix(*): prevent queues from growing without bounds (#10046)

    Backported from master

    This commit implements an upper limit on the number of batches that may be waiting on a queue for processing. Once the limit has been reached, the oldest batch is dropped from the queue and an error message is logged. The maximum number of batches that can be waiting on a queue is configured through the max_queued_batches parameter of the queue, which defaults to 100 and can be globally overridden with the max_queued_batches parameter in kong.conf.

    KAG-303

    plugins/http-log core/configuration size/L 
    opened by hanshuebner 0
  • fix(core): response phase skips header.before

    fix(core): response phase skips header.before

    This causes some of the header handling to be skipped.

    Full changelog:

    • runloop.lua: move the header phase before/after handlers and change the way status is obtained in the before handler
    • init.lua: adjust the ordering of handlers and set status (send headers)

    Fix #10031

    core/proxy size/L 
    opened by StarlightIbuki 1
  • feat(plugins/proxy-cache): add wildcard and parameter match support for content_type

    feat(plugins/proxy-cache): add wildcard and parameter match support for content_type

    Summary

    Add wildcard and parameter match support for content_type

    Checklist

    • [x] The Pull Request has tests
    • [ ] There's an entry in the CHANGELOG
    • [ ] There is a user-facing docs PR against https://github.com/Kong/docs.konghq.com - PUT DOCS PR HERE

    Issue reference

    FTI-1131

    plugins/proxy-cache size/L 
    opened by vm-001 1
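    The wildcard and parameter matching this PR describes can be sketched as follows. A hypothetical stand-alone illustration, not the proxy-cache plugin's actual code:

```python
def content_type_matches(configured, actual):
    """Match a configured content type (possibly with a '*' wildcard) against
    an actual Content-Type header, ignoring parameters like '; charset=utf-8'."""
    actual_base = actual.split(";", 1)[0].strip().lower()
    conf_base = configured.split(";", 1)[0].strip().lower()
    ctype, _, csub = conf_base.partition("/")
    atype, _, asub = actual_base.partition("/")
    return ctype in ("*", atype) and csub in ("*", asub)

print(content_type_matches("application/*", "application/json; charset=utf-8"))  # True
print(content_type_matches("text/html", "application/json"))                     # False
```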
  • feat: add enable_debug_header conf to disable kong_debug header function

    feat: add enable_debug_header conf to disable kong_debug header function

    Summary

    add enable_debug_header conf to disable kong_debug header function

    Checklist

    • [ ] The Pull Request has tests
    • [ ] There's an entry in the CHANGELOG
    • [ ] There is a user-facing docs PR against https://github.com/Kong/docs.konghq.com - PUT DOCS PR HERE

    Full changelog

    • introduce enable_debug_header conf to disable kong_debug header debug function

    Issue reference

    FTI-4521

    core/templates core/configuration size/M 
    opened by attenuation 1