Gateway Pattern: An Introduction Based on Kong Gateway

Basic Principles of Kong Gateway and its Pros and Cons

Itchimonji
CP Massive Programming


Source: Kong Documentation

Today’s trend is moving further and further towards a microservice landscape. Consumers such as native Android apps or web pages may therefore need to connect to a large number of API services. This can lead to a tangled web of hard dependencies. Such tight couplings make services harder to modify over time: one change in a service can force you to modify another service as well. This violates the Open-Closed Principle.

Microservice mess

If you are in a cloud environment, routing each individual service can become a problem of its own, because an Ingress or a service like a NodePort would have to expose each API service under its own URL or port. As soon as you add new services to the system landscape, you have to adjust the Ingress or NodePort routing at the same time. Again, one change forces you to adapt another component or configuration, which also violates the Open-Closed Principle.

Routing mess

Possibilities of a Gateway

“[…] The answer is so common that it’s hardly worth stating. Wrap all the special API code into a class whose interface looks like a regular object. Other objects access the resource through this Gateway, which translates the simple method calls into the appropriate specialized API. […]” — Martin Fowler [Patterns of Enterprise Application Architecture]

In reality, a Gateway is a very simple wrapper pattern; the wrapper family also includes, for example, the Facade, Adapter, and Mapper patterns. For a service landscape as described above, you place all existing APIs behind an additional service. External clients, such as a website or an Android app, communicate with the enclosed APIs only via this newly added service. This service is called the Gateway.

Keep a Gateway as simple and as minimal as possible: a Gateway should be dumb. This way you do not have to change it every time one of the services behind it changes. Often it is a good idea to use code generation or a declarative configuration file to create a Gateway.

A Gateway can also transform an awkward API into a more convenient one for other applications to use. For instance, if a web client wants to consume GraphQL, the Gateway can expose the underlying REST endpoints through a GraphQL interface.

Gateway usage

When Should You Use a Gateway and What Are Its Advantages?

“[…] You should consider Gateway whenever you have an awkward interface to something that feels external. Rather than let the awkwardness spread through the whole system, use a Gateway to contain it. There’s hardly any downside to making the Gateway, and the code elsewhere in the system becomes much easier to read. […]” — Martin Fowler [Patterns of Enterprise Application Architecture]

A Gateway makes a system easier to test by giving you a clear point at which to substitute stubs, and it makes it easier to swap out one API for another. A web client, for example, is not aware of such a swap because an abstraction layer (the Gateway) sits between the actual API and the client. It is a simple and powerful form of protected variation that increases flexibility.

In the example above, you also simplify resource dependencies like routing. You only need a single Ingress or NodePort declaration for your Gateway; all other routes and their maintenance are eliminated, because external clients only talk to the Gateway instead of the APIs behind it.
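
As a sketch of this idea, a single Kubernetes Ingress in front of the Gateway could look like the following (the Service name kong-proxy and port 8000 are assumptions for this example; in a real cluster the proxy Service comes from your Kong manifests or Helm chart):

# Sketch: one Ingress that sends all external traffic to the Kong proxy Service
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: gateway
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: kong-proxy   # assumed name of the Kong proxy Service
                port:
                  number: 8000     # assumed proxy port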

Introducing Kong Gateway

Kong Gateway is a lightweight, fast, and flexible cloud-native API gateway. An API gateway is a reverse proxy that lets you manage, configure, and route requests to your APIs.

Kong Gateway runs in front of any RESTful API and can be extended through modules and plugins. It’s designed to run on decentralized architectures, including hybrid-cloud and multi-cloud deployments. — Kong Docs

Kong Gateway is easy to integrate into an existing microservice landscape (e.g., in DB-less mode) through declarative configuration, for instance a YAML file. There are many deployment options available:

  • Docker
  • Kubernetes
  • Helm
  • OpenShift
  • Different operating systems like CentOS, Ubuntu, RHEL, and many more
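
For Kubernetes, for instance, an installation via the official Helm chart can be sketched as follows (chart values such as DB-less mode or proxy exposure usually need to be adjusted per setup):

$ helm repo add kong https://charts.konghq.com
$ helm repo update
$ helm install kong kong/kong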

Also, it is not necessary to change any existing services as long as Kong Gateway and the services concerned are on the same network.
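
With plain Docker, for example, this boils down to putting the Gateway and the existing services on a shared network (a sketch; the container names image-service and kong are assumptions):

$ docker network create kong-net
$ docker network connect kong-net image-service
$ docker network connect kong-net kong

Afterwards, the Gateway can reach the service under its container name, e.g., http://image-service:3333, exactly as referenced in the declarative configuration below.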

Running Kong in DB-less mode

Kong Gateway can be run without any database dependency in DB-less mode, using only in-memory storage for entities. The entities are then defined in a second, declarative configuration file, for example in YAML.
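
Outside of Docker, DB-less mode boils down to two settings in kong.conf (a minimal sketch; the path of the declarative configuration file is an assumption):

# kong.conf (sketch)
database = off
declarative_config = /opt/kong/.config.yaml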

Docker

The simplest way to run Kong with Docker is the following:

$ docker run -d --name kong \
-e "KONG_DATABASE=off" \
-e "KONG_PROXY_ACCESS_LOG=/dev/stdout" \
-e "KONG_ADMIN_ACCESS_LOG=/dev/stdout" \
-e "KONG_PROXY_ERROR_LOG=/dev/stderr" \
-e "KONG_ADMIN_ERROR_LOG=/dev/stderr" \
-e "KONG_ADMIN_LISTEN=0.0.0.0:8001, 0.0.0.0:8444 ssl" \
-p 8000:8000 \
-p 8443:8443 \
-p 8001:8001 \
-p 8444:8444 \
kong
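
Once the container is up, a quick way to check that the node and its Admin API are reachable (assuming the default port mapping above):

$ curl -i http://localhost:8001/status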

Services that are supposed to be addressed via the Gateway are also specified in the configuration file, just like plugins. A configuration YAML file can look like this:

# .config.yaml
_format_version: "2.1"
_transform: true

services:
  - name: image-service # external API service
    url: http://image-service:3333
    routes:
      - name: image-service
        paths:
          - /image-service

plugins:
  - name: rate-limiting
    service: image-service
    config:
      minute: 6
      policy: local
To load a declarative configuration file into a running Kong node via its Admin API, you can use the HTTPie client, for example:

$ http :8001/config config=@.config.yaml
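
After the configuration has been loaded, requests to the image service go through the Gateway on the proxy port; a quick test, assuming the route defined above and the default port mapping:

$ curl -i http://localhost:8000/image-service

With the rate-limiting plugin active, the response should also contain headers such as X-RateLimit-Remaining-Minute.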

Docker-Compose

Another way to run Kong is by using a docker-compose file:

version: "3.9"
networks:
default:
name: kong-net
kong:
container_name: kong
image: kong:2.5.0-alpine
hostname: kong
environment:
KONG_DATABASE: 'off'
KONG_PROXY_ACCESS_LOG: '/dev/stdout'
KONG_ADMIN_ACCESS_LOG: '/dev/stdout'
KONG_PROXY_ERROR_LOG: '/dev/stderr'
KONG_ADMIN_ERROR_LOG: '/dev/stderr'
KONG_ADMIN_LISTEN: "0.0.0.0:8001, 0.0.0.0:8444 ssl"
KONG_DECLARATIVE_CONFIG: "/opt/kong/.config.yaml"
command: "kong start"
ports:
- "8000:8000"
- "8443:8443"
- "8001:8001"
- "8444:8444"
volumes:
- ./kong:/opt/kong

With docker-compose you can also mount the declarative configuration file into the container via volumes. After that you can boot the container.

$ docker-compose up
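
To verify that the declarative configuration was picked up, you can list the configured services via the Admin API (assuming the default port mapping):

$ curl -s http://localhost:8001/services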

There are also ways to run Kong Gateway with a database (PostgreSQL or Cassandra) instead of DB-less mode; Redis is not a datastore for Kong itself, but plugins such as rate-limiting can use it to store their counters.
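
Such a database-backed setup mainly replaces KONG_DATABASE=off with connection settings and requires a one-time schema migration. With PostgreSQL it can look roughly like this (a sketch; it assumes a PostgreSQL container named postgres with user and password kong on the same Docker network):

$ docker run --rm --network kong-net \
  -e "KONG_DATABASE=postgres" \
  -e "KONG_PG_HOST=postgres" \
  -e "KONG_PG_USER=kong" \
  -e "KONG_PG_PASSWORD=kong" \
  kong kong migrations bootstrap

Kong itself is then started with the same KONG_PG_* settings instead of KONG_DATABASE=off.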

At its core, Kong Gateway implements database abstraction, routing, and plugin management. Plugins can live in separate code bases and can be injected anywhere into the request lifecycle, all with a few lines of code. Examples of plugins are Rate Limiting, OAuth 2.0, CORS, or GraphQL Proxy Caching.
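
In the declarative configuration, a plugin entry without a service or route reference applies globally to every request, for example CORS (a sketch; the allowed origin is an assumption):

plugins:
  - name: cors
    config:
      origins:
        - https://example.com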

Kong Plugin: Rate Limiting

Rate limit how many HTTP requests can be made in a given period of seconds, minutes, hours, days, months, or years. If the underlying Service/Route (or deprecated API entity) has no authentication layer, the Client IP address will be used; otherwise, the Consumer will be used if an authentication plugin has been configured. — Kong Docs

Kong Gateway can be extended very easily with a large selection of plugins. Rate limiting, for example, helps protect a public API against DDoS attacks.

With the declarative method (configuration file) you can enable the plugin on a service…

plugins:
  - name: rate-limiting
    service: {SERVICE}
    config:
      second: 5
      hour: 10000
      policy: local

… or on a route.

plugins:
  - name: rate-limiting
    route: <route>
    config:
      second: 5
      hour: 10000
      policy: local

There are other possibilities, so it is worthwhile having a look at the documentation.

Parameters like second and hour define the number of HTTP requests allowed in the given time unit. In the example above, you can make 5 HTTP or HTTPS requests per second and a maximum of 10,000 requests per hour.
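
In a database-backed deployment, the same plugin can also be enabled at runtime via the Admin API (in DB-less mode the Admin API is read-only apart from the /config endpoint); a sketch, assuming the image-service from the earlier example:

$ curl -X POST http://localhost:8001/services/image-service/plugins \
  --data "name=rate-limiting" \
  --data "config.minute=6" \
  --data "config.policy=local"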

Example Project on GitHub

You can check out my repository, where I implemented an example of how to use Kong Gateway with NestJS APIs and Swagger Stats.

Architecture overview of the demo:

Conclusion

A Gateway is a very simple wrapper pattern. It helps you apply the Open-Closed Principle at the system level. It should not be confused with related wrappers such as the Facade, Mapper, or Adapter patterns.

Kong Gateway is a lightweight API Gateway that lets you secure, manage, and extend APIs and microservices. It has the following advantages:

  • Easy to install/integrate into an existing system (for example with DB-less mode)
  • Easy to maintain and very flexible
  • No changes to existing services necessary as long as they are on the same network
  • Extendable with various plugins like Rate-Limiting, OAuth 2.0, CORS, or GraphQL Proxy Caching to address cross-cutting concerns
  • Configurable via a declarative configuration file, Kubernetes definitions, or the Admin API (e.g., POST requests)
  • Includes authentication, throttling, transformations, and analytics
  • Easy to monitor with Datadog or Prometheus (via Plugins)
  • Supports high-availability clusters
  • Large community

I hope this article helps you decouple publicly facing API endpoints from the underlying microservice architecture with Kong Gateway.

Follow me on Medium or Twitter, or subscribe here on Medium to read more about DevOps, Agile & Development Principles, Angular, and other useful stuff. Thanks for reading, and hopefully you can use this article in the near future. Happy Coding! :)
