Architecture

Application Services in Kubernetes

As discussed in this blog, in a Kubernetes environment, there are several locations where you might deploy application services:

All layers

Let’s take web application firewall (WAF) as an example. WAF policies implement advanced security measures to inspect and block undesirable traffic, but these policies often need to be fine‑tuned for specific applications in order to minimize the number of false positives.

Edge Proxy - Secured API GW on the Ingress Controller


In these labs, the design assumes no security service at the Front Door, only a Load Balancer. The WAF application service is deployed on the Ingress Controller to meet the following customer context:

  • The baseline WAF policy definition is owned by the SecOps team, which makes it available as a catalog for DevOps consumption.

  • The implementation of the WAF policies (baseline + modifications) is under the direction of the DevOps team.

  • The customer wants to centralize WAF policies at the infrastructure layer, rather than delegating them to individual applications.

  • The DevOps team makes extensive use of Kubernetes APIs to manage the deployment and operation of applications.

This approach still allows a central SecOps team to define the WAF policies. They can define the policies in a manner that can be easily imported into Kubernetes, and the DevOps team responsible for the Ingress Controller can then assign the WAF policies to specific applications.
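As a hedged illustration of such a catalog entry, a SecOps-owned baseline can be published as an APPolicy custom resource that DevOps teams then reference by name. The sketch below is illustrative only; the resource name, namespace, and policy settings are examples, not the actual lab policy:

```yaml
# Illustrative SecOps baseline published as an APPolicy custom resource.
# Name, namespace, and settings are placeholders, not the lab configuration.
apiVersion: appprotect.f5.com/v1beta1
kind: APPolicy
metadata:
  name: dataguard-alarm
  namespace: secops-catalog
spec:
  policy:
    name: dataguard-alarm
    template:
      name: POLICY_TEMPLATE_NGINX_BASE
    applicationLanguage: utf-8
    enforcementMode: blocking
    blocking-settings:
      violations:
        - name: VIOL_DATA_GUARD
          alarm: true
          block: false
    data-guard:
      enabled: true
      maskData: true
      creditCardNumbers: true
```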

The NGINX App Protect WAF module is deployed directly on the Ingress Controller. All WAF configuration is managed using Ingress resources [lab #3] or VirtualServer(Route) resources [lab #3-4], configured through the Kubernetes API.
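For the VirtualServer path [lab #3-4], a minimal sketch of how a DevOps team might attach the SecOps baseline to one specific application is shown below; the hostnames, service names, and policy references are placeholders rather than the actual lab manifests:

```yaml
# Illustrative only: a WAF Policy referencing the SecOps-owned APPolicy,
# attached to a single application through a VirtualServer resource.
apiVersion: k8s.nginx.org/v1
kind: Policy
metadata:
  name: waf-policy
spec:
  waf:
    enable: true
    apPolicy: "secops-catalog/dataguard-alarm"   # baseline from the SecOps catalog
---
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: webapp
spec:
  host: webapp.example.com
  policies:
    - name: waf-policy        # WAF enforced at the Ingress Controller for this app only
  upstreams:
    - name: webapp
      service: webapp-svc
      port: 80
  routes:
    - path: /
      action:
        pass: webapp
```

With plain Ingress resources [lab #3], the same binding is typically expressed with the appprotect.f5.com/app-protect-enable and appprotect.f5.com/app-protect-policy annotations on the Ingress object.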

Micro Proxy - Secured API GW on a Per-Service Basis


In lab #4, an API GW is also deployed as a proxy tier within Kubernetes, in front of one or more specific services that are published publicly on the Internet and therefore require an API GW with OAuth / OpenID Connect (OIDC) authentication and rate limiting.

This API GW is also used to publish one or more specific services internally, i.e. to other Pods hosted in Kubernetes, that do not require a security policy but do require advanced delivery features.
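A minimal sketch of this per-service tier is shown below, assuming an NGINX-based API GW image and a ConfigMap carrying the nginx.conf that holds the OIDC and rate-limiting configuration; the image reference and all names are illustrative, not the actual lab manifests:

```yaml
# Illustrative per-service API GW tier: a dedicated NGINX deployment placed
# in front of a specific backend service.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-api-gw
spec:
  replicas: 2
  selector:
    matchLabels:
      app: orders-api-gw
  template:
    metadata:
      labels:
        app: orders-api-gw
    spec:
      containers:
        - name: nginx
          image: private-registry.example.com/nginx-plus-api-gw:latest
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: gw-config
              mountPath: /etc/nginx/conf.d   # OIDC and rate-limiting config live here
      volumes:
        - name: gw-config
          configMap:
            name: orders-api-gw-conf
---
apiVersion: v1
kind: Service
metadata:
  name: orders-api-gw
spec:
  selector:
    app: orders-api-gw
  ports:
    - port: 80
      targetPort: 8080
```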

Side Car Proxy - Secured API GW on a Per-Pod Basis


Finally, you may also deploy an API GW in a Pod, acting as an ingress proxy for the application running in the Pod. In this case, the API GW effectively becomes part of the application.

Deploying a WAF on a per-Pod basis makes it easy to bind the security policy and the application closely together: for example, the application is deployed with its WAF policy at every stage of the development pipeline, and the API security policy imports the OpenAPI spec file of the current build release of the application.

This deployment supports only configuration applied in nginx.conf, i.e. it is not managed through the Kubernetes API.
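A minimal sketch of this sidecar pattern is shown below, assuming an NGINX App Protect image and ConfigMaps that carry the nginx.conf and the WAF policy file; all names and image references are illustrative placeholders:

```yaml
# Illustrative sidecar deployment: the API GW / WAF container ships in the same
# Pod as the application and is configured purely through its mounted nginx.conf,
# not through Kubernetes API objects such as Ingress or VirtualServer.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp-with-waf
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webapp-with-waf
  template:
    metadata:
      labels:
        app: webapp-with-waf
    spec:
      containers:
        - name: webapp                      # the application itself
          image: registry.example.com/webapp:1.2.3
          ports:
            - containerPort: 8080
        - name: nginx-app-protect           # sidecar API GW / WAF in front of the app
          image: registry.example.com/nginx-app-protect:latest
          ports:
            - containerPort: 80
          volumeMounts:
            - name: nginx-conf
              mountPath: /etc/nginx/nginx.conf
              subPath: nginx.conf
            - name: waf-policy
              mountPath: /etc/app_protect/policies   # referenced from nginx.conf
      volumes:
        - name: nginx-conf
          configMap:
            name: webapp-waf-nginx-conf
        - name: waf-policy
          configMap:
            name: webapp-waf-policy
```

Because the sidecar reads its configuration from the mounted nginx.conf rather than from Ingress or VirtualServer resources, the WAF policy travels with the application manifest through every stage of the pipeline.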