
Ingress and DNS Requirements⚓︎

Access to the EOEPCA+ building block (BB) services is provided via an ingress controller that acts as a reverse proxy. Proxy routes are configured using host-based routing, which relies upon wildcard DNS to resolve traffic to the platform domain (e.g. *.myplatform.com). Thus, an ingress controller that supports these features is required.

The EOEPCA+ Identity and Access Management (IAM) solution advocates use of the APISIX Ingress Controller, which offers native integration of IAM request authorisation - for example, integrating with Keycloak via OIDC (authentication) and UMA (authorisation) flows. Thus, following a description of the general requirements for cluster ingress, a guide is provided for deployment of APISIX.

Requirements:

  • Wildcard DNS: You must have a wildcard DNS entry pointing to your cluster’s load balancer or external IP. For example: *.myplatform.com.
    For testing, wildcard DNS can be simulated using IP-address-based nip.io hostnames - using the entrypoint IP address of your cluster that routes to your ingress controller - as illustrated below.
  • Ingress Controller: APISIX is recommended for use with the EOEPCA+ IAM solution; others, such as NGINX, can also be used if the IAM integration is not of interest.
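
As an illustration - taking keycloak as an example service hostname and a hypothetical cluster entrypoint IP address of 203.0.113.10 - the two approaches can be checked with nslookup…

nslookup keycloak.myplatform.com            # resolves via the wildcard record
nslookup keycloak.203.0.113.10.nip.io       # resolves to 203.0.113.10 with no DNS configuration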

Production vs Development:

  • Production:

    • A stable and supported ingress controller (e.g. APISIX), exposed through a routable IP address
    • A fully functional wildcard DNS record
  • Development / Testing:

    • Can be exposed through a private IP address
    • Local or makeshift DNS solutions (e.g. nip.io) might be acceptable for internal development/test/demo

Additional Notes⚓︎

  • Wildcard DNS: Ensure a wildcard DNS record is configured for your domain.
  • Ingress Class: Specify the ingress class in your ingress resources - for example, apisix for APISIX or nginx for NGINX - as illustrated in the sketch below.
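
As a minimal sketch - using a hypothetical my-service backend on the platform domain - an Ingress resource selects the controller via its ingressClassName field…

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-service
spec:
  ingressClassName: apisix  # or nginx for NGINX
  rules:
    - host: my-service.myplatform.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-service
                port:
                  number: 80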

Further Reading⚓︎

APISIX Ingress Controller⚓︎

For full installation instructions for the APISIX Ingress Controller see the official Installation Guide.

As a quick start, the steps included here can be followed to deploy the APISIX Ingress Controller via helm chart…

helm repo add apisix https://charts.apiseven.com
helm repo update apisix
helm upgrade -i apisix apisix/apisix \
  --version 2.9.0 \
  --namespace ingress-apisix --create-namespace \
  --set service.type=NodePort \
  --set service.http.nodePort=31080 \
  --set service.tls.nodePort=31443 \
  --set apisix.enableIPv6=false \
  --set apisix.enableServerTokens=false \
  --set apisix.ssl.enabled=true \
  --set apisix.pluginAttrs.redirect.https_port=443 \
  --set ingress-controller.enabled=true

The above configuration assumes that the Kubernetes cluster exposes NodePorts 31080 (http) and 31443 (https) for external access to the cluster. This presumes that a (cloud) load balancer or similar is configured to forward public 80/443 traffic to these exposed ports on the cluster nodes.

This can be adapted according to the network topology of your cluster environment.
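
Once deployed, the exposed NodePorts can be confirmed by inspecting the gateway service created by the chart…

kubectl -n ingress-apisix get svc apisix-gateway

The output should show the http and https ports mapped to NodePorts 31080 and 31443 respectively.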

Forced TLS Redirection

The following ApisixGlobalRule configures APISIX to redirect all http traffic to https.

cat - <<'EOF' | kubectl -n ingress-apisix apply -f -
apiVersion: apisix.apache.org/v2
kind: ApisixGlobalRule
metadata:
  name: redirect-to-tls
spec:
  plugins:
    - name: redirect
      enable: true
      config:
        http_to_https: true
        _meta:
          filter:
            # With '!OR' all conditions must be false
            - "!OR"
            # Exclude paths used by letsencrypt http challenge
            - [ 'request_uri', '~*', '^/\.well-known/acme-challenge.*' ]
            # Use header X-No-Force-Tls to override
            - [ "http_x_no_force_tls", "==", "true" ]
EOF

The filter suppresses the redirection in the specific case of traffic used by the Let's Encrypt HTTP-01 challenge whilst establishing TLS certificates.
The header X-No-Force-Tls provides an override that may prove useful in some circumstances or during development.
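
For example, during development the redirect can be bypassed for an individual request by supplying the override header (hypothetical hostname)…

curl -i -H "X-No-Force-Tls: true" http://myservice.myplatform.com/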

For filter syntax, see the APISIX plugin common configuration documentation (the _meta.filter construct) and the lua-resty-expr expression syntax.

Forwarded Port Correction

By default, APISIX sets the X-Forwarded-Port header to its container port (9443 by default) when forwarding requests. This may confuse upstream systems, because the externally facing https port is 443.

Thus, we apply a global rule that replaces the value 9443 with the value 443.
The rule also replaces port 9080 with port 80, though this should be irrelevant due to the prior HTTP-to-HTTPS redirection.

cat - <<'EOF' | kubectl -n ingress-apisix apply -f -
apiVersion: apisix.apache.org/v2
kind: ApisixGlobalRule
metadata:
  name: forwarded-port-correction
spec:
  plugins:
    - name: serverless-pre-function
      enable: true
      config:
        phase: "rewrite"
        functions:
          - "return function(conf, ctx) if tonumber(ngx.var.var_x_forwarded_port) > 9000 then ngx.var.var_x_forwarded_port = ngx.var.var_x_forwarded_port - 9000 end end"
EOF
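
The global rules applied above can be confirmed by listing their custom resources…

kubectl -n ingress-apisix get apisixglobalrules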

APISIX Uninstallation

APISIX can be uninstalled as follows…

helm -n ingress-apisix uninstall apisix
kubectl delete ns ingress-apisix

Multiple Ingress Controllers⚓︎

Recognising that a platform may require use of an alternative ingress controller implementation, we offer an approach that may help to accommodate multiple controllers within the platform.

In this approach we introduce an Ingress Proxy that acts as the cluster entrypoint - using hostname rules to pass traffic through to the relevant ingress controller. This provides an example that could be adapted to your needs.

The Ingress Proxy is an nginx instance configured to pass traffic through according to hostname rules…

  • By default, traffic is forwarded to APISIX - ref. service apisix-gateway
  • Hostnames whose first label ends with -other are forwarded to the alternative ingress controller - for example service ingress-nginx-controller - as illustrated below
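
For example, with the platform domain used above (hypothetical service hostnames)…

myservice.myplatform.com         ->  apisix-gateway (default)
myservice-other.myplatform.com   ->  ingress-nginx-controller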

Ingress Proxy - ConfigMap

Ingress Proxy configuration, using ingress-nginx-controller as an example of the alternative ingress controller.
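
The manifests in this section are applied into the ingress-proxy namespace, which is assumed to exist already - if necessary, create it first…

kubectl create namespace ingress-proxy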

cat - <<'EOF' | kubectl -n ingress-proxy apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-proxy
data:
  nginx.conf: |
    events {}
    stream {
      resolver kube-dns.kube-system;
      map $ssl_preread_server_name $ssl_upstream {
        default "apisix-gateway.ingress-apisix.svc.cluster.local:443";
        ~(?:^[^.]*)-other\..*$ "ingress-nginx-controller.ingress-nginx.svc.cluster.local:443";
      }
      server {
        listen 443 default_server;
        proxy_pass $ssl_upstream;
        ssl_preread on;
      }
    }
    http {
      resolver kube-dns.kube-system;
      map $host $upstream {
        default "apisix-gateway.ingress-apisix.svc.cluster.local:80";
        ~(?:^[^.]*)-other\..*$ "ingress-nginx-controller.ingress-nginx.svc.cluster.local:80";
      }
      server {
        listen 80 default_server;
        location / {
          proxy_pass http://$upstream;
          proxy_set_header Host $host;
          proxy_set_header X-Real-IP $remote_addr;
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
          proxy_set_header X-Forwarded-Proto $scheme;
        }
      }
    }
EOF

Ingress Proxy - Deployment

Instantiates the Ingress Proxy configured via the ConfigMap.

cat - <<'EOF' | kubectl -n ingress-proxy apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ingress-proxy
  labels:
    app: ingress-proxy
spec:
  selector:
    matchLabels:
      app: ingress-proxy
  template:
    metadata:
      labels:
        app: ingress-proxy
    spec:
      containers:
        - name: nginx
          image: nginx
          volumeMounts:
            - name: ingress-proxy
              mountPath: /etc/nginx/nginx.conf
              subPath: nginx.conf
      volumes:
        - name: ingress-proxy
          configMap:
            name: ingress-proxy
EOF

Ingress Proxy - Service

Creates a NodePort Service that exposes the nginx instance on ports 31080 (http) and 31443 (https) - i.e. using the ports previously assumed to be exposed by the cluster.

cat - <<'EOF' | kubectl -n ingress-proxy apply -f -
apiVersion: v1
kind: Service
metadata:
  name: ingress-proxy
spec:
  selector:
    app: ingress-proxy
  type: NodePort
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 80
      nodePort: 31080
    - name: https
      protocol: TCP
      port: 443
      targetPort: 443
      nodePort: 31443
EOF

APISIX Behind Proxy

With the Ingress Proxy providing the cluster entrypoint, the APISIX deployment is adjusted to no longer listen on the exposed ports.

helm repo add apisix https://charts.apiseven.com && \
helm repo update apisix && \
helm upgrade -i apisix apisix/apisix \
  --version 2.9.0 \
  --namespace ingress-apisix --create-namespace \
  --set apisix.enableIPv6=false \
  --set apisix.enableServerTokens=false \
  --set apisix.ssl.enabled=true \
  --set apisix.pluginAttrs.redirect.https_port=443 \
  --set ingress-controller.enabled=true \
  --set etcd.replicaCount=1

Ingress Nginx Behind Proxy

To complete the example, ingress-nginx can be deployed to handle the -other hostnames.

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx && \
helm repo update ingress-nginx && \
helm upgrade -i ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace \
  --set controller.service.type=ClusterIP \
  --set controller.ingressClassResource.default=true \
  --set controller.allowSnippetAnnotations=true
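
As a minimal sketch - reusing the hypothetical my-service example - an Ingress handled by this controller combines the nginx ingress class with a hostname whose first label ends in -other…

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-service-other
spec:
  ingressClassName: nginx
  rules:
    - host: my-service-other.myplatform.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-service
                port:
                  number: 80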

The example ingress-nginx can be uninstalled with…

helm -n ingress-nginx uninstall ingress-nginx; \
kubectl delete ns ingress-nginx

Ingress Proxy - Uninstallation

The Ingress Proxy can be uninstalled as follows…

kubectl -n ingress-proxy delete svc ingress-proxy
kubectl -n ingress-proxy delete deploy ingress-proxy
kubectl -n ingress-proxy delete cm ingress-proxy
kubectl delete ns ingress-proxy