# Kubernetes Requirements
EOEPCA places a few specific demands on your Kubernetes cluster. This guide does not detail how to install Kubernetes - there are many official and third-party guides for that. Instead, we focus on the EOEPCA-specific requirements that you should verify in an existing cluster.
## Requirements
- **Ingress Controller (Mandatory)**
    - An ingress controller that supports host-based routing (NGINX, APISIX, etc.).
    - If using Cert Manager with the Let's Encrypt `HTTP01` challenge, the ingress must be routable from the public internet.
    - Ideally combined with Wildcard DNS for public exposure of each EOEPCA Building Block.
- **Wildcard DNS (Recommended)**
    - A cluster-level wildcard DNS record: `*.example.com → <Load Balancer IP>`.
    - This ensures that each EOEPCA Building Block can expose `service1.example.com`, `service2.example.com`, etc.
    - In the absence of dedicated DNS (e.g. for a local development deployment), host-based routing can be emulated with IP-address-based `nip.io` hostnames - e.g. `service1.192.168.49.2.nip.io` resolves to `192.168.49.2`.
- **Run Containers as Root (Mandatory)**
    - Some EOEPCA components require root privileges (e.g., certain processing containers). Attempting to run them under a non-root UID can fail.
    - Ensure your cluster's security policies (PodSecurityPolicies, Pod Security Standards, or Admission Controllers) allow `root` containers - see the example after this list.
- **Load Balancer with 80/443 (Recommended)**
    - If your environment is on-prem or in a cloud, you should have a load balancer or external IP that listens on HTTP/HTTPS.
    - In a development scenario (e.g., Minikube or a single-node Rancher), you can rely on NodePort or port forwarding, but this is not recommended for production.
- **Cert-Manager or Equivalent (Recommended)**
    - We strongly recommend cert-manager for TLS automation.
    - If you prefer manual certificates for dev or air-gapped setups, be prepared to manage rotation.
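For example, if your cluster enforces the built-in Pod Security Standards, a namespace that must run root or privileged containers can be opted out by labelling it with the `privileged` profile. This is a minimal sketch - the `processing` namespace name is illustrative:

```bash
# Allow root/privileged workloads in a specific namespace
# (the namespace name "processing" is illustrative)
kubectl label namespace processing \
  pod-security.kubernetes.io/enforce=privileged --overwrite
```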
## Production vs. Development
- **Production**
    - Leverage a managed Kubernetes cluster (e.g., an enterprise Rancher deployment or a cloud provider's managed K8S).
    - Use cert-manager with Let's Encrypt or your CA for auto-renewed certificates.
    - Pull images using an authenticated Docker Hub account, or host them in a private registry, to avoid pull-rate issues.
- **Development / Testing**
    - A single-node cluster (Rancher, K3s, Minikube, Docker Desktop) can suffice.
    - You can manually manage TLS or skip it if everything is internal.
    - For DNS, you can use a local DNS trick like `nip.io` or edit your `/etc/hosts` as needed (less flexible, though).
## Additional Guidance
- Ensure your container runtime (containerd, Docker, etc.) is up to date and fully compatible with your K8S version.
- Check the official Kubernetes docs or Rancher docs for installation references.
- If you expect to run many EOEPCA components pulling images from Docker Hub, set up credentials or a proxy to avoid rate limits.
## Creating an image pull secret for DockerHub
We recommend this step if any `helm` deployment is returning `ImagePullBackOff` errors.
For example, Docker credentials in the `processing` namespace…
```bash
kubectl create secret docker-registry regcred \
  --docker-server="https://index.docker.io/v1/" \
  --docker-username="YOUR_DOCKER_USERNAME" \
  --docker-password="YOUR_DOCKER_PASSWORD_OR_TOKEN" \
  --docker-email="YOUR_EMAIL" \
  -n processing
```
These credentials (`regcred`) can then be referenced as an `imagePullSecret` in the relevant PodSpec of any workloads running in the `processing` namespace.
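For example, a Pod (or the pod template of a Deployment) could reference the secret as follows - a minimal sketch in which the pod and image names are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-worker          # illustrative pod name
  namespace: processing
spec:
  imagePullSecrets:
    - name: regcred             # the secret created above
  containers:
    - name: worker
      image: docker.io/myorg/myimage:latest   # illustrative image
```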
NOTE. Your Kubernetes distribution may provide other means for configuring cluster-wide container registry credentials - e.g. directly within the container runtime of each node within your cluster - as illustrated below with the `--registry-config` option of the `k3d` cluster creation.
## Quick Start
For evaluation and/or development purposes, a non-production single-node local cluster can be established.
This quick start provides some simple instructions to establish a local development cluster using k3d, which is part of the Rancher Kubernetes offering.
**Install k3d**
Follow the Installation Instructions to install the k3d binary.
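For convenience, on Linux/macOS the project's install script can typically be used - a sketch assuming the script location documented by the k3d project; prefer the official Installation Instructions for the currently recommended method:

```bash
# Install the latest k3d release via the project's convenience script
curl -s https://raw.githubusercontent.com/k3d-io/k3d/main/install.sh | bash

# Confirm the installation
k3d version
```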
### Create Kubernetes Cluster
Cluster creation is initiated by the following command.
```bash
export KUBECONFIG="$PWD/kubeconfig.yaml"
k3d cluster create eoepca \
  --image rancher/k3s:v1.28.15-k3s1 \
  --k3s-arg="--disable=traefik@server:0" \
  --k3s-arg="--tls-san=$(hostname -f)@server:0" \
  --servers 1 --agents 0 \
  --port 31080:31080@loadbalancer \
  --port 31443:31443@loadbalancer
```
The characteristics of the created cluster are:

- KUBECONFIG file created in the file `kubeconfig.yaml` in the current directory
- Cluster name is `eoepca`. Change as desired
- Single node that provides all Kubernetes roles (control-plane, master, worker, etc.)
- No ingress controller (which is established elsewhere in this guide)
- Cluster exposes ports 31080 (http) and 31443 (https) as entrypoints. Change as desired
- The (optional) `--tls-san` is used to facilitate cluster administration (`kubectl`) from other hosts - by including the hostname in the `kubeconfig` client certificate
The Kubernetes version of the cluster can be selected via the `--image` option - taking account of:
- k3s images provided by rancher: https://hub.docker.com/r/rancher/k3s/tags
- Kubernetes Release History: https://kubernetes.io/releases/
- Kubernetes API Deprecations: https://kubernetes.io/docs/reference/using-api/deprecation-guide/
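Once the cluster is created, a quick sanity check can be made with `kubectl` against the generated kubeconfig:

```bash
# KUBECONFIG was exported above to ./kubeconfig.yaml
kubectl get nodes
kubectl get pods --all-namespaces
```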
Container registry credentials can be introduced at cluster creation - e.g. for DockerHub.
- Registry credentials are defined in a dedicated config file (`registries.yaml`)… (an illustrative sketch of this file is shown below)

- The file `registries.yaml` is introduced during the `k3d cluster create` command…

    ```bash
    export KUBECONFIG="$PWD/kubeconfig.yaml"
    k3d cluster create eoepca \
      --image rancher/k3s:v1.28.15-k3s1 \
      --k3s-arg="--disable=traefik@server:0" \
      --k3s-arg="--tls-san=$(hostname -f)@server:0" \
      --servers 1 --agents 0 \
      --port 31080:31080@loadbalancer \
      --port 31443:31443@loadbalancer \
      --registry-config "registries.yaml"
    ```
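As an illustration, a minimal `registries.yaml` carrying DockerHub credentials might look like the following - a sketch assuming the standard k3s registries configuration format; substitute your own username and access token:

```yaml
# registries.yaml - containerd registry configuration consumed by k3s
mirrors:
  docker.io:
    endpoint:
      - "https://registry-1.docker.io"
configs:
  "registry-1.docker.io":
    auth:
      username: YOUR_DOCKER_USERNAME
      password: YOUR_DOCKER_PASSWORD_OR_TOKEN
```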
### Suppress Resource Requests
If k3d or a similar single-node development cluster is used, it is likely that insufficient CPU/memory will be available to satisfy the resource requests specified by the helm chart defaults.
A simple approach to avoid this problem is to use a Mutating Admission Policy to zero any pod resource requests.
NOTE that the success of this workaround relies upon the (overall) tendency of the deployed components to request more cpu/memory resource than they require for a simple development setup.
For this we can use the Kyverno Policy Engine through which we can configure an admission webhook with a mutating rule.
**Install Kyverno using Helm**
```bash
helm repo add kyverno https://kyverno.github.io/kyverno/
helm repo update kyverno
helm upgrade -i kyverno kyverno/kyverno \
  --version 3.4.1 \
  --namespace kyverno \
  --create-namespace
```
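Before applying the policy, it is worth checking that the Kyverno admission controller pods have started:

```bash
kubectl -n kyverno get pods
```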
**Create the ClusterPolicy**
```bash
cat - <<'EOF' | kubectl apply -f -
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: remove-resource-requests
spec:
  rules:
    - name: remove-resource-requests
      match:
        any:
          - resources:
              kinds:
                - Pod
      mutate:
        foreach:
          - list: "request.object.spec.containers"
            patchStrategicMerge:
              spec:
                containers:
                  - name: "{{ element.name }}"
                    resources:
                      requests:
                        cpu: "0"
                        memory: "0"
          - list: "request.object.spec.initContainers || []"
            preconditions:
              all:
                - key: "{{ length(request.object.spec.initContainers) }}"
                  operator: GreaterThan
                  value: 0
            patchStrategicMerge:
              spec:
                initContainers:
                  - name: "{{ element.name }}"
                    resources:
                      requests:
                        cpu: "0"
                        memory: "0"
EOF
```
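To verify the policy is working, one approach is to create a throwaway pod and inspect the resource requests left behind by the admission webhook - the pod name and image below are illustrative:

```bash
# The policy should report as ready
kubectl get clusterpolicy remove-resource-requests

# Create a test pod and confirm that its CPU/memory requests have been zeroed
kubectl run request-test --image=nginx
kubectl get pod request-test -o jsonpath='{.spec.containers[0].resources.requests}'
kubectl delete pod request-test
```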
### Storage Provisioner
As described in the EOEPCA+ Prerequisites, a persistence solution providing ReadWriteMany storage is required by some BBs.
For this development deployment the single node HostPath Provisioner can be used as described in the Storage - Single-node Quick Start.
For a multi-node cluster with access to Object Storage, then the Storage - Multi-node Quick Start can be used.
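Whichever approach is used, the available storage classes (and the cluster default) can be confirmed with:

```bash
kubectl get storageclass
```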