
Container security and Kubernetes hardening: The complete guide

Container and Kubernetes Security from the Ground Up: The 4C Model, Docker Image Hardening (non-root, distroless, multi-stage), Container Scanning with Trivy and Grype, Kubernetes RBAC, Pod Security Standards (restricted), NetworkPolicy (deny-all + allowlist), secrets management with External Secrets Operator and Vault, runtime security with Falco and eBPF, serverless security, supply chain security with Cosign/SLSA, CI/CD pipeline, and Cloud-Native Security Maturity Model.


Containers have revolutionized the way applications are deployed, while simultaneously creating new attack surfaces. From insecure images to overprivileged pods: a misconfigured Kubernetes cluster lets attackers escape from containers, move laterally to other pods, and take over the cluster completely. This guide shows how to truly harden containers and Kubernetes.

The 4C Model of Cloud-Native Security

Cloud (Infrastructure Security)
  └── Cluster (Kubernetes Security)
       └── Container (Image and Runtime Security)
            └── Code (Application Security)

Each layer protects the one below it.
A vulnerability in the cloud layer compromises everything.

The 4C model originates from the CNCF Cloud Native Security Whitepaper and defines four security layers:

  1. Cloud - Infrastructure, Cloud Provider IAM, VPC design
  2. Cluster - Kubernetes itself: RBAC, API server hardening, etcd encryption
  3. Container - Images, runtime security, capabilities
  4. Code - Application security, dependencies, secrets in code

Kubernetes Threat Landscape

OWASP Kubernetes Top 10 - Most Common Attack Vectors:

  1. Compromised container → Escape to the host
  2. Weak RBAC → Unauthorized API server access
  3. Unencrypted secrets → Credentials in plain text in etcd
  4. Missing network policies → Lateral movement between pods
  5. Insecure images → Known CVEs in container images
  6. Privileged containers → Host access from container
  7. Misconfigured ServiceAccounts → automatically mounted tokens
  8. Insecure Kubernetes Dashboard without authentication
  9. Missing audit logs → no forensics possible
  10. Overprivileged Cloud IAM roles for worker nodes (e.g., EC2)

Container Image Security

Secure Dockerfiles

# Multi-stage build: Build tools are excluded from the production image

# Go example: Minimal image (FROM scratch)
FROM golang:1.22-alpine AS builder
WORKDIR /build
COPY go.sum go.mod ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -o /app ./cmd/server

# Final stage: EMPTY base image - only binary + CA certificates
FROM scratch
COPY --from=builder /app /app
COPY --from=builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
# nobody
USER 65534:65534
ENTRYPOINT ["/app"]

# Python example: Distroless image (no shell, no package manager)
# Note: the builder's Python version must match the distroless image
# (distroless python3-debian12 ships Python 3.11)
FROM python:3.11-slim AS builder
WORKDIR /app
COPY requirements.txt .
RUN pip install --user --no-cache-dir -r requirements.txt
COPY . .

FROM gcr.io/distroless/python3-debian12
WORKDIR /app
# Copy packages to a path the nonroot user can read (not /root)
COPY --from=builder /root/.local /home/nonroot/.local
COPY --from=builder /app .
ENV PYTHONPATH=/home/nonroot/.local/lib/python3.11/site-packages
# Not root!
USER nonroot
CMD ["app.py"]

# Java example: JRE instead of JDK, no Maven in production
FROM maven:3.9-eclipse-temurin-21 AS build
WORKDIR /app
COPY pom.xml .
RUN mvn dependency:go-offline
COPY src ./src
RUN mvn package -DskipTests

FROM eclipse-temurin:21-jre-alpine
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
WORKDIR /app
COPY --from=build /app/target/*.jar app.jar
USER appuser
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "app.jar"]

# Node example: Non-root user
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev && \
    addgroup -S appgroup && \
    adduser -S appuser -G appgroup
COPY . .
USER appuser
EXPOSE 3000
CMD ["node", "server.js"]

What distroless images offer:

  • No shell (bash, sh)
  • No package manager (apt, apk)
  • No curl, wget
  • Minimal attack surface - an attacker who gains access has almost nothing to execute

Container Image Scanning

# Trivy (Aqua Security, free - recommended):
trivy image --severity HIGH,CRITICAL nginx:latest
trivy image --severity CRITICAL,HIGH python:3.12

# Check Dockerfile for misconfigurations:
trivy config --severity HIGH,CRITICAL Dockerfile

# Check Kubernetes manifests:
trivy config --severity HIGH,CRITICAL ./k8s/

# CI/CD integration (GitHub Actions):
# - name: Trivy Image Scan
#   uses: aquasecurity/trivy-action@master
#   with:
#     image-ref: 'myapp:${{ github.sha }}'
#     format: 'sarif'
#     output: 'trivy-results.sarif'
#     severity: 'CRITICAL,HIGH'
#     exit-code: '1'  # Pipeline fails on findings!

# GitLab CI integration:
# scan_image:
#   stage: security
#   image: aquasec/trivy:latest
#   script:
#     - trivy image --exit-code 1 --severity HIGH,CRITICAL
#       $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
#   allow_failure: false

# Grype (Anchore):
grype myapp:latest
grype myapp:latest --only-fixed  # Only patchable vulnerabilities

# Snyk:
snyk container test myapp:v1.2.3

Image Policy Rules:

  • No images with HIGH/CRITICAL CVEs in production
  • Rebuild base images weekly (fresh patches)
  • Private registry (Harbor) instead of direct Docker Hub
  • Digest pinning: image: nginx:1.25-alpine@sha256:a8560b36... (Tag can be rebuilt, digest cannot)

Supply Chain Security

Image signing with Cosign (Sigstore)

# Cosign - Image signing (SLSA Level 2+)

# 1. Generate key pair
cosign generate-key-pair

# 2. Sign image after build
cosign sign --key cosign.key \
  registry.example.com/myapp:1.2.3

# 3. Verification (in CI/CD or Admission Controller)
cosign verify --key cosign.pub \
  registry.example.com/myapp:1.2.3

# Keyless signing with OIDC (GitHub Actions job needs "id-token: write" permission)
cosign sign --yes \
  registry.example.com/myapp:${{ github.sha }}
# Cosign picks up the workflow's OIDC identity automatically and records
# the signature in the public Rekor transparency log

# Generate and attach SBOM (Software Bill of Materials)
syft registry.example.com/myapp:1.2.3 \
  -o spdx-json > sbom.json

cosign attach sbom --sbom sbom.json \
  registry.example.com/myapp:1.2.3

# Check SBOM for vulnerabilities
grype sbom:sbom.json --fail-on high
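Signature verification can also be enforced cluster-side at admission time. A minimal sketch using Kyverno's verifyImages feature (the registry prefix is an assumption matching the examples above; the key placeholder must be replaced with the cosign.pub generated earlier):

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: verify-image-signature
spec:
  validationFailureAction: Enforce
  rules:
    - name: require-cosign-signature
      match:
        any:
          - resources:
              kinds: ["Pod"]
      verifyImages:
        - imageReferences:
            - "registry.example.com/*"   # assumption: internal registry prefix
          attestors:
            - entries:
                - keys:
                    # placeholder - paste the contents of cosign.pub here
                    publicKeys: |-
                      -----BEGIN PUBLIC KEY-----
                      <contents of cosign.pub>
                      -----END PUBLIC KEY-----
```

With this policy in place, pods referencing unsigned images from the matched registry are rejected at admission.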

Kubernetes Cluster Hardening

The Kubernetes Security Model

CIS Kubernetes Benchmark v1.9 (2024):
  Over 100 checks for control plane, worker nodes, and policies
  kube-bench: automated benchmark testing

Run kube-bench:
  kubectl apply -f \
    https://raw.githubusercontent.com/aquasecurity/kube-bench/main/job.yaml
  kubectl logs -l app=kube-bench

Important CIS controls:
  1.2.1  --anonymous-auth=false (API Server)
  1.2.6  --authorization-mode=Node,RBAC (never AlwaysAllow!)
  4.2.1  --anonymous-auth=false (Kubelet)
  5.1.1  RBAC: Do not assign cluster-admin

Fix common kube-bench FAIL issues:
  Disable anonymous auth:
    --anonymous-auth=false

  Enable RBAC:
    --authorization-mode=Node,RBAC

  Enable audit logging:
    --audit-log-path=/var/log/kubernetes/audit.log
    --audit-log-maxage=30
    --audit-log-maxbackup=10
    --audit-log-maxsize=100

RBAC - Role-Based Access Control

# Kubernetes RBAC concepts:
#   Role/ClusterRole:     What can you do? (Verbs on Resources)
#   RoleBinding:          Who gets a Role? (in a namespace)
#   ClusterRoleBinding:   Who gets a ClusterRole? (cluster-wide)

# DANGEROUS - Anti-Pattern: Full access for all service accounts
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: terrible-idea
subjects:
- kind: Group
  name: system:serviceaccounts  # ALL service accounts!
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: cluster-admin           # Full cluster admin!
  apiGroup: rbac.authorization.k8s.io

---
# CORRECT - Least Privilege: Dedicated ServiceAccount per App
apiVersion: v1
kind: ServiceAccount
metadata:
  name: myapp-sa
  namespace: production

---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: myapp-role
  namespace: production
rules:
  # Only what the app really needs:
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["get", "list"]
    resourceNames: ["myapp-config"]  # Only its own ConfigMap!
  # No secrets access (use the External Secrets Operator instead!)
  # No: pods/exec (container shell!)
  # No: cluster-scoped resources

---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: myapp-rolebinding
  namespace: production
subjects:
  - kind: ServiceAccount
    name: myapp-sa
    namespace: production
roleRef:
  kind: Role
  name: myapp-role
  apiGroup: rbac.authorization.k8s.io

Dangerous RBAC permissions - never grant:

  • cluster-admin for application ServiceAccounts
  • Wildcards: verbs: ["*"], resources: ["*"]
  • secrets: ["get", "list"] (read all secrets!)
  • pods/exec (shell access to any pods!)
  • nodes: ["*"] (Node takeover!)

# RBAC audit:
kubectl auth can-i create pods --namespace=production
kubectl auth can-i '*' '*' --all-namespaces  # Cluster admin check

# All bindings for a service account:
kubectl get rolebindings,clusterrolebindings -A \
  -o json | jq '.items[] | select(.subjects[]?.name=="myapp-sa")'

# rbac-lookup (visualization, installed as a kubectl krew plugin):
kubectl rbac-lookup myapp-sa --kind serviceaccount

Pod Security Standards (PSS)

# Namespace with "restricted" policy label (since Kubernetes 1.25):
apiVersion: v1
kind: Namespace
metadata:
  name: production
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/enforce-version: latest
    pod-security.kubernetes.io/warn: restricted
    pod-security.kubernetes.io/audit: restricted

# Alternatively via kubectl:
# kubectl label namespace production \
#   pod-security.kubernetes.io/enforce=restricted

Pod Security Profiles:
  privileged:  All allowed (only for system workloads!)
  baseline:    Basic security (prevents privilege escalation)
  restricted:  Maximum security (recommended for applications)

Restricted prevents:
  Privileged containers
  hostNetwork, hostPID, hostIPC
  hostPath volumes
  PrivilegeEscalation
  Root user
  Unsafe Linux capabilities

# Restricted-compliant Pod (complete example):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  namespace: production
spec:
  template:
    spec:
      serviceAccountName: myapp-sa      # Dedicated SA!
      automountServiceAccountToken: false  # No token if not needed!
      securityContext:
        runAsNonRoot: true
        runAsUser: 10001
        runAsGroup: 10001
        fsGroup: 10001
        seccompProfile:
          type: RuntimeDefault           # Syscall filtering enabled!
      containers:
        - name: myapp
          image: myregistry/myapp:v1.2.3@sha256:abc123  # Digest pinning!
          securityContext:
            allowPrivilegeEscalation: false  # No sudo/setuid!
            readOnlyRootFilesystem: true
            capabilities:
              drop: ["ALL"]             # Remove all Linux capabilities!
              # add: ["NET_BIND_SERVICE"] # Only if port < 1024 is required
          resources:
            requests:
              memory: "128Mi"
              cpu: "100m"
            limits:
              memory: "512Mi"
              cpu: "500m"               # No unlimited CPU!
          volumeMounts:
            - name: tmp
              mountPath: /tmp           # Writable /tmp despite readOnlyRootFilesystem
      volumes:
        - name: tmp
          emptyDir: {}

NetworkPolicy and Microsegmentation

Standard Kubernetes: All pods can reach all other pods!
NetworkPolicy is a critical hardening measure.

Prerequisite: CNI with NetworkPolicy support
  (Calico, Cilium, Weave Net - NOT plain Flannel without Calico)

# 1. Default-deny policy: Block everything first, then allow explicitly
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: production
spec:
  podSelector: {}           # All pods in the namespace
  policyTypes:
    - Ingress
    - Egress

---
# 2. Allowed: Ingress from the Ingress Controller
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ingress-controller
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: myapp
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: ingress-nginx
          podSelector:
            matchLabels:
              app.kubernetes.io/name: ingress-nginx
      ports:
        - protocol: TCP
          port: 3000

---
# 3. Allowed: Frontend to Backend (specific pods only)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080

---
# 4. Allowed: Egress to DB and DNS
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-db-egress
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: myapp
  policyTypes:
    - Egress
  egress:
    - to:
        - podSelector:
            matchLabels:
              app: postgresql
      ports:
        - protocol: TCP
          port: 5432
    # Always allow DNS!
    - ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53

Extended with Cilium (DNS-based filtering):

apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: restrict-egress-to-fqdn
spec:
  endpointSelector:
    matchLabels:
      app: backend
  egress:
    - toFQDNs:
        - matchName: "api.extern.de"  # Only this domain allowed!
      toPorts:
        - ports:
            - port: "443"
              protocol: TCP

Secrets Management in Kubernetes

Problem: By default, Kubernetes secrets are stored in etcd only as Base64-encoded
(= unencrypted!) data.

Anyone who can read etcd can read every secret.
And secrets committed to Git (the most common mistake): never!

Enable etcd encryption

# /etc/kubernetes/encryption-config.yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
- resources:
  - secrets
  providers:
  - aescbc:
      keys:
      - name: key1
        secret: <base64-encoded-32-byte-key>  # openssl rand -base64 32
  - identity: {}  # Fallback for unencrypted secrets (migration)

# Start kube-apiserver with:
# --encryption-provider-config=/etc/kubernetes/encryption-config.yaml

# Re-encrypt existing secrets:
# kubectl get secrets -A -o json | kubectl replace -f -

# Check: Is etcd encrypted?
etcdctl get /registry/secrets/default/my-secret --print-value-only \
  | strings | head -5
# If plaintext → not encrypted!
# With encryption: output starts with "k8s:enc:..."

# Secrets in Vault/AWS SSM - never directly in K8s
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: myapp-secrets
  namespace: production
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: vault-backend
    kind: ClusterSecretStore
  target:
    name: myapp-secrets   # Creates K8s Secret automatically!
    creationPolicy: Owner
  data:
    - secretKey: db-password      # K8s Secret Key
      remoteRef:
        key: production/myapp/database  # Vault path
        property: password

---
apiVersion: external-secrets.io/v1beta1
kind: ClusterSecretStore
metadata:
  name: vault-backend
spec:
  provider:
    vault:
      server: "https://vault.company.com"
      path: "secret"
      version: "v2"
      auth:
        kubernetes:
          mountPath: "kubernetes"
          role: "external-secrets"  # Vault Role with Least Privilege

Sealed Secrets (Bitnami) - GitOps-friendly: Secrets are stored in Git with asymmetric encryption. The controller in the cluster decrypts them. Key rotation is possible.
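As an illustration, a sketch of such a resource (names follow the examples above; the ciphertext placeholder is produced by the kubeseal CLI):

```yaml
# Typically generated with:
#   kubectl create secret generic myapp-secrets \
#     --from-literal=db-password='...' --dry-run=client -o yaml \
#     | kubeseal --format yaml > sealedsecret.yaml
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: myapp-secrets
  namespace: production
spec:
  encryptedData:
    db-password: <ciphertext from kubeseal>  # safe to commit to Git
```

The in-cluster controller decrypts this into a regular Secret; by default the ciphertext is bound to its namespace and name, so it cannot be replayed elsewhere.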


Admission Controller and Policy Enforcement

# Kyverno Policy: No "latest" tag allowed
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disallow-latest-tag
spec:
  validationFailureAction: enforce
  rules:
  - name: require-image-tag
    match:
      resources:
        kinds: ["Pod"]
    validate:
      message: "Image tag ':latest' is prohibited. Use a specific tag."
      pattern:
        spec:
          containers:
          - image: "!*:latest"

---
# Kyverno: Root user prohibited
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disallow-root-user
spec:
  validationFailureAction: enforce
  rules:
  - name: check-runAsNonRoot
    match:
      resources:
        kinds: ["Pod"]
    validate:
      message: "Containers must not run as root."
      pattern:
        spec:
          containers:
          - (securityContext):
              runAsNonRoot: true

---
# OPA Gatekeeper: Only registry-internal images allowed
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sAllowedRepos
metadata:
  name: allowed-repos
spec:
  match:
    kinds:
    - apiGroups: [""]
      kinds: ["Pod"]
  parameters:
    repos:
      - "registry.intern.firma.de/"
      - "registry.k8s.io/"
      # Docker Hub images (docker.io/) are rejected!

Runtime Security with Falco and eBPF

Falco - eBPF-based runtime security

# Installation via Helm:
helm repo add falcosecurity https://falcosecurity.github.io/charts
helm install falco falcosecurity/falco \
  --namespace falco \
  --set driver.kind=ebpf \
  --set falcosidekick.enabled=true \
  --set falcosidekick.config.slack.webhookurl=<webhook>

# What Falco detects:
#   Shell opened in container (pod exec)
#   Privilege escalation attempted
#   Sensitive files read (/etc/shadow, /etc/passwd)
#   Unusual network connections from containers
#   Malware-like behavior (crypto-mining, reverse shell)

# Important default rules (already built into Falco):
- rule: "Terminal shell in container"
  desc: "A shell was used as entrypoint/exec in a container"
  condition: >
    evt.type = execve and
    container and
    proc.name in (shell_binaries)

- rule: "Write below binary dir"
  desc: "An attempt to write to any file below a set of binary directories"
  condition: >
    bin_dir and evt.dir = < and
    not user_known_write_below_binary_dir_activities

# Custom Rule: Detect crypto mining
- rule: "Crypto Mining Activity"
  desc: "Detected potential crypto mining"
  condition: >
    evt.type = connect and
    fd.sport in (3333, 4444, 8888, 14444, 45700) and
    container
  output: >
    Crypto mining (connection=%fd.name pid=%proc.pid container=%container.id)
  priority: CRITICAL

# Falco alerts to Slack/PagerDuty (falcosidekick Helm values):
# falcosidekick:
#   config:
#     slack:
#       webhookurl: "https://hooks.slack.com/services/..."
#       minimumpriority: "warning"
#     pagerduty:
#       routingkey: "<routing-key>"
#       minimumpriority: "critical"

Tetragon and Cilium (eBPF)

Modern runtime security tools use eBPF:

Tetragon (Cilium):
  Process, File, Network Events
  Real-time Enforcement (not just alerting)
  K8s-native (aware of Namespace, Pod, ServiceAccount)
  Alert: when /etc/shadow is read
  Alert: when kubectl is executed in the container
  Kill: when crypto-mining behavior is detected

Cilium (Network Policy with eBPF):
  L7 Awareness: HTTP method, DNS-based filtering
  WireGuard encryption between nodes
  Hubble: Network Observability
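As an illustration of the alert on /etc/shadow reads mentioned above, a TracingPolicy in the style of the file-monitoring example from the Tetragon documentation (a sketch; exact field names can differ between Tetragon versions):

```yaml
apiVersion: cilium.io/v1alpha1
kind: TracingPolicy
metadata:
  name: detect-shadow-read
spec:
  kprobes:
    - call: "fd_install"        # kernel function: a file descriptor is installed
      syscall: false
      args:
        - index: 0
          type: int
        - index: 1
          type: "file"
      selectors:
        - matchArgs:
            - index: 1          # match on the file argument
              operator: "Equal"
              values:
                - "/etc/shadow"
```

Tetragon then emits a process-aware event (with pod and namespace context) every time a container opens /etc/shadow.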

Service Mesh for Zero Trust

# Istio PeerAuthentication: mTLS required (all pods must authenticate)
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: production
spec:
  mtls:
    mode: STRICT
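On top of STRICT mTLS, the workload identities established by the mesh can be used for authorization. A sketch (the frontend-sa ServiceAccount name is an assumption):

```yaml
# Istio AuthorizationPolicy: backend only accepts calls from the frontend identity
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: backend-allow-frontend
  namespace: production
spec:
  selector:
    matchLabels:
      app: backend
  action: ALLOW
  rules:
    - from:
        - source:
            principals: ["cluster.local/ns/production/sa/frontend-sa"]
```

Combined with STRICT mTLS, any workload without the frontend's SPIFFE identity is denied, even from inside the cluster.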

Serverless Security

Serverless (AWS Lambda, Azure Functions, Google Cloud Functions)
has its own risks:

Attack vectors:
  Event injection: Attacker controls input event (S3, SQS, etc.)
  Dependency vulnerabilities: npm/pip packages without EDR protection
  Overly permissive IAM roles
  Secrets in Environment Variables
  Denial-of-Wallet (costly loops or recursive calls)

Best Practices:
  Lambda Execution Role: minimal permissions only
  Secrets: AWS Secrets Manager / Parameter Store (not Env Vars)
  Set Function Timeout (prevents Denial-of-Wallet)
  Dead Letter Queue for error handling
  VPC placement for internal resources (RDS, ElastiCache)
  Input Validation: never use event['key'] directly in SQL/OS commands

# AWS Lambda - secure structure
import re, boto3

def handler(event, context):
    # Validation FIRST
    bucket = event.get('bucket', '')
    if not re.match(r'^[a-z0-9-]+$', bucket):
        raise ValueError("Invalid bucket name")

    # Secrets from SSM, not from env vars
    ssm = boto3.client('ssm')
    api_key = ssm.get_parameter(
        Name='/myapp/api_key', WithDecryption=True
    )['Parameter']['Value']
    # ...

Security in the CI/CD pipeline

# Complete security pipeline (GitHub Actions)
name: Secure Build Pipeline

on: [push]

jobs:
  sast:
    name: Static Analysis
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v4
    - name: Semgrep SAST
      uses: returntocorp/semgrep-action@v1
      with:
        config: p/owasp-top-ten

  sca:
    name: Dependency Scan
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v4
    - name: pip-audit
      run: pip-audit -r requirements.txt

  container-scan:
    name: Container Image Scan
    needs: [sast, sca]
    runs-on: ubuntu-latest
    steps:
    - name: Build Image
      run: docker build -t myapp:${{ github.sha }} .

    - name: Trivy Scan
      uses: aquasecurity/trivy-action@master
      with:
        image-ref: myapp:${{ github.sha }}
        severity: CRITICAL,HIGH
        exit-code: 1

    - name: Sign Image
      run: |
        cosign sign --key env://COSIGN_KEY \
          registry.example.com/myapp:${{ github.sha }}
      env:
        COSIGN_KEY: ${{ secrets.COSIGN_KEY }}

  dast:
    name: Dynamic Analysis (Staging)
    needs: container-scan
    runs-on: ubuntu-latest
    steps:
    - name: Deploy to Staging
      run: |
        helm upgrade --install myapp ./chart \
          --set tag=${{ github.sha }}
    - name: OWASP ZAP Scan
      uses: zaproxy/action-full-scan@v0.10.0
      with:
        target: https://staging.example.com

Cloud-Native Security Maturity Model

Level 0 - Blind:
  No container scans
  Default Kubernetes config
  Root containers everywhere

Level 1 - Basic:
  Image scanning (Trivy/Snyk)
  Non-root containers
  RBAC enabled (but broad)

Level 2 - Managed:
  Supply Chain Signing (Cosign)
  Pod Security Standards (restricted)
  NetworkPolicy for all namespaces
  Falco runtime monitoring

Level 3 - Advanced:
  SLSA Level 3 (build provenance verified)
  eBPF-based runtime enforcement (Tetragon)
  Service Mesh with mTLS (Istio/Linkerd)
  GitOps with Policy-as-Code (OPA, Kyverno)
  Automated remediation

Level 4 - Optimized:
  Continuous compliance (CSPM + Policy)
  Chaos Engineering for Security
  Red Team Exercises in Cloud Environment

Security Checklist for Kubernetes

API Server:
  --anonymous-auth=false
  --authorization-mode=Node,RBAC (never AlwaysAllow!)
  --audit-log-path configured
  --encryption-provider-config for Secrets
  --tls-cipher-suites set to secure ciphers

Worker Nodes:
  kubelet --anonymous-auth=false
  kubelet --authorization-mode=Webhook
  kubelet --read-only-port=0 (closes port 10255!)
  Kernel updates up to date

RBAC:
  No cluster-admin for normal service accounts
  RBAC audit performed with rbac-lookup
  automountServiceAccountToken: false (per namespace default)

Network:
  Default-Deny NetworkPolicy in all namespaces
  Ingress controller with TLS
  Egress policies for external communication

Images:
  Image scanning in CI/CD (Trivy/Grype)
  Only registry-internal images allowed (OPA Gatekeeper/Kyverno)
  Digest pinning for all production images
  Non-root users in all containers
  Prefer distroless or scratch images

Secrets:
  etcd encryption enabled
  No secrets in environment variables or ConfigMaps
  External Secrets Operator or Sealed Secrets for sensitive data

Monitoring:
  Falco for runtime anomaly detection
  Kubernetes audit logs forwarded to SIEM
  Run kube-bench regularly

IaC / GitOps:
  Kubernetes manifests validated via Checkov/tfsec
  Kyverno or OPA Gatekeeper as admission controller
  Image signing enforced with Cosign

Conclusion

Containers and Kubernetes offer enormous advantages for deployment and scaling—but without consistent hardening, massive security risks arise. The 4C model provides the framework: Cloud, Cluster, Container, Code—every layer must be secured. The most important measures are: distroless images with digest pinning, RBAC based on least privilege, Pod Security Standards set to "restricted", NetworkPolicy set to "default-deny," external secrets instead of etcd secrets, and Falco for runtime anomaly detection.

Further reading: Cloud Security Hub | Cloud Detection Engineering

Sources & References

  1. NIST SP 800-204: Security Strategies for Microservices-based Application Systems - NIST
  2. CNCF Cloud Native Security Whitepaper - CNCF
  3. CIS Kubernetes Benchmark - CIS
  4. OWASP Kubernetes Top 10 - OWASP


About the Author

Chris Wojzechowski

Managing Partner

Managing Partner of AWARE7 GmbH with many years of expertise in information security, penetration testing, and IT risk management. Graduate of the Master's program in Internet Security at Westfälische Hochschule (if(is), Prof. Norbert Pohlmann). Bestselling author with Wiley-VCH and lecturer at the ASW-Akademie. His assessments on cybersecurity and digital sovereignty have appeared in Welt am Sonntag, WDR, Deutschlandfunk, and Handelsblatt, among others.

10 Publications
  • Einsatz von elektronischer Verschlüsselung - Hemmnisse für die Wirtschaft (2018)
  • Kompass IT-Verschlüsselung - Orientierungshilfen für KMU (2018)
  • IT Security Day 2025 - Live Hacking: KI in der Cybersicherheit (2025)
  • Live Hacking - Credential Stuffing: Finanzrisiken jenseits Ransomware (2025)
  • Keynote: Live Hacking Show - Ein Blick in die Welt der Cyberkriminalität (2025)
  • Analyse von Angriffsflächen bei Shared-Hosting-Anbietern (2024)
  • Gänsehaut garantiert: Die schaurigsten Funde aus dem Leben eines Pentesters (2022)
  • IT Security Zertifizierungen — CISSP, T.I.S.P. & Co (Live-Webinar) (2023)
  • Sicherheitsforum Online-Banking — Live Hacking (2021)
  • Nipster im Netz und das Ende der Kreidezeit (2017)

Certifications: IT-Grundschutz-Praktiker (TÜV), IT Risk Manager (DGI), § 8a BSIG Prüfverfahrenskompetenz, Ausbilderprüfung (IHK)
This article was last edited on 08.03.2026. Responsible: Chris Wojzechowski, Managing Partner at AWARE7 GmbH. License: CC BY 4.0 - free use with attribution: "AWARE7 GmbH, https://a7.de"
