Running an Nginx Reverse Proxy on Kubernetes

In this post I will show how I've set up my Nginx reverse proxy on Kubernetes. Instead of mounting my configs from a persistent volume, I've opted to create them as ConfigMaps.

I have an application running on my LAN on port 3579, and I want to put a reverse proxy in front of it, with Traefik doing the SSL termination.
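
Before creating anything on the cluster, it's worth confirming that the upstream application is reachable from the network (192.168.0.240:3579 is the address used throughout this post):

$ curl -I http://192.168.0.240:3579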

First we define our ConfigMap, named app-rp-config-cm, which will include nginx.conf and app.conf:

---
apiVersion: v1  
kind: ConfigMap  
metadata:  
  name: app-rp-config-cm
data:  
  nginx.conf: |
    user www-data;
    worker_processes auto;
    pid /run/nginx.pid;
    events {
        worker_connections 1024;
    }
    http {
        sendfile on;
        tcp_nopush on;
        tcp_nodelay on;
        keepalive_timeout 65;
        types_hash_max_size 2048;
        include /etc/nginx/mime.types;
        default_type application/octet-stream;
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_prefer_server_ciphers on;
        error_log /var/log/nginx/error.log;
        gzip on;
        gzip_disable "msie6";
        include /etc/nginx/conf.d/app.conf;
    }
  app.conf: |
    upstream app {
      server 192.168.0.240:3579;
      keepalive 15;
    }
    server {
      listen 80;
      server_name _;
      location / {
        proxy_pass http://app;
        proxy_http_version 1.1;
        proxy_set_header Connection "Keep-Alive";
        proxy_set_header Proxy-Connection "Keep-Alive";
        proxy_redirect off;
      }
    }
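
If you prefer to keep nginx.conf and app.conf as plain files on disk, the same ConfigMap can also be generated with kubectl instead of written by hand (assuming both files are in your working directory):

$ kubectl create configmap app-rp-config-cm --from-file=nginx.conf --from-file=app.conf

Adding --dry-run=client -o yaml to that command prints the manifest instead of creating it, which is handy if you want to keep the generated YAML in version control.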

Then our deployment:

---
apiVersion: apps/v1  
kind: Deployment  
metadata:  
  name: app-reverse-proxy
  labels:
    app: app-reverse-proxy
    category: reverse-proxy
spec:  
  replicas: 1
  selector:
    matchLabels:
      app: app-reverse-proxy
  template:
    metadata:
      labels:
        app: app-reverse-proxy
        category: reverse-proxy
    spec:
      containers:
      - name: app-reverse-proxy
        image: nginx
        ports:
        - containerPort: 80
        resources:
          requests:
            cpu: 100m
            memory: 50Mi
          limits:
            cpu: 100m
            memory: 50Mi
        volumeMounts:
        - name: app-rp-config-cm-vol
          mountPath: /etc/nginx/nginx.conf
          subPath: nginx.conf
          readOnly: true
        - name: app-rp-config-cm-vol
          mountPath: /etc/nginx/conf.d/app.conf
          subPath: app.conf
          readOnly: true
      volumes:
      # https://medium.com/swlh/quick-fix-mounting-a-configmap-to-an-existing-volume-in-kubernetes-using-rancher-d01c472a10ad
      - configMap:
          name: app-rp-config-cm
          items:
          - key: nginx.conf
            path: nginx.conf
          - key: app.conf
            path: app.conf
        name: app-rp-config-cm-vol
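
One caveat with subPath mounts: Kubernetes does not propagate ConfigMap updates into files mounted via subPath, so after editing the ConfigMap you need to restart the pods for the new config to take effect:

$ kubectl rollout restart deployment/app-reverse-proxy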

Then our service:

---
apiVersion: v1  
kind: Service  
metadata:  
  name: app-reverse-proxy
  namespace: default
spec:  
  ports:
  - name: http
    targetPort: 80
    port: 80
  selector:
    app: app-reverse-proxy
    category: reverse-proxy
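
At this point you can already smoke-test the proxy from your workstation, before the ingress is in place, by port-forwarding the service and curling it:

$ kubectl port-forward svc/app-reverse-proxy 8080:80
$ curl -I http://localhost:8080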

And our ingress, using Traefik and Let's Encrypt:

---
apiVersion: extensions/v1beta1  
kind: Ingress  
metadata:  
  name: app-reverse-proxy
  namespace: default
  annotations:
    kubernetes.io/ingress.class: traefik
    ingress.kubernetes.io/ssl-redirect: "true"
    traefik.backend.loadbalancer.stickiness: "true"
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:  
  tls:
    - secretName: app-mydomain-com-tls
      hosts:
        - app.mydomain.com
  rules:
  - host: app.mydomain.com
    http:
      paths:
      - path: /
        backend:
          serviceName: app-reverse-proxy
          servicePort: http
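
Note that the extensions/v1beta1 Ingress API has since been removed; on Kubernetes 1.19 and later the equivalent manifest uses networking.k8s.io/v1. A minimal sketch, assuming an IngressClass named traefik exists on the cluster:

---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-reverse-proxy
  namespace: default
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  ingressClassName: traefik
  tls:
    - secretName: app-mydomain-com-tls
      hosts:
        - app.mydomain.com
  rules:
  - host: app.mydomain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app-reverse-proxy
            port:
              name: http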

You can apply them individually, or concatenate them into a single file and run:

$ kubectl apply -f my-deployment.yml

Then verify that your deployment is in its desired state:

$ kubectl get deployment -l category=reverse-proxy
NAME                 READY   UP-TO-DATE   AVAILABLE   AGE  
app-reverse-proxy   1/1     1            1           12m  
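
You can also check the pods themselves, and ask nginx to validate the mounted configuration:

$ kubectl get pods -l app=app-reverse-proxy
$ kubectl exec deploy/app-reverse-proxy -- nginx -t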

And verify that your certificate has been issued:

$ kubectl describe certificate
Events:  
  Type    Reason        Age   From          Message
  ----    ------        ----  ----          -------
  Normal  GeneratedKey  12m   cert-manager  Generated a new private key
  Normal  Requested     12m   cert-manager  Created new CertificateRequest resource "app-mydomain-com-tls-90643266"
  Normal  Issued        12m   cert-manager  Certificate issued successfully
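
cert-manager also exposes the overall state on the Certificate resource itself; its READY column should show True once the TLS secret has been populated:

$ kubectl get certificate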

You should now be able to access your application at the hostname configured in the ingress.
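
A final end-to-end check from outside the cluster (assuming DNS for app.mydomain.com points at your Traefik endpoint):

$ curl -I https://app.mydomain.com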