
NGINX Ingress on Kubernetes

When deploying a larger web app onto Kubernetes it’s common to run a dedicated web service layer to proxy onto the application pods, perhaps doing some application layer routing to different backend services. For Brightbox users, that web service would then be exposed to the internet through our Cloud Load Balancer service which integrates neatly with Kubernetes, and can even handle managing your SSL certificates and renewals.

However, you can also run one cluster-wide web service called an Ingress controller which can provide this service for all deployments on the cluster and in a standardised way.

There are several ingress controllers available but we generally recommend the NGINX ingress controller. When combined with a certificate manager, you can set up SSL-secured external web serving of any deployment with just a simple “Ingress” resource definition.

Here we’ll explain how to get nginx-ingress installed on an existing Kubernetes cluster running the Brightbox Cloud Controller (which you get if you use our Terraform system to build your cluster).

Install Helm 3

Helm is a package management tool that makes it easy to install software on Kubernetes clusters. For this guide you’ll need version 3 of the Helm command line tool installed wherever you’ll be managing your Kubernetes cluster from. See the Helm installation documentation for more details.

Install nginx-ingress controller with Helm

Now that you have Helm installed, make sure you have the stable Kubernetes chart repository added and updated:

$ helm repo add stable https://kubernetes-charts.storage.googleapis.com/
$ helm repo update

Create a namespace for the deployment:

$ kubectl create namespace nginx-ingress

Then create a config file for the Helm chart. Call it nginx-ingress.yaml and add the following config for starters:

controller:
  replicaCount: 1
  service:
    annotations:
      service.beta.kubernetes.io/brightbox-load-balancer-healthcheck-protocol: tcp
      service.beta.kubernetes.io/brightbox-load-balancer-listener-protocol: tcp
      service.beta.kubernetes.io/brightbox-load-balancer-listener-proxy-protocol: "v2"
      service.beta.kubernetes.io/brightbox-load-balancer-listener-idle-timeout: "120000"
  config:
    use-proxy-protocol: "true"

Then deploy it with Helm, into the new namespace and with the config file you just created:

$ helm upgrade --install --namespace=nginx-ingress nginx-ingress stable/nginx-ingress --values nginx-ingress.yaml

This will deploy nginx-ingress, create a Brightbox Load Balancer and map a new Cloud IP to it. You can find the Cloud IP name by looking up the nginx-ingress-controller service:

$ kubectl -n nginx-ingress get service nginx-ingress-controller

NAME                       TYPE           CLUSTER-IP      EXTERNAL-IP                   PORT(S)                      AGE
nginx-ingress-controller   LoadBalancer   172.30.149.39   cip-onm2d.gb1.brightbox.com   80:32437/TCP,443:31956/TCP   18m

Both the Load Balancer and NGINX are configured to use the PROXY protocol, which ensures the real IP addresses of clients are kept intact.

If you have an existing Cloud IP you want to use, you can specify it in the Helm config as:

controller:
  service:
    loadBalancerIP: "109.107.x.x"

Set up a DNS wildcard record

To make setting up new ingress services as convenient as possible, it’s a good idea to have a wildcard DNS record set up. All Brightbox Cloud IPs come with a wildcard DNS record as their identifier (in this example, that would be *.cip-onm2d.gb1.brightbox.com), but we recommend setting up your own record on your own domain. Just have it resolve to the same Cloud IP address (remember that Cloud IPs have both IPv4 and IPv6 addresses).

In this example, we’ll use *.k8s.example.com.
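
A wildcard record on your own domain can be as simple as a CNAME to the Cloud IP’s DNS identifier, which carries both the IPv4 and IPv6 addresses. A sketch of a BIND-style zone entry (the zone layout and TTL here are illustrative):

```
; in the example.com zone: point *.k8s at the Cloud IP's DNS identifier
*.k8s   300   IN   CNAME   cip-onm2d.gb1.brightbox.com.
```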

Set up an ingress for your service

Now, exposing your deployment to the internet is as easy as creating an Ingress definition. Assuming you have a Deployment called hello-world with an internal Service such as:

kind: Service
apiVersion: v1
metadata:
  name: hello-world
  namespace: hello-world
spec:
  selector:
    name: hello-world
  ports:
    - name: web
      protocol: TCP
      port: 80
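
For reference, a minimal Deployment that this Service would select might look like the following (the image is an illustrative assumption; any container listening on port 80 works):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
  namespace: hello-world
spec:
  replicas: 1
  selector:
    matchLabels:
      name: hello-world   # must match the Service's selector
  template:
    metadata:
      labels:
        name: hello-world
    spec:
      containers:
      - name: hello-world
        image: nginxdemos/hello   # illustrative image serving HTTP on port 80
        ports:
        - containerPort: 80
```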

Then to expose this at hello-world.k8s.example.com you just need to create an Ingress like this:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hello-world
  namespace: hello-world
spec:
  rules:
  - host: hello-world.k8s.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: hello-world
          servicePort: web

Now when visiting hello-world.k8s.example.com, your browser communicates with the cluster’s NGINX system, which proxies the request through to your pods via your Service.

Routing sub-paths to multiple backends

If you have separate parts of your system served by different apps, it’s easy to route those with multiple path entries in your ingress rules like this:

rules:
- host: hello-world.k8s.example.com
  http:
    paths:
    - path: /
      backend:
        serviceName: hello-world
        servicePort: web
    - path: /blog
      backend:
        serviceName: wordpress
        servicePort: web

Scaling up NGINX

By default, just one instance of NGINX is deployed on your cluster, on one of your nodes. To handle more load you can scale up and add additional instances: edit your nginx-ingress.yaml Helm config, change replicaCount to 2 and rerun helm:
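
The relevant part of the updated config would look like:

```yaml
controller:
  replicaCount: 2   # scale from one NGINX pod to two
```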

$ helm upgrade --namespace=nginx-ingress nginx-ingress stable/nginx-ingress --values nginx-ingress.yaml

A second pod running NGINX will then be started on another node in your cluster, and that node is automatically added to the Brightbox Load Balancer. Our Kubernetes Cloud Controller labels nodes with their availability zone, so Kubernetes will try to spread NGINX pods across our multiple datacentres too, giving you high availability.

You can customise various other aspects of the NGINX deployment with other keys in the Helm config.
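
For example, you could reserve resources for the controller pods or pass extra settings through to the NGINX ConfigMap (a sketch; the specific values are assumptions to adapt to your workload):

```yaml
controller:
  resources:
    requests:
      cpu: 100m       # reserve CPU for each controller pod
      memory: 128Mi
  config:
    proxy-body-size: "64m"   # default max request body size for all ingresses
```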

Customising Ingress configs

Each Ingress can be customised with annotations. You can easily configure server aliases, add rewrites, tweak various timeouts and even set up authentication:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hello-world
  namespace: hello-world
  annotations:
    nginx.ingress.kubernetes.io/proxy-read-timeout: "120"
    nginx.ingress.kubernetes.io/proxy-body-size: "128M"
    nginx.ingress.kubernetes.io/server-alias: hi-world.k8s.example.com

See the nginx-ingress documentation for a complete list of annotations you can use.

Summary

While you may stick to dedicated web services for larger, more complex deployments, running one integrated cluster-wide web service is ideal for running lots of smaller apps. Something we think every cluster can benefit from.

In part two of this guide, we’ll cover how to set up cert-manager to allow automatic generation of SSL certificates for your ingresses. Coming soon!

Interested in Managed Kubernetes?

Brightbox have been managing web deployments large and small for over a decade. If you’re interested in the benefits of Kubernetes but want us to handle managing and monitoring it for you, drop us a line.
