Deploy Kubernetes on Brightbox Cloud

This guide takes you through deploying a Kubernetes cluster on Brightbox Cloud using Terraform.

The deployed cluster will be pre-configured with the Brightbox Cloud Kubernetes controller manager, allowing Kubernetes to manage its own resources using the Brightbox Cloud API.


You need a Brightbox Cloud account with an SSH key set up, and you'll need an SSH agent running locally with that key added.
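You can confirm the agent is ready before going any further. This quick check isn't part of the Brightbox configs, just a convenience:

```shell
# Check that an SSH agent is running with at least one key loaded,
# so Terraform's provisioners can connect to the new servers.
if ssh-add -l >/dev/null 2>&1; then
  agent_status="ready"
else
  agent_status="missing"
  echo 'Start one with: eval "$(ssh-agent)" && ssh-add' >&2
fi
echo "SSH agent: $agent_status"
```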

Locally, you'll need git and Terraform installed.

Create an API Client

Kubernetes needs a set of Brightbox API Client credentials so it can use the API to manage its nodes. In the Brightbox Manager, click the cog icon next to your account name and select API Access.

Then click New API Client, give the new client a useful name and click Save.

Note the auto-generated API Client ID and secret, which you’ll need in the next steps.

Clone the Brightbox kubernetes-cluster Terraform configuration repository

We've written a set of Terraform configs to build a Kubernetes cluster for you, so grab those from GitHub:

$ git clone

Cloning into 'kubernetes-cluster'...
remote: Counting objects: 170, done.        
remote: Compressing objects: 100% (94/94), done.        
remote: Total 170 (delta 110), reused 127 (delta 74), pack-reused 0        
Receiving objects: 100% (170/170), 39.88 KiB | 5.70 MiB/s, done.
Resolving deltas: 100% (110/110), done.

$ cd kubernetes-cluster/

Initialize Terraform

Then get Terraform to initialize all the relevant plugins:

$ terraform init

Initializing provider plugins...
- Checking for available provider plugins on
- Downloading plugin for provider "null" (1.0.0)...
- Downloading plugin for provider "brightbox" (1.0.5)...
- Downloading plugin for provider "tls" (1.1.0)...
- Downloading plugin for provider "random" (1.3.1)...
- Downloading plugin for provider "template" (1.0.0)...

Terraform has been successfully initialized!

Configure Terraform

We need to tell Terraform which Brightbox account to build the cluster on, your username and password to authenticate with, how many servers to build and the API Client credentials you created above.

Create a file called terraform.tfvars with the following keys and appropriate values:

account                  = "acc-xxxxx"
username                 = ""
controller_client        = "cli-yyyyy"
controller_client_secret = "clisecret"
worker_count             = 2

You'll notice this doesn't include your password. We recommend against storing user credentials on disk in plain text, even locally.

Luckily, Terraform allows us to provide variables from environment variables. To avoid your password being echoed to the screen or ending up in your .bash_history file, use the read command to prompt for the password and then export it into the environment:

$ read -p "Password:" -s TF_VAR_password
$ export TF_VAR_password
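If you want to sanity-check that the variable made it into the environment without revealing the secret, you can test for it like this (our own convenience snippet, not part of the guide's configs):

```shell
# Confirm the password is set without echoing it.
# ${#TF_VAR_password} expands to the length only, so the secret is never printed.
if [ -n "${TF_VAR_password:-}" ]; then
  echo "TF_VAR_password is set (${#TF_VAR_password} characters)"
  password_present=yes
else
  echo "TF_VAR_password is empty -- re-run the read command above" >&2
  password_present=no
fi
```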

Build the cluster!

Amazingly, that's all the hard work done. Now just apply the configuration and Terraform will spit out a huge plan of action and ask you to confirm:

$ terraform apply

data.template_file.worker-cloud-config: Refreshing state...
data.brightbox_image.k8s_master: Refreshing state...
data.brightbox_image.k8s_worker: Refreshing state...

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create
 <= read (data resources)

Terraform will perform the following actions:


Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value:

Type yes and hit enter, and Terraform will build and configure your new Kubernetes cluster.

After a few minutes the cluster will be built and Terraform will spit out some useful information:

Apply complete! Resources: 20 added, 0 changed, 0 destroyed.


group_fqdn =
master =

Connect to your Kubernetes cluster

The master output is the public IP address of the Kubernetes master server. You can SSH into this server using your SSH key:

$ ssh

Last login: Tue Aug 14 15:56:46 2018 from 2001:470:1f1d:382:c03b:215:37df:2584

Or you can use a neat trick to get Terraform to fill in the hostname for you, like this:

$ ssh ubuntu@$(terraform output master)
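Taking that a step further, you can copy the cluster's admin kubeconfig to your workstation so kubectl works locally. This is a hedged sketch: it assumes a kubeadm-built master, where the admin config lives at /etc/kubernetes/admin.conf, so adjust the path if your cluster stores it elsewhere:

```shell
# Optional: fetch the admin kubeconfig over SSH and point kubectl at it.
# Assumes the "master" Terraform output shown above and a kubeadm-style
# master (admin.conf path) -- both are assumptions, not guaranteed here.
if command -v terraform >/dev/null 2>&1 && terraform output master >/dev/null 2>&1; then
  ssh ubuntu@"$(terraform output master)" sudo cat /etc/kubernetes/admin.conf > kubeconfig
  export KUBECONFIG="$PWD/kubeconfig"
  fetched=yes
else
  echo "Run this from the kubernetes-cluster directory after terraform apply" >&2
  fetched=no
fi
```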

So now you can use kubectl on the master to inspect the cluster:

$ kubectl get nodes

srv-ayfvb   Ready     <none>    7m        v1.11.2
srv-kp5z5   Ready     master    7m        v1.11.2
srv-vmlld   Ready     <none>    7m        v1.11.2

Here you can see we have one master server and two worker nodes.
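For a quick smoke test that the cluster can actually schedule pods, you can deploy a throwaway workload on the master. The deployment name "hello-nginx" is just an example we picked, and the commands are guarded so they only run where kubectl is available:

```shell
# Run on the master: create a small nginx deployment and list its pods.
# ("hello-nginx" is an example name; delete it afterwards with
#  kubectl delete deployment hello-nginx)
if command -v kubectl >/dev/null 2>&1; then
  kubectl create deployment hello-nginx --image=nginx
  kubectl get pods -l app=hello-nginx -o wide
  deployed=yes
else
  echo "kubectl not found -- run this on the master node" >&2
  deployed=no
fi
```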

Grow the cluster

Let’s get Terraform to build an additional node for our cluster.

Edit the terraform.tfvars file and increase the worker_count variable from 2 to 3:

worker_count             = 3

Then run terraform apply again. Terraform knows it has already built the rest of the cluster, so it just builds one new node and configures it:

$ terraform apply

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  + brightbox_server.k8s_worker[2]
      id:                      <computed>
      fqdn:                    <computed>
      hostname:                <computed>
      image:                   "img-s0jtd"
      interface:               <computed>
      ipv4_address:            <computed>
      ipv4_address_private:    <computed>
      ipv6_address:            <computed>
      ipv6_hostname:           <computed>
      locked:                  <computed>
      name:                    "k8s-worker-2"
      public_hostname:         <computed>
      server_groups.#:         "1"
      server_groups.572789731: "grp-aaaaa"
      status:                  <computed>
      type:                    "2gb.ssd"
      user_data:               "add40e872377b194544ea22c6eac496e5695e3fb"
      username:                <computed>
      zone:                    <computed>

Plan: 1 to add, 0 to change, 0 to destroy.

brightbox_server.k8s_worker[2]: Creating...
brightbox_server.k8s_worker[2]: Creation complete after 1m25s (ID: srv-9s9el)

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

Then on the master you can confirm that the new server was added to the cluster:

$ kubectl get nodes

srv-9s9el   Ready     <none>    1m        v1.11.2
srv-ayfvb   Ready     <none>    17m       v1.11.2
srv-kp5z5   Ready     master    18m       v1.11.2
srv-vmlld   Ready     <none>    17m       v1.11.2

So now you have a four-node Kubernetes cluster, ready to receive your container deployments!

Last updated: 13 Sep 2018 at 11:16 UTC
