This guide takes you through deploying a Kubernetes cluster on Brightbox Cloud using Terraform.
The deployed cluster will be pre-configured with the Brightbox Cloud Kubernetes controller manager, allowing Kubernetes to manage its own resources using the Brightbox Cloud API.
You need a Brightbox Cloud account with an SSH key set up, and an SSH agent running locally with that key added.
Locally, you’ll need Git and Terraform installed.
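A quick way to confirm everything is in place before you start (the key path here is just an example; adjust it to wherever your key lives):

$ git --version
$ terraform version
$ ssh-add -l            # lists the keys held by your SSH agent
$ ssh-add ~/.ssh/id_rsa # add your key if it isn't listed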
We’ve written a set of Terraform configs to build a Kubernetes cluster for you, so get those from GitHub:
$ git clone https://github.com/brightbox/kubernetes-cluster.git
Cloning into 'kubernetes-cluster'...
remote: Counting objects: 170, done.
remote: Compressing objects: 100% (94/94), done.
remote: Total 170 (delta 110), reused 127 (delta 74), pack-reused 0
Receiving objects: 100% (170/170), 39.88 KiB | 5.70 MiB/s, done.
Resolving deltas: 100% (110/110), done.
$ cd kubernetes-cluster/
Then get Terraform to initialize all the relevant plugins:
$ terraform init

Initializing provider plugins...
- Checking for available provider plugins on https://releases.hashicorp.com...
- Downloading plugin for provider "null" (1.0.0)...
- Downloading plugin for provider "brightbox" (1.1.0)...
- Downloading plugin for provider "tls" (1.1.0)...
- Downloading plugin for provider "random" (1.3.1)...
- Downloading plugin for provider "template" (1.0.0)...

Terraform has been successfully initialized!
We need to tell Terraform which Brightbox account to build the cluster on, the username and password to authenticate with, and how many worker servers to build.
Create a file called terraform.tfvars with the following keys and appropriate values:
account = "acc-xxxxx"
username = "email@example.com"
worker_count = 2
You’ll notice this doesn’t include your password. We recommend against storing user credentials on disk in plain text, even locally.
Luckily, Terraform allows us to provide variables from environment variables. To avoid your password being echoed to the screen or ending up in your bash_history file, use the read command to prompt for the password and then export it into the environment:
$ read -p "Password:" -s TF_VAR_password
$ export TF_VAR_password
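To sanity-check that the variable is exported without printing its value, something like this works:

$ env | grep -o '^TF_VAR_password'
TF_VAR_password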
Amazingly, that’s all the hard work done. Now just apply the configuration and Terraform will spit out a huge plan of action and ask you to confirm:
$ terraform apply
data.template_file.worker-cloud-config: Refreshing state...
data.brightbox_image.k8s_master: Refreshing state...
data.brightbox_image.k8s_worker: Refreshing state...

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create
 <= read (data resources)

Terraform will perform the following actions:

...

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value:
Type yes and hit enter, and Terraform will build and configure your new Kubernetes cluster.
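As an aside, if you're driving this from a script, terraform apply accepts an -auto-approve flag that skips this confirmation prompt entirely; use it with care:

$ terraform apply -auto-approve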
After a few minutes the cluster will be built and Terraform will spit out some useful information:
Apply complete! Resources: 20 added, 0 changed, 0 destroyed.

Outputs:

group_fqdn = grp-aaaaa.gb1.brightbox.com
master = cip-lyzey.gb1.brightbox.com
The master output is the public address of the Kubernetes master server.
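These outputs aren't lost once the apply finishes; terraform output will reprint them at any time:

$ terraform output
group_fqdn = grp-aaaaa.gb1.brightbox.com
master = cip-lyzey.gb1.brightbox.com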
You can SSH into this server using your SSH key:
$ ssh ubuntu@cip-lyzey.gb1.brightbox.com
Last login: Tue Aug 14 15:56:46 2018 from 2001:470:1f1d:382:c03b:215:37df:2584
ubuntu@srv-kp5z5:~$
Or you can use a neat trick to get Terraform to fill in the hostname for you, like this:
$ ssh ubuntu@$(terraform output master)
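This guide runs kubectl on the master itself, but if the build leaves an admin kubeconfig in the ubuntu user's home directory (an assumption; check on your master), you could copy it down and drive the cluster from your local machine instead:

$ # the .kube/config path on the master is an assumption
$ scp ubuntu@$(terraform output master):.kube/config ./kubeconfig
$ kubectl --kubeconfig ./kubeconfig get nodes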
So now you can use kubectl on the master to inspect the cluster:
$ kubectl get nodes
NAME        STATUS    ROLES     AGE       VERSION
srv-ayfvb   Ready     <none>    7m        v1.11.2
srv-kp5z5   Ready     master    7m        v1.11.2
srv-vmlld   Ready     <none>    7m        v1.11.2
Here you can see we have one master server and two worker nodes.
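A couple of other stock kubectl commands are useful for a first look around (your output will differ):

$ kubectl cluster-info                # addresses of the control plane services
$ kubectl get pods --all-namespaces   # system pods: networking, DNS and so on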
Let’s get Terraform to build an additional node for our cluster.
Edit the terraform.tfvars file and increase the worker_count variable from 2 to 3:

worker_count = 3
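If you'd rather not open an editor, a one-liner like this does the same job (GNU sed assumed):

$ sed -i 's/^worker_count = .*/worker_count = 3/' terraform.tfvars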
Then run terraform apply again. Terraform knows that it’s already built the rest of the cluster, so it just builds one new node and configures it:
$ terraform apply

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  + brightbox_server.k8s_worker
      id:                       <computed>
      fqdn:                     <computed>
      hostname:                 <computed>
      image:                    "img-s0jtd"
      interface:                <computed>
      ipv4_address:             <computed>
      ipv4_address_private:     <computed>
      ipv6_address:             <computed>
      ipv6_hostname:            <computed>
      locked:                   <computed>
      name:                     "k8s-worker-2"
      public_hostname:          <computed>
      server_groups.#:          "1"
      server_groups.572789731:  "grp-aaaaa"
      status:                   <computed>
      type:                     "2gb.ssd"
      user_data:                "add40e872377b194544ea22c6eac496e5695e3fb"
      username:                 <computed>
      zone:                     <computed>

Plan: 1 to add, 0 to change, 0 to destroy.

brightbox_server.k8s_worker: Creating...
...
brightbox_server.k8s_worker: Creation complete after 1m25s (ID: srv-9s9el)

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
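Incidentally, if you ever want to see a diff like this without being offered the apply step, terraform plan prints the same execution plan and then exits:

$ terraform plan
...
Plan: 1 to add, 0 to change, 0 to destroy.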
Then on the master you can confirm that the new server was added to the cluster:
$ kubectl get nodes
NAME        STATUS    ROLES     AGE       VERSION
srv-9s9el   Ready     <none>    1m        v1.11.2
srv-ayfvb   Ready     <none>    17m       v1.11.2
srv-kp5z5   Ready     master    18m       v1.11.2
srv-vmlld   Ready     <none>    17m       v1.11.2
So now you have a four-node Kubernetes cluster, ready to receive your container deployments!
Now you might want to follow our guide to deploying an app with a load balancer on the cluster.
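And if this was just an experiment, Terraform can tear the whole cluster down as easily as it built it; like apply, it shows you the plan and asks for confirmation first:

$ terraform destroy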