Deploying on Kubernetes #8: TLS

This is the eighth in a series of blog posts that hopes to detail the journey of deploying a service on Kubernetes. Its purpose is not to serve as a tutorial (there are many out there already), but rather to discuss some of the approaches we take.

Assumptions

To read this it’s expected that you’re familiar with Docker, and have perhaps played with building Docker containers. Additionally, some experience with docker-compose is perhaps useful, though not directly related.

Necessary Background

So far we’ve been able to:

TLS

TLS (or “Transport Layer Security”) is one method of proving identity and ensuring a connection is encrypted and cannot be read by intermediaries. It’s become a de facto requirement for all modern web properties, and is becoming a standard for managing authentication at the TCP transport layer (through “TLS mutual authentication”).

In the case of Fleet, we need it simply to prove the identity of the fleet server, and encrypt the connection.

A standard method of expressing TLS

Given that this is such a common requirement for managing authentication, there are already pre-baked solutions for generating TLS certificates for applications.

Started by the kube-cert-manager project and continued by the jetstack cert-manager project, a secret type of kubernetes.io/tls has been created, expressed in the following format:

# /dev/stdout
---
apiVersion: v1
data:
  tls.crt: __TLS_CERT__
  tls.key: __TLS_KEY__
kind: Secret
metadata:
  name: __NAME__
  namespace: __NAMESPACE__
type: kubernetes.io/tls

Adopting this format allows deferring certificate management to third party applications.
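For instance, the jetstack cert-manager can be asked to produce a secret in exactly this format through its Certificate resource. A sketch based on the cert-manager.io/v1 API; the issuer name and domain here are placeholders:

```yaml
# A hypothetical cert-manager Certificate; assumes an issuer named
# "letsencrypt-prod" already exists in the cluster.
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: fleet-tls
spec:
  # cert-manager writes the resulting certificate into a secret of
  # type kubernetes.io/tls with this name
  secretName: kolide-fleet-fleet-tls
  dnsNames:
    - fleet.acmewidgets.com
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer
```

Any tool that writes a kubernetes.io/tls secret with the same name works equally well; the chart only consumes the secret.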

Deferring certificate management to the user

While it’s nice that third party applications can do certificate management on our behalf, we do not wish to lock users into this software. Accordingly, while we can adopt that format for our files (and document the justification), we should defer the certificate generation to the user, or possibly generate the certificates in a bootstrap job.
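A bootstrap job of that kind might look roughly like the following. This is only a sketch: every name is a placeholder, the service account would need RBAC permission to create secrets, and the image would have to ship both openssl and kubectl (the alpine tag shown does not, out of the box):

```yaml
# Hypothetical bootstrap Job that generates a self-signed certificate
# and stores it as a kubernetes.io/tls secret.
apiVersion: batch/v1
kind: Job
metadata:
  name: fleet-tls-bootstrap
spec:
  template:
    spec:
      serviceAccountName: fleet-tls-bootstrap  # needs "create secrets" RBAC
      restartPolicy: OnFailure
      containers:
        - name: generate-tls
          image: alpine:3.7  # placeholder; must provide openssl and kubectl
          command: ["/bin/sh", "-c"]
          args:
            - |
              openssl req -x509 -newkey rsa:4096 -keyout /tmp/key.pem \
                -out /tmp/cert.pem -days 365 -nodes -subj "/CN=fleet.local" && \
              kubectl create secret tls kolide-fleet-fleet-tls \
                --cert=/tmp/cert.pem --key=/tmp/key.pem
```

Conveniently, `kubectl create secret tls` emits exactly the kubernetes.io/tls secret type discussed above.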

Implementing the required certificate

The following script will create a self-signed certificate, suitable for use with the application:

$ openssl req -x509 \
    -newkey rsa:4096 \
    -keyout key.pem \
    -out cert.pem \
    -days 365 \
    -nodes \
    -subj "/C=EU/ST=Hessen/L=Frankfurt/O=AcmeWidgets Name/OU=Org/CN=fleet.acmewidgets.com"
$ cat <<EOF > certificate.secret.yml
---
apiVersion: v1
data:
  tls.crt: $(cat cert.pem | base64 -w 0)
  tls.key: $(cat key.pem | base64 -w 0)
kind: Secret
metadata:
  name: kolide-fleet-fleet-tls
type: kubernetes.io/tls
EOF

Editor’s note: Obligatorily, this is not a good idea for a production environment. Managing PKI properly is beyond the scope of this article, and a hard thing to do more generally. TL;DR: use HashiCorp Vault, or Let’s Encrypt.
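Before loading the secret into the cluster, it’s worth sanity-checking what was generated. A small sketch, regenerating a throwaway certificate so it stands alone (the CN is a placeholder; a 2048-bit key is used purely for speed):

```shell
# Generate a throwaway self-signed certificate, as in the script above
openssl req -x509 -newkey rsa:2048 -keyout key.pem -out cert.pem \
    -days 365 -nodes -subj "/CN=fleet.example.com" 2>/dev/null

# Print the subject and expiry so they can be eyeballed before uploading
openssl x509 -in cert.pem -noout -subject -enddate
```

The `-checkend` flag of `openssl x509` can additionally confirm the certificate won’t expire within a given number of seconds.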

This secret can then be created in the cluster with a simple:

$ kubectl apply -f certificate.secret.yml
secret "kolide-fleet-fleet-tls" created

This secret can then be used by the application.
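Before trusting the manifest, we can also confirm the base64 payload round-trips cleanly and still parses as a certificate. A self-contained sketch (file names and the CN are placeholders mirroring the script above):

```shell
# Rebuild a minimal manifest from scratch so this snippet stands alone
openssl req -x509 -newkey rsa:2048 -keyout key.pem -out cert.pem \
    -days 365 -nodes -subj "/CN=fleet.example.com" 2>/dev/null

cat <<EOF > certificate.secret.yml
---
apiVersion: v1
data:
  tls.crt: $(base64 -w 0 < cert.pem)
  tls.key: $(base64 -w 0 < key.pem)
kind: Secret
metadata:
  name: kolide-fleet-fleet-tls
type: kubernetes.io/tls
EOF

# Decode the embedded certificate and confirm it still parses as x509
grep 'tls.crt:' certificate.secret.yml | awk '{print $2}' \
    | base64 -d | openssl x509 -noout -subject
```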

Mounting the secret into the container

Exactly as we previously did with the ConfigMap, we can expose the secret in the container filesystem, and configure the application to look for those resources.

First, we need to declare the volume:

# templates/deployment.yml:44-53
      volumes:
        # The name comes from the configmap. It's also shown earlier
        - name: "fleet-configuration"
          configMap:
            name: {{ template "fleet.fullname" . }}
        - name: "fleet-tls"
          secret:
            secretName: {{ template "fleet.fullname" . }}-tls
      containers:

And mount it into the container:

# templates/deployment.yml:133-139
          volumeMounts:
            - name: "fleet-configuration"
              readOnly: true
              mountPath: "/etc/fleet"
            - name: "fleet-tls" # <-- The new TLS volume
              readOnly: true
              mountPath: "/etc/pki/fleet"

This will make our files available in the container. We can verify this by listing the directory that should now exist:

$ kubectl exec -it kolide-fleet-fleet-7c59588ff7-g5dtk ls /etc/pki/fleet/
tls.crt  tls.key

Awesome! But our application will need to be adjusted to find the keys at the expected location, rather than the default:

# templates/configmap.yaml:30-33
    server:
      address: 0.0.0.0:8080
      cert: /etc/pki/fleet/tls.crt # <-- The adjusted path
      key: /etc/pki/fleet/tls.key
      tls: {{ default true .Values.fleet.server.tls }}

That’s it! Upon redeploying the application we can see it boots successfully:

$ kubectl logs kolide-fleet-fleet-7c59588ff7-g5dtk
Using config file: /etc/fleet/config.yml
{"component":"service","err":null,"method":"ListUsers","took":"1.246389ms","ts":"2018-04-01T12:32:06.743081387Z","user":"none"}
{"address":"0.0.0.0:8080","msg":"listening","transport":"https","ts":"2018-04-01T12:32:06.743783084Z"}

We can even see the application by using kubectl port-forward:

$ kubectl port-forward kolide-fleet-fleet-7c59588ff7-g5dtk 8080:8080
Forwarding from 127.0.0.1:8080 -> 8080

After following the setup steps, the fleet login screen loads.

Yeah! The application is working!

Making that easy

While our own installation of kolide/fleet is now working, we should make it easier for other people to install. PKI is not a trivial operation, and it’d be nice to make it as simple as possible for non-experts to get this up and running.

There are two ways of handling this:

  • Documenting the requirements, and
  • Generating the TLS certificates automatically

We’ll only be documenting the solution for now, as generating the TLS certificates automatically is more difficult than I’d hoped.

Document the proper solution

Helm allows displaying additional information after installing a release, via a file called NOTES.txt. This file is useful for showing how to access a service, or warning if there are extra setup steps. From our template, the current NOTES.txt looks as follows:

# templates/NOTES.txt:1-28
## Accessing fleet
----------------------
{{ if .Values.service.loadBalancer.hostName }}
1. Visit http://{{ .Values.service.loadBalancer.hostName }}
{{- else }}
1. Get the fleet URL to visit by running these commands in the same shell:
{{- if contains "NodePort" .Values.service.type }}
   export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ template "fleet.fullname" . }})
   export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
   echo http://$NODE_IP:$NODE_PORT/login
{{- else if contains "LoadBalancer" .Values.service.type }}
   NOTE: It may take a few minutes for the loadBalancer IP to be available.
   You can watch the status by running 'kubectl get svc --namespace {{ .Release.Namespace }} -w {{ template "fleet.fullname" . }}'
   export SERVICE_IP=$(kubectl get svc {{ template "fleet.fullname" . }} --namespace {{ .Release.Namespace }} --template "{{"{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}"}}")
   echo http://$SERVICE_IP:{{ .Values.service.port }}/login
{{- else if contains "ClusterIP" .Values.service.type }}
   export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "component={{ template "fleet.fullname" . }}-master" -o jsonpath="{.items[0].metadata.name}")
   echo http://127.0.0.1:{{ .Values.service.port }}
   kubectl port-forward $POD_NAME {{ .Values.service.port }}:{{ .Values.service.port }}
{{- end }}
{{- end }}
For more information, check the readme!

We can document the same instructions that we have provided here:

# templates/NOTES.txt:1-22
{{ if eq .Values.tls.generate false }}
## Generating the TLS certificates
If you have just installed kolide/fleet, you will need to generate TLS certificates in the appropriate format:
---
apiVersion: v1
data:
  tls.crt: __BASE64_TLS_CERTIFICATE__
  tls.key: __BASE64_TLS_KEY__
kind: Secret
metadata:
  name: {{ template "fleet.fullname" . }}-tls
type: kubernetes.io/tls

Check out the jetstack cert-manager project for automated cert creation in the appropriate format.
{{ end -}}
## Accessing fleet

Additionally, it’s worth noting other things we found during testing:

# templates/NOTES.txt:47-50
## Installation
If you have just installed kolide/fleet, the view will prompt you for further installation instructions. These cannot be
automated during installation; you will need to complete them now.

Keen observers will notice that the TLS documentation is surrounded by an if statement of the following format:

{{ if eq .Values.tls.generate false }}

This references a setting that we will need to add to the values.yml file. It defaults to false to encourage users to go and find a way to do TLS properly, rather than relying on us to generate the certificates:

# values.yml:16-20
tls:
  # Whether to generate TLS certificates during installation. Enable this to
  # create a self-signed certificate during installation, for testing only.
  generate: false
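If the chart later grows support for generating certificates automatically (as floated above), users could opt in through their own values file. A hypothetical example, using the flag defined here:

```yaml
# my-values.yml (hypothetical) -- opt in to self-signed generation,
# for test clusters only
tls:
  generate: true
```

This would be passed at install time with `helm install -f my-values.yml`, and would also suppress the TLS instructions in NOTES.txt.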

Given this information, experienced helm users will be able to create the appropriate certificates with the jetstack or vault projects.

Documenting a bad solution

Because it’s not trivial to generate TLS certificates automatically in a nice way, we’re just going to document how to create the certificates in a Linux environment. The instructions are exactly the same as we used earlier, except that they’re piped directly to kubectl.

# templates/NOTES.txt:21-46
Check out the jetstack cert-manager project for automated cert creation in the appropriate format. Alternatively, for
testing purposes a self-signed certificate can be generated with the following command:

$ openssl req -x509 \
    -newkey rsa:4096 \
    -keyout key.pem \
    -out cert.pem \
    -days 365 \
    -nodes \
    -subj "/C=EU/ST=Hessen/L=Frankfurt/O=AcmeWidgets Name/OU=Org/CN=__FILL_IN_YOUR_DOMAIN_HERE__"
$ cat <<EOF | kubectl apply -f -
---
apiVersion: v1
data:
  tls.crt: $(cat cert.pem | base64 -w 0)
  tls.key: $(cat key.pem | base64 -w 0)
kind: Secret
metadata:
  name: kolide-fleet-fleet-tls
type: kubernetes.io/tls
EOF

This isn't a super elegant way of doing PKI; it's recommended *not* to use this in a production environment. However,
for testing purposes it's fine.

$ rm cert.pem key.pem
{{ end -}}

In Summary

Our application is now up and running. Hooray! However, it’s not quite production ready just yet.

We’ll cover some further hardening of this application, such as health checks and perhaps automated generation of TLS certificates in a future post.

As usual, you can see the work at the following URL:

Check out the next post at the following URL:
https://medium.com/@andrewhowdencom/deploying-on-kubernetes-9-exposition-via-service-6955b8d3e833