Kubernetes for Frontend

Goobernetes

Thu, 24 Dec 2020

Kubernetes

Introduction

As I close the chapter on some work, I want to take a moment to detail some of the things that tripped me up when picking up Kubernetes as a frontend-focused kind of person. I kind of wish I had written this as I went, as even writing it in hindsight already feels like recalling a distant memory.

I wouldn’t say that I got that deep into it - however I eventually tripped on enough things to warrant writing something up, if anything just for myself.

In fact the title of this is a bit of a misnomer - what does it even mean to care about Kubernetes from a frontend perspective? I’d say very little, actually. The only place you’d really encounter it is if you were using one of the all-in-one frameworks like Next.js - and even then, the main concern is ensuring your “ready” (readiness) probe is set up and ready to be probed.
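
Wiring that up is only a few lines in the pod spec. A minimal sketch, assuming the Next.js server listens on port 3000 and exposes a hypothetical /api/healthz route that returns 200 once it can serve:

```yaml
containers:
  - name: frontend
    image: my-registry/frontend:1.2.3     # placeholder image
    ports:
      - containerPort: 3000
    readinessProbe:
      httpGet:
        path: /api/healthz                # hypothetical route, not a Next.js built-in
        port: 3000
      initialDelaySeconds: 5
      periodSeconds: 10
```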

The other thing to highlight is that a lot of the learnings here build upon the great work of the existing system that I was retrofitting a different system onto, and upon the further decisions made to more deeply integrate the two.

This might have a bit of assumed knowledge about Kubernetes, as it’s really written to detail those pain points rather than introduce you to Kubernetes. Now I don’t really want to write this out as a listicle, but here’s an unordered list of things to note.

RAM & storage

You’ll need a lot of it. I can barely develop the frontend project on a relatively new MacBook with 16GB of RAM (and in fact some of the “newer” MacBooks I worked on were actually a regression in single-threaded performance!). 32GB is a nice and comfortable amount; I can imagine 64GB being even better, but at that point it’s really just a luxury - the necessity is the 16 to 32 jump. You’ll be leaning on swap a lot if you really have no choice but to work with what you have. On a small tangent, this is why the new M1 MacBooks, with a base of 8GB and maxing out at only 16GB, are really only suited to casual use, software support aside.

Deploying is really simple!

Prior to working in the Kubernetes space, we had a different approach for deployments - basically an AWS CloudFormation manifest was used, with the only part that truly changes in the process being the frontend assets that come out of the build pipeline. Now we had already been using Kubernetes as part of our continuous integration pipeline, to give us a way to preview the tip of a given branch - but this was simple enough, in the sense of utilising a single Docker image which ran all of the server components and served the frontend build assets. The new problem space I was dealing with meant slotting this simple image into a broader set of micro-services. In theory it was simple enough to drop it straight in.
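
For concreteness, “dropping it straight in” looks much like any other workload in the cluster: a Deployment running the image, plus a ClusterIP Service in front of it. This is a sketch with made-up names, image and ports, rather than our actual manifests:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
        - name: frontend
          image: my-registry/frontend:1.2.3   # the single server + static assets image
          ports:
            - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  selector:
    app: frontend
  ports:
    - port: 80
      targetPort: 3000
```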

But it came with two questions:

Was the image going to be accessed within the cluster?

Or were the API and routing sufficiently different to warrant exposing it as a separate service (NodePort?)?

Answer: both. I ended up using two different implementations - one utilising an existing nginx-ingress to forward requests on to our frontend, and, separately, another architecture that meant handling frontend and API endpoints separately. Both implementations were, of course, due to very different circumstances.
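
To give a rough feel for the first of those: with an nginx-ingress controller already in the cluster, splitting traffic between the API and the frontend comes down to path rules on an Ingress. A sketch only - the hostname, service names and paths here are placeholders, not our actual setup:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: frontend
  annotations:
    kubernetes.io/ingress.class: nginx   # or spec.ingressClassName on newer clusters
spec:
  rules:
    - host: app.example.com              # placeholder host
      http:
        paths:
          - path: /api                   # API traffic goes to the backend service
            pathType: Prefix
            backend:
              service:
                name: api                # placeholder service name
                port:
                  number: 80
          - path: /                      # everything else is forwarded to the frontend
            pathType: Prefix
            backend:
              service:
                name: frontend
                port:
                  number: 80
```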

However, both approaches meant we wouldn’t get the same guarantees as we had with the CloudFormation approach, in the sense of having a “blue green” path forward. You should set your Docker image up to expose some configuration options, so you can test the image in isolation.
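
In practice that mostly means reading things like API base URLs and log levels from environment variables instead of baking them into the image, so the same image can run on its own (plain docker) or in the cluster. A sketch of the container env - the variable names are illustrative, not from a real chart:

```yaml
containers:
  - name: frontend
    image: my-registry/frontend:1.2.3
    env:
      - name: API_BASE_URL   # hypothetical - point this at a stub when testing in isolation
        value: "http://api"  # in-cluster service name; swap for localhost when running standalone
      - name: LOG_LEVEL
        value: "info"
```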

The Kubernetes way is that a new Deployment revision gets brought up, and the new pods eventually take over once their probes pass (if probes are provided - I’ll go into them a little later on). In fact this gave us more guarantees when upgrading the frontend, with the ability to roll back if you truly must, rather than trying to deploy a previous CloudFormation artifact.
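
The knobs for how that takeover happens live on the Deployment’s update strategy, and rolling back becomes a single `kubectl rollout undo` on the Deployment rather than digging out an old artifact. A conservative sketch:

```yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0   # keep all old pods serving until the new ones pass their probes
      maxSurge: 1         # bring new pods up one at a time
```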

Probes are darn important for zero downtime deploys

Now one of the features you should really utilise when leveraging Kubernetes is getting probes set up - meaning, having an endpoint that your cluster checks whenever it brings up new or additional pods, to decide whether they can take traffic. A brief summary of the differing types of probes (with a minimal sketch after the list):

Readiness: whether your pod is ready to start accepting new requests

Liveness: whether the pod is alive and healthy. Let’s say your service is getting absolutely hammered and a pod stops responding - this endpoint should start erroring out so the cluster knows that pod is unhealthy and needs to be restarted.
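
Both are declared on the container. A minimal sketch, assuming hypothetical /readyz and /livez endpoints exposed by the app on port 3000:

```yaml
readinessProbe:
  httpGet:
    path: /readyz      # failing readiness takes the pod out of rotation, no restart
    port: 3000
  periodSeconds: 10
livenessProbe:
  httpGet:
    path: /livez       # failing liveness gets the container restarted
    port: 3000
  periodSeconds: 10
  failureThreshold: 3
```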

OK. So now we know what they do. Why are they important? Some hypothetical scenarios:

  • A new versioned image for a pod is brought up.

    • Your readiness probe is just an endpoint that responds “yes OK I’m ready” without any other checks, and so doesn’t truly reflect that the pod is ready to accept requests. You’ll have a brief moment of requests being routed to this pod without it actually responding to them as you expect, because it’s not ready!
    • Your liveness probe doesn’t actually utilise any of the services or dependencies that your API endpoints rely on, so you don’t have an accurate picture of whether it’s a healthy pod.

So one of the defaults that we used in our helm templating was a global value to indicate whether you wanted to utilise probes as part of your cluster - roughly like the sketch below. Cool.
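
Roughly, that looked like a flag in the chart values (say global.probes.enabled) that each deployment template checked before rendering its probe block. A from-memory sketch - the flag name and endpoint are illustrative, not the exact chart:

```yaml
# deployment template excerpt - probes are only rendered when the global flag is on
{{- if .Values.global.probes.enabled }}
readinessProbe:
  httpGet:
    path: /readyz
    port: 3000
{{- end }}
```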

Except that only really works if your images actually use probes. The inclusion of these defaults meant we had a small blind spot in terms of how safe it really was to roll forward or back across different versions. One real example that triggered problems: we had a cluster where its (Kubernetes) Secrets resources were swapped out with incorrect ones.

This, of course, led to an outage when the incorrect secrets were pulled in as the newer pods were brought up. An easily preventable outage, had the probes truly reflected that the new pods were live but not actually ready - and it meant we eventually shipped probes that better tackled this problem.

Alright, I think this has gone on too long. A terrible spot to end on - I’ll either revise this or post a part 2 for a proper conclusion. Merry Christmas & a Happy New Year to all!
