Python and Kubernetes
Kubernetes promises to act as an operating system for your data center, scheduling and running your code on potentially thousands of individual machines. It's an exciting technology, but getting started can be pretty intimidating. In my case, Kubernetes was something of a mystery -- I wrote code, and DevOps deployed that code using Kubernetes, so my own understanding of it remained very limited.
In this post, I'm going to show how you can interact with a Kubernetes cluster programmatically with Python. The good news is that there is now a well-maintained Kubernetes client for Python. The bad news is that the code is auto-generated, pretty low level, and there isn't much in the way of tutorials or other information on the web. Oftentimes, diving into the client code itself is the only real way to answer questions.
To get started you'll need somewhere to run Kubernetes. Although Minikube is often cited as an option, I chose to start a cluster on Google Compute Engine. At the time of writing, Google is offering a $300 credit when you're starting out with GCE, and I found the prospect of experimenting with a real Kubernetes cluster more interesting.
Once I had my GCE account activated, I followed these instructions to get my small Kubernetes cluster running. It involves installing some remote admin programs and executing a handful of shell scripts. All told, it took me about an hour to get up and running.
Kubectl and Cluster Contexts
Before we get to the actual code, it's worth mentioning that kubectl is the command line interface for interacting with your cluster. The commands that you issue are only directed to one cluster at a time; to switch clusters, kubectl can be configured to use different contexts. You can add new contexts and switch between them using the kubectl program, or you can edit the ~/.kube/config file directly.
To switch contexts with kubectl:
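A minimal sketch of switching contexts; the context name here is an example, and you'd substitute one listed in your own ~/.kube/config:

```shell
# List the contexts defined in ~/.kube/config; the current one is starred.
kubectl config get-contexts

# Point kubectl at a different cluster.
kubectl config use-context my-gce-cluster
```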
Starting a job and getting output
Once you're sure kubectl can talk to your cluster, it's time to try the Python client. My first goal was to start a job, wait until it completed, and grab the output of the pod running inside the job. By using a job instead of just a pod, we can be assured that the pod will be rescheduled should the underlying node die while the pod is executing.
Here I'm creating a job that has a pod template nested within. Because we might want to launch a lot of these jobs, we can tell Kubernetes to name the job itself by setting the generate_name attribute on the job's metadata. The string 'job-' serves as a prefix, and Kubernetes will attach a random string to the end of it. The pod template specifies that I'd like to launch a container using the "hello-world" image. This is a test image available on Docker Hub that prints a simple message on stdout when launched.
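A sketch of that job, assuming the `kubernetes` package is installed and ~/.kube/config points at a running cluster (the namespace and container name are my own choices):

```python
from kubernetes import client, config

# Load credentials from ~/.kube/config, the same file kubectl uses.
config.load_kube_config()

# Pod template: one container running the hello-world test image.
# Jobs require a restart policy of "Never" or "OnFailure".
template = client.V1PodTemplateSpec(
    spec=client.V1PodSpec(
        containers=[client.V1Container(name="hello", image="hello-world")],
        restart_policy="Never",
    )
)

# generate_name tells Kubernetes to append a random suffix to "job-".
job = client.V1Job(
    metadata=client.V1ObjectMeta(generate_name="job-"),
    spec=client.V1JobSpec(template=template),
)

batch_v1 = client.BatchV1Api()
created = batch_v1.create_namespaced_job(namespace="default", body=job)
print(created.metadata.name)  # the generated name, e.g. "job-abc12"
```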
So what happens with this job once it's launched? How can we track its state?
The Kubernetes client has an interesting concept called a Watch. Instead of polling the API server for the state of our job, we can instead subscribe to listen for events using the Watch interface. The Watch interface will establish a persistent HTTP connection using chunked transfer encoding. The Watch stream method accepts a function and its arguments, and will return an iterator that you can consume.
Here I listen for events on a specific label_selector. Using label selectors I can listen for events on a specific pod, or a group of pods that match the label. The event is a deeply nested object containing a great deal of state information about the pod at that moment in time. It's important to realize that should we start watching again later on, all events related to the matching pods will be replayed.
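A sketch of that watch, assuming the same setup as the job example (the `kubernetes` package, a configured ~/.kube/config); the label value is an example of the "job-name" label that the job controller attaches to its pods:

```python
from kubernetes import client, config, watch

config.load_kube_config()
core_v1 = client.CoreV1Api()

w = watch.Watch()
# stream() takes the list function plus its keyword arguments and
# returns an iterator of events over a chunked HTTP connection.
for event in w.stream(core_v1.list_namespaced_pod,
                      namespace="default",
                      label_selector="job-name=job-abc12"):
    pod = event["object"]  # a V1Pod snapshot at this moment in time
    print(event["type"], pod.metadata.name, pod.status.phase)
    if pod.status.phase in ("Succeeded", "Failed"):
        w.stop()

# Once the pod finishes, grab its stdout.
print(core_v1.read_namespaced_pod_log(name=pod.metadata.name,
                                      namespace="default"))
```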
If we're only interested in what's happened from this moment on, we'll need to pass in a resource version. Resource versions are monotonically increasing IDs.
In this example, the watch stream will only include events that occur after our initial request.
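One way to do that, under the same assumptions as before, is to list the matching pods first and hand the list's resourceVersion to the watch:

```python
from kubernetes import client, config, watch

config.load_kube_config()
core_v1 = client.CoreV1Api()
selector = "job-name=job-abc12"  # example label from a generated job

# The list response carries the current resource version of the cluster
# state; watching from here skips the replay of historical events.
pods = core_v1.list_namespaced_pod(namespace="default",
                                   label_selector=selector)

w = watch.Watch()
for event in w.stream(core_v1.list_namespaced_pod,
                      namespace="default",
                      label_selector=selector,
                      resource_version=pods.metadata.resource_version):
    print(event["type"], event["object"].metadata.name)
```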
Exception handling anti-patterns
Anti-patterns often emerge when developers who are learning Python first delve into exception handling. One of the most common anti-patterns is the overly broad except block. The developer usually has a specific error in mind but uses a bare except clause to handle the scenario. This often masks the real issue, or worse, actively misleads anyone unfortunate enough to be debugging the problem.
Let's take a look at an example of an overly broad except block. In this example we have a simple Flask app that lets us query and paginate a list of widgets.
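Here's a contrived sketch of the idea; I'm standing in a plain list for the database table so the example is self-contained, but imagine a SQLAlchemy query in its place:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# Stand-in for a database table; a real app would query the database here.
WIDGETS = [{"id": i, "name": "widget-%d" % i} for i in range(100)]

@app.route("/widgets")
def list_widgets():
    page = request.args.get("page", 1)          # strings, never cast to int
    per_page = request.args.get("per_page", 10)
    try:
        # With string inputs, (page - 1) raises TypeError -- but the
        # bare except below reports it as a database problem.
        start = (page - 1) * per_page
        items = WIDGETS[start:start + per_page]
    except:
        return jsonify(error="database error"), 500
    return jsonify(items)
```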
A try/except block encloses our database query. In this somewhat contrived example, the programmer expects that there might be a database related exception, so the query execution is wrapped in a try/except.
In some cases, whatever exception occurs might indeed be database related. But as is the case with this code, the unspecific except block can mask other issues. This code happens to break whenever the client passes a page or per_page value, because those query string values are never cast to integers, resulting in a TypeError. There are probably other such problems lurking here.
We should ask ourselves when we're writing a try/except block, what is the purpose of the exception handling? In a web application, we may want to return a certain kind of response regardless of any errors that occur. For example, we may want to include some kind of request ID in JSON responses. This ID might be displayed on an error modal for users to report to customer service.
If we want to return that ID, we need to make sure an exception doesn't cause our application to return a 500. Use cases like this lead to the anti-pattern of having try/except blocks everywhere. The developer starts adding try/except blocks to anything that they believe might trigger an exception. Ultimately, you can end up with more code inside try blocks than outside.
The solution to this kind of anti-pattern is applying a better abstraction. In this case, if we want to make sure our request ID is always returned, we should wrap our handler methods in such a way that the exception logic is contained in a single place. In Django you could use a middleware class to catch exceptions, or in Flask an errorhandler decorator.
Here's an example of abstracting the exception logic out of the view handlers in Flask.
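This is a sketch under my own assumptions about how the request ID is generated and attached; the point is that the views themselves stay free of try/except:

```python
import uuid
from flask import Flask, g, jsonify

app = Flask(__name__)

@app.before_request
def assign_request_id():
    # Attach an ID to every request so errors can be reported to support.
    g.request_id = str(uuid.uuid4())

@app.errorhandler(Exception)
def handle_uncaught(error):
    # One place to log and shape the error response for every view.
    app.logger.exception(error)
    return jsonify(error="internal error", request_id=g.request_id), 500

@app.route("/widgets")
def list_widgets():
    # No try/except needed here; failures bubble up to the handler.
    raise RuntimeError("simulated failure")
```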
Building a blog: Flask, Gunicorn, NGINX
Flask is a great option when a framework like Django seems like overkill. In this post I'd like to step through how I built this blog with Flask.
My goals for the blog are pretty modest.
- Ability to edit posts in Markdown
- Support for displaying code snippets with syntax highlighting
- Not a complete eyesore
- Easy to deploy and maintain
Installation and Deployment
To begin with, let's take a look at how the application is installed and deployed. I want to be able to deploy my blog on any host with a minimum of work, so I'll be using Docker to containerize the application. At the same time, I'd like the blog to be fast and responsive, so I'll use Gunicorn as the server in front of Flask. The Flask development server has many attractive features such as auto reloading, but it runs on a single thread, making it a poor choice when efficiency is needed.
Gunicorn can spawn several workers and delegate requests to them. The benefit is immediately apparent even with just a single browser, as modern browsers request many resources simultaneously. Unfortunately, putting Flask behind Gunicorn means we lose some of the conveniences, like auto reloading and debug output.
To have the best of both worlds, I'm creating three docker images in total. One image acts as the base and contains all of the setup while deferring the final startup step. My other images are development.dockerfile and production.dockerfile. Let's take a look at the base Dockerfile that contains the setup.
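An illustrative version of that base Dockerfile; the file names and paths are assumptions on my part:

```dockerfile
FROM python:3-slim

WORKDIR /app

# Install dependencies first so Docker can cache this layer.
COPY requirements.txt .
RUN pip install -r requirements.txt

COPY . .

# No CMD here: the development and production images each supply
# their own startup command.
```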
When I'm working on the blog, I use the development.dockerfile image.
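A sketch of development.dockerfile, assuming the base image above is tagged "blog-base": it runs the Flask development server so auto reloading and debug output are preserved.

```dockerfile
FROM blog-base

ENV FLASK_DEBUG=1
CMD ["flask", "run", "--host=0.0.0.0"]
```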
When I want to deploy my optimized blog, I switch to the production.dockerfile image.
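A sketch of production.dockerfile under the same assumptions; "app:app" assumes a module named app exposing a Flask instance, and the worker count is illustrative.

```dockerfile
FROM blog-base

CMD ["gunicorn", "--bind", "0.0.0.0:8000", "--workers", "4", "app:app"]
```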
At this point, I could just start up my container and start developing, but I'll take it one step further with Docker and use docker-compose to automate some aspects of the container startup process.
For my development container to be useful, I need to mount my code directory into the container. Otherwise, I'd be stuck rebuilding the container for every change I make. I also need to forward some environment variables into the container. This makes for a pretty onerous command. Using docker-compose here just simplifies that process.
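A minimal docker-compose.yml along those lines; the service name, port, and environment variable are assumptions:

```yaml
version: "3"
services:
  blog:
    build:
      context: .
      dockerfile: development.dockerfile
    ports:
      - "5000:5000"
    volumes:
      - .:/app            # mount the code so edits trigger a reload
    environment:
      - FLASK_DEBUG=1
```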
That's the full picture when it comes to the development setup.
Deploying a production build takes it one step further and introduces NGINX.
While Gunicorn provides parallelism, NGINX handles SSL connections and is better suited to facing the open Internet. Slow clients are one reason why we don't want Gunicorn talking directly to the Internet: NGINX can buffer the response from Gunicorn, freeing up the worker while it streams the reply at whatever rate the client will accept.
I've used Let's Encrypt to add SSL for the blog. While the need for SSL on a blog might be debatable, it's so easy to set up that it's almost always worth it. Running certbot on your host lets you demonstrate that you control the domain; once that's successful, you'll have a certificate that you can point NGINX to.
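The certbot invocation is roughly this; the domain is a placeholder, and standalone mode answers the challenge on ports 80/443 itself, so stop anything already listening there first:

```shell
certbot certonly --standalone -d example.com
```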
Here's the production setup contained in docker-compose.prod.yml.
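A sketch of that file; the service names, port, and mount paths are illustrative:

```yaml
version: "3"
services:
  blog:
    build:
      context: .
      dockerfile: production.dockerfile
    expose:
      - "8000"            # reachable by nginx, not published to the host
  nginx:
    image: nginx
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro
      - /etc/letsencrypt:/etc/letsencrypt:ro
    depends_on:
      - blog
```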
This setup mounts an SSL certificate and my NGINX config file into the container. Building an image with the certificate and private key baked in would be a bad idea, in the same way that committing secrets to a GitHub repository is a bad idea. You'll also notice that the blog container doesn't mount the code directory; instead, it uses the files that were copied into it when it was built.
Now let's take a look at the nginx configuration.
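A minimal sketch of it; the server name, upstream port, and certificate paths are assumptions that match the compose file layout above:

```nginx
server {
    listen 80;
    server_name example.com;
    # Redirect all plain HTTP traffic to HTTPS.
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    location / {
        # Pass requests through to Gunicorn; nginx buffers the response
        # so slow clients don't tie up a worker.
        proxy_pass http://blog:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```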
This is a pretty simple configuration that proxies requests through to Gunicorn and redirects plain HTTP requests over to HTTPS.
That covers all aspects of installing and deploying the blog. In the next post I'll cover Flask routing, authentication, Jinja templates, and Markdown processing.