Mar 30, 2024
 

Lately I’ve been trying to run some server software in a local Docker container just to play around with it, but I ran into network issues trying to use it. It’s 2024, and it seems like everything is containerized (precisely so you can run it locally the same as you would in production) and deployed via infrastructure as code, so why shouldn’t I be able to grab the container image, fire it up, and actually use this service? I think this is a result of only ever running our code on the cloud, and it’s something we need to explicitly consider.

For the record, I was specifically trying to run Nextcloud locally, in order to play around with their Solid app. Obviously, this is not the intended use case, but that’s fine. I wanted to see if it works, get some insight into what it does and doesn’t do, and get a sense of where I might be able to build something with it.

Here’s the thing: Nextcloud runs in containers. I tried both the official Nextcloud all-in-one image and the community image. The all-in-one image hated the fact that my hostname was localhost (it also didn’t like me setting a hostname in /etc/hosts and using that), because it validates the hostnames of the various containers it launches via DNS, and a hosts file entry doesn’t create an A record. The community image ran just fine, but the Solid app ran into CORS problems when I was trying to set it up.
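For reference, the community image can be stood up with a Compose file about as minimal as this; the port mapping and volume name are just my own choices, not anything official:

```yaml
# docker-compose.yml — minimal local Nextcloud using the community image.
# A sketch: the host port and volume name are arbitrary choices.
services:
  nextcloud:
    image: nextcloud            # community image from Docker Hub
    ports:
      - "8080:80"               # browse to http://localhost:8080
    volumes:
      - nextcloud:/var/www/html # persist config and data across restarts
volumes:
  nextcloud:
```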

Nextcloud is obviously intended to be run on a network, as is Solid, but we’re so used to services ultimately being deployed on the cloud that “deploy to your cloud environment” has become an implied step in the “testing your code” process. It’s not just these two specific applications. If part of your service involves a function-as-a-service component, then generally speaking, testing that code involves pushing it to your cloud provider and running it there. You could run your function locally, but a first pass at researching how to do that makes it look like you need to go all-in on the Serverless Framework, or commit to something like LocalStack to emulate AWS on your machine. Eventually you may stumble on the fact that AWS has container images for a lot of their services, and then start asking yourself if maybe there’s one for the Lambda runtime too (there is; they publish images for each of the Lambda runtime languages).
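For what it’s worth, the Lambda base images ship with AWS’s Runtime Interface Emulator, so you can invoke a function locally without committing to a whole framework. A minimal sketch, assuming a Python handler in ./src; the handler name and source path here are hypothetical:

```yaml
# docker-compose.yml — run a Python Lambda handler locally via AWS's base
# image, which bundles the Runtime Interface Emulator. The handler module
# and source directory are hypothetical; adjust to your function.
services:
  my-function:
    image: public.ecr.aws/lambda/python:3.12
    command: ["handler.lambda_handler"]   # module.function of your handler
    volumes:
      - ./src:/var/task                   # mount your code at the Lambda task root
    ports:
      - "9000:8080"                       # the emulator listens on 8080 in-container
# Invoke it with:
#   curl -XPOST "http://localhost:9000/2015-03-31/functions/function/invocations" -d '{}'
```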

The services we write are more than just some code running on a server instance (whether that server is a local container or in the cloud). We generally have databases, configuration for communicating with or invoking other services, and more configuration for getting data from still others. For all the hype about service-oriented architectures, very little of what we’re writing truly runs in isolation. But it seems like the only time we think holistically about the service stack is when building out the infrastructure (assuming you have that as code somewhere). As a result, we have a surprising amount of coupling that gets ignored during development.

How close does this sound to a typical web application written by hundreds of businesses all over the country? There’s a front-end project that has all the HTML, styling, and JavaScript code. It makes HTTP calls to a separate back-end project that returns the data used by the front-end application. That data comes from a separate database that’s deployed on another (set of) server(s). And that’s the simple version of things. We can easily add more pieces: caching, some sort of authentication service that sits in front of all traffic from web pages to servers, or external services whose data we use to enhance the objects we’re returning.
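In Docker Compose terms, even the simple version of that stack might look something like this; all of the image names, ports, and credentials below are placeholder assumptions:

```yaml
# docker-compose.yml — the 'simple version' of the stack described above.
# Image names, ports, and credentials are hypothetical placeholders.
services:
  frontend:
    image: registry.example.com/myapp/frontend:stable
    ports:
      - "3000:80"
    depends_on:
      - api
  api:
    image: registry.example.com/myapp/api:stable
    environment:
      DATABASE_URL: postgres://app:app@db:5432/app
      REDIS_URL: redis://cache:6379    # one of the 'more pieces': caching
    depends_on:
      - db
      - cache
  db:
    image: postgres:16
    environment:
      POSTGRES_USER: app
      POSTGRES_PASSWORD: app
      POSTGRES_DB: app
  cache:
    image: redis:7
```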

I’ve been talking about running individual pieces of code, but software runs in stacks. Organizationally we have this figured out as part of our deployment pipelines, but we never reference that higher-level stack in the READMEs of our repos. I get that it makes a certain amount of sense to focus only on the part you work on, or to not bother setting up local instances of code you don’t run. For example, if you’re a front-end engineer, it’s easier to just configure your web application to point at the QA back-end for your testing, and if you’re working on the product catalog service, it doesn’t seem to make sense to have to deploy a local build of the authentication service.

There are some benefits to going through this extra effort, though. The first is that once you can run this sort of isolated development environment, issues in another service don’t impact your development and testing. You have a copy running locally that’s good enough to keep going all the way up to actual integration testing. With a little training and documentation, you can do the setup needed in other services to support what you’re doing without having to coordinate work with another team (e.g. adding your user to the authentication service so your credentials work in the app).

Having a whole local stack lets you do more realistic functional testing. Yes, you should have automated tests that are as thorough as possible, but it’s also important to actually run the code and deal with real inputs and outputs, especially during development. If you’re a back-end developer, that means having a UI to point, click, and type in. If you’re a front-end developer, that means a server sending data back. It’s tempting to use an existing server, database, or API tool as a substitute (and sometimes that’s realistically all you have), but nothing beats end-to-end testing. We develop on local branches initially for a reason: we’re exploring solutions, haven’t finished debugging, and are oftentimes more focused on the specific scenario(s) we’re actively working on than on the full range of conditions that can occur in the live system. Deploying a complete stack lets you maintain the isolation you need for development until you have something that’s ready to merge into the main code, at which point you’re actually ready to integrate with other people’s code (and ready to have them start testing against yours as well).

This is something that Kubernetes (once you get Helm charts figured out and set up) and Docker Compose do really well. If you’re already running in containers, you can essentially deploy a whole application stack with a single command. You just swap in a locally built Docker image for the piece you’re working on and pull stable images for the rest from your image registry. Now you have a full application stack running locally that you can do with as you please, and even tear down and rebuild with no trouble.
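Compose actually has a convention for exactly this: a docker-compose.override.yml sitting next to the base file gets merged in automatically, so you can build just your piece from local source while everything else keeps pulling stable images. A sketch, with hypothetical service and image names:

```yaml
# docker-compose.override.yml — merged automatically by `docker compose up`.
# The rest of the stack keeps pulling stable images; only the service you're
# working on (here, 'api') is built from your local checkout.
services:
  api:
    build: ./api            # build from local source instead of pulling
    image: myapp/api:dev    # tag the local build so it doesn't clash with stable
```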

Deploying stacks simplifies the setup process for new developers. Download the repos you’ll be working in, build the applicable container images locally, and then just spin up the stack. Bam, the code is running locally in a matter of minutes. Need to make some changes to your infrastructure, like adding an environment variable or a new component? You can test those locally just like your code (which your infrastructure should be anyway).
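As a sketch of that workflow (the variable and service names here are made up for illustration), an infrastructure tweak can be just another override:

```yaml
# Sketch of testing an infrastructure change locally: FEATURE_FLAGS_URL is a
# made-up variable for illustration. After editing, `docker compose up -d api`
# recreates only the changed service.
services:
  api:
    environment:
      FEATURE_FLAGS_URL: http://flags:8081   # the new variable under test
```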

There are some real benefits to thinking of your applications and services in the context of deployable stacks. While different components may live in different repositories, they’re linked together at the top level and deployed as one application or service. Odds are you already have the ability to do this in production. Letting developers do the same on their own machines lets them test that process, and gives them a complete app running in an isolated environment where they can code, clear everything out, and start all over without impacting anyone else, not to mention vastly simplifying getting set up to do development in the first place. It encourages a more holistic view of application development that we can easily lose track of by focusing on our own little bit of the code.
