Will Murphy's personal home page

Migrating to App Runner

Hello everyone,

I was recently disappointed to learn that the Heroku free tier is going away. There’s an interesting discussion on Hacker News pointing out that a major reason might be piracy and spam content being hosted on Heroku’s free tier. But anyway, this post isn’t about whether Heroku should cancel their free tier; it’s about what to do instead.

My wife built a wonderful family cookbook website over the years. It’s deployed as a Heroku free-tier application, and it’s a great use case for that: the database is small, on the order of hundreds of rows; the traffic is light; and the audience is patient with free dynos being torn down and waiting for them to come back up. Family members can log in and upload their favorite cookie recipe, or see each other’s recipes. Every year we print hard copies of it and give it to friends. Warm fuzzies all around. But the cookbook needs a new home!

In this series, we’ll look at migrating a fairly typical free-tier Heroku app to AWS App Runner + RDS. It will not be completely free when we get it there; my goal is instead to make it scale to zero. That is, as far as possible, it shouldn’t cost me any money when no one’s using it.

It’s also important to point out that this is a lift and shift migration. In Heroku, the recipe website was basically a containerized Django app connected to Postgres. In AWS, it will be a containerized Django app connected to Postgres. (If I wanted to make it really scale to zero, I would probably rewrite it into some lambdas backed by DynamoDB, but since I only have till November to get off of Heroku, we’re doing a lift and shift instead.)

I expect this series to be three parts. In part 1 (this post), we’ll talk about getting a Django app running in a container at all, so that I can deploy it easily wherever. In the following post, I’ll talk about how to make this work in AWS App Runner + RDS. And in the last post we’ll talk about the somewhat hacky workaround I did to make it scale to zero.

Containerizing The App

First, what is the problem that containerizing an app solves? In this case, containerizing an app solves the problem of portability. I have a bunch of Python files. When run on my laptop, or on a (soon-to-die) Heroku dyno, they work together and make a nice website. But if I just emailed them to you, there would be a bunch of stuff to pip install, some environment variables that might be missing, and so on, and it would take you a while to get it working. Enter containers. Basically, a container is a way of writing down all the file system changes we would need to make to launch the app on a blank server, and then bundling up all those changes into a single file that I could send you. Then, you could just launch that file, and you wouldn’t need to do anything else to make that pile of Python files behave.

Here’s the containerfile I’m using:

FROM public.ecr.aws/ubuntu/ubuntu:20.04_stable

RUN apt-get update \
    && apt-get -y upgrade \
    && apt-get install -y python3-pip build-essential manpages-dev libpq-dev postgresql-client

RUN apt-get install -y ruby ruby-dev && gem install sequel sqlite3 sequel_pg

RUN python3 -m pip install -U pip \
    && pip3 install --upgrade setuptools

RUN mkdir -p /app

COPY ./requirements.txt /app/requirements.txt

WORKDIR /app
RUN pip3 install -r /app/requirements.txt

COPY . /app

EXPOSE 8000

CMD ["./start-app.sh"]

I mostly copied this file from a blog post that I can’t find now (sorry friend!), and tweaked a few things that I’ll explain. There are also a few things I would have done differently, but for a hobby project I squeeze in on the edges of the work day, this is good enough.

A few comments here:

  1. Starting from public.ecr.aws/ubuntu/ubuntu:20.04_stable is a bit of an odd choice, because Ubuntu is a heavy, general-purpose distro; there’s probably all kinds of stuff in there I don’t need for a Django site at all. But it’s an official image hosted on ECR. Since we’re deploying to AWS, something that’s already on ECR Public doesn’t require any additional setup (AWS’s build workers can already pull from public ECR repos), and since it’s officially maintained by Canonical, the folks who ship Ubuntu, it’s not going to break or be full of spyware. For those two reasons, I’m willing to accept the bloat.
  2. The ruby ruby-dev install and gem install sequel sqlite3 sequel_pg are there for a one-time database migration, so I’ll be able to remove them later. More on that in the next post.
  3. I copied requirements.txt separately from the rest of the app, so that I can run pip3 install before I copy the source code. This is meant as a build-time optimization - if requirements.txt didn’t change, every layer up through the pip3 install can come from cache, so only the layers from COPY . /app onward need to be rebuilt.
  4. EXPOSE 8000 tells the container runtime to let the container accept traffic on TCP port 8000.
  5. CMD ["./start-app.sh"] runs a tiny startup script. Mostly, it just starts the Django server on port 8000, but it also runs the Django command to apply database migrations. I also used it as a place to run one-time database copying commands.
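The post doesn’t show start-app.sh itself, so here’s a minimal sketch of what a script like it might look like. The migrate and runserver commands are standard Django management commands, but the exact contents of the real script are an assumption:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of start-app.sh; the real script isn't shown in this post.
set -euo pipefail

start_app() {
  # Apply any pending Django migrations before serving traffic.
  python3 manage.py migrate --noinput

  # (The one-time database copying commands would go here during the migration.)

  # Bind to 0.0.0.0:8000 so traffic published to the container reaches Django.
  python3 manage.py runserver 0.0.0.0:8000
}

# Only start if manage.py is present (i.e., we're inside the app directory).
if [ -f manage.py ]; then
  start_app
fi
```

For a real deployment you might swap runserver for a production server like gunicorn, but for a hobby app staying close to its Heroku setup, this shape is enough.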

Podman

Locally, I use Podman instead of Docker. For one, Docker has become increasingly pushy about making you log in on macOS, and it throttles pulls from Docker Hub. Plus, they did the whole “remember when Docker for macOS was free? Well, now it’s not! Ha!” thing a while back, so I’ll pass on using it at all.

Podman lets you call the Dockerfile a Containerfile, which is nice, and I’ll follow that convention here.

I was a bit worried about the switch, since there’s a lot of ecosystem built up around Docker, but to be honest podman is super easy to use. For example, in the AWS Console’s ECR page for an image repository, there’s a “view push commands” button. These commands have docker in them, but you can literally just substitute podman everywhere it says docker and have no issues at all.
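Concretely, here’s a sketch of what those substituted push commands look like. The account ID, region, and repository name below are placeholders - your ECR console’s “view push commands” button shows the real values. This script just prints the commands rather than running them:

```shell
#!/usr/bin/env bash
# Placeholder values; substitute your own from the ECR console's push commands.
AWS_REGION="us-east-1"
ACCOUNT_ID="123456789012"
REPO="recipe-website"
REGISTRY="${ACCOUNT_ID}.dkr.ecr.${AWS_REGION}.amazonaws.com"

# The console's docker commands, with podman substituted in everywhere:
push_cmds=(
  "aws ecr get-login-password --region ${AWS_REGION} | podman login --username AWS --password-stdin ${REGISTRY}"
  "podman build -t ${REPO} ."
  "podman tag ${REPO}:latest ${REGISTRY}/${REPO}:latest"
  "podman push ${REGISTRY}/${REPO}:latest"
)
printf '%s\n' "${push_cmds[@]}"
```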

Final steps

So here’s what I actually did to make the Django app run locally in a container:

  1. Find and slightly modify the Containerfile above
  2. Make sure things are right in requirements.txt, since the container will only have the pip packages listed there.
  3. Build an image with podman build -t recipe-website .
  4. Run the image with podman run --network bridge --publish 3000:8000 --env PRODUCTION=false localhost/recipe-website:latest. The --network bridge --publish 3000:8000 flags tell podman that I want it to listen on port 3000 of my laptop and forward requests there to port 8000 on the running container.
  5. Head on over to localhost:3000 to see whether it’s working.
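Step 5 can also be scripted. Here’s a sketch of a small smoke test; the URL just mirrors the --publish 3000:8000 mapping above:

```shell
#!/usr/bin/env bash
# Sketch of a smoke test for the locally running container.
APP_URL="http://localhost:3000"

check_app() {
  # -f: treat HTTP errors as failure; -s: silent; --max-time: don't hang.
  curl -fs --max-time 5 "$1" > /dev/null
}

if check_app "$APP_URL"; then
  echo "app is up at $APP_URL"
else
  echo "app is not responding at $APP_URL"
fi
```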

After a few iterations of this process, I had the app running in a container, making it portable enough to move wherever I wanted to deploy it. Next time, we’ll talk about getting the app to run on AWS.

Till next time, happy learning!
– Will

Discussion

Love this post? Hate it? Is someone wrong on the internet? Is it me? Please feel free to @ me on Mastodon

I used to host comments on the blog, but have recently stopped. Previously posted comments will still appear, but for new ones please use the mastodon link above.

Join the conversation!