Static Sites with Python, uv, Caddy, and Docker
My preferred deployment stack for Python-built static sites.
I’ve largely switched to uv at this point and it’s been pretty great. I use it for everything I can, from little scripts with uv run, to libraries, to applications. It’s so fast that it actually matters, the workflow side of things works well enough for me, and—perhaps most valuably—it manages Python executables for me beautifully.
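For the little-scripts case, uv reads inline script metadata, so a one-off script can declare its own dependencies and run with nothing more than uv run script.py. A minimal sketch (httpx here is just an illustrative dependency, not something any of my sites need):

# /// script
# dependencies = ["httpx"]
# ///
import httpx

# fetch a page and print the status code; purely illustrative
print(httpx.get("https://example.com").status_code)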
As we’re all familiar with by now, I’m a static site aficionado, and have a number out in the wild. Some are purely static—hand-crafted artisanal HTML and CSS—and others are built with Python. I like serving them all with Caddy inside a multi-stage build Docker container, which has been working quite well for me so far.
In this post I want to explain the fairly simple setup I use to build and serve a number of websites using the above stack.
Example
For our main example we can use my personal deployment of sus, a static site based URL shortener I wrote and have been using for years.
Dockerfile
Let’s start with the Dockerfile and then we’ll go through it line by line:
# use Debian image with uv, no need for system Python
FROM ghcr.io/astral-sh/uv:debian AS build

# explicit work dir is important
WORKDIR /src

# copy all files
COPY . .

# install Python with uv
RUN uv python install 3.13

# run build process
RUN uv run --no-dev sus

# serve with Caddy
FROM caddy:alpine

# copy Caddy config
COPY Caddyfile /etc/caddy/Caddyfile

# copy generated static site
COPY --from=build /src/output /srv/
The first line is our starting image:
# use Debian image with uv, no need for system Python
FROM ghcr.io/astral-sh/uv:debian AS build
It uses an image built by Astral, the makers of uv, that’s based on Debian. It also names it build, since we’re only using the image in the first step, and are actually not relying on it by the end of the process.
The second line defines our working directory inside the container:
# explicit work dir is important
WORKDIR /src
I habitually choose /src as it makes sense given what I’m trying to accomplish. In theory this doesn’t matter much and you can use something else if you prefer.
The third line copies the repo into the container:
# copy all files
COPY . .
The first . refers to the path on the host machine, and in this case that’s the repo root. The second . refers to the working directory we set above, so it would be /src in this case.
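One thing worth keeping in mind: COPY . . copies everything in the build context, so a small .dockerignore helps keep local cruft out of the image. This isn’t part of the setup above, just an illustration of the kind of entries I’d exclude:

# .dockerignore (illustrative)
.git
.venv
__pycache__
output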
After this is when the magic starts:
# install Python with uv
RUN uv python install 3.13
Remember how I said uv can manage Python executables? This is one way to do that, and it’s been a bulletproof, fast way of doing so. It also mirrors what I do locally on my personal machine, maintaining some consistency.
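For reference, the local equivalent is the same command, optionally followed by uv python pin to record the version for the project. A rough sketch:

uv python install 3.13
uv python pin 3.13   # writes .python-version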
Once we have the Python version we want, we can run the build step, which also installs dependencies:
# run build process
RUN uv run --no-dev sus
uv will automatically sync the virtual environment, and thus in our case install dependencies. The --no-dev flag skips any dependencies defined as dev-only. In this case I don’t have any, but in other projects I use tools like pytest and pdbpp, for example. Note that you probably want to omit this flag if you’re using this container in CI. sus builds the site in /output upon invocation of the eponymous command, which is consistent with my other projects. (Some day I’ll standardize all my projects using just and Justfiles, and will be able to do something like RUN uv just build, but that day hasn’t yet arrived.)
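For context, dev-only dependencies in a uv project live in the dependency-groups table of pyproject.toml, which is what --no-dev skips. A hypothetical project using the tools mentioned above might declare them like so:

# pyproject.toml (illustrative excerpt)
[dependency-groups]
dev = [
    "pytest",
    "pdbpp",
]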
With the site built in /src/output , it’s time to leave uv and Python behind and move onto Caddy:
# serve with Caddy
FROM caddy:alpine
It’s generally a good idea to specify the desired version, but I guess I like to live dangerously. This directive tells Docker to start building using a different image, whilst keeping the previous one around for reference.
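If you’d rather not live dangerously, pinning is just a matter of picking a more specific tag, for example:

# pin to a major version instead of tracking whatever alpine points at
FROM caddy:2-alpine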
The only Caddy config we need is copying the Caddyfile to the default destination:
# copy Caddy config
COPY Caddyfile /etc/caddy/Caddyfile
This allows the automatically started Caddy process to pick up the config without having to tell it where to look.
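To the best of my knowledge, the official image’s default command looks roughly like this, which is why /etc/caddy/Caddyfile is the path to target:

caddy run --config /etc/caddy/Caddyfile --adapter caddyfile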
And our final step is to copy the output we generated with uv and Python into the location where Caddy expects to find the files to serve:
# copy generated static site
COPY --from=build /src/output /srv/
This time we specify the source location as that first builder image we used with --from=build , reference the path where the contents of the built site were placed, and drop it all into the location specified in the Caddyfile below.
All of the above results in a Caddy-based image with our files in /srv and our Caddy config in /etc/caddy/Caddyfile, ready to be served.
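A quick local smoke test looks something like the following. The image tag is arbitrary, and the Host header is needed because the Caddyfile (below) matches specific domains:

docker build -t sus-site .
docker run --rm -p 8080:80 sus-site
curl -H "Host: link.pileof.tools" http://localhost:8080/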
Caddy
The Caddyfile is a human-readable configuration file for Caddy, with JSON being the other option. Here is this one in all its glory, and we’ll go through all its parts:
mwokss00s84sg0okwoggg8s0.5.78.24.144.sslip.io:80,
mwokss00s84sg0okwoggg8s0.5.78.24.144.sslip.io:443,
link.pileof.tools:80,
link.pileof.tools:443,
l.pileof.tools:80,
l.pileof.tools:443 {
    root * /srv
    file_server

    @plausible path /js/script.js /api/event
    handle @plausible {
        rewrite /js/script.js /js/script.js
        reverse_proxy https://plausible.io {
            header_up Host {http.reverse_proxy.upstream.hostport}
            transport http {
                tls
            }
        }
    }
}
Caddyfiles can contain multiple site configurations, and I habitually structure mine as such even if they only have one. The first six lines define which domains and ports the section configures:
mwokss00s84sg0okwoggg8s0.5.78.24.144.sslip.io:80,
mwokss00s84sg0okwoggg8s0.5.78.24.144.sslip.io:443,
link.pileof.tools:80,
link.pileof.tools:443,
l.pileof.tools:80,
l.pileof.tools:443 {
The first domain is the default one Coolify Cloud generated for this site when I configured it, the second is the actual domain I tend to use, and the third saves me a whopping three characters…or would, if I ever used it. For each domain I want to serve content on ports 80 (HTTP) and 443 (HTTPS).
The following two lines tell Caddy to serve content from /srv :
    root * /srv
    file_server
The root directive defines which files to serve and from which path in the container, and the file_server directive tells it to serve them up. Caddy can do many other things, but I’ve only ever used it in this capacity.
The remainder of the file is pretty specific to my setup, and is the rewrite for Plausible Analytics, a privacy-friendly analytics service I use:
    @plausible path /js/script.js /api/event
    handle @plausible {
        rewrite /js/script.js /js/script.js
        reverse_proxy https://plausible.io {
            header_up Host {http.reverse_proxy.upstream.hostport}
            transport http {
                tls
            }
        }
    }
}
What it lets me do is pretend I’m hosting the actual JS files required for Plausible to work, invisibly proxying the requests to their managed instance. This helps a bit with some content blockers, which I’m fine with given Plausible’s privacy-friendly raison d’être. The actual code can be broken down as follows:
@plausible path /js/script.js /api/event registers the two paths under the @plausible named matcher, letting us tie functionality to them.
handle @plausible defines the config for said paths.
rewrite /js/script.js /js/script.js “redirects” the first path on my domain to the second path on the target domain, which just happen to be identical here.
reverse_proxy https://plausible.io { sets up a reverse proxy to the Plausible domain.
header_up Host {http.reverse_proxy.upstream.hostport} replaces the host in the request header.
transport http { tls } enables TLS for the proxied connection.
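On the HTML side, the pages then load the script from my own domain instead of plausible.io. The snippet ends up looking something like this, give or take the exact attributes Plausible tells you to use:

<script defer data-domain="link.pileof.tools" src="/js/script.js"></script>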
All my sites have nearly identical configuration.
Notes
Error Pages
Another site of mine uses this snippet to provide custom error pages:
handle_errors {
    rewrite * /{err.status_code}.html
    file_server
}
This would result in Caddy serving a file named 404.html in the case of a 404 error, etc.
Content Type
The same site also needs to provide a specific content type for a few paths:
@feed {
    path /feed/*
    path /blog/feed/*
}

handle @feed {
    header Content-Type application/atom+xml
}
Redirect
Another site is just a redirect to an entirely different page:
redir https://example.com/foobar
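In context, that one-liner sits in its own site block, something like this (the domains are placeholders):

old.example.com:80, old.example.com:443 {
    redir https://example.com/foobar
}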
Conclusion
This stack has been serving me quite well for the month or so since I migrated things to it. I want to standardize my projects to rely on just build as the only command I need to specify in the Dockerfile to get things to work, but that’s the only change I’d like to make at the moment.
Hope this was helpful, or at least interesting!
Thanks for reading! You can keep up with my writing via the feed or newsletter, or you can get in touch via email or Mastodon.