Feb 13th, 2025
Previous situation: I use Ubiquiti Unifi components for my networking; the central component that connects my home to the Internet is a UDMP (“Dream Machine Pro”). It provides me with public IPv4 and IPv6 addresses. Public services like this blog run on a Yunohost server in a DMZ in my home. The domain registrar is core-networks.de. Yunohost takes care of its own certificates; the authentication with LetsEncrypt works through the publicly accessible web server. The certificates are for specific (sub-)domains, no wildcard certificates here.
A few years ago Niclas and I discussed the possibility of running a reverse proxy in my home to access other servers / docker containers / IoT devices via https with an official certificate. During that discussion I came to the conclusion that it was not feasible due to the lack of flexibility of my Unifi network. In particular it was not possible at the time (officially and supported) to add host entries to DNS for devices that did not obtain their IP address via DHCP, nor was it possible to add a wildcard DNS entry pointing to a specific IP address. I still liked the highly integrated approach of Unifi and did not want to change it for the sake of a reverse proxy (the benefits of which I also only partially understood).
This week the subject came up again and we found that the necessary features had been added to the Unifi Network Application.
In the “Clients” tab of the WebUI there is an icon in the top right corner (second from the right) that allows you to manually add “clients” by specifying MAC address, IPv4 address and name (which did not help much, as I need a wildcard pointing to a single IP address). But under Settings -> Routing -> DNS I was able to create an entry for *.int.qlch.de pointing to a single IPv4 address.
To issue my certificate I downloaded and used the acme.sh script:
# The following user/password are not your website login
# credentials; you need to create an API user on their
# website at https://iface.core-networks.de/general/api/accounts
#
export CN_User="<user>"
export CN_Password="<password>"
.acme.sh/acme.sh --issue --dns dns_cn -d '*.int.qlch.de' --home /home/la/mystack/caddy/certs
The CN_* environment variables and the --dns dns_cn parameter tell the script how to talk to my DNS registrar so that it can answer the CA’s DNS challenge. To my surprise (I thought the script came out of the LetsEncrypt community), the default CA contacted by the script was ZeroSSL. The error message told me how to register an email address, which I did, but further attempts only produced unspecific error messages (even with --debug) instead of a certificate, so we gave up on ZeroSSL and switched to LetsEncrypt:
acme.sh --set-default-ca --server letsencrypt --home /home/la/mystack/caddy/certs # --home necessary to keep setting
Now things went quickly and smoothly and the certificates were stored in the requested path for Caddy (my intended reverse proxy) to use later. Caddy was easy to set up in my docker-compose.yml file:
caddy:
  image: caddy
  restart: unless-stopped
  ports:
    - 80:80
    - 443:443
  volumes:
    - /etc/timezone:/etc/timezone:ro
    # - ./caddy/etc/Caddyfile:/etc/caddy/Caddyfile
    # Caddyfile now in ./caddy/etc/
    - ./caddy/etc:/etc/caddy
    - ./caddy/certs:/certs
At this point I realized that my previous approach of exposing ports on the host’s IP address was no longer necessary: Caddy contacts the other containers over the internal Docker network, using their container names as hostnames, so the hostname:port combinations are no longer in danger of colliding. One last (?) time I had to move the heimdall container from port 80 to something else (temporarily, until port exposure was stopped completely) so that it would not collide with port 80 intended for Caddy. The Caddyfile to configure Caddy started something like this:
(prod_cert) {
	tls /certs/*.int.qlch.de_ecc/fullchain.cer /certs/*.int.qlch.de_ecc/*.int.qlch.de.key
}

heimdall.int.qlch.de {
	import prod_cert
	reverse_proxy heimdall:80
}
The last 4 lines can be used as a template for all kinds of other services. For Docker containers in the same stack / host you can simply use the container name as the hostname. But you can also reverse-proxy systems outside your Docker host by specifying a valid hostname (on your home network) or IP address. Not every host worked with just the template above. For Home Assistant I had to add a few lines to its configuration.yaml to make it accept the reverse-proxy forwarding:
http:
  use_x_forwarded_for: true
  trusted_proxies:
    - 192.168.x.y  # IP address of the Caddy server
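Since most services need only the same four-line block, a small generator script can stamp them out. A sketch (the service list is made up; grafana and its port are purely hypothetical):

```shell
#!/bin/sh
# Emit one Caddyfile reverse-proxy block per "name=upstream" pair.
# The list below is hypothetical; only heimdall matches a real example.
for svc in heimdall=heimdall:80 grafana=grafana:3000; do
  name=${svc%%=*}      # part before the first '='
  upstream=${svc#*=}   # part after the first '='
  printf '%s.int.qlch.de {\n\timport prod_cert\n\treverse_proxy %s\n}\n\n' \
    "$name" "$upstream"
done
```

Appending its output to the Caddyfile and reloading is then enough for most services; exceptions like Home Assistant still need per-application tweaks.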
For my Unifi WebUI I had to add additional directives in the Caddyfile:
unifi.int.qlch.de {
	import prod_cert
	reverse_proxy https://192.168.1.1 {
		transport http {
			tls_insecure_skip_verify
		}
		header_up -Authorization
		header_up Host {host}
	}
}
There are two applications, Kodi and Uptime Kuma, for which I have not yet figured out why they do not work through the reverse proxy. All the others I tried in my homelab worked fine with the 4-line template mentioned above.
After changing the Caddyfile I initially restarted the caddy container every time, which is more than necessary (and costs a lot of waiting time); a simple reload of the config file is enough. Caddy is not very picky about how you indent your Caddyfile. However, when you reload it, it complains and suggests running “caddy fmt --overwrite” to make it more readable. I wrote a little wrapper script called “caddy” to avoid having to remember and type the full commands every time. It will probably be visible on my Gitea sooner or later and hopefully linked from here 🙂 Currently it changes too often to publish anywhere.
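The wrapper is not published yet, but a minimal version might look like this (a sketch; the compose file path, service name and subcommands are my assumptions, not the actual script):

```shell
#!/bin/sh
# "caddy" - tiny wrapper so I don't have to type the full commands.
# Compose file path and service name are assumptions; adjust to your stack.
COMPOSE="docker compose -f /home/la/mystack/docker-compose.yml"
case "$1" in
  reload)
    # pick up Caddyfile changes without restarting the container
    $COMPOSE exec caddy caddy reload --config /etc/caddy/Caddyfile ;;
  fmt)
    # canonical formatting, as caddy itself suggests on reload
    $COMPOSE exec caddy caddy fmt --overwrite /etc/caddy/Caddyfile ;;
  *)
    echo "usage: caddy {reload|fmt}" >&2 ;;
esac
```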
One positive side effect, besides not having to expose Docker container ports (and potentially have them collide with others): Bitwarden/Vaultwarden (my password manager combination) always had trouble associating a user/password combination with a simple host URL. When I opened, say, https://unifi in my browser, I had to manually search my Bitwarden extension for the correct user/password/OTP combination. Now that it is https://unifi.int.qlch.de, it just works automatically.
I don’t have much routine with or a deep understanding of certificates, so I’d like to mention a few things I’ve just learned (perhaps to clarify for others at my level of understanding):
The certificates are not “created” by the CA; they are signed and thereby validated as belonging to your domain. If the CA created the certificate, it would also own the private key, which no one should have access to except the domain owner (thank you, Oli).
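The normal issuance flow shows this: the key pair is generated locally and only a certificate signing request (carrying the public key) goes to the CA. A runnable sketch with openssl, using throwaway files and a hypothetical subject (acme.sh does the equivalent internally):

```shell
# Generate a private key and a certificate signing request locally.
# Only demo.csr would be sent to the CA; demo.key never leaves the host.
openssl req -newkey rsa:2048 -nodes -subj '/CN=*.int.qlch.de' \
  -keyout /tmp/demo.key -out /tmp/demo.csr 2>/dev/null
# The CSR contains the requested name and the public key,
# but not the private key:
openssl req -in /tmp/demo.csr -noout -subject
```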
I suspected that this new wildcard certificate for internal use might interfere with the certificates used by Yunohost for my public presence. But they are completely independent and don’t interfere with each other.
You could set this up with other software instead of Caddy, but Caddy seems to have the least complicated setup as a reverse proxy (e.g. compared to nginx or Apache).
You could use paths in the URL instead of different hostnames (as the wildcard part). So you could have https://caddy.int.qlch.de/heimdall instead of https://heimdall.int.qlch.de. But many applications with a web interface expect their root to be the root of the path, so heimdall might (I have not checked whether it does, it’s just an example) want to send you to its main page, but would instead send you to https://caddy.int.qlch.de/, which is unintended and impractical. You would have to add rewrite rules for these applications, which makes it more complicated.
Another thought might be to issue certificates specific to each internal host, but that could be a privacy issue: every single certificate you ever issue is publicly visible (e.g. you can look them up on crt.sh), and maybe not every funny hostname is intended to be public.
If your certificates are deployed on a public web server, you can check the “quality” of your setup on ssllabs.
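ssllabs only works for publicly reachable hosts. For the internal wildcard certificate you can at least check names and expiry locally with openssl. A sketch (it first creates a throwaway self-signed certificate so the commands run anywhere; substitute the real fullchain.cer from the acme.sh --home path):

```shell
# Create a throwaway self-signed wildcard certificate as a stand-in
# (hypothetical names and paths).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj '/CN=*.int.qlch.de' -addext 'subjectAltName=DNS:*.int.qlch.de' \
  -keyout /tmp/wild.key -out /tmp/wild.cer 2>/dev/null
# Show subject, validity window and the names the certificate covers:
openssl x509 -in /tmp/wild.cer -noout -subject -dates
openssl x509 -in /tmp/wild.cer -noout -text | grep -A1 'Subject Alternative Name'
```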
maub and schiermi noted that Caddy can issue and renew certificates itself. I was not aware of this before implementing the above. It would not have worked for me anyway, because I could not find a Caddy plugin for the DNS API of core-networks.de.
Thanks Niclas: without our discussions, your knowledge and your support this would not have happened.
Update Feb 15th: schiermi provided me with a working Caddyfile snippet for fritz.box:
fritz.int.qlch.de {
	import prod_cert
	reverse_proxy https://192.168.1.5 {
		header_up Host {upstream_hostport}
		transport http {
			tls_insecure_skip_verify
		}
	}
}
Update Feb 16th: I found out that when changing the Caddyfile from outside the container, sometimes the older version was still visible inside, so caddy reload did not have the desired effect, and before I found the correct config snippets this gave inconsistent results that were hard to interpret. (Presumably this is the well-known Docker gotcha that bind-mounting a single file pins its inode, so an editor that replaces the file on save writes a new inode the container never sees.) Now I map the folder containing the Caddyfile into the container instead, so it is always the same version of the file inside and outside (I already updated the compose snippet above):
volumes:
  - /etc/timezone:/etc/timezone:ro
  # - ./caddy/etc/Caddyfile:/etc/caddy/Caddyfile
  - ./caddy/etc:/etc/caddy
  - ./caddy/certs:/certs
After that, progress got a lot faster 🙂 I solved my problems with Uptime Kuma, Kodi and several others. Here are some sample snippets:
unifi.int.qlch.de {
	import prod_cert
	reverse_proxy https://192.168.1.1 {
		transport http {
			tls_insecure_skip_verify
		}
		header_up -Authorization
		header_up Host {host}
	}
}

luleey.int.qlch.de {
	import prod_cert
	reverse_proxy http://192.168.100.1 {
		header_up Host {upstream_hostport}
	}
}

coreelec.int.qlch.de {
	reverse_proxy http://192.168.1.185:8080
	@websockets {
		header Connection *Upgrade*
		header Upgrade websocket
	}
	reverse_proxy @websockets http://192.168.1.185:9090
}
One particular problem was that name resolution for hosts outside of the Docker Compose stack (e.g. for coreelec) did not work from inside the Caddy container, even though those names resolve just fine outside of containers via DNS (provided by the UDMP).
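I have not verified the root cause, but containers normally use Docker’s embedded resolver, which forwards to the host’s resolver configuration and not necessarily to the UDMP. A possible fix (a sketch, not what the snippets above do; they simply use IP addresses) would be to point the Caddy container at the LAN resolver via Compose’s dns option:

```yaml
caddy:
  # ... image, ports, volumes as above ...
  dns:
    - 192.168.1.1   # assumption: the UDMP answering LAN DNS queries
```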
Addendum to acme.sh and certificate renewal:
acme.sh has an “--install-cronjob” command that creates a cronjob, but even though I had specified the correct “--home”, the cronjob was created with the wrong default --home. I later added a “--reloadcmd” to invoke my wrapper script “caddy”:
30 15 * * * "/home/la/.acme.sh"/acme.sh --cron --home "/home/la/mystack/caddy/certs" --reloadcmd "/home/la/bin/caddy reload" > /dev/null
Time will tell if renewal works or I need to change things…
Update Feb 27th: Today I added the following snippet to my Caddyfile:
# Catch-all for undefined hosts
:80, :443 {
	respond "404 Not Found" 404
}
to avoid waiting for long timeouts when a requested host is not defined in the Caddyfile.