Hosting Private Local Image Registry

26 Feb 25

One of the first problems I encountered when setting up my kubernetes cluster was where to store my container images. Surely I don’t want to host them on a remote public / private registry because:

  1. Most of it will just be my personal fun projects, but they may still contain sensitive information
  2. The idea that I have to download an image from the internet that was built locally inside my homelab environment is just stupid
  3. Bandwidth: pulling images from the internet would take some of my bandwidth, and if it happens often it would destroy my precious doom-scrolling time

I’ve heard about Harbor and JFrog, but upon looking them up again I found that they are too complicated for my use case. Instead I settled on distribution (or is it called registry?), running it on a VM using docker.

Seems simple enough, right? Not so much, apparently. See, the thing about running a registry is that all the clients connecting to it (docker, containerd in the kubernetes cluster) need to use TLS, even the ones running completely locally. This means I need to do some additional steps:

  1. Generate an SSL certificate for the registry
  2. Make sure the clients connect to the registry via a domain name with a valid TLS certificate
  3. Configure all clients to trust that certificate

Running The Registry

As mentioned before, I am running the registry locally and don’t want to expose it via a public domain. I am using the registry.box domain, so I can’t use a trusted certificate from something like Let's Encrypt here. Instead I generated a self-signed certificate for that domain using openssl:

openssl req -newkey rsa:4096 -nodes -sha256 -keyout certs/domain.key -addext "subjectAltName = DNS:registry.box" -x509 -days 365 -out certs/domain.crt
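
To double-check that the SAN actually made it into the certificate before pointing any clients at it, something like this does the trick:

openssl x509 -in certs/domain.crt -noout -text | grep -A1 "Subject Alternative Name"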

For actually running the registry, I use docker. BUT!!! the one responsible for running the docker command is my ansible script (lol). I really don’t know whether this is a good idea, but at the time I didn’t have any simpler solution. Below is the playbook that I use:

---
- name: Create registry configuration from template
  ansible.builtin.template:
    src: htpasswd.j2
    dest: /opt/htpasswd
    mode: '0644'

- name: Create cert dir
  ansible.builtin.file:
    path: /opt/certs
    state: directory

- name: Copy SSL crt for registry.box
  ansible.builtin.copy:
    content: "{{ domain_crt }}"
    dest: "/opt/certs/domain.crt"

- name: Copy SSL key for registry.box
  ansible.builtin.copy:
    content: "{{ domain_key }}"
    dest: "/opt/certs/domain.key"
    mode: '0600'  # the private key should not be world-readable

- name: Run registry
  community.docker.docker_container:  # docker_container lives in community.docker, not ansible.builtin
    name: registry
    image: registry:2
    state: started
    restart_policy: always
    ports:
      - "443:443"
    volumes:
      - "/opt/htpasswd:/opt/htpasswd"
      - "/opt/certs:/certs"
      - "/mnt/registry:/var/lib/registry"
    env:
      REGISTRY_AUTH: "htpasswd"
      REGISTRY_AUTH_HTPASSWD_REALM: "Registry Realm"
      REGISTRY_AUTH_HTPASSWD_PATH: "/opt/htpasswd"
      REGISTRY_HTTP_ADDR: "0.0.0.0:443"
      REGISTRY_HTTP_TLS_CERTIFICATE: "/certs/domain.crt"
      REGISTRY_HTTP_TLS_KEY: "/certs/domain.key"
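
One note on the htpasswd.j2 template above: the registry only accepts bcrypt entries, so the hash the template renders can be generated with something along these lines (username and password here are obviously placeholders):

docker run --rm --entrypoint htpasswd httpd:2 -Bbn myuser mypassword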

After that I only need to set up a DNS A record on my DNS server (in this case my pihole instances) to point the domain registry.box to the IP of that VM.
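
With DNS and TLS in place (and the client trusting the certificate, more on that in the next section), a quick smoke test from any machine looks roughly like this, using any image that’s already available locally (alpine here):

docker login registry.box
docker tag alpine:latest registry.box/alpine:latest
docker push registry.box/alpine:latest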

Trust Isn’t Earned, It’s Enforced

The thing about a self-signed certificate is, of course, that no one trusts it out of the box. On a Debian-based OS, we can mark a certificate as trusted by putting the crt file inside the /usr/local/share/ca-certificates directory and running update-ca-certificates. In my case there are two systems which communicate directly with the image registry: the Kubernetes cluster, to pull pod images, and the Github action runner pod, to push images of my built applications (which, ironically, runs inside the Kubernetes cluster, so I guess saying only the Kubernetes cluster is enough).
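
On a single machine that boils down to two commands. Note that the file has to end in .crt, otherwise update-ca-certificates will ignore it:

sudo cp certs/domain.crt /usr/local/share/ca-certificates/registry.box.crt
sudo update-ca-certificates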

For the Kubernetes use case, as far as I’m aware, I have two options:

  1. Manually (?) put the certificate file on each node
  2. Use a DaemonSet to do just that, as pointed out in these docs

I chose the first option, adding the certificate to each and every node. Of course I don’t actually do it manually; I utilized my existing ansible script for installing and/or upgrading the k3s cluster, as follows:

...
- name: Copy SSL crt for registry.box
  ansible.builtin.copy:
    content: "{{ domain_crt }}"
    dest: "/usr/local/share/ca-certificates/registry.box.crt"

- name: Update system's certificate store
  ansible.builtin.command: update-ca-certificates
...
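
As a side note, plain docker clients also support a per-registry trust store that doesn’t touch the system-wide one, in case that’s ever preferable: docker looks for a CA certificate at /etc/docker/certs.d/<registry host>/ca.crt.

sudo mkdir -p /etc/docker/certs.d/registry.box
sudo cp certs/domain.crt /etc/docker/certs.d/registry.box/ca.crt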

The reasoning behind this decision is:

  1. Using a DaemonSet means it will unavoidably use some amount of resources from each node to run the DaemonSet pods
  2. I wasn’t aware of this option until after I had finished setting up my kubernetes cluster
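
One more caveat worth mentioning: trusting the certificate only solves the TLS half. Since the registry sits behind htpasswd auth, the cluster also needs credentials to pull images. In Kubernetes that’s typically done with a docker-registry secret referenced from a pod’s imagePullSecrets (secret name and credentials below are placeholders):

kubectl create secret docker-registry registry-box-creds \
  --docker-server=registry.box \
  --docker-username=myuser \
  --docker-password=mypassword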

UI / Dashboard

While writing this post, I was reminded that my registry is missing one key component: a UI / dashboard. Up to this point I haven’t had the need to manage and monitor my registry, but that will likely change over time. I will eventually need to do garbage collection and easily see what is and isn’t in my registry. So I googled around and found a few options.

I settled on joxit/docker-registry-ui due to its simplicity and straightforwardness, and modified my ansible script as follows:

...
- name: Create Docker network
  community.docker.docker_network:
    name: registry_network
    state: present

- name: Run registry
  community.docker.docker_container:
    name: registry
    image: registry:2
    state: started
    restart_policy: always
    networks:
      - name: registry_network
    ports:
      - "443:443"
    volumes:
      - "/opt/htpasswd:/opt/htpasswd"
      - "/opt/certs:/certs"
      - "/mnt/registry:/var/lib/registry"
    env:
      REGISTRY_AUTH: "htpasswd"
      REGISTRY_AUTH_HTPASSWD_REALM: "Registry Realm"
      REGISTRY_AUTH_HTPASSWD_PATH: "/opt/htpasswd"
      REGISTRY_HTTP_ADDR: "0.0.0.0:443"
      REGISTRY_HTTP_TLS_CERTIFICATE: "/certs/domain.crt"
      REGISTRY_HTTP_TLS_KEY: "/certs/domain.key"
      REGISTRY_HTTP_HEADERS_Access-Control-Allow-Origin: "[http://registry-ui.box]"
      REGISTRY_HTTP_HEADERS_Access-Control-Allow-Methods: "[HEAD,GET,OPTIONS,DELETE]"
      REGISTRY_HTTP_HEADERS_Access-Control-Allow-Credentials: "[true]"
      REGISTRY_HTTP_HEADERS_Access-Control-Allow-Headers: "[Authorization,Accept,Cache-Control]"
      REGISTRY_HTTP_HEADERS_Access-Control-Expose-Headers: "[Docker-Content-Digest]"

- name: Run registry UI
  community.docker.docker_container:
    name: registry-ui
    image: joxit/docker-registry-ui:main
    state: started
    restart_policy: always
    networks:
      - name: registry_network
    ports:
      - "8080:80"
    env:
      SINGLE_REGISTRY: "true"
      REGISTRY_TITLE: "Docker Registry UI"
      DELETE_IMAGES: "true"
      SHOW_CONTENT_DIGEST: "true"
      NGINX_PROXY_PASS_URL: "https://registry"
      SHOW_CATALOG_NB_TAGS: "true"
      CATALOG_MIN_BRANCHES: "1"
      CATALOG_MAX_BRANCHES: "1"
      TAGLIST_PAGE_SIZE: "100"
      REGISTRY_SECURED: "true"
      CATALOG_ELEMENTS_LIMIT: "1000"
...
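
One thing worth knowing about the DELETE_IMAGES option above: deleting a tag through the UI only removes the manifest. The actual disk space is reclaimed by the registry’s garbage collector, which can be run inside the container like so (add --dry-run first to preview what would be removed):

docker exec registry bin/registry garbage-collect /etc/docker/registry/config.yml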

Future Work

  1. The smart me from the past thought that 365 days of certificate validity was a good enough number. Well, it is not: one day I’ll forget about this and my system will break because the SSL certificate is no longer valid. I need a way to automate this (see the sketch after this list)
  2. Monitoring and alerting. I have node exporter installed on all of my VMs, so I can use that to monitor CPU, memory, and storage usage. But I still have no alerting and no container-specific metrics (cadvisor might be a good fit for this)
  3. Security scanning
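
For the first item, a rough sketch of what the automation could look like: regenerate the certificate on a schedule and re-run the playbooks that distribute it (the script and playbook names here are made up for illustration):

#!/bin/sh
# renew.sh: regenerate the self-signed cert, then push it everywhere.
# Run from cron well before the old certificate expires.
openssl req -newkey rsa:4096 -nodes -sha256 \
  -keyout certs/domain.key \
  -addext "subjectAltName = DNS:registry.box" \
  -x509 -days 365 -out certs/domain.crt
ansible-playbook registry.yml   # re-copies the cert and re-creates the registry container
ansible-playbook k3s.yml        # re-distributes the cert to the k3s nodes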