Setting up a private docker registry on AWS

The following is part of a series of posts called "Building a complete Kubernetes backed infrastructure".

This series of posts describes my approach to standing up a scalable infrastructure for hosting both internal and external software in a company setting.

This series focuses on AWS specifically as I have found EKS to be the most complicated Kubernetes provider to set up, but the principles and workflow should be easily applied to any other provider or on-prem situation.

Due to the size of images and the bandwidth they can consume, I’ve previously found that hosted private docker registries can be a little on the expensive side. This is especially true when you’re working on personal projects and are very cost sensitive.

Creating a private registry is pretty easy though, and doesn’t require much to run if you’re only going to be expecting small loads. Even in cases where you deploy often, a small instance will typically suffice as long as it has good network specs.

To create a private registry on AWS I’ve previously done the following, though the same steps work anywhere docker is available.

First off, create an Amazon Linux EC2 instance, and ssh into it. I’ve used a t2.micro instance for my personal projects, but getting something larger might be a better idea if you’re expecting a higher load and the micro doesn’t hold up.

Once you’re in, the install documentation for docker can be found here, but the general idea is the following:

sudo yum update
sudo yum install docker
sudo service docker start
sudo usermod -a -G docker ec2-user

(With Amazon Linux 2 you can also use sudo amazon-linux-extras install docker in place of yum install)

After doing the above, log out and back in so the group change takes effect. Once you’ve logged in again, docker ps should succeed and connect to the running docker daemon (where you’d currently expect nothing in the ps output).

Running the registry with SSL is also a very good idea, and using certbot and Let’s Encrypt makes this easy and free. First, install certbot (documentation can be found here), which should look like the following:

# Amazon Linux doesn't contain the normal global repo list you'd expect from other linux distros so
# you need to add the fedora repo manually to access certbot
sudo wget -r --no-parent -A 'epel-release-*.rpm' http://dl.fedoraproject.org/pub/epel/7/x86_64/Packages/e/
sudo rpm -Uvh dl.fedoraproject.org/pub/epel/7/x86_64/Packages/e/epel-release-*.rpm
sudo yum-config-manager --enable epel*

# Now you should be able to install certbot
sudo yum install -y certbot python2-certbot-apache

At this point, as we’re using Amazon Linux, we’ll have an apache server already running. Add your registry domain to the apache config, as in the following example. Watch out here if you’re also following the supplied documentation: this is a proxy pass declaration, not the document root shown in the docs’ example.

<VirtualHost *:80>
  ServerName YOUR_DOMAIN.COM
  ProxyPreserveHost On
  ProxyPass / http://localhost:5000/
  ProxyPassReverse / http://localhost:5000/
</VirtualHost>

At this point, when running certbot, it will pick up the new configuration and ask if you want it to start managing an SSL certificate for it. As long as you have set up your DNS to point at this new server, and attached a security group to your EC2 instance that allows traffic on ports 80 and 443, certbot should be able to generate valid certificates.
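Invoking certbot with the apache plugin looks something like the following sketch. The domain is a placeholder, and the command is guarded so the sketch is safe to run verbatim before you substitute your own hostname:

```shell
# Hypothetical placeholder - replace with the hostname you pointed at this instance.
DOMAIN="YOUR_DOMAIN.COM"

# The apache plugin answers the HTTP-01 challenge through the running httpd
# and rewrites the vhost for SSL once issuance succeeds. The guard keeps the
# command inert until the placeholder above has been replaced.
if [ "$DOMAIN" != "YOUR_DOMAIN.COM" ]; then
  sudo certbot --apache -d "$DOMAIN"
fi
```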

During the certificate generation certbot will automatically modify your apache config so that the new certificates are used, which is very helpful. The less you need to mess with apache configs the better, right?

The one issue here is that the registry running behind apache needs forwarding headers set in order to work correctly. At this point you can edit the file certbot created at /etc/httpd/conf/httpd-le-ssl.conf and add the following to the <VirtualHost *:443> block it just generated automatically.

Header add X-Forwarded-Proto "https"
RequestHeader add X-Forwarded-Proto "https"
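For reference, after certbot’s edits plus the two header lines, the SSL vhost in httpd-le-ssl.conf ends up looking roughly like this. The certificate paths and included options file reflect certbot’s usual defaults, so treat the exact filenames as illustrative:

```apache
<VirtualHost *:443>
  ServerName YOUR_DOMAIN.COM
  ProxyPreserveHost On
  ProxyPass / http://localhost:5000/
  ProxyPassReverse / http://localhost:5000/

  # The two headers added by hand, so the registry knows it's behind TLS
  Header add X-Forwarded-Proto "https"
  RequestHeader add X-Forwarded-Proto "https"

  # Lines like these are written by certbot during issuance (paths illustrative)
  SSLCertificateFile /etc/letsencrypt/live/YOUR_DOMAIN.COM/cert.pem
  SSLCertificateKeyFile /etc/letsencrypt/live/YOUR_DOMAIN.COM/privkey.pem
  Include /etc/letsencrypt/options-ssl-apache.conf
</VirtualHost>
```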

Once you have added this configuration, restart the httpd service so apache picks it up, which can be done via sudo systemctl restart httpd.

At this point, if you visit https://YOUR_DOMAIN.COM you should get a response over valid SSL, but will likely get some form of apache error saying that the service is unavailable. This is expected, as the actual registry is not yet running. Starting it is a single docker command to bring up the registry container.

This command assumes that you want to store the registry data within the /root/registry directory. If you wish to store this elsewhere, such as on a mounted drive, feel free to change it.

docker run -d \
  -p 5000:5000 \
  --restart=always \
  --name registry \
  -v /root/registry/data:/var/lib/registry \
  -v /root/registry/auth:/auth \
  -e REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY=/var/lib/registry \
  -e "REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm" \
  -e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd \
  -e REGISTRY_AUTH=htpasswd \
  registry:2

Once the container is running, and if everything is set up correctly, visiting https://YOUR_DOMAIN.COM/v2/_catalog should greet you with a standard HTTP auth prompt. Great!

The registry authenticates against the htpasswd file mounted at /auth/htpasswd, so you’ll need to create credentials before you can log in.

To create credentials, write a new htpasswd file using bcrypt hashing (the only scheme registry:2 accepts, hence the -B flag). It’s a simple command such as:

sudo htpasswd -Bbc /root/registry/auth/htpasswd USERNAME PASSWORD

This command will typically require sudo unless you’ve changed permissions on the local registry directory.

After doing this, you should be able to submit the HTTP auth prompt successfully with your credentials (no need to restart the container here; it picks the file up automatically).
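You can also verify the auth behaviour from the command line. This sketch assumes curl and the placeholder hostname and credentials used throughout the post; the guard keeps it inert until the placeholders are replaced:

```shell
# Hypothetical placeholder - replace with your registry hostname.
REGISTRY="YOUR_DOMAIN.COM"
CATALOG_URL="https://$REGISTRY/v2/_catalog"

# Guarded so the sketch is safe to run before the placeholder is replaced.
if [ "$REGISTRY" != "YOUR_DOMAIN.COM" ]; then
  # Without credentials the registry should answer 401 Unauthorized...
  curl -i "$CATALOG_URL"
  # ...and with valid credentials it returns a JSON repository list,
  # which is empty on a fresh registry.
  curl -u USERNAME:PASSWORD "$CATALOG_URL"
fi
```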

You now have your own private running registry, good work!

The last step is to test a login via docker. On your local machine, attempt to log in with the following command, which should print Login Succeeded.

docker login -u USERNAME -p PASSWORD YOUR_DOMAIN.COM
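With the login working, a quick end-to-end check is to tag and push a small image. The registry hostname and image below are placeholders, and the commands are guarded until real values are substituted:

```shell
# Hypothetical placeholders - replace with your registry hostname and an image.
REGISTRY="YOUR_DOMAIN.COM"
IMAGE="hello-world"

# Pushing requires the image tag to carry the registry hostname as a prefix.
TARGET="$REGISTRY/$IMAGE:latest"

# Guarded so the sketch is safe to run before the placeholders are replaced.
if [ "$REGISTRY" != "YOUR_DOMAIN.COM" ]; then
  docker pull "$IMAGE"
  docker tag "$IMAGE" "$TARGET"
  docker push "$TARGET"
fi
```

After the push, the image should appear in the /v2/_catalog listing.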

Image pull secrets

To pull these private images from Kubernetes, add the registry credentials as a docker-registry secret:

$ kubectl create secret docker-registry dockerregistrycredentials \
    --docker-server="docker-registry.deckee.com:5000" \
    --docker-username="enabled" \
    --docker-password="<REDACTED>" \
    --docker-email="[email protected]"
secret/dockerregistrycredentials created

This unfortunately needs to be run in every namespace that will run our private containers, as secrets are namespaced and there’s no way of sharing them across namespaces…

Then just add imagePullSecrets with - name: dockerregistrycredentials to the pod spec in each deployment.
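As a sketch, a deployment using the secret looks roughly like the following. The app name and image are illustrative; note that imagePullSecrets sits at the pod spec level, alongside containers, not inside a container entry:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app            # illustrative name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      # Pod-level field referencing the secret created above
      imagePullSecrets:
        - name: dockerregistrycredentials
      containers:
        - name: my-app
          image: docker-registry.deckee.com:5000/my-app:latest
```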
