If you have ever used Docker or any other Linux OCI container system, you have inevitably run into the following error:
`x509: failed to load system roots and no roots provided`
This message is reminding you that you forgot to provide root Certificate Authorities (CAs) to your application. There are two different ways to solve this:
- mount the `/etc/ssl/certs` folder from the machine where the container is running
- bundle the root CAs in your image
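For the second option, here is a minimal multi-stage Dockerfile sketch. It assumes an Alpine build stage and a statically linked binary named `app` (both hypothetical); any base image that ships the `ca-certificates` package works the same way:

```dockerfile
# Build stage: any image that provides the ca-certificates package
FROM alpine:3.19 AS certs
RUN apk add --no-cache ca-certificates

# Final stage: copy the CA bundle into the image alongside the application
FROM scratch
COPY --from=certs /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
COPY app /app
ENTRYPOINT ["/app"]
```

The resulting image carries its own trust store, so no host mount is needed at run time.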
As you may imagine from the title, I believe that the second option is by far the better one.
The first reason is better testability: bundling ensures that the certificates your application uses are the same in your test environment and your production environment. If you mount the folder from the host, you risk having different CAs across environments and over the life of the application. Every company I’ve seen deploy containers (usually with Kubernetes) created separate clusters for the different environments and - for reasonably understandable reasons - tended to upgrade the test environments before the production ones, creating the possibility of having newer CAs in test than in production. For this reason, mounting the CAs folder from the host can make it harder to perform proper tests.
The second reason is simplicity. Bundling the CAs instead of mounting them from the host machine allows you to run the Docker container with fewer parameters (or write fewer lines in your Kubernetes YAML file), which makes the deployment easier to remember and harder to get wrong. Containers are built around the idea of being easy to deploy, and when you can simplify your deployment, you should.
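To make the difference concrete, this is a sketch of the extra YAML the host-mount approach requires in a Kubernetes pod spec (the image and volume names are hypothetical); with bundled CAs, none of the volume lines are needed:

```yaml
# Extra lines required only when relying on the host's CAs (hostPath volume)
spec:
  containers:
  - name: my-app
    image: my-registry/my-app:1.0
    volumeMounts:
    - name: host-cas
      mountPath: /etc/ssl/certs
      readOnly: true
  volumes:
  - name: host-cas
    hostPath:
      path: /etc/ssl/certs
```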
The third reason is consistent behavior. One of the core advantages of container platforms is the drastic reduction of rigid dependencies between applications and the underlying infrastructure. For instance, you can have Red Hat Enterprise Linux running on your hosts and still run containers based on any flavor of Linux. The only required dependency between the containers and the host system is the small set (~20) of syscalls Docker uses, which are very stable and well tested. This allows system administrators to manage container platform nodes as if no one were using them (after having drained them), confident that the other nodes in the cluster will handle the workloads. It also means that the platform your containers run on may have hosts with different versions, or even different distributions, of Linux; this commonly happens during upgrades. If your application relies on the host CAs, you risk having a different CA list depending on the host your container lands on, and if you run multiple pods in a deployment, different pods might behave differently, so that identical requests produce different results.
For the reasons described here, I always suggest bundling the CAs in your container images.