The kinds of applications you put inside containers will often come with their own pieces of configuration. Some examples:

  • Redis and nginx have conf files
  • Spring Boot microservices have an application.properties file

You wouldn't recompile these programs just to change their external config files; the whole point of those files is that they're external. For the same reason, it makes sense to avoid rebuilding images for every such change. A ConfigMap is an object that lets you inject these files into your containers without having to build a different version of your image for each configuration.
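As a preview of what this looks like, a Spring Boot properties file could be captured in a ConfigMap like the following sketch (the name and property values here are hypothetical placeholders); the rest of this page walks through a fuller nginx example.

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-properties-1.0.0
data:
  application.properties: |
    server.port=8080
    greeting.message=Hello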

Using ConfigMaps to inject properties files

Below is an example of an nginx.conf file:

user nginx;
worker_processes 1;
error_log stderr;
pid /var/run/nginx.pid;
events {
  worker_connections 1024;
}

http {
  server {

    listen 80;

    location / {
      root   /usr/share/nginx/html;
      index  index.html index.htm;
    }
    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
      root   /usr/share/nginx/html;
    }
  }
}

We're going to put this file in a ConfigMap and change the port, which is specified in the line listen 80;.

Below is a ConfigMap manifest containing this file:

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configmap-1.0.0
  namespace: ecommerce
data:
  my-conf: |
    user nginx;
    worker_processes 1;
    error_log stderr;
    pid /var/run/nginx.pid;
    events {
      worker_connections 1024;
    }

    http {
      server {

        listen 40000;

        location / {
            root   /usr/share/nginx/html;
            index  index.html index.htm;
        }
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   /usr/share/nginx/html;
        }
      }
    }

The port has been changed from the default of 80 to 40000.
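Incidentally, you don't have to write such a manifest by hand. Assuming the modified nginx.conf is saved in your working directory, kubectl can generate an equivalent manifest for you:

kubectl create configmap nginx-configmap-1.0.0 --from-file=my-conf=nginx.conf -n ecommerce --dry-run=client -o yaml

Below are Service and Deployment manifests to use with this ConfigMap: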

apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  namespace: ecommerce
spec:
  ports:
  - name: default-port
    port: 80
  - name: 40000-port
    port: 40000
  - name: 50000-port
    port: 50000
  selector:
    component: webserver

(Yes, a Service with multiple ports is a thing you can do! But in such cases every port must be assigned a name.)

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-dep
  namespace: ecommerce
spec:
  replicas: 1
  selector:
    matchLabels:
      depLabel: my-app-nginx
  template:
    metadata:
      name: nginx-pod
      labels:
        component: webserver
        depLabel: my-app-nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.17-alpine
        ports:
        - containerPort: 40000
        volumeMounts:
        - name: override-conf
          mountPath: /etc/nginx/nginx.conf
          subPath: nginx.conf
      volumes:
      - name: override-conf
        configMap:
          name: nginx-configmap-1.0.0
          items:
          - key: my-conf
            path: nginx.conf

The ConfigMap is added to the Pod in the form of a volume. Volumes in Kubernetes are similar to volumes in Docker in that both insert a folder into a container's file system. But a container doesn't need a VOLUME instruction in its Dockerfile in order to receive a volume in Kubernetes, and you usually won't use VOLUME instructions when creating images for use in Kubernetes.

Take another look at the Deployment and ConfigMap manifests, noting how fields refer to one another by sharing the same value.

  • The ConfigMap is named nginx-configmap-1.0.0. This is then used in the configMap section of the volume to identify which ConfigMap you're referring to.
  • items specifies the files that will go inside the volume. There's only one item in this list: your conf file. key: my-conf tells it to get the value of the my-conf key (which is your conf file's contents) from the ConfigMap. path: nginx.conf means this value will be put in a file called "nginx.conf" in the volume.
  • The volume is named override-conf. This name is used to refer to it when inserting ("mounting") the volume in the container via the volumeMounts section.
  • subPath: nginx.conf refers to a file in the volume. This file will be taken and placed at the mountPath in the container. (The nginx:1.17-alpine image already has a default config file at /etc/nginx/nginx.conf. That file will be replaced with yours.)

So the net result of all this is to take the file written in the ConfigMap and put it in place of the nginx.conf in the container.

Use kubectl apply on all the above manifests and try navigating to http://localhost. It should fail, because the Ingress is still pointing to the Service's port 80. Change the Ingress to be as below, referring to the port by name instead of number and pointing it at the 40000 port.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-ingress
  namespace: ecommerce
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-service
            port:
              name: 40000-port

After applying this, navigating to http://localhost should be successful.
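You can also confirm the file replacement from inside the container by printing the file (swap in your Pod name):

kubectl exec <pod-name> -n ecommerce -- cat /etc/nginx/nginx.conf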

One more thing to try to drive home how the configuration is working:

  • Change the port in the ConfigMap as well as the containerPort in the Deployment to 50000.
  • Change the ConfigMap's name in both to nginx-configmap-1.0.1.
  • Apply the ConfigMap and Deployment manifests.
  • Then check http://localhost to confirm it's no longer working, and change the Service port used by the Ingress to 50000-port, as shown below.
  • Check again after applying this and it should be working.
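For reference, the backend section of the Ingress after that last change would look like this:

          service:
            name: nginx-service
            port:
              name: 50000-port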

Another slightly different approach: replacing the whole /etc/nginx folder

Try this: in the Deployment manifest, remove the subPath: nginx.conf line, and change the mountPath: /etc/nginx/nginx.conf line to mountPath: /etc/nginx. This tells it to take the whole volume and replace the container's /etc/nginx folder with it. If you try this and test in the browser, you won't notice any difference: the port replacement still works, since the volume contains your nginx.conf file.

By default, the container does have files other than nginx.conf in its /etc/nginx folder. Try running kubectl exec <pod-name> -n ecommerce -- ls /etc/nginx before and after removing the subPath to see how it affects this folder. (Swap in your Pod name. Remember that kubectl get pods -n ecommerce can give you the Pod name, and that the name will change when you re-apply the Deployment.) Before, the folder has several default files in it. After, it only has your nginx.conf.

The advantage of using a subPath is that it doesn't wipe away the rest of the contents of the folder it's mounted into. In our case, none of those other files are strictly necessary, but there will be cases where you have a folder full of necessary config files, and the only way to preserve them all without using subPath is to put them all in a ConfigMap.

The disadvantage of using subPath is that the file's contents won't be updated if you subsequently change your ConfigMap; Kubernetes eventually refreshes whole ConfigMap volumes in running Pods, but not files mounted via subPath. You'd have to recreate the Pod for it to get the changes. In our specific case, the nginx server would need to be manually restarted or reloaded anyway for changes to the conf file to take effect, but other applications may re-read their files regularly while running.

When deciding whether or not to use a subPath, see if you have the option to place your file in a non-default directory and use container arguments to tell the application its new location.
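For example, with nginx you could mount the volume at a non-default path and override the container's command so nginx reads the file from there. This is a sketch, and the /config path is just an assumption:

      containers:
      - name: nginx
        image: nginx:1.17-alpine
        command: ["nginx", "-c", "/config/nginx.conf", "-g", "daemon off;"]
        volumeMounts:
        - name: override-conf
          mountPath: /config

Since nothing else lives at /config, there are no default files to wipe away, and without a subPath the mounted file will also pick up future ConfigMap changes.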

Adding Helm into this process

While it's useful to be able to change the properties without rebuilding the image, there could be some improvements to the process described on this page.

  • Wouldn't it be great if you could specify the port only once, instead of 4 times: in the ConfigMap, Service, Deployment, and Ingress? If at some point you wanted to change the port, you'd have to remember to go to each of these 4 files.
  • We also exposed 3 ports in the Service. This was to save us from having to re-apply the Service after changing the port in the ConfigMap. If we had one place where we could change the port everywhere, then we also wouldn't have any incentive to expose 3 ports all the time.
  • The above points deal with changing ports, which you won't do often or at all for most applications. But you'll find that there are many instances elsewhere of this general category of problem. That is, of needing to make multiple edits across multiple files in order to change just a single thing.

Resolutions to these problems can be achieved by using Helm, which is discussed in a later page.
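As a quick preview, a Helm chart would let the port live in one place, in values.yaml, and be templated into each manifest. The names below are hypothetical:

# values.yaml
nginxPort: 50000

# in the ConfigMap template:
    listen {{ .Values.nginxPort }};

# in the Deployment template:
        - containerPort: {{ .Values.nginxPort }}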

Why also change the ConfigMap's name and not just its contents?

In the demonstration above it might seem like the changes to the ConfigMap's name are unnecessary. Wouldn't it suffice for the only change in the ConfigMap to be to the port in the nginx.conf file, and for the only change in the Deployment to be the containerPort field?

The reason to change the ConfigMap's name with each change to the ConfigMap is to introduce resiliency into your deployments and simplify the course of action to take in the event of a failure. Changing the ConfigMap's name will force the cluster to try to create both a new ConfigMap and new Pods.

  • In the event that there's some error in the ConfigMap that causes the Pods to fail to initialize, the previous Pods will still remain, due to how Kubernetes rolls out Deployments. So, this error at least won't result in any downtime for your application.
  • In the event that there's an error in the ConfigMap that only makes itself known after the Pods are ready, the previous ConfigMap will still exist and be easy to roll back to.

So a good practice is to change the name of the ConfigMap whenever anything in it changes. To help with this, consider using semantic versioning, or an incremented number, in the ConfigMap's name. This does mean the name would need to be changed in both the ConfigMap's manifest and the manifests of anything referencing it. But a multiple-place change can easily be made a single-place change in Helm.
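As a sketch of what the rollback mentioned above looks like in practice: because the previous ReplicaSet still references the old ConfigMap by its old name, reverting the Deployment is a single command:

kubectl rollout undo deployment/nginx-dep -n ecommerce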

Using ConfigMaps to define environment variables

ConfigMaps can also be used to supply additional environment variables to containers. An example manifest of a ConfigMap that does this is below. The data section consists of a set of key-value pairs that are converted directly into environment variables. Feel free to modify them or add your own, and then use kubectl apply on it.

apiVersion: v1
kind: ConfigMap
metadata:
  name: env-var-configmap-1.0.0
data:
  TIME_ZONE: UTC-06:00
  GREETING: Hello, World!
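After applying it, you can confirm the stored key-value pairs with:

kubectl describe configmap env-var-configmap-1.0.0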

To prove that these variables are set in the container, we'll use an application that exposes a REST API. Requests to this API specify an environment variable, and the value of that variable is returned. See the last three lines of the manifest below for how it makes use of the ConfigMap:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: env-var-finder-dep
spec:
  replicas: 1
  selector:
    matchLabels:
      depLabel: env-var-api
  template:
    metadata:
      name: env-var-finder-pod
      labels:
        component: env-var-api
        depLabel: env-var-api
    spec:
      containers:
      - name: env-var-finder
        image: bmcase/message-holder
        ports:
        - containerPort: 8082
        envFrom:
        - configMapRef:
            name: env-var-configmap-1.0.0

Apply this manifest, as well as your own Service and Ingress that will let outside traffic in through the container's port 8082. (If you're using Rancher Desktop, you'll need to remove previous Ingresses first.) You can then find the values of the container's environment variables by navigating to localhost:8082/api/v1/e/TIME_ZONE and localhost:8082/api/v1/e/GREETING.
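If you want a starting point for that Service, here's a minimal sketch; the name env-var-service and the port 8082 match what the Kong configuration later on this page expects:

apiVersion: v1
kind: Service
metadata:
  name: env-var-service
spec:
  selector:
    component: env-var-api
  ports:
  - port: 8082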

Try changing only the value of one of the environment variables in the ConfigMap, and reapply the ConfigMap. You'll find that the API response does not reflect your changes. This is because the Deployment configuration has remained unchanged, so the cluster has no reason to restart the Pods. Even if you use kubectl apply again with the Deployment manifest, it still won't do anything, because it will detect no change.

Now, change the name of the ConfigMap in both the ConfigMap and Deployment manifests and reapply both. This will result in a rolling update of the Pods, and once this is finished the API response will show the updated environment variables. Once again, changing all occurrences of the ConfigMap name like this may seem like a complicated and error-prone practice, but when using Helm it can be made simple.
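You can watch that rolling update complete before re-testing the API:

kubectl rollout status deployment/env-var-finder-dep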

Environment variables example: a Kong API gateway

Let's use a practical example to see more ways to configure your Pods. We'll put both the nginx server and the environment variable API behind a Kong API gateway. Kong has many configurable properties, which are customarily specified as environment variables.

See below for the file containing manifests of its ConfigMaps:

apiVersion: v1
kind: ConfigMap
metadata:
  name: kong-yaml-configmap-1.0.0
data:
  my-kong-yml: |
    _format_version: "1.1"
    services:
    - name: nginx
      url: http://nginx-service.ecommerce:50000
      routes:
      - name: nginx
        paths:
        - /nginx
    - name: env-var-finder
      url: http://env-var-service:8082/api/v1
      routes:
      - name: env-var-finder
        paths:
        - /env-var-finder
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: kong-env-configmap-1.0.0
data:
  KONG_DATABASE: "off"
  KONG_DECLARATIVE_CONFIG: kong.yml
  KONG_PROXY_ACCESS_LOG: /dev/stdout
  KONG_ADMIN_ACCESS_LOG: /dev/stdout
  KONG_PROXY_ERROR_LOG: /dev/stderr
  KONG_ADMIN_ERROR_LOG: /dev/stdout
  KONG_ADMIN_LISTEN: 0.0.0.0:8001, 0.0.0.0:8444 ssl

Note that this file contains two ConfigMaps, separated by a line containing only ---. This follows a YAML convention that allows multiple objects to be defined in a single file. When you use kubectl apply on it, it'll create two separate ConfigMaps.

  • The first ConfigMap holds a config file that will be placed inside the Kong container, similarly to what you did with nginx.conf. Check the Service names and ports in this file's url fields and make any changes necessary based on how you defined these Services in previous exercises.
  • The second ConfigMap is the one that contains the environment variables.

Make sure you still have the nginx and environment variable API Deployments running, and have Services available for each. Then use kubectl apply on the above ConfigMaps as well as on the two files below to ready your Kong API gateway.

apiVersion: v1
kind: Service
metadata:
  name: kong-service
spec:
  selector:
    component: apigateway
  ports:
  - port: 8000

apiVersion: apps/v1
kind: Deployment
metadata:
  name: kong-dep
spec:
  replicas: 1
  selector:
    matchLabels:
      depLabel: kong-apigateway
  template:
    metadata:
      name: kong-pod
      labels:
        component: apigateway
        depLabel: kong-apigateway
    spec:
      containers:
      - name: kong
        image: kong:3.6.1-ubuntu
        ports:
        - containerPort: 8000
        - containerPort: 8443
        - containerPort: 8001
        - containerPort: 8444
        envFrom:
        - configMapRef:
            name: kong-env-configmap-1.0.0
        volumeMounts:
        - name: kong-yml
          mountPath: /kong.yml
          subPath: kong.yml
      volumes:
      - name: kong-yml
        configMap:
          name: kong-yaml-configmap-1.0.0
          items:
          - key: my-kong-yml
            path: kong.yml

Finally, remove any previous Ingress you have if you're using Rancher Desktop, and apply the below Ingress:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kong-ingress
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: kong-service
            port:
              number: 8000

You can then connect to both through the gateway: http://localhost/nginx for the nginx server, and http://localhost/env-var-finder/e/GREETING for the environment variable API.

ConfigMaps aren't your only option for defining environment variables. See below for a version of the Kong Deployment that includes the environment variables in the manifest of the Deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: kong-dep
spec:
  replicas: 1
  selector:
    matchLabels:
      depLabel: kong-apigateway
  template:
    metadata:
      name: kong-pod
      labels:
        component: apigateway
        depLabel: kong-apigateway
    spec:
      containers:
      - name: kong
        image: kong:3.6.1-ubuntu
        ports:
        - containerPort: 8000
        - containerPort: 8443
        - containerPort: 8001
        - containerPort: 8444
        env:
        - name: KONG_DATABASE
          value: "off"
        - name: KONG_DECLARATIVE_CONFIG
          value: kong.yml
        - name: KONG_PROXY_ACCESS_LOG
          value: /dev/stdout
        - name: KONG_ADMIN_ACCESS_LOG
          value: /dev/stdout
        - name: KONG_PROXY_ERROR_LOG
          value: /dev/stderr
        - name: KONG_ADMIN_ERROR_LOG
          value: /dev/stdout
        - name: KONG_ADMIN_LISTEN
          value: 0.0.0.0:8001, 0.0.0.0:8444 ssl
        volumeMounts:
        - name: kong-yml
          mountPath: /kong.yml
          subPath: kong.yml
      volumes:
      - name: kong-yml
        configMap:
          name: kong-yaml-configmap-1.0.0
          items:
          - key: my-kong-yml
            path: kong.yml

The env section above replaces the ConfigMap kong-env-configmap-1.0.0.
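Whichever approach you use, one quick way to confirm the variables made it into the container is to list them from inside it (swap in your Pod name):

kubectl exec <pod-name> -- env | grep KONG_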