Automating Certificate Management for Kubernetes Applications using Cert-manager and Let’s Encrypt

Muralidharan K
7 min read · May 20, 2021


A TLS certificate is one of the fundamental parts of modern web application security, and managing the lifecycle of TLS certificates for applications running in Kubernetes can be challenging. Issuing a new certificate, creating a Kubernetes Secret for it, configuring the Ingress with that Secret, tracking its lifecycle, and renewing it on time, for every certificate in your environment, is tedious and error-prone. In this blog, we are going to see how to automate certificate management for applications running on Kubernetes using Let’s Encrypt’s free certificates and cert-manager. You can read more about cert-manager and Let’s Encrypt in their documentation; both projects are well documented.

We are going to use a demo online shop application called Sock Shop for this demonstration.

Step 1: Install cert-manager

Cert-manager is an open-source Kubernetes add-on that automates TLS certificate management. It manages the lifecycle of certificates issued by certificate authorities, ensuring that certificates are valid and duly renewed before they expire, and it supports various issuing sources, including Let’s Encrypt. We will install cert-manager using the command below.

➜ kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.3.1/cert-manager.yaml
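
Once the apply completes, it is worth confirming that the cert-manager components are up before moving on. The official manifest deploys three pods (cert-manager, cert-manager-cainjector, and cert-manager-webhook), all of which should reach the Running state; the pod name hashes will differ in your cluster.

➜ kubectl get pods -n cert-manager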

Step 2: Install Contour Ingress Controller and configure the DNS

Contour is an open-source Kubernetes ingress controller that provides the control plane for the Envoy edge and service proxy.​ Like cert-manager, Contour is a Cloud Native Computing Foundation incubating project. We will use Contour to serve the Ingress resources for our application.

➜ kubectl apply -f https://projectcontour.io/quickstart/contour.yaml

You can check the status with the command below. A successful installation creates the contour deployment, the envoy daemonset, and a Service for each of them, as shown below.

➜ kubectl get all -n projectcontour
NAME                                READY   STATUS      RESTARTS   AGE
pod/contour-certgen-v1.15.1-l468c   0/1     Completed   0          76m
pod/contour-f4b8678-9dr9f           1/1     Running     0          76m
pod/contour-f4b8678-m7bxw           1/1     Running     0          76m
pod/envoy-tpbjq                     2/2     Running     0          76m
pod/envoy-xltrq                     2/2     Running     0          76m

NAME              TYPE           CLUSTER-IP      EXTERNAL-IP            PORT(S)                      AGE
service/contour   ClusterIP      10.100.10.223   <none>                 8001/TCP                     76m
service/envoy     LoadBalancer   10.100.6.9      x..alb.amazonaws.com   80:31017/TCP,443:32047/TCP   76m

NAME                   DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/envoy   2         2         2       2            2           <none>          76m

NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/contour   2/2     2            2           76m

NAME                              DESIRED   CURRENT   READY   AGE
replicaset.apps/contour-f4b8678   2         2         2       76m

NAME                                COMPLETIONS   DURATION   AGE
job.batch/contour-certgen-v1.15.1   1/1           4s         76m

Now check the external IP of the envoy service and configure a domain name in your DNS to point to it. If you are using a managed cluster like EKS, as I am, you will get a load balancer DNS name as shown below; in that case create a CNAME record in Route 53 pointing your domain to the load balancer address. If you are using GKE, the external IP field will be a public IP, so add an A record instead.

➜ kubectl get service -n projectcontour
NAME      TYPE           CLUSTER-IP      EXTERNAL-IP               PORT(S)                      AGE
contour   ClusterIP      10.100.10.223   <none>                    8001/TCP                     140m
envoy     LoadBalancer   10.100.6.9      x-x.x.elb.amazonaws.com   80:31017/TCP,443:32047/TCP   140m
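
Before requesting a certificate, it is a good idea to confirm that the DNS record has propagated, because the HTTP01 challenge requires Let’s Encrypt to reach your domain over the internet. A quick check with dig (mydomain.com is a placeholder for your own domain) should return the load balancer address you configured:

➜ dig +short mydomain.com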

Step 3: Install the demo microservices

We will be using this demo application for the experiment. Clone the repo, change into the microservices-demo/deploy/kubernetes directory, and run the commands below.

➜ git clone https://github.com/microservices-demo/microservices-demo.git
Cloning into 'microservices-demo'...
remote: Enumerating objects: 9944, done.
remote: Counting objects: 100% (224/224), done.
remote: Compressing objects: 100% (76/76), done.
remote: Total 9944 (delta 141), reused 194 (delta 138), pack-reused 9720
Receiving objects: 100% (9944/9944), 52.91 MiB | 10.68 MiB/s, done.
Resolving deltas: 100% (6007/6007), done.
➜ cd microservices-demo/deploy/kubernetes
➜ kubectl create namespace sock-shop
➜ kubectl apply -f complete-demo.yaml

This creates a namespace called sock-shop for our demo Sock Shop application, deploys its microservices, and exposes them through Services.
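
The pods can take a few minutes to pull images and start. If you want to block until every deployment in the namespace is ready rather than polling by hand, a kubectl wait one-liner does the trick:

➜ kubectl wait --for=condition=available deployment --all -n sock-shop --timeout=300s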

➜ kubectl get pod -n sock-shop
NAME                            READY   STATUS    RESTARTS   AGE
carts-7bbbd7779d-bh5dc          1/1     Running   0          42m
carts-db-84b777d9c-hzn8g        1/1     Running   0          42m
catalogue-8684f655d9-5gq66      1/1     Running   0          42m
catalogue-db-5579f7f4cb-5ww45   1/1     Running   0          42m
front-end-6f5fc69d6-rghlk       1/1     Running   0          42m
orders-7ccf68495-g2pc7          1/1     Running   0          42m
orders-db-77c46b9c85-9qkzp      1/1     Running   0          42m
payment-74976d749f-wfbns        1/1     Running   0          42m
queue-master-799b6b57d5-lk5jc   1/1     Running   0          42m
rabbitmq-8458d7d4c-8bfxp        1/1     Running   0          42m
shipping-5c89f4886c-zrqqn       1/1     Running   0          42m
user-d6c48f668-tznbf            1/1     Running   0          42m
user-db-5cdfd6d44b-p6br5        1/1     Running   0          42m
➜ kubectl get service -n sock-shop
NAME           TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)     AGE
carts          ClusterIP   10.100.38.59     <none>        80/TCP      11h
carts-db       ClusterIP   10.100.65.43     <none>        27017/TCP   11h
catalogue      ClusterIP   10.100.227.63    <none>        80/TCP      11h
catalogue-db   ClusterIP   10.100.9.71      <none>        3306/TCP    11h
front-end      ClusterIP   10.100.80.85     <none>        80/TCP      11h
orders         ClusterIP   10.100.131.70    <none>        80/TCP      11h
orders-db      ClusterIP   10.100.35.117    <none>        27017/TCP   11h
payment        ClusterIP   10.100.176.152   <none>        80/TCP      11h
queue-master   ClusterIP   10.100.212.14    <none>        80/TCP      11h
rabbitmq       ClusterIP   10.100.193.68    <none>        5672/TCP    11h
shipping       ClusterIP   10.100.252.57    <none>        80/TCP      11h
user           ClusterIP   10.100.161.222   <none>        80/TCP      11h
user-db        ClusterIP   10.100.114.7     <none>        27017/TCP   11h

Step 4: Create an Issuer for Let’s Encrypt

Cert-manager supports the ACME protocol for obtaining certificates from CAs. We will use Let’s Encrypt’s ACME server with the HTTP01 challenge. You can learn more about the ACME issuer and how it works with cert-manager in the cert-manager documentation.

First, we need to create an issuer for the Let’s Encrypt ACME server so it can generate certificates for our application. We will create ClusterIssuers, which can be referenced by Certificate resources in all namespaces (the cert-manager documentation explains the difference between Issuer and ClusterIssuer). We will create one ClusterIssuer called letsencrypt-staging for testing and another called letsencrypt-prod for the actual production setup. Let’s Encrypt recommends testing against their staging environment before using the production one; this lets you get everything right before issuing trusted production certificates and reduces the chance of running up against rate limits, which are much stricter on Let’s Encrypt’s production API.

You can see sample manifests for the staging and production issuers below.

Staging issuer (replace the email field with your own address):

➜ cat <<EOF > letsencrypt-staging.yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    email: xxxxx@gmail.com
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-staging
    solvers:
    - http01:
        ingress:
          class: contour
EOF

Production issuer (replace the email field with your own address):

➜ cat <<EOF > letsencrypt-prod.yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    email: xxxx@gmail.com
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
    - http01:
        ingress:
          class: contour
EOF
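
Apply both manifests to create the ClusterIssuers:

➜ kubectl apply -f letsencrypt-staging.yaml
➜ kubectl apply -f letsencrypt-prod.yaml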

You can check their status with the command below; READY should show True once the ACME account registration succeeds.

➜ kubectl get ClusterIssuer
NAME                  READY   AGE
letsencrypt-prod      True    20h
letsencrypt-staging   True    21h
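
If an issuer stays at READY=False, describing it shows the ACME account registration status and any errors:

➜ kubectl describe clusterissuer letsencrypt-staging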

Step 5: Create an Ingress resource for the front-end service

Now we will create an Ingress for our front-end microservice. First, let’s describe the details of the front-end Service.

➜ kubectl describe service -n sock-shop front-end
Name:              front-end
Namespace:         sock-shop
Labels:            name=front-end
Annotations:       <none>
Selector:          name=front-end
Type:              ClusterIP
IP:                10.100.80.85
Port:              <unset>  80/TCP
TargetPort:        8079/TCP
Endpoints:         192.168.65.166:8079
Session Affinity:  None
Events:            <none>

As you can see, the service type is ClusterIP and it serves on port 80. We need this information for our Ingress. Let’s create the Ingress now (replace mydomain.com with your own domain name).

Note: I’m using the beta apiVersion because my Kubernetes version is 1.18. Kubernetes versions newer than 1.18 can use the networking.k8s.io/v1 apiVersion.

➜ cat <<EOF > ingress.yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: front-end
  namespace: sock-shop
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-staging
    acme.cert-manager.io/http01-edit-in-place: "true"
spec:
  rules:
  - host: mydomain.com
    http:
      paths:
      - pathType: Prefix
        path: /
        backend:
          serviceName: front-end
          servicePort: 80
  tls:
  - hosts:
    - mydomain.com
    secretName: ingress-cert
EOF
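
Apply the manifest. Because of the cert-manager.io/cluster-issuer annotation, cert-manager’s ingress-shim will automatically create a Certificate resource named after the tls secretName (ingress-cert) and start the HTTP01 challenge:

➜ kubectl apply -f ingress.yaml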

Step 6: Check the certificate and related resources

Now we can check if everything is working as expected. Let’s start with the status of the certificate.

➜ kubectl get certificate -n sock-shop
NAME           READY   SECRET         AGE
ingress-cert   True    ingress-cert   87m

The certificate has been created and is ready for use. If you describe the certificate, you can see the sequence of events and its status.

Events:
  Type    Reason     Age   From          Message
  ----    ------     ---   ----          -------
  Normal  Issuing    59s   cert-manager  Issuing certificate as Secret does not exist
  Normal  Generated  59s   cert-manager  Stored new private key in temporary Secret resource "ingress-cert-n89ph"
  Normal  Requested  59s   cert-manager  Created new CertificateRequest resource "ingress-cert-nnff2"
  Normal  Issuing    59s   cert-manager  Issued temporary certificate
  Normal  Issuing    51s   cert-manager  The certificate has been successfully issued

In case you encounter any issue with certificate issuance, you can follow the troubleshooting guide in the cert-manager documentation.
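
As a first troubleshooting step, it helps to know that cert-manager drives issuance through a chain of intermediate resources: the Certificate creates a CertificateRequest, which creates an ACME Order, which in turn creates one Challenge per domain. Listing them shows where the flow is stuck:

➜ kubectl get certificaterequest,order,challenge -n sock-shop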

Now let’s check whether the application is accessible via the domain name.

Yes, but as you can see, the browser does not trust the certificate. This is because we used the Let’s Encrypt staging issuer; you can confirm that by checking the details of the certificate.
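
You can also inspect the served certificate from the command line. With the staging issuer, the issuer field will name Let’s Encrypt’s untrusted staging CA (again, mydomain.com is a placeholder for your own domain):

➜ echo | openssl s_client -connect mydomain.com:443 -servername mydomain.com 2>/dev/null | openssl x509 -noout -issuer -dates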

Step 7: Update the Ingress annotation with the production issuer

Now we can replace the staging issuer with the production issuer. Let’s edit the Ingress manifest and apply the updated configuration.

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: front-end
  namespace: sock-shop
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    acme.cert-manager.io/http01-edit-in-place: "true"
spec:
  rules:
  - host: mydomain.com
    http:
      paths:
      - pathType: Prefix
        path: /
        backend:
          serviceName: front-end
          servicePort: 80
  tls:
  - hosts:
    - mydomain.com
    secretName: ingress-cert
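
Alternatively, instead of editing the manifest you can flip the annotation in place with kubectl (assuming the Ingress is named front-end as above); cert-manager will notice the change and re-issue the certificate:

➜ kubectl annotate ingress front-end -n sock-shop cert-manager.io/cluster-issuer=letsencrypt-prod --overwrite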

Once you apply the updated configuration, wait a few minutes for the new certificate to be issued.

➜ kubectl apply -f ingress.yaml

Now you can check the status of the certificate again.

➜ kubectl get certificate -n sock-shop
NAME           READY   SECRET         AGE
ingress-cert   True    ingress-cert   87m

Finally, access your application from the browser using the domain name and validate the certificate; it should now be issued by Let’s Encrypt’s production CA and trusted by the browser.
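
If you prefer verifying without a browser, you can also decode the certificate straight out of the Kubernetes Secret and check its issuer and expiry:

➜ kubectl get secret ingress-cert -n sock-shop -o jsonpath='{.data.tls\.crt}' | base64 -d | openssl x509 -noout -issuer -enddate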
