Use Nginx Ingress Controller and TLS encryption with Let's Encrypt in Azure Kubernetes Service (AKS)

When organizations deploy an application with a microservice architecture to Kubernetes, the application has multiple service components. Some services communicate only with other services within the cluster, while others need to be exposed to the internet. In Kubernetes, exposing a service to the internet is possible with a LoadBalancer service resource or with an Ingress resource backed by an Ingress controller.

In this blog, we will walk through how to create an Ingress controller in AKS, then use a static public IP for routing traffic from the internet. For TLS encryption, we use cert-manager to automatically generate and configure Let's Encrypt certificates.

Before We Begin

We assume an AKS cluster is already provisioned, and that the Azure CLI, kubectl, and Helm are installed and configured, before we create the Ingress controller and configure TLS.

Create Ingress Controller

When looking for an ingress controller to use in our AKS cluster, we can find different flavors of ingress controllers suited to specific tasks and features.

In this article, I’ll walk through creating an Nginx Ingress controller. By default, when the Nginx Ingress controller is created it is assigned a public IP, but that IP is allocated dynamically. The IP address is therefore tied to the life span of the ingress controller: if the ingress controller is deleted, the public IP changes. By using a static IP address we can retain the IP even if the ingress controller is deleted. This lets us configure DNS against that IP for the lifetime of the application.

First, we allocate a public IP with static allocation as below. As the resource group for the IP we use the node resource group created by the AKS cluster; its name follows the pattern MC_<resourceGroupName>_<clusterName>_<region>.

az network public-ip create --resource-group MC_myResourceGroup_<ClusterName>_eastus --name nginx-static-pip --sku Standard --allocation-method static --query publicIp.ipAddress -o tsv

--query – this parameter returns the IP address of the public IP after creation

Now we create the Nginx Ingress controller using the Helm chart. The Helm chart already packages all the components needed to deploy the Nginx Ingress controller.

Currently, ingress controllers are supported only on Linux nodes, so when using Helm we pass a parameter to schedule the Nginx Ingress controller pods only on Linux nodes.

Where can I get Helm Charts?

You can find the charts in the official helm/charts GitHub repository. Clone or download the repo and navigate to the chart's folder path.

Before we create the Nginx Ingress controller, create a namespace in Kubernetes so we can keep all the ingress controller resources in their own namespace.

#Create a Namespace
kubectl create namespace ingress-basic

Next, create the Ingress Controller with custom parameters

helm install stable/nginx-ingress \
    --namespace ingress-basic \
    --set controller.replicaCount=2 \
    --set controller.nodeSelector."beta\.kubernetes\.io/os"=linux \
    --set defaultBackend.nodeSelector."beta\.kubernetes\.io/os"=linux \
    --set controller.service.loadBalancerIP="<Replace-PIP>"
  • --namespace – points to the namespace we created for the Nginx Ingress controller, so all resources related to the ingress controller will live under this namespace.
  • --set controller.replicaCount=2 – the number of Nginx Ingress controller replicas to create; it is good practice to use more than one for redundancy.
  • --set controller.nodeSelector."beta\.kubernetes\.io/os"=linux – with this flag, we restrict deployment to Kubernetes nodes that run Linux.
  • --set controller.service.loadBalancerIP="<Replace-PIP>" – the static public IP we allocated previously for the Nginx Ingress controller.

We can use the following commands to verify that the ingress controller is deployed.

#Use kubectl
kubectl get pods -n ingress-basic

#Use Helm
helm ls

Check that the public IP is attached to the ingress service. Under the LoadBalancer type you can see the allocated public IP.

kubectl get service -l app=nginx-ingress --namespace ingress-basic

NAME                                            TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)                      AGE
exegetical-pika-nginx-ingress-controller        LoadBalancer   10.0.111.183   52.230.57.42   80:31620/TCP,443:31319/TCP   2d20h
exegetical-pika-nginx-ingress-default-backend   ClusterIP      10.0.115.246   <none>         80/TCP                       2d20h

Assign DNS Name for Ingress Controller IP

For HTTPS and certificates to work properly, we need to configure a DNS name on the Ingress controller's public IP. The following commands update the DNS configuration for the IP.

#!/bin/bash

IP="IPAddress"

DNSNAME="UniqueDNSName"

#Get Azure Resource ID 
PUBLICIPID=$(az network public-ip list --query "[?ipAddress!=null]|[?contains(ipAddress, '$IP')].[id]" --output tsv)

#Update the PIP with DNS Name
az network public-ip update --ids $PUBLICIPID --dns-name $DNSNAME

Install cert-manager for SSL Termination and Automatic Certificate Generation

The NGINX ingress controller works at layer 7 and therefore has the capability to terminate TLS at the ingress controller level. This can be configured in many ways, but in this post I will walk through how it can be accomplished using cert-manager for certificate management and Let's Encrypt for automatic certificate issuance.

# Install the CustomResourceDefinition resources separately
kubectl apply --validate=false -f https://raw.githubusercontent.com/jetstack/cert-manager/release-0.11/deploy/manifests/00-crds.yaml

# Create the namespace for cert-manager
kubectl create namespace cert-manager

# Label the cert-manager namespace to disable resource validation
kubectl label namespace cert-manager cert-manager.io/disable-validation=true

# Add the Jetstack Helm repository
helm repo add jetstack https://charts.jetstack.io

# Update your local Helm chart repository cache
helm repo update

# Install the cert-manager Helm chart
helm install \
  --name cert-manager \
  --namespace cert-manager \
  --version v0.11.0 \
  jetstack/cert-manager

Create CA Cluster Issuer

Before cert-manager can issue certificates to an application, an issuer resource must exist in the AKS cluster. cert-manager supports two kinds of issuers: Issuer (namespace-scoped) and ClusterIssuer (cluster-wide). Here we create a ClusterIssuer so certificates can be requested from any namespace.
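As a sketch, a minimal cluster-issuer.yaml for cert-manager v0.11 might look like the following; the issuer name letsencrypt-prod matches the annotation used later, while the email address is a placeholder you must replace:

```yaml
apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    # Let's Encrypt production ACME endpoint
    server: https://acme-v02.api.letsencrypt.org/directory
    # Email used by Let's Encrypt for expiry notices (replace with yours)
    email: user@example.com
    # Secret that will store the ACME account private key
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
    # Solve HTTP-01 challenges through the nginx ingress class
    - http01:
        ingress:
          class: nginx
```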

$ kubectl apply -f cluster-issuer.yaml

Create Ingress Route

After creating the cluster issuer, we have to create an ingress route for the application you wish to secure with TLS. For this demo, I use a Grafana application exposed to the internet through the ingress controller. Following is the YAML used for ingress route creation.

When creating the YAML we need to add the cert-manager annotation cert-manager.io/cluster-issuer: letsencrypt-prod. Under backend we need to specify the service name and port that communicate with the ingress controller, and under tls.hosts we put the application's domain name.
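A sketch of such an ingress manifest is shown below. The resource name grafana-ingress, namespace monitoring, secret tls-secret, and host come from the certificate output later in this post; the service name grafana and port 3000 are assumptions for illustration:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: grafana-ingress
  namespace: monitoring
  annotations:
    kubernetes.io/ingress.class: nginx
    # Tells cert-manager's ingress-shim to request a certificate
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  tls:
  - hosts:
    - <SubDomain>.cloudlife.info
    # The issued certificate will be stored in this secret
    secretName: tls-secret
  rules:
  - host: <SubDomain>.cloudlife.info
    http:
      paths:
      - path: /
        backend:
          serviceName: grafana
          servicePort: 3000
```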

This automatically provisions a certificate for the application host mentioned in the YAML using ingress-shim, which is part of cert-manager. You can verify the certificate was created with the following command.

kubectl describe certificate tls-secret --namespace monitoring
Name:         tls-secret
Namespace:    monitoring
Labels:       <none>
Annotations:  <none>
API Version:  cert-manager.io/v1alpha2
Kind:         Certificate
Metadata:
  Creation Timestamp:  2019-12-15T01:13:13Z
  Generation:          1
  Owner References:
    API Version:           extensions/v1beta1
    Block Owner Deletion:  true
    Controller:            true
    Kind:                  Ingress
    Name:                  grafana-ingress
    UID:                   110af933-1ed8-11ea-9249-ae96fff0647c
  Resource Version:        1992589
  Self Link:               /apis/cert-manager.io/v1alpha2/namespaces/monitoring/certificates/tls-secret
  UID:                     111633bb-1ed8-11ea-9249-ae96fff0647c
Spec:
  Dns Names:
    <SubDomain>.cloudlife.info
  Issuer Ref:
    Group:      cert-manager.io
    Kind:       ClusterIssuer
    Name:       letsencrypt-prod
  Secret Name:  tls-secret
Status:
  Conditions:
    Last Transition Time:  2019-12-15T01:18:44Z
    Message:               Certificate is up to date and has not expired
    Reason:                Ready
    Status:                True
    Type:                  Ready
  Not After:               2020-03-14T00:18:43Z
Events:                    <none>

Additional Links