Setup Nginx Ingress Controller on Kubernetes using Helm 3

In this article, we will cover what Kubernetes Ingress is, what a Kubernetes Ingress controller is, why we need an Ingress resource, and how to set up the Nginx Ingress Controller on Kubernetes using Helm 3.


Prerequisites:

To follow along with this tutorial, you will need a running Kubernetes cluster with kubectl and Helm 3 installed.

My setup:

Node Name   | Operating System   | IP Address    | kubectl client/server | Helm version | Docker version
kmaster-ft  | Ubuntu 18.04.4 LTS | 172.32.32.100 | v1.19.3               | v3.4.1       | 19.03.6
kworker-ft1 | Ubuntu 18.04.4 LTS | 172.32.32.101 | v1.19.3               | NA           | 19.03.6
kworker-ft2 | Ubuntu 18.04.4 LTS | 172.32.32.102 | v1.19.3               | NA           | 19.03.6

What is Kubernetes Ingress?

As per the official documentation, a Kubernetes Ingress is an API object that manages external access to the services in a cluster, typically HTTP/HTTPS. Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster, and traffic routing is controlled by rules defined on the Ingress resource.

Ingress may provide load balancing, SSL termination, and name-based or path-based virtual hosting. With Ingress, you can easily set up routing rules without creating a load balancer per service or exposing each service on a node port within your Kubernetes cluster.

Within a Kubernetes cluster, an Ingress resource allows you to share a single public IP address and route requests to your applications by hostname or URL path, commonly known as HTTP/HTTPS routing. Once an ingress controller is deployed in your cluster, you simply declare the hosts and paths you want routed, and the controller takes care of the rest for you.

NOTE: An Ingress does not expose arbitrary ports or protocols. Exposing services other than HTTP and HTTPS to the internet typically uses a Service of type NodePort or LoadBalancer.
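
As a quick illustration of that note, a plain TCP workload (say, a PostgreSQL database) would be exposed with a LoadBalancer Service rather than an Ingress; the names and labels below are hypothetical:

```yaml
# Hypothetical Service exposing a non-HTTP workload that an Ingress cannot route.
apiVersion: v1
kind: Service
metadata:
  name: postgres-lb        # hypothetical name
spec:
  type: LoadBalancer
  selector:
    app: postgres          # hypothetical pod label
  ports:
  - port: 5432
    targetPort: 5432
    protocol: TCP
```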

What is Kubernetes Ingress Controller?

As per the official documentation, ‘In order for the Ingress resource to work, the cluster must have an ingress controller running.’

Unlike other types of controllers which run as part of the kube-controller-manager binary, Ingress controllers are not started automatically with a cluster. You should choose the ingress controller implementation that best fits your requirements and your cluster.

Kubernetes as a project supports and maintains the AWS, GCE, and nginx ingress controllers.

For this tutorial I will be using the Nginx Ingress Controller.

Why do we need an Ingress resource?

When we can simply expose a service with type LoadBalancer, why do we need Ingress in Kubernetes?

The exposed service will talk to your cloud provider and spin up a load balancer for you, which gets a public IP, and that's all. End users can then access your application using that public IP or an associated DNS name.

Now imagine for a minute that you are running hundreds of services exposed to the public on one of the cloud providers. Each service would spin up its own load balancer and public IP address. They would soon become a headache to handle and eventually unmanageable. Keep one thing in mind: all those extra load balancers and public IP addresses will also shoot up your bill from the cloud provider.

This is where a solution like Kubernetes Ingress comes in.

Understand Routing flow with Ingress

In this section we will understand how the Routing flow works with Ingress in Kubernetes.

In the diagram below, you can see how requests come in from the internet, hit the ingress controller, get routed to the appropriate Service running in your cluster, and eventually reach the Pod running the application.

[Diagram: requests flow from the internet to the ingress controller, which routes them to the Service and on to the application Pod]

Now that we have a clear understanding of Kubernetes Ingress, ingress controllers, and the routing flow, let's move on to the practical part.

Installing an in-house bare metal load balancer solution “MetalLB”

MetalLB is a load-balancer implementation for bare metal Kubernetes clusters, using standard routing protocols.

I will not explain each step in detail; instead, I will list the commands, which you can follow as-is.

Make some changes to the kube-proxy configMap

If you’re using kube-proxy in IPVS mode, you have to enable strict ARP mode (this applies since Kubernetes v1.14.2). Note that you don’t need this if you’re using kube-router as the service proxy, because it enables strict ARP by default.

You can achieve this by editing kube-proxy config in current cluster:

root@kmaster-ft:/opt# kubectl edit configmap -n kube-system kube-proxy

and set:

apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
  strictARP: true
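
If you prefer not to edit the ConfigMap interactively, MetalLB's documentation suggests piping it through sed and re-applying it. The sketch below demonstrates the sed substitution on a sample snippet so it can be run anywhere; the commented kubectl pipeline is what you would run against the live cluster:

```shell
# Against a live cluster, the non-interactive edit would be:
#   kubectl get configmap kube-proxy -n kube-system -o yaml \
#     | sed -e "s/strictARP: false/strictARP: true/" \
#     | kubectl apply -f - -n kube-system
#
# The sed substitution itself, shown on a sample of the kube-proxy config:
printf 'mode: "ipvs"\nipvs:\n  strictARP: false\n' \
  | sed -e 's/strictARP: false/strictARP: true/'
# prints:
# mode: "ipvs"
# ipvs:
#   strictARP: true
```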

Installation Commands:

root@kmaster-ft:/opt# kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.5/manifests/namespace.yaml
namespace/metallb-system created

root@kmaster-ft:/opt# kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.5/manifests/metallb.yaml
podsecuritypolicy.policy/controller created
podsecuritypolicy.policy/speaker created
serviceaccount/controller created
serviceaccount/speaker created
clusterrole.rbac.authorization.k8s.io/metallb-system:controller created
clusterrole.rbac.authorization.k8s.io/metallb-system:speaker created
role.rbac.authorization.k8s.io/config-watcher created
role.rbac.authorization.k8s.io/pod-lister created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:controller created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:speaker created
rolebinding.rbac.authorization.k8s.io/config-watcher created
rolebinding.rbac.authorization.k8s.io/pod-lister created
daemonset.apps/speaker created
deployment.apps/controller created

root@kmaster-ft:/opt# kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"
secret/memberlist created
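
For context, the secret created above is nothing magical: it is 128 random bytes, base64-encoded, which the MetalLB speakers use to secure their memberlist gossip traffic. A small sketch to see what the command substitution produces:

```shell
# Generate the same kind of key the command above embeds in the secret.
key="$(openssl rand -base64 128)"
# Decoding it back yields exactly 128 bytes.
echo "$key" | base64 -d | wc -c
# prints: 128
```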

Define and deploy a configMap

Create the following configMap manifest file and apply it.

root@kmaster-ft:~/metalLb# cat metalLb-configmap.yml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 172.32.32.220-172.32.32.250

Since my cluster's internal Calico networking uses the CIDR 172.32.32.0/24, I have reserved a range of IPs from that network for the load balancers.

NOTE: The following command will show the pod network CIDR used by kube-proxy:

kubectl cluster-info dump | grep -m 1 cluster-cidr

Apply the configMap configuration

root@kmaster-ft:~/metalLb# kubectl apply -f metalLb-configmap.yml
configmap/config created
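
Note that this ConfigMap-based configuration is specific to older MetalLB releases such as the v0.9.5 used here; from v0.13 onward MetalLB configures address pools through CRDs instead. A rough equivalent of the pool above on a newer release would be:

```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default
  namespace: metallb-system
spec:
  addresses:
  - 172.32.32.220-172.32.32.250
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default
  namespace: metallb-system
spec:
  ipAddressPools:
  - default
```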

Test the Load balancer functionality by creating a Deployment and exposing the service

root@kmaster-ft:~/metalLb# kubectl  create deployment nginx --image=nginx
deployment.apps/nginx created
root@kmaster-ft:~/metalLb# kubectl expose deployment nginx --port 80 --type LoadBalancer
service/nginx exposed
root@kmaster-ft:~/metalLb# kubectl get svc
NAME             TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)              AGE
kubernetes       ClusterIP      10.96.0.1        <none>          443/TCP              28d
nginx            LoadBalancer   10.101.236.126   172.32.32.220   80:30067/TCP   

It’s working as expected: our nginx service has been assigned the LoadBalancer IP 172.32.32.220.

Deploy the Nginx Ingress controller

To deploy the NGINX Ingress controller using Helm, run the following command. (Note that the stable/nginx-ingress chart has since been deprecated in favour of the ingress-nginx chart hosted at https://kubernetes.github.io/ingress-nginx, which is what the warning in the output refers to.)

root@kmaster-ft:~/ingress-demo# helm install nginx-ingress stable/nginx-ingress
WARNING: This chart is deprecated
NAME: nginx-ingress
LAST DEPLOYED: Sat Dec  5 10:01:07 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None

Verify the installation

root@kmaster-ft:~/ingress-demo# kubectl get all
NAME                                                READY   STATUS    RESTARTS   AGE
pod/nginx-ingress-controller-f79cbf87c-p59cd        1/1     Running   0          49s
pod/nginx-ingress-default-backend-c5449fb44-wr698   1/1     Running   0          49s

NAME                                    TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)                      AGE
service/kubernetes                      ClusterIP      10.96.0.1        <none>          443/TCP                      28d
service/nginx-ingress-controller        LoadBalancer   10.102.111.195   172.32.32.222   80:32164/TCP,443:30799/TCP   49s
service/nginx-ingress-default-backend   ClusterIP      10.110.8.117     <none>          80/TCP                       49s

NAME                                            READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx-ingress-controller        1/1     1            1           49s
deployment.apps/nginx-ingress-default-backend   1/1     1            1           49s

NAME                                                      DESIRED   CURRENT   READY   AGE
replicaset.apps/nginx-ingress-controller-f79cbf87c        1         1         1       49s
replicaset.apps/nginx-ingress-default-backend-c5449fb44   1         1         1       49s

As we can see, everything is up and running.

Deploy a sample web app

The following is our application Deployment manifest file.

root@kmaster-ft:~/ingress-demo# cat ft-ingress-demo-deploy-v1.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: nginx
  name: ft-ingress-demo-deploy-v1
spec:
  replicas: 1
  selector:
    matchLabels:
      run: ft-ingress-demo-deploy-v1
  template:
    metadata:
      labels:
        run: ft-ingress-demo-deploy-v1
    spec:
      volumes:
      - name: webdata
        emptyDir: {}
      initContainers:
      - name: web-content
        image: busybox
        volumeMounts:
        - name: webdata
          mountPath: "/webdata"
        command: ["/bin/sh", "-c", 'echo "<h1><font color=blue>Welcome to Fosstechnix! It is version 1 of our application! </font></h1>" > /webdata/index.html']
      containers:
      - image: nginx
        name: nginx
        volumeMounts:
        - name: webdata
          mountPath: "/usr/share/nginx/html"

Apply and create the Deployment

root@kmaster-ft:~/ingress-demo# kubectl apply -f ft-ingress-demo-deploy-v1.yml
deployment.apps/ft-ingress-demo-deploy-v1 created

Verify the deployment creation

root@kmaster-ft:~/ingress-demo# kubectl get deployment
NAME                            READY   UP-TO-DATE   AVAILABLE   AGE
ft-ingress-demo-deploy-v1       1/1     1            1           3m32s
nginx-ingress-controller        1/1     1            1           2d4h
nginx-ingress-default-backend   1/1     1            1           2d4h

Expose the Deployment:

root@kmaster-ft:~/ingress-demo# kubectl expose deployment ft-ingress-demo-deploy-v1 --port 80
service/ft-ingress-demo-deploy-v1 exposed

Verify the service status

root@kmaster-ft:~/ingress-demo# kubectl get svc
NAME                            TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)                      AGE
ft-ingress-demo-deploy-v1       ClusterIP      10.108.133.241   <none>          80/TCP                       69s
kubernetes                      ClusterIP      10.96.0.1        <none>          443/TCP                      30d
nginx-ingress-controller        LoadBalancer   10.102.111.195   172.32.32.222   80:32164/TCP,443:30799/TCP   2d4h
nginx-ingress-default-backend   ClusterIP      10.110.8.117     <none>          80/TCP                       2d4h

You can see the LoadBalancer IP (172.32.32.222) assigned to our ingress controller Service; this is the address that will be used to access our applications from outside the cluster.

Create an Ingress resource

Time to create an Ingress resource using the following manifest file, which sends traffic to your Service via ft-demo.ingress.example.com.

root@kmaster-ft:~/ingress-demo# cat ingress-resource-ft-demo.yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
  name: ingress-resource-ft-demo
spec:
  rules:
  - host: ft-demo.ingress.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: ft-ingress-demo-deploy-v1
          servicePort: 80
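
The networking.k8s.io/v1beta1 API used above was deprecated and then removed in Kubernetes v1.22. On newer clusters the same resource would be written against networking.k8s.io/v1, which adds a required pathType field and restructures the backend:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
  name: ingress-resource-ft-demo
spec:
  rules:
  - host: ft-demo.ingress.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: ft-ingress-demo-deploy-v1
            port:
              number: 80
```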

Apply and create the Ingress resource

root@kmaster-ft:~/ingress-demo# kubectl apply -f ingress-resource-ft-demo.yaml
ingress.networking.k8s.io/ingress-resource-ft-demo created

Verify the Ingress resource status:

root@kmaster-ft:~/ingress-demo# kubectl describe ingress ingress-resource-ft-demo
Warning: extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
Name:             ingress-resource-ft-demo
Namespace:        default
Address:          172.32.32.101
Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
  Host                         Path  Backends
  ----                         ----  --------
  ft-demo.ingress.example.com
                               /   ft-ingress-demo-deploy-v1:80 (192.168.77.166:80)
Annotations:                   kubernetes.io/ingress.class: nginx
                               nginx.ingress.kubernetes.io/rewrite-target: /
Events:
  Type    Reason  Age    From                      Message
  ----    ------  ----   ----                      -------
  Normal  CREATE  3m39s  nginx-ingress-controller  Ingress default/ingress-resource-ft-demo
  Normal  UPDATE  3m39s  nginx-ingress-controller  Ingress default/ingress-resource-ft-demo

Add the following line to the bottom of the /etc/hosts file (or, on Windows, to C:\Windows\System32\Drivers\etc\hosts):

172.32.32.222 ft-demo.ingress.example.com
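
If you prefer to script the change, the entry can be appended from a shell; the sketch below writes to a scratch file so it is safe to try as-is (substitute /etc/hosts, with root privileges, on your actual workstation):

```shell
# Append the ingress hostname mapping to a hosts-style file.
hostsfile="$(mktemp)"   # use /etc/hosts on your real machine (as root)
echo "172.32.32.222 ft-demo.ingress.example.com" >> "$hostsfile"
# Confirm the mapping is present:
grep "ft-demo.ingress.example.com" "$hostsfile"
# prints: 172.32.32.222 ft-demo.ingress.example.com
```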

Try accessing it through your web browser

Try accessing version 1 of our web application using the following URL.

http://ft-demo.ingress.example.com/
[Screenshot: version 1 of the application in the browser]

Create Second Deployment of your web app

The following is our application version 2 Deployment manifest file.

root@kmaster-ft:~/ingress-demo# cat ft-ingress-demo-deploy-v2.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: nginx
  name: ft-ingress-demo-deploy-v2
spec:
  replicas: 1
  selector:
    matchLabels:
      run: ft-ingress-demo-deploy-v2
  template:
    metadata:
      labels:
        run: ft-ingress-demo-deploy-v2
    spec:
      volumes:
      - name: webdata
        emptyDir: {}
      initContainers:
      - name: web-content
        image: busybox
        volumeMounts:
        - name: webdata
          mountPath: "/webdata"
        command: ["/bin/sh", "-c", 'echo "<h1><font color=green>Welcome to Fosstechnix! It is version 2 of our application!</font></h1>" > /webdata/index.html']
      containers:
      - image: nginx
        name: nginx
        volumeMounts:
        - name: webdata
          mountPath: "/usr/share/nginx/html"

Apply and create the Deployment

root@kmaster-ft:~/ingress-demo# kubectl apply -f ft-ingress-demo-deploy-v2.yml
deployment.apps/ft-ingress-demo-deploy-v2 created

Verify the deployment creation

root@kmaster-ft:~/ingress-demo# kubectl get deploy
NAME                            READY   UP-TO-DATE   AVAILABLE   AGE
ft-ingress-demo-deploy-v1       1/1     1            1           22m
ft-ingress-demo-deploy-v2       1/1     1            1           27s
nginx-ingress-controller        1/1     1            1           2d4h
nginx-ingress-default-backend   1/1     1            1           2d4h

Expose the Deployment:

root@kmaster-ft:~/ingress-demo# kubectl expose deployment ft-ingress-demo-deploy-v2 --port 80
service/ft-ingress-demo-deploy-v2 exposed

Verify the service status

root@kmaster-ft:~/ingress-demo# kubectl get svc
NAME                            TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)                      AGE
ft-ingress-demo-deploy-v1       ClusterIP      10.108.133.241   <none>          80/TCP                       20m
ft-ingress-demo-deploy-v2       ClusterIP      10.100.82.79     <none>          80/TCP                       36s
kubernetes                      ClusterIP      10.96.0.1        <none>          443/TCP                      30d
nginx-ingress-controller        LoadBalancer   10.102.111.195   172.32.32.222   80:32164/TCP,443:30799/TCP   2d4h
nginx-ingress-default-backend   ClusterIP      10.110.8.117     <none>          80/TCP                       2d4h

Edit Ingress

Now that we have created a new Deployment, we need to edit our existing Ingress resource. Add the following entry to the paths list in your Ingress resource file:

- path: /v2
  backend:
    serviceName: ft-ingress-demo-deploy-v2
    servicePort: 80
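
After the edit, the complete manifest with both path rules reads as follows (unchanged apart from the new /v2 entry):

```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
  name: ingress-resource-ft-demo
spec:
  rules:
  - host: ft-demo.ingress.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: ft-ingress-demo-deploy-v1
          servicePort: 80
      - path: /v2
        backend:
          serviceName: ft-ingress-demo-deploy-v2
          servicePort: 80
```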

Apply the changes:

root@kmaster-ft:~/ingress-demo# kubectl apply -f ingress-resource-ft-demo.yaml
ingress.networking.k8s.io/ingress-resource-ft-demo configured

Verify the Ingress deployment status

root@kmaster-ft:~/ingress-demo# kubectl describe ingresses.networking.k8s.io
Name:             ingress-resource-ft-demo
Namespace:        default
Address:          172.32.32.101
Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
  Host                         Path  Backends
  ----                         ----  --------
  ft-demo.ingress.example.com
                               /     ft-ingress-demo-deploy-v1:80 (192.168.77.166:80)
                               /v2   ft-ingress-demo-deploy-v2:80 (192.168.17.60:80)
Annotations:                   kubernetes.io/ingress.class: nginx
                               nginx.ingress.kubernetes.io/rewrite-target: /
Events:
  Type    Reason  Age                  From                      Message
  ----    ------  ----                 ----                      -------
  Normal  CREATE  26m                  nginx-ingress-controller  Ingress default/ingress-resource-ft-demo
  Normal  UPDATE  8m57s (x3 over 26m)  nginx-ingress-controller  Ingress default/ingress-resource-ft-demo

We can see both path configurations under the Rules section.

Test Your Ingress

Try accessing version v1 of our web application using the following URL.

http://ft-demo.ingress.example.com/
[Screenshot: version v1 of the application in the browser]

Try accessing version v2 of our web application using the following URL.

http://ft-demo.ingress.example.com/v2
[Screenshot: version v2 of the application in the browser]

That’s all! Our Ingress controller setup is working perfectly fine.

Hope you liked the article. Please let me know your feedback in the comments section.

Conclusion:

We have covered what Kubernetes Ingress is, what a Kubernetes Ingress controller is, why we need an Ingress resource, and how to set up the Nginx Ingress Controller on Kubernetes using Helm 3.

Related Articles:

Configure Traefik LetsEncrypt for Kubernetes [6 Steps]

Kubernetes Concepts for Beginners

4 Steps to Install Kubernetes Dashboard

How to Create New Namespace in Kubernetes

Kubernetes Deployment Using Helm [Part 1]

Deploy to Kubernetes using Helm and GitLab[ Part 2]

Configure Traefik Ingress Controller on Kubernetes [5 Steps]

Install nginx ingress controller kubernetes kops using helm3

FOSS TechNix

FOSS TechNix (Free, Open Source Software and Technology Nix*), founded in 2019, is a community platform where you can find how-to guides and articles on DevOps tools, Linux, and databases.
