A Guide to NGINX Ingress Path-Based Routing
New to NGINX Ingress? Read through the article below to clear up your doubts.
Kubernetes supports a high-level abstraction called Ingress, which allows simple host- or path-based HTTP routing. Ingress is a core concept of Kubernetes (in beta at the time of writing), but it is never implemented by Kubernetes itself: it is always implemented by a third-party proxy, and these implementations are known as Ingress controllers.
An Ingress controller is responsible for reading the Ingress resource information and processing it accordingly. It is typically a Deployment or DaemonSet, deployed as Kubernetes Pods, that watches the API server for updates to Ingress resources.
Some of the most popular Ingress controllers for Kubernetes are the NGINX Ingress Controller, Traefik, HAProxy Ingress, and Contour.
Exposing your application on Kubernetes with NGINX Ingress
In Kubernetes, there are several different ways to expose your application; using Ingress to expose your Service is one of them. Ingress is not a Service type, but it acts as the entry point for your cluster. It lets you consolidate your routing rules into a single resource, since it can expose multiple Services under the same IP address.
This post will explain how to use an ingress resource with Nginx-ingress Controller and front it with an NLB (Network Load Balancer).
Getting external traffic into Kubernetes – ClusterIp, NodePort, LoadBalancer, and Ingress
When you begin to use Kubernetes for real-world applications, one of the first questions to ask is how to get external traffic into your cluster. The official documentation offers a comprehensive (but rather dry) explanation of this topic, but here we are going to explain it in a more practical, need-to-know way.
There are several ways to route external traffic into your cluster:
- ClusterIP: The default Kubernetes Service type, which exposes the Service on a cluster-internal IP. To reach a ClusterIP Service from an external source, you can open a Kubernetes proxy between the external source and the cluster. This is usually only used for development.
- NodePort: Declaring a Service of type NodePort exposes it on each node's IP at a static port (referred to as the NodePort). You can then access the Service from outside the cluster by requesting <NodeIP>:<NodePort>. This can also be used for production, albeit with some limitations.
- LoadBalancer: Declaring a Service of type LoadBalancer exposes it externally, using a cloud provider's load balancer. The cloud provider provisions a load balancer for the Service and maps it to its automatically assigned NodePort. This is the most widely used method in production environments.
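As a concrete illustration of the last two options, here is a minimal Service manifest sketch; the app name, labels, and ports are hypothetical:

```yaml
# Illustrative Service of type LoadBalancer; "my-app" and the ports are
# placeholders, not names from this walkthrough.
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: LoadBalancer   # change to NodePort (or ClusterIP) for the other modes
  selector:
    app: my-app        # routes to pods carrying this label
  ports:
    - port: 80         # port the load balancer listens on
      targetPort: 8080 # port the container listens on
```

With type: NodePort, Kubernetes would instead allocate a static port from the node-port range on every node.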
Why do I need a load balancer in front of an NGINX Ingress controller?
Ingress is tightly integrated into Kubernetes, meaning that your existing workflows around kubectl will likely extend nicely to managing Ingress. An Ingress controller does not typically eliminate the need for an external load balancer; it simply adds an additional layer of routing and control behind the load balancer.
Pods and nodes are not guaranteed to live for the whole lifetime that the user intends: pods are ephemeral and vulnerable to kill signals from Kubernetes during occasions such as:
- Memory or CPU saturation.
- Rescheduling for more efficient resource use.
- Downtime due to outside factors.
A load balancer (as a Kubernetes Service) is a construct that stands as a single, fixed endpoint for a given set of pods or worker nodes. To take advantage of the previously discussed benefits of a Network Load Balancer (NLB), we create a Kubernetes Service of type: LoadBalancer with the NLB annotations, and this load balancer sits in front of the Ingress controller, which is itself a pod or a set of pods. In AWS, for a set of EC2 compute instances managed by an Auto Scaling group, there should be a load balancer that acts as both a fixed, referable address and a load-balancing mechanism.
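On AWS, such an NLB-fronted Service could be sketched roughly like this. The annotation keys come from the AWS cloud-provider integration; the selector labels and exact values are assumptions, not taken from this repository's manifests:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  annotations:
    # Ask the AWS cloud provider for a Network Load Balancer
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
    # Internal-facing NLB, as used in this walkthrough
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx   # matches the controller pods
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443
```

The cloud provider then provisions the NLB and maps ports 80 and 443 to the Service's automatically assigned NodePorts.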
Ingress with load balancer
To start, clone the repository that contains the manifests used below; from these we will create an internal-facing NLB:
# git clone https://github.com/shanki84/nginx-ingress.git
Create Ingress Controller
We use the kubectl apply command, which creates all resources defined in the given file.
# kubectl apply -f nginx-ingress-controller.yaml
This command installs all the components required on the Kubernetes cluster:
namespace/ingress-nginx created
configmap/nginx-configuration created
configmap/tcp-services created
configmap/udp-services created
serviceaccount/nginx-ingress-serviceaccount created
clusterrole.rbac.authorization.k8s.io/nginx-ingress-clusterrole created
role.rbac.authorization.k8s.io/nginx-ingress-role created
rolebinding.rbac.authorization.k8s.io/nginx-ingress-role-nisa-binding created
clusterrolebinding.rbac.authorization.k8s.io/nginx-ingress-clusterrole-nisa-binding created
Create Static LoadBalancer
By default, Kubernetes exposes NodePort Services on the port range 30000–32767. Rather than reconfiguring this range, we put a LoadBalancer Service in front, which maps ports 80 and 443 to the Ingress controller's NodePorts.
AWS
Network Load Balancer
# kubectl create -f aws-nlb-service.yaml -n ingress-nginx
NAMESPACE       NAME             TYPE           CLUSTER-IP      EXTERNAL-IP                                                                     PORT(S)                      AGE
ingress-nginx   ingress-nginx-   LoadBalancer   172.30.188.11   a##########7d11e9b47702ef02f8e6f-7##########d33f2.elb.eu-west-1.amazonaws.com   80:36788/TCP,443:30781/TCP   31s
AZURE / GCE-GKE
# kubectl create -f generic-lb-service.yaml -n ingress-nginx
LB service for Azure / GCE/GKE created:
Our Ingress Controller is now available on port 80 for HTTP and 443 for HTTPS:
"apple" and "samsung" are the two microservices, deployed under the namespace "demoapp"; if no namespace is given, they are deployed in the default namespace.
# kubectl create -f apple-app.yaml -n demoapp
# kubectl create -f samsung-app.yaml -n demoapp
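As a sketch of what apple-app.yaml might contain (the image, labels, and port are illustrative assumptions following the common echo-server pattern, not the repository's actual manifest; samsung-app.yaml would mirror it):

```yaml
# Hypothetical Deployment + NodePort Service for the "apple" microservice.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: apple-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: apple
  template:
    metadata:
      labels:
        app: apple
    spec:
      containers:
        - name: apple-app
          image: hashicorp/http-echo   # illustrative echo server
          args: ["-text=apple"]        # replies "apple" to every request
---
apiVersion: v1
kind: Service
metadata:
  name: apple-service
spec:
  type: NodePort       # exposed over NodePort, as noted below
  selector:
    app: apple
  ports:
    - port: 5678       # http-echo's default listen port
```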
Both apple and samsung expose their Services over NodePort.
Create an Ingress-Resource, which has rules to perform path-based routing.
# kubectl create -f ingress-resources.yaml -n demoapp
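A sketch of what ingress-resources.yaml might look like with path-based rules; the paths, Service names, and port are assumptions, and the schema shown is the legacy extensions/v1beta1 form that matches this controller generation:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: demo-ingress
  namespace: demoapp
  annotations:
    # lets the NGINX Ingress controller claim this resource
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - http:
        paths:
          - path: /apple            # route /apple to the apple Service
            backend:
              serviceName: apple-service
              servicePort: 5678
          - path: /samsung          # route /samsung to the samsung Service
            backend:
              serviceName: samsung-service
              servicePort: 5678
```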
Validate the Ingress resource rules with:
# kubectl describe ing -n demoapp
We can also have microservices in any other namespace, with the Ingress resource in that same namespace. The Ingress controller discovers Ingress resources via an annotation:
- the Ingress resource carries the annotation kubernetes.io/ingress.class: nginx;
- the Ingress controller runs with ingress class "nginx", and its leader-election ConfigMap is named <election-id>-<ingress-class>, i.e. ingress-controller-leader-nginx:
# Defaults to "<election-id>-<ingress-class>"
# Here: "<ingress-controller-leader>-<nginx>"
# This has to be adapted if you change either parameter
# when launching the nginx-ingress-controller.
- "ingress-controller-leader-nginx"
Can I reuse an NLB with services running in different namespaces? In the same namespace?
Install the NGINX Ingress controller as explained above. In each of your namespaces, define an Ingress resource with the annotation kubernetes.io/ingress.class: nginx; the Ingress controller will read these Ingress resources.
Validate the deployed microservices:
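For example, by requesting each path through the NLB hostname from the earlier kubectl output (shown here as a placeholder):

```
# Replace <nlb-hostname> with the EXTERNAL-IP of the ingress-nginx Service
curl http://<nlb-hostname>/apple     # answered by the apple service
curl http://<nlb-hostname>/samsung   # answered by the samsung service
```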
If you get a 403 (Forbidden) error, re-apply the nginx-ingress-controller config file; it contains the RBAC rules that give NGINX permission to read the newly added microservices and their Services.
# kubectl apply -f nginx-ingress-controller.yaml -n ingress-nginx
Delete the Ingress resource:
# kubectl delete -f https://github.com/shanki84/nginx-ingress/blob/master/ingress-resources.yaml -n demoapp
Delete the services:
# kubectl delete -f apple-app.yaml -n demoapp
# kubectl delete -f samsung-app.yaml -n demoapp
Delete the NLB:
# kubectl delete -f aws-nlb-service.yaml -n ingress-nginx
Delete the NGINX ingress controller:
# kubectl delete -f nginx-ingress-controller.yaml -n ingress-nginx
We hope this post was useful! Please let us know in the comments.