Service Discovery and Other Cluster Management Techniques Using Consul

Consul is a cluster management tool from HashiCorp, and it is very useful for building advanced microservices architectures. Consul is a distributed configuration system that provides service discovery, high availability, multi-datacenter support, and strong fault tolerance. Managing microservices with Consul is therefore fairly easy and straightforward.

Infrastructure built on a microservices architecture faces the following challenges:

  1. Uncertain service locations
  2. Service configurations
  3. Failure Detection
  4. Load balancing between multiple data-centers

Since Consul is distributed and agent-based, it can solve all of the above challenges easily.

Consul Technology and Architecture

Consul is an agent-based tool, which means an agent must be installed on every node of the cluster, running in either server or client mode. HashiCorp provides an open-source binary for each platform, which can be downloaded from https://www.consul.io/downloads.html.

To install Consul on all the nodes, we download the binary file and place it in a directory on the PATH (for example /usr/local/bin), so that we can run it from anywhere on the node.
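The installation can be sketched as follows; the version number below is only an example, so check the downloads page for the current release:

```shell
# Download the Consul binary (version is an example; check the downloads page)
wget https://releases.hashicorp.com/consul/1.0.0/consul_1.0.0_linux_amd64.zip

# Unpack it and move it onto the PATH
unzip consul_1.0.0_linux_amd64.zip
sudo mv consul /usr/local/bin/

# Verify the installation
consul version
```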

Consul needs to be started as a long-running process, and it will continuously share information. For this, we start the agent on all the nodes and join them to each other so they can communicate.

Communication between nodes happens over a gossip protocol, which means each node passes data to a few other nodes, and that data eventually spreads through the whole cluster, much like a virus.

[Figure] Consul’s gossip protocol and service discovery

Before going into the demonstration, I would like to explain the architecture of this tool. Basically, the agent is started in server mode on the nodes where services are running, while a client agent can be used to serve the UI and to query information about the server cluster. That said, a client node can still run services of its own.

[Figure] Microservice architecture with Consul

To start the agent as a server, we need to pass the server flag as a parameter.
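A server agent can be started roughly like this; the node name, data directory, and bind address below are example values for a single-server setup:

```shell
# Start the agent in server mode; -bootstrap-expect=1 means this
# single server may elect itself leader (example values throughout)
consul agent -server -bootstrap-expect=1 \
  -node=consul-server-1 \
  -data-dir=/tmp/consul \
  -bind=192.168.33.10
```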

Consul will not join the cluster automatically. Each node needs to be joined to the others by specifying the hostname or IP address of another member.

Consul maintains information about the cluster members, and this can be seen from the console of any other instance.
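Joining and inspecting the cluster might look like this; the IP address is an example:

```shell
# From a new node, join an existing member of the cluster
consul join 192.168.33.10

# List all known cluster members, from any node
consul members
```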

Consul exposes information about the instances through an HTTP API, and because of this Consul can feed other infrastructure applications, for example dashboards, monitoring tools, or our own event-management system.
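For example, the node catalog can be read over the HTTP API (the agent listens on port 8500 by default):

```shell
# Query the list of nodes known to the local agent
curl http://localhost:8500/v1/catalog/nodes
```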


Similarly, we can run the Consul agent in client mode and join that client to the server cluster, which gives us our querying mechanism, dashboard, and cluster monitoring.

Service discovery is another great feature of Consul. For each of our infrastructure services, we create a separate service configuration file in JSON format. These files should be kept inside the consul.d configuration folder so the Consul agent can pick them up, so we first need to create consul.d inside the /etc/ folder.
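Creating the directory is a one-liner:

```shell
# Create the configuration directory the agent will read
sudo mkdir -p /etc/consul.d
```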

Let us assume we have a service named “nginx” running on port 80. We will create a service configuration file for “nginx” inside our consul.d folder.
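A minimal service definition might look like the following (saved, for example, as /etc/consul.d/nginx.json; the tag is an example):

```json
{
  "service": {
    "name": "nginx",
    "tags": ["web"],
    "port": 80
  }
}
```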

Later, when we start our agent, we can see that the services defined inside the consul.d folder are synced with the Consul agent.

This means the service is registered with the Consul agent, so the availability and health of the node can be shared across the cluster.

We can query the service using either DNS or the HTTP API. If we are using DNS, we use dig for the query, and the DNS name takes the form SERVICE_NAME.service.consul. If we have multiple services for the same application, we can separate them with tags, which gives a DNS name like TAG.SERVICE_NAME.service.consul. Since we have internal DNS names within the cluster, we can avoid the DNS issues that usually occur when a load balancer fails.
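For our example nginx service, the DNS query would look something like this (Consul’s DNS interface listens on port 8600 by default):

```shell
# Query the service record through the agent's DNS interface
dig @127.0.0.1 -p 8600 nginx.service.consul

# Query only the instances carrying the "web" tag
dig @127.0.0.1 -p 8600 web.nginx.service.consul
```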

If we use the HTTP API for querying the service, the request looks like this:
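Using curl against the local agent, for our example nginx service:

```shell
# Look up the "nginx" service through the HTTP API
curl http://localhost:8500/v1/catalog/service/nginx
```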


So here we can see how helpful Consul is for service discovery, right?

Just like service discovery, health checking of nodes is also taken care of by Consul. Consul exposes the status of each node, so we can easily detect failures across the cluster. For this example, I am crashing the server manually.
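A health check can be attached to the service definition; the check command and interval below are example values:

```json
{
  "service": {
    "name": "nginx",
    "port": 80,
    "check": {
      "script": "curl -s localhost:80 > /dev/null",
      "interval": "10s"
    }
  }
}
```

With this in place, the agent runs the check every ten seconds and gossips the result, so a failing nginx shows up across the cluster.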

Usually, infrastructure configuration is stored as key/value pairs, and since Consul provides a key/value store, we can use it for dynamic configuration.
For example:
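As a sketch, values can be written and read through the key/value HTTP API; the key and value here are examples:

```shell
# Store a configuration value in Consul's key/value store
curl -X PUT -d 'redis.example.local' http://localhost:8500/v1/kv/config/redis_host

# Read it back (values are returned base64-encoded in a JSON envelope)
curl http://localhost:8500/v1/kv/config/redis_host
```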

Since key/value configuration works so well for infrastructure, this gives us a distributed, asynchronous solution for centralized dynamic configuration.

Another big feature of Consul is a UI for everything: we can check the health of cluster members, store and delete key/value pairs, manage services, and so on. To get this dashboard, point the browser to:

http://consul_client_IP:8500/ui

And for a live demo, Consul provides a demo dashboard (https://demo.consul.io/ui/).

Setting up Consul Using Ansible

For installation and basic configuration, download the Ansible role and simply run the sample playbook from here (https://github.com/PrabhuVignesh/consul_installer).

OR

Download the Ansible role from Ansible Galaxy:

Or simply download the Ansible playbook with a Vagrantfile from here (https://github.com/PrabhuVignesh/consul_experiment) and follow the instructions in the README.md file.

Conclusion

Converting your application into microservices is not a big deal; making it a scalable application is the challenging part. These challenges can be solved by combining tools like Consul, Serf, and messaging-queue tools. Together they make your microservices scalable, fault tolerant, and highly available for zero-downtime applications.
