Docker Multi-Host Networking Tutorial – Using Consul


Docker now has production-ready multi-host networking capabilities, along with commands to manage networks from the command line. This guide walks you through basic Docker multi-host networking using docker machine and the consul service discovery tool.

Note: There is only one prerequisite for this tutorial. You just need the latest Docker Toolbox installed on your system.
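If you want to verify the toolbox installation before starting, the following commands (run from a shell that has the toolbox binaries on its PATH) should print the installed versions of the docker client and docker-machine.

docker --version
docker-machine version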

Docker Multi-Host Networking

In this setup, we will create three docker hosts on VirtualBox using docker machine. One host runs consul, and the other two hosts share network information through the consul service discovery container on the first host. You can learn more about consul from here.

Note: To learn more about Docker networking basics, refer to the official Docker documentation.

Follow the steps given below to set up multi-host networking.

1. Create a docker machine named “host1-consul”

docker-machine create -d virtualbox host1-consul

2. Launch a consul container on the host1-consul host using the following docker run command.

docker $(docker-machine config host1-consul) run -d -p "8500:8500" -h "consul" progrium/consul -server -bootstrap

3. You can verify the running container status using the following command.

docker $(docker-machine config host1-consul) ps

4. Now, launch the second docker machine host with parameters to register it with consul running on the host1-consul host.

docker-machine create -d virtualbox  --engine-opt="cluster-store=consul://$(docker-machine ip host1-consul):8500" --engine-opt="cluster-advertise=eth1:0" host2

5. Launch the third docker machine.

docker-machine create -d virtualbox  --engine-opt="cluster-store=consul://$(docker-machine ip host1-consul):8500" --engine-opt="cluster-advertise=eth1:0" host3

Now the two hosts have just the default networks, which allow only single-host communication.

6. To have a multi-host network we need to create an overlay network on host2 using the “docker network” command as shown below.

docker $(docker-machine config host2) network create -d overlay myapp

7. Now, if you check the networks on host3, you will be able to see the overlay network we created on host2. This is because both hosts are registered with consul, and the network information is shared among all the hosts registered with it.

docker $(docker-machine config host2) network ls
docker $(docker-machine config host3) network ls

Now, if you launch containers on the different hosts, you will be able to connect to them using the container name. Let's test this by launching an Nginx container on host2 and then downloading the default Nginx page from a busybox container on host3.

8. Launch an Nginx container on host2, attaching it to the “myapp” network we created.

docker $(docker-machine config host2) run -itd --name=webfront --net=myapp nginx

9. Verify the running container.

 docker $(docker-machine config host2) ps

10. Now, launch a busybox container on host3 with parameters to download the homepage of nginx running on host2.

docker $(docker-machine config host3) run -it --rm --net=myapp busybox wget -qO- http://webfront

If the above command returns HTML output, it means the containers on the two hosts are able to communicate with each other over the overlay network you created.
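As an optional extra check, you can inspect the overlay network from either host. Assuming the setup above, the output on host2 should list the webfront container under its “Containers” section.

docker $(docker-machine config host2) network inspect myapp
docker $(docker-machine config host3) network inspect myapp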


How To Setup an NFS Server and Client For File Sharing


In this tutorial, I will explain how to set up an NFS server and client for mounting an NFS share across systems in a network.

Note: This tutorial is based on Ubuntu 14.04 server. You can create a VM easily on the cloud or using Vagrant.

Setup an NFS server

In this setup, I have an Ubuntu server with the IP address 192.168.0.2.

1. Refresh the apt package index

sudo apt-get update

2. Install the NFS package.

sudo apt-get install nfs-kernel-server

3. Next, we need to create a directory that can be shared with other hosts in the network. I am going to create a folder in the /var directory.

sudo mkdir /var/nfs

4. Change the ownership of the NFS folder to “nobody” and “nogroup”.

sudo chown nobody:nogroup /var/nfs

“nobody” is a user present in most Linux distros; it belongs to “nogroup” and has no privileges on system programs or files.
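If you want to confirm that this user and group exist on your server, a quick check is shown below; on Ubuntu 14.04 it should report the nobody user as a member of nogroup.

id nobody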

5. All the NFS share configurations are set in the /etc/exports file, where we can give specific permissions for a client to access the files in the share.

In this example, I have a client with IP 192.168.0.3. Open the exports file and make an entry as shown below.

/var/nfs    192.168.0.3(rw,sync,no_subtree_check)
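If you would rather allow an entire subnet instead of a single client, /etc/exports also accepts CIDR notation. For example, assuming your LAN is 192.168.0.0/24 (the subnet here is only an illustration), the entry would look like this.

/var/nfs    192.168.0.0/24(rw,sync,no_subtree_check)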

6. The NFS export table holds all the exported shares. You can build it from /etc/exports using the following command.

sudo exportfs -a
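You can optionally verify that the share was exported with the expected options using the verbose flag.

sudo exportfs -v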

7. Now, let’s start the NFS service.

sudo service nfs-kernel-server start

That’s it! We have an NFS server up and running.

Setup NFS client Node

All the servers that need access to the NFS share must have the NFS client packages installed.

1. Refresh the apt list and install the client package.

sudo apt-get update
sudo apt-get install nfs-common

2. Let’s create a folder that will be mounted to the remote NFS share.

sudo mkdir /mnt/nfs/

3. Now, mount the remote NFS directory onto our local /mnt/nfs directory.

sudo mount 192.168.0.2:/var/nfs /mnt/nfs

4. Verify the mount using the following command. You will see the NFS share listed in the file system.

df -h

The output looks like the following.

192.168.0.2:/var/nfs   40G  1.4G   37G   4% /mnt/nfs

5. Now you can test the share by creating a test file in the /var/nfs folder on the NFS server and listing it from your client's /mnt/nfs folder, as shown below.
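For example (the file name test.txt is only an illustration), create the file on the NFS server,

sudo touch /var/nfs/test.txt

and then list the mounted folder on the client; the file should show up there.

ls -l /mnt/nfs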

That’s it! You now have an NFS share server and client up and running. You can add more clients in the same way we set up our first client.


How To Setup an Elasticsearch Cluster – Beginners Guide


In part I, we learned the basic concepts of elasticsearch. In this tutorial, we will learn how to set up an elasticsearch cluster with a client, a master, and a data node.

Setup an Elasticsearch Cluster

For this setup to work, as a prerequisite, you need three virtual machines with enough memory. This tutorial is based on Ubuntu server 14.04. You can set up an Ubuntu server using Vagrant or on any cloud provider.

Do the following before we start configuring the server for elasticsearch.

1. Create three ubuntu 14.04 VM’s with 1GB RAM each.
2. Update all the servers using the following command.

sudo apt-get update

3. Change the hostnames to es-client-01, es-master-01 and es-data-01 to match the client, master and data node roles.
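For example, on the first VM (the other two follow the same pattern), you can set the hostname with the commands below; the /etc/hostname edit makes the change survive a reboot.

sudo hostname es-client-01
echo "es-client-01" | sudo tee /etc/hostname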

4. Edit the /etc/hosts file of all the nodes and make entries for all the hostnames as shown below. Replace the IP addresses with the IP addresses of your VMs.

192.168.4.40			es-client-01
192.168.4.41			es-master-01
192.168.4.42			es-data-01

The above configuration is very important because the nodes will use these hostnames to communicate with each other.
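A quick way to confirm the /etc/hosts entries is to ping the other nodes by hostname. For example, from es-client-01:

ping -c 2 es-master-01
ping -c 2 es-data-01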

Setting up Client Node (es-client-01)

Now that we have the base VMs, let's start with the elasticsearch configuration.

Install Latest Java

Elasticsearch needs a Java runtime because its core is written in Java. You can install the latest Java version by executing the following commands.

1. Add the official oracle java repository.

sudo add-apt-repository ppa:webupd8team/java

2. Now, refresh the package list.

sudo apt-get update

3. You can now install java using the following command.

sudo apt-get install oracle-java8-installer

4. Once installed, verify the installation by checking the java version.

java -version


Install Elasticsearch

1. Download the elasticsearch installation file (for example, using wget).

wget https://download.elasticsearch.org/elasticsearch/release/org/elasticsearch/distribution/deb/elasticsearch/2.2.0/elasticsearch-2.2.0.deb

Note: At the time of writing, the latest elasticsearch release is 2.2.0.

2. Install the downloaded package.

Note: If you have downloaded any version other than 2.2.0, change the package name accordingly.

sudo dpkg -i elasticsearch-2.2.0.deb

3. Start the elasticsearch service.

sudo service elasticsearch start

4. Our node es-client-01 now has the elasticsearch service running, and we will configure it as the client node. You also need to set elasticsearch to start automatically on boot. Use the following command to do that.

sudo update-rc.d elasticsearch defaults 95 10

5. Verify the elasticsearch service by sending an HTTP request to port 9200. By default, elasticsearch runs on port 9200.

curl http://localhost:9200

You should see a JSON response that looks like the following.

{
  "name" : "Crusher",
  "cluster_name" : "elasticsearch",
  "version" : {
    "number" : "2.2.0",
    "build_hash" : "8ff36d139e16f8720f2947ef62c8167a888992fe",
    "build_timestamp" : "2016-01-27T13:32:39Z",
    "build_snapshot" : false,
    "lucene_version" : "5.4.1"
  },
  "tagline" : "You Know, for Search"
}

The above output shows the name of the node, cluster name, and a few other details.

If you do not specify a node name in the configuration, elasticsearch assigns a random name on every restart.

All the elasticsearch configurations are present in the elasticsearch.yml file, which is located in the /etc/elasticsearch folder.

6. Now, the elasticsearch.yml file has to be edited to configure the node as a client node. Open the elasticsearch.yml file located in the /etc/elasticsearch directory and change the configurations as follows.

The configuration file has many sections like cluster, node, paths etc.

Note: Refer to this config file for all the configurations explained below.

Under the cluster section, change the cluster name parameter.

cluster.name: devopscube-production

Under the node section, change the node name parameter and add the other parameters as shown below.

node.name: es-client-01
node.client: true
node.data: false

Under the network section, set the “network.host” parameter to the IP address of your client node.

network.host: 192.168.4.40

Under the discovery section, add the following.

discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["es-client-01", "es-master-01",  "es-data-01"]

The above parameters disable multicast discovery and send unicast messages to the specified hosts. As we have already made hosts entries for all the hostnames, the unicast messages will go to the respective nodes.

7. Save the file and restart the elasticsearch service for the changes to take effect.

sudo service elasticsearch restart

8. Now, we need to make some system-level changes. Open the /etc/security/limits.conf file to raise the open file limit. By default, it is 1024 on Ubuntu. You can check this by running the “ulimit -n” command.

Add the following lines at the end of the file.

*        soft   nofile   64000
*        hard   nofile   64000
root     soft   nofile   64000
root     hard   nofile   64000

9. Open the /etc/pam.d/common-session file and add the following line.

session required                        pam_limits.so

10. Open /etc/pam.d/common-session-noninteractive and add the following line.

session required                        pam_limits.so

11. It is recommended to set the heap size to half of the RAM. This tutorial is based on a 1 GB RAM VM, so we will configure a 512 MB heap.

You need to set an environment variable for elasticsearch heap size. You can do this by editing the /etc/environment file. The file should look like the following.

PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games"
ES_HEAP_SIZE="512M"

Once edited, you should reboot the server.
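After the reboot, you can optionally confirm both the file limit and the heap setting. The first command should now report 64000, and the second (elasticsearch's node stats API) should show a max heap of roughly 512 MB. The hostname relies on the /etc/hosts entries made earlier.

ulimit -n
curl "http://es-client-01:9200/_nodes/stats/jvm?pretty"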

Setting Up Master and Data Node

Follow all the steps we used to set up the client node for the master and data nodes. Only the elasticsearch.yml configuration differs; use the values given below. All the other steps are the same for all the nodes.

For master node (elasticsearch.yml)

Under the node section of the elasticsearch.yml file, add the following. Refer to this file.

node.name: es-master-01
node.master: true
node.data: false

Under the network section, set the “network.host” parameter to the master node's IP address.

network.host: 192.168.4.41

For data node (elasticsearch.yml)

Under the node section, add the following. Refer to this for the configurations.

node.name: es-data-01
node.client: false
node.data: true

Under the network section, set the data node's IP address as you did for the client and master nodes, as shown below.
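Based on the /etc/hosts mapping made earlier (es-data-01 is 192.168.4.42 in this example), the network section of the data node would look like this.

network.host: 192.168.4.42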

Once you configure all three nodes, restart the elasticsearch service on all of them.

sudo service elasticsearch restart

Now you will have a working elasticsearch cluster.
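To confirm that all three nodes have joined the same cluster, you can query the cluster health API from any node; the response should report "number_of_nodes" : 3.

curl "http://es-client-01:9200/_cluster/health?pretty"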

Installing elasticsearch GUI plugin

Once you set up the elasticsearch cluster, you can view the cluster status from the client node (es-client-01) using the following command.

curl http://es-client-01:9200/_cluster/stats

But the output is not that easy to comprehend, so you can make use of the elasticsearch head plugin to view the cluster details in a browser UI.

We will install this plugin on our client node. To install the plugin, navigate to the “/usr/share/elasticsearch/bin” directory and execute the following command.

./plugin install mobz/elasticsearch-head

Restart the elasticsearch service for the plugin to work.

sudo service elasticsearch restart

Now, if you access http://<IP>:9200/_plugin/head/ in your browser, you will be able to see all the cluster details.

Wrapping Up

In this tutorial, I have explained all the steps to set up a three-node elasticsearch cluster. In the next article, I will cover more on indexing strategies for elasticsearch.

Also, you can take a look at the devopscube vagrant repository for setting up the three node cluster. Elasticsearch vagrant cluster setup
