How to Provision Docker Hosts on Azure using Docker Machine


Docker Machine helps you spin up Docker hosts locally as well as on various cloud providers. This tutorial will teach you how to provision Docker hosts on Azure using the Docker Machine utility.

Prerequisites

1. The system should be configured with the Azure CLI tools and the publishsettings file. If you do not have the CLI set up, follow this tutorial: How to set up Azure CLI
2. You should have the latest docker-machine installed on your system.

Provision Docker Hosts on Azure

Provisioning Docker hosts on Azure is relatively easy. Docker Machine provisions the VM and installs the latest Docker Engine on it based on its OS family. You can then deploy and manage containers from the laptop or host where Docker Machine is installed.

Creating a host using docker machine

Follow the instructions given below.

Execute the following command to create a docker host on azure using docker machine.

Note: Change YOUR-SUBSCRIPTION ID and USER-DEFINED-NAME accordingly. USER-DEFINED-NAME should be a unique name; if you use a generic name, you may get an error saying that the name already exists.

docker-machine create -d azure --azure-subscription-id="YOUR-SUBSCRIPTION ID" --azure-subscription-cert="machine-cert.pem" USER-DEFINED-NAME

Example

By default, the Docker Machine Azure driver creates an Ubuntu 14.04 host.

docker-machine create -d azure --azure-subscription-id="52812b7b-7295-4e76-9c75-3b74b9abdc1f" --azure-publish-settings-file="credentials.publishsettings" --azure-location="East US" zeus-docker

To list all the environment variables Docker Machine sets for accessing the machine you have created, just execute the following command.

docker-machine env MACHINE-NAME

Output

mike@host:~/keys$ docker-machine env zeus-docker
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://zeus-docker.cloudapp.net:2376"
export DOCKER_CERT_PATH="/home/mike/.docker/machine/machines/zeus-docker"
export DOCKER_MACHINE_NAME="zeus-docker"
mike@host:~/keys$

Now, to run docker commands on your new machine, use the following command.

eval "$(docker-machine env zeus-docker)"

In your current terminal, you can now run all docker commands against the Azure Docker host.
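
For example, assuming the environment variables above have been loaded with eval, the following commands run against the remote engine on Azure (the nginx run is only an illustration):

docker-machine ls
docker info
docker run -d -p 80:80 nginx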

Moreover, you can create a Swarm cluster on Azure using Docker Machine. We will cover that in the next article. Let us know in the comments section if you face any errors.


Puppet Hiera Tutorial – Beginners Guide


When you write a Puppet module, you might not want to hard-code all the data into the module, because everyone who uses the module may need to supply their own data. It is a good practice to separate the data from the code, and this can be achieved using Hiera.

Note: This tutorial is based on puppet enterprise.

Puppet Hiera Tutorial

In this Puppet Hiera tutorial, you will learn the basics of Hiera and how to use it in Puppet modules.

Hiera is a key-value lookup tool that holds all the data that has to be dynamically placed in a module. You can store usernames, passwords, DNS server details, LDAP server details, etc. Moreover, you can encrypt the data in Hiera for security. Hiera resides on the Puppet server for global access, unless the client is operating in a masterless setup; in that case, it resides on the client itself.

Hiera Configuration File

Hiera comes bundled with Puppet Enterprise, so you don’t have to install it separately, but you might want to change its configuration to suit your needs.

The Hiera configuration file resides in the “/etc/puppetlabs/code” directory. It is a YAML file named “hiera.yaml”.

A normal configuration file looks like the following.

---
:backends:
  - yaml
  - json
:yaml:
  :datadir: /etc/puppetlabs/code/environments/%{::environment}/hieradata
:json:
  :datadir: /etc/puppetlabs/code/environments/%{::environment}/hieradata
:hierarchy:
  - "node/%{::fqdn}"
  - common

:backends – Hiera supports yaml, json and puppet class backends.

:datadir – The location where you place your hiera data. In the above code snippet, you can see an interpolated string “%{::environment}”. This is to dynamically select an environment in case you have different environments specified on the Puppet server. By doing so, you can access the environment-specific hiera data.

If you use both yaml and json data directories, you need to specify both as shown in the above code snippet.

:hierarchy – This represents the folder and file hierarchy inside the “:datadir”, i.e., the hieradata folder. You can use interpolation to dynamically pass the file name.

Creating Hiera Data files

Hiera data files can be YAML or JSON files as mentioned above. All the data files reside inside the “hieradata” folder of the respective environment.

You can keep all the default values in the common.yaml file in the hieradata folder. For example,

/etc/puppetlabs/code/environments/%{::environment}/hieradata/common.yaml

If you have any node specific data, you can have the hierarchy as follows.

/etc/puppetlabs/code/environments/%{::environment}/hieradata/node/mynode.example.com.yaml

A sample YAML-based data file is shown below. You can have all the values in key-value fashion. You can also nest data elements if necessary.

---
ldap_servers:
  - 10.132.17.196
  - 10.132.17.195

users:
  joe:
    home: '/home/joe'
  jenkins:
    password: 'mysecret'

Accessing Hiera Data using CLI

Once you have the hiera data ready in the puppet server, you can check the values using hiera CLI.

To access a value, just use the hiera command with the key as shown below.

hiera ldap_servers

If you have used interpolation in the “:datadir” configuration, you should add the parameters as shown below.

 hiera ldap_servers ::environment=production

If you want to access the value for a key from a YAML file that is higher in the hierarchy, you need to specify that in the lookup. Otherwise, it will return the value from the common.yaml file.

A higher-hierarchy lookup, for example, a data source from hieradata/node/mynode.example.com.yaml, will look like the following.

hiera ldap_servers ::fqdn=mynode.example.com

Accessing Hiera Data From Modules

Accessing Hiera data from a module is relatively easy. Use the following syntax in your module to access the data directly.

$ldapservers = hiera("ldap_servers")

$ldapservers is just a Puppet variable. You can also use the hiera() lookup inline, without assigning it to a variable.
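
For example, a minimal sketch of an inline lookup (the file resource and the “motd_content” key are hypothetical, just to illustrate the pattern):

file { '/etc/motd':
  ensure  => file,
  content => hiera('motd_content'),
}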

If you want to get all the ldap_servers values found across the hierarchy merged into an array, you can use the following syntax.

$ldapservers = hiera_array("ldap_servers")

Hiera Arguments

While accessing hiera data through modules, you can set a default value to use if hiera returns nil. It has the following syntax.

$ldapservers = hiera_array("ldap_servers","10.32.34.45")

Setting Up Azure CLI on Ubuntu Linux


Azure has a great web interface called the Azure portal for performing all the functions. But if you prefer command line tools over a graphical user interface, you can make use of the Azure command line interface (CLI) to manage all Azure resources.

Setting up Azure CLI on Ubuntu

This tutorial will guide you through setting up the Azure CLI on Ubuntu Linux systems.

The Azure CLI needs the Node.js runtime to perform its operations. Execute the following commands to install the Azure CLI using npm.

sudo apt-get install nodejs-legacy
sudo apt-get install npm
sudo npm install -g azure-cli

If you would like to install it from source, you can use the following commands.

git clone https://github.com/Azure/azure-xplat-cli.git
cd ./azure-xplat-cli
npm install
bin/azure 

Verify Installation:

Once installed, verify the installation using the following command.

azure help

Configuring CLI with Azure Subscription

To connect to your azure subscription, you should configure your cli for authentication. You can do this by executing the following command.

azure login

The above command will output a URL and a code to get authenticated via a browser. Open the URL in a browser, enter the code and log in to your Azure account. Once you are done, you will see the confirmation in your command line as shown below.

root@host:~# azure login
info:    Executing command login
info:    To sign in, use a web browser to open the page https://aka.ms/devicelogin. Enter the code SDFHRGF to authenticate. If you're signing in as an Azure AD application, use the --username and --password parameters.
info:    Added subscription Free Trial
info:    Setting subscription "Free Trial" as default
info:    login command OK
root@host:~#

Adding Subscriptions

If you have multiple subscriptions with your Azure account, you can set a specific subscription for the CLI. To do that, get the subscription name using the following command.

azure account list

Then, you can set a specific subscription using the following command.

azure account set your-subscription
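
For example, to switch to the "Free Trial" subscription shown in the earlier login output:

azure account set "Free Trial"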

Authentication Using Publishsettings File

You can also use the publishsettings file to authenticate against Azure. Execute the following command to download the publishsettings file.

azure account download

You will get the below output.

user@host:~/test$ azure account download
info:    Executing command account download
info:    Launching browser to http://go.microsoft.com/fwlink/?LinkId=345334
user@host:~/test$

Go to the link specified in your output and download the file. Once downloaded, execute the following command to import the file.

azure account import <downloaded-publish-settings-file>

AWS Codecommit Tutorial – Beginners Guide


In a typical private environment, if you want to host your code using solutions like GitLab, Atlassian Stash, etc., you will need to manage high availability and scalability for your production systems yourself. AWS CodeCommit is a private, managed source control system that is secure, highly available and scalable. It is Git based and works the same way as other Git-based source control systems like GitHub, Stash, etc. This allows you to easily migrate your code repositories to CodeCommit and keep the same workflow you are used to. Moreover, CodeCommit provides out-of-the-box encryption for your source code at rest and in transit. If your applications are hosted in AWS, CodeCommit would be a good fit for all your source code.

AWS Codecommit Tutorial

This AWS CodeCommit tutorial will guide you to get started with the AWS CodeCommit service. To follow this tutorial, you need to have the latest AWS CLI installed on your system. If you do not have the CLI set up, follow this link for the setup. It is always advisable to create an IAM user and attach a policy with the required CodeCommit access, as sketched below.
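
For example, a minimal sketch of creating such a user and attaching the AWS-managed CodeCommit policy (the user name is arbitrary; you may prefer a more restrictive custom policy):

aws iam create-user --user-name codecommit-user
aws iam attach-user-policy --user-name codecommit-user --policy-arn arn:aws:iam::aws:policy/AWSCodeCommitFullAccess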

You Might Like: AWS account security tips

Creating a repository

Like you do in any source control system, the first step is to create a repository for your project. Use the following syntax for creating a repository in codecommit.

aws codecommit create-repository --repository-name MyProjectRepo --repository-description "Write a description about your project"
user@host:~$ aws codecommit create-repository --repository-name myapp --repository-description "This is the code repository for myapp"
{
    "repositoryMetadata": {
        "repositoryName": "myapp", 
        "cloneUrlSsh": "ssh://git-codecommit.us-east-1.amazonaws.com/v1/repos/myapp", 
        "lastModifiedDate": 1449065689.399, 
        "repositoryDescription": "This is the code repository for myapp", 
        "cloneUrlHttp": "https://git-codecommit.us-east-1.amazonaws.com/v1/repos/myapp", 
        "creationDate": 1449065689.399, 
        "repositoryId": "2e859c05-06f6-458e-899d-cbc9a589fd33", 
        "Arn": "arn:aws:codecommit:us-east-1:146317666315:myapp", 
        "accountId": "146317666315"
    }
}
user@host:~$

Once the command execution is successful, it returns output containing the CodeCommit repository URLs for both SSH and HTTPS.

Authenticating Local Git with CodeCommit

The next step is to configure your local Git to authenticate against CodeCommit, so that you have permissions to clone, push and do all the other remote repository tasks. You can do that using the credential helper as shown below.

git config --global credential.helper '!aws codecommit credential-helper $@'
git config --global credential.UseHttpPath true

Common git config

user@host:~/projects$ git config --global user.email "user@example.com"
user@host:~/projects$ git config --global user.name "devopscube"

Cloning the Repository

You can clone the remote codecommit repository to your local workstation using the normal git clone command and the repository url you got in the output section when you created the repository.

user@host:~/projects$ git clone https://git-codecommit.us-east-1.amazonaws.com/v1/repos/myapp myrepo
Cloning into 'myrepo'...
warning: You appear to have cloned an empty repository.
Checking connectivity... done.
user@host:~/projects$

Performing Common Git Functions

Now you have an empty repository cloned from CodeCommit. You can perform all the normal Git operations, as you would with any Git-based source control system, as shown below.

Note: If you are using Ubuntu 14.04 as your workstation, you are likely to get a “gnutls_handshake() failed” error. You can rectify this error by following this solution. gnutls_handshake() failed solution

user@host:~/projects/myrepo$ touch test.txt
user@host:~/projects/myrepo$ git status
On branch master

Initial commit

Untracked files:
  (use "git add ..." to include in what will be committed)

	test.txt

nothing added to commit but untracked files present (use "git add" to track)
user@host:~/projects/myrepo$ git add test.txt
user@host:~/projects/myrepo$ git commit -m "first commit"
[master (root-commit) cd12dd2] first commit
 1 file changed, 0 insertions(+), 0 deletions(-)
 create mode 100644 test.txt
user@host:~/projects/myrepo$ git push -u origin master
Counting objects: 3, done.
Writing objects: 100% (3/3), 206 bytes | 0 bytes/s, done.
Total 3 (delta 0), reused 0 (delta 0)
remote: 
To https://git-codecommit.us-east-1.amazonaws.com/v1/repos/myapp
 * [new branch]      master -> master
Branch master set up to track remote branch master from origin.
user@host:~/projects/myrepo$

Creating a Branch

You can create a branch in your repository using the “create-branch” command. For this, you must pass the commit ID that the new branch should point to. You can get the commit IDs using the “git log” command. An example is shown below.

user@host:~/projects/myrepo$ git log
commit cd12dd2b35afc3768a2a025654fa01e6ddb54fa4
Author: devopscube <[email protected]>
Date:   Thu Dec 3 18:58:57 2015 +0530

    first commit
user@host:~/projects/myrepo$

Once you get the commit id, use the following command to create a new branch. Replace the repository name, branch name and commit id accordingly.

aws codecommit create-branch --repository-name myapp --branch-name newfeature --commit-id cd12dd2b35afc3768a2a025654fa01e6ddb54fa4
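
The new branch exists only in the remote repository at this point. To start working on it locally, fetch and check it out with standard Git commands:

git fetch origin
git checkout newfeature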

List all Branches

You can list all the branches associated with a repository using “list-branches” as shown below.

aws codecommit list-branches --repository-name myapp
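
To see which commit a particular branch points to, you can use the "get-branch" command (shown here with the branch created earlier):

aws codecommit get-branch --repository-name myapp --branch-name newfeature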

Rename a Repository

A repository can be renamed using the “update-repository-name” command.

aws codecommit update-repository-name --old-name myapp --new-name MyNewApp
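
Note that renaming a repository also changes its clone URL. If you already have a local clone, point it at the new URL (the URL below assumes the same region and account as in the earlier output):

git remote set-url origin https://git-codecommit.us-east-1.amazonaws.com/v1/repos/MyNewApp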

Getting Repository Details

To get information about more than one repository, you can use the batch-get-repositories command as shown below.

aws codecommit batch-get-repositories --repository-names myapp railsapp
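
For a single repository, the "get-repository" command returns the same kind of metadata:

aws codecommit get-repository --repository-name myapp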

Deleting a Repository

The “delete-repository” command is used with the CLI to delete a repository.

aws codecommit delete-repository --repository-name MyNewApp

Output:

user@host:~/projects/myrepo$ aws codecommit delete-repository --repository-name MyNewApp
{
    "repositoryId": "2e859c05-06f6-458e-899d-cbc9a589fd33"
}
user@host:~/projects/myrepo$

[Solution] gnutls_handshake() failed GIT repository – AWS codecommit


Note: This solution is not limited to CodeCommit; it also applies to other gnutls_handshake related issues on Ubuntu.

If you have the AWS CLI installed on Ubuntu 14.04 and are working with AWS CodeCommit, you are likely to get a “gnutls_handshake() failed” error when you try to clone a repository created in CodeCommit. Do not worry, there is a solution for it.

[Solution] gnutls_handshake() failed

Follow the steps given below to rectify this issue.

1. Install build-essential, fakeroot and dpkg-dev using the following command.

sudo apt-get install build-essential fakeroot dpkg-dev

2. Create a directory named git-rectify in the home folder using the following command.

mkdir ~/git-rectify

3. cd into the git-rectify directory and get the git source files.

cd ~/git-rectify
apt-get source git

4. Install all the git dependencies.

sudo apt-get build-dep git

5. Install libcurl with all development files.

sudo apt-get install libcurl4-openssl-dev

6. Unpack all the source packages using the following command.

Note: The name “git_1.9.1-1ubuntu0.1” could vary based on the latest version. So look into the directory for the correct version name.

dpkg-source -x git_1.9.1-1ubuntu0.1.dsc

7. cd into the “git_1.9.1” folder and open the control file located inside the debian folder (git_1.9.1/debian/control) in a text editor. Replace all occurrences of “libcurl4-gnutls-dev” with “libcurl4-openssl-dev”. Also open the “debian/rules” file and delete the line “TEST=test”.
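
If you prefer doing this from the command line, the same edits can be made with sed from inside the git_1.9.1 directory (back up the files first if you want to be safe):

sed -i 's/libcurl4-gnutls-dev/libcurl4-openssl-dev/g' debian/control
sed -i '/TEST=test/d' debian/rules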

8. Build the package files using the following command.

sudo dpkg-buildpackage -rfakeroot -b

9. Install the new git package by executing the following command.

Note: The package name is based on the git version and your system architecture, and the built .deb file is placed in the parent directory (~/git-rectify). So have a look there for the exact package name, which could be different for you.

sudo dpkg -i git_1.9.1-1ubuntu0.1_amd64.deb

That’s it! Now you will be able to clone and do all the Git-related activities against the CodeCommit service. Let us know if you are not able to rectify the issue after performing all the above steps.


How To Monitor Docker Containers – Host Based Monitoring


It is important to get visibility into the status and health of Docker environments as deployments grow larger. In this tutorial, we will look into a few options for monitoring Docker containers.

Monitoring containers using cAdvisor

cAdvisor is a tool created by Google for its own container infrastructure; support for Docker containers was added later. cAdvisor stands for container advisor. It helps you gain insight into how much resource is being used by Docker containers and helps you understand the performance characteristics of the running containers. cAdvisor has a GUI and also exposes APIs to obtain the data programmatically.

cAdvisor is well suited for monitoring your running containers and their resource usage. There is an official cAdvisor image on Docker Hub, and using that image is the easiest way to get started with its functionality.

In this section we will look at how to deploy a cAdvisor container to monitor our Docker containers.

Launching a cAdvisor container:

You can launch a cAdvisor container using Google’s official image “google/cadvisor”, as shown in the following command.

sudo docker run \
  --volume=/:/rootfs:ro \
  --volume=/var/run:/var/run:rw \
  --volume=/sys:/sys:ro \
  --volume=/var/lib/docker/:/var/lib/docker:ro \
  --publish=8080:8080 \
  --detach=true \
  --name=cadvisor \
  google/cadvisor:latest

Accessing UI:

The cAdvisor GUI can be accessed on host port 8080. Point your browser to the IP of the host running cAdvisor, followed by port 8080: http://<host IP>:8080.
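
cAdvisor also exposes its data over a REST API. For example, a quick way to pull container stats as JSON (the v1.3 API version is an assumption; adjust it to match your cAdvisor release):

curl http://<host IP>:8080/api/v1.3/containers/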

Also read: Getting started with docker machine

Docker Host Monitoring Using Sensu

Sensu is an open-source monitoring framework that you can self-host to provide a centralized check and metric service.

Setting up Sensu server:

In this section we will learn how to set up a Sensu server using a Docker container. To deploy a Sensu server, you can make use of a prebuilt Sensu server Docker image, hiroakis/docker-sensu-server. This container runs the Sensu server, the Uchiwa web interface, a RabbitMQ server, Redis and the Sensu API. There is no dedicated functionality to monitor Docker; however, using the plugin system you can configure status checks and container metrics.

Before deploying the Sensu server, you must create a check that will be loaded into the server. Follow the instructions below to deploy the Sensu server container.

  1. Create a directory named sensu and cd into it.
mkdir sensu && cd sensu
  2. Create a file named check-docker.json and copy the following content into it.
{
  "checks": {
    "load_docker_metrics": {
      "type": "metric",
      "command": "load-docker-metrics.sh",
      "subscribers": [
        "docker"
      ],
      "interval": 10
    }
  }
}

The above check says that every Sensu client subscribed to "docker" will run a script named load-docker-metrics.sh. We will deploy this script on all Sensu clients (agents).

  3. From the same directory where you have the check-docker.json file, run the following docker command to deploy the Sensu server.
$ sudo docker run -d --name sensu-server \
  -p 3000:3000 \
  -p 4567:4567 \
  -p 5671:5671 \
  -p 15672:15672 \
  -v $PWD/check-docker.json:/etc/sensu/conf.d/check-docker.json \
  hiroakis/docker-sensu-server
  4. Now, you will be able to access the Uchiwa dashboard at http://host-ip:3000.

Setting up Sensu client:

Now we have our Sensu server up and running. The next step is to deploy Sensu clients (agents) on the hosts running Docker containers.

Follow the instructions below to configure sensu client on nodes running Docker containers.

  1. Create a directory named sensu-client and cd into the directory.
$ mkdir  sensu-client && cd sensu-client 
  2. While deploying the Sensu server, we created a check saying that all Sensu agents will run a script named load-docker-metrics.sh. Create a file named load-docker-metrics.sh and copy the following script into it. This script uses the native Docker API to get the list of running containers, images, etc.
#!/bin/bash
set -e

# Get the count of running containers
containers_running=$(echo -e "GET /containers/json HTTP/1.0\r\n" | nc -U /var/run/docker.sock \
    | tail -n +5 \
    | python -m json.tool \
    | grep \"Id\" \
    | wc -l)

# Get the count of all the containers
total_containers=$(echo -e "GET /containers/json?all=1 HTTP/1.0\r\n" | nc -U /var/run/docker.sock \
    | tail -n +5 \
    | python -m json.tool \
    | grep \"Id\" \
    | wc -l)

# Count all images
images_count=$(echo -e "GET /images/json HTTP/1.0\r\n" | nc -U /var/run/docker.sock \
    | tail -n +5 \
    | python -m json.tool \
    | grep \"Id\" \
    | wc -l)

echo "docker.HOST_NAME.containers_running ${containers_running}"
echo "docker.HOST_NAME.total_containers ${total_containers}"
echo "docker.HOST_NAME.images_count ${images_count}"

if [ ${containers_running} -lt 3 ]; then
    exit 1;
fi

In the above script, replace HOST_NAME with the hostname of the server on which you are deploying the Sensu client.

  3. Make the script executable using the following command.
$ chmod 700 load-docker-metrics.sh 
  4. From the directory containing the load-docker-metrics.sh script, execute the following docker command.

Note: In the following command, replace the following values.

SENSU_SERVER_IP: IP of the host running the Sensu server

RABBITMQ_USER: sensu (default value)

RABBITMQ_PASSWORD: password (default value)

CLIENT_NAME: a name for this Sensu client

CLIENT_IP: IP address of the host running the Sensu client

$ sudo docker run -d --name sensu-client --privileged \
  -v $PWD/load-docker-metrics.sh:/etc/sensu/plugins/load-docker-metrics.sh \
  -v /var/run/docker.sock:/var/run/docker.sock \
  usman/sensu-client SENSU_SERVER_IP RABBITMQ_USER RABBITMQ_PASSWORD CLIENT_NAME CLIENT_IP
  5. Once the Sensu client container is launched, the host will get registered with the Sensu server in a few seconds. Access the Sensu Uchiwa dashboard and you will see the registered client with the checks configured.
  6. If you click on the registered client and look at the status of keepalive, all the values will be shown as 0. This means your Docker host is running without any critical issues. If it turns to a non-zero number, it means the Docker daemon is not running as expected.

You might like: List of devops blogs and resources

Docker stats command

The Docker client has native functionality to inspect the resource consumption of Docker containers. You need to specifically mention the names of the containers with the "docker stats" command to look at their stats. If you haven't set a memory limit for a container, the stats command will show the total memory available on the host as the limit. It doesn't mean that the container can actually use that many resources.

Execute the following command to see the stats of a container.

Syntax: docker stats <container name or id>
$ sudo docker stats tender_kowalevski 

To get more detailed container statistics, you can use the Docker stats API as shown below.

Syntax: echo -e "GET /containers/<container_name>/stats HTTP/1.0\r\n" | nc -U /var/run/docker.sock 
$ echo -e "GET /containers/tender_kowalevski/stats HTTP/1.0\r\n" | nc -U /var/run/docker.sock

The response for the above REST request is shown below.

{
   "read":"2015-05-13T10:07:33.885214393Z",
   "network":{
      "rx_bytes":648,
      "rx_packets":8,
      "rx_errors":0,
      "rx_dropped":0,
      "tx_bytes":648,
      "tx_packets":8,
      "tx_errors":0,
      "tx_dropped":0
   },
   "cpu_stats":{
      "cpu_usage":{
         "total_usage":13763940722,
         "percpu_usage":[
            13763940722
         ],
         "usage_in_kernelmode":20000000,
         "usage_in_usermode":90000000
      },
      "system_cpu_usage":101902770000000,
      "throttling_data":{
         "periods":0,
         "throttled_periods":0,
         "throttled_time":0
      }
   },
   "memory_stats":{
      "usage":9994240,
      "max_usage":13570048,
      "stats":{
         "active_anon":9986048,
         "active_file":0,
         "cache":73728,
         "hierarchical_memory_limit":18446744073709551615,
         "inactive_anon":0,
         "inactive_file":0,
         "mapped_file":0,
         "pgfault":7903,
         "pgmajfault":0,
         "pgpgin":3641
      },
      "failcnt":0,
      "limit":1040683008
   }
}
