How to Setup Ansible AWS Dynamic Inventory


When you use Ansible with AWS, maintaining a static inventory file is a hectic task because instance IPs change frequently, autoscaling adds and removes instances, and so on.

However, there is an easy solution called Ansible dynamic inventory. Dynamic inventory is an Ansible plugin that makes an API call to AWS to get the instance information at run time. It gives you the EC2 instance details dynamically so you can manage the AWS infrastructure.

When I started using the Dynamic inventory, it was just a Python file. Later it became an Ansible plugin.



I will talk more about how to manage the AWS dynamic inventory later in this article.

Dynamic inventory is not limited to just AWS. It supports most of the public and private cloud platforms. Here is the article on managing GCP resources using Ansible Dynamic inventory. 

Setup Ansible AWS Dynamic Inventory


In this tutorial, you will learn how to set up a dynamic inventory on AWS using boto3 and the aws_ec2 Ansible plugin.

Follow the steps carefully for the setup.

Step 1: Install python3

sudo yum install python3 -y

Step 2: Install the boto3 library.

sudo pip3 install boto3

Step 3: Create an inventory directory under /opt and cd into the directory.

sudo mkdir -p /opt/ansible/inventory
cd /opt/ansible/inventory

Step 4: Create a file named aws_ec2.yaml in the inventory directory and copy the following configuration.

Note: The file name should be aws_ec2.yaml. Also, replace the placeholders in the config file with your AWS access key and secret.

---
plugin: aws_ec2
aws_access_key: <YOUR-AWS-ACCESS-KEY-HERE>
aws_secret_key: <YOUR-AWS-SECRET-KEY-HERE>
keyed_groups:
  - key: tags
    prefix: tag

If you have an IAM instance role attached to the instance with the required permissions, you don't have to add the access and secret keys to the configuration. Ansible will automatically use the attached role to make the AWS API calls.
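
For example, when running Ansible from an EC2 instance with such a role attached, a minimal aws_ec2.yaml could omit the credentials entirely. This is a sketch based on the configuration above:

---
plugin: aws_ec2
# No aws_access_key / aws_secret_key here: the attached instance role is used
keyed_groups:
  - key: tags
    prefix: tag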

Step 5: Open /etc/ansible/ansible.cfg, find the [inventory] section, and add the following line to enable the aws_ec2 plugin.

enable_plugins = aws_ec2

It should look something like this.

[inventory]
enable_plugins = aws_ec2

Step 6: Now let's test the dynamic inventory configuration by listing the EC2 instances.

ansible-inventory -i /opt/ansible/inventory/aws_ec2.yaml --list

The above command returns the list of EC2 instances with all their parameters in JSON format.

If you want to use the dynamic inventory as the default Ansible inventory, edit the ansible.cfg file in the /etc/ansible directory and search for the inventory parameter under the [defaults] section. Change the inventory parameter value as shown below.

inventory      = /opt/ansible/inventory/aws_ec2.yaml

Now if you run the inventory list command without passing the inventory file, Ansible looks for the default location and picks up the aws_ec2.yaml inventory file.
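
For example, you can now run the listing command without the -i flag:

ansible-inventory --list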

Step 7: Execute the following command to test if Ansible is able to ping all the machines returned by the dynamic inventory.

ansible all -m ping

Grouping EC2 Resources


The primary use case of the AWS Ansible dynamic inventory is to execute Ansible playbooks or ad-hoc commands against a single instance or a group of instances categorized by tags, regions, or other EC2 parameters.

You can group instances using tags, instance types, instance names, custom filters, and more. Take a look at the aws_ec2 plugin documentation for all supported filters and keyed groups.

Here is a minimal configuration for aws_ec2.yaml that uses a few keyed_groups and filters.

---
plugin: aws_ec2

aws_access_key: <YOUR-AWS-ACCESS-KEY-HERE>
aws_secret_key: <YOUR-AWS-SECRET-KEY-HERE>

regions:
  - us-west-2

keyed_groups:
  - key: tags
    prefix: tag
  - prefix: instance_type
    key: instance_type
  - key: placement.region
    prefix: aws_region

Execute the following command to list the dynamic inventory groups.

ansible-inventory --graph

You will see an output like the following, with all instances grouped under tags, zones, and regions, with dynamic group names like aws_region_us_west_2, instance_type_t2_micro, and tag_Name_Ansible.

Now you can execute Ansible ad-hoc commands or playbooks against these groups.

Execute Ansible Commands With Dynamic Inventory

Let's test the dynamic inventory by executing a few Ansible ad-hoc commands.

Note: Make sure you have the SSH keys or user/password set up in your Ansible configuration so that Ansible can connect to the instances and execute the commands. A sketch is shown below.
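
For example, a minimal connection setup in ansible.cfg could look like the following. The remote_user and key path are placeholders; replace them with the user and SSH key of your EC2 instances:

[defaults]
remote_user = ec2-user
private_key_file = ~/.ssh/my-ec2-key.pem
host_key_checking = False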

Execute Ping

I am going to execute the ping command against all instances in the us-west-2 region. As per my configuration, the dynamic group name is aws_region_us_west_2.

ansible aws_region_us_west_2 -m ping

If you have all the right configurations, you should see an output like the following.

ec2-54-218-105-53.us-west-2.compute.amazonaws.com | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3"
    },
    "changed": false,
    "ping": "pong"
}

Using Dynamic Inventory Inside Playbook

If you want to use the dynamic inventory inside a playbook, you just need to mention the group name in the hosts variable as shown below.

---
- name: Ansible Test Playbook
  gather_facts: false
  hosts: aws_region_us_west_2
  tasks:

    - name: Run Shell Command
      command: echo "Hello World"
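
To run this playbook against the dynamic inventory, pass the inventory file explicitly. The playbook file name below is just an example:

ansible-playbook -i /opt/ansible/inventory/aws_ec2.yaml test-playbook.yaml

If you have set the default inventory in ansible.cfg as described above, you can drop the -i flag.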


How To Setup Ansible Dynamic Inventory For Google Cloud


The best way to manage and orchestrate VM instances in Google Cloud using Ansible is through the dynamic inventory plugin. It is an Ansible Google Cloud module that authenticates against GCP at run time and returns the instance details.

With dynamic inventory, you don't need to manage a static inventory file; instead, you can group instances based on instance labels, zones, and network tags. You can even group instances based on names.

Ansible Dynamic Inventory Google Cloud Configuration


Let’s get started.

Prerequisites:

  1. You should have pip installed
  2. Ansible installed
  3. Google Service Account JSON with permissions to provision GCP resources.

Follow the steps given below to configure the Ansible dynamic inventory GCP plugin.

Step 1: Install the requests and google-auth modules using pip.

sudo pip install requests google-auth

Step 2: Create a dedicated inventory directory

sudo mkdir -p /opt/ansible/inventory

Step 3: Create a Google Cloud IAM service account. It will be used by the Ansible server to authenticate against Google Cloud for the dynamic inventory. A service account JSON will look like the following.

{
  "type": "service_account",
  "project_id": "devopscube-sandbox",
  "private_key_id": "sdfkjhsadfkjansdf9asdf87eraksd",
  "private_key": "-----BEGIN PRIVATE KEY-----\nMIIEvQIBADANBaksdhfjkasdljf sALDIFUHW8klhklSDGKAPISD GIAJDGHIJLSDGJAFSHGJN;MLASDKJHFGHAILFN DGALIJDFHG;ALSDN J Lkhawu8a2 87356801w tljasdbjkh=\n-----END PRIVATE KEY-----\n",
  "client_email": "[email protected]",
  "client_id": "32453948568273645823",
  "auth_uri": "https://accounts.google.com/o/oauth2/auth",
  "token_uri": "https://accounts.google.com/o/oauth2/token",
  "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
  "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/ansible-provisioning%40devopscube-sandbox.iam.gserviceaccount.com"
} 

Step 4: Save the service account file as service-account.json inside the /opt/ansible/inventory folder.

Step 5: Create a file named gcp.yaml inside the /opt/ansible/inventory directory and add the following content.

---
plugin: gcp_compute
projects:
  - <gcp-project-id>
auth_kind: serviceaccount
service_account_file: /opt/ansible/inventory/service-account.json

Replace <gcp-project-id> with your google cloud project id. You can get the id from the GCP dashboard.

gcp project id for ansible dynamic inventory

There are more configuration options supported by the gcp_compute inventory plugin; refer to the plugin documentation for the complete list.

Step 6: Change the inventory folder's permissions to 755.

sudo chmod -R 755 /opt/ansible

Step 7: Now we have all the required configuration for the GCP dynamic inventory. Let's test it by listing all the instances using the ansible-inventory command. Make sure you run this command from the /opt/ansible/inventory directory.

ansible-inventory --list -i gcp.yaml

The above command should give a JSON output with all the instance details, which means Ansible is now able to communicate with GCP via the service account.

Here is an example output.

Ansible GCP  dynamic inventory instance list

Step 8: Open the /etc/ansible/ansible.cfg file and add the dynamic inventory config path under the [defaults] section.

[defaults]

# some basic default values...

inventory      = /opt/ansible/inventory/gcp.yaml

With this configuration, all Ansible actions will use the GCP config as the default inventory file.

You can verify the default inventory configuration by executing the inventory list command.

ansible-inventory --list

You should get an output similar to the one you got in step 7.

Grouping GCP Resources


Now we have all the configuration needed for Ansible to interact with GCP, but that is not enough. We need to group resources so that we can execute Ansible commands or playbooks against all the servers in a group.

Now, here is the cool feature!

GCP Ansible module automatically groups your resources based on a few standard parameters.

The Ansible GCP module supports grouping using labels, zones, and network tags. Grouping with labels is the best way to manage resources using Ansible, provided you have a standard and consistent labeling scheme across all your environments.

Here is an example inventory file where I have added grouping using labels and zones. There are also two groups named "development" and "staging", which return all VMs that match the respective filters.

---
plugin: gcp_compute
projects:
  - devopscube-262115
auth_kind: serviceaccount
service_account_file: /opt/ansible/inventory/service-account.json
keyed_groups:
  - key: labels
    prefix: label
  - key: zone
    prefix: zone
groups:
  development: "'env' in (labels|list)"
  staging: "'jenkins' in name"

Modify your inventory file with the above keyed_groups & groups and execute the following command to list the groups.

ansible-inventory --graph

The following output shows all the machines grouped as per the label and group filters.

Now, you can use the group names to execute remote commands on all the machines that come under the group.

Execute Ansible Commands With Dynamic Inventory On GCP


Now, let’s test our configuration by running a few ansible ad-hoc remote commands.

I have added the default remote private key and remote user in the ansible.cfg file so that Ansible can communicate with the remote hosts via SSH.

The following are the parameters required for the remote connection. Replace the values accordingly.

remote_user = bibin.w
host_key_checking = False
private_key_file = /home/bibin.w/.ssh/id_rsa

Execute Remote Ping

In my dynamic inventory configuration, I have added a group named staging, which groups all machines with “jenkins” in the instance name.

I am going to execute a ping command against the staging group.

ansible staging -m ping

It successfully executes the ping command on the instance that matches the staging group. Here is the output.

[email protected]:/etc/ansible# ansible staging -m ping
35.192.72.62 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3"
    },
    "changed": false,
    "ping": "pong"
}

Execute a Remote Shell Command

The following Ansible ad-hoc command executes an echo command on all servers that belong to the staging group.

ansible staging -m shell -a 'echo "Hello World"'

Here is the output.

[email protected]:/etc/ansible# ansible staging -m shell -a 'echo "Hello World"'
35.192.72.62 | CHANGED | rc=0 >>
Hello World

Few Best Practices

When you are using the Ansible dynamic inventory on google cloud, follow the best practices given below.

  1. Use labels to group the instances.
  2. Have a standard instance labeling strategy and naming scheme.
  3. Do not commit the GCP service account JSON to GitHub or any other SCM.
  4. Always dry run Ansible playbooks before you execute them directly on the VMs.


Jenkins Automated Build Trigger On Github Pull Request


Building projects based on pull requests is something you cannot avoid in CI/CD pipelines. Nowadays, every team performs several deployments and operations per day, and a lot of builds have to happen in this process.

Also, teams collaborating on the same repo need faster code integration. So it is better to have an automated build process that kicks off the CI/CD pipeline on a pull request rather than triggering the jobs manually.

Trigger Builds Automatically On Github Pull Request

In this tutorial, we will explain how to configure a pull request based build trigger on Jenkins using GitHub webhooks and the GitHub Pull Request Builder plugin.

Note: A Multibranch Pipeline is the best way to achieve a Jenkins pull request based workflow, as it is natively available in Jenkins. Check out this article on the multibranch pipeline for setup and configuration.

Install Github Pull Request Builder Plugin

  1. Go to Manage Jenkins --> Manage Plugins
  2. Click on the Available tab at the top and search for GitHub Pull Request Builder. Select the plugin using the checkbox and click "Install without restart" as shown in the image below.
  3. Once the plugin is installed, select the restart checkbox as shown in the image below.

Github Pull Request Builder Configuration

Once Jenkins is restarted, follow the steps given below for configuring the plugin with your GitHub account.

  1. Head over to Manage Jenkins --> Configure System
  2. Find the "GitHub Pull Request Builder" section and click Add Credentials.
  3. Enter your GitHub username and password and add them.
  4. You can test the GitHub API connection using the Test Credentials button. It should show "Connected" as shown below. Save the configuration after testing the API connection.

Github Repo Webhook Configuration

For Jenkins to receive PR events through the pull request plugin, you need to add the Jenkins pull request builder payload URL in the Github repository settings.

  1. Go to the GitHub repository settings, and under Webhooks, add the Jenkins pull request builder payload URL. It has the following format:
    http://<Jenkins-IP>:<port>/ghprbhook/

    If you need just the PR triggers, you can select the "Let me select individual events" option and select just the "Pull requests" option. Save the webhook after selecting the required events.

  2. Once saved, go back to the webhook option and see if there is a green tick. It means GitHub is able to successfully deliver the events to the Jenkins webhook URL.

Job Configuration for Automated Pull Request Builds

Let's get started with the build job configuration for the PR plugin.

  1. Under the General tab, select the GitHub project option and enter the GitHub repo URL (without the .git extension) for which you want the PR builds, as shown below.
  2. Click the Advanced option, enable the automatic PR build trigger, and add the target branches you would raise the PR against.
  3. Add your pipeline build steps and save the configuration.
  4. Now raise a PR against the whitelisted branch you have given in the Jenkins PR trigger settings. You should see the job getting triggered on Jenkins.

Other Jenkins PR based Build Workflows

The GitHub Pull Request Builder plugin is not actively developed, as the same functionality is provided by multibranch pipelines and the GitHub Organization project.

There is also a Generic Webhook Plugin that can be used to trigger Jenkins jobs on a Pull Request.

Also, you can write custom API endpoints that accept GitHub webhooks and process PR requests to trigger Jenkins jobs remotely. Custom APIs help only when the native Jenkins functionality does not provide the workflow you are looking for.


7 Numpy Practical Examples: Sample Code for Beginners


In the previous tutorial, we discussed some basic concepts of NumPy in Python Numpy Tutorial For Beginners With Examples. In this tutorial, we are going to discuss some problems and their solutions with NumPy practical examples and code.

As you might know, NumPy is one of the important Python modules used in the field of data science and machine learning. As a beginner, it is very important to know about a few NumPy practical examples.

Numpy Practical Examples

Let’s have a look at 7 NumPy sample solutions covering some key NumPy concepts. Each example has code using the relevant NumPy functions and its output.

How to search the maximum and minimum element in the given array using NumPy?

Searching is a technique that helps find the place of a given element or value in a list. In NumPy, one can perform various searching operations using functions provided in the library, like argmax, argmin, etc.

  1. numpy.argmax( )

This function returns the indices of the maximum values of the array along a particular axis.

Example:

import numpy as np

# Creating 5x4 array
array = np.arange(20).reshape(5, 4)
print(array)
print()

# If no axis mentioned, then it works on the entire array
print(np.argmax(array))

# If axis=1, then it works on each row
print(np.argmax(array, axis=1))

# If axis=0, then it works on each column
print(np.argmax(array, axis=0))

Output:

[[ 0 1 2 3]
[ 4 5 6 7]
[ 8 9 10 11]
[12 13 14 15]
[16 17 18 19]]

19
[3 3 3 3 3]
[4 4 4 4]

Similarly, one can use numpy.argmin( ) to return the indices of the minimum values of the array along a particular axis.
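
For illustration, here is a quick sketch of numpy.argmin( ) applied to the same 5x4 array used above:

import numpy as np

# Creating 5x4 array
array = np.arange(20).reshape(5, 4)

# Index of the minimum element in the flattened array
print(np.argmin(array))          # 0

# Index of the minimum element in each row
print(np.argmin(array, axis=1))  # [0 0 0 0 0]

# Index of the minimum element in each column
print(np.argmin(array, axis=0))  # [0 0 0 0]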

How to sort the elements in the given array using Numpy?

Sorting refers to arranging data in a particular format. A sorting algorithm specifies the way to arrange data in a particular order. In NumPy, one can perform various sorting operations using functions provided in the library, like sort, argsort, etc.

  1. numpy.sort( )

This function returns a sorted copy of an array.

Example:

import numpy as np

array = np.array([
    [3, 7, 1],
    [10, 3, 2],
    [5, 6, 7]
])
print(array)
print()

# Sort the whole array
print(np.sort(array, axis=None))

# Sort along each row
print(np.sort(array, axis=1))

# Sort along each column
print(np.sort(array, axis=0))

Output:

[[ 3 7 1]
[10 3 2]
[ 5 6 7]]

[ 1 2 3 3 5 6 7 7 10]

[[ 1 3 7]
[ 2 3 10]
[ 5 6 7]]

[[ 3 3 1]
[ 5 6 2]
[10 7 7]]

  2. numpy.argsort( )

This function returns the indices that would sort an array.

Example:

import numpy as np

array = np.array([28, 13, 45, 12, 4, 8, 0])
print(array)

print(np.argsort(array))

Output:

[28 13 45 12 4 8 0]
[6 4 5 3 1 0 2]
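
As a follow-up, the indices returned by argsort can be used to index the original array and get a sorted copy, for example:

import numpy as np

array = np.array([28, 13, 45, 12, 4, 8, 0])

# Index the array with the argsort result to get a sorted copy
sortedArray = array[np.argsort(array)]
print(sortedArray)   # [ 0  4  8 12 13 28 45]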

How to find the mean of every NumPy array in the given list?

The problem statement: given a list of NumPy arrays, the task is to find the mean of every NumPy array.

  1. Using np.mean( )
import numpy as np

arrays = [
    np.array([3, 2, 8, 9]),
    np.array([4, 12, 34, 25, 78]),
    np.array([23, 12, 67])
]

result = []
for i in range(len(arrays)):
    result.append(np.mean(arrays[i]))
print(result)

Output:

[5.5, 30.6, 34.0]
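
As a variation, the same result can be computed with a list comprehension (using the arrays variable as above):

import numpy as np

arrays = [
    np.array([3, 2, 8, 9]),
    np.array([4, 12, 34, 25, 78]),
    np.array([23, 12, 67])
]

# Compute the mean of each array in one line
result = [np.mean(a) for a in arrays]
print(result)   # means: 5.5, 30.6, 34.0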

How to add rows and columns in NumPy array?

The problem statement: given a NumPy array, the task is to add rows or columns to it based on requirements.

  1. Adding Row using numpy.vstack( )
import numpy as np

array = np.array([
    [3, 2, 8],
    [4, 12, 34],
    [23, 12, 67]
])

newRow = np.array([2, 1, 8])
newArray = np.vstack((array, newRow))
print(newArray)

Output:

[[ 3 2 8]
[ 4 12 34]
[23 12 67]
[ 2 1 8]]

  2. Adding Column using numpy.column_stack( )
import numpy as np

array = np.array([
    [3, 2, 8],
    [4, 12, 34],
    [23, 12, 67]
])

newColumn = np.array([2, 1, 8])
newArray = np.column_stack((array, newColumn))
print(newArray)

Output:

[[ 3 2 8 2]
[ 4 12 34 1]
[23 12 67 8]]

How to reverse a NumPy array?

The problem statement: given a NumPy array, the task is to reverse it.

  1. Using numpy.flipud( )
import numpy as np

array = np.array([3, 6, 7, 2, 5, 1, 8])
reversedArray = np.flipud(array)
print(reversedArray)

Output:

[8 1 5 2 7 6 3]
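
Another common way to reverse a one-dimensional array is slicing with a negative step, for example:

import numpy as np

array = np.array([3, 6, 7, 2, 5, 1, 8])

# Reverse using slicing with a negative step
reversedArray = array[::-1]
print(reversedArray)   # [8 1 5 2 7 6 3]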

How to multiply two matrices in a single line using NumPy?

The problem statement: given two matrices, the task is to multiply them in a single line using NumPy.

  1. Using numpy.dot( )
import numpy as np

matrix1 = [
    [3, 4, 2],
    [5, 1, 8],
    [3, 1, 9]
]

matrix2 = [
    [3, 7, 5],
    [2, 9, 8],
    [1, 5, 8]
]

result = np.dot(matrix1, matrix2)
print(result)

Output:

[[19 67 63]
[25 84 97]
[20 75 95]]
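
As a variation, if the inputs are NumPy arrays, the @ operator (numpy.matmul) performs the same matrix multiplication:

import numpy as np

matrix1 = np.array([
    [3, 4, 2],
    [5, 1, 8],
    [3, 1, 9]
])

matrix2 = np.array([
    [3, 7, 5],
    [2, 9, 8],
    [1, 5, 8]
])

# Matrix multiplication using the @ operator
result = matrix1 @ matrix2
print(result)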

How to print the checkerboard pattern of nxn using NumPy?

The problem statement: given n, print the checkerboard pattern for an nxn matrix, considering 0 for black and 1 for white.

Solution:

import numpy as np

n = 8

# Create a nxn matrix filled with 0
matrix = np.zeros((n, n), dtype=int)

# fill 1 with alternate rows and column
matrix[::2, 1::2] = 1
matrix[1::2, ::2] = 1

# Print the checkerboard pattern
for i in range(n):
    for j in range(n):
        print(matrix[i][j], end=" ")
    print()

Output:

0 1 0 1 0 1 0 1
1 0 1 0 1 0 1 0
0 1 0 1 0 1 0 1
1 0 1 0 1 0 1 0
0 1 0 1 0 1 0 1
1 0 1 0 1 0 1 0
0 1 0 1 0 1 0 1
1 0 1 0 1 0 1 0


18 DevOps Tools for Infrastructure Automation and Monitoring


To achieve faster application delivery, the right tools must be used in DevOps environments. There is no single tool that fits all your needs, such as server provisioning, configuration management, automated builds, code deployments, and monitoring. Many factors determine the use of a particular tool in your infrastructure. In this article, we will look into core tools that can be used in a typical DevOps environment.

DevOps Tools for Infrastructure Automation

There are many tools available for infrastructure automation. Which tool to use is decided by the architecture and needs of your infrastructure. We have listed a few great tools below, which come under various categories like configuration management, orchestration, continuous integration, and monitoring.

We have categorized the toolsets into the following.

  1. Infrastructure as Code
  2. Continuous Integration/Deployment
  3. Config/Secret Management
  4. Monitoring

Infrastructure as Code

These tools can help you manage all infrastructure components like VPCs, instances, firewalls, managed services, etc as code. Once you have the infra code ready, you can use it to create the environment anytime you want without much manual intervention.

These tools can be used on any cloud or on-prem environment without vendor lock-in.

Terraform

Terraform is a cloud-agnostic infrastructure provisioning tool. It is created by HashiCorp and written in Go. It supports infrastructure provisioning on public and private clouds. Unlike other configuration management tools, Terraform does a great job of maintaining the state of your infrastructure using a concept called state files.

You can get started with Terraform in days as it is easy to understand. Terraform has its own DSL called HCL (HashiCorp Configuration Language). Also, you can write your own Terraform plugins in Go for custom functionality.

Note: If you are a beginner, you can get started with Terraform using this book. It’s a great book for beginners and experienced users.

You can find all the community-developed Terraform modules in the Terraform Registry.

Ansible

Ansible is an agentless configuration management as well as orchestration tool. In Ansible, the configuration modules are called "playbooks". Playbooks are written in YAML format and are relatively easy to write compared to other configuration management tools. Like other tools, Ansible can be used for cloud provisioning. You can find community playbooks on Ansible Galaxy.

Chef

Chef is a Ruby-based configuration management tool. Chef has the concept of cookbooks, where you code your infrastructure in a DSL (domain-specific language) with a little bit of programming. Chef provisions virtual machines and configures them according to the rules mentioned in the cookbooks.

An agent runs on all the servers that have to be configured. The agent pulls the cookbooks from the Chef master server and applies those configurations on the server to reach its desired state. You can find community cookbooks on Chef Supermarket.

You might like: How To Become a DevOps Engineer 

Puppet

Puppet is also a Ruby-based configuration management tool like Chef. The configuration code is written using the Puppet DSL and wrapped in modules. Chef cookbooks are more developer-centric, while Puppet is developed with system administrators in mind.

Puppet runs an agent on all the servers to be configured; the agent pulls the compiled module from the Puppet server and installs the required software packages specified in the module. You can find community Puppet modules on Puppet Forge.

Saltstack

Saltstack is a Python-based open-source configuration management tool. Unlike Chef and Puppet, Saltstack supports remote execution of commands. Normally, in Chef and Puppet, the configuration code is pulled from the server, while in Saltstack the code can be pushed to many nodes simultaneously. The compilation of code and configuration is very fast in Saltstack.

Note: Tool selection should be based entirely on project requirements and the team’s ability to learn and use the tool. For example, you can use Ansible to create infrastructure components and to configure VM instances, so if you have a small team and environment, Terraform is not required to manage the infrastructure separately. Again, it depends on how well the existing team can learn and manage the toolsets.

Continuous Integration/Deployment

Jenkins

Jenkins is a Java-based continuous integration tool for faster delivery of applications. Jenkins has to be associated with a version control system like GitHub or SVN. Whenever new code is pushed to a code repository, the Jenkins server builds and tests the new code and notifies the team of the results and changes.

You Might Like: Jenkins Tutorial For Beginners 

Jenkins is not just a CI tool anymore. Jenkins is being used as an orchestration tool by building pipelines for application provisioning and deployment. Its pipeline-as-code functionality lets you keep your CI/CD pipelines entirely as code.

Vagrant

Vagrant is a great tool for configuring virtual machines for a development environment. Vagrant runs on top of VM solutions like VirtualBox, VMware, Hyper-V, etc. It uses a configuration file called a Vagrantfile, which contains all the configurations needed for the VM. Once a virtual machine is created, it can be shared with other developers to have the same development environment. Vagrant has plugins for cloud provisioning, configuration management tools (Chef, Puppet, etc.), and Docker.

Packer

If you want to follow VM-based immutable infrastructure patterns, Packer comes in handy to package all dependencies and build deployable VM images. It supports VM image management on both private and public clouds. You can also make Packer a stage in your CI pipeline to build a VM image as a deployable artifact.

Docker

Docker works on the concept of process-level virtualization. Docker creates isolated environments for applications called containers. These containers can be shipped to any other server without making changes to the application. Docker is considered to be the next step in virtualization. Docker has a huge developer community and it is gaining huge popularity among DevOps practitioners and pioneers in cloud computing.

Helm

Helm is a deployment manager for Kubernetes. You can deploy any complex application on a Kubernetes cluster using Helm Charts. It has great templating features that support templates for all kubernetes objects like deployments, pods, services, config maps, secrets, RBAC, PSP, etc.

Kubernetes Operators

If you are using Kubernetes, the operator pattern is something you should really look at. It helps in automating and managing Kubernetes applications with custom user-defined logic. You can use GitOps methodologies to have completely automated Kubernetes deployments based on Git changes and verifications.

Config/Secret Management

Consul

Consul is an open-source, highly available key-value store. It is mainly used for service discovery purposes. If you have a use case for storing and retrieving configuration in real time, Consul is the right fit.

etcd

etcd is another open-source key-value store created by the CoreOS team. It is one of the key components of Kubernetes, used for storing the cluster state.

Vault

Vault is an open-source tool for storing and retrieving secret data. It provides many features for storing your secrets in an encrypted form. You can create ACLs, policies, and roles to manage how secrets are accessed by end users.

Monitoring

Prometheus & Alert Manager

Prometheus is an open-source monitoring system. It is very lightweight and specifically built for modern application monitoring. It supports Linux server and container monitoring.

It has out-of-the-box support for Kubernetes and OpenShift monitoring. Alertmanager handles all the alerting set up for the monitored metrics.

New Relic

New Relic is a cloud-based (SaaS) solution for application monitoring. It supports the monitoring of various applications like PHP, Ruby, Java, NodeJS, etc. It gives you real-time insights into your running application. A New Relic agent should be configured in your application to get real-time data. New Relic uses various metrics to provide valuable insights into the application it is monitoring.

Sensu

Sensu is an open-source monitoring framework written in Ruby, built specifically for cloud environments. It can be easily deployed using tools like Chef and Puppet. It also has an enterprise edition.

Datadog

Datadog is also a cloud-based (SaaS) application and server monitoring solution. You can monitor Docker containers and other applications using Datadog.

Other tools worth considering,

  1. Riemann (Open Source Monitoring Tool)
  2. AppDynamics (For application monitoring)
  3. Logz.io (For log analysis and management)
  4. ELK stack (Elasticsearch, Logstash, Kibana)
  5. Splunk (Log analysis and alerting)

Conclusion

Infrastructure automation is a requirement for every DevOps team. Usage and selection of a tool depend on factors like cost, skillset, functionality, etc.

Again, no single tool will fit all your needs. The selection of toolsets should be based on the organization's or team's requirements rather than the functionality of the tool.

You can also check out this article with a list of 90 DevOps tools.

So what tools are you using for infrastructure automation?
