How to Set Up Ansible AWS Dynamic Inventory

Ansible AWS Dynamic Inventory Setup

When you use Ansible with AWS, maintaining a static inventory file becomes a hectic task because AWS instances get frequently changing IPs, autoscaling adds and removes instances, and so on.

However, there is an easy solution called Ansible dynamic inventory. Dynamic inventory is an Ansible plugin that makes an API call to AWS to get the instance information at run time. It gives you the ec2 instance details dynamically so you can manage the AWS infrastructure.

AWS dynamic inventory workflow

When I started using the Dynamic inventory, it was just a Python file. Later it became an Ansible plugin.

I will talk more about how to manage the AWS dynamic inventory later in this article.

Dynamic inventory is not limited to just AWS. It supports most public and private cloud platforms. Here is the article on managing GCP resources using Ansible Dynamic inventory.

Set Up Ansible AWS Dynamic Inventory

In this tutorial, you will learn how to set up a dynamic inventory on AWS using boto3 and the aws_ec2 Ansible plugin.

Follow the steps carefully for the setup.

Step 1: Ensure you have python3 and pip3 installed on your Ansible server.

Most Linux operating systems come with python3. You can validate it using the following command.

python3 --version

If you don’t have python3, you can install it using the following command.

For CentOS and Red Hat:

sudo yum install python3 -y
sudo yum -y install python3-pip

For Debian and Ubuntu:

sudo apt-get install python3 -y
sudo apt-get install python3-pip -y

Step 2: Install the boto3 library. Ansible uses boto3 to make API calls to AWS and retrieve the ec2 instance details.

sudo pip3 install boto3

If you have used the Ansible PPA for installation, install boto3 using the following command,

sudo apt-get install python-boto3

or else you might see the following error.

ERROR! The ec2 dynamic inventory plugin requires boto3 and botocore.

Step 3: Create an inventory directory under /opt and cd into the directory.

sudo mkdir -p /opt/ansible/inventory
cd /opt/ansible/inventory

Step 4: Create a file named aws_ec2.yaml in the inventory directory.

sudo vi aws_ec2.yaml

Copy the following configuration into the file. If you are running the Ansible server outside the AWS environment, add your AWS access key and secret key to the config file.

Important Note: Never commit this file to public git repos.

---
plugin: aws_ec2
aws_access_key: <YOUR-AWS-ACCESS-KEY-HERE>
aws_secret_key: <YOUR-AWS-SECRET-KEY-HERE>
keyed_groups:
  - key: tags
    prefix: tag

If your Ansible server is running inside the AWS environment, attach an ec2 instance role with the required AWS ec2 permissions (mostly describe-instances). This way, you don't have to add the access and secret keys to the configuration. Ansible will automatically use the attached role to make the AWS API calls.
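For example, when an instance role is attached, a minimal aws_ec2.yaml can omit the credentials entirely. Here is a sketch, assuming the role grants the describe-instances permission:

---
# No aws_access_key/aws_secret_key needed; boto3 picks up the
# instance role's temporary credentials automatically.
plugin: aws_ec2
keyed_groups:
  - key: tags
    prefix: tag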

Step 5: Open /etc/ansible/ansible.cfg file.

sudo vi /etc/ansible/ansible.cfg

Find the [inventory] section and add the following line to enable the ec2 plugin.

enable_plugins = aws_ec2

It should look something like this.

[inventory]
enable_plugins = aws_ec2
add ansible ec2 inventory plugin to ansible.cfg

Step 6: Now let’s test the dynamic inventory configuration by listing the ec2 instances.

ansible-inventory -i /opt/ansible/inventory/aws_ec2.yaml --list

The above command returns the list of ec2 instances with all their parameters in JSON format.

If you want to use the dynamic inventory as the default Ansible inventory, edit the /etc/ansible/ansible.cfg file and search for the inventory parameter under the [defaults] section. Change the inventory parameter value as shown below.

inventory      = /opt/ansible/inventory/aws_ec2.yaml
add ec2 dynamic inventory as default inventory.
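The [defaults] section of ansible.cfg should then look something like this.

[defaults]
inventory = /opt/ansible/inventory/aws_ec2.yaml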

Now if you run the inventory list command without passing the inventory file, Ansible looks in the default location and picks up the aws_ec2.yaml inventory file.

Step 7: Execute the following command to test whether Ansible is able to ping all the machines returned by the dynamic inventory.

ansible all -m ping

Grouping EC2 Resources With Ansible Dynamic Inventory


The primary use case of the AWS Ansible dynamic inventory is to execute Ansible playbooks or ad-hoc commands against a single instance or a group of instances, grouped by tags, regions, or other ec2 parameters.

You can group instances using tags, instance types, instance names, custom filters, and more. Take a look at all the supported filters and keyed groups here.

Here is a minimal configuration for aws_ec2.yaml that uses a few keyed_groups and filters.

---
plugin: aws_ec2

aws_access_key: <YOUR-AWS-ACCESS-KEY-HERE>
aws_secret_key: <YOUR-AWS-SECRET-KEY-HERE>

regions:
  - us-west-2

keyed_groups:
  - key: tags
    prefix: tag
  - prefix: instance_type
    key: instance_type
  - key: placement.region
    prefix: aws_region
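If you also want to limit which instances end up in the inventory, the plugin supports a filters section. Here is a hedged sketch that picks up only running instances carrying a hypothetical Environment=dev tag:

---
plugin: aws_ec2

regions:
  - us-west-2

# Only running instances with the (hypothetical) tag Environment=dev.
filters:
  instance-state-name: running
  tag:Environment: dev

keyed_groups:
  - key: tags
    prefix: tag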

Execute the following command to list the dynamic inventory groups.

ansible-inventory --graph

You will see an output like the following, with all instances grouped under tags, zones, and regions with dynamic group names like aws_region_us_west_2, instance_type_t2_micro, and tag_Name_Ansible.

ansible AWS ec2 dynamic inventory graph

Now you can execute Ansible ad-hoc commands or playbooks against these groups.
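For instance, assuming you have an instance tagged Name=Ansible (which produces the tag_Name_Ansible group shown above), an ad-hoc command against that group would look like this.

ansible tag_Name_Ansible -m shell -a "uptime"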

Execute Ansible Commands With ec2 Dynamic Inventory

Let’s test the ec2 dynamic inventory by executing a few Ansible ad-hoc commands.

Note: Make sure you have the SSH keys or user/password set up in your Ansible configuration so that Ansible can connect to the instances and execute the commands.
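As a minimal sketch, the remote user and the private key can be set in the [defaults] section of ansible.cfg. The user name and key path below are assumptions, so adjust them to your AMI and key pair.

[defaults]
remote_user = ubuntu
private_key_file = ~/.ssh/my-ec2-key.pem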

Execute Ping

I am going to execute the ping module against all the instances in the us-west-2 region. As per my configuration, the dynamic group name is aws_region_us_west_2.

ansible aws_region_us_west_2 -m ping

If you have all the right configurations, you should see an output like the following.

ec2-54-218-105-53.us-west-2.compute.amazonaws.com | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3"
    },
    "changed": false,
    "ping": "pong"
}

Using Dynamic Inventory Inside Playbook

If you want to use the dynamic inventory inside a playbook, you just need to mention the group name in the hosts parameter as shown below.

---
- name: Ansible Test Playbook
  gather_facts: false
  hosts: aws_region_us_west_2
  tasks:

    - name: Run Shell Command
      command: echo "Hello World"

You can check out the Ansible playbook examples if you want to test more playbooks.

17 comments
  1. How do I parse EC2 instance tags with multiple values into different tag_groups? I am trying it with the ec2 dynamic inventory plugin for Ansible 2.12 and it doesn’t work. What I get is a single group like “prefix_value1_value2_value3” for an instance with multiple tag values.

    Thanks!

  2. Thanks for the article, but it doesn’t cover my use case. Please let me know how I can solve this issue:
    How to parse EC2 instance tags with multiple values into different tag_groups? I am trying it with the ec2 dynamic inventory plugin for Ansible 2.12 and it doesn’t work; what I get is a single group like “prefix_value1_value2_value3” for an instance with multiple tag values.

  3. Hello,

    When I try to run the following I’m getting this error. Any idea what could be the cause? Thank you very much.

    ansible-inventory -i /opt/ansible/inventory/aws_ec2.yaml --list

    [WARNING]: * Failed to parse /opt/ansible/inventory/aws_ec2.yaml with aws_ec2 plugin: An error occurred (AuthFailure) when calling the DescribeRegions operation: AWS was not able to validate the provided
    access credentials
    [WARNING]: Unable to parse /opt/ansible/inventory/aws_ec2.yaml as an inventory source
    [WARNING]: No inventory was parsed, only implicit localhost is available

    1. Please check if you have configured valid AWS credentials. Looking at the error, it looks like a credential issue.
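      One quick way to verify which credentials are being picked up (assuming the AWS CLI is installed) is:

      aws sts get-caller-identity
      aws configure list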

  4. You know which part was the worst? Connecting via the public IP.

    Isn’t there a way to make this work with private IPs inside AWS network?

    1. Hi Santhosh,

      To make it work with private IPs, just run the same setup within a private VPC. If you want to run it from outside AWS, use a VPN. Ideally, in organizations, the Ansible management setup lives in private VPCs and DevOps engineers connect to the private VPCs via VPN. Also, in project use cases, no one runs these commands manually; they are part of Jenkins or a similar tool.
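      If the Ansible server already sits inside the VPC, you can also ask the plugin to register instances by their private IPs instead of public DNS names. A minimal sketch for aws_ec2.yaml:

      plugin: aws_ec2
      regions:
        - us-west-2
      # Use the private IP address as the inventory hostname.
      hostnames:
        - private-ip-address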

    1. Hi pavan,

      Ensure you have both python3 and boto3 installed.

      To install boto3

      sudo pip3 install boto3
      

      If you have installed Ansible using Ansible ppa, install boto3 using the following command. It will resolve the issue.

      sudo apt-get install python-boto3
  5. I tried using environment variables for the access key and secret key, but it is not working. In the below example I have removed the access key and secret key and added those as environment variables.

    plugin: aws_ec2
    regions:
      - "us-east-1"
    keyed_groups:
      - key: tags.Name
      - key: tags.task
    filters:
      instance-state-name: running

  6. Hi, thanks for the valuable information. I need some help here.

    Using the below I can get the list of all ec2 instances, but how do I get only the running instances?

    keyed_groups:
      - key: aws_ec2
        prefix: ec2_hosts

  7. I uninstalled the deb packages and used pip:

    /etc/ansible$ pip install boto
    Collecting boto
      Downloading boto-2.47.0-py2.py3-none-any.whl (1.4MB)
        100% |████████████████████████████████| 1.4MB 940kB/s 
    Installing collected packages: boto
    Successfully installed boto
    
    /etc/ansible$ ./ec2.py --list
    ERROR: "Working with RDS instances requires boto3 - please install boto3 and try again", while: getting RDS instancesubuntu@bastion-east:/etc/ansible$
    
    /etc/ansible$ sudo pip install boto3 (permission error in /usr/local/lib... w/o sudo)
    

    …same error

  8. /etc/ansible$ ./ec2.py --list
    Traceback (most recent call last):
      File "./ec2.py", line 1600, in 
        Ec2Inventory()
      File "./ec2.py", line 193, in __init__
        self.do_api_calls_update_cache()
      File "./ec2.py", line 527, in do_api_calls_update_cache
        self.get_rds_instances_by_region(region)
      File "./ec2.py", line 633, in get_rds_instances_by_region
        client = ec2_utils.boto3_inventory_conn('client', 'rds', region, **self.credentials)
    AttributeError: 'module' object has no attribute 'boto3_inventory_conn'
    

    I’m running ubuntu 16.04 with boto2 and boto3 installed:

    $ dpkg -l | grep boto
    ii python-boto 2.38.0-1ubuntu1 all Python interface to Amazon’s Web Services – Python 2.x
    ii python-boto3 1.2.2-2 all Python interface to Amazon’s Web Services – Python 2.x
    ii python-botocore 1.4.70-1~16.04.0 all Low-level, data-driven core of boto 3 (Python 2)

    I have the credentials in ~/.aws/credentials

  9. I was trying to find examples in Ansible documentation but couldn’t find anything like this. Thanks!

  10. I went through all the steps you provided for the setup; they are very easy to understand. Thank you for sharing this tutorial.
