AWS ARN Explained: Amazon Resource Name Guide


In this blog, I talk about concepts, tips, and tricks related to AWS ARNs. I will also show how to construct the ARN for a specific AWS resource.

I have also added all the important links to AWS references so you can quickly build the ARNs you need.

What is ARN in AWS?

Amazon Resource Names (ARNs) are unique identifiers assigned to individual AWS resources. A resource can be an ec2 instance, an EBS volume, an S3 bucket, a load balancer, a VPC, a route table, etc.

An ARN looks like the following for an ec2 instance.

arn:aws:ec2:us-east-1:123456789012:instance/i-054dsfg34gdsfg38

Why are ARNs Important?

ARNs are very important when it comes to IAM policies. You will end up using ARNs if you follow the standard best practices for IAM roles and policies.

ARNs have the following key use cases.

  1. They are used in IAM policies to grant restricted, granular access to resources. One example is allowing a specific IAM user to access only specific ec2 instances.
  2. They can be used in automation scripts and API calls to refer to resources.

If you did not understand the above points, don’t worry, we will look at those with practical examples in the following topics.

AWS ARN format

In most cases, you can build the ARN yourself following one of the formats below.

arn:aws:service:region:account-id:resource-id
arn:aws:service:region:account-id:resource-type/resource-id
arn:aws:service:region:account-id:resource-type:resource-id

In the above formats, towards the end, you can see the difference: the trailing part changes depending on the resource type.

Here are ARN examples for all three formats.

S3 ARN: Where you have a flat hierarchy of buckets and associated objects

arn:aws:s3:::devopscube-bucket

EC2 ARN: The ec2 service has sub-resource types like image, security group, etc. The following example uses the instance resource type.

arn:aws:ec2:us-east-1:123456789012:instance/i-054dsfg34gdsfg38

Lambda ARN: Where you have functions with versions. Here the version is the qualifier.

arn:aws:lambda:us-east-1:123456789012:function:api-function:1

How to get the ARNs of AWS resources?

If you are getting started with AWS, you may find it difficult to put together the correct ARN for a resource.

You can find the ARN syntax for all the AWS services in the Service Authorization Reference here.

For example, if you go to the EC2 resource from the list and scroll down to the “Resource Types Defined by Amazon EC2” section, you will find the reference for all the sub-resource types for ec2 as shown below.

[Screenshot: AWS ARN reference document]

Another way is to use the AWS Policy Generator.

In the policy generator, when you select the policy resource, it will automatically show the arn suggestion as shown below. You just need to add resource information.

[Screenshot: ARN suggestion in the AWS Policy Generator]

ARN Wildcards

The ARN definition supports wildcards, and you will need them in many use cases.

Let's say you want an IAM policy that allows access to all objects in a single bucket. For this, you can use a wildcard ARN like the one below.

arn:aws:s3:::my-data-bucket/*

Here is an example of using wildcard ARNs in an IAM policy. This policy allows all actions on the dcubebucket S3 bucket. Both the bucket ARN and the object wildcard ARN are listed because bucket-level actions (like s3:ListBucket) work against the bucket ARN, while object-level actions (like s3:GetObject) work against object ARNs.

{
	"Version": "2012-10-17",
	"Statement": [{
		"Sid": "Stmt1596173683332",
		"Action": "s3:*",
		"Effect": "Allow",
		"Resource": [
			"arn:aws:s3:::dcubebucket",
			"arn:aws:s3:::dcubebucket/*"
		]
	}]
}

Here is another example policy which allows limited access to all EMR clusters.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1596174145144",
      "Action": [
        "elasticmapreduce:AddInstanceFleet",
        "elasticmapreduce:AddTags",
        "elasticmapreduce:DescribeCluster",
        "elasticmapreduce:DescribeEditor",
        "elasticmapreduce:DescribeJobFlows",
        "elasticmapreduce:DescribeSecurityConfiguration"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:elasticmapreduce:*:*:cluster/*"
    }
  ]
}

Getting ARN from AWS CLI

You can get the ARNs of specific resources from the CLI.

For all IAM roles, policies, and users, you can get the ARN from the CLI by describing them.

Here is an example of getting the ARN of a role.

aws iam get-role --role-name EMR_DefaultRole

The output contains the ARN; a trimmed sample is shown below (the values are illustrative, not a real account).

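{
    "Role": {
        "Path": "/",
        "RoleName": "EMR_DefaultRole",
        "RoleId": "AROAEXAMPLEROLEID",
        "Arn": "arn:aws:iam::123456789012:role/EMR_DefaultRole",
        "CreateDate": "2020-01-01T00:00:00Z",
        "AssumeRolePolicyDocument": {
            "Version": "2008-10-17",
            "Statement": [
                {
                    "Effect": "Allow",
                    "Principal": {
                        "Service": "elasticmapreduce.amazonaws.com"
                    },
                    "Action": "sts:AssumeRole"
                }
            ]
        }
    }
}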

Similarly, you can try describing other resources and check whether the output includes the ARN.
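
As a side note, most describe/get commands support the --query option, so you can extract just the ARN. Two examples (the Lambda function name is a placeholder):

# Print only the role ARN
aws iam get-role --role-name EMR_DefaultRole --query 'Role.Arn' --output text

# Same idea for a Lambda function
aws lambda get-function --function-name my-function --query 'Configuration.FunctionArn' --output text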

Getting ARN from AWS Console

You can get the ARN of IAM resources directly from the AWS console.

Just browse to the specific resource and you will find the related arn at the top as shown below.

[Screenshot: ARN displayed at the top of the IAM console]

Getting ARN as Output in CloudFormation

If you are using CloudFormation, you can get the resource ARN in the output with the function Fn::GetAtt.

Here is an example syntax for getting the ARN of a Lambda function.

Outputs:
  LambdaFunctionArn:
    Export:
      Name: MyFunctionARN
    Value:
      Fn::GetAtt: [MyLambdaFunction, Arn]
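
In YAML templates, the same value can also be written with the short form:

Value: !GetAtt MyLambdaFunction.Arn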

Wrapping Up

In this guide, I have shared some of the resources and tricks I use for AWS ARNs.

I would like to hear from you.

Please let me know about the ARN scenarios you have come across in the comments section.


Linux Foundation Launches Advanced Cloud Engineer Bootcamp


The Linux Foundation has launched the Advanced Cloud Engineer Bootcamp to take your career to the next level, enabling IT administrators to learn the most sought-after cloud skills and get certified in six months.

This Bootcamp covers the whole Kubernetes ecosystem, from essential topics like containers, Kubernetes deployments, logging, and Prometheus monitoring to advanced topics like service mesh; basically, all the skills required to work on a Kubernetes-based project.

And here is the best part. With this Bootcamp, you can take the Kubernetes CKA certification exam. It comes with one-year validity and a free retake.

Here is the list of courses covered in the Bootcamp.

  1. Containers Fundamentals (LFS253)
  2. Kubernetes Fundamentals (LFS258)
  3. Service Mesh Fundamentals (LFS243) 
  4. Monitoring Systems and Services with Prometheus (LFS241)
  5. Cloud-Native Logging with Fluentd (LFS242)
  6. Managing Kubernetes Applications with Helm (LFS244)
  7. Certified Kubernetes Administrator Exam (CKA)

The Advanced Cloud Engineer Bootcamp is priced at $2300 (list price), but if you join before 31st July, you can get it for $599 (saving you $1700).

You may also use the DCUBEOFFER coupon code at checkout to get an additional 15% discount on the total cart value (applicable for CKA & CKAD certifications as well).

Note: It comes with a 30-day money-back guarantee.

How Does the Cloud Engineer Bootcamp Work?

The whole Bootcamp is designed for six months. All the courses in the Bootcamp are self-paced. Ideally, you should spend 10 hours per week for six months to complete all the courses in the Bootcamp. 

Even though the courses are self-paced, you will get access to interactive forums and live chat with course instructors.

Every course is associated with hands-on labs and assignments to improve your practical knowledge.

At the end of the Bootcamp, you can appear for the CKA exam completely free, with one-year exam validity and a free retake.

You will earn an Advanced Cloud Engineer Bootcamp badge and a CKA certification badge.


Is Cloud Engineer Bootcamp Worth It?

If you are an IT administrator or someone who wants to learn the latest cloud-native technologies, this is one of the best options as it focuses more on the practical aspects.

If you look at the price, it’s worth it, as you would have to spend $2300 if you bought those courses individually. Even the much sought-after CKA certification alone costs $300. With an additional $300, you get access to all the other courses plus dedicated forums and live instructor sessions.

So it is entirely up to you how you make use of this Bootcamp. As with learning any technology, you have to put in the work using these resources.


How to Automate EBS Snapshot Creation, Retention and Deletion


It is very important to have data backups on the cloud for data recovery and protection. EBS snapshots play an important role when it comes to backup of your ec2 instance data (root volumes & additional volumes).

Even though snapshots are considered a “poor man’s backup”, they give you point-in-time backups and faster restore options to meet your RPO objective.

Towards the end of the article, I have added some key snapshot features and some best practices to manage snapshots.

AWS EBS Snapshot Automation

Snapshots are the cheapest and easiest way to enable backups for your EC2 instances or EBS volumes.

There are three ways to take automated snapshots.

  1. EBS Life Cycle manager
  2. Cloudwatch Events
  3. Lambda Functions.

In this tutorial, I will guide you to automate EBS snapshot creation and deletion using all three approaches.

EBS Snapshot Automation with Life Cycle manager

The EC2 Lifecycle Manager is a native AWS feature for managing the lifecycle of EBS volumes and snapshots.

It is the quickest and easiest way to automate EBS snapshots. It works on the concept of tags. Based on instance or volume tags, you can group EBS volumes and perform snapshot operations in bulk or for a single instance.

Follow the steps given below to set up a snapshot lifecycle policy.

Step 1: Tag your ec2 instance and volumes

EBS snapshots with the Lifecycle Manager work with instance & volume tags. Instances and volumes must be tagged so the manager can identify snapshot candidates.

You can use the following tag on the instances and volumes that need automated snapshots.

Key = Backup 
Value = True
[Screenshot: EC2 instance and EBS volume tagging]
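
If you prefer the CLI, the same tag can be applied with create-tags (the instance and volume IDs below are placeholders):

aws ec2 create-tags \
    --resources i-0abcd1234example vol-0abcd1234example \
    --tags Key=Backup,Value=True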

Step 2: Open the EBS Lifecycle Manager to create a snapshot lifecycle policy.

Head over to the EC2 dashboard and select the “Lifecycle Manager” option under the ELASTIC BLOCK STORE category as shown below.

[Screenshot: Lifecycle Manager option in the EC2 dashboard]

You will be taken to the Lifecycle Manager dashboard. Click the “Create Snapshot Lifecycle Policy” button.

[Screenshot: Create Snapshot Lifecycle Policy button]

Step 3: Add EBS snapshot life cycle policy rules

Enter the policy details as shown below. Make sure you select the right tags for the volumes you need snapshots of.

Note: You can add multiple tags to target specific volumes.

[Screenshot: snapshot policy details]

Enter snapshot schedule details based on your requirements. You can choose the retention type as either count or age.

For regular backups, count is the ideal way.

Also apply proper tags to identify the snapshots.

[Screenshot: snapshot schedule and retention rules]

There are two optional parameters for snapshot high availability and fast snapshot restore. You can choose these options for production volumes. Keep in mind that these two options will incur extra charges.

[Screenshot: optional snapshot policy parameters]

Select an IAM role that has permission to create and delete snapshots. If you don’t have an IAM role, you can use the default role option. AWS will automatically create a role for snapshots.

I recommend creating a custom role and using it with the policy so you can keep track of your IAM roles.

Also select “enable policy” for the policy to be active immediately after creation.

[Screenshot: IAM role selection and policy creation]

Click create policy.

Now the policy manager will automatically create snapshots based on the schedules you have added.
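
If you want to script the policy instead of using the console, here is a rough CLI sketch of the same setup. It assumes the AWS-managed default role and a placeholder account ID:

aws dlm create-lifecycle-policy \
    --description "Daily snapshots of volumes tagged Backup=True" \
    --state ENABLED \
    --execution-role-arn arn:aws:iam::123456789012:role/AWSDataLifecycleManagerDefaultRole \
    --policy-details '{
        "ResourceTypes": ["VOLUME"],
        "TargetTags": [{"Key": "Backup", "Value": "True"}],
        "Schedules": [{
            "Name": "DailySnapshots",
            "CreateRule": {"Interval": 24, "IntervalUnit": "HOURS", "Times": ["03:00"]},
            "RetentionRule": {"Count": 7},
            "CopyTags": true
        }]
    }'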

Create EBS Volume Snapshots With Cloudwatch Events

Cloudwatch custom events & schedules can be used to create EBS snapshots.

You can choose AWS service events for CloudWatch to trigger custom actions.

To demonstrate this, I will use the cloudwatch schedule to create EBS snapshots. Follow the steps given below.

Step 1: Create a CloudWatch schedule.

Head over to the CloudWatch service and click “Create rule” under the Rules option as shown below.

[Screenshot: CloudWatch Rules page]

You can choose either a fixed-rate schedule or a cron expression; a couple of example expressions are shown below. Under Targets, search for ec2 and select the “EC2 CreateSnapshot API Call” option.
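
For reference, here is what the two schedule styles look like:

rate(1 day)          # fixed rate: once every day
cron(0 2 * * ? *)    # cron expression: every day at 02:00 UTC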

Get the Volume ID from the EBS volume information, apply it to the Volume ID field and click “Configure details”.

Create more targets if you want to take snapshots of more volumes.

[Screenshot: EC2 CreateSnapshot API call target with Volume ID]

Enter the rule name and description, and click “Create rule”.

[Screenshot: rule name and description]

That’s it. Based on the cloudwatch schedules, the snapshots will be created.

Automate EBS snapshot Creation and Deletion With Lambda Function

If you have a use case where the lifecycle manager does not meet your requirements, you can opt for Lambda-based snapshot creation. Most such use cases involve unscheduled activities.

One use case I can think of is taking snapshots just before updating/upgrading stateful systems. You can have automation that triggers a Lambda function to perform the snapshot action.

Getting Started With Lambda Based EBS snapshot

We will use Python 3 scripts, Lambda, an IAM role, and a CloudWatch event schedule for this setup.

For this lambda function to work, you need to create a tag named “backup” with the value true for all the instances for which you need a backup.

For setting up a lambda function for creating automated snapshots, you need to do the following.

  1. A snapshot creation Python script with the necessary parameters.
  2. An IAM role with snapshot create, modify, and delete access (a sample policy is shown after this list).
  3. A Lambda function.
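
For the IAM role, here is a sample policy sketch covering the snapshot and tagging calls made by the scripts below, plus CloudWatch Logs access for the function's own logging (tighten it further as needed):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeInstances",
        "ec2:DescribeVolumes",
        "ec2:DescribeSnapshots",
        "ec2:CreateSnapshot",
        "ec2:DeleteSnapshot",
        "ec2:ModifySnapshotAttribute",
        "ec2:CreateTags"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "arn:aws:logs:*:*:*"
    }
  ]
}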

Configure Python Script

The following Python code will create snapshots for all the instances that have a tag named “backup.”

Note: You can get all the code from here

import boto3
import collections
import datetime

ec = boto3.client('ec2')

def lambda_handler(event, context):
    # Find all instances carrying a "backup"/"Backup" tag key
    reservations = ec.describe_instances(
        Filters=[
            {'Name': 'tag-key', 'Values': ['backup', 'Backup']},
        ]
    ).get(
        'Reservations', []
    )

    instances = sum(
        [
            [i for i in r['Instances']]
            for r in reservations
        ], [])

    print("Found %d instances that need backing up" % len(instances))

    to_tag = collections.defaultdict(list)

    for instance in instances:
        # A per-instance "Retention" tag overrides the 10-day default
        try:
            retention_days = [
                int(t.get('Value')) for t in instance['Tags']
                if t['Key'] == 'Retention'][0]
        except IndexError:
            retention_days = 10

        for dev in instance['BlockDeviceMappings']:
            if dev.get('Ebs', None) is None:
                continue
            vol_id = dev['Ebs']['VolumeId']
            print("Found EBS volume %s on instance %s" % (
                vol_id, instance['InstanceId']))

            snap = ec.create_snapshot(
                VolumeId=vol_id,
            )

            to_tag[retention_days].append(snap['SnapshotId'])

            print("Retaining snapshot %s of volume %s from instance %s for %d days" % (
                snap['SnapshotId'],
                vol_id,
                instance['InstanceId'],
                retention_days,
            ))

    # Tag each snapshot with the date it should be deleted on
    for retention_days in to_tag.keys():
        delete_date = datetime.date.today() + datetime.timedelta(days=retention_days)
        delete_fmt = delete_date.strftime('%Y-%m-%d')
        print("Will delete %d snapshots on %s" % (len(to_tag[retention_days]), delete_fmt))
        ec.create_tags(
            Resources=to_tag[retention_days],
            Tags=[
                {'Key': 'DeleteOn', 'Value': delete_fmt},
                {'Key': 'Name', 'Value': "LIVE-BACKUP"}
            ]
        )
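
If you want to smoke-test the script locally before deploying it (assuming your machine has AWS credentials with the permissions above), you can call the handler directly:

# Quick local test outside Lambda
if __name__ == '__main__':
    lambda_handler(None, None)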


Also, you can decide on the retention time for the snapshots.

By default, the code sets the retention days to 10. If you want to reduce or increase the retention time, you can change the following parameter in the code. You can also override it per instance by adding a “Retention” tag with the number of days as the value.

retention_days = 10

The Python script tags each snapshot with the key “DeleteOn” and a date value calculated from the retention days. This will help in deleting the snapshots that are older than the retention time.

Lambda Function To Automate Snapshot Creation

Now that we have our Python script ready for creating snapshots, it has to be deployed as a Lambda function.

Triggering the Lambda function totally depends on your use case.

For demo purposes, we will set up cloudwatch triggers to execute the lambda function whenever a snapshot is required.

Follow the steps given below for creating a lambda function.

Step 1: Head over to the Lambda service page and select “Create function”.

[Screenshot: create Lambda function page]

Step 2: Choose “Author from scratch” and a Python 3 runtime. Also, select an existing IAM role with snapshot create permissions.

Click the “Create function” button after filling in the details.

[Screenshot: Lambda function creation details]

Step 3: On the next page, if you scroll down, you will find the function code editor. Copy the python script from the above section to the editor and save it.

[Screenshot: Lambda function code editor]

Once saved, click the “Test” button. It will open an event pop-up. Just enter an event name and click “Create”.

[Screenshot: test event configuration]

Click the “Test” button again and you will see the code being executed, with its logs shown below. As per the code, it should create snapshots of all the volumes of any instance that has the “Backup” tag.

[Screenshot: test execution result and logs]

Step 4: Now you have a Lambda function ready to create snapshots.

You have to decide what triggers you need to invoke the Lambda function. If you click the “Add Trigger” button from the function dashboard, it will list all the possible trigger options as shown below. You can configure one based on your use case. It can be an API Gateway call or a CloudWatch Event trigger like the one I explained above.

[Screenshot: Add Trigger options]

For example, if I choose a CloudWatch Event trigger, it will look like the following.

[Screenshot: CloudWatch Event trigger configuration]

Automated Deletion Of EBS Snapshots Using Lambda

We have seen how to create a lambda function to create snapshots of instances tagged with a “backup” tag. We cannot keep the snapshots piling up over time. That’s the reason we used the retention days in the python code. It tags the snapshot with the deletion date.

The deletion Python script scans for snapshots whose “DeleteOn” tag value matches the current date. If a snapshot matches, the script deletes it. This Lambda function runs every day to remove the old snapshots.

Create a Lambda function with a CloudWatch event schedule of one day. You can follow the same steps I explained above for creating the Lambda function.

Here is the python code for snapshot deletion.

import boto3
import re
import datetime

ec = boto3.client('ec2')
iam = boto3.client('iam')

def lambda_handler(event, context):
    account_ids = list()
    try:
        # If the function runs as an IAM user, read the account ID
        # directly from the user's ARN
        account_ids.append(iam.get_user()['User']['Arn'].split(':')[4])
    except Exception as e:
        # Under an assumed role (the usual case for Lambda), get_user()
        # fails and the error message contains the account ID the
        # function executes under
        account_ids.append(re.search(r'(arn:aws:sts::)([0-9]+)', str(e)).groups()[1])

    # Find snapshots whose DeleteOn tag matches today's date
    delete_on = datetime.date.today().strftime('%Y-%m-%d')
    filters = [
        {'Name': 'tag-key', 'Values': ['DeleteOn']},
        {'Name': 'tag-value', 'Values': [delete_on]},
    ]
    snapshot_response = ec.describe_snapshots(OwnerIds=account_ids, Filters=filters)

    for snap in snapshot_response['Snapshots']:
        print("Deleting snapshot %s" % snap['SnapshotId'])
        ec.delete_snapshot(SnapshotId=snap['SnapshotId'])

How To Restore EBS Snapshot

You can restore a snapshot in two ways.

  1. Restore the EBS Volume from the snapshot.
  2. Restore EC2 Instance from a snapshot

You can optionally change the following while restoring a snapshot (a CLI example follows this list):

  1. Volume Size
  2. Disk Type
  3. Availability Zone
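
For reference, here is the CLI equivalent of the volume restore, with the size, type, and availability zone overridden (the snapshot ID is a placeholder):

aws ec2 create-volume \
    --snapshot-id snap-0abcd1234example \
    --availability-zone us-east-1a \
    --volume-type gp3 \
    --size 100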

Restore EBS Volume from Snapshot

Follow the steps given below to restore a snapshot to an EBS volume.

Step 1: Head over to snapshots, select the snapshot you want to restore, select the “Actions” dropdown, and click create volume.

[Screenshot: Create Volume action on a snapshot]

Step 2: Fill in the required details and click the “Create Volume” option.

[Screenshot: create volume details]

That’s it. Your volume will be created. You can mount this volume to the required instance to access its data.

Restore EC2 Instance From Snapshot

You can restore an ec2 instance with two simple steps:

  1. Create an image (AMI) from the snapshot.
  2. Launch an instance from the AMI created from the snapshot.
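
If you want to script these two steps, a rough CLI sketch looks like this (the IDs, image name, and device names are placeholders; match the root device name to the one used by the original instance):

# Step 1: register an AMI from the snapshot
aws ec2 register-image \
    --name restored-from-snapshot \
    --architecture x86_64 \
    --virtualization-type hvm \
    --root-device-name /dev/xvda \
    --block-device-mappings '[{"DeviceName": "/dev/xvda", "Ebs": {"SnapshotId": "snap-0abcd1234example"}}]'

# Step 2: launch an instance from the returned AMI ID
aws ec2 run-instances --image-id ami-0abcd1234example --instance-type t3.micro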

To do this from the console, follow the steps below.

Step 1: Head over to snapshots, select the snapshot you want to restore, select the “Actions” dropdown, and click create image.

[Screenshot: Create Image action on a snapshot]

Step 2: Enter the AMI name, description, and modify the required parameters. Click “Create Image” to register the AMI.

[Screenshot: create image details]

Step 3: Now, select AMIs from the left panel menu, select the AMI, and from the “Actions” drop-down, select launch.

It will take you to the generic instance launch wizard. You can launch the VM as you normally do with any ec2 instance creation.

[Screenshot: instance launch wizard]

EBS Snapshot Features

Following are the key features of EBS snapshots.

  1. Snapshot backend storage is S3: Whenever you take a snapshot, it gets stored in S3.
  2. EBS snapshots are incremental: Every time you request a snapshot of your EBS volume, only the data that changed since the last snapshot (the delta) is copied. So irrespective of the number of snapshots, you only pay for the changed data present in the volume; unchanged data is never duplicated between snapshots. For example, your disk storage can be 20 GB while the total snapshot storage is 30 GB due to the changes recorded at every snapshot creation. You can read more about this here

EBS Snapshot Best Practices

Following are some best practices you can follow to manage EBS snapshots.

  1. Standard tagging: Tag your EBS volumes with standard tags across all your environments. This enables well-managed snapshot lifecycle management using the Lifecycle Manager. Tags also help in tracking the cost associated with snapshots, and you can have billing reports based on tags.
  2. Application data consistency: To have consistent snapshot backups, it is recommended to pause the I/O activity on your disk while performing the disk snapshot.
  3. Simultaneous snapshot requests: Snapshots do not affect disk performance; however, many simultaneous requests could affect disk performance.