Serverless Framework Tutorial for Beginners Using AWS Lambda


Serverless architecture is gaining popularity, and it is being used by many organizations for their operational tasks. There are many companies that use serverless services like Lambda for their microservices architecture.

AWS, Google Cloud, and Azure provide good web portals, CLIs, and SDKs for their respective serverless services. However, when you go through a CI/CD process, you need a good framework beyond the CLIs and SDKs for traceability and management. This is where the Serverless Framework comes into play. It provides a framework for deploying serverless code to AWS Lambda, Google Cloud Functions, and Azure Functions. You can organize your serverless deployment using configuration files provided by this framework.

We have done a basic deployment on AWS Lambda using this framework and we loved it. This guide will help you get started with the Serverless Framework on AWS Lambda.

Installation and Configuration

[alert-warning]You should have awscli installed and configured on your system. The Serverless Framework uses the ~/.aws/credentials file for deploying Lambdas.[/alert-warning]

1. Install Node.js. Follow the steps from here → Latest Nodejs Installation

2. Install the serverless framework.

npm install -g serverless

3. Check the installation using the following command.

serverless --version

Getting Started

Let’s get started with a demo python application.

Step 1: cd into your project directory. You can use any folder of your choice.

Step 2: Create a basic Python service template. The following command creates a Python-based serverless template.

serverless create --template aws-python --path devopscube-demo

The output would look like the following.

Serverless: Generating boilerplate...
Serverless: Generating boilerplate in "/serverless-demo/devopscube-demo"
 _______                             __
|   _   .-----.----.--.--.-----.----|  |.-----.-----.-----.
|   |___|  -__|   _|  |  |  -__|   _|  ||  -__|__ --|__ --|
|____   |_____|__|  \___/|_____|__| |__||_____|_____|_____|
|   |   |             The Serverless Application Framework
|       |                           serverless.com, v1.15.3
 -------'

Serverless: Successfully generated boilerplate for template: "aws-python"

It will create the following folder structure.

|-- devopscube-demo
|   |-- handler.py
|   |-- serverless.yml

Step 3: cd into devopscube-demo. This folder contains two files, handler.py and serverless.yml. handler.py contains the Python code that will be deployed to Lambda. serverless.yml contains the configuration that tells the Serverless Framework how and what events it should associate with the given Lambda function. You can specify the function name in the serverless.yml file. In our case, it is the default hello.

cd devopscube-demo
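For reference, the generated serverless.yml boils down to something like the following. This is a trimmed sketch, not the exact generated file: the boilerplate also contains commented-out examples, and the default runtime may differ by framework version.

```yaml
service: devopscube-demo

provider:
  name: aws
  runtime: python2.7

functions:
  hello:
    handler: handler.hello
```

The handler value is module.function, so handler.hello means the hello function inside handler.py.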

Step 4: Let’s deploy the basic service to AWS. This creates all the basic resources on AWS for deploying the Lambda function (an IAM role, an S3 bucket to hold the artifact, a CloudFormation template for the Lambda, and a CloudWatch log group for the Lambda logs).

serverless deploy -v

In your AWS dashboard, you can see the created lambda function.

Step 5: Now, let’s invoke our basic python function.

serverless invoke -f hello -l

You will see a successful execution with the following output.

{
    "body": "{\"input\": {}, \"message\": \"Go Serverless v1.0! Your function executed successfully!\"}",
    "statusCode": 200
}
START RequestId: ebb938e0-5803-11e7-825b-8f51036d398a Version: $LATEST
END RequestId: ebb938e0-5803-11e7-825b-8f51036d398a
REPORT RequestId: ebb938e0-5803-11e7-825b-8f51036d398a	Duration: 0.26 ms	Billed Duration: 100 ms 	Memory Size: 1024 MB	Max Memory Used: 19 MB

Step 6: Now let’s remove the default Python program and add our own, which returns the sum of two numbers. Our new handler.py would look like the following.

  def hello(event, context):
      a = 1
      b = 4
      return a + b

Step 7: Since we have updated our function, we need to deploy it again so that Lambda picks up the new code. Let’s deploy it using the following command.

serverless deploy function -f hello

Once deployed, you can invoke the function again using the invoke command.

serverless invoke -f hello -l
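You can also sanity-check the handler locally by calling it with a dummy event. Lambda passes an event dict and a context object, and for this simple handler an empty dict and None are enough. This is a quick local sketch, not part of the deployed code:

```python
# Local sanity check for the updated handler.
# Lambda calls hello(event, context); this handler ignores both arguments.

def hello(event, context):
    a = 1
    b = 4
    return a + b

result = hello({}, None)
print(result)  # prints 5
```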

Adding More Functions

You can add more functions to your existing template. Here is what you have to do.

1. Create a Python file named testing.py in your project directory, alongside the handler.py file.

2. Add the following code to the testing.py file.

  import json

  def demo(event, context):
      body = {
          "message": "Go Serverless v1.0! Your function executed successfully!",
          "input": event
      }

      response = {
          "statusCode": 200,
          "body": json.dumps(body)
      }

      return response
The above code contains a Python function called demo. In our first example, it was hello.

3. Add a new function called demo in the serverless.yml file along with your first hello function. Your functions definition would look like the following.

      functions:
        hello:
          handler: handler.hello
        demo:
          handler: testing.demo

4. Now that we have added a new handler and a function, we should redeploy the service.

serverless deploy -v

5. Once deployed, invoke the new function.

serverless invoke -f demo -l

6. If you edit the code, you can update and re-invoke it using the following commands.

serverless deploy function -f demo
serverless invoke -f demo -l
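As with the first function, you can exercise demo locally before deploying to see the LAMBDA-PROXY style response shape. A quick local sketch; the sample event here is made up:

```python
import json

def demo(event, context):
    body = {
        "message": "Go Serverless v1.0! Your function executed successfully!",
        "input": event
    }
    return {"statusCode": 200, "body": json.dumps(body)}

# Invoke locally with a dummy event; the body comes back as a JSON string.
response = demo({"name": "devopscube"}, None)
print(response["statusCode"])                 # prints 200
print(json.loads(response["body"])["input"])  # prints {'name': 'devopscube'}
```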

There are many more features in this framework. If you are using or thinking about using AWS Lambda, Google Cloud Functions, or Azure Functions, you should definitely try out the Serverless Framework. In this serverless framework tutorial, we have just scratched the surface. We will be covering more topics on the serverless framework.


Elasticsearch Tutorial For Beginners – Getting Started Series


Elasticsearch is one of the most popular open source tools in the search and indexing category. It is used by highly respected organizations like Wikipedia, LinkedIn, etc. The project started in 2010. Its core is the Lucene indexing engine, with an HTTP interface for communicating with that core. Elasticsearch is highly scalable and lightning fast.

[alert-announce]You Might Like: Elasticsearch Online Training[/alert-announce]

Elasticsearch Tutorial

In this tutorial series, I will cover elasticsearch installation, cluster setup, index creation strategies, backups, client nodes, and much more. Throughout this series of posts, I will teach you how to set up a production-ready elasticsearch cluster even if you don’t have any prior knowledge of elasticsearch.

Note: This article is focused on IT Ops/DevOps folks.

Elasticsearch Operations

Unlike a traditional SQL database, elasticsearch is distributed, and it can scale horizontally. This type of scaling allows you to add many nodes to process the requests and to handle the load.

To understand its distributed nature, you should understand the basic building blocks of it. Let’s have a look at its basic building blocks.

1. Indexes: All the data you store in elasticsearch is stored in the form of indexes. Adding data to an index is called indexing.

2. Shards: All the indexes are stored in shards. A shard is a Lucene database, and it is the scalable unit of elasticsearch. The rule of thumb is to have at least one shard in a node.

You can store a single index in multiple shards on a single node. However, it does not make any sense to have replicas on the same node. So when you add a node to the elasticsearch cluster, it gets added as a peer, and shards get migrated to the new node for an even distribution of shards. This process is termed “rebalancing”.

3. Replicas: The duplicates of a shard are known as replicas. For high availability, you can have the shard duplicates distributed across the cluster.
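To see how replicas multiply storage, note that the total number of shards an index occupies is the number of primary shards times one plus the replica count. A small sketch; the 5-primary/1-replica figures below are just illustrative defaults from this era of elasticsearch:

```python
def total_shards(primaries, replicas):
    # Each primary shard is duplicated `replicas` times across the cluster.
    return primaries * (1 + replicas)

print(total_shards(5, 1))  # prints 10
```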

Node Roles: The nodes in the cluster fall under different roles: the data node, the master node, and the client node. The default installation has all three roles set up on one server. However, with some fine-tuning, you can set up these roles on different servers for high availability and better performance.

1. Data Node: It contains all the data and shards. Its primary function is to house the data; it is not the entry point for query requests.

2. Client Node: Client node is the entry point for elasticsearch queries. It receives all the queries and routes them to data nodes.

3. Master Node: This node maintains the cluster and updates the cluster state. All the nodes in the cluster hold a copy of the cluster state, but only the master node can update it.

[irp posts=”551″ name=”How To Setup an Elasticsearch Cluster – Beginners Guide”]

Cluster Capacity Planning

The amount of resources you need to set up the cluster can only be determined by the amount of data you are going to process. You can insert data into a single-node cluster and perform a test by checking the CPU and memory utilization. If there is enough CPU and memory available, you can insert more data and repeat the tests.

By repeated testing, you will know the number of nodes you need to have in a production cluster.

For example, if you have 500k documents and the query response takes 4 seconds, you might need four data nodes to reduce the response time to 1 second.
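That estimate assumes query latency scales roughly inversely with the number of data nodes, which is a simplification. As a back-of-the-envelope sketch (the function name and numbers are illustrative, not a real sizing formula):

```python
import math

def data_nodes_needed(current_nodes, current_latency, target_latency):
    # Rough estimate: assumes latency drops linearly as nodes are added.
    return math.ceil(current_nodes * current_latency / target_latency)

print(data_nodes_needed(1, 4.0, 1.0))  # prints 4
```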

Next, you need to plan your master nodes. The official Elasticsearch site recommends at least three master-eligible nodes in a production cluster so that the cluster can maintain a quorum of two.
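The quorum for master election is a strict majority of the master-eligible nodes, which is why three masters give a quorum of two and can survive the loss of one. A small sketch; in elasticsearch of this era, the result is what discovery.zen.minimum_master_nodes should be set to:

```python
def quorum(master_eligible_nodes):
    # A strict majority: more than half of the master-eligible nodes.
    return master_eligible_nodes // 2 + 1

print(quorum(3))  # prints 2
```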

Finally, you will need more than one client node behind a load balancer so that the load on the data nodes can be reduced.

So as per our design, there will be two client nodes, three master nodes, and four data nodes for processing 500k documents. This is just an example and is not based on any tests.

Wrapping Up

In this elasticsearch tutorial, we went through the basic concepts involved in elasticsearch. In the next article, I will teach you how to set up an elasticsearch cluster using client, master, and data nodes.


How to Setup Google Cloud CLI/SDK – Beginner Guide


This tutorial will guide you through the steps for setting up the Google Cloud SDK on your workstation.

Note: This tutorial works only on Mac and Linux systems.

Step 1: Download the appropriate SDK package for your platform from the Google Cloud SDK downloads page.

Step 2: Untar the SDK package.

tar -xvf google-cloud-sdk*

Step 3: Install the SDK using the following command.

./google-cloud-sdk/install.sh

Step 4: Follow the installation instructions and select the required options.

Configuring Google Cloud SDK

Follow the steps given below to configure the Google Cloud SDK.

1. Initialize the SDK using the following command.

gcloud init

2. Accept the Google login option to log in to your Google Cloud account.

To continue, you must log in. Would you like to log in (Y/n)? Y

3. From the browser, log in to your Google Cloud account and grant permissions to access Google Cloud resources.

4. At the command prompt, you will be presented with options for the initial configuration, which are self-explanatory.

Testing the CLI setup

Now let’s run some basic gcloud CLI commands to verify the installation.

1. List the credentialed accounts.

gcloud auth list

2. List the SDK configuration.

gcloud config list

3. List all the local gcloud configurations and files.

gcloud info

4. To list all the gcloud commands, use the following command.

gcloud help

Create an Instance Using CLI

To start with, we will create an instance using the CLI.

1. Get the list of images using the following command.

gcloud compute images list

2. The following command creates an f1-micro CentOS instance. You can refer to the official documentation for more information on the flags.

gcloud compute instances create devopscube-demo-instance \
 --image centos-7-v20170523 \
 --image-project centos-cloud --machine-type f1-micro --zone us-central1-a

Connecting to the Instance via SSH

To connect to the instance via SSH, just execute the following command. The gcloud command will automatically create an SSH key in your ~/.ssh folder if it doesn’t exist and connect to your instance.

gcloud compute ssh (instance-name)

For example,

gcloud compute ssh devopscube-demo-instance

Deleting the Instance

You can delete the created instance using the following command.

gcloud compute instances delete (instance-name)

For example,

gcloud compute instances delete devopscube-demo-instance