How To Run Docker in Docker Container [3 Easy Methods]

run docker in docker using three easy methods

In this blog, I will walk you through the steps required to run docker in docker using three different methods.

Docker in Docker Use Cases

Here are a few use cases to run docker inside a docker container.

  1. One potential use case for docker in docker is for the CI pipeline, where you need to build and push docker images to a container registry after a successful code build.
  2. Building Docker images on a VM is pretty straightforward. However, when you plan to use Jenkins Docker-based dynamic agents for your CI/CD pipelines, Docker in Docker becomes a must-have capability.
  3. Sandboxed environments.
  4. For experimental purposes on your local development workstation.

Run Docker in a Docker Container

There are three ways to achieve docker in docker:

  1. Running docker by mounting docker.sock (DooD method)
  2. Using the dind method
  3. Using the Nestybox Sysbox Docker runtime

Let’s have a look at each option in detail. Make sure you have docker installed on your host to try this setup.

Method 1: Docker in Docker Using [/var/run/docker.sock]


What is /var/run/docker.sock?

/var/run/docker.sock is the default Unix socket. Sockets are meant for communication between processes on the same host. The Docker daemon listens on docker.sock by default. If you are on the same host where the Docker daemon is running, you can use /var/run/docker.sock to manage containers.

For example, if you run the following command, it would return the version of the docker engine.

curl --unix-socket /var/run/docker.sock http://localhost/version
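The same version query can be issued from any language that can speak HTTP over a Unix socket. Here is a minimal Python sketch of the curl call above; the `UnixHTTPConnection` helper and the return-None-on-failure behavior are my own additions for illustration, not part of any Docker SDK:

```python
import http.client
import json
import os
import socket


class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTPConnection that talks to a Unix domain socket instead of TCP."""

    def __init__(self, socket_path):
        super().__init__("localhost")  # host header only; no TCP connection is made
        self.socket_path = socket_path

    def connect(self):
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(self.socket_path)
        self.sock = sock


def docker_version(socket_path="/var/run/docker.sock"):
    """Return the daemon's version string, or None if the socket is absent/unusable."""
    if not os.path.exists(socket_path):
        return None
    try:
        conn = UnixHTTPConnection(socket_path)
        conn.request("GET", "/version")
        body = conn.getresponse().read()
        conn.close()
        return json.loads(body).get("Version")
    except OSError:
        return None


print(docker_version())
```

On a host with the daemon running, this prints the engine version, mirroring the curl output; otherwise it prints None.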

Now that you have a bit of understanding of what docker.sock is, let’s see how to run docker in docker using docker.sock.

To run docker inside docker, all you have to do is run a container with the default Unix socket docker.sock mounted as a volume.

For example,

docker run -v /var/run/docker.sock:/var/run/docker.sock \
           -ti docker-image

Just a word of caution: if your container gets access to docker.sock, it effectively has full privileges over your Docker daemon. So when using this in real projects, understand the security risks before you proceed.

Now, from within the container, you should be able to execute docker commands for building and pushing images to the registry.

Here, the actual docker operations happen on the VM host running your base docker container rather than from within the container. Meaning, even though you are executing the docker commands from within the container, you are instructing the docker client to connect to the VM host docker-engine through docker.sock

To test this setup, use the official docker image from Docker Hub. It has the docker binary in it.

Follow the steps given below to test the setup.

Step 1: Start Docker container in interactive mode mounting the docker.sock as volume. We will use the official docker image.

docker run -v /var/run/docker.sock:/var/run/docker.sock -ti docker

Step 2: Once you are inside the container, execute the following docker command.

docker pull ubuntu

Step 3: When you list the docker images, you should see the ubuntu image along with other docker images in your host VM.

docker images

Step 4: Now create a Dockerfile inside test directory.

mkdir test && cd test
vi Dockerfile

Copy the following Dockerfile contents to test the image build from within the container.

FROM ubuntu:18.04

LABEL maintainer="Bibin Wilson <[email protected]>"

RUN apt-get update && \
    apt-get -qy full-upgrade && \
    apt-get install -qy curl && \
    curl -sSL https://get.docker.com/ | sh

Build the Dockerfile

docker build -t test-image .

Method 2: Docker in Docker Using dind


This method actually creates a child container inside a container. Use this method only if you really want to have the containers and images inside the container. Otherwise, I would suggest you use the first approach.

For this, you just need to use the official docker image with the dind tag. The dind image is baked with the required utilities for Docker to run inside a docker container.

Follow the steps to test the setup.

Note: This requires your container to be run in privileged mode.

Step 1: Create a container named dind-test with docker:dind image

docker run --privileged -d --name dind-test docker:dind

Step 2: Log in to the container using exec.

docker exec -it dind-test /bin/sh

Now, perform steps 2 to 4 from the previous method and validate docker command-line instructions and image build.

Method 3: Docker in Docker Using Sysbox Runtime


Methods 1 & 2 have some disadvantages in terms of security because the base containers run in privileged mode. Nestybox tries to solve that problem with its Sysbox Docker runtime.

If you create a container using the Nestybox Sysbox runtime, it can create virtual environments inside the container that are capable of running systemd, Docker, and even Kubernetes without privileged access to the underlying host system.

A full explanation of Sysbox is beyond the scope of this post, so I have excluded it. Please refer to this page to understand Sysbox fully.

To get a glimpse of it, let us now try out an example.

Step 1: Install sysbox runtime environment. Refer to this page to get the latest official instructions on installing sysbox runtime.

Step 2: Once you have the sysbox runtime available, all you have to do is start the docker container with a sysbox runtime flag as shown below. Here we are using the official docker dind image.

docker run --runtime=sysbox-runc --name sysbox-dind -d docker:dind

Step 3: Now take an exec session to the sysbox-dind container.

docker exec -it sysbox-dind /bin/sh

Now, you can try building images with the Dockerfile as shown in the previous methods.

Key Considerations

  1. Use Docker in Docker only if it is a requirement. Do the POCs and enough testing before migrating any workflow to the Docker-in-Docker method.
  2. While using containers in privileged mode, make sure you get the necessary approvals from enterprise security teams on what you are planning to do.
  3. When using Docker in Docker with Kubernetes pods, there are certain challenges. Refer to this blog to know more about it.
  4. If you plan to use Nestybox (Sysbox), make sure it is tested and approved by enterprise architects/security teams.


Here are some frequently asked docker in docker questions.

Is running Docker in Docker secure?

Running docker in docker using the docker.sock and dind methods is less secure, as the container gets complete privileges over the Docker daemon.

How to run docker in docker in Jenkins?

You can use the Jenkins dynamic docker agent setup and mount the docker.sock to the agent container to execute docker commands from within the agent container.


List Of 14 Best Open Source & Free Monitoring Tools

Best Opensource/Free Monitoring Tools

Monitoring is necessary for businesses to make sure that their systems are up and working. Monitoring the various aspects of an IT infrastructure can be quite pesky and cause a lot of difficulties if not done properly. Regardless of the size of the company, one cannot ignore the need for server, network, and infrastructure monitoring.

All modern cloud and on-premise infrastructure comes with robust monitoring solutions. Sometimes it is wise to make use of the default monitoring systems that come with the infrastructure providers. However, open-source monitoring tools provide a lot of functionality to monitor your infrastructure components.

Following are the key areas when it comes to monitoring.

  1. Real-time Server Monitoring
  2. Network Performance monitoring
  3. Container Monitoring (Docker/Kubernetes/Mesos etc)
  4. Cloud Infrastructure monitoring (Public & Private)
  5. Application monitoring

List Of Best Opensource Monitoring Tools

Professional or Business-grade tech solutions are generally regarded as costly, but that’s not necessarily always the case.

A monitoring software should be:

  1. Scalable
  2. Able to handle and process huge amounts of monitoring data
  3. Able to collect system/application metrics in real time
  4. Highly available
  5. Able to support all modern cloud and containerized applications
  6. Able to integrate with metric visualization tools
  7. Equipped with a good, user-friendly interface

There are numerous absolutely free and open-source network monitoring tools that can be considered while looking for monitoring solutions. Let’s take a look at the top-rated open-source monitoring tools and see what works for you!

1. Prometheus

Prometheus is an open-source monitoring solution focused primarily on gathering and analyzing time-series data. It enables users to set up monitoring capabilities by utilizing the in-built toolset. It is an ideal monitoring setup for containerized environments like Kubernetes.

Tutorial: How To Install and Configure Prometheus

It can assemble information on various devices using SNMP pings and inspect network bandwidth usage from the device's point of view, along with other functions. The PromQL query language analyzes the data and allows the program to produce plots, tables, and other graphics on the systems it monitors.

Alertmanager is another component of Prometheus. It handles alerting for all the alerting rules configured in Prometheus.

Prometheus can collect system metrics, application metrics, and metrics from modern containerized applications. Also, it has very good integration with tools like Grafana for visualizing metrics.

2. Riemann

Riemann is an ideal monitoring tool for distributed systems. It's a low-latency event processing system capable of collecting metrics from a variety of distributed systems, and it is designed to handle millions of events per second with low latency. It is an apt monitoring tool for highly distributed, scalable systems.

3. Sensu

Sensu is marketed as a full-stack monitoring tool. By means of a single platform, you can monitor services, applications, and servers, and report on business KPIs. Its monitoring does not require a separate workflow. It supports all the popular operating systems, like Windows, Linux, etc.

You Might Like: 15 DevOps Tools for Infrastructure Automation

4. Zabbix

Zabbix is open-source monitoring software with an easy-to-use interface and a low learning curve that provides enterprise-class solutions to large organizations. It is a centralized system that stores the data in a relational database for efficient processing.

5. Nagios

Nagios is an open-source monitoring tool that has been in the market since 1999. It provides numerous facilities, like integration with third-party apps, using additional plugins. Considering the length of time that Nagios has been in the industry, there are plenty of plugins written for it. It can monitor a variety of components, including OSes, applications, websites, middleware, web servers, etc.

6. Icinga

Icinga is an open-source network monitoring tool that measures the availability and performance of the network. Through a web interface, your business can observe applications and hosts around your complete network infrastructure. The tool is scalable and easily configurable to function with each type of device. There are a few Icinga modules for very specific monitoring capabilities, like monitoring VMware's vSphere cloud environment and business process modeling.

7. Cacti

Cacti is an open-source network monitoring tool built on RRDTool's data classification and plotting system. The tool utilizes data gathering functionality and network polling to collect information on various devices on networks of any scope. This includes the capability to create custom scripts for data gathering, along with the facility for SNMP polling. It then showcases this information in easy-to-comprehend plots, which can be organized in whatever hierarchy suits your business.

8. LibreNMS

LibreNMS is an open-source network monitoring system that utilizes multiple network protocols to observe every device on your network. The LibreNMS API can retrieve, manage, and plot the data it collects and facilitates horizontal scaling to grow its monitoring abilities along with your network. The tool presents a flexible alerting system that is custom-made to communicate with you by the method that suits your company best. They also offer iOS and Android apps.

9. Observium Community

Observium Community is the free counterpart of Observium's network monitoring tool. In the free version, you can monitor a limitless number of devices along with taking complete advantage of Observium's network mapping attributes. The Observium network monitoring tool features the automated discovery of connected devices. It also comes well-appointed with discovery protocols to make sure that the map of your network stays up to date. In this manner, you can keep track of new devices as they connect to the network.

10. Pandora FMS

Pandora FMS is an open-source monitoring tool that helps businesses observe their whole IT infrastructure. It not only features network monitoring capabilities but also covers Unix and Windows servers and virtual interfaces. For networks, Pandora FMS includes top-notch features like SNMP support, ICMP polling, network latency monitoring, and system overload detection. Agents can also be installed on devices to observe aspects like device temperature and overheating, as well as log file events.

11. LogRhythm NetMon Freemium

LogRhythm NetMon Freemium is a free version of LogRhythm NetMon that offers the same business-grade packet capture and analysis abilities as the full version. Though there are restrictions or limits on data processing and packet storage, the freemium version still permits users to perform network risk detection and response functions built on data packet analysis. It also offers a similar network threat alerting system as the full version, letting you stay updated on your network's performance and security.

12. SolarWinds Real-Time Bandwidth Monitor

SolarWinds Real-Time Bandwidth Monitor is a no-cost bandwidth monitoring tool. The tool keeps tabs on bandwidth usage live and displays plots of your network's bandwidth based on bandwidth polling. The tool notifies you when bandwidth usage enters a critical state, letting your business know right away when your network's bandwidth is running short. Critical bandwidth usage levels can be custom defined so the tool knows exactly when the devices on your network are using more bandwidth than required.

13. Famatech Advanced IP Scanner

Famatech’s Advanced IP Scanner is a free network monitoring and scanning tool that offers analysis of Local Area Networks and LAN devices. Advanced IP Scanner can scan devices on the network and remotely control the connected computers and other resources. It can even switch computers off from the tool if a device is not in use but still consuming resources. The tool integrates with Famatech's Radmin solution for remote IT management, so you can manage IPs wherever you are.

14. AppNeta PathTest

AppNeta PathTest is a free network capacity testing tool intended to help businesses understand the true capacity of their network. PathTest seeks to improve layer three and layer four performance by exhibiting a precise depiction of your network's maximum capabilities. It deliberately floods your network with data packets to fill the network to its full capacity. Users can set the duration of this test up to a maximum of 10 seconds and run the tests at any time.


Monitoring gives supervisors a crisp view of the services, applications, and devices running on their network and the ability to track the performance of those resources. This facilitates proactive management rather than responding to issues as and when they happen.

Monitoring tools are used to watch the status of the systems in use, in order to surface warnings about defects, failures, or issues and to act on them. There are monitoring tools for servers, networks, cloud infrastructure, containers, databases, security, performance, website and web usage, and applications.

Opting for an appropriate monitoring solution for your business is not as easy as it seems. IT professionals like network and DevOps engineers need to consider multiple factors while searching for monitoring solutions for their enterprise, such as compatibility, features, ease of use, and budget.


How to Setup Docker Containers as Build Slaves for Jenkins


The resource utilization of Jenkins slaves is very low if you do not have builds happening continuously. In this scenario, it is better to use ephemeral Docker containers as Jenkins build slaves for better resource utilization.

As you know, spinning up a new container takes less than a minute; every build spins up a new container, builds the project, and the container is destroyed. This way, you can reduce the number of static Jenkins build VMs.


In this guide, I will walk you through the steps for configuring docker containers as build slaves.

I assume that you have a Jenkins server up and running. If you do not have one, follow this tutorial. How to setup Jenkins 2

If you want docker based Jenkins setup, you can follow this tutorial -> Setup Jenkins On a Docker container

Let’s Implement It

Configure a Docker Host With Remote API [Important]

The first thing we should do is set up a docker host. The Jenkins server will connect to this host to spin up the slave containers. I am going to use a CentOS server as my docker host. You can use any OS which supports Docker.

The Jenkins master connects to the docker host using REST APIs, so we need to enable the remote API for our docker host.


Make sure the following ports are enabled in your server firewall to accept connections from Jenkins master.

Docker remote API port: 4243
Docker host port range: 32768 to 60999

Ports 32768 to 60999 are used by Docker to assign a host port for Jenkins to connect to the container. Without this connection, the build slave would go into a pending state.

Let’s get started.

Step 1: Spin up a VM and install docker on it. You can follow the official documentation for installing docker based on the Linux distribution you use. Make sure the docker service is up and running.

Step 2: Log in to the server and open the docker service file /lib/systemd/system/docker.service. Search for ExecStart and replace that line with the following.

ExecStart=/usr/bin/dockerd -H tcp:// -H unix:///var/run/docker.sock

Step 3: Reload and restart docker service.

sudo systemctl daemon-reload
sudo service docker restart

Step 4: Validate the API by executing the following curl command. Replace localhost with your host IP if you are testing from a remote machine.

curl http://localhost:4243/version

Check the docker remote API article for a detailed explanation of Docker API.

Once you have enabled and tested the API, you can start building the docker slave image.

Create a Jenkins Agent Docker Image

I have created a Jenkins docker image for maven. You can use this image or use its Dockerfile as a reference for creating your own.

If you are creating the image on your own, it should contain the following minimum configuration to act as a slave.

  1. sshd service running on port 22.
  2. Jenkins user with password.
  3. All the required application dependencies for the build. For example, for a java maven project, you need to have git, java, and maven installed on the image.

Make sure the sshd service is running and that you can log in to the container using a username and password. Otherwise, Jenkins will not be able to start the build process.

Note: The default ssh username is jenkins and password is also jenkins as per the given Dockerfile. You will have to use these credentials in the below configuration.

Configure Jenkins Server

Step 1: Head over to Jenkins Dashboard –> Manage Jenkins –> Manage Plugins.

Step 2: Under the Available tab, search for “Docker” and install the docker cloud plugin and restart Jenkins. Here is the official plugin site. Make sure you install the right plugin as shown below.


Step 3: Once installed, head over to Jenkins Dashboard –> Manage Jenkins –>Configure system.

Step 4: Under “Configure System”, if you scroll down, there will be a section named “Cloud” at the bottom. There you can fill out the docker host parameters for spinning up the slaves.

Note: In Jenkins versions 2.200 or later, you will find the dedicated cloud configuration under Manage Jenkins –> Manage Nodes and Clouds.

Step 5: Under docker, you need to fill out the details as shown in the image below.

Note: Replace “Docker URI” with your docker host IP, for example, tcp://<docker-host-ip>:4243. You can use “Test connection” to check whether Jenkins is able to connect to the Docker host.


Step 6: Now, from “Docker Agent Template” dropdown, click the “Add Docker template” and fill in the details based on the explanation and the image given below and save the configuration.

  1. Labels – Identification for the docker host. It will be used in the job configuration. Here we use java-docker-slave.
  2. Name – Name of the docker template. Here we use the same name as the label, i.e., java-docker-slave.
  3. Docker Image – bibinwilson/jenkins-slave:latest, or the image that you created for the slave.
  4. Remote File System Root – Home folder for the user you have created. In our case, it’s /home/jenkins.
  5. Credentials – Click add and enter the SSH username and password that you created for the docker image. Leave the rest of the configuration as shown in the image below and click save. If you are using my Docker image, the username is jenkins and the password is also jenkins.

Note: There are additional configurations like registry authentication and container settings that you might have to use when configuring this set up in the corporate network.


You can also use JNLP-based slave agents. For this, the configuration needs a little change, as shown below: primarily the docker image name and the connect method.

Note: For JNLP to work, you need to enable the JNLP connection port (50000) in Jenkins’s global security configuration (TCP port for inbound agents). Also, the Jenkins master firewall should accept this connection from the docker host.


By default, the workspace will not be persisted in the host. However, if you want the workspace to be persistent, add a host volume path under container settings.

For example, if you want the workspace to be available at /home/ubuntu, you can add the volume path as shown below. /home/jenkins is the path inside the container.


Towards the right of the Volumes option, if you click the question mark, it will show you additional volume options as shown below.


If you are planning to run docker in docker for your CI process, you can mount the host docker.sock as volume to execute docker commands. Check out my article on running docker in docker to know more about it.

Test Docker Slaves Using FreeStyle Job

Now that you have the slave configurations ready,

  1. Create a freestyle job, select the “Restrict where this project can be run” option, and select the docker host as a slave using the label.
  2. Add a shell build step that echoes a simple “Hello World”.

If you have done all the configurations right, Jenkins will spin up a container, build the project, and destroy the container once the build is done.

First, you will see a pending notification as Jenkins deploys a container at run time and establishes an SSH connection. After a few seconds, your job will start building.


You can check the build logs in your job’s console output as shown below.


Also, you can check out the video explaining the whole process.

Possible Errors:

  1. Jenkins is not able to deploy containers on the host: Make sure you have proper connectivity to the docker host on the API port.
  2. Jenkins builds go into a pending state forever: Make sure the Docker host ports (32768 to 60999) are accessible from the Jenkins master.
  3. JNLP slaves go into a pending state: Make sure you have enabled the JNLP port in the Jenkins global security configuration.


In this article, I walked you through the process of setting up dynamic slaves using Docker.

It can be further customized to fit your specific use cases.

Please let me know your thoughts in the comment section. Also, don’t forget to share this article 🙂


How to Use Parameters in Jenkins Declarative Pipeline


In Jenkins’s declarative pipeline, you can add parameters as part of the Jenkinsfile. There are many supported parameter types that you can use with a declarative pipeline.

In this blog, you have answers to the following.

  1. How to use parameters in the declarative pipeline?
  2. How to use dynamic parameters or active choice parameters in the declarative pipeline?

Generating Pipeline Code for Parameters

You can generate the parameter pipeline code block easily using the Jenkins pipeline generator. You will find the Pipeline syntax generator link under all the pipeline jobs, as shown in the image below.

Jenkins pipeline syntax generator

Navigate to the pipeline generator in Jenkins and under steps, search for properties, as shown below.

Generating Jenkins parameter code for declarative pipeline

Using Parameters in Jenkinsfile

Here is an example of a ready-to-use Jenkins declarative pipeline with parameters.

This script has the following parameter types.

  1. Choice parameters
  2. Boolean parameter
  3. Multi-line string parameter
  4. String Parameter

Here is the Github link for this code.

pipeline {
    agent any
    stages {
        stage('Setup parameters') {
            steps {
                script {
                    properties([
                        parameters([
                            choice(
                                choices: ['ONE', 'TWO'],
                                name: 'PARAMETER_01'
                            ),
                            booleanParam(
                                defaultValue: true,
                                description: '',
                                name: 'BOOLEAN'
                            ),
                            text(
                                defaultValue: '''
                                this is a multi-line 
                                string parameter example
                                ''',
                                name: 'MULTI-LINE-STRING'
                            ),
                            string(
                                defaultValue: 'scriptcrunch',
                                name: 'STRING-PARAMETER',
                                trim: true
                            )
                        ])
                    ])
                }
            }
        }
    }
}
Note: The parameters specified in the Jenkinsfile will appear in the job only after the first run. Your first job run will fail as you will not be able to provide the parameter value through the job.

Access Parameters Inside Pipeline Stages

You can access a parameter in any stage of a pipeline. Accessing parameters in stages is pretty straightforward. You just have to use params.[NAME] in places where you need to substitute the parameter.

Here is an example of a stage that will be executed based on the condition that we get from the choice parameter.

The parameter name is ENVIRONMENT, and we access it in the stage as params.ENVIRONMENT. So when the choice parameter matches PROD, it will execute the steps mentioned in the stage.

stage('Deploy to Production') {
    when {
        expression {
            return params.ENVIRONMENT == 'PROD'
        }
    }
    steps {
        sh """
        echo "deploy to production"
        """
    }
}
Using Active Choice Parameter in Declarative Pipeline for Dynamic Parameters

Unlike default parameter types, the Active choice parameter type gives you more control over the parameters using a groovy script. You can have dynamic parameters based on user parameter selection.

To use the active choice parameter, you need to have an Active Choices plugin installed in Jenkins.

Here is a small use case for an active choice parameter.

  1. A job should have three parameters
    • Environment (dev, stage & prod)
    • AMI List (Should list the AMIs based on environment)
    • AMI information (Show information about the AMIs related to a specific environment)
  2. If the user selects dev, the AMI list should dynamically change the values related to dev and show information related to the AMIs.

Here is the image which shows the above use case. It shows how the AMI list and AMI information changes when you select different environments.

declarative active choice parameter demo

There are three types of active choice parameters.

Active Choices Parameter

This parameter type returns a set of parameters returned by the groovy script. For example, an environment parameter that lists dev, stage, and prod values.


You can also return values from third party APIs as parameters.

One such example is dynamically showing folders from a GitHub repo in the Jenkins parameters. To make this work, you just need to write a groovy script that calls the GitHub API and queries the folders of the specific repository.

Active Choices Reactive Parameter

Returns parameters based on conditions based on another referenced parameter. You can refer to an active choice parameter and return a parameter based on a condition. For example, if the environment parameter is selected as a dev, the reactive parameter will return AMI ids for dev based on groovy conditions.

In the following example, Env is the reference parameter.

if (Env.equals("dev")){
    return["ami-sd2345sd", "ami-asdf245sdf", "ami-asdf3245sd"]
}
else if(Env.equals("stage")){
    return["ami-sd34sdf", "ami-sdf345sdc", "ami-sdf34sdf"]
}
else if(Env.equals("prod")){
    return["ami-sdf34", "ami-sdf34ds", "ami-sdf3sf3"]
}
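Stripped of the Groovy plumbing, this reactive script is just a lookup table from environment to AMI IDs, with the fallback handling unknown values. Here is the same idea sketched in Python for clarity; the `amis_for` function name is my own, and the AMI IDs are the placeholder values from the snippet above:

```python
# Placeholder AMI IDs taken from the Groovy snippet above.
AMI_MAP = {
    "dev":   ["ami-sd2345sd", "ami-asdf245sdf", "ami-asdf3245sd"],
    "stage": ["ami-sd34sdf", "ami-sdf345sdc", "ami-sdf34sdf"],
    "prod":  ["ami-sdf34", "ami-sdf34ds", "ami-sdf3sf3"],
}


def amis_for(env):
    """Mirror of the reactive parameter: known env -> AMI list, else a fallback."""
    return AMI_MAP.get(env, ["Could not get Environment from Env Param"])


print(amis_for("dev"))
```

The dictionary lookup plays the role of the if/else chain, and the `.get()` default plays the role of the plugin's fallback script.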

Active Choices Reactive Reference Parameter

The reactive reference parameter is similar to a reactive parameter, except that it is mostly not used in the build environment. Meaning, it is often used to display information dynamically so the user can select the correct values from the other parameter input fields, as shown in the above use case image.

Using Active Choice Parameters With Declarative Pipeline

If you are wondering how to use active choice parameters in a declarative pipeline, here is the Jenkinsfile with all Active Choice parameter types. If you execute this, you will get parameters like the demo I have shown with the use case.

Note: Sometimes, after the execution of the pipeline, the parameters won’t show up correctly. If it happens, open job configuration and save it one time without changing anything. The values will show up.

If you have trouble copying the code, use this Github link

pipeline {
    agent any
    stages {
        stage('Parameters') {
            steps {
                script {
                    properties([
                        parameters([
                            [$class: 'ChoiceParameter',
                                choiceType: 'PT_SINGLE_SELECT',
                                description: 'Select the Environment from the Dropdown List',
                                filterLength: 1,
                                filterable: false,
                                name: 'Env',
                                script: [
                                    $class: 'GroovyScript',
                                    fallbackScript: [
                                        classpath: [],
                                        sandbox: false,
                                        script: "return['Could not get the environments']"
                                    ],
                                    script: [
                                        classpath: [],
                                        sandbox: false,
                                        script: "return['dev','stage','prod']"
                                    ]
                                ]
                            ],
                            [$class: 'CascadeChoiceParameter',
                                choiceType: 'PT_SINGLE_SELECT',
                                description: 'Select the AMI from the Dropdown List',
                                name: 'AMI List',
                                referencedParameters: 'Env',
                                script: [
                                    $class: 'GroovyScript',
                                    fallbackScript: [
                                        classpath: [],
                                        sandbox: false,
                                        script: "return['Could not get Environment from Env Param']"
                                    ],
                                    script: [
                                        classpath: [],
                                        sandbox: false,
                                        script: '''
                                        if (Env.equals("dev")){
                                            return["ami-sd2345sd", "ami-asdf245sdf", "ami-asdf3245sd"]
                                        }
                                        else if (Env.equals("stage")){
                                            return["ami-sd34sdf", "ami-sdf345sdc", "ami-sdf34sdf"]
                                        }
                                        else if (Env.equals("prod")){
                                            return["ami-sdf34sdf", "ami-sdf34ds", "ami-sdf3sf3"]
                                        }
                                        '''
                                    ]
                                ]
                            ],
                            [$class: 'DynamicReferenceParameter',
                                choiceType: 'ET_ORDERED_LIST',
                                description: 'Select the AMI based on the following information',
                                name: 'Image Information',
                                referencedParameters: 'Env',
                                script: [
                                    $class: 'GroovyScript',
                                    fallbackScript: [
                                        classpath: [],
                                        sandbox: false,
                                        script: 'return["Could not get AMI Information"]'
                                    ],
                                    script: [
                                        classpath: [],
                                        sandbox: false,
                                        script: '''
                                        if (Env.equals("dev")){
                                            return["ami-sd2345sd: AMI with Java", "ami-asdf245sdf: AMI with Python", "ami-asdf3245sd: AMI with Groovy"]
                                        }
                                        else if (Env.equals("stage")){
                                            return["ami-sd34sdf: AMI with Java", "ami-sdf345sdc: AMI with Python", "ami-sdf34sdf: AMI with Groovy"]
                                        }
                                        else if (Env.equals("prod")){
                                            return["ami-sdf34sdf: AMI with Java", "ami-sdf34ds: AMI with Python", "ami-sdf3sf3: AMI with Groovy"]
                                        }
                                        '''
                                    ]
                                ]
                            ]
                        ])
                    ])
                }
            }
        }
    }
}

Jenkinsfile Parameter Best Practices

The following are some of the best practices you can follow while using parameters in a Jenkinsfile.

  1. Never pass passwords in the String or Multi-line parameter block. Instead, use the password parameter, or access Jenkins credentials with the credential ID as the parameter.
  2. Use parameters only when required. Alternatively, you can use a config management tool to read configs or parameters at runtime.
  3. Handle wrong parameter input in the stages with proper exception handling. This avoids unwanted step execution when a wrong parameter is provided, which typically happens with multi-line and string parameters.
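
The third practice can be sketched as a dedicated validation stage that fails fast. This is a minimal sketch; the parameter name Env and the allowed values are assumptions for illustration.

```groovy
pipeline {
    agent any
    parameters {
        string(name: 'Env', defaultValue: 'dev', description: 'Target environment')
    }
    stages {
        stage('Validate Parameters') {
            steps {
                script {
                    def allowed = ['dev', 'stage', 'prod']
                    if (!allowed.contains(params.Env)) {
                        // Fail fast before any deployment steps run
                        error "Invalid Env parameter: ${params.Env}"
                    }
                }
            }
        }
    }
}
```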

Jenkinsfile Parameter FAQs

How to dynamically populate the choice parameter in the declarative pipeline?

Dynamic parameters can be achieved by using an active choice parameter. It uses a groovy script to dynamically populate choice parameter values.

How are the parameters used in the declarative pipeline?

In the declarative pipeline, parameters can be declared using the parameters block, or via the properties step inside a script block for plugin-provided types such as Active Choice. It supports all types of Jenkins parameters.
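
As a minimal sketch (the parameter names here are illustrative), the built-in parameter types can be declared directly in a declarative pipeline like this:

```groovy
pipeline {
    agent any
    parameters {
        string(name: 'APP_VERSION', defaultValue: '1.0.0', description: 'Version to deploy')
        choice(name: 'ENVIRONMENT', choices: ['dev', 'stage', 'prod'], description: 'Target environment')
        booleanParam(name: 'RUN_TESTS', defaultValue: true, description: 'Run the test suite')
    }
    stages {
        stage('Print Params') {
            steps {
                // Parameter values are available through the params object
                echo "Deploying ${params.APP_VERSION} to ${params.ENVIRONMENT}"
            }
        }
    }
}
```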

How to generate pipeline code for parameters?

You can use the native Jenkins pipeline syntax generator to generate the code block for any type of pipeline parameters.

Use Parameters in Jenkins Declarative Pipeline

Jenkins Tutorial For Beginners: Step by Step Guides

Jenkins tutorial for beginners

Jenkins is a widely adopted open source continuous integration tool. A lot has changed in Jenkins 2.x compared to the older versions. In this Jenkins tutorial series, we will try to cover all the essential topics for a beginner to get started with Jenkins.

Jenkins is not just a Continuous Integration tool anymore. It is a Continuous Integration and Continuous delivery tool. You can orchestrate any application deployments using Jenkins with a wide range of plugins and native Jenkins workflows.

Jenkins Tutorials For Beginners

In this collection of Jenkins tutorial posts, we will be covering various Jenkins tutorials, which will help beginners to get started with many of the Jenkins core functionalities.

Following is the list of Jenkins beginner tutorials. It is a growing list of Jenkins step by step guides.

Jenkins Administration

  1. Jenkins Architecture Explained
  2. Installing and configuring Jenkins 2.0
  3. Setting up Jenkins on Kubernetes Cluster
  4. Configure SSL on Jenkins Server
  5. Setting up a distributed Jenkins architecture (Master and slaves)
  6. Backing up Jenkins Data and Configurations
  7. Setting up Custom UI for Jenkins
  8. Running Jenkins on port 80

Jenkins Pipeline Development

  1. Jenkins Pipeline as Code Tutorial for Beginners
  2. Beginner Guide to Parameters in Declarative Pipeline
  3. Jenkins Shared Library explained
  4. Creating Jenkins Shared Library
  5. Jenkins Multi-branch Pipeline Detailed Guide for Beginners

Scaling Jenkins

  1. Configuring Docker Containers as Build Slaves
  2. Configuring ECS as Build Slave For Jenkins

CI/CD With Jenkins

  1. Java Continuous Integration with Jenkins
  2. Jenkins PR based builds with Github Pull Request Builder Plugin

Jenkins Core Features

Let's have a look at the overview of key Jenkins 2.x features that you should know.

  1. Pipeline as Code
  2. Shared Libraries
  3. Better UI and UX
  4. Improvements in security and plugins

Pipeline as Code

Jenkins introduced a DSL by which you can version your build, test, and deploy pipelines as code. Pipeline code is written as a Groovy-based script, which is easy to write and manage. An example pipeline code is shown below.

node {
    git url: ''
    def mvnHome = tool 'M2'
    env.PATH = "${mvnHome}/bin:${env.PATH}"
    sh 'mvn -B clean verify'
}

Using pipeline as code, you can run parallel builds for a single job on different slaves. Also, you have good programmatic control over how and what each Jenkins job should do.
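
As a sketch of that idea (the agent labels linux and docker are assumptions), a single declarative job can fan stages out across different slaves in parallel:

```groovy
pipeline {
    agent none
    stages {
        stage('Tests') {
            parallel {
                stage('Unit Tests') {
                    agent { label 'linux' }
                    steps { sh 'mvn -B test' }
                }
                stage('Integration Tests') {
                    agent { label 'docker' }
                    steps { sh 'mvn -B verify' }
                }
            }
        }
    }
}
```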

Jenkinsfile is the best way to implement Pipeline as code. There are two types of pipeline as code.

  1. Scripted Pipeline and
  2. Declarative Pipeline.

Our recommendation is to use only declarative pipeline for all your Jenkins based CI/CD workflows as you will have more control and customization over your pipelines.
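
To illustrate the difference between the two styles, here is the same build step written both ways (a minimal sketch):

```groovy
// Scripted pipeline: plain Groovy inside a node block
node {
    stage('Build') {
        sh 'mvn -B clean verify'
    }
}

// Declarative pipeline: structured pipeline block with a fixed syntax
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'mvn -B clean verify'
            }
        }
    }
}
```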

Jenkins Shared Libraries

Jenkins shared library is a great way to reuse the pipeline code. You can create libraries of your CI/CD code which can be referenced in your pipeline script. The extended shared libraries will allow you to write custom groovy code for more flexibility.
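
As a hedged sketch of how a shared library is consumed (the library name my-shared-lib and the custom step buildApp are hypothetical), a pipeline references a library configured in Jenkins like this:

```groovy
// Load the shared library configured in Jenkins; 'my-shared-lib' is a hypothetical name
@Library('my-shared-lib') _

pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                // buildApp is a hypothetical custom step defined in vars/buildApp.groovy of the library
                buildApp()
            }
        }
    }
}
```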

Jenkins X

Jenkins X is a project from Jenkins for CI/CD on Kubernetes. This project is entirely different from normal Jenkins.

Better UI and UX

Jenkins 2.0 has a better user interface. The pipeline visualization is also great, showing the whole flow at a glance. Now you can configure the user, password, and plugins right from the moment you start the Jenkins instance through a polished setup UI.

Also, Jenkins Blue Ocean is a great plugin that gives a clean view of pipeline jobs. You can even create a pipeline using the Blue Ocean visual pipeline editor. Blue Ocean looks like the following.

Jenkins blue ocean

Install Jenkins on Ubuntu in 10 Easy Steps

Install and Configure Jenkins on Ubuntu

Jenkins 2.x has lots of great functionalities that will make the CI pipeline smooth using the pipeline as code and reusable with shared libraries.

In this guide, we will walk you through installing and configuring Jenkins on an Ubuntu server in 10 easy steps. We have also added the steps to install Jenkins using Docker on an Ubuntu server.

Install and Configure Jenkins on Ubuntu

Follow the steps given below to install and configure Jenkins 2 on an Ubuntu server.

Note: CentOS/Red Hat users, follow this tutorial: Install Jenkins on CentOS/Red Hat

Step 1: Log in to the server and update it.

sudo apt-get -y update

Step 2: Install OpenJDK 11.

sudo apt install openjdk-11-jdk -y

Step 3: Add the Jenkins Debian repo.

wget -q -O - https://pkg.jenkins.io/debian-stable/jenkins.io.key | sudo apt-key add -
sudo sh -c 'echo deb https://pkg.jenkins.io/debian-stable binary/ > /etc/apt/sources.list.d/jenkins.list'

Step 4: Update the packages

sudo apt-get update -y

Step 5: Install the latest LTS Jenkins.

sudo apt-get install jenkins -y

Step 6: Start the Jenkins service and enable it to start at boot.

sudo systemctl start jenkins
sudo systemctl enable jenkins

You can check the status of Jenkins service using the following command.

sudo systemctl status jenkins

Step 7: Now you will be able to access the Jenkins server on port 8080 from localhost or using the IP address as shown below.

jenkins unlock admin password on Ubuntu

Step 8: As you can see in the above image, you need to provide the administrative password. You can get the password using the following command.

 sudo cat /var/lib/jenkins/secrets/initialAdminPassword

Copy the password and click continue.

Step 9: Next, you will be asked to configure plugins as shown below. Select the “Install Suggested Plugins” option. This will install all the required plugins for building your projects. It will take a few minutes to install the plugins.

Jenkins install plugins on ubuntu

Step 10: Once installed, you need to create a user with a password and click “Save and Finish”.

Jenkins 2.0 user configuration

Click “Start Using Jenkins” and it will take you to the Jenkins dashboard. Log in using the username and password that you have given in step 10.

That’s it! Now you have a fully functional Jenkins server up and running. Consider setting up Jenkins backup using the backup plugin.

Here are some key configurations and file locations in Jenkins that you should know.

Note: For production setup, the recommended approach is to mount the Jenkins data folder to an additional data disk. This way you don’t lose any Jenkins data if the server crashes.

Jenkins data location: /var/lib/jenkins
Jenkins main configuration file: /var/lib/jenkins/config.xml
Jobs folder: /var/lib/jenkins/jobs

Next would be the configuration of a distributed master-slave setup, wherein you have an active master and slave agents for building the projects.

Check the Jenkins SSL setup guide if you want to set up SSL for your Jenkins instance.

Setting up Jenkins Using Docker on Ubuntu

If you are a docker user, you can run Jenkins on a docker container.

Refer to the docker installation document to install the latest edition of docker.

Execute the following command to deploy Jenkins on Docker

docker run -p 8080:8080 -p 50000:50000 --name jenkins jenkinsci/jenkins:latest

The above command won’t persist any changes if the container crashes. So it is better to mount a host volume to the container to hold all the Jenkins configurations.

Here is the command to deploy Jenkins container with the host volume mount.

docker run -p 8080:8080 -p 50000:50000 -v /home/ubuntu:/var/jenkins_home jenkinsci/jenkins:latest
