Open Source Technology Trends in 2017


Open source technologies and solutions are now an integral part of just about every enterprise. In fact, an average commercial software project now contains more than 100 open source components, which often make up half of its codebase.

Open Source Technology Trends

Many factors contribute to the widespread adoption of open source technologies, including the open nature of the code, its free availability, the ability to draw on the power of the community, faster time-to-market, and more.


Here is a look at how open source technology trends are shaping up in 2017.

The Battle against Vulnerabilities Reaches the Enterprise

Open-source projects are not free of vulnerabilities. Two out of every three commonly used commercial applications are known to contain some form of vulnerability. The extent of damage increases significantly with new and emerging technologies such as artificial intelligence (AI) and the Internet of Things (IoT). Such vulnerabilities leave systems running on open source susceptible to attacks, and create a trust deficit. Cyber attackers have indeed started to exploit such vulnerabilities in a big way, with attacks on open source technologies increasing by 20% in 2017.

Open source stakeholders are, however, now rising to the challenge, taking up the gauntlet to identify vulnerabilities in open-source code on a regular basis. Unlike in the past, they now take the lead in developing patches proactively, improving the security of the codebase. Enterprises are more willing than ever before to contribute to such community efforts.

On another front, enterprises have started embracing open source web application penetration testing tools such as Grabber, Vega, and Zed Attack Proxy in a big way, to unearth common vulnerabilities such as cross-site scripting, SQL injection, and more. Such increased adoption has resulted in the further development and maturity of these tools.
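As an illustration of how lightweight such tooling has become, OWASP's Zed Attack Proxy ships an official Docker image whose baseline script passively scans a site for common issues. This is a sketch, not an endorsement of specific flags: the target URL and report name are placeholders, and image tags may have changed since.

```shell
# Run ZAP's passive baseline scan against a staging site and keep an HTML report.
# Mount the working directory so the report is written back to the host.
docker run --rm -v "$(pwd):/zap/wrk/:rw" -t owasp/zap2docker-stable \
    zap-baseline.py -t https://staging.example.com -r zap-report.html
```

The baseline scan spiders the target and reports issues such as missing security headers without performing active attacks, which makes it safe to run regularly in CI.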

Analytics Get Bigger and Better

Big Data is already a rage. However, in the scramble to get data, several enterprises neglected the purpose behind collecting it. That realization has now set in, and the trend is toward collecting actionable data, on which business intelligence rests.

There is a corresponding rise in open source analytic solutions, especially cloud-based solutions. These analytic engines have started to draw on machine learning and artificial intelligence in a big way. Enterprises using these solutions gain a competitive advantage in getting first-hand actionable insights, on which they may make informed decisions.

A big application of analytics has been in the IoT space, where data from sensors is being put to good use to improve accuracy and make systems more efficient. While big businesses, such as the oil and gas industry, already reap the benefits, other sectors are joining the bandwagon slowly but surely. For instance, using sensors to monitor patients remotely and to track medication orders is now the norm across healthcare providers.

While Hadoop retains its dominance as the tool of choice to create Big Data solutions, upstarts such as Apache Spark, Google's Kubernetes, and Apache Mesos are fast becoming popular and viable alternatives. These newer tools fit perfectly into a containerized environment.


The World of Containers is in the Middle of a Big Churn

Containers, which allow applications to be packaged and ported, started off as a fad but are now a key component of an enterprise's open source architecture. Containers running microservices let businesses leverage highly portable assets with a high degree of scalability and stability, at affordable costs.

The popularity of containers is in large part owing to Docker. Docker containers offer an innovative approach to virtualizing the OS, splitting it into virtualized compartments so that chunks of code can be placed and run on any system running Linux. A significant open-source development in 2016 was the tension between Docker and the wider open-source community, with the community accusing Docker of not being sufficiently open and compatible with other open source platforms. Come 2017, Docker has made significant changes and is embracing open source in a bigger way. Docker's release of one of its core container components, containerd, as an independent open source project drives home this point.

However, Docker is also facing stiff challenges in 2017, especially from upstarts providing container-as-a-service solutions. Kubernetes is on its way to becoming the de facto industry standard for container orchestration in 2017. Enterprises hitherto faced difficulty setting up and using this otherwise powerful tool, but the increasing adoption of container-based PaaS systems, such as Red Hat's OpenShift and CoreOS Tectonic, is now helping enterprises embrace Kubernetes.

Enterprise DevOps Adoption Intensifies

Until not too long ago, DevOps was considered a drag. Of late, however, the DevOps approach and its accompanying tools have become mainstream.

The increased acceptance of DevOps is not just down to a realization of the many advantages on offer. IT companies had until now been struggling to pin down an exact definition of DevOps. In 2017, the concept has finally stabilized, and it can now be identified as a coherent set of principles and practices.

Increased DevOps adoption has a spin-off benefit in fueling the automation of software delivery and infrastructure changes. Developers can now spend more time coding and less time setting up infrastructure.

OpenStack Grows Even More Popular

OpenStack, the cloud operating system which enables effective control over large swathes of computing, storage, and networking resources throughout the data center, through an easy dashboard, is now a favorite of open source developers. Its maturity, ease of use, ability to integrate well into a heterogeneous infrastructure and its inherent cost-saving capabilities enable the tool to sustain its popularity, even as new tools emerge.

OpenStack, by taking the resource layer out of the VDI stack and separating components technically, infuses flexibility, makes IT resources future-proof, and allows enterprises to mix and match hosting environments as required. A developer using OpenStack, for instance, could run some RDS sessions in Azure, place a few virtual desktops in AWS, conjure up a private cloud in OpenStack, and wrap it all together with the vSphere servers that already reside in the enterprise data center.

In 2017, more and more enterprises are adopting OpenStack as their open-source software of choice, to power private and public clouds, and empower admins.

The Biggies Hold their Ground in the Face of New Onslaughts

Linux still holds its ground as the leading and most widely adopted open-source project. Companies leveraging Linux to roll out commercial applications have proliferated, but the big names, including Red Hat, Ubuntu, and SUSE, hold their ground. Git remains popular, helped by the hugely successful GitHub and GitLab hosting services, with most developers using it for its easy version control and change-tracking capabilities. In the database space, MySQL retains its dominance as the most popular open source tool. Innovations such as "NoSQL" database technologies, which are non-relational databases, have also become popular for parsing unstructured data. Popular NoSQL databases include MongoDB and Redis.

Another red-hot area in open source is code that integrates continuously and seamlessly with other platforms. Tools furthering such ends, such as Jenkins, Maven, and Artifactory, are the ones to watch in the near future.

While the world of open source is always in a state of continuous churn, open source itself is in for the long haul. In fact, CIOs of major global companies now rely on open-source technologies rather than proprietary code to power their infrastructure. Successful companies tie up with innovative and results-oriented developers, who keep abreast of the latest and emerging technologies and deliver cutting-edge solutions to further the cause of the enterprise.



Sitting On the Fence About DevOps Integration? Don’t Miss Your Chance to Jump on the Bandwagon


DevOps practices are becoming a staple of companies worldwide, including industry giants such as Netflix, Etsy, and Google. However, even though these companies are a shining beacon of the power and effectiveness of DevOps practices, the scale of their operations can be intimidating. People always say that these are "special" companies and that what works for them will not work in any other business. There are certain phrases DevOps consultants hate to hear: "This can't work here" and "We've always done it this way." In today's IT landscape, adaptation is crucial, and companies that fail to adopt DevOps practices risk extinction in the near future.


If we think about modern-day industry giants such as Google, Amazon, Twitter, LinkedIn and many others, they did not achieve such drastic growth overnight. They began as regular run-of-the-mill companies just like everybody else, with monolithic code bases and unreliable deploys. Over time, they were able to grow their operations and become industry leaders. However, they still face the same problems as everyone else and even though the scale of their solutions is much greater, what all companies can learn from their case studies is that they are constantly working to improve what they do.

Getting Started with DevOps

The most common question from business leaders is "How do we get started with DevOps?" In essence, you can go in one of two directions: you can try to cut deployment time in half, or you can say, "Deployment takes too much time; we will do half as many." The latter is the wrong direction.

Setting a goal such as cutting deployment time in half will get your organization streamlining and eliminating unnecessary processes. It will also prompt teams to extract those processes from the minds of a few people and put them into something like a batch script or a simple document, so the processes can easily be repeated across the team and tasks can be accomplished simultaneously.
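Even a trivial script can replace a tribal-knowledge checklist. The sketch below is purely illustrative; the application name, steps, and target directory are hypothetical placeholders:

```shell
#!/bin/sh
# deploy.sh -- a minimal sketch of pulling a manual release checklist out of
# one engineer's head and into version control. APP and TARGET_DIR are
# hypothetical placeholders; replace the echo lines with real commands.
set -eu

APP="myapp"
TARGET_DIR="/opt/${APP}/releases"

deploy() {
  echo "1/3 running tests for ${APP}"
  echo "2/3 packaging ${APP}.tar.gz"
  echo "3/3 unpacking into ${TARGET_DIR}"
}

deploy
```

Once the steps live in a script, anyone on the team can run the release, and the script itself becomes the starting point for full automation later.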

Also, keep in mind that you don't have to make a hard-left, enterprise-wide shift to DevOps. You can start with small projects or minor automation that will free up a lot of time. Begin on an incremental, project-by-project or tool-by-tool basis. Regardless of the way you begin using DevOps, be sure to have a management environment that promotes taking chances and learning from mistakes.

Choosing a Cloud Computing Platform

Most industry giants and small businesses alike have decided to use Amazon Web Services (AWS) to implement their DevOps operations, but you need to choose the one that works best for you. Some alternatives are Google Cloud Platform, Microsoft Azure, and IBM Bluemix, among others. Conduct some research on the various platforms out there and choose the one that is easiest to work with and best suits your overall needs.

Savings on Opportunity Costs

When we talk about moving to the cloud, we talk about saving time and money, but we rarely mention saving on opportunity costs. These costs are difficult to calculate, but spending a lot of time and intellectual capital on the heavy lifting of IT infrastructure provisioning at scale usually means sacrificing the quality of your product to some extent.

DevOps with AWS as the underlying platform can be the spark you need to deploy nonstop innovation and reduce opportunity costs for your business. Imagine reaching new clients, or going further with your current ones, with full confidence that you are constantly rolling out innovative products. DevOps with AWS allows your business to continuously ship frequent, incremental features and services, and to do so securely, as opposed to waiting weeks, months, or sometimes even years as in a traditional IT software model.

Speed Up Your Security and Compliance

With the help of DevOps, you can reduce the time and costs associated with managing and securing your environment, providing information on your environment's performance, or monitoring network traffic. You can use AWS CodeDeploy to patch your instances, and enjoy complete governance over your cloud infrastructure by knowing the state of your environment with AWS Config.
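As a sketch of what this looks like from the command line: the application name, deployment group, and bucket below are hypothetical, but the two AWS CLI commands are standard CodeDeploy and AWS Config operations.

```shell
# Push a new revision to a fleet of instances via CodeDeploy
# (MyApp, MyFleet, and my-artifacts are placeholder names)
aws deploy create-deployment \
    --application-name MyApp \
    --deployment-group-name MyFleet \
    --s3-location bucket=my-artifacts,key=myapp.zip,bundleType=zip

# Summarize compliance across your AWS Config rules
aws configservice get-compliance-summary-by-config-rule
```

Both commands assume the AWS CLI is configured with credentials that have the relevant CodeDeploy and Config permissions.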

With all the automation going on in the cloud, why not automate your documentation as well? Imagine being audited every day, yet still having the ability to provide the necessary documentation as quickly as doing a web search. This is possible with AWS Lambda. Not only can you use AWS Lambda to automate your document workflow, but you can also build things like compliance wikis and a continuously updated dashboard.

Development and Testing Time Savings

Testing new ideas can be even faster with AWS CloudFormation, which spins up parallel environments in the cloud programmatically. You can say "goodbye" to the "Well, it works on my computer" excuse for good, because this tool gives your testers and developers a thorough, production-like environment in which to genuinely test the scalability and performance of their app.

You can also conduct scale-out testing without impacting production, while still being able to test for compatibility. Thanks to the cloud, dev/test environments can be short-lived and dismantled or rebuilt to start all over again, day after day.
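A minimal sketch of such a throwaway environment as a CloudFormation template; the AMI ID is a placeholder you would substitute with one from your region:

```yaml
# test-env.yaml -- a short-lived dev/test instance, created and deleted on demand
AWSTemplateFormatVersion: '2010-09-09'
Description: Disposable dev/test environment
Resources:
  TestInstance:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: t2.micro
      ImageId: ami-xxxxxxxx   # placeholder AMI ID -- use one from your region
```

Create the stack before a test run with `aws cloudformation create-stack` and tear it down afterwards with `delete-stack`, so the environment exists only while tests run.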

In order to keep up with customer demands and the ever-evolving IT environment, implementing DevOps is a matter of life and death for your business. In the current IT business, there is no such thing as standing still. You are either getting an advantage, or you are falling behind. Without DevOps, your business will fall so far behind that it will risk going the way of the dinosaurs.

There are many more ways DevOps can reenergize your company. However, one thing is certain: implementing DevOps practices is a proven method to save your company both time and money while focusing on innovation. This ensures the overall success of your business in the long run.

Setup Jenkins master and Build Slaves as Docker Container


Do you want a dockerized Jenkins that also includes the configuration for build slaves as Docker containers, so that you can run the image with a single docker command and, bang, everything is ready to run your Jenkins jobs on docker slaves? Let's make this happen.

Install Jenkins as docker container

Go to the Jenkins website and find the "Docker" section. Before executing the command

docker run -d -p 49001:8080 -v $PWD/jenkins:/var/jenkins_home -t jenkins/jenkins

we need to make "jenkins" the owner of the Jenkins home directory on the host, using its UID:

sudo chown -R 1000:1000 /opt/jenkins_home

Otherwise the Jenkins container will not start; it exits right after being created, because the container runs as the "jenkins" user by default.

Now you can access Jenkins from your browser, for example at http://[your Jenkins ip]:49001/, and create jobs and build your projects from your Jenkins master container.
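Before opening the browser, you can sanity-check the container from the shell. This is a sketch; adjust the port if you changed the mapping above:

```shell
# Confirm the Jenkins container is running
docker ps --filter "ancestor=jenkins/jenkins"

# Ask the web UI for its HTTP status code; Jenkins is answering once this
# returns a 200 (or a 403 while the setup wizard is still pending)
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:49001/login
```

If the first command shows no container, check `docker logs` for the permission error described above.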

Setup Build Slave as docker container

Although you can add a VM as a build slave in Jenkins, it is more flexible and convenient to make the build slave a docker container: you don't need to maintain each slave VM. Instead, you just give Jenkins a slave host's IP and a slave docker image template, and Jenkins creates slaves as docker containers on that host. Jenkins makes this happen through a plugin called DockerPlugin.

Let’s see how to install and configure this plugin.

Install the plugin

Navigate to the Manage Jenkins > Manage Plugins page in the web UI. Find DockerPlugin and install it.

Plugin Configuration

Create your slave image.

Step 1: Follow the steps on the official site, in the section "Creating a docker image". The commands include:

docker pull ubuntu
docker run -i -t ubuntu /bin/bash
apt-get update
apt-get install openssh-server
mkdir /var/run/sshd
apt-get install openjdk-6-jdk
adduser jenkins

However, these steps are insufficient if your build slave itself needs to build your project's docker image, create docker containers, and build your project, because after these steps you cannot run docker commands inside the ubuntu container. The aim here is not docker-in-docker (a docker container running inside another docker container), which is another, more complicated case. We just need to install the docker binary in the container and mount the docker socket (via the volume /var/run/docker.sock) so that docker commands can be executed in the container. So add the steps below:

  • To make the docker daemon reachable from inside the docker container, install the docker binary inside the ubuntu container (the socket-mount step is left to DockerPlugin, which creates the slave container automatically when running a job):
curl -fsSL | sh
  • Add the jenkins user to the docker group inside the ubuntu container, so that DockerPlugin can execute docker commands as the "jenkins" user: usermod -aG docker jenkins.
  • Commit the ubuntu container as a docker image, for example with the image name jenkins-slave:
docker commit [container ID] jenkins-slave
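The manual steps above can also be captured in a Dockerfile, so the slave image is reproducible instead of hand-committed. This is a sketch, not the plugin's documented procedure: the base image tag, JDK version, and install URL are assumptions to adapt.

```dockerfile
# Sketch of a Jenkins SSH build-slave image (assumed versions; adjust as needed)
FROM ubuntu:16.04

# SSH daemon and a JDK, as in the manual steps
RUN apt-get update && \
    apt-get install -y openssh-server openjdk-8-jdk curl && \
    mkdir /var/run/sshd

# Docker CLI, so jobs can talk to the host daemon via the mounted socket
RUN curl -fsSL https://get.docker.com | sh

# The jenkins user, added to the docker group to use the socket
RUN adduser --disabled-password --gecos "" jenkins && \
    usermod -aG docker jenkins

EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
```

Build it with `docker build -t jenkins-slave .`; to verify the socket-mount approach, `docker run --rm -v /var/run/docker.sock:/var/run/docker.sock jenkins-slave docker ps` should list the host's containers.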

Step 2: On the docker host, expose docker's TCP port so DockerPlugin can reach the docker host and create build slave containers. Edit /etc/default/docker and modify the value of DOCKER_OPTS as follows (note that 4243 is the default TCP port for docker):

DOCKER_OPTS='-H tcp://0.0.0.0:4243 -H unix:///var/run/docker.sock'

Step 3: Now we can return to the Jenkins web UI and configure Jenkins under
Manage Jenkins > Configure system > Cloud > docker:

1) Input the docker URL: tcp://[your host ip]:4243

If you click "Test Connection", you will see the docker API version shown in the UI.


2) In the "Docker Template" section, click "Container settings" and input the Volumes.

The volumes are volume mappings that DockerPlugin will use to create slave containers. The first mapping mounts the docker socket into the container, so that docker commands run in the container are handled by the docker daemon on the host; the second lets Jenkins find its workspace to execute the job.


3) By default, DockerPlugin destroys the build slave container after each build. To keep your slave container for debugging or deployment purposes, go to "Container settings", and under the "Availability" label click "Experimental options", then choose "Experimental: Keep this agent online as much as possible".


Now the docker slave configuration is complete. The whole configuration can be saved by committing the Jenkins image. To verify, create a job and run it. If you want a pipeline job that builds from a repository, configure it accordingly (note that your repository should contain a Jenkinsfile).


When the job is running, you can see the created slave containers under "Build Executor Status"; docker-xxx is a slave container's name. After the job finishes building, the container status becomes "Idle"; if you destroy the container on the slave host manually, the status becomes "offline".



How To Setup Jenkins On Kubernetes Cluster – Beginners Guide


This guide explains the step-by-step process for setting up Jenkins on a Kubernetes cluster.

Setup Jenkins On Kubernetes Cluster

For setting up Jenkins on a Kubernetes cluster, we will do the following.

  1. Create a Namespace
  2. Create a deployment yaml and deploy it.
  3. Create a service yaml and deploy it.
  4. Access the Jenkins application on a Node Port.

Note: This tutorial doesn't use a persistent volume, as this is a generic guide. To use a persistent volume for your Jenkins data, you need to create volumes on the relevant cloud or on-prem data center and configure them.

Create a Jenkins Deployment

1. Create a Namespace for Jenkins, so that we have isolation for the CI/CD environment.

kubectl create ns jenkins

2. Create a Deployment file named jenkins-deployment.yaml using the latest Jenkins Docker image.

Note: The following deployment file doesn't add any persistent volume for Jenkins. For production use cases, you should add a persistent volume for your Jenkins data. A sample implementation of a persistent volume for Jenkins on Google Kubernetes Engine can be found here.

apiVersion: extensions/v1beta1 # for versions before 1.7.0 use apps/v1beta1
kind: Deployment
metadata:
  name: jenkins-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      containers:
      - name: jenkins
        image: jenkins:2.60.3
        ports:
        - containerPort: 8080

3. Create the jenkins deployment in the jenkins namespace using the following command.

kubectl create -f jenkins-deployment.yaml --namespace=jenkins

4. Now, you can get the deployment details using the following command.

kubectl describe deployments --namespace=jenkins

Also, you can get the details from the Kubernetes dashboard.


Create a Jenkins Service

We have created a deployment; however, it is not accessible to the outside world. To access the Jenkins container from the outside world, we should create a service and map it to the deployment.

1. Create a jenkins-service.yaml file with the following contents.

apiVersion: v1
kind: Service
metadata:
  name: jenkins
spec:
  type: NodePort
  ports:
    - port: 8080
      targetPort: 8080
      nodePort: 30000
  selector:
    app: jenkins

Note: Here we use the type NodePort, which exposes Jenkins on all Kubernetes node IPs. Also, we have set the nodePort to 30000, so you can access the application on port 30000. If you are on Google Cloud or AWS, you can use the type LoadBalancer instead, which will create a load balancer that points to the jenkins deployment.
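For example, on a cloud provider the service definition differs only in its type; a sketch of the relevant excerpt:

```yaml
# Excerpt of jenkins-service.yaml for Google Cloud or AWS: swap NodePort
# for LoadBalancer and drop the nodePort field.
spec:
  type: LoadBalancer   # the cloud provider provisions an external load balancer
  ports:
    - port: 8080
      targetPort: 8080
  selector:
    app: jenkins
```

After creating the service, `kubectl get service jenkins --namespace=jenkins` shows the external IP once the load balancer is provisioned.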


2. Create the jenkins service using the following command.

kubectl create -f jenkins-service.yaml --namespace=jenkins

Now, if you browse to any one of the node IPs on port 30000, you will be able to access the Jenkins dashboard.


3. Jenkins will ask for the initial admin password. You can get it from the pod logs, either from the Kubernetes dashboard or the CLI. You can get the pod details using the following CLI command.

kubectl get pods --namespace=jenkins

With the pod name, you can get the logs as shown below. Replace the pod name with your pod name.

kubectl logs jenkins-deployment-2539456353-j00w5 --namespace=jenkins

The password can be found at the end of the log.
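If the logs have rotated, the same password can also be read directly from the pod's filesystem. This assumes the default Jenkins home path inside the official image; replace the pod name with yours:

```shell
# Print the initial admin password from the running Jenkins pod
kubectl exec jenkins-deployment-2539456353-j00w5 --namespace=jenkins \
    -- cat /var/jenkins_home/secrets/initialAdminPassword
```

Paste the printed value into the setup wizard to unlock Jenkins.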


