Git Basics Every Developer and Administrator Should Know

Version control systems let you version your code and scripts collaboratively, with the advantage of tracking every change: features added, files deleted, and so on.

Version control systems are not limited to developers. In this DevOps era, developers and the operations team should have good knowledge about version control systems like Git.

This post will cover the basic Git commands and workflows that you can use with your projects.

Git Basics for Developers/Administrators

You can install Git on your workstation from the official downloads page – Git Download (Windows, macOS, Linux).

Creating a repository

Create a project directory and cd into it. Execute the following git command from the directory to create a git repository.

Note: If you have an existing project, you can cd into the root of the source code directory and use the following command.

git init
ubuntu@host:~$ mkdir demorepo && cd demorepo
ubuntu@host:~/demorepo$ git init
Initialized empty Git repository in /home/ubuntu/demorepo/.git/
ubuntu@host:~/demorepo$

Checking out a repository

You can create a copy of your git repository using the clone command. Execute the following command to clone your project directory.

git clone /path/to/project-repository
ubuntu@host:~$ mkdir repo-copy && cd repo-copy
ubuntu@host:~/repo-copy$ git clone /home/ubuntu/demorepo
Cloning into 'demorepo'...
done.
ubuntu@host:~/repo-copy$ ls
demorepo
ubuntu@host:~/repo-copy$

Git Workflow

Every Git repository has three trees: a working directory, an index, and HEAD.

Working directory: contains the actual project files.

Index: the staging area where you add the project files that need to be committed.

HEAD: a reference to your last commit.
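A quick way to see the three trees in action is to follow a single file from the working directory into the index and then into HEAD. The sketch below uses a throwaway repository; the file name and commit message are illustrative.

```shell
# create a throwaway repository to experiment in
cd "$(mktemp -d)"
git init -q
git config user.email "demo@example.com"
git config user.name "demo"

echo "hello" > notes.txt   # exists only in the working directory
git status --short         # "?? notes.txt" -- untracked

git add notes.txt          # staged in the index
git status --short         # "A  notes.txt" -- staged for commit

git commit -q -m "add notes"   # recorded in HEAD
git status --short             # no output -- all three trees agree
```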

Adding file to the staging area: (add)

When you create new files in your project directory, you need to add them to the staging area before committing. Execute the following command to add a file to the staging area.

git add <filename>

Let’s say you want to add all the files in your project directory to the staging area. Execute the following command to do so.

git add --all

Committing new changes to the repository (commit):

Once you have added all the files to the staging area, you can commit the changes with a reference message using the “-m” flag as shown below.

git commit -m "my first commit"

Once committed, a snapshot of your project files is recorded in your local repository. The next step is to push your code to a remote, centralized repository. One of the most popular options is GitHub, which offers both public and private repositories.

You can sign up for a GitHub account here.

Once signed up, create a repository with your project name using the + option in the top navigation bar. Create the repository without initializing it with any files.

Once you create a repository, you will be redirected to a page with the commands you need to execute on your local repository. There are two sections of commands as shown below.

If you have already committed your code to your local repository, skip the first section and execute the commands in the second section.

But if you are starting a new project, execute all the commands in your project folder as mentioned in the first section.

There are two important commands you need to execute to push your code to a remote Github repository.

git remote add origin https://github.com/<user-name>/<repo-name>.git
git push -u origin master

The first command adds the remote repository URL to your local Git repository. The second command pushes the local repository code to the remote repository added in the first command. You will be asked to authenticate with GitHub after executing the second command (newer GitHub setups require a personal access token instead of a password). Once authenticated, your local repository code will be pushed to the remote GitHub repository.


Let’s say you want to work on a new feature for your project without affecting the initial version. For this, you can make use of Git’s branching feature.

A Git branch starts out as a copy of your current code. Once you have added the feature and tested the code, you can merge the branch with the new functionality back into the master branch (the latest version).

You can create a branch using the following git command.

Syntax: git checkout -b <branch-name>
Example: git checkout -b signup-function

Once you have checked out a new branch, all the commits you make will go to the branch you created locally. To push your branch to the upstream GitHub repository, execute the following command.

Syntax: git push origin <branch-name>
Example: git push origin signup-function

If you want to make changes to your master branch, execute the following command.

git checkout master

Once you have tested the new feature in your new branch, the next step is to merge it with the master branch. You can do that using the following commands.

git checkout master
git merge <branch-name>

Note: When merging, your current branch should be master. The "git status" command will show your current working branch.
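Putting the branch commands together, the whole cycle looks like this in a local repository (the branch name and file are illustrative):

```shell
git checkout -b signup-function    # create and switch to the feature branch
echo "signup logic" > signup.sh
git add signup.sh
git commit -m "add signup function"

git checkout master                # switch back to master
git merge signup-function          # bring the feature commits into master
git branch -d signup-function      # optionally delete the merged branch
```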

Replacing the local repository with the remote repository:

If you decide you don't want your local commits and changes, you can roll back to the remote repository contents using the git fetch and git reset commands. The syntax is as follows.

git fetch origin
git reset --hard origin/master
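To see what the reset does, here is a local sketch using a directory-based clone; the clone path, directory, and file names are illustrative:

```shell
git clone /path/to/project-repository work && cd work   # "origin" points at the source repo
echo "experiment" > scratch.txt
git add scratch.txt
git commit -m "local experiment"      # a commit that exists only locally

git fetch origin                      # refresh the origin/* tracking branches
git reset --hard origin/master        # drop the local commit and its files
git log --oneline                     # the "local experiment" commit is gone
```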

Other resources:

  1. Git GUI for Windows – TortoiseGit
  2. Git GUI client – Sourcetree
  3. Atlassian Git Tutorials


We have covered the basics to get started with Git. Let us know your thoughts and tips in the comments section.


Setting Up Alert Manager on Kubernetes – Beginners Guide

AlertManager is an open-source alerting system that works with the Prometheus monitoring system. In our last article, we explained the Prometheus setup on Kubernetes.

In this guide, we will cover the Alert Manager setup and its integration with Prometheus.

Note: In this guide, all the Alert Manager Kubernetes objects will be created inside a namespace called monitoring. If you use a different namespace, replace it in the YAML files.

Alert Manager on Kubernetes

Alert Manager setup has the following key configurations.

  1. A config map for the Alert Manager configuration
  2. A config map for the Alert Manager alert templates
  3. The Alert Manager deployment
  4. An Alert Manager service to access the web UI

Key Things To Note

  1. You should have a working Prometheus setup up and running. Follow this tutorial for the Prometheus setup ==> Prometheus Setup On Kubernetes
  2. Prometheus should have the correct Alert Manager service endpoint in its alerting configuration as shown below. Only then will Prometheus be able to send alerts to Alert Manager.

          alerting:
            alertmanagers:
              - scheme: http
                static_configs:
                  - targets:
                    - "alertmanager.monitoring.svc:9093"

  3. All the alerting rules have to be present in the Prometheus config based on your needs. They should be created as part of the Prometheus config map in a file named prometheus.rules and referenced from the config in the following way.

          rule_files:
            - /etc/prometheus/prometheus.rules

  4. Alerts can be written based on the metrics you receive in Prometheus.
  5. To receive email alerts, you need a valid SMTP host in the Alert Manager config.yml (smarthost parameter). You can customize the email template as per your needs in the alert template config map. We have given a generic template in this guide.

Let’s get started with the setup.

Config Map for Alert Manager Configuration

Alert Manager reads its configuration from a config.yml file. It contains the alert template path, email settings, and other alert-receiver configuration. In this setup, we are using email and Slack receivers. You can have a look at all the supported alert receivers here.

Create a file named AlertManagerConfigmap.yaml and copy the following contents.

kind: ConfigMap
apiVersion: v1
metadata:
  name: alertmanager-config
  namespace: monitoring
data:
  config.yml: |-
    templates:
    - '/etc/alertmanager/*.tmpl'
    route:
      receiver: alert-emailer
      group_by: ['alertname', 'priority']
      group_wait: 10s
      repeat_interval: 30m
      routes:
        - receiver: slack_demo
          # Send severity=slack alerts to slack.
          match:
            severity: slack
          group_wait: 10s
          repeat_interval: 1m
    receivers:
    - name: alert-emailer
      email_configs:
      - to: alerts@example.com          # replace with your recipient address
        send_resolved: false
        from: alertmanager@example.com  # replace with your sender address
        smarthost: smtp.example.com:25  # replace with your SMTP host
        require_tls: false
    - name: slack_demo
      slack_configs:
      - api_url: https://hooks.slack.com/services/<your-webhook-id>
        channel: '#devopscube-demo'

Let’s create the config map using kubectl.

kubectl create -f AlertManagerConfigmap.yaml

Config Map for Alert Template

We need alert templates for all the receivers we use (email, Slack, etc.). Alert Manager will dynamically substitute the values and deliver alerts to the receivers based on the template. You can customize these templates based on your needs.

Create a file named AlertTemplateConfigMap.yaml and copy the contents from this file link ==> Alert Manager Template YAML

Create the configmap using kubectl.

kubectl create -f AlertTemplateConfigMap.yaml

Create a Deployment

In this deployment, we will mount the two config maps we created.

Create a file called Deployment.yaml with the following contents.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: alertmanager
  namespace: monitoring
spec:
  replicas: 1
  selector:
    matchLabels:
      app: alertmanager
  template:
    metadata:
      name: alertmanager
      labels:
        app: alertmanager
    spec:
      containers:
      - name: alertmanager
        image: prom/alertmanager:v0.19.0
        args:
          - "--config.file=/etc/alertmanager/config.yml"
          - "--storage.path=/alertmanager"
        ports:
        - name: alertmanager
          containerPort: 9093
        volumeMounts:
        - name: config-volume
          mountPath: /etc/alertmanager
        - name: templates-volume
          mountPath: /etc/alertmanager-templates
        - name: alertmanager
          mountPath: /alertmanager
      volumes:
      - name: config-volume
        configMap:
          name: alertmanager-config
      - name: templates-volume
        configMap:
          name: alertmanager-templates
      - name: alertmanager
        emptyDir: {}

Create the deployment using kubectl.

kubectl create -f Deployment.yaml

Create a Service

We need to expose the alert manager using NodePort or Load Balancer just to access the Web UI. Prometheus will talk to alert manager using the internal service endpoint.

Create a Service.yaml file with the following contents.

apiVersion: v1
kind: Service
metadata:
  name: alertmanager
  namespace: monitoring
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/path: /
    prometheus.io/port: '8080'
spec:
  selector:
    app: alertmanager
  type: NodePort
  ports:
    - port: 9093
      targetPort: 9093
      nodePort: 31000

Create the service using kubectl.

kubectl create -f Service.yaml

Now you will be able to access Alert Manager on node port 31000, for example, http://<node-ip>:31000.

How to Setup Prometheus Monitoring On Kubernetes Cluster

Prometheus is an open-source monitoring framework. Explaining Prometheus in depth is out of the scope of this article. In this article, I will guide you through setting up Prometheus on a Kubernetes cluster and collecting node, pod, and service metrics automatically using Kubernetes service discovery configurations. If you want to know more about Prometheus, you can watch all the Prometheus-related videos from here.

If you would like to install Prometheus on a Linux VM, please see the Prometheus on Linux guide.

Prometheus Monitoring on Kubernetes

I assume that you have a Kubernetes cluster up and running with kubectl set up on your workstation. If you don't have a Kubernetes setup, you can set up a cluster on Google Cloud by following this article.

The latest Prometheus is available as a Docker image in its official Docker Hub account. We will use that image for the setup.

Connect to the Cluster

Connect to your Kubernetes cluster and set up the proxy for accessing the Kubernetes dashboard.

Note: If you are using GKE, you need to run the following commands as you need privileges to create cluster roles.

ACCOUNT=$(gcloud info --format='value(config.account)')
kubectl create clusterrolebinding owner-cluster-admin-binding \
    --clusterrole cluster-admin \
    --user $ACCOUNT

Let’s get started with the setup.

Note: All the configuration files I mention in this guide are hosted on Github. You can clone the repo using the following command. Thanks to James for contributing to this repo. Please don't hesitate to contribute to the repo for adding features. You can use the config files from the Github repo or create the files on the go as mentioned in the steps.

git clone <repo-url>

Create a Namespace

First, we will create a Kubernetes namespace for all our monitoring components. Execute the following command to create a new namespace called monitoring.

kubectl create namespace monitoring

You need to assign cluster reader permissions to this namespace so that Prometheus can fetch the metrics from the Kubernetes APIs.

Step 1: Create a file named clusterRole.yaml and copy the content of this file –> ClusterRole Config

Step 2: Create the role using the following command.

kubectl create -f clusterRole.yaml

Create a Config Map

We should create a config map with all the Prometheus scrape config and alerting rules, which will be mounted to the Prometheus container in /etc/prometheus as prometheus.yaml and prometheus.rules files.

Step 1: Create a file called config-map.yaml and copy the contents of this file –> Prometheus Config File

Step 2: Execute the following command to create the config map in Kubernetes.

kubectl create -f config-map.yaml

The prometheus.yaml contains all the configuration to dynamically discover pods and services running in the Kubernetes cluster. We have the following scrape jobs in our Prometheus scrape configuration.

  1. kubernetes-apiservers: Gets all the metrics from the API servers.
  2. kubernetes-nodes: All Kubernetes node metrics are collected with this job.
  3. kubernetes-pods: All pod metrics are discovered if the pod metadata is annotated with the prometheus.io/scrape and prometheus.io/port annotations.
  4. kubernetes-cadvisor: Collects all cAdvisor metrics.
  5. kubernetes-service-endpoints: All service endpoints are scraped if the service metadata is annotated with the prometheus.io/scrape and prometheus.io/port annotations. This can be used for blackbox monitoring.
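For illustration, a pod-discovery job of this kind typically keeps only pods annotated with prometheus.io/scrape: "true" and rewrites the scrape address from the prometheus.io/port annotation through relabeling rules, roughly like the following (a simplified sketch, not the exact job from the linked config file):

```yaml
- job_name: 'kubernetes-pods'
  kubernetes_sd_configs:
    - role: pod
  relabel_configs:
    # scrape only pods annotated with prometheus.io/scrape: "true"
    - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
      action: keep
      regex: true
    # use the prometheus.io/port annotation as the scrape port
    - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
      action: replace
      regex: ([^:]+)(?::\d+)?;(\d+)
      replacement: $1:$2
      target_label: __address__
```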

prometheus.rules will contain all the alert rules for sending alerts to Alert Manager.
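As an example, a minimal rule in prometheus.rules could look like this (the alert name, expression, and threshold below are illustrative, not taken from the linked config):

```yaml
groups:
  - name: example-alert-rules
    rules:
      - alert: HighContainerMemory
        expr: sum(container_memory_usage_bytes) > 1073741824
        for: 5m
        labels:
          severity: slack
        annotations:
          summary: "Container memory usage crossed 1GiB"
```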

Create a Prometheus Deployment

Step 1: Create a file named prometheus-deployment.yaml and copy the following contents into the file. In this configuration, we are mounting the Prometheus config map as a file inside /etc/prometheus. It uses the official Prometheus image from Docker Hub.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus-deployment
  namespace: monitoring
  labels:
    app: prometheus-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus-server
  template:
    metadata:
      labels:
        app: prometheus-server
    spec:
      containers:
        - name: prometheus
          image: prom/prometheus
          args:
            - "--config.file=/etc/prometheus/prometheus.yml"
            - "--storage.tsdb.path=/prometheus/"
          ports:
            - containerPort: 9090
          volumeMounts:
            - name: prometheus-config-volume
              mountPath: /etc/prometheus/
            - name: prometheus-storage-volume
              mountPath: /prometheus/
      volumes:
        - name: prometheus-config-volume
          configMap:
            defaultMode: 420
            name: prometheus-server-conf
        - name: prometheus-storage-volume
          emptyDir: {}

You Might Like: Kubernetes Deployment Tutorial For Beginners

Step 2: Create a deployment on monitoring namespace using the above file.

kubectl create  -f prometheus-deployment.yaml 

Step 3: You can check the created deployment using the following command.

kubectl get deployments --namespace=monitoring

You can also see the deployment details on the Kubernetes dashboard.

Connecting To Prometheus Dashboard

You can view the deployed Prometheus dashboard in two ways.

  1. Using Kubectl port forwarding
  2. Exposing the Prometheus deployment as a service with NodePort or a Load Balancer.

We will look at both options.

Using Kubectl port forwarding

Using kubectl port forwarding, you can access the pod from your workstation using a selected port on your localhost.

Step 1: First, get the Prometheus pod name.

kubectl get pods --namespace=monitoring

The output will look like the following.

➜  kubectl get pods --namespace=monitoring
NAME                                     READY     STATUS    RESTARTS   AGE
prometheus-monitoring-3331088907-hm5n1   1/1       Running   0          5m

Step 2: Execute the following command with your pod name to access Prometheus from localhost port 8080.

Note: Replace prometheus-monitoring-3331088907-hm5n1 with your pod name.

kubectl port-forward prometheus-monitoring-3331088907-hm5n1 8080:9090 -n monitoring

Step 3: Now, if you access http://localhost:8080 on your browser, you will get the Prometheus home page.

Exposing Prometheus as a Service

To access the Prometheus dashboard over an IP or a DNS name, you need to expose it as a Kubernetes service.

Step 1: Create a file named prometheus-service.yaml and copy the following contents. We will expose Prometheus on all Kubernetes node IPs on port 30000.

Note: If you are on AWS or Google Cloud, you can use the LoadBalancer type, which will create a load balancer and point it to the service.

apiVersion: v1
kind: Service
metadata:
  name: prometheus-service
  namespace: monitoring
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/port: '9090'
spec:
  selector:
    app: prometheus-server
  type: NodePort
  ports:
    - port: 8080
      targetPort: 9090
      nodePort: 30000

The annotations in the above service YAML make sure that the service endpoint is scraped by Prometheus. The prometheus.io/port annotation should always be the target port mentioned in the service YAML.

Step 2: Create the service using the following command.

kubectl create -f prometheus-service.yaml --namespace=monitoring

Step 3: Once created, you can access the Prometheus dashboard using any Kubernetes node IP on port 30000. If you are on the cloud, make sure you have the right firewall rules for accessing the apps.

Step 4: Now, if you browse to Status --> Targets, you will see all the Kubernetes endpoints connected to Prometheus automatically through service discovery. This gives you all the Kubernetes container and node metrics in Prometheus.

Step 5: You can head over to the homepage, select the metrics you need from the drop-down, and get a graph for the time range you specify, for example, a graph of container memory utilization.

Setting Up Kube State Metrics

The Kube State Metrics service provides many metrics that are not available by default. Make sure you deploy Kube State Metrics to monitor all your Kubernetes API objects like deployments, pods, jobs, and cronjobs.

Please follow this article to setup Kube state metrics on kubernetes ==> How To Setup Kube State Metrics on Kubernetes

Setting Up Alert Manager

We have covered the Alert Manager setup in a separate article. Please follow ==> Alert Manager Setup on Kubernetes

Setting Up Grafana

Using Grafana, you can create dashboards from Prometheus metrics to monitor the Kubernetes cluster. Please follow this article for the setup ==> How To Setup Grafana On Kubernetes

