Kubernetes Deployment Tutorial For Beginners

This Kubernetes deployment tutorial guide will explain the key concepts in a Kubernetes YAML specification with an Nginx example deployment.

Introduction:

In Kubernetes, pods are the basic units that get deployed in the cluster. Kubernetes deployment is an abstraction layer for the pods. The main purpose of the deployment object is to maintain the resources declared in the deployment configuration in its desired state. A deployment configuration can be of YAML or JSON format.

Key Things To Understand

  1. A Deployment can schedule multiple pods. A pod, as a unit, cannot scale by itself.
  2. A Deployment represents a single purpose with a group of pods.
  3. A single pod can have multiple containers, and the containers inside a single pod share the same IP and can talk to each other using the localhost address.
  4. To access a Deployment with one or many pods, you need a Kubernetes Service endpoint mapped to the deployment using labels and selectors.
  5. A deployment should run only stateless services. Any application that requires state management should be deployed as a Kubernetes StatefulSet.

Deployment YAML:

A Kubernetes deployment YAML contains the following main specifications.

  1. apiVersion
  2. Kind
  3. metadata
  4. spec

Now let’s look at each specification in detail.

Note: In Kubernetes, everything persistent is defined as an object. Examples: Deployments, Services, ReplicaSets, ConfigMaps, Jobs, etc.

apiVersion

This specifies the API version of the Kubernetes deployment object. It varies between each Kubernetes version.

How To Use the Right API Version: Kubernetes APIs have three maturity levels.

  1. Alpha: This is the early release stage. It might contain bugs, and there is no guarantee that it will work in the future. Example: scalingpolicy.kope.io/v1alpha1
  2. Beta: An API becomes beta once it is alpha tested. It stays in continuous development & testing until it becomes stable. Beta versions will most likely go into the Kubernetes main APIs. Example: batch/v1beta1
  3. Stable: APIs that do not contain alpha or beta in their version go into the stable category. Only stable versions are recommended for use in production systems. Example: apps/v1

These APIs could belong to different API groups.

An example list of Kubernetes APIs from different API groups, taken from Kubernetes version 1.10.6, is shown below. The Deployment object belongs to the apps API group. You can list these APIs at http://localhost:8001/ by running kubectl proxy.

{
  "paths": [
    "/api",
    "/api/v1",
    "/apis",
    "/apis/",
    "/apis/admissionregistration.k8s.io",
    "/apis/admissionregistration.k8s.io/v1beta1",
    "/apis/apiextensions.k8s.io",
    "/apis/apiextensions.k8s.io/v1beta1",
    "/apis/apiregistration.k8s.io",
    "/apis/apiregistration.k8s.io/v1",
    "/apis/apiregistration.k8s.io/v1beta1",
    "/apis/apps",
    "/apis/apps/v1",
    "/apis/apps/v1beta1",
    "/apis/apps/v1beta2",
    "/apis/authentication.k8s.io",
    "/apis/authentication.k8s.io/v1",
    "/apis/authentication.k8s.io/v1beta1",
    "/apis/authorization.k8s.io",
    "/apis/authorization.k8s.io/v1",
    "/apis/authorization.k8s.io/v1beta1",
    "/apis/autoscaling",
    "/apis/autoscaling/v1",
    "/apis/autoscaling/v2beta1",
    "/apis/batch",
    "/apis/batch/v1",
    "/apis/batch/v1beta1",
    "/apis/certificates.k8s.io",
    "/apis/certificates.k8s.io/v1beta1",
    "/apis/cloud.google.com",
    "/apis/cloud.google.com/v1beta1",
    "/apis/extensions",
    "/apis/extensions/v1beta1",
    "/apis/metrics.k8s.io",
    "/apis/metrics.k8s.io/v1beta1",
    "/apis/networking.k8s.io",
    "/apis/networking.k8s.io/v1",
    "/apis/policy",
    "/apis/policy/v1beta1",
    "/apis/rbac.authorization.k8s.io",
    "/apis/rbac.authorization.k8s.io/v1",
    "/apis/rbac.authorization.k8s.io/v1beta1",
    "/apis/scalingpolicy.kope.io",
    "/apis/scalingpolicy.kope.io/v1alpha1",
    "/apis/storage.k8s.io",
    "/apis/storage.k8s.io/v1",
    "/apis/storage.k8s.io/v1beta1"
    ]
}
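
If you want to reproduce this listing against your own cluster, one way (assuming kubectl is already configured for the cluster) is to start a local proxy and query the API server:

kubectl proxy --port=8001 &
curl http://localhost:8001/

# A shorter alternative that prints only the group/version strings
kubectl api-versions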

Kind

Kind describes the type of the object/resource to be created. In our case, it's a Deployment object. Following is the main list of objects/resources supported by Kubernetes; you can also query the full list from your own cluster, as shown after the list.

componentstatuses
configmaps
daemonsets
deployments
events
endpoints
horizontalpodautoscalers
ingress
jobs
limitranges
namespaces
nodes
pods
persistentvolumes
persistentvolumeclaims
resourcequotas
replicasets
replicationcontrollers
serviceaccounts
services
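
If you want the complete, up-to-date list for your own cluster, newer kubectl versions (1.11 and above) provide a dedicated command. The output depends on your cluster version and the controllers installed on it:

kubectl api-resources

# Show the documented fields of a specific kind
kubectl explain deployment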

Metadata

It is the set of data that uniquely identifies a Kubernetes object. Following are the key metadata fields that can be added to an object.

labels
name
namespace
annotations

Let’s have a look at each metadata type

  1. Labels: Key-value pairs primarily used to group and categorize deployment objects. They are intended for object-to-object grouping and mapping using selectors. For example, a Kubernetes Service uses the pod labels in its selectors to send traffic to the right pods (there is a quick selector example after the metadata sample below). We will see more about labels and selectors in the service creation section.
  2. Name: It represents the name of the deployment to be created.
  3. Namespace: Name of the namespace where you want to create the deployment.
  4. Annotations: Key-value pairs like labels, but used for different use cases. You can add any information to annotations. For example, you can have an annotation like "monitoring": "true", and external sources will be able to find all the objects with this annotation to scrape their metrics. Objects without this annotation will be omitted.

There is other system-generated metadata, such as the UID, timestamp, resourceVersion, etc., that gets added to each deployment.

Example metadata

metadata:
  name: resource-name
  namespace: deployment-demo
  labels:
    app: web
    platform: java
    release: "18.0"
  annotations:
    monitoring: "true"
    prod: "true"
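
To see how labels are used in practice, here is a small sketch (assuming a deployment with the labels above already exists in the deployment-demo namespace) that filters objects by label using a selector:

# List deployments that carry the app=web and platform=java labels
kubectl get deployments -l app=web,platform=java -n deployment-demo

# Show the labels attached to every deployment in the namespace
kubectl get deployments -n deployment-demo --show-labels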

Spec

Under spec, we declare the desired state and characteristics of the object we want to have. For example, in a deployment spec, we would specify the number of replicas, the image name, etc. Kubernetes will make sure all the declarations under spec are brought to the desired state.

Spec has three important subfields.

  1. Replicas: It ensures that the specified number of pods is running for the deployment at all times. Example,
    spec:
      replicas: 3
  2. Selector: It defines the labels that match the pods for the deployment to manage. Example,
    selector:
        matchLabels:
          app: nginx
  3. Template: It has its own metadata and spec. The spec holds all the container information a pod should have: container image info, port information, ENV variables, command arguments, etc. Example,
    template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
            - image: nginx
              name: nginx

Kubernetes Example Deployment

Since we have looked at the basics, let's start with an example deployment. We will do the following in this section.

  1. Create a namespace
  2. Create a Nginx Deployment
  3. Create a Nginx Service
  4. Expose and access the Nginx Service

Note: A few of the operations in this example can be performed with just kubectl and without a YAML declaration. However, we are using the YAML specifications for all operations to understand them better.

Exercise Folder

To begin the exercise, create a folder named deployment-demo and cd into that folder. Create all the exercise files in this folder.

mkdir deployment-demo && cd deployment-demo

Create a Namespace

Let's create a YAML file named namespace.yaml for creating the namespace.

apiVersion: v1
kind: Namespace
metadata:
  name: deployment-demo
  labels:
    apps: web-based
  annotations:
    type: demo

Use kubectl command to create the namespace.

kubectl create -f namespace.yaml

Equivalent kubectl command

kubectl create namespace deployment-demo

Assign Resource Quota To Namespace

Now let's assign some resource quota limits to our newly created namespace. This will make sure the pods deployed in this namespace do not consume more system resources than mentioned in the resource quota.

Create a file named resourceQuota.yaml. Here is the resource quota YAML contents.

apiVersion: v1
kind: ResourceQuota
metadata:
  name: mem-cpu-quota
  namespace: deployment-demo
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi

Create the resource quota using the YAML.

kubectl create -f resourceQuota.yaml

Now, let’s describe the namespace to check if the resource quota has been applied to the deployment-demo namespace.

kubectl describe ns deployment-demo

The output should look like the following.

Name:         deployment-demo
Labels:       apps=web-based
Annotations:  type=demo
Status:       Active

Resource Quotas
 Name:            mem-cpu-quota
 Resource         Used  Hard
 --------         ---   ---
 limits.cpu       0     8
 limits.memory    0     16Gi
 requests.cpu     0     4
 requests.memory  0     8Gi

Create a Deployment

We will use the public Nginx image for this deployment.

Create a file named deployment.yaml and copy the following YAML to the file.

Note: This deployment YAML has the minimal required information we discussed above. You can add more specifications to the deployment YAML based on your requirements.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
  namespace: deployment-demo
  annotations:
    monitoring: "true"
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
        ports:
        - containerPort: 80
        resources:
          limits:
            memory: "2Gi"
            cpu: "1000m"
          requests: 
            memory: "1Gi"
            cpu: "500m"

Under containers, we have defined the resource limits, requests, and the container port (the one exposed in the Dockerfile).

Create the deployment using kubectl

kubectl create -f deployment.yaml

Check the deployment

kubectl get deployments -n deployment-demo

Even though we have added minimal information, after deployment, Kubernetes will add more information to the deployment such as resourceVersion, uid, status etc.

You can check it by describing the deployment in YAML format using the kubectl command.

kubectl get deployment nginx -n deployment-demo  --output yaml
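
You can also watch the rollout and, since a pod cannot scale by itself, change the replica count on the deployment. The commands below are a small sketch against the nginx deployment we just created:

# Wait until the deployment has rolled out completely
kubectl rollout status deployment/nginx -n deployment-demo

# Scale the deployment from 1 to 3 replicas (still within the namespace resource quota)
kubectl scale deployment nginx --replicas=3 -n deployment-demo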

Create a Service and Expose The Deployment

Now that we have a running deployment, we will create a Kubernetes service of type NodePort (port 30500) pointing to the nginx deployment. Using the NodePort, you will be able to access the Nginx service on all the Kubernetes nodes on port 30500.

Create a file named service.yaml and copy the following contents.

apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx
  name: nginx
  namespace: deployment-demo
spec:
  ports:
  - nodePort: 30500
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx
  type: NodePort

A Service is the best example for explaining labels and selectors. In this service, we have a selector with the “app: nginx” label. Using this, the service is able to match the pods in our nginx deployment, as the deployment and the pods have the same label. So all the requests coming to the nginx service are automatically sent to the nginx deployment.

Let’s create the service using kubectl command.

kubectl create -f service.yaml

You can view the service created using kubectl command.

kubectl get services  -n deployment-demo
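
To see the label-to-pod mapping described above in action, you can list the pods matched by the selector and compare them with the service endpoints (assuming the nginx deployment is running); the endpoint IPs should be the pod IPs:

# Pods that carry the app=nginx label
kubectl get pods -l app=nginx -n deployment-demo -o wide

# Pod IPs registered as endpoints of the nginx service
kubectl get endpoints nginx -n deployment-demo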

Now, you will be able to access the nginx service on any of the Kubernetes node IPs on port 30500.

For example,

http://35.134.110.153:30500/
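
If you are not sure of your node IPs, you can look them up with kubectl and then test the NodePort with curl. The IP in the URL above is just an example; <node-ip> below is a placeholder for one of your own node IPs:

kubectl get nodes -o wide

# Replace <node-ip> with an address from the output above
curl http://<node-ip>:30500/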

How To Setup Latest Nexus OSS On Kubernetes


Nexus is an open-source artifact storage and management system. It is a widely used tool and can be seen in most CI/CD workflows. We have covered the Nexus setup on a Linux VM in another article.

This guide will walk you through the step-by-step process of deploying Sonatype Nexus OSS on a Kubernetes cluster.

Setup Nexus OSS On Kubernetes

Key things to be noted,

  1. The Nexus deployment and service are created in the devops-tools namespace, so make sure you have the namespace created, or edit the YAML to deploy to a different namespace. Also, there are different deployment files for the Nexus 2.x & Nexus 3.x versions.
  2. In this guide, we are using an emptyDir volume for the nexus data. For production workloads, you should replace it with a persistent volume.
  3. The service is exposed as a NodePort. It can be replaced with the LoadBalancer type on a cloud.

Let’s get started with the setup.

Step 1: Create a namespace called devops-tools

kubectl create namespace devops-tools

Step 2: Create a Deployment.yaml file. It is different for Nexus 2.x and 3.x, and we have given both below. Create the YAML based on the Nexus version you need. Note: The images used in these deployments are from the official public Sonatype Docker repo (Nexus 2 image & Dockerfile, Nexus 3 image & Dockerfile).

  1. Deployment YAML for Nexus 2.x: Here we are passing a few customizable ENV variables and adding a volume mount for the nexus data.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nexus
      namespace: devops-tools
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: nexus-server
      template:
        metadata:
          labels:
            app: nexus-server
        spec:
          containers:
            - name: nexus
              image: sonatype/nexus:latest
              env:
              - name: MAX_HEAP
                value: "800m"
              - name: MIN_HEAP
                value: "300m"
              resources:
                limits:
                  memory: "4Gi"
                  cpu: "1000m"
                requests:
                  memory: "2Gi"
                  cpu: "500m"
              ports:
                - containerPort: 8081
              volumeMounts:
                - name: nexus-data
                  mountPath: /sonatype-work
          volumes:
            - name: nexus-data
              emptyDir: {}
    
  2. Deployment YAML for Nexus 3.x: Here we don't have any custom env variables. You can check the official Docker repo for the supported env variables.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nexus
      namespace: devops-tools
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: nexus-server
      template:
        metadata:
          labels:
            app: nexus-server
        spec:
          containers:
            - name: nexus
              image: sonatype/nexus3:latest
              resources:
                limits:
                  memory: "4Gi"
                  cpu: "1000m"
                requests:
                  memory: "2Gi"
                  cpu: "500m"
              ports:
                - containerPort: 8081
              volumeMounts:
                - name: nexus-data
                  mountPath: /nexus-data
          volumes:
            - name: nexus-data
              emptyDir: {}
    

Step 3: Create the deployment using kubectl command.

kubectl create -f Deployment.yaml

Check the deployment pod status

kubectl get po -n devops-tools
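
Nexus can take a few minutes to start. If the pod does not become ready, you can follow the container logs; <nexus-pod-name> below is a placeholder for the actual pod name from the previous command:

kubectl logs -f <nexus-pod-name> -n devops-tools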

Step 4: Create a Service.yaml file with the following contents to expose the nexus endpoint using NodePort.

Note: If you are on a cloud, you can expose the service using a load balancer by setting the service type to LoadBalancer. Also, the Prometheus annotations will help Prometheus discover and scrape the service endpoint.

apiVersion: v1
kind: Service
metadata:
  name: nexus-service
  namespace: devops-tools
  annotations:
      prometheus.io/scrape: 'true'
      prometheus.io/path:   /
      prometheus.io/port:   '8081'
spec:
  selector: 
    app: nexus-server
  type: NodePort  
  ports:
    - port: 8081
      targetPort: 8081
      nodePort: 32000

Check the service configuration using kubectl.

kubectl describe service nexus-service -n devops-tools

Step 5: Now you will be able to access Nexus on any of the Kubernetes node IPs on port 32000 (with the /nexus path for Nexus 2), as we have exposed the NodePort. For example,

For Nexus 2,

http://35.144.130.153:32000/nexus

For Nexus 3,

http://35.144.130.153:32000

Note: The default username and password will be admin & admin123
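
If the node IPs are not directly reachable from your workstation, a simple alternative (a sketch, not part of the original setup) is to port-forward the service to your local machine and access Nexus on localhost:

kubectl port-forward svc/nexus-service 8081:8081 -n devops-tools

# Nexus will then be available at http://localhost:8081 (or http://localhost:8081/nexus for Nexus 2)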


How To Mount Extra Disks on Google Cloud VM Instance


By default, new disks attached during instance creation cannot be used directly. You need to format and mount them to your instance to put them to use.

In this article, we will explain how to format and mount an extra disk to your Google Compute Engine VM instance.

Note: We assume that you have created the VM instance with an extra disk attached to it.

Formatting and Mounting Extra Disk on VM Instance

1. Log in to the instance and list the available disks using the following command.

sudo lsblk

An example output is shown below. Extra disks will not have any entry under the MOUNTPOINT column. Here, sdb is the extra disk that has to be formatted and mounted.

$ lsblk
NAME   MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda      8:0    0  10G  0 disk 
└─sda1   8:1    0  10G  0 part /
sdb      8:16   0  20G  0 disk

2. Next, format the disk to ext4 using the following command. In the command below, we are using /dev/sdb as that is the extra disk available.

sudo mkfs.ext4 -m 0 -F -E lazy_itable_init=0,lazy_journal_init=0,discard /dev/sdb
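
Before mounting, you can verify that the ext4 filesystem was created. For example, lsblk with the -f flag should now show ext4 against /dev/sdb:

sudo lsblk -f /dev/sdb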

3. Next, create a mount directory on the instance as shown below. You can replace the /demo-mount with a custom name and path you prefer.

sudo mkdir -p /demo-mount

4. Now, mount the disk to the directory we created using the following command.

sudo mount -o discard,defaults /dev/sdb /demo-mount

5. If you want all users to have write permission on this disk, you can execute the following command. Alternatively, apply user permissions based on the privileges you need for the disk.

sudo chmod a+w /demo-mount

6. Check the mounted disk using the following command.

df -h

A sample output,

$ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1        10G  2.3G  7.8G  23% /
devtmpfs        842M     0  842M   0% /dev
tmpfs           849M     0  849M   0% /dev/shm
tmpfs           849M  8.4M  840M   1% /run
tmpfs           849M     0  849M   0% /sys/fs/cgroup
tmpfs           170M     0  170M   0% /run/user/1001
tmpfs           170M     0  170M   0% /run/user/0
/dev/sdb         20G   45M   20G   1% /demo-mount

Automount Disk On Reboot

To automount the disk on system start or reboot, you need to add the mount entry to fstab. Follow the steps given below to add the mount to fstab.

1. First, back up the fstab file.

sudo cp /etc/fstab /etc/fstab.backup

2. Execute the following command to add an fstab entry with the UUID of the disk.

echo UUID=`sudo blkid -s UUID -o value /dev/sdb` /demo-mount ext4 discard,defaults,nofail 0 2 | sudo tee -a /etc/fstab

3. Check the UUID of the extra disk.

sudo blkid -s UUID -o value /dev/sdb

4. Open the fstab file and check for the new entry with the UUID of the extra disk.

sudo cat /etc/fstab

Now, on every reboot, the disk will be automatically mounted to the defined folder based on the fstab entry.
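
You can also test the fstab entry without rebooting. A safe check (assuming nothing is actively using the mount) is to unmount the disk and let mount -a remount everything listed in fstab:

sudo umount /demo-mount
sudo mount -a

# The disk should show up again at the mount point
df -h | grep demo-mount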


How To Setup an etcd Cluster On Linux – Beginners Guide


Introduction

etcd is an open-source key-value store for storing and retrieving configurations. It is a core component of Kubernetes, used to store and retrieve the state of cluster objects. It runs in a leader-member fashion, which makes an etcd cluster highly available and able to withstand node failures.

    1. It's a distributed key-value store
    2. It uses the Raft consensus protocol
    3. Clients can use REST/gRPC to retrieve the stored values.

Prerequisites

Before you begin, make sure you have the following setup.

  1. Three Linux servers (it can be an odd-sized quorum of 5, 7, etc. based on your needs)
  2. A valid hostname for all the servers
  3. Firewall rules on all the servers allowing the following ports for client requests and peer-to-peer communication.
    2380 (peer communication)
    2379 (client requests)

Setup an etcd Cluster on Linux

etcd setup is fairly easy and this guide follows the static bootstrap method, which means you need to know the IPs of your nodes for bootstrapping. This guide covers all the necessary steps to set up a cluster on Linux servers. It is a multinode setup with systemd files to run etcd as a service.

Following are the etcd server hostnames and IP details used in this guide. Change the IPs mentioned in the guide to your IPs wherever needed.

  1. etcd-1: 10.128.0.2
  2. etcd-2: 10.128.0.4
  3. etcd-3: 10.128.0.3

Let’s get started with the setup.

On All the 3 Nodes

Perform steps 1 to 6 on all the three nodes.

Step 1: CD into local src folder

cd /usr/local/src

Step 2: Download the etcd release from the etcd GitHub releases page. This guide uses version 3.3.9.

sudo wget "https://github.com/coreos/etcd/releases/download/v3.3.9/etcd-v3.3.9-linux-amd64.tar.gz"

Step 3: Untar the binary.

sudo tar -xvf etcd-v3.3.9-linux-amd64.tar.gz

Step 4: Move the extracted etcd executables (etcd & etcdctl) to the local bin.

sudo mv etcd-v3.3.9-linux-amd64/etcd* /usr/local/bin/

Step 5: Create relevant etcd folders, user & group. We will be running the etcd service as an etcd user.

sudo mkdir -p /etc/etcd /var/lib/etcd
sudo groupadd -f -g 1501 etcd
sudo useradd -c "etcd user" -d /var/lib/etcd -s /bin/false -g etcd -u 1501 etcd
sudo chown -R etcd:etcd /var/lib/etcd

Step 6: Perform the following as root user.

Set two environment variables. One to fetch the system IP and another to get the system hostname.

ETCD_HOST_IP=$(ip addr show eth0 | grep "inet\b" | awk '{print $2}' | cut -d/ -f1)
ETCD_NAME=$(hostname -s)

Create a systemd service file for etcd. Replace the IPs in the --initial-cluster flag with your server IPs.

Note: --name, --initial-advertise-peer-urls, --listen-peer-urls, and --listen-client-urls will be different for each server. The ETCD_NAME & ETCD_HOST_IP variables set them automatically.

cat << EOF > /lib/systemd/system/etcd.service
[Unit]
Description=etcd service
Documentation=https://github.com/coreos/etcd
 
[Service]
User=etcd
Type=notify
ExecStart=/usr/local/bin/etcd \\
 --name ${ETCD_NAME} \\
 --data-dir /var/lib/etcd \\
 --initial-advertise-peer-urls http://${ETCD_HOST_IP}:2380 \\
 --listen-peer-urls http://${ETCD_HOST_IP}:2380 \\
 --listen-client-urls http://${ETCD_HOST_IP}:2379,http://127.0.0.1:2379 \\
 --advertise-client-urls http://${ETCD_HOST_IP}:2379 \\
 --initial-cluster-token etcd-cluster-1 \\
 --initial-cluster etcd-1=http://10.128.0.2:2380,etcd-2=http://10.128.0.4:2380,etcd-3=http://10.128.0.3:2380 \\
 --initial-cluster-state new \\
 --heartbeat-interval 1000 \\
 --election-timeout 5000
Restart=on-failure
RestartSec=5
 
[Install]
WantedBy=multi-user.target
EOF

Bootstrap The etcd Cluster

Once all the configurations are applied on the three servers, start and enable the newly created etcd service on all the nodes. The first server will act as the bootstrap node, and one node will be automatically elected as the leader once the service is started on all three nodes.

systemctl daemon-reload
systemctl enable etcd
systemctl start etcd.service
systemctl status -l etcd.service

Verify etcd Cluster Status

etcdctl is the utility to interact with the etcd cluster. You can find this utility in the /usr/local/bin folder on all the nodes.

You can use any one of the cluster nodes to perform the following checks.

Check the cluster health using the following command

etcdctl cluster-health

Verify cluster membership status using the following command. It will show the leader status.

etcdctl  member list

By default, etcdctl uses the etcd v2 API, so you need to explicitly set the ETCDCTL_API=3 variable to access etcd v3 functionality.

You can set it as an environment variable or pass it along with each etcdctl command as shown below.
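
For example, you could export the variable once in your shell and run a couple of read-only health checks. The --endpoints flag below reuses the node IPs from this guide and is optional when running the command on one of the etcd nodes:

export ETCDCTL_API=3

# Check that the local member is healthy
etcdctl endpoint health

# Show the status of all three members, including the current leader
etcdctl --endpoints=http://10.128.0.2:2379,http://10.128.0.4:2379,http://10.128.0.3:2379 endpoint status --write-out=table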

Let's write a few key-value pairs to the cluster and verify them.

ETCDCTL_API=3 etcdctl put name1 batman
ETCDCTL_API=3 etcdctl put name2 ironman
ETCDCTL_API=3 etcdctl put name3 superman
ETCDCTL_API=3 etcdctl put name4 spiderman

Now you can try getting the value of name3 using the following command.

ETCDCTL_API=3 etcdctl get name3

You can list all the keys using ranges and prefixes

ETCDCTL_API=3 etcdctl get name1 name4 # lists keys in the range name1 to name4 (the end key name4 is excluded)
ETCDCTL_API=3 etcdctl get --prefix name # lists all keys with name prefix

How To Setup Google Provider and Backend For Terraform


As part of getting started, you should have a valid Google service account that has the required permissions for the resources you are trying to manage using Terraform. You can get this service account from the Google Cloud IAM console.

This guide covers the following two main setups.

  1. Google Provider Setup
  2. GCS backend Setup with multiple ways of initialization.

Terraform Google Provider Configuration

The Terraform Google Cloud provider configuration is a set of key-value pairs and contains four keys.

  1. Credentials: Google service account file path.
  2. Project: The Google Project which Terraform wants to manage.
  3. Region: Google cloud region
  4. Zone: Google cloud zone.

An example configuration is given below. With this configuration, you can connect to your Google account in the us-central1 region for the devopscube-demo project.

provider "google" {
  credentials = "${file("service-account.json")}"
  project     = "devopscube-demo"
  region      = "us-central1"
  zone        = "us-central1-c"
}

Note: ${file("service-account.json")} looks for the service account key in the current folder where you are running the terraform command.

Managing Service Account

You can manage the service account in the following ways.

1. Enter the path of the service account file with the credentials key. For example,

credentials = "${file("/opt/terraform/service-account.json")}"

2. Set the credentials environment variable GOOGLE_CLOUD_KEYFILE_JSON. In this case, we don't have to use the credentials key in the provider definition. Terraform will automatically pick up the credential location from the environment variable. Example,

export GOOGLE_CLOUD_KEYFILE_JSON="/opt/terraform/service-account.json"

Note: export will only set the environment variable in the current terminal. To set a permanent environment variable, you need to add it to the user/system profile.

Example Terraform Code With Google Provider

provider "google" {
  credentials = "${file("/opt/creds/service-account.json")}"
  project     = "devopscube-demo"
  region      = "us-central1"
}
resource "google_compute_instance" "ubuntu-xenial" {
  name         = "devopscube-demo-instance"
  machine_type = "f1-micro"
  zone         = "us-west1-a"

  boot_disk {
    auto_delete = false
    initialize_params {
      image = "ubuntu-1604-xenial-v20181023"
      size  = "30"
    }
  }

  network_interface {
    network = "default"
    access_config {}
  }

  metadata {
    foo = "bar"
  }

  metadata_startup_script = "echo demo > /demo.txt"

  scheduling {
    preemptible = true
  }

  service_account {
    scopes = ["cloud-platform"]
  }
}

Once you execute the init command, Terraform will automatically download the Google provider plugin.

Google Cloud Storage (GCS) Terraform Backend Setup

During every Terraform run, Terraform creates a state file for the executed plan. By default, it creates the state in the local file system. You can store this state in a remote GCS backend instead.

Before getting started, you need to have the following.

  1. GCS Bucket: A Google Cloud Storage bucket where you want to save the Terraform state. You can create one from the GCP console.
  2. Valid Google Service Account: A Google service account with permissions to write to the storage bucket, used by Terraform to save the states.

GCS backend configuration has the following key-value pairs.

  1. Bucket: Google storage bucket name.
  2. Prefix: The folder path inside the bucket where the state will be stored.
  3. Credentials: Path to google service account file.

Backend configuration will look like this. You should have this backend configuration in your root terraform file.

terraform {
  backend "gcs" {
    bucket = "devopscube-states"
    prefix = "demo"
    credentials = "service-account.json"
  }
}

Initializing GCS Backend

You need to initialize the GCS backend for the first time using the init command. This reads the backend configuration from the terraform block and initializes the state file in the bucket path.

terraform init

You can also pass the backend configuration at runtime using a partial configuration.

The terraform block would have an empty declaration as shown below.

terraform {
  backend "gcs" {}  
}

Now, you can pass the backend configurations with the init command as shown below.

terraform init \
    -backend-config="bucket=devopscube-states" \
    -backend-config="prefix=demo" \
    -backend-config="credentials=service-account.json"

You can also pass the backend configuration as a file during runtime.

Note: This is a perfect use case for using secrets with Vault. You can store your backend configs in Vault and retrieve them during Terraform initialization. Covering Vault integration is out of the scope of this article.

For example, create a file named backend.config and enter all the backend details as key-value pairs as shown below.

bucket = "devopscube-states"
prefix = "demo"
credentials = "service-account.json"

You can pass this file with the init command as follows.

terraform init -backend-config=backend.config
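
Once the backend is initialized, the usual workflow applies. As a quick sketch, you can save a plan and apply it, and the resulting state will be written to the GCS bucket instead of the local directory:

terraform plan -out=tfplan
terraform apply tfplan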

 
