Etcd Backup and Restore on Kubernetes Cluster [Tutorial]


In this Kubernetes tutorial, you will learn how to back up etcd and restore it on a Kubernetes cluster using an etcd snapshot.

In the Kubernetes architecture, etcd is an integral part of the cluster. All cluster objects and their states are stored in etcd. Here are a few things you should know about etcd from a Kubernetes perspective.

  1. It is a consistent, distributed, and secure key-value store.
  2. It uses the Raft consensus protocol.
  3. It supports highly available architectures with stacked etcd.
  4. It stores Kubernetes cluster configurations, all API objects, object states, and service discovery details (see the example below).
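
For example, on a kubeadm-based cluster you can list a few of the keys Kubernetes stores in etcd under the /registry prefix. This is a quick sketch; it assumes the default kubeadm certificate paths, which are explained in the backup steps below.

ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  get /registry --prefix --keys-only | head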

If you want to understand more about etcd and how Kubernetes uses it, I recommend reading the comprehensive Kubernetes Architecture post.

Also, when it comes to Kubernetes design best practices, etcd backup and restore is an important part of your cluster backup strategy.

Kubernetes etcd Backup Using etcdctl

Here is what you should know about etcd backup.

  1. etcd has a built-in snapshot mechanism.
  2. etcdctl is the command-line utility for interacting with etcd, including taking and restoring snapshots.
[Image: etcd backup workflow using etcdctl]

Follow the steps given below to take an etcd snapshot.

Step 1: Log in to the control plane.

Step 2: If you don’t have etcdctl on your control plane node, install it using the following command.

sudo apt install etcd-client
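
To verify the installation, check the client version. Depending on the packaged version, etcdctl may default to the older v2 API, which is why the commands in this post set ETCDCTL_API=3 explicitly.

ETCDCTL_API=3 etcdctl version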

Step 3: We need to pass the following four pieces of information to etcdctl to take an etcd snapshot.

  1. etcd endpoint (--endpoints)
  2. ca certificate (--cacert)
  3. server certificate (--cert)
  4. server key (--key)

You can get the above details in the following ways.

From the etcd static pod manifest file located at /etc/kubernetes/manifests/etcd.yaml.
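
For example, you can grep the relevant flags straight out of the manifest (the peer-* variants will also match; for etcdctl you want the non-peer cert, key, and trusted CA files):

sudo grep -E 'trusted-ca-file|cert-file|key-file|listen-client-urls' /etc/kubernetes/manifests/etcd.yaml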

[Image: getting the cert files from the etcd static pod manifest]

You can also get the above details by describing the etcd pod running in the kube-system namespace.

While describing the pod, replace etcd-master-node with your etcd pod name.

kubectl get po -n kube-system
kubectl describe pod etcd-master-node -n kube-system

Another, faster way to view the etcd server parameters is the following command.

ps aux | grep etcd
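
The output will look something like the following (trimmed for readability; the IP addresses and some flags will differ in your cluster):

root 1877 ... etcd --advertise-client-urls=https://172.30.1.2:2379 --cert-file=/etc/kubernetes/pki/etcd/server.crt --data-dir=/var/lib/etcd --key-file=/etc/kubernetes/pki/etcd/server.key --listen-client-urls=https://127.0.0.1:2379,https://172.30.1.2:2379 --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt ...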

Choose the method you are most comfortable with to get the cert details.

Step 4: Take an etcd snapshot backup using the following command.

ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=<ca-file> \
  --cert=<cert-file> \
  --key=<key-file> \
  snapshot save <backup-file-location>

The command looks like the following when you add the actual location and parameters. Execute the command to perform a backup. You can replace /opt/backup/etcd.db with the location and name of your choice.

ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  snapshot save /opt/backup/etcd.db

On successful execution, you will get a “Snapshot saved at /opt/backup/etcd.db” message, as shown below.

[Image: etcd backup snapshot command]

You can also verify the snapshot using the following command.

ETCDCTL_API=3 etcdctl --write-out=table snapshot status /opt/backup/etcd.db

Here is an example output.

+----------+----------+------------+------------+
|   HASH   | REVISION | TOTAL KEYS | TOTAL SIZE |
+----------+----------+------------+------------+
| b7147656 |    51465 |       1099 |     5.1 MB |
+----------+----------+------------+------------+
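
Snapshots are only useful if you take them regularly. As a minimal sketch, assuming the same certificate paths and an existing /opt/backup directory, a root cron entry like the following would take a date-stamped snapshot every day at 2 AM (the % characters must be escaped in crontab):

0 2 * * * ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key snapshot save /opt/backup/etcd-$(date +\%F).db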

Kubernetes etcd Restore Using Snapshot Backup

Now we have the backup in the /opt/backup/etcd.db location. We will use the snapshot backup to restore etcd.

Here is the command to restore etcd.

ETCDCTL_API=3 etcdctl snapshot restore <backup-file-location>

Let’s execute the etcd restore command. In my case, /opt/backup/etcd.db is the backup file.

ETCDCTL_API=3 etcdctl snapshot restore /opt/backup/etcd.db

If you want to use a specific data directory for the restore, you can add the location using the --data-dir flag as shown below.

ETCDCTL_API=3 etcdctl --data-dir /opt/etcd snapshot restore /opt/backup/etcd.db
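
The restore writes a brand-new data directory (it must not already contain data) with a member subdirectory holding the snap and wal folders. You can confirm it was created:

sudo ls /opt/etcd/member

You should see the snap and wal directories listed.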

After restoring, we need to update /etc/kubernetes/manifests/etcd.yaml, because the configuration still points to the old data directory.

We have now restored the etcd snapshot to a new path, so the only change to be made in the YAML file is the hostPath of the volume named etcd-data: change it from the old directory (/var/lib/etcd) to the new directory (/opt/etcd).

Edit the etcd.yaml and change the volume:

volumes:
  - hostPath:
      path: /opt/etcd
      type: DirectoryOrCreate
    name: etcd-data

The etcd pod will automatically be recreated with the new configuration, and you will be able to see the previous data.

Also, if you change --data-dir to /opt/etcd in the etcd manifest, make sure the volumeMounts entry for etcd-data is updated as well, with the mountPath pointing to /opt/etcd.
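
Here is a sketch of what those two related changes look like in /etc/kubernetes/manifests/etcd.yaml (only the affected lines are shown; the rest of the manifest stays as kubeadm generated it):

spec:
  containers:
  - command:
    - etcd
    - --data-dir=/opt/etcd        # was /var/lib/etcd
    ...
    volumeMounts:
    - mountPath: /opt/etcd        # must match the new --data-dir
      name: etcd-data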

etcd Backup FAQs

How to take etcd backup in Kubernetes?

To take an etcd backup, you need the etcdctl command-line utility. Use the etcdctl snapshot save command with the etcd certificates to perform the backup operation.

Conclusion

In this blog, we learned how to back up and restore Kubernetes etcd using the etcdctl command-line utility.

etcd backup and restore are essential tasks in Kubernetes cluster administration. It is also an important topic in the CKA certification exam.

If you are preparing for the CKA exam, do check out the CKA exam study guide and for the exam discount voucher, check the Kubernetes certification coupon.
