
Migrating from Helm v2 to v3

Version 3 of Helm, the package manager for Kubernetes, was released a few months ago. This release comes with a lot of new changes and improvements. I was trying out the beta releases of v3 with the cluster setup we have. In this post I will be talking about the changes in this release and how to migrate your charts and releases.

What is Helm?
Helm helps you manage Kubernetes applications — Helm Charts help you define, install, and upgrade even the most complex Kubernetes application.
Charts are easy to create, version, share, and publish — so start using Helm and stop the copy-and-paste.

Before taking a look at v3, let’s see what the issue was with the Tiller component in v2.

Security issue with Tiller from Helm v2

Helm v2 comes with a server-side component called Tiller. Tiller takes care of creating all the resources which are part of a chart. It usually needs admin privileges so that it can create all the required resources in the cluster.

Tiller exposes a gRPC port, which the helm client uses to communicate with it. By default, this port is accessible to any authenticated user in the cluster. This also means that any application running in the cluster can access it (if you don’t have a proper NetworkPolicy). This creates the risk of a compromised application running install/delete operations in the cluster.

To mitigate this issue, one can do the following (see the sketch after this list):

  • Give Tiller fewer privileges.
  • Install Tiller in each namespace and give it access to that namespace only.
  • Enable TLS authentication between the helm client and Tiller.
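
For reference, here is a rough sketch of what the last two options look like with the Helm 2 CLI. The namespace, service account name, and certificate paths are placeholders, and the RBAC Role/RoleBinding that limits the service account to its namespace is assumed to exist already.

# Per-namespace Tiller bound to a limited service account (placeholder names).
$ kubectl create serviceaccount tiller --namespace staging
$ helm init --tiller-namespace staging --service-account tiller

# TLS between the helm client and Tiller, using pre-generated certificates.
$ helm init --tiller-tls --tiller-tls-verify \
    --tiller-tls-cert tiller.crt --tiller-tls-key tiller.key \
    --tls-ca-cert ca.crt
$ helm ls --tls   # client commands then need the --tls flag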

The Tiller component has been removed from Helm v3. The helm client now gets the privileges of the user running the helm command (it uses KUBECONFIG).
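
Since Helm 3 acts with whatever permissions your kubeconfig user has, you can check in advance what an install will be allowed to do. The namespace below is just an example.

# Helm 3 uses the current kubeconfig context, so kubectl can tell you
# up front whether an install is likely to be permitted.
$ kubectl auth can-i create deployments --namespace cluster-tools
$ kubectl auth can-i create clusterroles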

What’s new in Helm 3

While Helm v3 is a major rewrite over v2, here is a list of a few notable changes in the tool.

  • Tiller has been removed completely. All the operations happen using the client binary itself (the helm command).
  • The Helm code base can now be used as a package by other tools. This can be used to achieve the same results as the helm command from Go code.
  • Chart dependencies have moved to Chart.yaml instead of a separate requirements.yaml file (see the sketch after this list).
  • The apiVersion of Chart.yaml has been updated to v2.
  • CRDs are now handled differently than normal resources.
  • Ability to use a Docker registry to distribute charts, modifications to commands, and so on.
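
For example, a Helm 3 style Chart.yaml looks roughly like the following; the chart name, version, and the redis dependency are purely illustrative.

$ cat <<'EOF' > mychart/Chart.yaml
apiVersion: v2
name: mychart
description: A hypothetical chart showing the new Chart.yaml layout
version: 0.1.0
appVersion: "1.0"
dependencies:            # used to live in requirements.yaml with Helm 2
  - name: redis
    version: ~10.5.0
    repository: https://kubernetes-charts.storage.googleapis.com/
EOF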

Check out Overview of Helm 3 Changes and Changes since Helm 2 for more details about these changes.

Migrating to Helm 3

Before migrating existing releases from clusters, we need to make sure that the charts we use are compatible with Helm 3. The easiest way is to test all the charts in a separate cluster which is an exact replica of the existing cluster. While doing that, I found that almost all the charts I use were working just fine.
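
A quick client-side sanity check before installing a chart in the replica cluster is to lint and render it with the Helm 3 binary. This does not replace testing in a real cluster, the chart names below are just examples, and it assumes the stable repository has already been added (see the FAQ at the end).

# Lint a chart you maintain locally, and render a repo chart without
# touching the cluster to catch obvious incompatibilities early.
$ helm3 lint ./mychart
$ helm3 template test-release stable/metrics-server > /dev/null && echo "rendered OK"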

The prometheus-operator chart had issues, as it uses CRDs to create Prometheus, Alertmanager, etc.

What has changed when it comes to CRDs

The way Helm 3 handles CRDs has changed. They are now treated as special resources and are never upgraded by Helm once installed. The upgrade operation should be done by cluster operators (admins) with extra care. A few things about this change:

  • CRDs should be put in the crds directory at the top level of the chart directory (see the sketch after this list).
  • The YAML files in this directory cannot be templated like the other resources in the templates directory.
  • The crd-install hook, which used to take care of installing CRD files from the templates directory, has been removed. If there are any files with the crd-install annotation, they are skipped by Helm.
  • Files from the crds directory are applied before the chart is rendered. They are never applied again if the CRDs already exist in the cluster.
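
As a rough illustration of that layout (the chart and file names are hypothetical), moving a CRD out of templates looks like this:

# Move a CRD manifest from templates/ to the top-level crds/ directory.
$ mkdir -p mychart/crds
$ git mv mychart/templates/my-crd.yaml mychart/crds/my-crd.yaml
# The file in crds/ must be plain YAML: strip any {{ ... }} templating and
# the old "helm.sh/hook": crd-install annotation by hand.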

The snippet of the sort() function from the releaseutil package can be found here; it calls continue when it finds an unknown hook. More information about this change is available at Proposal: Manage CRDs.

Here are a few links from Helm’s official documentation about CRDs: Helm | Charts - CRDs, Helm | Custom Resource Definitions.

Modifying the charts to be compatible with both versions

Of all the charts I had, the prometheus-operator chart was failing as it uses CRDs, and Helm 3 was skipping the CRDs from the templates directory. After experimenting, I came up with a solution: copy the YAML files of the CRDs from templates to the crds directory, removing any templating from those files. Helm 2 will ignore the files in crds, and Helm 3 will skip the CRD files in templates.

$ helm3 install prom stable/prometheus-operator
…
manifest_sorter.go:175: info: skipping unknown hook: "crd-install"

After proposing this change (helm/charts#18721), I got a suggestion from vsliouniaev to use .Files.Glob instead of keeping the same file in two places. There were two ways to achieve this. One was to have an object with kind: List, which would hold all the CRD objects. The second was to have one YAML file with multiple YAML documents of CRDs separated by the --- separator. The second way worked well and was backward compatible with older releases as well.

# templates/prometheus-operator/crds.yaml
{{- range $path, $bytes := .Files.Glob "crds/*.yaml" }}
{{ $.Files.Get $path }}
---
{{- end }}

Detailed explanation can be found in helm/charts#18721 (comment).

With this change merged, prometheus-operator was compatible with both versions 2 and 3 of Helm. The same solution can be applied to other charts so that you can install them with Helm 3. All the charts from the stable repository are now compatible. Thanks to everyone who helped with [stable/*] Helm 3 backwards-compatibility for community charts. Make sure you check all the points mentioned in that issue to keep your charts compatible.

Migrating the client configurations

Helm 3 uses the XDG Base Directory Specification. Data, configuration, and cache are now stored in different directories instead of ~/.helm ($HELM_HOME). The migration plugin developed by the Helm community can migrate the repositories, plugins, etc. according to the new directory structure.
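
On Linux this typically means the XDG locations shown below. The helm env command prints the directories Helm 3 actually uses; the exact variables and output format can differ slightly between Helm 3 releases.

$ helm3 env | grep -E 'HELM_(CACHE|CONFIG|DATA)_HOME'
# Typical defaults on Linux:
#   HELM_CACHE_HOME  -> ~/.cache/helm
#   HELM_CONFIG_HOME -> ~/.config/helm
#   HELM_DATA_HOME   -> ~/.local/share/helm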

helm-2to3 plugin on GitHub.

Follow the Setting up Helm v3, helm-2to3 plugin and Migrate Helm v2 configuration sections from the official blog post about migration to v3. After following those steps, you will have the helm3 command along with the 2to3 plugin installed on your machine.
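
In short, the steps boil down to something like the following; helm3 here assumes you keep the v2 and v3 binaries side by side, as I did.

# Install the 2to3 plugin into the Helm 3 client and migrate the
# Helm v2 configuration (repositories, plugins) to the new locations.
$ helm3 plugin install https://github.com/helm/helm-2to3
$ helm3 2to3 move config --dry-run   # preview the config migration first
$ helm3 2to3 move config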

Migrating the installed releases

When we install or upgrade a chart, it creates a Helm release in the cluster. In Helm 2 this release is stored as either a ConfigMap or a Secret within the cluster, which is what makes rollbacks and release history possible. The format in which Helm 3 stores the release information is different from v2, and the 2to3 plugin does the work of converting these releases to the new format.
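
If you are curious, you can see the new storage location yourself: Helm 3 keeps each release as a Secret in the release’s own namespace instead of objects in Tiller’s namespace. The namespace below is just an example, and the label selector and naming scheme in the comment reflect what I have observed with Helm 3.

# Helm 3 stores each release as a Secret named sh.helm.release.v1.<name>.v<revision>
# in the release's own namespace, labelled with owner=helm.
$ kubectl get secrets --namespace cluster-tools --selector "owner=helm"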

$ helm3 history prom
REVISION	UPDATED                 	STATUS    	CHART                    	APP VERSION	DESCRIPTION     
1       	Wed Dec 11 15:23:41 2019	superseded	prometheus-operator-8.3.3	0.34.0     	Install complete
6       	Wed Jan 22 10:43:05 2020	deployed  	prometheus-operator-8.3.3	0.34.0     	Upgrade complete

Before migrating the releases, make sure that you are on the latest version of all the charts installed in the cluster. Some of the charts might have been updated to make sure they are compatible with Helm 3.

Take a look at the Readme before migration.

1. Backup all the existing releases

It’s important to take a backup of all the releases before starting with the migration. Though the first step of migration won’t delete Helm 2’s release information (ConfigMaps or Secrets), it’s always better to take a backup.

Using the helm-backup plugin by maorfr

helm-backup is a Helm 2 plugin which can take a backup of releases and also has the ability to restore them. Here is how it achieves backup and restore:

  • It finds the storage method used by Tiller, then backs up the ConfigMaps/Secrets along with the release names as a tar file.
  • While doing the restore, it first applies the ConfigMaps/Secrets. Then it tries to find the release with the STATUS=DEPLOYED label, gets the manifest (YAML) for that release, and applies it.

Take a look at these functions for more details, Backup(namespace string), Restore(namespace string), helm_restore.Restore(releaseName, tillerNamespace, label string).

The following installs the plugin and takes a backup of all the releases from the cluster-tools namespace.

$ helm plugin install https://github.com/maorfr/helm-backup
Downloading and installing helm-backup 0.1.2 ...
https://github.com/maorfr/helm-backup/releases/download/0.1.2/helm-backup-linux-0.1.2.tgz
Installed plugin: backup

$ helm backup cluster-tools
2020/03/14 13:56:36 getting tiller storage
2020/03/14 13:56:36 found tiller storage: configmaps
2020/03/14 13:56:36 getting releases in namespace "cluster-tools"
2020/03/14 13:56:36 found relases: metrics-server, nginx-ingress
2020/03/14 13:56:36 getting backup data
2020/03/14 13:56:37 successfully got backup data
2020/03/14 13:56:37 writing backup to file cluster-tools.tgz
2020/03/14 13:56:37 backup of namespace "cluster-tools" to file cluster-tools.tgz complete

Using kubectl to backup the releases

Another way is to just back up all the ConfigMaps or Secrets which hold the release information. The easiest way to do it is to run the following command. Based on Tiller’s storage backend configuration, use ConfigMaps or Secrets.

$ kubectl get configmaps \
    --namespace "kube-system" \
    --selector "OWNER=TILLER" \
    --output "yaml" > helm2-backup-cm.yaml
# helm2-backup-cm.yaml
apiVersion: v1
kind: List
items:
- apiVersion: v1
  data:
    release: H4sIAAAAAAAC/+x8z28cyXUwtFp+sOuzA4de+7C5PA…
  kind: ConfigMap
  metadata:
    creationTimestamp: "2020-03-14T08:13:41Z"
    labels:
      MODIFIED_AT: "1584173700"
      NAME: metrics-server
      OWNER: TILLER
      STATUS: SUPERSEDED
      VERSION: "1"
    name: metrics-server.v1
    namespace: kube-system
# …
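
Restoring from this kind of backup is, in principle, just a matter of re-applying the file so that Tiller can see the release records again (note that this restores only the release information, not the workloads themselves); a minimal sketch:

# Re-create the Helm 2 release ConfigMaps from the backup file,
# then confirm that Tiller lists the releases again.
$ kubectl apply -f helm2-backup-cm.yaml
$ helm ls --all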

2. Convert the Helm 2 releases to Helm 3

The 2to3 plugin has a dry-run mode which can be used to check whether a release will be converted properly. I wrote a small script which finds all the installed releases and runs the convert command on each of them with --dry-run. After that, it asks the user whether they want to convert the selected release.

#!/usr/bin/env bash
# https://gitlab.com/snippets/1953002

helm3_cmd="helm3"
if [[ -x "$(which helm 2>/dev/null)" ]]; then
  helm2_releases="$(helm ls --all --short)"
else
  echo "'helm' is not installed or not present in PATH. Using kubectl to get list of releases."
  # …
fi

echo -e "Found the following releases:\n${helm2_releases}\n"

for release in ${helm2_releases}; do
  ${helm3_cmd} 2to3 convert --dry-run "${release}"
  read -p "Convert '${release}'? [Y/n] " user_choice
  if [[ "${user_choice}" == "Y" ]]; then
    echo "Converting '${release}'."
    ${helm3_cmd} 2to3 convert "${release}"
    echo "Converted '${release}' successfully."
  else
    echo "Skipping conversion of '${release}'."
  fi
done

The complete script can be downloaded from here.

$ ./helm_2to3_batch_convert.sh
Found the following releases:
metrics-server
nginx-ingress

2020/03/14 16:42:53 NOTE: This is in dry-run mode, the following actions will not be executed.
2020/03/14 16:42:53 Run without --dry-run to take the actions described below:
2020/03/14 16:42:53 
2020/03/14 16:42:53 Release "metrics-server" will be converted from Helm v2 to Helm v3.
2020/03/14 16:42:53 [Helm 3] Release "metrics-server" will be created.
2020/03/14 16:42:53 [Helm 3] ReleaseVersion "metrics-server.v1" will be created.
Convert 'metrics-server'? [Y/n] Y
Converting 'metrics-server'.
2020/03/14 16:42:57 Release "metrics-server" will be converted from Helm v2 to Helm v3.
2020/03/14 16:42:57 [Helm 3] Release "metrics-server" will be created.
2020/03/14 16:42:57 [Helm 3] ReleaseVersion "metrics-server.v1" will be created.
2020/03/14 16:42:57 [Helm 3] ReleaseVersion "metrics-server.v1" created.
2020/03/14 16:42:57 [Helm 3] Release "metrics-server" created.
2020/03/14 16:42:57 Release "metrics-server" was converted successfully from Helm v2 to Helm v3.
2020/03/14 16:42:57 Note: The v2 release information still remains and should be removed to avoid conflicts with the migrated v3 release.
2020/03/14 16:42:57 v2 release information should only be removed using `helm 2to3` cleanup and when all releases have been migrated over.
Converted 'metrics-server' successfully.
2020/03/14 16:42:57 NOTE: This is in dry-run mode, the following actions will not be executed.
2020/03/14 16:42:57 Run without --dry-run to take the actions described below:
2020/03/14 16:42:57 
2020/03/14 16:42:57 Release "nginx-ingress" will be converted from Helm v2 to Helm v3.
2020/03/14 16:42:57 [Helm 3] Release "nginx-ingress" will be created.
2020/03/14 16:42:57 [Helm 3] ReleaseVersion "nginx-ingress.v1" will be created.
2020/03/14 16:42:57 [Helm 3] ReleaseVersion "nginx-ingress.v2" will be created.
Convert 'nginx-ingress'? [Y/n] n
Skipping conversion of 'nginx-ingress'.

Run helm3 ls to see if the releases were converted correctly. (Note: releases are namespace-scoped in Helm 3.)

$ helm3 ls --namespace cluster-tools
NAME          	NAMESPACE    	REVISION	UPDATED                                	STATUS  	CHART                	APP VERSION
metrics-server	cluster-tools	5       	2020-03-14 08:15:10.393212762 +0000 UTC	deployed	metrics-server-2.10.0	0.3.6      
nginx-ingress 	cluster-tools	3       	2020-03-14 08:11:36.01362401 +0000 UTC 	deployed	nginx-ingress-1.26.2 	0.26.1     

Once all the releases are converted, run an upgrade for each of them using helm3. It’s possible for the migration of a release to succeed even though the chart is incompatible with Helm 3; in that case, the next upgrade of the release using helm3 might fail.
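
Such an upgrade can be as simple as re-applying the existing values. The release, chart, and namespace below match the earlier examples but are otherwise just placeholders.

# Re-run an upgrade with Helm 3 to confirm the converted release still works.
# --reuse-values keeps the values from the last revision.
$ helm3 upgrade metrics-server stable/metrics-server \
    --namespace cluster-tools --reuse-values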

3. Cleanup the Helm 2 data and resources

After converting all the releases successfully (and testing upgrades using helm3), it’s time to clean up the cluster resources which were used by Helm 2. The 2to3 cleanup command will remove the Helm v2 client configuration, release data, and Tiller from the cluster.
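
The cleanup command also supports a dry-run, which is worth running first to see exactly what will be removed; something along these lines:

# Preview what the cleanup would remove (v2 config, release data, Tiller),
# then run it for real once you are happy with the output.
$ helm3 2to3 cleanup --dry-run
$ helm3 2to3 cleanup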

Follow the Clean up of Helm v2 data section from the official blog post.

Frequently asked questions

These are a few questions which might be useful to someone doing the migration or starting with Helm 3.

  1. Is it possible to upgrade from v1 of Chart.yaml to v2?
    Yes. The upgrade is the same as a normal chart upgrade.
  2. How to list the releases from all the namespaces?
    helm3 ls --all-namespaces or helm3 ls -A (new in v3.1.0)
  3. How to add the stable or incubator repositories?
    $ helm3 repo add stable https://kubernetes-charts.storage.googleapis.com/
    $ helm3 repo add incubator https://kubernetes-charts-incubator.storage.googleapis.com/
    
    Reference 1, 2.
  4. What is going to happen with stable and incubator chart repositories?
    TL;DR. Those are going to be deprecated. Read more about it here.

Note: this post is also available on InfraCloud’s blog.

