Back in 2018 when I joined InfraCloud, I had a nice opportunity to spend my first day writing an Ansible playbook to set up my new machine. Though I knew what Ansible was and how it worked, I had never tried writing a playbook.
Ansible is an IT automation tool. It can configure systems, deploy software, and orchestrate more advanced IT tasks such as continuous deployments or zero downtime rolling updates.
– Ansible Documentation
Read more about it at What is Ansible? | Opensource.com and How Ansible Works | Ansible.com.
I followed the official documentation for playbooks, which helped me get started. A playbook is a set of plays. A play has a list of tasks, the hosts to run the tasks on, and so on. Each task uses a module to perform an operation. A basic example is the dnf module, which can be used to install packages.
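For example, a minimal playbook with one play and a single dnf task might look like the sketch below. The host and package names are just placeholders, not from my actual playbook,

# playbook.yaml (a minimal sketch, not the playbook from this post)
- name: Install basic packages
  hosts: localhost
  connection: local
  become: yes        # installing packages needs root; more on this below
  tasks:
    - name: Install tmux and git
      dnf:
        name:
          - tmux
          - git
        state: present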
--ask-become-pass for privilege escalation

I used to run sudo ls before running the playbook, so that any task using sudo internally would not fail. This was not the right way to give a playbook the privileges it needs. Akshay helped me understand how to use the --ask-become-pass argument of the ansible-playbook command in order to run tasks which need privileged access.
$ ansible-playbook --help
…
-K, --ask-become-pass
ask for privilege escalation password
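For example, a task which needs root can be marked with become, and the playbook can then be run with ansible-playbook -K so Ansible asks for the sudo password up front. The play below is just a hypothetical sketch to show the idea, not a task from my playbook,

# a hypothetical sketch showing become together with -K
- name: Workstation setup
  hosts: localhost
  connection: local
  tasks:
    - name: Enable and start the sshd service
      systemd:
        name: sshd
        state: started
        enabled: yes
      become: yes        # this task runs with sudo

With -K, Ansible prompts for the password once and uses it for every task marked with become: yes, so there is no need to keep a sudo session warm with sudo ls.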
lineinfile and blockinfile

While discussing this with Akshay, I realized that the blockinfile module helps us create or update a block of text. Basically, when we run the same playbook multiple times, it does not add the text again; it just updates the existing block, as the marker comments work as identifiers for Ansible. Using blockinfile was more appropriate in my case, as I was working with a complete text block rather than just a line. With lineinfile, on the other hand, we need a proper regex, which makes it possible to replace a particular line or insert a new line before or after the lines matching the given regex.
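As an illustration, a blockinfile task for sourcing Powerline from ~/.bashrc could look like the sketch below. The marker comments are what make re-runs idempotent; the Powerline path is an assumption based on the Fedora package and may differ on other distributions,

# a sketch, not the exact task from my playbook
- name: Add Powerline setup to .bashrc
  blockinfile:
    path: ~/.bashrc
    marker: "# {mark} ANSIBLE MANAGED BLOCK - powerline"
    block: |
      if [ -f /usr/share/powerline/bash/powerline.sh ]; then
        source /usr/share/powerline/bash/powerline.sh
      fi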
In one day, I was able to write a very simple playbook which could install a few packages and create the configuration for Powerline. While writing it, I learned how to use loops, the ask-become feature, and modules like lineinfile, blockinfile, and dnf.
A few months back I bought a new machine, and this time I wanted to set up the workstation using Ansible. After a year of procrastination, I finally started improving the playbook further. Now the playbook does the complete setup of a machine with Fedora installed, which I can use as my workstation.
The sections below describe the work the playbook does.
Link to the workstation setup repository: https://gitlab.com/bhavin192/setupit
To set up the dotfiles, I decided to start using GNU Stow. It helps to manage symlinks to files located in a directory. The following set of tasks first checks if the symlink .stowed exists in the user's home directory. If it's not there, the tasks clone the dotfiles repository and stow the files. They also remove the existing .bashrc before running stow, if it already exists.
# tasks/dotfiles.yaml
- name: Check if already stowed
  stat:
    path: ~/.stowed
  register: st

- name: Remove existing .bashrc from home
  file:
    path: ~/.bashrc
    state: absent
  when: st.stat.islnk is not defined

- name: Clone the dotfiles repository
  git:
    repo: https://gitlab.com/bhavin192/dotfiles.git
    dest: ~/src/dotfiles
  when: st.stat.islnk is not defined

- name: Stow the dotfiles
  shell: |
    stow --verbose 2 --dir "${HOME}/src/dotfiles" --target "${HOME}" .
  when: st.stat.islnk is not defined
The get_url Ansible module can be used to download files. It accepts a checksum parameter and verifies the checksum once the file is downloaded. The value of this parameter can be a URL to a checksum file or the checksum value itself along with the algorithm. It only supports the checksum file format where both the checksum and the file name are present. It looks something like this,
$ sha256sum dive_0.9.1_linux_amd64.tar.gz
2e1cd4a28d8ac9ed72ce…afc6e271bda02974dde8 dive_0.9.1_linux_amd64.tar.gz
But most of the files I wanted to download had just the checksum value in the file. The workaround for this is to download the checksum file first and then use a lookup to save its contents in a variable. This variable can then be passed as the checksum value to get_url. Take a look at ansible/ansible#48790 (comment) for more details.
# tasks/binaries.yaml
- name: Create a temporary directory
  tempfile:
    state: directory
    prefix: "setupit."
  register: tmpdir

- name: Download checksum files
  get_url:
    url: "https://dl.k8s.io/{{ k8s_version }}/bin/linux/amd64/kubectl.sha512"
    dest: "{{ tmpdir.path }}/setupit-checksum-kubectl-sha512"

- name: Download binaries
  get_url:
    url: "https://dl.k8s.io/{{ k8s_version }}/bin/linux/amd64/kubectl"
    dest: "~/.local/bin/kubectl"
    checksum: "sha512:{{ bin_sha }}"
    mode: u+x
  vars:
    bin_sha: "{{ lookup('file', '{{ tmpdir.path }}/setupit-checksum-kubectl-sha512') }}"
You can find the complete implementation in tasks/binaries.yaml and vars/binaries.yaml.
Some of the binaries I wanted to set up were released as tar archives, and I wanted to extract only one file from each of them. In the case of the Helm v3 binary, I even wanted to change the name of the extracted file. The Ansible module unarchive uses gtar to extract files. It accepts an extra_opts parameter; these are extra arguments which are passed to the gtar command.
# tasks/binaries.yaml
- name: Extract Helm 3 binary as helm3
  unarchive:
    src: "{{ tmpdir.path }}/helm3-linux-amd64.tar.gz"
    remote_src: yes
    dest: "~/.local/bin"
    extra_opts:
      - "--add-file"
      - "linux-amd64/helm"
      - "--strip-components"
      - "1"
      - "--transform"
      - "s/helm/helm3/"
- --add-file extracts only linux-amd64/helm from the archive
- --strip-components removes the given number of leading components from the file name, i.e. linux-amd64, so the helm binary is extracted to ~/.local/bin/ instead of ~/.local/bin/linux-amd64
- --transform applies the given sed replace expression to file names

I got my playbook reviewed by Akshay and Chandan. We came up with a few modifications which I will be doing next.