Archive for March, 2016

Using the Ansible Slurp module

I recently discovered the slurp module within Ansible when I was attempting to find new modules in Ansible 2.0. It is particularly interesting for me since I’ve been doing a bunch of stuff involving the contents of files on remote nodes for my OpenVPN playbook. So I decided to try using it in one of my latest playbooks and see how much better it is than doing command: cat <file>.

Using it

My use case for slurp was checking whether a newly bootstrapped host was Fedora 22, and upgrading it to Fedora 23 if it was. The problem in this case is that recent versions of Fedora don't ship with Python 2, so we can't use fact gathering to find the Fedora version (and we need to install Python 2 before we can do anything else).

The suggested method is to install Python using the raw module, and then run the setup module to make the facts available.
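That approach looks roughly like this (a sketch rather than anything I actually ran – the host pattern is a placeholder):

- hosts: all            # placeholder – substitute your own host pattern
  gather_facts: no
  tasks:
    - name: Install Python so Ansible modules can run
      raw: dnf -y -e0 -d0 install python python-dnf
    - name: Gather facts now that Python is available
      setup:
    - name: Upgrade to Fedora 23
      command: dnf -y -e0 -d0 --releasever 23 distro-sync
      when: ansible_distribution_version == "22"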

But I was going to reboot the node right after the install anyway, so I didn't feel like running the full setup module – which made this a perfect place to try the slurp module.

Using it is simple – there’s only one parameter: src, the file you want to get the contents of.

Similarly, using the results is also simple, with one exception: The content of the file is base64 encoded, so it must be decoded before use. Thankfully, Ansible/Jinja2 provides the b64decode filter to easily get the contents into a usable form.
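For example, just printing the decoded contents of a slurped file (assuming it was registered as fedora, as in the playbook below) looks like this:

    - name: Show the decoded file contents
      debug:
        msg: "{{ fedora.content | b64decode }}"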

My final playbook ended up looking something like this:

- hosts: all            # placeholder – substitute your own host pattern
  gather_facts: no
  tasks:
    - name: install packages for ansible support
      raw: dnf -y -e0 -d0 install python python-dnf
    - name: Check for Fedora 22
      slurp:
        src: /etc/fedora-release
      register: fedora
    - name: Upgrade to Fedora 23
      command: dnf -y -e0 -d0 --releasever 23 distro-sync
      when: '"Fedora release 22" in fedora.content|b64decode'

Functionally, it’s pretty much identical to the old style of command: cat <file>, register, and when: xyz in cmd.stdout to get & use the contents of files. All of those elements are still there, renamed at most – register is still used unmodified.
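For comparison, the old style of the same check would look something like this (a sketch, not lifted from one of my old playbooks):

    - name: Check for Fedora 22 (old style)
      command: cat /etc/fedora-release
      register: fedora
      changed_when: false
    - name: Upgrade to Fedora 23
      command: dnf -y -e0 -d0 --releasever 23 distro-sync
      when: '"Fedora release 22" in fedora.stdout'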

The fact that I’m using a dedicated module for it, though, makes my playbook look a lot more Ansible-ish, which is something I like. (And the fact that I don’t need a changed_when entry is a strong plus for code cleanliness.)

Backing up & restoring Jenkins

I’m moving my Jenkins instance to a new server, which means backing it up & restoring it.

Backup

The nice thing about Jenkins is that it’s almost entirely self-contained in /var/lib/jenkins, which means I really only have one directory to back up.

I’m using duply to back the folder up – but it’s 1.9GB in size. So to save space & bandwidth, I’m going to exclude certain files. This is the content of my /etc/duply/jenkins/exclude file:

**/*.rpm
**/plugins/*/
**/plugins/*.jpi
**/plugins/*.bak
**/workspace
**/.jenkins/war

The main thing I’m excluding is build artifacts – because I’m building RPMs, the SRPMs are rather large (nginx-pagespeed SRPMs weigh in at 110+ MB), so I exclude all files ending in .rpm.

Next, I’m excluding most of the stuff in the plugins folder. My reasoning is that the plugins themselves are downloadable. However, Jenkins disables plugins, or pins them to the currently installed version, by creating empty files of the form <plugin>.jpi.disabled or <plugin>.jpi.pinned, and I want these settings to carry over between versions. Unfortunately, trying an include rule of + **/plugins/*.jpi.pinned caused everything else to be removed from the backup – I’m assuming that once an include rule is used, the default flips from including everything to excluding everything.

In any case, I ended up explicitly excluding the things I don’t care about, which means anything new that shows up in the plugins folder gets backed up by default – probably a good thing if something I actually need ever ends up in there.

I also exclude workspace because everything can be recreated by building from specific git commits if need be. The job information is logged in jobs/, so I can easily find past commits even though the workspace itself no longer exists.

Finally, I also exclude the Jenkins war folder. I believe this is an unpacked version of the .war file that gets installed to /usr/lib/jenkins; it seems to get created when Jenkins itself starts.

With just these 6 excludes, I’ve dropped the backup archive size down to <5 MB, which is a big win.

Normally I’d just take a live backup while Jenkins is running, but since I’m moving servers, I completely shut down Jenkins first before taking a final backup with duply jenkins full.
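Expressed as Ansible tasks, that final backup is only a couple of steps (a minimal sketch – the service name jenkins is an assumption based on the stock package):

    - name: Stop Jenkins before the final backup
      service:
        name: jenkins        # assumes the stock Jenkins package's service name
        state: stopped
    - name: Take a full backup with duply
      command: duply jenkins full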

Restoring

For the restore, I first installed Jenkins using a Jenkins playbook from Ansible Galaxy. It’s fairly barebones, but it works well – and I don’t need to spend time developing my own playbook. I also installed duply, and I manually installed an updated version of duplicity for CentOS 7 from koji to get the latest fixes.

Once I got duply set back up, I restored all the files to a new folder with duply jenkins restore /root/jenkins. I restored it to a separate folder because duply appears to remove the entire destination folder if it exists, and I wanted to merge the two folders.

After the restore was complete, I ran rsync -rtl --remove-source-files /root/jenkins/ /var/lib/jenkins to merge the restored data into the newly installed Jenkins instance.

At this point, everything should have worked, except that I was unable to log in. After spending some time fruitlessly searching Google, I ran chown -R jenkins:jenkins /var/lib/jenkins, as the rsync didn’t preserve the file owner when it created the new files. Luckily enough, that fixed the problem, and I could now log in.

I then spent a few hours working all this into an Ansible playbook so future moves are much easier.
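The restore half boils down to something like this (a simplified sketch of the steps above, not the full playbook):

    - name: Restore the backup into a staging folder
      command: duply jenkins restore /root/jenkins
    - name: Merge the restored data into the new Jenkins home
      command: rsync -rtl --remove-source-files /root/jenkins/ /var/lib/jenkins
    - name: Fix ownership of the restored files
      file:
        path: /var/lib/jenkins
        owner: jenkins
        group: jenkins
        recurse: yes
        state: directory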
