Handling Launchpad branches via tarmac running on Canonistack
Using tarmac to handle the landing of Launchpad branches is quite easy and requires only a minimal amount of configuration. Getting it to run on a Canonical OpenStack (a.k.a. canonistack) instance is also straightforward, thanks to the work of James Westby.
Quick-start: Why didn't my branch land?
If you are here just because you want to investigate why your branch didn't land, do the following:
First:
- Ensure the merge proposal has one Approve vote and the whole thing is marked Approved.
- Ensure there is a commit message.
If those are OK, then:
- ssh to the instance (see "SSH access to the instance" below).
- Look at the log file at /home/tarmac/.config/tarmac/tarmac.log.
Tarmac configuration
Configuring tarmac is pretty straightforward. The user it will run under (tarmac in our case) creates a file ~/.config/tarmac/tarmac.conf that references the project or branch to be monitored in Launchpad and the various options for landing. Here is an example for the lpsetup project:
```
[Tarmac]
log_file = /var/log/tarmac.log

[lp:lpsetup]
tree_dir = /home/tarmac/repos/lpsetup/trunk
commit_message_template = [r=<approved_by_nicks>][bug=<bugs_fixed>] <commit_message> (<author_nick>)
verify_command = find . -name "*.py" | grep -v distribute_setup.py | xargs pocketlint && nosetests
rejected_branch_status = Work in progress
voting_criteria = Approve >= 1, Disapprove == 0
```
The first stanza [Tarmac] sets up global options for the running of the tarmac program. In this case we just specify where we want the log file to go.
The name in subsequent stanzas is the URL for the branch to be monitored. In our case [lp:lpsetup], we can use the shorthand to specify the development focus branch for the project. You can use any reference understood by bzr. Inside the stanza you simply specify the different options you want. The tarmac introduction document lists most of the options available. Tarmac also has a plugin framework that allows you to extend its functionality.
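For example, monitoring an additional branch is just another stanza alongside the first; the branch URL and option values below are hypothetical:

```
[lp:~yellow/lpsetup/packaging]
tree_dir = /home/tarmac/repos/lpsetup/packaging
voting_criteria = Approve >= 2, Disapprove == 0
```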
To run tarmac you'd use `tarmac merge --debug` until you're happy with your configuration. Look out for the following:

- tree_dir will be created by tarmac, and it is happiest if it is allowed to create the leaves. In the example given, I created /var/cache/tarmac and used chown to give ownership to the user running tarmac. It then created the rest of the path and did the initial checkout.
- Running tarmac in a cron job with my personal account caused problems as I have create_signatures = always as part of my Bazaar configuration. Even when I set up the cron script to know about my GPG_AGENT_INFO settings I still got errors. It's best to run as a dedicated user and turn off signatures for Bazaar.
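Concretely, the dedicated user's Bazaar configuration would disable signing. This is a sketch of the relevant stanza (create_signatures also accepts always and when-required, the latter being the default):

```
# ~/.bazaar/bazaar.conf for the tarmac user (sketch)
[DEFAULT]
create_signatures = never
```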
Fabric + Puppet + Canonistack + Tarmac
James Westby put together some really nice puppet scripts for handling most of the tarmac configuration and getting it up and running on a canonistack instance. His sanitized branch is at lp:~james-w/+junk/tarmac-puppet and the README is fantastic.
I've created a customized branch for yellow's initial needs and put it in a private branch at lp:~yellow/lp-tarmac-configs/tarmac-puppet. Most of the customization is in tarmac.pp, though I did make minor changes to fabfile.py to specify a precise instance due to some problems with the ssh-import-id command on lucid.
Note this branch contains the launchpadlib credentials and the private SSH keys for the lpqabot account, which is why it is private.
Deployment strategy
Currently there are two moving parts: fabric is used to spin up a canonistack instance, and puppet is then used to deploy tarmac onto it.
Prerequisites
Canonistack setup
To start up a canonistack instance you'll need to follow the directions at https://wiki.canonical.com/InformationInfrastructure/IS/CanonicalOpenstack. You can stop after the "Create a key pair" step.
```
source ~/.canonistack/novarc  # not required if you use the startup and shutdown scripts
```
SSH access to the instance
The tarmac instance is available via ssh tarmac@lptarmac.no-ip.org. You'll need to change your ~/.ssh/config as follows:
```
Host 10.55.60.* lptarmac.no-ip.org
    User ubuntu
    IdentityFile ~/.canonistack/<your-user-name>.key
    ProxyCommand ssh <your-user-name>@chinstrap.canonical.com nc -q0 %h %p
```
Installing fabric to launch the instance
You'll also need to install fabric on your local machine as it is used to create the canonistack instance and deploy puppet:
```
sudo apt-get update
sudo apt-get install fabric
```
Running it
To start a canonistack instance running tarmac from scratch you'd do the following:
```
bzr branch lp:~yellow/lp-tarmac-configs/tarmac-puppet
cd tarmac-puppet
fab create    # or ./startmeup
```
And you'll see something like:
```
/home/bac/tarmac/tarmac-puppet> fab create
[localhost] local: euca-run-instances -k $NOVA_USERNAME -t m1.small ami-000000be --user-data-file cloud-init.sh
Waiting for the i-00003262 instance to start
[localhost] local: euca-describe-instances i-00003262
[localhost] local: euca-describe-instances i-00003262
Waiting 30s to be really sure the connection can be established
Copying files to the instance
[localhost] local: scp -o StrictHostKeyChecking=no * 10.55.60.129:
Run puppet on the instance
[localhost] local: ssh -o StrictHostKeyChecking=no 10.55.60.129 sudo puppet apply tarmac.pp --templatedir=.
[localhost] local: ssh -o StrictHostKeyChecking=no 10.55.60.129 sudo puppet apply tarmac.pp --templatedir=.
[localhost] local: ssh -o StrictHostKeyChecking=no 10.55.60.129 sudo puppet apply tarmac.pp --templatedir=.
[localhost] local: ssh -o StrictHostKeyChecking=no 10.55.60.129 sudo puppet apply tarmac.pp --templatedir=.
[localhost] local: ssh -o StrictHostKeyChecking=no 10.55.60.129 sudo puppet apply tarmac.pp --templatedir=.
Tarmac deployed
```
At this point, you should be able to ssh in as tarmac:
```
% ssh tarmac@10.55.60.129
```
or
```
% ssh tarmac@lptarmac.no-ip.org
```
The log file will be in /home/tarmac/.config/tarmac/tarmac.log. The cron job is set to run every minute and debug logging is turned on.
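Based on that description, the tarmac user's crontab entry looks roughly like this (a sketch; the exact entry on the instance may differ, e.g. in how output is redirected):

```
# run tarmac every minute with debug logging (sketch)
* * * * * tarmac merge --debug
```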
Everyone in the yellow squad has ssh access as the tarmac user.
If fab create does not successfully deploy tarmac within a minute or so, kill it and ssh to the instance as the ubuntu user (not the tarmac user). You can then run puppet by hand, mimicking the command from the puppet deploy step above, in the hope of getting some useful failure messages:
```
% sudo puppet apply tarmac.pp --templatedir=.
```
Sadly, when puppet apply fails, fabric doesn't seem to notice and just retries.
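One possible mitigation (a sketch, not tested against this fabfile): plain `puppet apply` exits 0 even when resources fail, which is likely why fabric cannot tell failure from success. With `--detailed-exitcodes`, puppet instead exits 2 for "success, changes applied" and 4 or 6 for failures, so a small helper can translate the exit code for the caller:

```shell
# Sketch: interpret the exit code of `puppet apply --detailed-exitcodes`.
puppet_ok() {
    # $1: exit code from `puppet apply --detailed-exitcodes`
    case "$1" in
        0|2) return 0 ;;  # clean run, or run that applied changes
        *)   return 1 ;;  # 1, 4, or 6 indicate an error
    esac
}

# The deploy step could then become (invocation copied from the fab
# transcript above, with --detailed-exitcodes added):
#   ssh -o StrictHostKeyChecking=no 10.55.60.129 \
#       sudo puppet apply tarmac.pp --templatedir=. --detailed-exitcodes
#   puppet_ok $? || { echo "puppet apply failed" >&2; exit 1; }
```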
Shutting down the instance
```
% ./shutmedown
```
Detriments to this approach
- If additional testing is required beyond what can be done via the verify_command option, such as running Jenkins, then canonistack may be underpowered and running in the Canonical QA Lab might make more sense.
- Rumor has it canonistack instances have been known to just disappear. This rumor needs to be debunked or substantiated and understood.
- It doesn't use juju.
Further work
- See about getting a fixed IP address for the canonistack instance, or come up with a way of communicating the IP address to the team. Solved by using ddclient to update lptarmac.no-ip.org.
- Figure out how to share management of the instance so that other team members can terminate and restart it as needed.
- Ownership of this process and the customized branch needs to be transferred to the larger Launchpad team when ready. This page should move away from the yellow namespace.
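For reference, the ddclient setup mentioned in the list above can be sketched roughly as follows. This is an assumption about the instance's configuration, not a copy of it; option names vary by ddclient version and the credentials are of course elided:

```
# /etc/ddclient.conf on the instance (sketch)
protocol=noip
use=if, if=eth0
login=<no-ip-account>
password=<no-ip-password>
lptarmac.no-ip.org
```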