= Juju Development Notes =

== Notes for getting started with Juju ==

 * Read the [[https://juju.ubuntu.com/docs/getting-started.html|Juju Getting Started Guide]].
 * The two lines in your existing {{{$HOME/.ec2/aws_id}}} are the {{{access-key}}} and {{{secret-key}}} for the {{{environments.yaml}}} file. (You only need them for an EC2 instance, and even then only if you don't have the EC2 environment variables set, as described in the Juju docs. Putting them in the yaml is clear and simple, though.)
 * If you want to use the PPA for juju (recommended at this time), make sure you set {{{juju-origin}}} to {{{ppa}}}.
 * Create a directory for your charms. Mine is {{{$HOME/juju}}} -- let's call it $JUJUDIR.
 * Get the buildbot-master charm from {{{lp:~yellow/launchpad/buildbot-master}}} and put it in $JUJUDIR.
 * Charms are specific to a series. Since we're deploying oneiric, all charms are expected to be found in an {{{oneiric}}} subdirectory. So create {{{$JUJUDIR/oneiric}}} and, from inside it, run {{{ln -s ../buildbot-master}}}.
 * The following commands should get you up and running:

{{{
cd $JUJUDIR
juju bootstrap -e local  # assuming you called your LXC configuration in environments.yaml 'local'
juju deploy --repository=. local:buildbot-master
watch juju status        # wait for the state to be 'started'
juju expose buildbot-master
}}}

You should then be able to go to http://<machine-address>:8010 and see the buildbot page for pyflakes. The IP address and port are available in the {{{juju status}}} output.

== Working with larger EC2 instances ==

Building Launchpad requires a machine with a decent amount of memory; otherwise the installation will fail when the kernel kills the build with an out-of-memory (OOM) error. While askubuntu [[http://askubuntu.com/questions/52021/how-do-i-adjust-the-instance-size-that-juju-uses|has some hints on getting Juju to use larger EC2 instances]], it can still be confusing. Here are some step-by-steps that hopefully clarify things.
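For reference, a minimal {{{environments.yaml}}} covering the local and EC2 setups described above might look like the sketch below. All names and key values are illustrative placeholders; {{{control-bucket}}} and {{{admin-secret}}} are our assumption of what an EC2 section additionally needs, so check the Juju docs before relying on them.

```yaml
environments:
  local:
    type: local               # LXC-backed environment, named 'local' as assumed above
    data-dir: /home/you/juju-data
    juju-origin: ppa          # use the PPA, as recommended
    default-series: oneiric
  ec2:
    type: ec2
    access-key: YOUR-ACCESS-KEY    # first line of $HOME/.ec2/aws_id
    secret-key: YOUR-SECRET-KEY    # second line of $HOME/.ec2/aws_id
    juju-origin: ppa
    control-bucket: some-unique-bucket-name   # assumed EC2-only key
    admin-secret: some-secret-phrase          # assumed EC2-only key
    default-series: oneiric
```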
=== General notes ===

You currently must set your machine size generically for the entire cluster. You cannot specify that certain services should get certain machine sizes (and it doesn't ''appear'' that this made it in for 12.04, but we could be wrong). See http://askubuntu.com/questions/101005/how-to-specify-ec2-instance-types-for-different-services-in-a-charm

=== Oneiric ===

In your ~/.juju/environments.yaml, in the environment section that you want to modify (presumably one with a "type: ec2" entry), add these lines:

{{{
default-instance-type: m1.large
default-image-id: ami-6fa27506
}}}

=== Precise ===

Find the image you want to use, from http://uec-images.ubuntu.com/precise/current/ or http://uec-images.ubuntu.com/query/precise/server/daily.txt if Precise has not yet shipped. Once it has shipped, replace "current" with "released" in those URLs. You are looking for an amd64 image in your geographic area that is ''not'' "hvm". Juju seems to use "ebs" (elastic block store) instances by default, but as far as we can tell, using "instance-store" images makes more sense for how Juju uses these machines at this time.

In your ~/.juju/environments.yaml, in the environment section that you want to modify (presumably one with a "type: ec2" entry), add these lines:

{{{
default-instance-type: m1.large
default-image-id: AMI YOU FOUND GOES HERE
}}}

== Notes for hacking on Juju ==

 * Juju uses `trial`, the Twisted test runner. Its semantics are different from Zope's testrunner. To run a single test you must provide the full (pythonic) path to that test:

{{{
$ ./test the.path.to.the.TestCase.and_test
}}}

 * There are some tests which will always fail if you don't have your EC2 credentials in your environment. There's debate about whether these tests should be made not to depend on the environment variables.
 * The same tests look like they try to connect to EC2, but apparently they don't. There's debate about whether they should or not.
 * Because juju is all Twisted, the tests run inside a reactor.
This means that if you're debugging, the reactor will swallow a CTRL-C and just report an error for every subsequent test. Upshot: '''test narrowly when debugging'''.

== Notes for working with Juju ==

 * Juju is broken with Precise and vmware when trying to use a local environment. Bug:920454. Oneiric/vmware/local works, as does Precise/metal/local.

== Notes for working with Juju Charms ==

 * You can only pass config options to `juju set` as strings. If you want to upload an entire config file, you need to do something like:

{{{
$ juju set my-service option=$(uuencode /path/to/config/file)
}}}

 * juju debug-log is broken (well, juju-log on the units is broken, but the upshot is that unit log messages won't show up when you run juju debug-log on your local machine). Brad has a workaround:

{{{
(On the client)
root@ip-10-116-73-15:/# grep unit.hook.api /var/lib/juju/units/$UNITNAME/charm.log
2012-01-17 20:51:17,315: unit.hook.api@INFO: --> install
2012-01-17 20:51:20,588: unit.hook.api@INFO: Creating master in /tmp/buildbot
2012-01-17 20:51:23,539: unit.hook.api@INFO: <-- install
2012-01-17 20:51:24,883: unit.hook.api@INFO: --> config-changed
2012-01-17 20:51:25,561: unit.hook.api@INFO: Updating buildbot configuration.
2012-01-17 20:51:26,811: unit.hook.api@INFO: Config decoded and written.
2012-01-17 20:51:27,406: unit.hook.api@INFO: <-- config-changed
2012-01-17 20:51:28,793: unit.hook.api@INFO: --> start
2012-01-17 20:51:30,613: unit.hook.api@DEBUG: opened 8010/tcp
2012-01-17 20:51:31,258: unit.hook.api@INFO: <-- start
}}}

 On LXC the log can be found on the client at {{{/var/log/juju/unit-$UNITNAME.log}}}.
 * The order of hooks on startup is as follows:
   * install
   * config-changed
   * start
 Upon setting a relation, the hooks are called in this order:
   * -relation-joined
   * -relation-changed
 Subsequent changes (via the partner calling {{{relation-set}}}) result in {{{-relation-changed}}} being called once.
As a result, config-changed hooks shouldn't assume that a service has been started; they should check before restarting it.
 * On upgrading from Oneiric to Precise, there is a problem in which /etc/alternatives/java does not get updated. See Bug:887077. The work-around stated by pitti in that bug did not work for me. I had to manually change the link like so:

{{{
/home/bac> ls -l /etc/alternatives/java
lrwxrwxrwx 1 root root 45 Jan 23 13:46 /etc/alternatives/java -> /usr/lib/jvm/java-6-openjdk-i386/jre/bin/java*
}}}

 * The ''name'' specified in metadata.yaml must match the charm's directory name, or juju reports that the charm cannot be found.

== Notes for creating Juju Charms ==

=== Getting reviews of charms ===

If you want to write an internal-only charm, talk with the #juju folks first (m_3 told Gary 2012-01-27 to ping them in this case).

We want to become good writers and reviewers of charms. The Juju team has said they are willing to help us with this. To ask for reviews, do the following:

 * File a bug in https://bugs.launchpad.net/charms for your work. Give it a "new-charm" tag.
 * Make sure your charm branch is in the lp:charms project.
 * Attach the charm branch to the bug.
 * Set the status to Fix Committed.
 * Get one of us to do a review.
 * Wait for one of the people in http://launchpad.net/~charmers to do an official review. If nothing seems to happen, or you are in a rush, try pinging marcoceppi, nijaba, SpamapS, or m_3 in #juju (freenode) for a review.

== Debugging notes ==

=== LXC ===

You can see the Juju environment inside the lxc container by cat-ing /proc/PID/environ on the host, where PID is the PID of the machine agent ({{{ps aux | grep machine}}} should show it). The machine agent configures the containers with a base juju environment.

Look in your ~/.juju/environments.yaml for the data-dir configured for your lxc environment, then look in that directory. It has lots of good logs for what is going on. $data-dir/$uid-$environmentname/units/master-customize.log has some good stuff in particular.
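The /proc inspection above can be sketched as follows. This is only a sketch: it dumps the current shell's own environment as a stand-in, since the machine agent's PID varies from host to host.

```shell
# Sketch: dump a process's environment via /proc.
# We use our own shell ($$) as a stand-in; substitute the machine
# agent's PID found via: ps aux | grep machine
pid=$$
# environ is NUL-separated; translate to one variable per line
tr '\0' '\n' < "/proc/${pid}/environ" | sort
```

On a real deployment you would expect to see the juju-specific variables (zookeeper address and friends) in this output.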
=== LXC + SSH fails ===

Juju packages built from r453 can prevent SSH from working with LXC. Looking at the end of master-customize.log shows that the user 'ubuntu' already exists and the setting up of .ssh is skipped. You can still ssh to the unit, but you'll be prompted for the ubuntu user's password (which is "ubuntu"). However, {{{debug-hooks}}} will not work.

=== Juju fails due to config-get returning null ===

Updates to the juju packages can sometimes lead to an inconsistent state within lxc where Juju loses its mind. The symptom is seeing a hook fail because calls to {{{config-get}}} return null when values should definitely be set. If this happens, ensure you have {{{juju-origin: ppa}}} in the local section of {{{environments.yaml}}}. If the problem persists, blowing away the lxc cache ({{{rm -rf /var/cache/lxc}}}) has worked. Make sure everything is shut down first. Your next use of lxc will be slow, as it has to repopulate the cache with downloaded packages.

== Feedback notes ==

If anyone wants to, we can collaborate here on feedback to give the juju devs.

 * Docs are not a panacea. "Fitting our brain" is a hard goal but a valuable one.
 * The one-way communication aspect of relation-set was surprising. Maybe include a command to look at what you've said, both as a utility and to show that there are separate buckets.
 * Should the config-changed hooks be used minimally? Should charm authors use the install hook primarily, with "deploy --config"?
 * Juju should allow service-destroyed hooks: [[https://bugs.launchpad.net/juju/+bug/932269|bug 932269]]
 * The optimization in EC2 of reusing machines is painful, especially without any kind of "destroy service" hook.
 * The upgrade-charm command seems like it needs to work even if the service is not active, so that you can bring up a new version (?)
 * Being able to do a relation-set within a config-changed hook is important (already known, as I understand it).
 * Doc: the terminal needs to be in UTF-8 (not the default on my Precise machine for some reason! ANSI_X3.4-1968), or else byobu will render the bottom line problematically.
 * Using environment variables for identifying the other end of a relation in relation-changed (https://juju.ubuntu.com/docs/charm.html#hook-environment) was surprising, especially given that you can use relation-get to get things like private-address.
 * Doc: recommend charms to study, such as mysql for multiple slaves.
 * Juju should have a way to share code across charms, like helpers. bzr does not provide an svn-externals equivalent. A build step? :-/
 * juju log as non-root (at least as buildbot) hangs. Not good default behavior.
 * The wrapper script functionality would appear to be includable in the base juju command.
 * It would be nice to be able to set the default environment via an environment variable. That way I don't have to keep editing .juju/environments.yaml ("default"), or using -e for every single freaking command, as I switch between lxc, ec2, and canonistack.
 * Juju should provide a way to reset a node without tearing down the environment (see above under the optimization in EC2, too). It would be nice to ensure that a node is bounced before a new charm is deployed on it.