Running/LXC


Page history: revision 1 created 2010-05-24 23:55:17 by lifeless; revision 94 as of 2012-05-03 08:08:27, edited by wgrant ("Reword and prettify").

This page explains how to set up and run Launchpad (for development) inside an LXC container.

= Why? =

Launchpad development setup makes significant changes to your machine; it's nice to be unaffected by those when you're not doing such development. Also, multiple containers can be used to work around Launchpad's limitations regarding concurrent test runs on a single machine.

These instructions should work on Ubuntu 12.04 LTS and later. Older versions of LXC are significantly less reliable and polished, so if you've used a version of LXC older than 12.04 LTS's final release on your development machine, you'll want to remove `/var/cache/lxc` first to ensure that you don't have a broken cache.

= Make an LXC container =

 1. Install LXC's userspace tools.
 {{{
sudo apt-get install lxc
}}}

 1. Create a container. You might want to use an HTTP proxy or an alternate Ubuntu mirror; you can do this by specifying an `http_proxy` or `MIRROR` environment variable after `sudo`.
 {{{
sudo lxc-create -t ubuntu -n lpdev -- -r lucid -a i386 -b $USER
}}}

 1. Start the container. You'll probably see a few early warnings about boot processes dying -- they're normal and can be ignored as long as you end up at a login prompt.
 {{{
sudo lxc-start -n lpdev
}}}

 1. '''[Inside the container]''' Log in with your normal username and password. You'll have full sudo powers.

 1. '''[Inside the container]''' Get the container's IP address from the console. You'll generally want to `ssh` in for convenience and better termcap functionality, and the default Ubuntu LXC dnsmasq setup doesn't provide name resolution from the host.
 {{{
ip addr show dev eth0 | grep 'inet'
}}}

 1. '''[Inside the container]''' Install some additional packages needed before running `rocketfuel-setup` and the rest of the Launchpad setup process.
 {{{
# rocketfuel-setup assumes that bzr is already installed.
apt-get install bzr
# PostgreSQL will fail to create a cluster during installation if your
# locale is configured but not supported by the container, so install
# the relevant language pack (this example is only good for English).
# You need to do this if the prior apt commands spewed locale warnings.
#
# If you didn't do this before running rocketfuel-setup, you'll want
# to run `sudo pg_createcluster 8.4 main` afterwards to fix the damage.
apt-get install language-pack-en
}}}

 1. Shut down by running `poweroff` inside the container; you should eventually be dumped back out to your host system. If it looks like it's hanging, force it to stop with `sudo lxc-stop -n lpdev` on the host.

 1. Start it up again, headless this time (`-d`). The same IP address will be used, so you don't need console access.
 {{{
sudo lxc-start -n lpdev -d
}}}

 1. `ssh <IP address obtained above>` to connect to the container. If your SSH key is in your local `authorized_keys` file you shouldn't be prompted for a password, as your home directory (including public and private keys) is bind mounted into the container. `ssh -A` might give you a better agent experience when dealing with Launchpad code hosting.

 1. You can now follow the normal [[Getting|LP installation instructions]]. Be warned that changes in your home directory will also be seen outside the container and vice versa. If your home directory already has a Launchpad work area set up, you'll want to run `rocketfuel-setup --no-workspace` to avoid trying to recreate it, but all subsequent steps are still required.

 1. Follow [[Running/RemoteAccess]] to set up access from the host's applications to the container's Launchpad instance.
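The lifecycle in the steps above can be sketched as a small helper script. This is only an illustration: the container name `lpdev` matches the steps, but the function names (`start_headless`, `force_stop`, `extract_ipv4`) are made up here and not part of any standard tooling.

```shell
#!/bin/sh
# Sketch of the container lifecycle described in the steps above,
# assuming Ubuntu's lxc userspace tools are installed.

NAME=lpdev

# Start the container headless; the same IP address is reused,
# so no console access is needed.
start_headless() {
    sudo lxc-start -n "$NAME" -d
}

# Force-stop a hung container from the host.
force_stop() {
    sudo lxc-stop -n "$NAME"
}

# Pull the first IPv4 address out of `ip addr show dev eth0` output
# (read from stdin), so you know where to ssh.
extract_ipv4() {
    awk '/inet / { split($2, a, "/"); print a[1]; exit }'
}
```

For example, inside the container `ip addr show dev eth0 | extract_ipv4` would print just the address to pass to `ssh -A` from the host.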

= Problems =

== rabbitmq does not start up ==

rabbitmq may fail to start up. If that happens, it appears to be a [[http://lists.rabbitmq.com/pipermail/rabbitmq-discuss/2010-April/007024.html|mnesia glitch]] best sorted by zapping mnesia.
 {{{
sudo rm -rf /var/lib/rabbitmq/mnesia/rabbit/*
sudo service rabbitmq-server start
}}}

== database-developer-setup fails, and thinks you are on Postgres 8.2 ==

As noted above, if you have a localised (non-C) locale, you need to install your specific language pack. For instance, if your computer has a localised English locale, use this:
 {{{
apt-get install language-pack-en
}}}
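If you're unsure which pack matches your locale, the pack name is just `language-pack-` plus the language code at the front of `$LANG`. A throwaway helper to compute it (`lang_pack_for` is made up here for illustration, not a real tool):

```shell
#!/bin/sh
# Map a locale string such as en_GB.UTF-8 to its Ubuntu language pack
# name (language-pack-en). A plain C locale needs no language pack.
lang_pack_for() {
    printf 'language-pack-%s\n' "$(printf '%s' "$1" | cut -d_ -f1)"
}

# e.g. apt-get install "$(lang_pack_for "$LANG")"
```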

== lxc-start hangs ==

[[http://paste.ubuntu.com/772517/|The symptom looks like this]]. It hangs after that.

No fix or workaround identified yet, other than making a new lxc container.

To debug, try '''{{{lxc-start -n $containername -l debug -o output}}}''' and look at the `output` log file.

== DNS fails inside the container ==

After restarting in daemon mode and logging in as a regular user, DNS was not working. Ensure there is a nameserver in the container's `/etc/resolv.conf`, which is created at startup by resolvconf. Stopping and starting the container solved the problem.
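A quick way to check before blaming anything else is to look for a `nameserver` line. This is a sketch; the `has_nameserver` helper is made up for illustration and is not part of resolvconf.

```shell
#!/bin/sh
# Check whether a resolv.conf-style file configures any nameserver.
has_nameserver() {
    grep -q '^nameserver[[:space:]]' "$1"
}

# Inside the container you'd run: has_nameserver /etc/resolv.conf
# If it fails, stop and start the container from the host:
#   sudo lxc-stop -n lpdev && sudo lxc-start -n lpdev -d
```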

== Random flakiness ==

Using lxc via juju, I ran into all sorts of problems with DNS, version mismatches, etc. Since it was via juju, I wasn't able to muck around with `/etc/resolv.conf` (the damage was done before I got the chance to ssh to the guest). I found that {{{sudo rm -rf /var/cache/lxc}}} solved the problem. It is rather brutal, but it worked. Of course, the next run took a long time, as all of the previously cached material had to be refetched.

== Other problems ==

If other lxc users don't have an idea (known lxc users as of this writing include lifeless, wgrant, frankban, and gary_poster), try asking hallyn or SpamapS in #ubuntu-server on freenode.

= Alternatively =

You can also run in a [[Running/Schroot|chroot]] environment or a [[Running/VirtualMachine|VM]].