Diff for "Running/LXC"


Differences between revisions 10 and 45 (spanning 35 versions)
Revision 10 as of 2011-03-09 18:35:54
Size: 2488
Editor: jameinel
Comment: Add link to RemoteAccess
Revision 45 as of 2011-07-06 01:45:08
Size: 5086
Editor: lifeless
Comment: document the u1 hack for rabbit and *why* it works.
Deletions are marked like this. Additions are marked like this.
Line 1: Line 1:
This page explains how to set up and run Launchpad (for development) inside a VM.
## page was copied from Running/VirtualMachine
This page explains how to set up and run Launchpad (for development) inside an LXC container.
Line 5: Line 6:
Launchpad development setup makes numerous changes to your machine; it's nice to be unaffected by those except when you are actually doing such development.
Launchpad development setup makes significant changes to your machine; it's nice to be unaffected by those except when you are actually doing such development.
Line 7: Line 8:
Also, launchpad has limitations on concurrent testing per-machine and so forth - multiple VMs can be used to work around this.
Also, launchpad has some limitations on concurrent testing per-machine and so forth - multiple containers can be used to work around this.
Line 9: Line 10:
= Make an LXC container =
Line 10: Line 12:
= Make a VM image =

1. Install KVM

{{{
% sudo apt-get install virt-manager
 1. Install lxc
 {{{
sudo apt-get install lxc
Line 18: Line 17:
 1. Download the Lucid server ISO
 1. Work around Bug:800456 and Bug:801002
 {{{
sudo apt-get install cgroup-bin libvirt-bin
}}}
Line 20: Line 22:
 1. Run virt-manager.
 1. Work around Bug:784093
 {{{
sudo dd of=/etc/cgconfig.conf << EOF
mount {
 cpu = /sys/fs/cgroup/cpu;
 cpuacct = /sys/fs/cgroup/cpu;
 devices = /sys/fs/cgroup/cpu;
 memory = /sys/fs/cgroup/cpu;
}
EOF
sudo service cgconfig restart
}}}
Line 22: Line 35:
 1. Double click on localhost(QEMU)
 1. Work around Bug:798476 (optional if you run i386, or have a -tonne- of memory and don't care about the 64-bit footprint).
    Grab the patch from the bug and apply it to /usr/lib/lxc/templates/lxc-lucid. If you're running i386 already or want a 64-bit lxc then do not pass arch= on the lxc-create command line.
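    For example, if you saved the attachment from the bug as {{{lxc-lucid.patch}}} (filename assumed):
 {{{
sudo patch /usr/lib/lxc/templates/lxc-lucid < lxc-lucid.patch
}}}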
Line 24: Line 38:
 1. click on the New virtual machine icon
 1. Create a config for your containers
 {{{
sudo dd of=/etc/lxc/local.conf << EOF
lxc.network.type=veth
lxc.network.link=virbr0
lxc.network.flags=up
#fuse (workaround for Bug:800886)
lxc.cgroup.devices.allow = c 10:229 rwm
# part of the Bug:798476 workaround -
# remove if you are running a 64 bit lxc or
# 32 bit on 32-bit base os
lxc.arch = i686
EOF
}}}
Line 26: Line 53:
 1. follow your nose here, using the ISO as the install media, and allocating no less than 2G of disk and 1G of memory. I suggest 4G if you can spare it.
 1. Create a container
 {{{
sudo arch=i386 lxc-create -n lucid-test-lp -t lucid -f /etc/lxc/local.conf
}}}
    If you want to use a proxy
 {{{
sudo arch=i386 http_proxy=http://host:port/ lxc-create -n lucid-test-lp -t lucid -f /etc/lxc/local.conf
}}}
    And if you want to set a custom mirror, similar to http_proxy, but set MIRROR= instead.
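    For example (mirror URL assumed; use whichever Ubuntu mirror you prefer):
 {{{
sudo arch=i386 MIRROR=http://archive.ubuntu.com/ubuntu lxc-create -n lucid-test-lp -t lucid -f /etc/lxc/local.conf
}}}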
Line 28: Line 63:
 1. After it's installed, connect to the image and install {{{acpid}}} and {{{openssh-server}}}
 1. (Outside the container) grab your user id and username so you can set up a bind mount and a matching user later:
 {{{
id -u
id -nu
}}}
Line 30: Line 69:
 1. Use ssh-copy-id to copy your public key into the VM.
 1. Start the container
 {{{
sudo lxc-start -n lucid-test-lp
}}}
    Ignore the warning about openssh crashing - it restarts on a later event.
    The initial credentials are root:root.
Line 32: Line 76:
 1. ssh -A <vm IP address> to connect to the VM.
 1. Grab the ip address (handed out via libvirt's dhcp server) - you may wish to ssh in rather than using the console (ssh seems to give a better termcap experience).
 {{{
ip addr show dev eth0 | grep 'inet'
}}}
Line 34: Line 81:
 1. {{{bzr whoami "Your Name <your.email@example.com>"}}} to set your bzr identity in the VM.
 1. The new container won't have your proxy / mirror settings preserved. Customise it at this point before going further if you care about this.
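    For example, to point apt inside the container at a proxy (host:port assumed; adjust to your setup):
 {{{
echo 'Acquire::http::Proxy "http://host:port/";' > /etc/apt/apt.conf.d/01proxy
}}}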
Line 36: Line 83:
 1. You can now follow the [[Getting|getting-started]] on LP instructions.
 1. Enable multiverse (rocketfuel-setup wants it, don't ask me why).
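    One way to do this (a sketch; mirror URL assumed - check the container's /etc/apt/sources.list first, as the lxc template may generate a minimal one):
 {{{
echo 'deb http://archive.ubuntu.com/ubuntu lucid multiverse' >> /etc/apt/sources.list
}}}
    The {{{apt-get update}}} in the next step will pick up the new component.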

 1. Install some additional packages we'll need to run rocketfuel-setup etc.
 {{{
apt-get install python-software-properties
apt-add-repository ppa:ubuntu-virt
apt-get update
apt-get install bzr less sudo lxcguest
# select I for 'install' when prompted about console.conf
}}}

 1. Inside the container, add your user (substituting the uid and username printed by the {{{id}}} commands above):
 {{{
adduser --uid $id $username
adduser $username sudo
}}}

 1. Change the /etc/hosts file - make sure that the local host name (e.g. lucid-test-lp) is the first item on its line. Something like:
{{{
127.0.0.1 localhost
127.0.1.1 lucid-test-lp
}}}
This is needed because rabbit looks up its hostname, resolves it to an address (which, if the hostname is on the 127.0.0.1 line, returns 127.0.0.1), and then resolves that address *back* to a hostname (and the 127.0.0.1 lookup returns localhost, not the hostname) - if they don't all match, it bails.
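A quick way to check the round-trip from inside the container (hostname assumed to be lucid-test-lp):
{{{
hostname                      # should print lucid-test-lp
getent hosts lucid-test-lp    # should resolve to 127.0.1.1, not 127.0.0.1
getent hosts 127.0.1.1        # should map back to lucid-test-lp
}}}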

 1. To stop it now run 'poweroff -n' (normally you would use lxc-stop, but see Bug:784093).

 1. Setup a bind mount so you can access your home dir (and thus your LP source code) from within the lxc container:
    * edit /var/lib/lxc/lucid-test-lp/fstab
    * Add a line:
 {{{
/home/$username /var/lib/lxc/lucid-test-lp/rootfs/home/$username none bind 0 0
}}}
 
 1. Start it up again - headless now, we have the ip address from before.
 {{{
sudo lxc-start -n lucid-test-lp -d
}}}

 1. ssh <container IP address> to connect to the container. Your ssh key is already present because of the bind mount to your home dir.

 1. You can now follow the [[Getting|getting-started]] on LP instructions. Be warned that changes in ~ will affect you outside the container. You will want to run rocketfuel-setup with --no-workspace if your home directory already has a work area. You may need to run utilities/launchpad-database-setup separately too.
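    For example (a sketch; paths assumed - see the getting-started docs for the canonical steps):
 {{{
./rocketfuel-setup --no-workspace
# if the database is not set up by the above, run this from your Launchpad tree:
utilities/launchpad-database-setup
}}}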

 1. You probably want to follow [[Running/RemoteAccess]], which discusses how you can configure things so your non-container browser can access web pages from within the container.

 1. rabbitmq may fail to start up. If that happens it appears to be a [[http://lists.rabbitmq.com/pipermail/rabbitmq-discuss/2010-April/007024.html|mnesia glitch]] best sorted by zapping mnesia.
{{{
sudo rm -rf /var/lib/rabbitmq/mnesia/rabbit/*
sudo service rabbitmq-server start
}}}
Line 40: Line 135:
See also this email thread about [[https://lists.launchpad.net/launchpad-dev/msg03456.html|running Launchpad in a virtual machine]], and this [[https://lists.launchpad.net/launchpad-dev/msg03454.html|discussion of the differences]] between running in a [[Running/Schroot|chroot]] environment and running a VM. [[Running/RemoteAccess]] has a discussion for how you can configure the VM to allow the host machine to access the web pages, etc.
Line 44: Line 138:
You can skip some manual steps of installing from an ISO using a command like this:

{{{
sudo ubuntu-vm-builder kvm lucid --domain vm --dest ~/vm/lp-dev \
 --hostname lp-dev \
 --mem 2048 --cpus 2 \
 --components main,universe,multiverse,restricted \
 --mirror http://10.113.3.35:3142/mirror.internode.on.net/pub/ubuntu/ubuntu \
 --libvirt qemu:///system \
 --debug -v \
 --ssh-user-key ~/.ssh/id_rsa.pub --ssh-key ~/.ssh/id_rsa.pub \
 --rootsize 24000 \
 --user $USER
}}}

After installation completes, it should show up in your virt-manager menu.

= In LXC =

It seems like it would be nice to run Launchpad in [[http://lxc.teegra.net/|LXC containers]]: they should be more efficient than a VM (especially with regard to memory and disk) but more isolated than a chroot. More testing or documentation is needed.
You can also run in a [[Running/Schroot|chroot]] environment or a [[Running/VirtualMachine|VM]].
