Attention: This guide is outdated and needs a revision - please reach out to the Launchpad team if you need support.

You're going to run Soyuz in a branch you create for the purpose. To get the whole experience, you'll also be installing the builder-side launchpad-buildd package on your system.

Initial setup

  • Run utilities/start-dev-soyuz.sh to ensure that some Soyuz-related services are running. Some of these may already be running, in which case you'll get some failures that are probably harmless. Note: these services eat lots of memory.

  • Once you've set up your test database, run utilities/soyuz-sampledata-setup.py -e you@example.com (where you@example.com should be an email address you own and have a GPG key for). This prepares more suitable sample data in the launchpad_dev database, including recent Ubuntu series. If you get a "duplicate key" error, make schema and run again.

  • make run (or if you also want to use codehosting, make run_codehosting—some services may fail to start up because you already started them, but it shouldn't be a problem).

  • Open https://launchpad.test/~ppa-user/+archive/test-ppa in a browser to get to your pre-made testing PPA. Log in with your own email address and password test. This user has your GPG key associated, has signed the Ubuntu Code of Conduct, and is a member of ubuntu-team (conferring upload rights to the primary archive).

Extra PPA dependencies

The testing PPA has an external dependency on Lucid. If that's not enough, or not what you want:

  • Open https://launchpad.test/~ppa-user/+archive/test-ppa/+admin and set external dependencies of the form:

deb http://archive.ubuntu.com/ubuntu %(series)s main restricted universe multiverse
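The %(series)s token in an external-dependency line is a Python %-style placeholder: Launchpad substitutes the distroseries name when generating the sources.list used in the build chroot. A minimal sketch of that expansion (the series name "lucid" here is just an example):

```python
# Illustration of how the %(series)s placeholder expands; Launchpad
# performs an equivalent per-series substitution when writing the
# build chroot's sources.list.
template = "deb http://archive.ubuntu.com/ubuntu %(series)s main restricted universe multiverse"

line = template % {"series": "lucid"}
print(line)
```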

Set up a builder

Set up for development

If you intend to do any development on launchpad-buildd or similar, you may want HowToDevelopWithBuildd.

Installation

  • Create a new focal virtual machine with KVM (recommended), or alternatively a focal lxc container. If using lxc, set lxc.aa_profile = unconfined in /var/lib/lxc/container-name/config, which is required to disable AppArmor confinement.

If you are running Launchpad in a container, you will more than likely want your VM's network bridged on lxcbr0.

In your builder vm/lxc:

  • sudo apt-add-repository ppa:launchpad/buildd-staging

  • sudo apt-get update

  • sudo apt-get install launchpad-buildd bzr-builder quilt binfmt-support qemu-user-static

Alternatively, launchpad-buildd can be built from lp:launchpad-buildd with dpkg-buildpackage -b.

  • Edit /etc/launchpad-buildd/default and make sure ntphost points to an existing NTP server. You can check the NTP server pool to find one near you.

To have the builder run by default, first make sure that other hosts on the Internet cannot send requests to it! Then:

  • echo RUN_NETWORK_REQUESTS_AS_ROOT=yes > /etc/default/launchpad-buildd
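Before registering the builder with Launchpad, it can be worth checking from the host that the builder's XML-RPC port (8221) is reachable. A small sketch, assuming only that the builder listens on TCP port 8221; the helper name is ours:

```python
import socket

def builder_reachable(host, port=8221, timeout=5.0):
    """Return True if a TCP connection to the builder port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: builder_reachable("YOUR-BUILDER-IP")
```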

Launchpad Configuration

From your host system:

  • Get an Ubuntu buildd chroot from Launchpad, using manage-chroot from lp:ubuntu-archive-tools:

  • manage-chroot -s precise -a i386 get

  • LP_DISABLE_SSL_CERTIFICATE_VALIDATION=1 manage-chroot -l dev -s precise -a i386 -f chroot-ubuntu-precise-i386.tar.bz2 set

  • Register a new builder at https://launchpad.test/builders/+new with its URL pointing to http://YOUR-BUILDER-IP:8221/

Shortly thereafter, the new builder should report a successful status of 'idle'.

If you want to test just the builder without a Launchpad instance, then, instead of using manage-chroot -l dev set, you can copy the chroot tarball to /home/buildd/filecache-default/; the base name of the file should be its sha1sum. You'll need to copy any other needed files (e.g. source packages) into the cache in the same way. You can then send XML-RPC instructions to the builder as below.

Drive builder through rpc

With librarian running, fire up a python3 shell and:

from xmlrpc.client import ServerProxy

proxy = ServerProxy('http://localhost:8221/rpc')
# Make sure the chroot tarball is in the builder's file cache
# (arguments: sha1sum, URL to fetch it from, username, password).
proxy.ensurepresent('d267a7b39544795f0e98d00c3cf7862045311464', 'http://launchpad.test:58080/93/chroot-ubuntu-lucid-i386.tar.bz2', '', '')
# Dispatch a build: build id, build type, chroot sha1sum, a map of
# extra files, and build arguments.
proxy.build('1-1', 'translation-templates', 'd267a7b39544795f0e98d00c3cf7862045311464', {},
  {'archives': ['deb http://archive.ubuntu.com/ubuntu/ lucid main'], 'branch_url': '/home/buildd/gimp-2.6.8'})
# Poll the builder's current state.
proxy.status()
proxy.clean()  # Clean up if it failed

You may have to calculate a new sha1sum of the chroot file.
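Rather than calling proxy.status() by hand, you can poll until the builder reaches a state you care about. A generic sketch; the exact shape of status()'s return value depends on the launchpad-buildd version, so the "done" predicate is left to the caller:

```python
import time

def wait_for(proxy, done, interval=5.0, timeout=600.0):
    """Poll proxy.status() until done(status) is true or timeout expires.

    Returns the last status seen; raises TimeoutError on timeout.
    """
    deadline = time.monotonic() + timeout
    while True:
        status = proxy.status()
        if done(status):
            return status
        if time.monotonic() >= deadline:
            raise TimeoutError("builder still busy: %r" % (status,))
        time.sleep(interval)

# Example: wait_for(proxy, lambda s: "IDLE" in repr(s))
```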

Upload a source to the PPA

  • Run scripts/process-upload.py /var/tmp/txpkgupload (this creates the directory hierarchy)

  • Add to ~/.dput.cf:

[lpdev]
fqdn = ppa.launchpad.test:2121
method = ftp
incoming = %(lpdev)s
login = anonymous
  • Find a source package some_source with a changes file some_source.changes

  • dput -u lpdev:~ppa-user/test-ppa/ubuntu some_source.changes

  • scripts/process-upload.py /var/tmp/txpkgupload -C absolutely-anything -vvv # Accept the source upload.

  • If this is your first time running soyuz locally, you'll also need to publish ubuntu: scripts/publish-distro.py -C

  • Within five seconds of upload acceptance, the buildd should start building. Wait until it is complete (the build page will say "Uploading build").
  • scripts/process-upload.py -vvv --builds -C buildd /var/tmp/builddmaster # Process the build upload.

  • scripts/process-accepted.py -vv --ppa ubuntu # Create publishings for the binaries.

  • scripts/publish-distro.py -vv --ppa # Publish the source and binaries.

    • Note that private archive builds will not be dispatched until their source is published.
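The accept/build/publish cycle above always runs the scripts in the same order. A sketch that merely assembles the command lines for the PPA case, copied from the steps above (it does not execute them; run each with subprocess from the Launchpad tree if you want to automate the cycle):

```python
def ppa_publish_cycle(upload_queue="/var/tmp/txpkgupload",
                      build_queue="/var/tmp/builddmaster"):
    """Return, in order, the script invocations that accept a PPA source
    upload, process the resulting build upload, and publish everything."""
    return [
        # Accept the source upload.
        ["scripts/process-upload.py", upload_queue, "-C", "absolutely-anything", "-vvv"],
        # Process the build upload once the builder finishes.
        ["scripts/process-upload.py", "-vvv", "--builds", "-C", "buildd", build_queue],
        # Create publishings for the binaries.
        ["scripts/process-accepted.py", "-vv", "--ppa", "ubuntu"],
        # Publish the source and binaries.
        ["scripts/publish-distro.py", "-vv", "--ppa"],
    ]
```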

Build an OCI image

  • Using the Launchpad web interface, create a new OCI project and a recipe for it.
  • On the OCI Recipe page, click on "Request builds", and select which architectures should be built on the following screen.
  • Once you have requested a build, you should run ./cronscripts/process-job-source.py -v IOCIRecipeRequestBuildsJobSource to create builds for that build request.

  • If you have builders idle, this should start the build. Make sure you have run utilities/start-dev-soyuz.sh, and check builder status on the /builders page.

  • Once the build finishes, run ./scripts/process-upload.py -M --builds /var/tmp/builddmaster/ on Launchpad to make it collect the built layers and manifests.

  • At this point, each build page should list the built files.
  • You can upload the built image to a registry by running ./cronscripts/process-job-source.py -v IOCIRegistryUploadJobSource in Launchpad. You can manage push rules on the OCI recipe's page by clicking the "Edit push rules" button.

Dealing with the primary archive

  • dput lpdev:ubuntu some_source.changes

  • scripts/process-upload.py -vvv /var/tmp/txpkgupload

  • Watch the output -- the upload might end up in NEW.
    • If it does, go to the queue and accept it.
  • Your builder should now be busy. Once it finishes, the binaries might go into NEW. Accept them if required.
  • scripts/process-accepted.py -vv ubuntu

  • scripts/publish-distro.py -vv

    • The first time, add -C to ensure a full publication of the archive.

Soyuz/HowToUseSoyuzLocally (last edited 2022-12-10 08:09:22 by jugmac00)