
Revision 5 as of 2011-10-14 11:02:10


QA on Soyuz

(See also the Effective QA section in BuildFarm; you must read it if you made changes to the publisher, because the builders need access to the build farm.)

There are three places you can do QA for Soyuz:

QA on qastaging

qastaging only runs the UI side of the Soyuz services, so you can only test code that runs in the webapp.

QA on staging

staging runs its own instance of the buildd-manager and has its own builder, clementine. This means that any builds that are queued will get processed and built. Builds can be created in two main ways: by copying existing packages in the web UI, or by uploading packages (which go through the uploader).

Since staging does not run the uploader, the latter is not possible; instead, copy a package in the UI to create a build job. Visit the Copy Packages link on any PPA page.
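The same copy can be scripted against the staging API instead of clicking through the UI. The following is a minimal sketch, assuming launchpadlib and the Archive.copyPackage webservice method are available on staging; the team, PPA, and package names are made-up placeholders, not real archives.

```python
# Sketch: trigger a build on staging by copying a source package
# between PPAs.  All names below are hypothetical placeholders.

def ppa_reference(owner_name, ppa_name):
    """Return the short 'ppa:' reference for an archive (pure helper)."""
    return "ppa:%s/%s" % (owner_name, ppa_name)

def copy_to_trigger_build(dest_archive, source_archive,
                          package="hello", version="2.10-1"):
    # Copying only the source (include_binaries=False) forces the
    # destination PPA to build it; buildd-manager on staging then
    # dispatches the resulting build job to the staging builder.
    dest_archive.copyPackage(
        source_name=package,
        version=version,
        from_archive=source_archive,
        to_pocket="Release",
        include_binaries=False,
    )

if __name__ == "__main__":
    # Imported here so the helpers above work without launchpadlib.
    from launchpadlib.launchpad import Launchpad

    # "staging" (and "qastaging") are service roots launchpadlib
    # recognises; credentials are cached under the given app name.
    lp = Launchpad.login_with("soyuz-qa-sketch", "staging")
    source = lp.people["some-team"].getPPAByName(name="source-ppa")
    dest = lp.me.getPPAByName(name="qa-test-ppa")
    copy_to_trigger_build(dest, source)
    print(ppa_reference("some-team", "source-ppa"))
```

Note that copies on staging behave like any other copy: the build appears in the destination PPA's build queue, but (as described below) the published binaries never leave limbo because the publisher does not run.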

Because the publisher also does not run on staging, the built binaries will remain in limbo forever (shown by the green cog icon).

QA on dogfood

Dogfood is the original QA platform for Soyuz, and some developers have shell access, which is necessary to run the scripts and work with production-like data. Since this machine is not open to all developers, anyone wishing to QA such changes will need to contact a member of the original Soyuz team to do the QA for them.

Dogfood has limited resources and is getting close to its disk space limits when restoring the production database. We generally restore only once every six months, after production initialises a new Ubuntu release, because restoring the database (~40 hours) is more effective than initialising a new series on dogfood. Dogfood is very, very slow.

Getting more QA facilities on [qa]staging

It should be easy to run the poppy daemon and the upload-processor and process-accepted cron jobs on staging. It should also be possible, when required, to ask a LOSA to run one of the archive-admin scripts manually with no issues.

The publisher is a very different matter. Because the staging database is restored every week, it gets out of sync with the published repositories. This can cause OOPSes in the scripts, some of which are false alarms but will still look odd to the uninitiated. At best the breakage is transient; at worst it blocks the publisher, and hence QA. There are also librarian files that have been expired in production but not in staging, which will likewise make the publisher fall over.

The PPA signing private keys are also unavailable outside of production. We remove the references from the dogfood DB to avoid publisher problems in that area, but if key handling needs to be tested, some manual work is required.

So, some more thought needs to be put into how best to handle publishing on staging.