= Build Farm =

== Topics ==

 * [[BuildFarm/TryOutBuildSlave|TryOutBuildSlave]] - running and controlling a build slave locally.
 * [[BuildFarm/UpdatingCode|Updating code]]

== Effective QA ==

When making changes to '''any''' aspect of build farm code we '''must''' test it on the dogfood or staging build farm before releasing to production. The code can be any of:

 * The slave code (currently in lib/canonical/buildd)
 * The manager code (lib/lp/buildmaster/manager.py)
 * Job-specific code that lives in the app directories, such as lib/lp/soyuz/model/binarypackagebuild.py

In addition, because any change to the publisher can affect builds (builders need to access the archive), the buildd-manager must also be tested against an archive that was re-written by new code that changes the way publishing is done.

There are many strategies we can take to ensure that a change is tested to a good level of confidence.

=== Stress testing ===

Create a large batch of jobs, queue them up and let them run. The easiest way to create Soyuz jobs is to copy packages without binaries between PPAs, which makes a build request; a launchpadlib sketch of this route is at the end of this section. You could also grab a set of sources using apt-get source, upload them to dogfood and run process-upload.py. This is worth doing in addition to the PPA route because it also exercises the code that handles non-virtual builds.

See also the script at http://pastebin.ubuntu.com/264863/, which uses the SoyuzTestPublisher to create fake packages. The script is written to run on a local instance, but can be adapted to run on dogfood or staging, where real data exists.

Observe the results of the jobs.

=== Negative testing ===

Create jobs that are known to fail in some way. The type of failure mostly depends on what sort of code change you are making, but consider:

 * Different job states (depfail, failed to build, etc.).
 * Any change to the manager/slave protocol should be exercised very carefully; in particular, test what happens when None is passed in a field, since None cannot be serialised and makes the XML-RPC libraries throw a Fault exception (see the sketch at the end of this section).
 * Make a job that completes successfully on the slave but fails to upload, so that you can test upload processing.
 * Two build jobs for the same package in the same archive should result in the first being superseded without being built, provided the second is created before the first is dispatched.
 * Jobs that fail should not be re-dispatched automatically.
 * Create a job that would make the slave code blow up. Perhaps a bad source package recipe text?

=== Security testing ===

Attempt to subvert security. This can be as easy as URL hacking, but it depends on the change you're making.
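
=== Example sketches ===

The sketch below is illustrative only: it shows one way to drive the "copy without binaries between PPAs" route from launchpadlib. The team name, PPA names, package name and version are placeholders, and it assumes the copyPackage API method is exposed on the instance you are testing against; if it is not, the copy-packages page in the web UI does the same job.

{{{#!python
# Hypothetical stress-test helper: copy a source package without binaries
# between two PPAs on staging, which causes the target archive to request
# a fresh build.  All names below are placeholders.
from launchpadlib.launchpad import Launchpad

lp = Launchpad.login_with("buildfarm-qa", "staging")

team = lp.people["my-test-team"]
source_ppa = team.getPPAByName(name="source-ppa")
target_ppa = team.getPPAByName(name="target-ppa")

# Copying without binaries is what forces the new build request.
target_ppa.copyPackage(
    source_name="hello",
    version="2.10-1",
    from_archive=source_ppa,
    to_pocket="Release",
    include_binaries=False,
)
}}}

Repeating the copy for a list of different sources queues up a batch of jobs that the buildd-manager then has to dispatch, which is the load you want to observe.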
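The second sketch illustrates the None problem mentioned under negative testing, using nothing but the standard-library XML-RPC marshaller (xmlrpc.client here; xmlrpclib in a Python 2 tree). It is not Launchpad slave-protocol code, just a demonstration of why a None value in a field breaks the call; the method name is made up.

{{{#!python
# None cannot be marshalled over XML-RPC unless allow_none is explicitly
# enabled, and a server that has not enabled it answers with a Fault.
import xmlrpc.client

try:
    xmlrpc.client.dumps((None,), methodname="status")
except TypeError as exc:
    print("marshalling failed:", exc)

# With allow_none=True the value goes out as the non-standard <nil/>
# extension, which the other end must also be configured to accept.
print(xmlrpc.client.dumps((None,), methodname="status", allow_none=True))
}}}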