= Results Tracker =

''A repository to submit test results for tracking purposes.''

'''As a''' member of the community<<BR>>
'''I want''' to submit test results from my computer<<BR>>
'''so that''' the platform can know what works and doesn't work.

'''As a''' member of the platform team<<BR>>
'''I want''' to know what hardware passed and failed testing<<BR>>
'''so that''' more hardware can be enabled for the community.

'''As a''' member of a team working with hardware vendors<<BR>>
'''I want''' to submit test results to a private repository<<BR>>
'''so that''' all members in the team can fix problems together.
''Consider clarifying the feature by describing what it is not?''

 * The Results Tracker does not run tests; it is only a repository of results from tests run elsewhere.

== Rationale ==

''Why are we doing this now?''

Many teams have expressed the need for a repository of test results for reporting purposes. These teams have either not provided any reports, created their own custom database, or plan to create their own database. This is resulting in a fragmentation of the data which will only continue to grow unless a solution is provided:

 * The community has been submitting test results to Launchpad since Hoary, with the original hwdb client later replaced by Checkbox, but this information has never been used.
 * The QA community has created their own website, [[http://iso.qa.ubuntu.com/|ISO Testing Tracker]], to track test results from testing ISO images.
 * The Hardware Certification team has created their own website, [[https://certification.canonical.com/|Hardware Certification]], to track and manage test results from hardware vendors.
 * The OEM team has created their own internal database, developed by Chris Wayne.
 * The Kernel team has also created its own database using CouchDB, and Brad Figg has created a web interface for reporting purposes.
 * The ARM and Hardware Enablement teams might have to create their own database and website as well unless a solution is provided to support their use cases.
 * The DX team needs to crowd-source new software for the desktop by extending Checkbox to attach additional information, but this information cannot be retrieved from Launchpad without iterating over all submissions.

''What value does this give our users? Which users?''

 * For the community of users, this will finally enable them to reap the rewards of running tests on their computers by making this information available to Ubuntu and upstream developers.
 * For the community of developers, this will provide feedback from the community to determine which hardware is compatible and which packages work.
 * For the Hardware Certification, Kernel and OEM teams, this will provide a central repository where test results from the community and hardware vendors can be compared against each other.
 * For the ARM and Hardware Enablement teams, this will provide a generic repository where test results can be shared between members of the team.

== Stakeholders ==

''Who really cares about this feature? When did you last talk to them?''

In general, anyone who happens to be interested in the results from testing the compatibility of hardware and the working of packages.
In particular, the following people have expressed interest during the Maverick UDS and have attended at least one session about enhancing Launchpad with test results:

 * [[https://wiki.ubuntu.com/Specs/M/ARMValidationDashboard|ARM team]]: Tobin Davis
 * Community: James Tatum, Paolo Sammicheli and Stéphane Graber
 * DX team: David Barth and Kalle Valo
 * [[LEP/ResultsTrackerHardwareCertification|Hardware Certification]]: Marc Tardif and Ronald Mc''''''Collam
 * Hardware Enablement: Hugh Blemings and Jeremy Kerr
 * ISD team: Julien Funk
 * Kernel team: Jeremy Foshee and Brad Figg
 * Landscape team: Andreas Hasenack
 * Linaro team: Paul Larson and Zygmunt Krynicki
 * [[LEP/ResultsTrackerOEM|OEM team]]: Chris Gregan, Chris Wayne, Javier Collado and Tony Espy
 * QA team: Ara Pulido and Mathieu Trudel
 * Server team: Carlos de Avillez, Mathias Gug and Scott Moser

And the following people have specifically expressed interest in contributing to the design and implementation:

 * Jeremy Kerr
 * Marc Tardif
 * Mathias Gug
 * Scott Moser
 * Zygmunt Krynicki

== Constraints ==

''What MUST the new behaviour provide?''

 * API to submit new test results and append to existing submissions in the tracker.
 * API to retrieve sets of results across multiple submissions from the tracker.
 * API to query results based on hardware devices and/or package versions.
 * API to get reporting information from the tracker.
 * Ability to make public test results private, because of sensitive information, and vice versa.
 * Launchpadlib needs to be extended to support streaming so that results can be uploaded in constant memory.
 * Support for hundreds of systems, each uploading a hundred test results per day containing a total of 1 megabyte of data on average.

''What MUST it not do?''

 * The API should not allow devices to be removed.
 * No HTML interface should be provided at this point; reporting should be done using the API.
 * No cleanup of historical data should be done at this point; this should be revisited later.
 * No upper bound should be enforced when submitting test results. The objective is to encourage the community to submit test results as much as possible.

== Workflows ==

''What are the workflows for this feature?''

 * User runs Checkbox to test their hardware devices and installed packages, then submits the information to the Results Tracker (a hypothetical sketch of this submission step follows the retrieval example below). {{attachment:ResultsTrackerCheckbox.png|Checkbox}}
 * User calls the API using launchpadlib to retrieve test results and their relation to hardware devices and package versions:

{{{
lp = lpl_common.connect()
results = lp.load('%s+results' % lp._root_uri)

# Find all devices matching a product name, then report how many
# failure results have been recorded for each of them.
device_search = results.DeviceSearch(product_name='5100')
for device in results.searchDevices(device_search):
    print '%s %s' % (device.vendor.name, device.product.name)
    result_search = results.ResultSearch(device=device, status='failure')
    device_results = results.searchResults(result_search)
    print ' %d' % len(device_results)
}}}
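The submission side of the first workflow would look similar. The snippet below is only a minimal sketch, assuming the tracker exposes the same hypothetical ''+results'' entry point as above; names such as {{{SubmissionData}}}, {{{submitResults}}} and the individual fields are illustrative assumptions, not an existing launchpadlib interface.

{{{
# Hypothetical sketch only: SubmissionData, submitResults and the field
# names below are assumptions, not part of any existing API.
lp = lpl_common.connect()
results = lp.load('%s+results' % lp._root_uri)

# Describe a single test result for a device and package on this system.
submission = results.SubmissionData(
    system_fingerprint='0123456789abcdef',  # identifies the submitting computer
    device={'vendor': 'Intel', 'product': '5100'},
    package={'name': 'linux-backports-modules-wireless', 'version': '2.6.35'},
    test_name='wireless/scan',
    status='pass',
    output='3 access points found')

# Submit the result; an existing submission could be passed to append to it.
results.submitResults(submission)
}}}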
''Provide mockups for each workflow.''

The mockups for this feature are mostly concerned with storing and retrieving information, as opposed to exposing a web interface. These are provided in the [[LEP/ResultsTrackerImplementation|implementation]] details.

== Success ==

''How will we know when we are done?''

 * When the community of users and developers can finally see reports from the tests they have run on their computers.
 * When the ARM and Hardware Enablement teams adopt the Results Tracker as a repository for their hardware and test results.
 * When the DX team is able to crowd-source testing of software for the desktop.
 * When the Hardware Certification, Kernel and OEM teams use the Results Tracker instead of their own databases.
 * When the Hardware Database (hwdb) and the Results Tracker in Launchpad use the same database.
 * When a complete coverage report for a daily build takes less than a day to produce.

''How will we measure how well we have done?''

 * By the simplicity of the interfaces for generating reports from test results submitted by the community.
 * By the completeness of the interfaces for integration with other sites, e.g. those of the Hardware Certification, Kernel and OEM teams.

== Thoughts? ==

''Put everything else here. Better out than in.''

 * Launchpad merge requests could eventually be made conditional on test results passing.
 * Launchpad bugs could eventually be linkable to test results, and the status of a bug could be updated when a test starts passing.
 * The initial web service should be prototyped on the dogfood server to start profiling the models as early as possible.
 * The interface should support the optional concept of test versions, and results should be extensible with attachments in addition to the output.
 * The interface should be used for initial research to validate its usefulness to upstream projects. Jelmer Vernooij, who maintains Samba, would be a good candidate.