Results Tracker

A repository to submit test results for tracking purposes.

As a member of the community
I want to submit test results from my computer
so that the platform can know what works and doesn't work.

As a member of the platform team
I want to know what hardware passed and failed testing
so that more hardware can be enabled for the community.

As a member of a team working with hardware vendors
I want to submit test results to a private repository
so that all members of the team can fix problems together.

Consider clarifying the feature by describing what it is not.

Rationale

Why are we doing this now?

Many teams have expressed the need for a repository of test results for reporting purposes. These teams have either not provided any reports, created their own custom database, or plan to create their own database. This is resulting in a fragmentation of the data which will only continue to grow unless a common solution is provided.

What value does this give our users? Which users?

Stakeholders

Who really cares about this feature? When did you last talk to them?

In general, anyone who is interested in the results from testing hardware compatibility and package functionality. In particular, the following people expressed interest during the Maverick UDS and attended at least one session about enhancing Launchpad with test results:

And the following people have specifically expressed interest in contributing to the design and implementation:

Constraints

What MUST the new behaviour provide?

What MUST it not do?

Workflows

What are the workflows for this feature?

Checkbox

    # Connect to Launchpad using the lpl_common helper module.
    lp = lpl_common.connect()
    # Load the top-level results collection from the API root.
    results = lp.load('%s+results' % lp._root_uri)
    # Search for devices whose product name matches '5100'.
    device_search = results.DeviceSearch(product_name='5100')
    for device in results.searchDevices(device_search):
        print '%s %s' % (device.vendor.name, device.product.name)
        # Count the failing results reported for this device.
        result_search = results.ResultSearch(device=device, status='failure')
        device_results = results.searchResults(result_search)
        print '  %d failures' % len(device_results)
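To illustrate the kind of data model the workflow above would query, here is a minimal in-memory sketch. All class and method names (ResultsTracker, submit, search_devices, search_results) are illustrative assumptions, not the actual Launchpad API; they simply mirror the submit-then-search pattern in the user stories.

```python
from collections import defaultdict

class ResultsTracker:
    """Hypothetical in-memory model: results grouped by device and status."""

    def __init__(self):
        # Maps (vendor, product) -> status -> list of result ids.
        self._results = defaultdict(lambda: defaultdict(list))

    def submit(self, vendor, product, status, result_id):
        """Record one test result for a device."""
        self._results[(vendor, product)][status].append(result_id)

    def search_devices(self, product_name):
        """Yield (vendor, product) pairs whose product name matches."""
        for vendor, product in self._results:
            if product_name in product:
                yield vendor, product

    def search_results(self, vendor, product, status):
        """Return the result ids recorded for a device with a given status."""
        return list(self._results[(vendor, product)][status])

tracker = ResultsTracker()
tracker.submit('Intel', 'WiFi 5100', 'failure', 'r1')
tracker.submit('Intel', 'WiFi 5100', 'pass', 'r2')
for vendor, product in tracker.search_devices('5100'):
    failures = tracker.search_results(vendor, product, 'failure')
    print('%s %s: %d failure(s)' % (vendor, product, len(failures)))
```

The two-level grouping (device, then status) matches the two searches in the Checkbox workflow: first find devices by product name, then count failing results per device.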

Provide mockups for each workflow.

The mockups for this feature are mostly concerned with storing and retrieving information, as opposed to exposing a web interface. These are provided in the implementation details.

Success

How will we know when we are done?

How will we measure how well we have done?

Thoughts?

Put everything else here. Better out than in.

LEP/ResultsTracker (last edited 2010-09-14 19:46:33 by cr3)