In order to focus more developer effort on making checkwatches the best it can be, we're treating this like any other story under development. We'll call it the "Reliable Bug Syncing" story.
Drivers: Gavin Panella and Tom Berger
Ideas
- More stats, displayed publicly, with graphs and colour? Export raw data via an API so that others can monitor health. Useful indicators, per bug tracker, could be (see the health-stats sketch after this list):
  - Count and % of watches with errors = health. We would aim to drive this to zero. This also gives us a quick indication of which trackers are broken.
  - Min, max, and average time since last check. We would aim to keep all of these close to the check interval. Anything else is an indicator of poor performance, locally or remotely: a local problem means we have a bug to investigate; a remote one means we might need to back off.
  - Min, max, and average time since last successful check. We would aim to keep all of these close to the check interval. Anything else needs to be investigated.
  - Min, max, and average time taken to check new watches. We would aim to drive this towards zero.
- Use JumpBox images via Amazon EC2 to get access to real-world instances (up-to-date and historical) of Bugzilla, Mantis, etc. Automate starting, populating, and testing against these instances, and tie this into our test suite (see the EC2 sketch after this list).
- Give checkwatches a serious amount of (tough) love.
  - Consider doing comment syncing, status syncing, and bug linking as separate jobs (see the jobs sketch after this list).
  - Use the code-hosting jobs model to schedule jobs.
- Assess the test suite; make sure it's going to help us refactor. Inspiration: the bugchange test suite last year was the key to making sweeping changes to notifications and activity recording with little fallout.
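To make the indicator list above concrete, here is a minimal sketch of computing per-tracker health stats. The watch attributes (lastchecked, last_error_type) and the tracker_health helper are illustrative assumptions, not the actual Launchpad model.

{{{#!python
from datetime import datetime

def tracker_health(watches):
    """Summarise the health of one tracker's watches.

    ASSUMPTION: each watch has `lastchecked` (datetime or None) and
    `last_error_type` (None when the last check succeeded).
    """
    now = datetime.utcnow()
    errors = sum(1 for w in watches if w.last_error_type is not None)
    ages = [
        (now - w.lastchecked).total_seconds()
        for w in watches if w.lastchecked is not None]
    return {
        # Error count and percentage: the headline health figure.
        'error_count': errors,
        'error_percent': (100.0 * errors / len(watches)) if watches else 0.0,
        # Seconds since last check; all three should sit near the
        # check interval for a healthy tracker.
        'min_age': min(ages) if ages else None,
        'max_age': max(ages) if ages else None,
        'avg_age': (sum(ages) / len(ages)) if ages else None,
    }
}}}

A cron job could dump this dictionary per tracker for graphing and for the raw-data API export.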
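For the EC2 idea, a rough sketch of what automated instance start-up might look like using the boto library. The AMI id is a placeholder for a JumpBox tracker image, and the helper itself is hypothetical, not existing tooling.

{{{#!python
import time

import boto

def start_tracker_instance(ami_id, instance_type='m1.small'):
    """Start an EC2 instance from a (hypothetical) JumpBox tracker AMI
    and return its public DNS name once it is running."""
    conn = boto.connect_ec2()  # credentials come from the environment
    reservation = conn.run_instances(ami_id, instance_type=instance_type)
    instance = reservation.instances[0]
    while instance.state != 'running':
        time.sleep(15)
        instance.update()  # refresh state from EC2
    return instance.public_dns_name
}}}

A test layer could call this once per suite, populate the tracker with known bugs, point the checkwatches tests at the returned hostname, and terminate the instance afterwards.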
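And for splitting checkwatches into separate jobs, a sketch of the shape this might take. The class names and the queue interface are illustrative assumptions; real scheduling would come from the code-hosting jobs model.

{{{#!python
class BugWatchJob(object):
    """One unit of checkwatches work against a single bug watch."""

    def __init__(self, watch):
        self.watch = watch

    def run(self):
        raise NotImplementedError

class SyncStatusJob(BugWatchJob):
    def run(self):
        # Fetch the remote status and update the local watch.
        pass

class SyncCommentsJob(BugWatchJob):
    def run(self):
        # Import new remote comments; push unsynced local ones.
        pass

class LinkBugJob(BugWatchJob):
    def run(self):
        # Ensure the remote bug links back to its Launchpad bug.
        pass

def schedule_jobs(watches, queue):
    # Queue each aspect separately, so a failure in comment syncing
    # no longer blocks status syncing for the same watch.
    for watch in watches:
        for job_class in (SyncStatusJob, SyncCommentsJob, LinkBugJob):
            queue.put(job_class(watch))
}}}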
Plans
Bugs:
- story-reliable-bug-syncing (bugs tagged with story-reliable-bug-syncing)
- bugwatch -story-reliable-bug-syncing (bugs tagged with bugwatch and not story-reliable-bug-syncing)
10.01 (January 2010)
10.02 (February 2010)