Current state
Launchpad is currently designed as a single large Python project plus a PostgreSQL database, with component libraries and a very few additional services: loggerhead, mhonarc.
Some parts of the project run on different stacks (the librarian, buildd-manager), and some are deployed very differently (buildd-slave).
Friction attributable to this design
Code coupling
Because optimisations within the data model require complex queries, different domains within the Launchpad code tree get quite tangled.
Test suite
As a result of high code coupling and a lack of reliable contracts, many changes have unexpectedly wide consequences. Because the code base is very large, the test suite is very large. Because the layers are not amenable to substitution, unit tests are very tricky to write, and we usually end up exercising most of the stack. Additionally, finding the right tests to work on is hard: partly because what one might consider layers are all smooshed together, and partly because we have many different styles of test which are not consistently testing at the same layers - it is hard to tell where to start.
Monolithic downtime
With a single database and deployed tree, most changes require complete downtime.
Poor integration with non-included services
Things like mhonarc and loggerhead integrate poorly with Launchpad: we have no good way to skin them, and our monolithic approach drives the limited integration we do have.
Benefits from this design
Schema and code coherency
We never have to deal with a schema from a different tree - all the code that knows about the plumbing is in one place.
Atomic relational integrity
All our data is in one place; we can use foreign keys to track deletes, and things are never inconsistent.
Fewer moving parts to learn
As all the code is in one tree, we generally have one set of idioms, one language, and one database engine to work with. This helps keep things approachable (and as bugs in-tree generally get more rapid attention than bugs in related trees, we have some anecdata to support this benefit).
A service based Launchpad
In changing the tradeoffs we make, we should be clear about the things we want to optimise for, so that we can evaluate new tradeoff points.
As a team we want to achieve three key things:
- Fast development times
  - Which implies fast test runs and a low WTF factor on changes
- Low latency services
- Low or no-downtime schema changes
Better isolation of changes permits confident changes with smaller numbers of tests run, and decreases the WTF factor - so we need to look at how we can increase isolation of changes. Improving the overall speed with which requests are serviced helps with latency - so we need to consider whether we will add (or remove) intrinsic limits to performance. The busier a subsystem is, the harder it is to take it down for schema changes (and the larger the subsystem is, the wider the outage caused by taking it down).
For change isolation we need to look at the entire stack - previously we've only considered contracts within the one code base. But actually sitting on one schema implies one large bucket within which changes can propagate: and we see this regularly when we deal with optimisations and changes to the implementation of common layers (such as the team participation graph-caching optimisation).
One way we could improve isolation is to (gradually, not radically) convert Launchpad to be built from smaller services, each of which has a crisp contract, its own storage and schema. This would permit testing of just the changes within one service - or only the services that talk to that service.
While there are many ways one might try to slice up Launchpad into smaller services, we need to avoid creating silos between components we want to deeply integrate. One way to avoid that when we don't know what we want to integrate is to focus on layers which can be reused across Launchpad rather than on (for instance) breaking out user-visible components.
Anatomy of a service
Contracts and protocols
Having clear contracts between services implies that services are mostly self-contained. It could also be taken to suggest that we need a homogeneous protocol for accessing different services, with API docs and so forth. Given that we already have heterogeneous components to integrate (see mhonarc, loggerhead, the gpg keyservice, bzr codehosting, ...), the benefits of a single protocol would need to be balanced against the overhead of having to write thunks for any existing service that happens to use a different protocol.
We can put a few guidelines in place to support having clear boundaries though:
- One running service (network listening thing) per source tree. This avoids having two services that talk to each other getting under the hood and poking around.
- Access to a particular form of persistence (e.g. the librarian's files-on-disk, or a particular PostgreSQL schema) requires being in the same source tree as the definition of that persistence mechanism. This avoids having to synchronise deploys to multiple services when schema changes are occurring: no leaky borders.
- Services should supply a test fake along with their production implementation. This permits fast test runs and determining the needed API for a new change in a backing service via TDD: exercise the test fake and change that while building the closest-to-user layer, then make the concrete implementation match the test fake (see the sketch after this list).
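As a minimal sketch of the test-fake guideline - all names here are invented for illustration, not an existing Launchpad API - a fake is just an in-memory implementation of the same contract the real service exposes:
{{{
# A sketch of the test-fake guideline, with invented names.  The fake and
# the production client implement the same contract; UI tests run against
# the fake, and a shared contract test keeps the two in sync.
class FakeDirectoryService:
    """In-memory stand-in for a (hypothetical) directory service."""

    def __init__(self):
        self._members = {}  # team name -> set of person names

    def add_member(self, team, person):
        self._members.setdefault(team, set()).add(person)

    def in_team(self, person, team):
        return person in self._members.get(team, set())

# TDD flow: exercise and extend the fake while building the closest-to-user
# layer, then make the concrete implementation match it.
fake = FakeDirectoryService()
fake.add_member("launchpad-devs", "fred")
assert fake.in_team("fred", "launchpad-devs")
}}}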
And we can evaluate and choose a 'sensible default' protocol for new services we write. One possibility is a RESTful JSON-based API, but we couldn't use launchpadlib as a client for that without fixing it to be suitable for use inside a web server. (Its use of runtime code generation, a disk cache, and WADL parsing are all significant obstacles for a server.) At this point there is no clear winner, though XML-RPC may well be the best compromise between speed, ease of updates and mocking, existing server-suitable client libraries, and so forth.
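For illustration, here is what such an XML-RPC service could look like using only the Python standard library; the method name, port, and data are hypothetical:
{{{
# A minimal sketch of a 'sensible default' XML-RPC service, using only the
# Python 3 standard library.  The method and port are invented examples.
import threading
import xmlrpc.client
from xmlrpc.server import SimpleXMLRPCServer

def in_team(person_id, team_id):
    # Placeholder logic; a real service would consult its own schema.
    return person_id in (1, 2, 3)

server = SimpleXMLRPCServer(("localhost", 8822), logRequests=False)
server.register_function(in_team)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Any stack with an XML-RPC client library can call the service; there is
# no code generation, disk cache, or WADL parsing on the client side.
proxy = xmlrpc.client.ServerProxy("http://localhost:8822/")
print(proxy.in_team(2, 42))  # -> True
}}}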
Technology
We already have a heterogeneous collection of service implementations: pastedeploy (loggerhead), twisted (buildd, librarian), zope (LP core), whatever mhonarc is written in, and bzr's service implementation (custom, with adapters to ssh (via twisted) and wsgi). If we make each service as simple to maintain as possible, we can offset to some degree the costs inherent in having multiple stacks in play. For new services we should spend a little time asking the question 'how big will this get, how responsive does it need to be, and how fast do we need to change it', but for existing services we should focus on making them easier to maintain (because we already know how much load they get). If we bring in a new service with an existing implementation we need to ask whether we can effectively deal with bugs in that service: there is no point having the world's best bread slicer if we can't fix it when it breaks. This applies to the entire stack: database, language, network protocol.
Identified service opportunities
These are potential things we could pull out to services - they are examples only. Detailed analysis of each has not been done, so it's not possible to say that they are all definite: they are merely opportunities.
team participation / directory service
The largest teams-per-person is ~300, and the largest persons-per-team is 18K, but discounting the top two drops it to 3.7K, and the top ten gets down to 1.6K. The 18K case can be serialised and passed over the network in 300ms though, which makes it feasible to grab and pass between systems. Smaller cases like a 2K membership team can be handled in 40ms (assessed using psql and PostgreSQL).
We have a number of significant use cases around the directory service which are poorly satisfied at the moment - we don't permit non-membership relations like 'administers' or 'audits' (e.g. is granted view privileges but not mutation privileges).
Running (minimally) the person-in-team, teams-for-person, persons-for-team facilities as a service would aid the separation of SSO (by providing a high availability service that the SSO web service could back end onto).
blob storage (the librarian)
The librarian stores upwards of 14M distinct files (after coalescing by hash) - but it is tightly coupled to the Launchpad schema. It suffers from cold-cache effects on a regular basis, and we have explicit mechanisms in the schema to let us have weaker-than-actual links (for instance we can delete the blob but keep the reference, and delete the reference but retain the blob for a while).
We could build or bring in a simpler blob store and layer our special needs on top of it or as an extension to it - for instance, needs such as the public-restricted librarian, size calculations for many objects, or even aggregates (e.g. model a PPA as a bucket of blobs and get size data aggregated directly by the blob store).
The current service is difficult to evolve because it is tightly coupled: any attempt to modify the schema runs into the slow-patch-application + high-change-friction issue, which primarily exists because dozens of call sites talk directly to the storage schema even though most of them just want URL generation.
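A sketch of the narrow contract most call sites actually need - content-addressed storage plus URL generation; all names here are invented:
{{{
# A sketch (invented names) of a minimal blob-store contract: content
# addressed by hash, with URL generation split out so call sites no longer
# need to touch the storage schema.
import hashlib

class BlobStore:
    def __init__(self, base_url):
        self.base_url = base_url
        self._blobs = {}  # hex digest -> bytes

    def put(self, data):
        digest = hashlib.sha1(data).hexdigest()
        self._blobs[digest] = data  # duplicates coalesce by hash
        return digest

    def get(self, digest):
        return self._blobs[digest]

    def url_for(self, digest):
        # What most of the dozens of current call sites actually want.
        return "%s/%s" % (self.base_url, digest)

store = BlobStore("https://librarian.example/blobs")
ref = store.put(b"some file content")
print(store.url_for(ref))
}}}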
mhonarc (the lists.launchpad.net UI)
This runs as an external service but our appservers do not use it as a backend - instead end users use it directly. If we modified it to write its archived information as machine-readable metadata (e.g. a JSON file per message, per index page, per list) then our template servers, which know about facets and menus and so on, could efficiently grab that and render a nice page.
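Purely as an illustration of the kind of metadata involved (all field names and values are invented for this document), the per-message file might be written like this:
{{{
# Purely illustrative: the kind of per-message metadata mhonarc could emit.
# All field names and values are invented for this document.
import json

metadata = {
    "list": "some-list",
    "subject": "An example subject",
    "author": "A User",
    "date": "2010-01-01T00:00:00Z",
    "in_reply_to": None,
    "body_url": "messages/000001.txt",
}
with open("000001.json", "w") as f:
    json.dump(metadata, f, indent=2)
}}}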
This would be easier than retrofitting an event system to update individual archives with different menus as LP policy and metadata change. It would even - if we want it to - permit robust renames of mailing lists and so on.
loggerhead (bazaar.launchpad.net web UI)
As with mhonarc, this already runs as a separate service; however we don't present any of its content in the main UI, and it's a constant source of poor user experience. Happily it already has a minimal JSON API we can use and extend.
bzr+ssh (bazaar.launchpad.net bzr protocol)
This doesn't talk to the database at all and is a shoo-in to be split out now.
distribution source package names
The package names that can be used in the bugtracker depend on what packages are present in the distribution - currently this is a non-trivial query, but it could easily be delivered via a web service to the bugtracker UI. Whether a deeper split in the packaging metadata is needed or desirable is a related but as yet unanalyzed question.
A graph database
We have many graph-shaped problems in the system - team participation, branch merged-status, package set traversals, and probably more. A generic high-performance graph database supporting caching, reachability oracles, and parallelism could be used to simplify many of the graph-using places in Launchpad. While we could use a PostgreSQL datatype, past surveys of these suggest they are generally less capable than dedicated graph servers.
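As a sketch of the core query such a service would answer (names invented; a production service would add caching, reachability oracles, and parallelism):
{{{
# A sketch of the core graph-service query: reachability over a directed
# graph.  Names are invented; a production service would add caching,
# reachability oracles, and parallelism.
from collections import deque

def is_reachable(edges, start, goal):
    """Breadth-first search: is goal reachable from start?"""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        if node == goal:
            return True
        for nxt in edges.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# Transitive team participation is exactly this query.
edges = {"fred": ["team-a"], "team-a": ["team-b"]}
print(is_reachable(edges, "fred", "team-b"))  # -> True
}}}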
A subscription service
There are some things we have subscriptions to (pillars, branches, bugs, questions), all of which are implemented differently. Beyond that we'd like to have ephemeral subscriptions to named objects (e.g. when someone has a browser page open on a given URL and we want to push updates to the page to them), and possibly even ephemeral subscriptions to anonymous things (which we might use to implement hand-off based callbacks for event-driven API scripts).
A subscription service which offers filtering, durations and callbacks on changes could provide a key piece of functionality for implementing lower latency backend services (like browser notifications when a branch has been scanned) all in a single module which can be highly optimised.
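An invented sketch of what such a contract might offer - filtering, durations, and callbacks; none of these names are an existing Launchpad API:
{{{
# An invented sketch of a subscription-service contract; not an existing
# Launchpad API.  Offers filtering, durations, and callbacks.
import time

class SubscriptionService:
    def __init__(self):
        self._subs = []  # (topic, predicate, callback, expires_at)

    def subscribe(self, topic, callback, predicate=None, duration=None):
        expires = time.time() + duration if duration else None
        self._subs.append((topic, predicate, callback, expires))

    def publish(self, topic, event):
        now = time.time()
        for sub_topic, predicate, callback, expires in self._subs:
            if sub_topic != topic or (expires is not None and expires < now):
                continue
            if predicate is None or predicate(event):
                callback(event)

service = SubscriptionService()
# An ephemeral subscription: push scan notices for ten minutes while a
# browser page is open on the branch.
service.subscribe("branch:lp:foo", print,
                  predicate=lambda e: e["kind"] == "scanned", duration=600)
service.publish("branch:lp:foo", {"kind": "scanned", "revision": 42})
}}}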
A discussions/microlist service
Similar to the subscription service, we also have many objects that can be commented on, and we want the same facilities on them all - spam management, searching and indexing, reporting (what has person X commented on); but at the moment we have to do schema changes on every new object we want a discussion facility around. It might even be possible (if we wanted to) to converge on one discussion facility, with mailman/mhonarc backending and optimising things.
Template/API service (internet facing)
This is probably the key service: the service our users (both browser and launchpadlib) talk to. Currently our least reliable tests are involved with actual service delivery - and probably always will be (the nature of the beast when we're driving browsers programmatically). We could look at a number of possible splitups, but any changes made to this service are likely to be very visible. Our public API serves two masters: the web site (where it does template rendering into fragments for page updates) and launchpadlib, our programmatic interface for users to drive Launchpad. The launchpadlib API depends heavily on the WADL and lazr.restful zope stack - changing that (for any reason) is going to require considerable care as we have users on stable LTS releases of Ubuntu to cater to.
However, if we treat the templating and API engine as the entry service rather than as part of the core data access service, we can dramatically simplify the testing story: a clean contract between template rendering/public API and model manipulation/optimisation/refactoring. If care is taken around how information disclosure is managed, this front end service could dispense with the entire zope security model, and with database access also removed, would have no *correctness* related thread-local information: we could use scatter-gather techniques to gather all the needed information for a page upfront concurrently rather than serially. For instance, bug page rendering would (in terms of data gathering) change from sum(time to get tasks, time to get messages, time to get questions, time to get attachments, time to render) into sum(max(time to get tasks, time to get messages, time to get questions, time to get attachments), time to render).
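A sketch of the scatter-gather idea (the fetch_* functions are invented placeholders for calls to backend services):
{{{
# A sketch of scatter-gather page assembly.  The fetch_* functions are
# invented placeholders for calls to backend services.
from concurrent.futures import ThreadPoolExecutor

def fetch_tasks(bug_id):
    return []  # placeholder backend call

def fetch_messages(bug_id):
    return []  # placeholder backend call

def fetch_attachments(bug_id):
    return []  # placeholder backend call

def gather_bug_page_data(bug_id):
    fetchers = {
        "tasks": fetch_tasks,
        "messages": fetch_messages,
        "attachments": fetch_attachments,
    }
    with ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(fn, bug_id)
                   for name, fn in fetchers.items()}
        # Total data-gathering time is the max() of the fetch times,
        # not their sum().
        return {name: future.result() for name, future in futures.items()}

print(gather_bug_page_data(323000))
}}}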
Evolving the system: how to make a change in a service based world
As with most code, we would start with a bug - let's say that the change requires both UI and primary database changes: we need to change tables in the big ball of twine we call Postgres, and we need to change the JavaScript and HTML page that users are shown. For instance, let's say we're working on bug 323000 - we're going to add a link to a canned search of 'bugs that affect me' in the user's bug pages.
Say that this requires two changes: a UI change to add the link, and a backend change to provide a search for 'bugs that affect me'. Fixing this bug would then require a change to the backend bug search logic, enough support added to that service's test fake to use it when testing the UI, and a UI change to include the link and check that following it works. We'd need the following tests:
- A contract test for the backend, which would run against both the real service and the test fake (see the sketch after this list).
- A UI test that the link shows up on the users bug page portlet
- A UI test that the link can be followed and generates the expected backend lookup (runs against the test fake)
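A sketch of what that contract test could look like (all names are invented): the same assertions run against the fake and, via an identical subclass, against a test instance of the real service:
{{{
# A sketch of a contract test, with invented names.  The same assertions
# run against the test fake and, via an identical subclass pointing at a
# test instance, against the real service - so the fake cannot drift.
import unittest

class FakeBugSearchService:
    """Hypothetical in-memory fake of the bug-search backend."""

    def search(self, affects):
        return [{"id": 323000, "affects": affects}]

class BugSearchContract:
    """Assertions every bug-search implementation must satisfy."""

    def make_service(self):
        raise NotImplementedError

    def test_search_returns_bug_list(self):
        results = self.make_service().search(affects="fred")
        self.assertIsInstance(results, list)

class TestFakeBugSearch(BugSearchContract, unittest.TestCase):
    def make_service(self):
        return FakeBugSearchService()

# The real deployment would add an identical subclass, e.g.:
#   class TestRealBugSearch(BugSearchContract, unittest.TestCase):
#       def make_service(self):
#           return BugSearchClient(TEST_SERVICE_URL)  # hypothetical client

if __name__ == "__main__":
    unittest.main()
}}}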
Deploying this change would be a two-step process: deploy the backend and then deploy the front end.
What about our scripts, job system, etc.
Case-by-case analysis of course, but in principle all our scripts would become internal API consumers rather than direct database users. This would prevent all the hung-transaction situations we regularly run into with cron scripts, utilities, and so forth.
Reporting of resource usage
We would need to make some changes in our DB user management - specifically, our mid-layer appservers would need to support both API impersonation (query on behalf of user fred) and API categorisation (connect to the database as DB user 'rosetta-stats'). Or we could give up this (very useful) metric of DB utilisation and the related security that having different users can offer.
Friction in this design
Many trees
Single user facing changes may require commits to multiple services. Though we can version our apis quite easily to avoid needing coordinated deploys, it will require more thought.
More testing overhead for developers
The explicit contracts and test fakes mean that developers may need to write very similar code more than once - something that might be seen as a DRY violation.
More visible components to be aware of
Running a number of microservices will increase our monitoring overhead and add complexity to our deployment and QA processes. We can mitigate the deployment and QA issue by keeping an aggressively small window between land+deploy or land+rollback (which a fast testing turnaround should support).
Benefits from this design
Big picture migration strategy - pay as we go
Credits
This document is a synthesis of many discussions held about Launchpad over the year preceding its creation; all good ideas go to those participants (including but not limited to poolie, gary, jml, deryck, sinzui, bigjools, wgrant, flacoste, statik, stub) - all mistakes are mine (lifeless).