Why a Roadmap?
Migrating to a service-based design is a large project. We can't do it in one hit, and per the analysis there are a lot of different things we could do. We want to make sure the project is a net win: we shouldn't put more effort into it than we will save in efficiency (and headaches avoided) on future changes.
Considerations
- We have a lot of learning to do - HA and deployment will be significantly more complex. Our monitoring needs to get a lot better.
- Some things will be dependent on object-sync facilities (e.g. RabbitMQ) in the datacentre.
- Adding backend services will increase latency if done poorly.
Structure
This roadmap is laid out in a few sections, each broadly implementable separately. Each thing we might do gets a micro-dump of its possibilities: not quite a LEP. The next step for any given item is either to JFDI it or to LEP it, depending on the apparent complexity.
Defaults
We need to choose various defaults for backend services.
Some have already been chosen.
Message queues
We're using RabbitMQ, as decided mid-2010. Not because it's the best, but because it's already in use within Canonical, and we're very unlikely to gain enough from a different MQ for now; once we're solidly service-based we can revisit this.
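As a sketch of how little is involved on the producer side, here is a minimal publish using the pika client library; the broker host, queue name and message body are illustrative, not decided:

    import pika

    # Connect to the broker and declare a durable queue.
    connection = pika.BlockingConnection(
        pika.ConnectionParameters(host='rabbitmq.lp.internal'))
    channel = connection.channel()
    channel.queue_declare(queue='lp.events', durable=True)

    # Publish one message; consumers pick it up asynchronously.
    channel.basic_publish(
        exchange='', routing_key='lp.events', body='branch-scanned:1234')
    connection.close()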
In-datacentre backend authentication
This is not decided upon as yet. Some options are:
- microservices run behind Apache using IP address + basic auth (requests must come from a known IP address and carry a basic-auth password).
- OAuth, possibly with IP limits as above.
The big webapp today runs behind haproxy, not Apache (Apache is only at the outer edge), so we should expect to implement whatever we choose directly in the webapp itself (other microservices can get it from Apache).
-- RobertCollins: I'm strongly leaning towards IP address + basic auth. This will permit easy debugging and extremely lightweight client and per-request overhead, and we can add OAuth into Apache whenever we want.
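For concreteness, the IP + basic auth option could look like the following Apache 2.2 fragment; the network range, paths and realm name are illustrative only:

    <Location /gpg>
        # Both checks must pass: a known source address AND a valid
        # basic-auth credential (Satisfy all combines them).
        Order deny,allow
        Deny from all
        Allow from 10.0.0.0/8
        AuthType Basic
        AuthName "lp-internal-services"
        AuthUserFile /etc/apache2/internal.htpasswd
        Require valid-user
        Satisfy all
    </Location>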
In-datacentre network protocol
In the datacentre we have essentially no latency to worry about, but we do need to worry about efficiency and ease of development. While some services already have protocols, any new service will need a protocol chosen for it. No decisions yet, but some options are:
- XMLRPC: pros: already deployed; batteries included in Python and many other languages (see the sketch after this list). cons: XML; an RPC model rather than a RESTful one, so no opportunity for caching; URLs can be opaque when debugging.
- ad-hoc RESTful JSON-based APIs: pros: nice to look at by hand, easy to interact with manually. cons: no ready-made support in the Python standard library; optimises for things that don't really affect us.
- Google protobufs: pros: clear contracts, wire-level upgrading built in. cons: not well understood within the LP team; currently somewhat slow in Python.
Excluded options:
- lazr.restful: not suitable for rapid development or consumption by other servers.
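To illustrate the batteries-included point for XMLRPC: a complete service and client in stock Python 2, nothing else required (the method, port and return value are hypothetical):

    # server.py - run in one process.
    from SimpleXMLRPCServer import SimpleXMLRPCServer

    def team_members(team_name):
        # Hypothetical method; a real service would query the database.
        return ['alice', 'bob']

    server = SimpleXMLRPCServer(('localhost', 8022), allow_none=True)
    server.register_function(team_members)
    server.serve_forever()

    # client.py - run in another process.
    import xmlrpclib
    proxy = xmlrpclib.ServerProxy('http://localhost:8022/')
    print proxy.team_members('launchpad-dev')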
High availability of backend services
The expected stack is HAProxy, Apache and Linux Virtual Server: LVS (per datacentre) -> HAProxy -> backends with Apache on each -> local connection to the microservice.
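A minimal haproxy fragment for one such service; the service name, health-check URL, addresses and ports are all illustrative:

    # Balance one microservice across two backend hosts, each
    # fronted by its local Apache.
    listen gpg-service
        bind 0.0.0.0:8042
        mode http
        balance roundrobin
        option httpchk GET /ping
        server backend1 10.0.1.10:80 check
        server backend2 10.0.1.11:80 check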
Independent Services
Things like a GPG service go here. Such services will have various benefits and need separate analysis per service.
GPGHandler
Launchpad has to manage a number of GPG keys in a few different ways:
- Create new keys for PPA signatures
- Validate signatures (text in, (signer, status, cleartext) out)
- Store prepared key-revocation certificates, for use in the event of a compromise.
- Sign new package binaries built in the build cluster.
We may not want to expose all of these as a web service (because a single lying client in the datacentre could get a hostile binary package signed). However, exposing signature validation is a no-brainer and would save significant operational complexity. We can iterate to add more as wanted.
Benefits: avoid cold-cache warmup for GPG validation (by having a long-running GPG cache dir); make GPG validation available to non-lp-zope services without the headache of direct integration. Costs: migrate over the GPG handler code and create a test fake.
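A hypothetical client for the validation call, matching the (signer, status, cleartext) contract above; the URL, payload shape and field names are invented for illustration:

    import json
    import urllib2

    def verify_signature(signed_text):
        # POST the signed text to the (hypothetical) GPG service.
        request = urllib2.Request(
            'http://gpg.lp.internal:8042/verify',
            json.dumps({'text': signed_text}),
            {'Content-Type': 'application/json'})
        result = json.loads(urllib2.urlopen(request).read())
        # Text in, (signer, status, cleartext) out.
        return result['signer'], result['status'], result['cleartext']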
Existing backend services
Things like splitting out the codehosting server, the importd engine, and the rosetta translation import/export services go here. The benefit of splitting these out will be shorter test run times for the main application; as they are generally already service-based, the operational change should be modest. Consider these low-hanging fruit: easy to do, some benefits, low risk.
Script migration
Our scripts all need internal APIs set up, and to be migrated to them; they belong here. Many of these scripts routinely cause headaches. We should expect to identify a raft of performance problems and data integrity / access control issues as we migrate them. (Our current scripts have a lobotomised security model in place which we would not want to keep.)
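The shape of such a migration, with entirely hypothetical names on both sides: instead of opening a database connection directly, a script calls an internal API, which also gives the server side a place to enforce the real security model:

    # Before (hypothetical): direct database access from the script.
    #   cursor.execute("UPDATE bugtask SET status = %s WHERE id = %s", ...)

    # After: go through an internal API that enforces access control.
    import xmlrpclib

    internal_api = xmlrpclib.ServerProxy('http://internal-api.lp.internal/')
    internal_api.set_bugtask_status(1234, 'Fix Released')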
Optimisations
We have a number of potential optimisations best done using services: examples include
- search
- graph traversals/reachability queries
- batch/aggregate workloads (map reduce)
- long-poll callbacks (see the sketch after this list)
These are best done using services because they either need custom databases, layer on top of a SQL database, or will be long-running and not subject to our normal transaction timeliness constraints.
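For instance, a long-poll endpoint deliberately holds a request open far beyond a normal webapp request budget. A minimal sketch using only the Python standard library; the port, timeout and event plumbing are illustrative:

    import Queue
    from wsgiref.simple_server import make_server

    events = Queue.Queue()  # would be fed by the rest of the system

    def app(environ, start_response):
        # Block for up to 30 seconds waiting for an event - far longer
        # than we would allow a normal webapp request to run.
        try:
            body = events.get(timeout=30)
            status = '200 OK'
        except Queue.Empty:
            body, status = '', '204 No Content'
        start_response(status, [('Content-Type', 'text/plain')])
        return [body]

    make_server('localhost', 8080, app).serve_forever()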
Integrations
We have other services which are already poorly integrated into Launchpad. Specific examples are:
- mhonarc
- mailman
- loggerhead
Overhauling these so that their UI is entirely done in the LP template engine, and they act as pure data sources with real-time lookups, would significantly improve the user experience.
Decoupling
We have functionality that is currently tightly coupled which we would benefit from splitting into dedicated services:
- directory services
- subscriptions
- discussion/messaging
- rendering/UI (with API bundled in because we render in the public API)