This page tracks the work to discover and implement the best performance improvements to the Launchpad web service.
First step: quantify performance
We want to be able to measure our performance. Ideally, we would measure both end-to-end performance and its components: our network performance, our performance on the client, and our performance on the server. These measurements have four goals.
- Help us more accurately guess the potential effectiveness of a given solution, to help us winnow and prioritize the list.
- Help us evaluate the effectiveness of a given solution after a full or partial implementation, to validate our efforts.
- Help us determine what quantifiable performance level gives our users a qualitatively positive experience.
- Help us move quickly.
The last goal means we need to find a balance between thoroughness and expediency in our construction of tests.
Second step: collect, evaluate, winnow, and prioritize possible solutions
We are particularly responsible for the systemic performance of the webservice: we want the average performance to be good. We need to work with the Launchpad team to achieve good performance within individual requests, but here we are more interested in changes that can make the whole webservice faster. Tools that help developers make individual pages faster with modest effort and customization are also of interest.
Again, our solutions will focus on different aspects of the webservice's end-to-end performance. That gives us three basic areas to attack.
- Reduce and speed network requests.
- Make the launchpadlib requests faster systemically on the server.
- Make the launchpadlib client faster.
The following list collects the ideas brainstormed so far.
- [Network] Switch many requests to HTTP, to avoid SSL handshake costs
- [Network] Investigate whether HTTP 1.1 KeepAlive is enabled, and if not, how it can be
- [Server] Incorporate memcached and/or a noSQL db like MongoDB
- memcached story: memcached would cache object pre-renderings, comprising etag information and everything, pre-security-screening, that lazr.restful might send over the wire. lazr.restful would fetch a pre-rendering, turn it into a security-screened version, and send that over the wire. (Questions: will the security-screening code need to touch the Storm object so much that it negates most of the speed increase? If so, can we cache the security answers in a way that makes this faster?) When the database performs an update, it would use triggers to invalidate the associated memcached values, if any. (Questions: how badly will this affect our write performance? Can we mitigate it?) A minimal sketch follows this list.
- mongodb story: the point of this story is to keep questions about collections from hitting postgres, since answering them is much more expensive than just getting the values for a single row. If we can get collections very quickly from a noSQL db, that might be a big win. It would also support answering "nested" requests (see the idea below) quickly. The proposed implementation is similar to the memcached story, except that triggers in postgres would completely maintain the pre-rendered data in the persistent noSQL db, rather than merely invalidating cached data. We would then use indexes in mongoDB to retrieve the nested collections. This too is sketched after the list.
- [Network] Support "nested" requests: e.g., get a person and the first 50 of her bugs in a single request.
- [Client] Profile the client code and examine the results (a starting point is sketched after this list)
- [Server] Profile the server code and examine the results
- [Server] Add zc.zservertracelog notes for when lazr.restful code starts, when it passes off to launchpad code, when it gets the result from launchpad, and when it hands the result back to the publisher. The first and last values may be unnecessary, since they are equivalent to already-existing values.
- [Client] Faster import of launchpadlib (what is the problem?)
- [Client] Can we provide cache control for the WADL file? Maybe cache it for one day (e.g. Cache-Control: max-age=86400)?
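To make the memcached story concrete, here is a minimal sketch, assuming the python-memcached client; representation_key, get_representation, invalidate, and the load_row callable are hypothetical stand-ins, not lazr.restful API:

{{{
import json

import memcache

mc = memcache.Client(['127.0.0.1:11211'])

def representation_key(table, row_id):
    # Hypothetical key scheme: one pre-rendering per database row.
    return 'repr:%s:%s' % (table, row_id)

def get_representation(table, row_id, visible_fields, load_row):
    # Return a security-screened representation, building the
    # pre-screened rendering only on a cache miss.
    key = representation_key(table, row_id)
    cached = mc.get(key)
    if cached is None:
        row = load_row(table, row_id)  # the expensive trip to postgres
        cached = json.dumps(row)       # etag data plus all fields
        mc.set(key, cached)
    full = json.loads(cached)
    # Screening still happens on every request; only the rendering is
    # shared between principals.
    return dict((k, v) for k, v in full.items() if k in visible_fields)

def invalidate(table, row_id):
    # In the story proper, a postgres trigger fires this on UPDATE;
    # calling it from application code is the simple approximation.
    mc.delete(representation_key(table, row_id))
}}}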
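The mongodb story, sketched the same way; pymongo is assumed, and the database, collection, and field names are made up for illustration:

{{{
from pymongo import MongoClient

reps = MongoClient()['lp_cache']['bug_representations']

# One-time setup: index the field we filter collections on, so that
# "the bugs owned by X" never has to touch postgres.
reps.create_index('owner_name')

def store_representation(bug_id, rendered):
    # What the trigger-driven updater would do on every postgres
    # UPDATE: rewrite the pre-rendered document completely, rather
    # than merely invalidating it as in the memcached story.
    reps.replace_one({'_id': bug_id}, rendered, upsert=True)

def bugs_for_person(owner_name, limit=50):
    # The nested-collection read: pre-rendered documents straight
    # from the index, with postgres never involved.
    return list(reps.find({'owner_name': owner_name}).limit(limit))
}}}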
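For the two profiling items, cProfile is an obvious starting point on the client side (the server side needs an equivalent hook in the appserver); fetch_entries below is a placeholder for whatever launchpadlib workload we want to measure:

{{{
import cProfile
import pstats

def fetch_entries():
    # Placeholder workload: whatever launchpadlib calls we are measuring.
    from launchpadlib.launchpad import Launchpad
    lp = Launchpad.login_anonymously('profiling', 'production')
    bug = lp.bugs[1]
    for i in range(30):
        bug.lp_refresh()

cProfile.run('fetch_entries()', 'client.prof')
pstats.Stats('client.prof').sort_stats('cumulative').print_stats(20)
}}}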
Third step: implement the next solution
The next solution is TBD.
Next...
Rinse and repeat, returning to the first step and trying to determine whether our quantifiable performance gives a qualitative experience that we find acceptable.
memcached tests
I hacked lazr.restful to cache completed representations in memcached, and to use them when they were cached. This will not work in a real situation, but it provides an upper bound on how much time we can possibly save by using memcached. I used the script at https://dev.launchpad.net/Foundations/Webservice?action=AttachFile&do=view&target=performance_test.py throughout.
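The hack itself is not reproduced here, but a sketch of the general shape such a hack could take follows; cached_publish and build_representation are hypothetical stand-ins for the real lazr.restful publication path:

{{{
import memcache

mc = memcache.Client(['127.0.0.1:11211'])

def cached_publish(request_path, build_representation):
    # Cache the finished response body, keyed on the request path.
    key = 'final:' + request_path
    body = mc.get(key)
    if body is None:
        body = build_representation()  # the full rendering pipeline
        mc.set(key, body)
    return body
}}}

As noted, caching the completed representation like this cannot work for real (for one thing, the cached body ignores who is asking); it only establishes the upper bound on savings.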
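For context, a harness of roughly this shape could produce the timings below; the attached performance_test.py is the authoritative version, and the consumer name and the bug fetched here are arbitrary assumptions:

{{{
import time

start = time.time()
from launchpadlib.launchpad import Launchpad
print('Import cost: %.2f sec' % (time.time() - start))

start = time.time()
lp = Launchpad.login_anonymously('perf-test', 'production')
print('Startup cost: %.2f sec' % (time.time() - start))

bug = lp.bugs[1]
timings = []
for i in range(30):
    start = time.time()
    bug.lp_refresh()  # re-fetch one entry representation from the server
    timings.append(time.time() - start)

print('First fetch took %.2f sec' % timings[0])
print('First five fetches took %.2f sec (mean: %.2f sec)'
      % (sum(timings[:5]), sum(timings[:5]) / 5))
print('All 30 fetches took %.2f sec (mean: %.2f sec)'
      % (sum(timings), sum(timings) / 30))
}}}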
Entries
Here's the output of the script retrieving an entry 30 times. (I had to disable conditional requests.)
{{{
Import cost: 0.44 sec
Startup cost: 1.27 sec
First fetch took 0.18 sec
First five fetches took 0.66 sec (mean: 0.13 sec)
All 30 fetches took 3.13 sec (mean: 0.10 sec)

Import cost: 0.44 sec
Startup cost: 0.84 sec
First fetch took 0.10 sec
First five fetches took 0.50 sec (mean: 0.10 sec)
All 30 fetches took 3.31 sec (mean: 0.11 sec)
}}}
I introduced memcached, and here are the results:
{{{
Import cost: 0.47 sec
Startup cost: 1.27 sec
First fetch took 0.17 sec
First five fetches took 0.58 sec (mean: 0.12 sec)
All 30 fetches took 2.80 sec (mean: 0.09 sec)

Import cost: 0.44 sec
Startup cost: 0.86 sec
First fetch took 0.08 sec
First five fetches took 0.43 sec (mean: 0.09 sec)
All 30 fetches took 2.86 sec (mean: 0.10 sec)
}}}
As you can see, there's no significant benefit to caching a single entry representation over not caching it.