Database/Performance


Revision 6 as of 2011-01-26 04:31:09


Poor query times - looks right, takes ages

Normally, the simplest form of the query is the fastest.

Checklist of known problems

  1. subselects, EXISTS, IN - these can be useful for optimizing particular edge cases, but can degrade performance in the normal case.
  2. database views. These used to be useful to keep our code sane, but it is now clearer to use Storm to express the view logic from the client side. Database views now serve little purpose for us except to hide unnecessary joins that will degrade performance.
  3. bad query plans generated on the DB server - talk to a team lead to get an EXPLAIN ANALYZE done on staging, or to Stuart or a LOSA to get the same done on production (even if the staging plan looks OK, it's important to check production too).
    • Bad plans can happen due to out-of-date statistics or corner cases; sometimes rewriting the query to be simpler or slightly different can help. Specific things to watch out for are nested joins and unneeded tables (which can dramatically change the lookup).
  4. fat indices - if the explain looks sensible, talk to Stuart about this.
  5. missing indices - check that the query should be indexable (and if in doubt chat to Stuart).
  6. using functions in ORDER BY: calling functions on every row of an intermediate table. If a sort cannot be answered by iterating an index, PostgreSQL generates an intermediate table containing all the rows that match the constraints and sorts it in memory; functions that affect the sort order have to be evaluated for every row before the sort - and thus before any LIMIT takes effect.
  7. Querying for unrelated tables. This is quite possibly caused by prejoins, or by prepopulation of derived attributes. Look for a code path that is narrower, or pass down a hint of some sort about the data you need so the actual query can be more appropriate. Sometimes more data is genuinely needed but still messes up the query: consider issuing a later, separate query rather than prejoining, e.g. using the pre_iter_hook of DecoratedResultSet to populate the Storm cache.
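
Checklist item 6 can be demonstrated with SQLite from the Python standard library (the planner details differ from PostgreSQL, but the principle - a function in ORDER BY defeats index-ordered retrieval and forces a sort before LIMIT - is the same). The table and index names here are invented for the example.

```python
# Sketch: ORDER BY on a raw indexed column vs. ORDER BY on a function
# of that column. SQLite stands in for PostgreSQL purely to make the
# example self-contained; table/index names are made up.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE person (name TEXT)")
conn.execute("CREATE INDEX person_name_idx ON person (name)")

def plan(sql):
    """Return the query plan as one string."""
    return " ".join(row[-1] for row in
                    conn.execute("EXPLAIN QUERY PLAN " + sql))

# Ordering by the raw column can walk the index in order, so no sort
# step is needed and a LIMIT can stop early.
indexed = plan("SELECT name FROM person ORDER BY name LIMIT 10")

# Ordering by a function of the column forces the function to be
# evaluated for every matching row, and the rows to be sorted in a
# temporary structure, before the LIMIT applies.
unindexed = plan("SELECT name FROM person ORDER BY lower(name) LIMIT 10")

print(indexed)    # mentions the index, no temp b-tree
print(unindexed)  # mentions a temp b-tree for the ORDER BY
```

The same check on PostgreSQL is the EXPLAIN ANALYZE step from item 3: a `Sort` node above a sequential scan, where you expected an index scan, is the signature of this problem.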

Many small queries [in a webapp context] -- Late Evaluation

Databases work best when operating on sets of data, not individual objects - but we write in Python, which is procedural, and we define per-object code paths.

One particular trip-up that can occur is with related and derived data.

Consider:

def get_finished_foos(self):
    # Each call is one round trip to the database.
    return Store.of(self).find(
        Foo, And(Foo.me == self.id, Foo.finished == True))

This performs fine for a single object, but if you call it in a loop over even as few as 30 or 40 objects you will cause a large amount of work - 30 to 40 separate round trips to the database.
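
A toy model of the pattern, with the round trips made countable (all the names here are invented for the illustration; a real FakeStore would be a Storm Store issuing SQL):

```python
# One query per object vs. one query per set. The "database" is a
# plain list and a counter stands in for the network round trips.
class FakeStore:
    """Stands in for the database; counts round trips."""
    def __init__(self, foos):
        self.foos = foos          # list of (owner_id, finished) pairs
        self.round_trips = 0

    def find_finished(self, owner_id):
        self.round_trips += 1     # one query per call
        return [f for f in self.foos if f == (owner_id, True)]

    def find_finished_for_all(self, owner_ids):
        self.round_trips += 1     # one query for the whole set
        wanted = set(owner_ids)
        return [f for f in self.foos if f[0] in wanted and f[1]]

store = FakeStore([(i, True) for i in range(40)])
owners = list(range(40))

# Per-object access: 40 round trips.
for owner in owners:
    store.find_finished(owner)
print(store.round_trips)  # 40

# Set-based access: one more round trip covers all 40 owners.
store.find_finished_for_all(owners)
print(store.round_trips)  # 41
```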

It's much better to prepopulate a cache of these finished_foos when you request the owning object in the first place, when you know that you will need them.

To do this, use a tuple query with Storm, and assign the related objects to a cached attribute which your method can return. For attributes, @cachedproperty('_foo_cached') can be used to do this in combination with a DecoratedResultSet.
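
A minimal stand-in for the pattern (this is a sketch, not Launchpad's actual cachedproperty implementation; the attribute and class names are invented): the decorator stores the computed value on the instance under the named attribute, so a prepopulation step such as a DecoratedResultSet pre_iter_hook can fill that attribute ahead of time and the property then never has to hit the database.

```python
# Sketch of @cachedproperty('_foo_cached'): first access computes and
# stores; a prepopulated cache short-circuits the query entirely.
def cachedproperty(cache_name):
    def decorator(func):
        def getter(self):
            if not hasattr(self, cache_name):
                setattr(self, cache_name, func(self))
            return getattr(self, cache_name)
        return property(getter)
    return decorator

class Owner:
    def __init__(self):
        self.queries_issued = 0

    @cachedproperty('_finished_foos_cached')
    def finished_foos(self):
        # Fallback path: in real code this would issue one query.
        self.queries_issued += 1
        return ["foo-from-query"]

# Prepopulated (as a pre_iter_hook would do): no query is issued.
owner = Owner()
owner._finished_foos_cached = ["foo-from-batch"]
print(owner.finished_foos, owner.queries_issued)  # ['foo-from-batch'] 0

# Not prepopulated: first access falls back to the (single) query.
other = Owner()
print(other.finished_foos, other.queries_issued)  # ['foo-from-query'] 1
```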

Be sure to clear these caches with a Storm invalidation hook, to avoid test suite fallout. Objects are not reused between requests on the appservers, so we are generally safe there. (Our Storm and SQLBase classes within the Launchpad tree have these hooks, so you only need to invalidate manually if you are using Storm directly.)
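
The shape of such a hook, as a sketch (Storm calls a __storm_invalidated__ method on objects that define one when it invalidates them, e.g. at transaction boundaries; the cache attribute name below is invented for the example):

```python
# Sketch: drop the prepopulated cache on invalidation so the next
# access re-queries instead of returning stale data.
class Owner:
    def __storm_invalidated__(self):
        # Remove the cache attribute if present; harmless if absent.
        self.__dict__.pop('_finished_foos_cached', None)

owner = Owner()
owner._finished_foos_cached = ["stale"]
owner.__storm_invalidated__()   # what Storm would do on invalidation
print(hasattr(owner, '_finished_foos_cached'))  # False
```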

A word of warning, too: utilities will often get in the way of optimising this. :)