
Revision 5 as of 2009-05-14 13:17:23


Tests Style Guide

This page documents current conventions for our Launchpad tests. Reviewers make sure that code merged is documented and covered by tests. Following the principles outlined in this document will minimize comments related to test style from reviewers.

Reviewers will block merge of code that is under-documented or under-tested. We have two primary means of documentation:

  1. System documentation under lib/canonical/launchpad/doc.

  2. Page tests in lib/canonical/launchpad/pagetests.

While these two types of documentation use the doctest format, and thus contain testable examples, they are documentation first. They are therefore not the best place to test many corner cases or similar variations. That is best done in other unit tests or functional tests, whose main objective is ensuring complete test coverage.

Testable Documentation

Testable documentation includes system documentation doctests and page tests.

System Documentation

These are doctests located under lib/canonical/launchpad/doc. They are used to document the APIs and other internal objects. The documentation should explain to a developer how to use these objects and what purpose they serve.

Each modification to canonical.launchpad.interfaces should be documented in one of these files.

(Each file in that directory is automatically added to the test suite. If you need to configure the test layer in which the test will be run or need to customize the test fixture, you can add special instructions for the file in the system documentation harness in lib/canonical/launchpad/ftests/test_system_documentation.py.)

Use Cases Documentation: Page Tests

We use page tests to document all the use cases that Launchpad caters for. The narrative in these files should document the use case. That is, they should explain what the user's objective is and how they accomplish it.

The examples in these files use zope.testbrowser to show how the user would navigate the workflow relevant to the use case.

So each addition to the UI should be covered by an appropriate section in a page test.

The page tests do not need to document and demonstrate each and every possible way to navigate the workflow. This can usually be done more directly by testing the view objects themselves. View objects are usually documented that way, alongside other system objects, in files named *-pages.txt.

(See PageTestsOrSystemDocs for background discussion on using a system doctest vs a page test.)

Common Conventions

The basic conventions for testable documentation are:

Each individual test should be of the form:

    >>> do_something()
    expected output

This means that something like this isn't considered a test but test setup, since it doesn't produce any output:

    >>> do_something()

For the reason above, the assert statement shouldn't be used in doctests.
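For instance, instead of an invisible assert, write an expression whose output doctest can check. A minimal standalone sketch of the idea (the names and values are illustrative), driving the doctest machinery directly:

```python
import doctest

# A doctest fragment in which every check produces visible output.
# An assert statement would pass or fail silently, giving doctest
# nothing to compare against.
fragment = """
>>> value = 2 + 3
>>> value == 5
True
"""

# Parse and run the fragment programmatically, as the test runner would.
parser = doctest.DocTestParser()
test = parser.get_doctest(fragment, {}, 'fragment', None, 0)
failed, attempted = doctest.DocTestRunner().run(test)
print(failed, attempted)  # prints: 0 2
```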

Comparing Results

When writing doctests, make sure that if the test fails, the failure message will be helpful in debugging the problem. Avoid constructs like:

    >>> person.displayname == 'Foo Bar'
    True

The failure message for this test will be:

    Expected:
        True
    Got:
        False

which isn't helpful at all in understanding what went wrong. This example is a lot more helpful when it fails:

    >>> print person.displayname
    Foo Bar

For page tests, where the page contains a lot of elements, you should zoom in to the relevant part. You can use the find_main_content(), find_tags_by_class(), find_tag_by_id(), and find_portlet() helper methods. They return BeautifulSoup instances, which makes it easy to access specific elements in the tree.

There is also an extract_text() helper that only renders the HTML text:

Read PageTests for other tips on writing page tests.

When to print and when to return values

Doctests mimic the Python interactive interpreter, so generally it's preferred to simply return values and expect to see their string representation. In a few cases though, it's better to print the results instead of just returning them.

The two most common cases of this are None and strings. The interactive interpreter suppresses None return values, so an expression that returns None produces no output and therefore doesn't really test anything. You could compare the result against None, but a bare True or False in the output isn't explicit, so it's almost always better to print values you expect to be None.

Instead of:

>>> should_be_none()
>>> do_something_else()

Use:

>>> print should_be_none()
None
>>> do_something_else()

For a different reason, it's also usually better to print string results rather than just returning them. Strings can be either 8-bit strings or unicodes, and usually for the test's purposes you don't care which. Also, returning a string causes the quotes to be included in the output, while printing it does not; those extra quotes are usually noise.

Instead of:

>>> get_some_unicode()
u'foo'
>>> get_some_string()
"Don't care"

Use:

>>> print get_some_unicode()
foo
>>> print get_some_string()
Don't care

This also future-proofs you against changes where a method that returns an 8-bit string today returns a unicode in the future.

There are some situations where you actually do care whether the return value is an 8-bit string or a unicode. You might decide in those cases to return the results instead of printing them, but also consider using an isinstance() test instead. Also, due to some limitations in doctest, unicode strings containing non-ascii characters may crash the doctest infrastructure; in that case, returning the value or printing its repr() is better. Use your best judgement here.

Dictionaries and sets

You can't just print the value of a dictionary or a set when the collection has more than one element, as in:

>>> print my_dict
{'a': 1, 'b': 2}

The reason is that Python does not guarantee the order of the elements in a dictionary or set, so their printed representation is indeterminate. You have a few choices here. You could use Python's pprint module, except that in Python 2.4 it also isn't guaranteed to give you a sorted order (this has been fixed in Python 2.5, which we'll move to sometime after the time of this writing, 06-Mar-2009).

In page tests, there's a pretty() global, which exposes Python 2.5's pretty printer; this one you can use safely:

>>> pretty(my_dict)
{'a': 1, 'b': 2}

This function isn't yet available in non-pagetest doctests, though there's no good reason for the omission. Please expose it there too!

Though it's a bit uglier, you can also print the sorted items of a dictionary:

>>> sorted(my_dict.items())
[('a', 1), ('b', 2)]
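The same trick works for sets, which have the same indeterminate ordering as dictionaries. A small sketch (in modern print syntax, with made-up values):

```python
# Sets, like dictionaries, have no guaranteed iteration order,
# so sort them before printing to get deterministic output.
my_set = set(['banana', 'apple', 'cherry'])
print(sorted(my_set))  # prints: ['apple', 'banana', 'cherry']
```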

Global State

Be especially careful of test code that changes global state. For example, we were recently bitten by code in a test that did this:

socket.setdefaulttimeout(1)

While that may be necessary for the specific test, it's important to understand that this code changes global state and thus can adversely affect all of our other tests. In fact, this code caused intermittent and very difficult to debug failures that mucked up PQM for many unrelated branches.

The guideline then is this: If code changes global state (for example, by monkey-patching a module's globals) then the test must be sure to restore the previous state, either in a try-finally clause, or at the end of the doctest, or in the test's tearDown hook.
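As a sketch of that guideline, the socket example above could save and restore the previous value in a try-finally clause (the 1-second timeout is illustrative):

```python
import socket

# Save the current global state before changing it.
old_timeout = socket.getdefaulttimeout()
socket.setdefaulttimeout(1)
try:
    pass  # ... the code that actually needs the short timeout ...
finally:
    # Restore the previous state so later tests see the original value.
    socket.setdefaulttimeout(old_timeout)
```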

Style to Avoid

A very important consideration is that documentation tests are really documentation that happens to be testable. So the writing style should be appropriate for documentation: it should be affirmative and descriptive. There shouldn't be any phrases like "Let's check that...", "Now we will try to...", or "We should see that...".

While these constructs may help the reader understand what is happening, they only have indirect value as documentation. They can usually be replaced by simply stating what the result is.

For example:

    Let's check that the user is now subscribed to the bug.

Can be replaced by:

    The user is now subscribed to the bug.

Also, use of "should" or "will" can usually be replaced by the present tense to make the style affirmative.

For example:

    The method should return the list of subscribers.

Can be replaced by:

    The method returns the list of subscribers.

A good rule of thumb for knowing whether the narrative style works as documentation is to read the narrative as if the code examples were not there. If the text still makes sense, the style is probably good.

Using Sample Data

If possible, avoid using the existing sample data in tests, apart from some basic objects like users. Sample data is good for demonstrating the UI, but it can make tests harder to understand, since it requires knowledge of the properties of the sample data being used. Using sample data in tests also makes the data itself harder to modify.

If you do use sample data in the test, assert your expectations to avoid subtle errors if someone modifies it. For example:

Anonymous users can't see a private bug's description.

    >>> private_bug = getUtility(IBugSet).get(5)
    >>> private_bug.private
    True

    >>> login(ANONYMOUS)
    >>> private_bug.description
    Traceback (most recent call last):
    ...
    Unauthorized:...

When using fake domains, and especially fake email addresses, use the example.{com,org,net} domains wherever possible, e.g. aperson@example.com. These are reserved by Internet standard (RFC 2606) for documentation and testing, so it isn't possible to accidentally spam a real address if something goes wrong on our end.

Fixtures and Helpers

Sometimes a lot of code is needed to set up a test, or to extract the relevant information in the examples. It is usually a good idea to factor this code into functions that can be documented in the file itself (when the function will only be used in that file), or even better, moved into a test helper module from which you can import.

(Current practice is to put these helpers in modules in canonical.launchpad.ftests, but shouldn't these be moved to canonical.launchpad.testing or canonical.testing like it's done in Zope?)

Functional and Unit Tests

Complete test coverage without impairing documentation often requires dedicated functional or unit tests. These can be written either using regular Python test cases using the unittest module, or using doctests.

There is no central location for these tests. They are usually found in a tests or ftests directory alongside the tested module. (The difference between the two directories is of historical origin. In the past, the tests directory contained unit tests and the ftests directory contained functional tests. Nowadays the test runner will differentiate between the two based on the test layer, not on directory name.)

XXX We want to clean this up! See CleaningUpOurCode

Doctests

You can write your unit tests or functional tests using doctests. These are useful because they tend to make tests easier to read, and doctests excel at comparing output.

You will need a harness that will add the doctest to the test suite.

Here is the appropriate boilerplate:

Python Test Cases

Sometimes it's more convenient to use regular Python test cases: when each test case must be run in isolation, or when there is a lot of code to reuse in each test. (Usually this can also be achieved with doctests, by defining appropriate helpers in the harness and using them in the doctest. We even have doctests that are run against different objects by the harness. See lib/canonical/launchpad/interfaces/ftests/test_questiontarget.py and lib/canonical/launchpad/browser/ftests/test_bugs_fixed_elsewhere.py for examples.)

Even when using Python test cases, the test should be human-readable: give each test a clear docstring or comment explaining what behaviour it demonstrates.

In general, you should follow Launchpad coding conventions (see PythonStyleGuide); when naming test methods, however, prefer long, descriptive names that state the behaviour being tested.
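A brief sketch of a readable test case in this spirit (the helper and all names are made up for illustration):

```python
import unittest

def reverse_words(text):
    """Return the words of text in reverse order (illustrative helper)."""
    return ' '.join(reversed(text.split()))

class TestReverseWords(unittest.TestCase):

    def test_reverse_words_reverses_word_order(self):
        # The method name states the behaviour under test, so a failure
        # report is readable on its own.
        self.assertEqual(reverse_words('a b c'), 'c b a')
```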

Docstring Unit Tests

Another alternative for unit tests is to embed the doctest in the method's docstring; however, this style is now strongly discouraged.

The advantage of this method is that the testing code remains close to the tested code. It also gives an example of the method usage right in the docstring.

The main disadvantage of this method is that it is easy to make the docstring too long. Use this kind of testing only for simple unit tests where the test actually reads well as an example. The whole docstring (including the test) shouldn't be longer than 35 lines, and it shouldn't require any external fixtures. When it's longer, it's better to move the test into a doctest in a separate file, or a regular Python unit test.

Example of such a test:
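A sketch of such a test (the function is made up for illustration): the doctest sits in the docstring, doubling as a usage example.

```python
def reverse_words(text):
    """Return the words of `text` in reverse order.

    >>> reverse_words('hello brave world')
    'world brave hello'
    """
    return ' '.join(reversed(text.split()))
```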

You'll also need a test harness to add these tests to the test suite. You'll put a test_<name of module>.py file in a tests subdirectory. That harness is usually pretty simple: