← Revision 35 as of 2022-02-11 15:19:54
Comment: update link
||<tablestyle="float:right; font-size: 0.9em; width:40%; background:#F1F1ED; margin: 0 0 1em 1em;" style="padding:0.5em;"><<TableOfContents>>||

= Tests Style Guide =

This page documents current conventions for our Launchpad tests. Reviewers make sure that merged code is documented and covered by tests. Following the principles outlined in this document will minimize test-style comments from reviewers. Reviewers will block the merge of code that is under-documented or under-tested.

We have two primary means of documentation:

 1. System documentation under `lib/canonical/launchpad/doc`.
 1. Page tests in `lib/canonical/launchpad/pagetests`.

While these two types of documentation use the doctest format, which means that they contain testable examples, they are documentation first. So they are not the best place to test many corner cases or various similar possibilities. That is best done in other unit tests or functional tests, which have ensuring complete test coverage as their main objective.

== Testable Documentation ==

Testable documentation includes system documentation doctests and page tests.

=== System Documentation ===

These are doctests located under `lib/canonical/launchpad/doc`. They are used to document the APIs and other internal objects. The documentation should explain to a developer how to use these objects and what purpose they serve. Each modification to `canonical.launchpad.interfaces` should be documented in one of these files.

(Each file in that directory is automatically added to the test suite. If you need to configure the test layer in which the test will be run, or need to customize the test fixture, you can add special instructions for the file in the system documentation harness in `lib/canonical/launchpad/ftests/test_system_documentation.py`.)

=== Use Cases Documentation: Page Tests ===

We use page tests to document all the use cases that Launchpad caters for. The narrative in these files should document the use case.
That is, they should explain what the user's objective is and how they accomplish it. The examples in these files use `zope.testbrowser` to show how the user would navigate the workflow relevant to the use case. So each addition to the UI should be covered by an appropriate section in a page test.

The page tests do not need to document and demonstrate each and every possible way to navigate the workflow. This can usually be done in a more direct manner by testing the view object directly. View objects are usually documented that way, along with other system objects, in files named `*-pages.txt`. (See PageTestsOrSystemDocs for background discussion on using a system doctest vs. a page test.)

=== Common Conventions ===

The basic conventions for testable documentation are:

 * ASCII only. Unicode strings can be converted to a readable ASCII representation using {{{my_string.encode('doctest')}}}.
 * Example code is wrapped at 78 columns, follows the regular PythonStyleGuide, and is indented 4 spaces.
 * Narrative text may be wrapped at either 72 or 78 columns.
 * You can use regular Python comments for explanations related to the code rather than to the documentation.
 * New doctests use reStructuredText (or "ReST", see http://docutils.sourceforge.net/docs/user/rst/quickref.html). Old doctests use Moin headers; you should stay consistent within the file, so either convert the entire document to ReST or stick with Moin within that file.
 * The file should have a first-level title element. An expansion of the filename is usually a good start. For example, the file bugcomment.txt could have this title:
 {{{
============
Bug Comments
============
}}}
 * Two blank lines are used to separate the start of a new section (a header).
 {{{
An Example
==========

Launchpad tracks foo and bar elements using the IFooBarSet utility.
    >>> from canonical.launchpad.interfaces import IBar, IFoo, IFooBarSet
    >>> from canonical.launchpad.webapp.testing import verifyObject

    >>> foobarset = getUtility(IFooBarSet)
    >>> verifyObject(IFooBarSet, foobarset)
    True

You use the getFoo() method to obtain an IFoo instance by id:

    >>> foo = foobarset.getFoo('aFoo')
    >>> verifyObject(IFoo, foo)
    True

Similarly, you use the getBar() method to retrieve an IBar instance by id:

    >>> bar = foobarset.getBar('aBar')
    >>> verifyObject(IBar, bar)
    True
}}}

Each individual test should be of the form:

{{{
>>> do_something()
expected output
}}}

This means that something like the following isn't considered a test, but test setup (since it doesn't produce any output):

{{{
>>> do_something()
}}}

For the same reason, the assert statement shouldn't be used in doctests.

=== Comparing Results ===

When writing doctests, make sure that if a test fails, the failure message will help in debugging the problem. Avoid constructs like:

{{{
>>> 'Test' in foo.getText()
True
}}}

The failure message for this test will be:

{{{
- True
+ False
}}}

which isn't helpful at all in understanding what went wrong. This example is a lot more helpful when it fails:

{{{
>>> foo.getText()
'...Test...'
}}}

For page tests, where the page contains a lot of elements, you should zoom in to the relevant part. You can use the `find_main_content()`, `find_tags_by_class()`, `find_tag_by_id()`, and `find_portlet()` helper methods. They return `BeautifulSoup` instances, which make it easy to access specific elements in the tree.

{{{
The new status is displayed in the portlet.

    >>> details_portlet = find_portlet(browser.contents, 'Question details')
    >>> print details_portlet.find('b', text='Status:').next.strip()
    Needs information
}}}

There is also an `extract_text()` helper that renders only the HTML text:

{{{
    >>> print extract_text(
    ...     find_tag_by_id(browser.contents, 'branchtable'))
    main 60 New
    firefox klingon 30 Experimental
    gnome-terminal junk.contrib 60 New 2005-10-31 12:03:57 ... weeks ago
}}}

Read PageTests for other tips on writing page tests.

=== When to print and when to return values ===

Doctests mimic the Python interactive interpreter, so it's generally preferred to simply return values and expect to see their string representation. In a few cases, though, it's better to `print` the results instead of just returning them. The two most common cases are `None` and strings.

The interactive interpreter suppresses `None` return values, so relying on this makes the doctest harder to follow. You could compare against `None`, but the resulting `True` or `False` output isn't explicit, so it's almost always better to print values you expect to be `None`. Instead of:

{{{
>>> should_be_none()
>>> do_something_else()
}}}

Use:

{{{
>>> print should_be_none()
None
>>> do_something_else()
}}}

For a different reason, it's also usually better to print string results rather than just returning them. Strings can often be either 8-bit strings or unicodes, and usually for the test's purposes you don't care which. Also, returning the string causes the quotes to be included in the output, while printing the string does not; those extra quotes are usually noise. Instead of:

{{{
>>> get_some_unicode()
u'foo'
>>> get_some_string()
"Don't care"
}}}

Use:

{{{
>>> print get_some_unicode()
foo
>>> print get_some_string()
Don't care
}}}

This also future-proofs you against changes that may today return an 8-bit string but will in the future return a unicode. There are some situations where you actually do care whether the return value is an 8-bit string or a unicode. You might decide in those cases to return the results instead of printing them, but also consider using an `isinstance()` test instead.
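Where the string's type actually matters, the `isinstance()` suggestion can be sketched like this (a minimal illustration with made-up values, written in modern Python syntax where the relevant pair is `bytes`/`str`; in the Python 2 of this guide it would be `str`/`unicode`):

```python
# Made-up stand-ins for real API results whose type we care about.
byte_result = b"some bytes"
text_result = "some text"

# An isinstance() check states the type expectation explicitly; here
# the bare True/False output is acceptable because the type itself is
# the whole point of the test.
print(isinstance(byte_result, bytes))  # True
print(isinstance(text_result, str))    # True
```

This keeps the type assertion visible in the narrative without relying on the quoting details of the repr.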
Also, due to some limitations in doctest, if your unicode strings contain non-ASCII characters, you may crash the doctest infrastructure. In that case, returning the value or using its `repr()` is better. Use your best judgement here.

=== Dictionaries and sets ===

You can't just print the value of a dictionary or a set when that collection has more than one element in it, e.g.

{{{
>>> print my_dict
{'a': 1, 'b': 2}
}}}

The reason is that Python does not guarantee the order of the elements in a dictionary or set, so the printed representation of a dictionary is indeterminate. You have a few choices here. You could use Python's `pretty` module, except that in Python 2.4 this also isn't guaranteed to give you a sorted order (this has been fixed in Python 2.5, which we'll move to sometime after the date of this writing, 06-Mar-2009). In page tests there's a `pretty()` global, which basically exposes Python 2.5's pretty printer, and this you can use safely:

{{{
>>> pretty(my_dict)
{'a': 1, 'b': 2}
}}}

'''This function isn't yet available in non-pagetest doctests, though there's no good reason why. Please expose it there too!'''

Though it's a bit uglier, you can also print the sorted items of a dictionary:

{{{
>>> sorted(my_dict.items())
[('a', 1), ('b', 2)]
}}}

=== Global State ===

Be especially careful of test code that changes global state. For example, we were recently bitten by code in a test that did this:

{{{
socket.setdefaulttimeout(1)
}}}

While that may be necessary for the specific test, it's important to understand that this code changes global state and thus can adversely affect all of our other tests. In fact, this code caused intermittent and very difficult to debug failures that mucked up PQM for many unrelated branches.
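A minimal sketch of containing such a change (assuming, hypothetically, that the test really does need a one-second timeout; modern Python syntax) is to capture the old value and restore it in a `finally` clause:

```python
import socket

# Capture the current global default before changing it.
previous_timeout = socket.getdefaulttimeout()
socket.setdefaulttimeout(1)
try:
    # ... exercise the code that needs the short timeout here ...
    pass
finally:
    # Restore the previous global state even if the test fails.
    socket.setdefaulttimeout(previous_timeout)

print(socket.getdefaulttimeout() == previous_timeout)  # True
```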
The guideline then is this: ''If code changes global state (for example, by monkey-patching a module's globals), then the test must be sure to restore the previous state, either in a `try`-`finally` clause, at the end of the doctest, or in the test's `tearDown` hook.''

=== Style to Avoid ===

A very important consideration is that documentation tests are really '''documentation''' that happens to be testable. So the writing style should be appropriate for documentation: affirmative and descriptive. There shouldn't be any phrases like:

 * Test that...
 * Check that...
 * Verify that...
 * This test...

While these constructs may help the reader understand what is happening, they have only indirect value as documentation. They can usually be replaced by simply stating what the result is. For example:

{{{
Test that the bar was added to the foo's related_bars:

    >>> bar in foo.related_bars
    True
}}}

can be replaced by:

{{{
After being linked, the bar is available in the foo's related_bars
attribute:

    >>> bar in foo.related_bars
    True
}}}

Also, "should" or "will" can usually be replaced by the present tense to make the style affirmative. For example:

{{{
The bar not_a_foo attribute should now be set:

    >>> bar.not_a_foo
    True
}}}

can be replaced by:

{{{
The bar not_a_foo attribute is set after this operation:

    >>> bar.not_a_foo
    True
}}}

A good rule of thumb for judging whether the narrative style works as documentation is to read the narrative as if the code examples were not there. If the text still makes sense, the style is probably good.

=== Using Sample Data ===

If possible, avoid using the existing sample data in tests, apart from some basic objects like users. Sample data is good for demonstrating the UI, but it can make tests harder to understand, since it requires knowledge of the properties of the sample data used. Using sample data in tests also makes it harder to modify the data.
If you do use sample data in a test, assert your expectations about it, to avoid subtle errors if someone modifies it. For example:

{{{
Anonymous users can't see a private bug's description.

    >>> private_bug = getUtility(IBugSet).get(5)
    >>> private_bug.private
    True

    >>> login(ANONYMOUS)
    >>> private_bug.description
    Traceback (most recent call last):
    ...
    Unauthorized:...
}}}

When using fake domains and '''especially''' fake email addresses, use the `example.{com,org,net}` domains wherever possible, e.g. `aperson@example.com`. These are reserved by internet standard and guaranteed never to exist, so we can't accidentally spam them if something goes wrong on our end.

=== Fixtures and Helpers ===

Sometimes a lot of code is needed to set up a test, or to extract the relevant information in the examples. It is usually a good idea to factor this code into functions that can be documented in the file itself (when the function will only be used in that file), or better yet, moved into a test helper module from which you can import.

(Current practice is to put these helpers in modules in `canonical.launchpad.ftests`, but shouldn't these be moved to `canonical.launchpad.testing` or `canonical.testing`, like it's done in Zope?)

== Functional and Unit Tests ==

Complete test coverage without impairing documentation often requires dedicated functional or unit tests. These can be written either as regular Python test cases using the `unittest` module, or as doctests. There is no central location for these tests. They are usually found in a `tests` or `ftests` directory alongside the tested module.

(The difference between the two directories is of historical origin. In the past, the `tests` directory contained unit tests and the `ftests` directory contained functional tests. Nowadays the test runner differentiates between the two based on the test layer, not the directory name.)

'''XXX We want to clean this up!
See CleaningUpOurCode'''

=== Doctests ===

You can write your unit tests or functional tests using doctests. These are useful because they tend to make tests easier to read, and doctests excel at comparing output. You will need a harness that adds the doctest to the test suite. Here is the appropriate boilerplate:

{{{
# Copyright 2007 Canonical Ltd.  All rights reserved.

"""Test harness for running the mytest.txt test.

Description of that test.
"""

__metaclass__ = type
__all__ = []

import unittest

from canonical.functional import FunctionalDocFileSuite
from canonical.testing import LaunchpadFunctionalLayer
from canonical.launchpad.ftests.test_system_documentation import (
    default_optionflags, setUp, tearDown)


def test_suite():
    return FunctionalDocFileSuite(
        'mytest.txt', setUp=setUp, tearDown=tearDown,
        optionflags=default_optionflags, package=__name__,
        layer=LaunchpadFunctionalLayer)
}}}

=== Python Test Cases ===

Sometimes it's more convenient to use regular Python test cases: when each test case must be run in isolation, or when there is a lot of code to reuse between tests. (Usually this can also be achieved with doctests, by defining appropriate helpers in the harness and using them in the doctest. We even have doctests that are run against different objects by the harness. See `lib/canonical/launchpad/interfaces/ftests/test_questiontarget.py` and `lib/canonical/launchpad/browser/ftests/test_bugs_fixed_elsewhere.py` for examples.)

Even when using Python test cases, the test should be human-readable. So:

 * Keep the test short and concise.
 * State in the docstring of each test case what is being tested. As a special case for test methods, a comment block at the beginning of the method is an acceptable substitute for a docstring.
 * Organize related test cases in the same class. Explain the test objectives in the class docstring.
 * Make sure that each assert fails with an appropriate error message explaining what is expected.
For example, this:

{{{
self.failUnless('aString' in result)
}}}

should be replaced by:

{{{
self.failUnless('aString' in result, "'aString' not in %s" % result)
}}}

In general, you should follow Launchpad coding conventions (see PythonStyleGuide); however, when naming test methods:

 * Use PEP 8 names, e.g. `test_for_my_feature()`.
 * When testing a specific Launchpad method, a mix of PEP 8 and camel case is used, e.g. `test_fooBarBaz()`.
 * When testing alternatives for a Launchpad method, use this style: `test_fooBarBaz_with_first_alternative()`, `test_fooBarBaz_with_second_alternative()`, etc.

=== Docstring Unit Tests ===

Another alternative for unit tests is to embed the doctest in a method's docstring; however, '''this style is now strongly discouraged'''.

The advantage of this method is that the testing code remains close to the tested code. It also gives an example of the method's usage right in the docstring. The main disadvantage is that it is easy to make the docstring too long. Use this kind of testing only for simple unit tests where the test actually reads well as an example. The whole docstring (including the test) shouldn't be longer than 35 lines and shouldn't require any external fixtures. When it's longer, it's better to transform it into a doctest in a separate file, or a regular Python unit test.

Example of such a test:

{{{
def is_english_variant(language):
    """Return whether the language is a variant of modern English.

    >>> class Language:
    ...     def __init__(self, code):
    ...         self.code = code
    >>> is_english_variant(Language('fr'))
    False
    >>> is_english_variant(Language('en'))
    True
    >>> is_english_variant(Language('en_CA'))
    True
    >>> is_english_variant(Language('enm'))
    False
    """
}}}

You'll also need a test harness to add these tests to the test suite. You'll put a `test_<name of module>.py` file in a `tests` subdirectory. That harness is usually pretty simple:

{{{
# Copyright 2007 Canonical Ltd.  All rights reserved.
"""Test harness for canonical.launchpad.mymodule."""

__metaclass__ = type
__all__ = []

import unittest

from zope.testing.doctest import DocTestSuite

import canonical.launchpad.mymodule


def test_suite():
    suite = unittest.TestSuite()
    suite.addTest(DocTestSuite(canonical.launchpad.mymodule))
    return suite
}}}
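Outside the Launchpad tree, the same harness pattern can be sketched with the standard library's `doctest` module in place of `zope.testing.doctest`. This is an illustrative stand-alone version: the `is_english_variant` here is a simplified stand-in taking plain strings, and a throwaway module object replaces the real `canonical.launchpad.mymodule` import.

```python
import doctest
import types
import unittest


def is_english_variant(code):
    """Return whether the code denotes a variant of modern English.

    >>> is_english_variant('fr')
    False
    >>> is_english_variant('en')
    True
    >>> is_english_variant('en_CA')
    True
    >>> is_english_variant('enm')
    False
    """
    return code == 'en' or code.startswith('en_')


# Stand-in for "import canonical.launchpad.mymodule": a throwaway
# module object that carries the function, with a __test__ mapping so
# DocTestSuite reliably picks up the docstring doctest.
mymodule = types.ModuleType('mymodule')
mymodule.is_english_variant = is_english_variant
mymodule.__test__ = {'is_english_variant': is_english_variant}


def test_suite():
    suite = unittest.TestSuite()
    suite.addTest(doctest.DocTestSuite(mymodule))
    return suite


# Run the suite and report how many docstring tests executed.
result = unittest.TestResult()
test_suite().run(result)
print(result.testsRun, result.wasSuccessful())
```

Running the suite through a plain `unittest.TestResult` confirms that the docstring examples execute and pass.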
#refresh 0 https://launchpad.readthedocs.io/en/latest/reference/tests.html