Tests Style Guide
This page documents conventions for our Launchpad tests. Reviewers make sure that code merged is documented and covered by tests. Following the principles outlined in this document will minimize comments related to test style from reviewers.
Reviewers will block merge of code that is under-documented or under-tested. We have three primary means of documentation:
1. System documentation in doc directories such as lib/lp/<app>/doc.
2. Page tests in lib/lp/<app>/stories.
3. View tests in lib/lp/<app>/browser/tests.
While these types of documentation use the doctest format, which means that they contain testable examples, they are documentation first. So they are not the best place to test many corner cases or various similar possibilities. This is best done in other unit tests or functional tests, whose main objective is to ensure complete test coverage.
Testable Documentation
Testable documentation includes system documentation doctests and page tests.
What do we mean by "testable documentation"? One rule of thumb is the narrative should be similar to what you'd see in an API reference manual. Another trick to ensure the narrative is appropriately detailed is to mentally remove all of the code snippets and see if the narrative stands by itself.
System Documentation
These are doctests located under lib/lp/<app>/doc. They are used to document the APIs and other internal objects. The documentation should explain to a developer how to use these objects and what purpose they serve.
Each modification to lp.<app>.interfaces should be documented in one of these files.
(Each file in that directory is automatically added to the test suite. If you need to configure the test layer in which the test will be run or need to customize the test fixture, you can add special instructions for the file in the system documentation harness in lib/lp/<app>/tests/test_doc.py.)
Use Cases Documentation: Page Tests
We use page tests to document all the use cases that Launchpad satisfies. The narrative in these files should document the use case. That is, they should explain what the user's objective is and how they accomplish it.
The examples in these files use zope.testbrowser to show how the user would navigate the workflow relevant to the use case.
So each addition to the UI should be covered by an appropriate section in a page test.
The page tests do not need to document and demonstrate each and every possible way to navigate the workflow. This can usually be done in a more direct manner by testing the view object directly.
(See PageTestsOrSystemDocs for background discussion on using a system doctest vs a page test.)
Browser View Tests
View objects are usually documented alongside other system objects, in files named *-pages.txt or in lib/lp/<app>/browser/tests/*-views.txt.
The browser tests directory contains both doctest files for documenting the use of browser view classes and unit tests (e.g. test_*.py) for performing unit tests, including covering corner cases. Currently some apps, registry for example, have a large number of doctests in this location that are not strictly testable documentation. Over time these non-documentation doctest files should be converted to unit tests.
All new browser tests that are not testable documentation should be written as unit tests.
Common Conventions
The basic conventions for testable documentation are:
- Example code is wrapped at 78 columns, follows the regular PythonStyleGuide, and is indented 4 spaces.
- Narrative text may be wrapped at either 72 or 78 columns.
- You can use regular Python comments for explanations related to the code and not to the documentation.
- Doctests use Restructured Text (or "ReST", see http://docutils.sourceforge.net/docs/user/rst/quickref.html).
- We use ReST because it's what the Python community have standardized on and because it makes setting up Sphinx to browse all the doctests simpler.
- The file should have a first-level title element. An expansion of the filename is usually a good start. For example, the file bugcomment.txt could have this title:
Bug Comments
============
- Two blank lines are used to separate the start of a new section (a header).
An Example
----------

Launchpad tracks foo and bar elements using the IFooBarSet utility.

>>> from lp.foo.interfaces.bar import IBar, IFoo, IFooBarSet
>>> from lp.testing import verifyObject

>>> foobarset = getUtility(IFooBarSet)
>>> verifyObject(IFooBarSet, foobarset)
True

You use the getFoo() method to obtain an IFoo instance by id:

>>> foo = foobarset.getFoo('aFoo')
>>> verifyObject(IFoo, foo)
True

Similarly, you use the getBar() method to retrieve an IBar instance by id:

>>> bar = foobarset.getBar('aBar')
>>> verifyObject(IBar, bar)
True
Each individual test should be of the form:
>>> do_something()
expected output
This means that something like this isn't considered a test, but test setup (since it doesn't produce any output):
>>> do_something()
For the reason above, the assert statement shouldn't be used in doctests.
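As a runnable illustration of the "statement, then expected output" form, here is a self-contained sketch that uses only the standard library's doctest module (the `documented` text and its example are hypothetical, not Launchpad code):

```python
import doctest

# A hypothetical docstring whose example follows the
# "statement, then expected output" form described above.
documented = '''
Adding two numbers returns their sum:

    >>> 1 + 1
    2
'''

# Parse and run the examples programmatically, much as the test
# runner does for doctest files.
parser = doctest.DocTestParser()
test = parser.get_doctest(documented, {}, 'example', None, 0)
runner = doctest.DocTestRunner(verbose=False)
runner.run(test)

# Zero failures: the expected output serves as both documentation
# and assertion, so no bare `assert` statement is needed.
print(runner.failures)
```

Because each example states its expected output, a failure report shows the expected and actual values side by side, which is exactly what a bare assert would hide.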
Comparing Results
When writing doctest, make sure that if the test fails, the failure message will be helpful to debug the problem. Avoid constructs like:
>>> 'Test' in foo.getText()
True
The failure message for this test will be:
- True
+ False
which isn't helpful at all in understanding what went wrong. This example is a lot more helpful when it fails:
>>> foo.getText()
'...Test...'
For page tests, where the page contains a lot of elements, you should zoom in to the relevant part. You can use the find_main_content(), find_tags_by_class(), find_tag_by_id(), and find_portlet() helper methods. They return BeautifulSoup instances, which makes it easy to access specific elements in the tree.
The new status is displayed in the portlet.

>>> details_portlet = find_portlet(browser.contents, 'Question details')
>>> print(details_portlet.find('b', text='Status:').next.strip())
Needs information
There is also an extract_text() helper that only renders the HTML text:
>>> print(extract_text(
...     find_tag_by_id(browser.contents, 'branchtable')))
main 60 New
firefox klingon 30 Experimental
gnome-terminal junk.contrib 60 New 2005-10-31 12:03:57 ... weeks ago
Read PageTests for other tips on writing page tests.
When to print and when to return values
Doctests mimic the Python interactive interpreter, so generally it's preferred to simply return values and expect to see their string representation. In a few cases though, it's better to print the results instead of just returning them.
The two most common cases of this are None and strings. The interactive interpreter suppresses None return values, so relying on them means the example shows nothing at all. You could compare against None, but the True or False output isn't explicit, so it's almost always better to print values you expect to be None.
Instead of:
>>> should_be_none()
>>> do_something_else()
Use:
>>> print(should_be_none())
None
>>> do_something_else()
For a different reason, it's also usually better to print string results rather than just returning them. Returning the string causes the quotes to be included in the output, while printing the string does not. Those extra quotes are usually noise.
Instead of:
>>> get_some_text()
'foo'
>>> get_some_string()
"Don't care"
Use:
>>> print(get_some_text())
foo
>>> print(get_some_string())
Don't care
Dictionaries and sets
You can't just print the value of a dictionary or a set when that collection has more than one element in it, e.g.
>>> print(my_dict)
{'a': 1, 'b': 2}
The reason is that Python does not guarantee the order of its elements in a dictionary or set, so the printed representation of a dictionary is indeterminate. In page tests, there's a pretty() global which is basically exposing Python's pretty printer, and you can use it safely:
>>> pretty(my_dict)
{'a': 1, 'b': 2}
Though it's a bit uglier, you can also print the sorted items of a dictionary:
>>> sorted(my_dict.items())
[('a', 1), ('b', 2)]
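Outside page tests, where the pretty() global may not be available, the standard library's pprint module gives the same deterministic behaviour, since it sorts dictionary keys before printing. A minimal sketch (my_dict is a stand-in for your own data):

```python
from pprint import pprint, pformat

# An example dictionary; insertion order deliberately differs from
# sorted key order.
my_dict = {'b': 2, 'a': 1}

# pprint() sorts dictionary keys, so the output is deterministic.
pprint(my_dict)

# pformat() returns the same text as a string, which is handy when a
# helper needs the pretty-printed form without writing to stdout.
assert pformat(my_dict) == "{'a': 1, 'b': 2}"
```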
Global State
Be especially careful of test code that changes global state. For example, we were recently bitten by code in a test that did this:
socket.setdefaulttimeout(1)
While that may be necessary for the specific test, it's important to understand that this code changes global state and thus can adversely affect all of our other tests. In fact, this code caused intermittent and very difficult-to-debug failures that mucked up PQM for many unrelated branches.
The guideline then is this: If code changes global state (for example, by monkey-patching a module's globals) then the test must be sure to restore the previous state, either in a try-finally clause, or at the end of the doctest, or in the test's tearDown hook.
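For instance, the socket timeout example above could be made safe with a try-finally that restores the previous value. This is a minimal standalone sketch of the pattern, not actual Launchpad test code:

```python
import socket

# Save the current global state before changing it.
original_timeout = socket.getdefaulttimeout()
socket.setdefaulttimeout(1)
try:
    # ... exercise the code that needs the short timeout here ...
    pass
finally:
    # Restore the previous global state so other tests are unaffected.
    socket.setdefaulttimeout(original_timeout)
```

In a test case class, the equivalent is to call self.addCleanup(socket.setdefaulttimeout, original_timeout) immediately after changing the value, so the restoration happens even if the test fails midway.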
Style to Avoid
A very important consideration is that documentation tests are really documentation that happens to be testable. So, the writing style should be appropriate for documentation. It should be affirmative and descriptive. There shouldn't be any phrases like:
- Test that...
- Check that...
- Verify that...
- This test...
While these constructs may help the reader understand what is happening, they only have indirect value as documentation. They can usually be replaced by simply stating what the result is.
For example:
Test that the bar was added to the foo's related_bars:

>>> bar in foo.related_bars
True
Can be replaced by:
After being linked, the bar is available in the foo's related_bars attribute:

>>> bar in foo.related_bars
True
Also, use of "should" or "will" can usually be replaced by the present tense to make the style affirmative.
For example:
The bar not_a_foo attribute should now be set:

>>> bar.not_a_foo
True
Can be replaced by:
The bar not_a_foo attribute is set after this operation:

>>> bar.not_a_foo
True
A good rule of thumb to know whether the narrative style works as documentation is to read the narrative as if the code examples were not there. If the text still makes sense on its own, the style is probably good.
Using Sample Data
If possible, avoid using the existing sample data in tests, apart from some basic objects, like users. Sample data is good for demonstrating the UI, but it can make tests harder to understand, since it requires knowledge of the properties of the used sample data. Using sample data in tests also makes it harder to modify the data.
If you do use sample data in the test, assert your expectations to avoid subtle errors if someone modifies it. For example:
Anonymous users can't see a private bug's description.

>>> private_bug = getUtility(IBugSet).get(5)
>>> private_bug.private
True
>>> login(ANONYMOUS)
>>> private_bug.description
Traceback (most recent call last):
...
Unauthorized:...
When using fake domains and especially fake email addresses, wherever possible use the example.{com,org,net} domains, e.g. aperson@example.com. These are guaranteed by Internet standard never to exist, so it can't be possible to accidentally spam them if something goes wrong on our end.
Fixtures and Helpers
Sometimes a lot of code is needed to set up a test, or to extract the relevant information in the examples. It is usually a good idea to factor this code into functions that can be documented in the file itself (when the function will only be used in that file), or even better, moved into a test helper module from which you can import.
These helpers currently live in lib/lp/testing. New helpers should go there, unless they're very specific to a particular corner of the application; in such cases you can use something like lp.foo.testing.
Functional and Unit Tests
Complete test coverage without impairing documentation often requires dedicated functional or unit tests. In Launchpad, Python test cases are used for these types of tests. You may encounter legacy code that uses doctests for functional testing. If you do, please consider converting it to a Python test case.
Functional tests are found in the tests subdirectory of each directory containing code under test.
Python Test Cases
Although Python test cases are not documentation, they must still be human-readable. So:
- Keep the test short and concise.
- Stick to the "setup, exercise, assert" pattern; in particular, avoid "exercise, assert, exercise some more, assert".
- State in the docstring of each test case what is being tested. As a special case for test methods, a comment block at the beginning of the method is an acceptable substitute for a docstring. Please observe "Style to Avoid", as explained above.
- Organize related test cases in the same class. Explain test objectives in the class docstring.
- When asserting equality, use the form assertEqual(expected_results, actual_results, "...") (the third argument is optional; use it if failure messages would otherwise be particularly unclear).
- Make sure that each assert fails with an appropriate error message explaining what is expected. lp.testing.TestCase and TestCaseWithFactory are derived from testtools.TestCase and therefore produce good error messages; only some cases warrant an explicit error message.
- For example, this:

  self.assertTrue('aString' in result)

  could be replaced by:

  self.assertIn('aString', result)
- Consider using testtools matchers where reasonable. These can often improve failure messages so that they show more information in one go, which can be useful when debugging mysterious failures.
For example, instead of this:
self.assertEqual('expected', obj.foo)
self.assertEqual('more expected', obj.bar)
prefer this:
self.assertThat(obj, MatchesStructure.byEquality(
    foo='expected', bar='more expected'))
In general, you should follow Launchpad coding conventions (see PythonStyleGuide), however when naming test methods:
- Use PEP 8 names, e.g. test_for_my_feature().
- When testing a specific Launchpad method, a mix of PEP 8 and camel case is used, e.g. test_fooBarBaz().
- When testing alternatives for a Launchpad method, use this style: test_fooBarBaz_with_first_alternative(), test_fooBarBaz_with_second_alternative(), etc.
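Putting these conventions together, a minimal self-contained sketch of a Python test case might look like the following (the TestGreeting class and its behaviour are hypothetical, not Launchpad code):

```python
import unittest


class TestGreeting(unittest.TestCase):
    """The greeting helper produces a personalized message."""

    def test_greeting_includes_name(self):
        # Setup.
        name = 'Ada'
        # Exercise.
        result = 'Hello, %s!' % name
        # Assert: the expected value comes first in assertEqual,
        # and assertIn gives a clear failure message on its own.
        self.assertEqual('Hello, Ada!', result)
        self.assertIn('Ada', result)


# Run the test case directly; in Launchpad the test runner would
# discover it instead.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestGreeting)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```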
How To Use the Correct Test URL
When tests need to connect to the application server instance under test, you must ensure that they use a URL with the correct port for that instance. Here's how to do that.
The config instance has an API which allows the correct URL to be determined. The API is defined on CanonicalConfig and as a convenience is available as a class method on BaseLayer.
def appserver_root_url(self, facet='mainsite', ensureSlash=False):
    """Return the correct app server root url for the given facet."""
Code snippets for a number of scenarios are as follows.
Doc Tests
>>> from lp.testing.layers import BaseLayer
>>> root_url = BaseLayer.appserver_root_url()
>>> browser.open(root_url)
Unit Tests
class TestOpenIDReplayAttack(TestCaseWithFactory):

    layer = AppServerLayer

    def test_replay_attacks_do_not_succeed(self):
        browser = Browser(mech_browser=MyMechanizeBrowser())
        browser.open('%s/+login' % self.layer.appserver_root_url())