Diff for "Testing"


Differences between revisions 3 and 36 (spanning 33 versions)
Revision 3 as of 2009-10-09 03:12:43
Size: 664
Editor: mwhudson
Comment:
Revision 36 as of 2020-07-16 16:48:22
Size: 5075
Comment: Everything is still in progress
Deletions are marked like this. Additions are marked like this.
Line 1: Line 1:
||<tablestyle="width: 100%;" colspan=3 style="background: #2a2929; font-weight: bold; color: #f6bc05;">This page is an introduction to writing and running tests. All pages dedicated to testing can be found in CategoryTesting. ||

||<tablestyle="float:right; font-size: 0.9em; width:30%; background:#F1F1ED; margin: 0 0 1em 1em;" style="padding:0.5em;"><<TableOfContents>>||

Line 3: Line 8:
'''''This page is very much still in progress.''''' You might also be interested in [[Debugging]] Launchpad.
Line 5: Line 10:
The normal pattern for testing your changes is to run the tests you ''think'' will be affected locally, and then (possibly just before submission) run '''all''' the tests [[EC2Test|in the cloud]] before landing your changes on [[Trunk|trunk]].
== General usage ==

The normal pattern for testing your changes is to run the tests you ''think'' will be affected locally, but fundamentally we rely on post-merge testing by [[http://lpbuildbot.canonical.com/|buildbot]].

== Iterating with testrepository ==

{{{
apt-get install testrepository
#cd $yourlaunchpaddevdir
testr init
}}}

Don't worry about creating a .testr.conf file; the defaults created for you work fine.

To run all the tests:
{{{testr run}}}

To run an individual test using the `-t PATTERN` option:
{{{testr run -- -t foo}}}

To see the current known failures:
{{{testr failing}}}

To run just the known failing tests:
{{{testr run --failing}}}

You can load the log file from ec2 test runs into your local repository. To do so, you need to uncompress the file first, so

{{{gunzip < attachment.log.gz | testr load}}}, or
{{{zcat < attachment.log.gz | testr load}}}

(where {{{attachment.log.gz}}} is the log file attached to the ec2 email, whatever its actual name) will do the trick. After a successful load, you will get a line like {{{id: 0 tests: 8956 failures: 5 skips: 23}}}. The word on the street is that you don't have to worry about the "skips" part of that line. To re-run the failing tests:

{{{testr run --failing}}}

To see the currently failing tests:

{{{testr failing}}}

testr is under active development, and bug reports and patches are accepted :).

== Running old skool ==
Line 9: Line 55:
You can see all the options you have by running {{{./bin/test --help}}}, but usually you will run {{{./bin/test -vvct $pattern}}} where {{{$pattern}}} is a regular expression that is used to filter the tests that will be run.
You can see all the options you have by running {{{./bin/test --help}}}.

Usually you will run {{{./bin/test -vvct PATTERN}}} where {{{PATTERN}}} is a regular expression that is used to filter the tests that will be run.

You can use '!PATTERN' to match all tests not matching that pattern.
For example, to run all tests except the ones in the Windmill layer you can use:
{{{
./bin/test -vvc --layer '!Windmill'
}}}
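The negation rule can be sketched in a few lines of Python (this is an illustration of the behaviour, not the test runner's actual implementation; the helper name is invented):

```python
import re


def layer_selected(layer_name, pattern):
    """Illustrative sketch: a leading '!' inverts the regex match.

    Invented helper, not Launchpad's real layer-filtering code.
    """
    if pattern.startswith("!"):
        return re.search(pattern[1:], layer_name) is None
    return re.search(pattern, layer_name) is not None


# '!Windmill' keeps ordinary layers and drops Windmill ones.
print(layer_selected("DatabaseFunctionalLayer", "!Windmill"))  # True
print(layer_selected("WindmillLayer", "!Windmill"))            # False
```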

== Speed up the tests ==

Since the Librarian and MemCache do not change often and take a long time to start up and shut down, ./bin/test can leave them running by using this command:
{{{
LP_PERSISTENT_TEST_SERVICES=1 ./bin/test PATTERN
}}}

You can kill them using {{{./bin/kill-test-services}}}.

When running tests written in Python files, and you only want to run the tests from a single file, you can speed things up by specifying the full path to the Python file:
{{{
LP_PERSISTENT_TEST_SERVICES=1 ./bin/test PATH/TO/PYTHON/TEST/FILE.py
}}}

See [[https://lists.launchpad.net/launchpad-dev/msg02780.html|this mail]] for more.

== Headless Windmill tests ==

 * If you are running Windmill tests on a machine without an X server, you can use xvfb-run to run WindmillTests:
{{{
xvfb-run -s '-screen 0 1024x768x24' bin/test YOUR_TEST_ARGUMENTS
}}}

== Testing Launchpad Translations ==

 * Rosetta admin user - translations-deity@example.com
 * Pagetest browser setup for Rosetta Administrators {{{rosetta_admin_browser = setupRosettaExpertBrowser()}}}
 * Pagetest browser setup for Ubuntu Translation Administrators {{{ubuntu_admin_browser = setupDTCBrowser()}}}
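In a pagetest story, these helpers produce a privileged browser object that the rest of the test drives. A minimal illustrative fragment (the URL is a placeholder, not a real page):

```
>>> rosetta_admin_browser = setupRosettaExpertBrowser()
>>> rosetta_admin_browser.open('http://translations.launchpad.dev/...')
```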

= Performance and stress tests =

== Populate the db ==

To add objects into the database you can use:

{{{
$ env LP_DBNAME="launchpad_dev" make iharness
from canonical.lp import initZopeless
zl = initZopeless()
#
# use the factory here
#
zl.commit()
}}}
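Inside that harness, the "use the factory here" step could look something like the following sketch (the factory class and method names are assumptions based on Launchpad's test object factory, not verified against this exact setup):

```
# Hypothetical factory usage inside the iharness session:
factory = LaunchpadObjectFactory()
person = factory.makePerson()    # create a sample Person row
product = factory.makeProduct()  # create a sample Product row
zl.commit()                      # persist the new rows
```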

= Writing tests =

For each part of an application there are two levels of testing:
 * unit testing
  * comprehensive testing, including corner cases
  * written in Python files (try to avoid doctest)
 * smoke/functional/integration testing
  * testing normal use cases
  * written using doctest format
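The two styles can be contrasted with a small self-contained sketch; the function and its tests are invented for illustration (real tests live in the per-application directories described below):

```python
import doctest
import unittest


def slugify(title):
    """Lower-case a title and replace spaces with dashes.

    The docstring doubles as a doctest, the format used for
    smoke/functional/integration tests:

    >>> slugify("Testing Launchpad")
    'testing-launchpad'
    """
    return title.lower().replace(" ", "-")


class TestSlugify(unittest.TestCase):
    # Unit tests live in plain Python files and cover corner cases.
    def test_basic(self):
        self.assertEqual(slugify("Testing Launchpad"), "testing-launchpad")

    def test_empty_string(self):
        self.assertEqual(slugify(""), "")
```

A runner such as {{{./bin/test}}} (or plain {{{python -m unittest}}}) collects the TestCase, while {{{doctest.testmod()}}} runs the docstring example.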

== Model ==
 1. Unit testing
  * Files located in lib/lp/<application>/scripts/tests
 1. Integration testing
  * Files located in lib/lp/<application>/doc

== View ==
 1. Unit/integration testing
  * Files located in lib/lp/<application>/browser/tests
  * More details: ViewTests
 1. Smoke testing
  * Files located in lib/lp/<application>/stories
  * More details: PageTests

== Javascript ==
 1. Unit testing
  * More details: TestingJavaScript JavascriptUnitTesting
 1. XHR integration testing
  * Use sparingly.
  * We use YUI tests with a full appserver behind it.
  * See standard_yuixhr_test_template.js and standard_yuixhr_test_template.py in the root of the Launchpad tree.

== Scripts ==
  * Files located in lib/lp/<application>/scripts/tests

== API ==
  * Files located in lib/lp/<application>/stories/webservices
  * More details: TestingWebServices


----
CategoryTesting


Testing (last edited 2020-07-16 16:51:31 by doismellburning)