Foundations/Webservice/TestingApiApplications

This page provides a guide to writing tests for Launchpad API clients.

Techniques for Testing API Clients

Point your application at the staging.launchpad.net web service

By writing data to the staging.launchpad.net web service you can observe how your application behaves against a copy of real, live data. You can also observe the application's behaviour as it goes over the network.

The data on staging.launchpad.net is copied from the production database once a week. Each copy overwrites everything on staging, so any changes you have made there will be lost.

Pros

 * You test against a copy of real, live data.
 * You can watch your application's real network behaviour.
 * Mistakes cannot touch production data.

Cons

 * Staging data is overwritten once a week, so nothing you write there persists.
 * The staging server is periodically unavailable.
 * Tests that go over the network are slow.

An Example

You will want to provide developers, and operators too, with an easy way to switch to the staging service. A command-line switch is one standard Unix way to accomplish this:

import sys
from launchpadlib.launchpad import Launchpad
from optparse import OptionParser

client_name = 'just testing'


def main(argv):
  parser = OptionParser()
  parser.add_option('-S', '--service', default='production',
                    help="Set the Launchpad web service instance this script will "
                         "read and write to [default: %default]")

  # argv[0] is the script name, so parse only the real arguments.
  options, args = parser.parse_args(argv[1:])

  print "Connecting to the Launchpad API {0} service.".format(options.service)
  launchpad = Launchpad.login_with(client_name, options.service)


if __name__ == '__main__':
  main(sys.argv)

An example usage:

$ myscript --service=staging
Connecting to the Launchpad API staging service.

Notes

The staging server always runs code that is either released or soon to be released on the Launchpad production servers, so its API version will be equal to or greater than the production API version.
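
If a newer API on staging would be a problem for your client, you can pin the web service version explicitly when logging in. A minimal sketch, reusing the 'just testing' client name from the examples above; launchpadlib's version keyword selects the web service version:

from launchpadlib.launchpad import Launchpad

# Pin the web service version so the client behaves the same against
# staging (which may already expose a newer version) and production.
launchpad = Launchpad.login_with('just testing', 'staging', version='1.0')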

The staging server goes down periodically (for example, while the weekly database copy is loaded), so be prepared for it to be unavailable.


Give your application read-only access to the server

By giving your application read-only access to the server you can let developers and users test it against live data without any fear of destroying that data. Because nothing is written, the tests can also be run multiple times in a row.

Pros

 * Developers and users can test against live data with no risk of destroying it.
 * The tests can be run repeatedly, since nothing on the server changes.

Cons

 * The code paths that write data are never actually exercised.

An Example

A convenient trick for enforcing read-only access is to log in to the API anonymously: anonymous users are not allowed to write data to Launchpad. Any point in your code that then tries to write to the web service will raise an error, and you can modify the code to handle the read-only mode gracefully.

As before, we will use a command-line switch to make the functionality accessible to both developers and operators.

import sys
from launchpadlib.launchpad import Launchpad
from optparse import OptionParser

client_name = 'just testing'
service = 'production'


def main(argv):
  parser = OptionParser()
  parser.add_option('-d', '--dry-run', action='store_true', default=False,
                    dest='readonly',
                    help="Only pretend to write data to the server.")

  options, args = parser.parse_args(argv[1:])
  print "Connecting to the Launchpad API {0} service.".format(service)

  # Anonymous logins can read public data but may not write, which makes
  # them a convenient read-only mode.
  if options.readonly:
    api_login = Launchpad.login_anonymously
  else:
    api_login = Launchpad.login_with

  launchpad = api_login(client_name, service)

  me = launchpad.people['my-user-name']
  me.display_name = 'A user who edits through the Launchpad web service.'

  # Calling me.lp_save() unconditionally would raise an error when we are
  # logged in anonymously.  This is the correct way to do it:
  if not options.readonly:
    me.lp_save()
  else:
    print "Skipped saving display name: {0.display_name}".format(me)


if __name__ == '__main__':
  main(sys.argv)
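
An example usage, assuming as before that the script is saved as myscript; the output lines come straight from the print statements above:

$ myscript --dry-run
Connecting to the Launchpad API production service.
Skipped saving display name: A user who edits through the Launchpad web service.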

Notes

An anonymous login can only read public data: attempts to read private data will fail, just as attempts to write do.


Have your application generate its own test data on the staging server

This is an advanced technique that builds on the first one, testing your application against staging. You have your application generate its own test data on the staging server, then run its tests against that data.

Pros

 * The tests control their own data, so they do not depend on whatever happens to be in the staging database.
 * Data creation and cleanup can be fully automated.

Cons

 * Some objects, such as bugs, cannot be deleted, so cleanup is never perfect.
 * Tests that go over the network are slow.

An Example

The creation and deletion of test data can be automated using the Python unittest framework. We will also make use of the testtools Python helper library to save us some work.

from launchpadlib.launchpad import Launchpad
from testtools import TestCase

client_name = 'just testing'


class DataWritingTestCase(TestCase):

  def setUp(self):
    super(DataWritingTestCase, self).setUp()

    # Create a launchpadlib instance that has access to staging.
    self.launchpad = Launchpad.login_with(client_name, 'staging')
    self.ubuntu = self.launchpad.distributions['ubuntu']

    # Create some sample data.  We will use a unique string for the bug's
    # description, conveniently provided by testtools.TestCase.getUniqueString()
    self.bug = self.launchpad.bugs.createBug(
      description=self.getUniqueString(),
      target=self.ubuntu,
      title=self.getUniqueString(),
      tags=['my-tag-for-testing'])

    # Clean up our sample data when the test is over.  Remove the tag so the
    # bug no longer shows up in search-by-tag queries.  We have to do this
    # because bugs can't be deleted!  The change must also be saved back to
    # the server; setting the attribute locally is not enough.
    self.addCleanup(self._remove_tags)

  def _remove_tags(self):
    self.bug.tags = []
    self.bug.lp_save()

  def test_my_bugs(self):
    mybugs = list(self.ubuntu.searchTasks(tags=['my-tag-for-testing']))
    self.assertEqual(1, len(mybugs))

Notes

When writing your test case, make absolutely sure that the Launchpad instance used by the test suite is staging; hard-code the value if you have to. You do not want your test suite to accidentally modify production data! One way to guard against this is sketched below.
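
A minimal sketch of such a guard, assuming your tests obtain their connection through a single helper (the helper name is hypothetical):

from launchpadlib.launchpad import Launchpad

# Hard-coded on purpose: the test suite must never talk to production.
SERVICE = 'staging'


def test_launchpad(client_name='just testing'):
  # Hypothetical helper; refuse to run if someone edits SERVICE.
  assert SERVICE != 'production', 'Refusing to run tests against production!'
  return Launchpad.login_with(client_name, SERVICE)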

Keep in mind that automated tests written this way will be slow because they go over the network. If you are using a test runner that supports conditional running of tests, you may want to exclude these test suites from normal command-line test runs, as sketched below.
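
A minimal sketch using the skip support in Python 2.7's unittest, assuming your test runner honours unittest's class-level skip markers; the RUN_LAUNCHPAD_TESTS variable name is just an example:

import os
import unittest

from testtools import TestCase


# Skip the whole network-bound test case unless explicitly requested.
@unittest.skipUnless(os.environ.get('RUN_LAUNCHPAD_TESTS'),
                     'set RUN_LAUNCHPAD_TESTS=1 to run slow network tests')
class DataWritingTestCase(TestCase):
  # ... the setUp() and tests from the example above ...
  pass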

Real examples

The code snippets above give you a sense of how each technique is used individually. There are also real-world examples, where a mix of these techniques is used to add tests to two scripts used by Canonical's OEM team.

The milestone scanner script is an example of the first technique in use. A --selftest option was added on lines 212 to 215 to provide a way to connect to the staging web service and grab the results from there.

A test was added inside the _selftest() function (lines 177 to 193) asserting that the output returned by the script matches the expectation. A small change was also made to scan_milestones() to accommodate the test execution: rather than printing to the standard output, the function now allows printing to a file, which the test then reads back to check the results. That's all.
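
The shape of that change is roughly the following. This is only a sketch; the find_milestones() helper is a stub standing in for the script's real milestone lookup:

import sys


def find_milestones(launchpad):
  # Stub standing in for the real lookup in the milestone scanner script.
  return launchpad.projects['launchpad'].all_milestones


def scan_milestones(launchpad, output=None):
  # Write to the given file instead of standard output so that a test
  # can capture and inspect the results.
  if output is None:
    output = sys.stdout
  for milestone in find_milestones(launchpad):
    print >> output, milestone.name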

In the lp-scanner script a --selftest option is also added, but this one uses the unittest library to define test cases and run them.

You can see that the test case takes advantage of a mix of the first and third techniques. It connects to the staging web service and creates a few objects needed by the tests, and then the security_scanner() function is called. The results are then compared against the sample report provided in the script.
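
Wiring a --selftest switch to unittest can look something like the following sketch; the test case name, the SAMPLE_REPORT constant and the security_scanner() stub are all assumptions standing in for the real code in lp-scanner:

import unittest

SAMPLE_REPORT = 'expected report contents'  # stand-in for the script's sample report


def security_scanner():
  # Stub; the real function scans Launchpad and builds a report.
  return 'expected report contents'


class SecurityScannerTest(unittest.TestCase):

  def test_report_matches_sample(self):
    self.assertEqual(SAMPLE_REPORT, security_scanner())


def run_selftest():
  # Build a suite from the test case and run it with a text runner.
  suite = unittest.TestLoader().loadTestsFromTestCase(SecurityScannerTest)
  unittest.TextTestRunner(verbosity=2).run(suite)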
