## page was renamed from mars/TestingApiApplications
<<TableOfContents>>

This page provides a guide to writing tests for [[API|Launchpad API]] clients.

= Techniques for Testing API Clients =

<<Anchor(usestaging)>>
== Point your application at the staging.launchpad.net web service ==

By writing data to the staging.launchpad.net web service you can observe how the application will behave against a copy of real, live data. You can also observe the application going over the network.

The data on staging.launchpad.net is copied from the production database once a week. The copy overwrites all differences between the staging and production data.

=== Pros ===

 * Your application gets to write real data to a live site over the network.
 * The changes you make to the site data are not permanent.

=== Cons ===

 * If your application reads and writes data to Launchpad, and you want to do multiple test runs against staging between the weekly database resets, then you will have to write your application to reset the data for you, or you must reset it manually. Depending on the problem you are trying to solve, you may also have the option of having your application ignore and overwrite any data already present on the server. You may also consider having your application [[#generatetestdata|generate its own test data]].
 * Because the staging server data is only updated weekly, you cannot test your application against data in the same week that data was entered into production. You may be able to work around this by providing your users with a [[#readonlyswitch|read-only switch]].
 * The staging server goes offline regularly for code and database updates. A code update takes approximately 100 minutes to complete. Database updates happen on the weekend and take approximately 28 hours to complete.

=== An Example ===

You will want to provide an easy way to switch to the staging service for developers and for operators, too. Command-line switches are one standard Unix way to accomplish this:

{{{#!python
import sys
from optparse import OptionParser

from launchpadlib.launchpad import Launchpad

client_name = 'just testing'


def main(argv):
    parser = OptionParser()
    parser.add_option('-S', '--service', default='production',
        help="Set the Launchpad web service instance this script will "
             "read and write to [default: %default]")
    options, args = parser.parse_args(argv[1:])

    print "Connecting to the Launchpad API {0} service.".format(options.service)
    launchpad = Launchpad.login_with(client_name, options.service)


if __name__ == '__main__':
    main(sys.argv)
}}}

An example usage:

{{{
$ myscript --service=staging
Connecting to the Launchpad API staging service.
}}}

=== Notes ===

The staging server always runs code that is released or soon to be released on the Launchpad production servers, so the staging API version will be equal to or greater than the production API version.

As noted above, the staging server goes down regularly for code and database updates.

----
<<Anchor(readonlyswitch)>>
== Give your application read-only access to the server ==

By giving your application read-only access to the server you can allow developers and users to test your application against live data without fear of it destroying data. Because the application only reads data, it also becomes possible to run the tests multiple times in a row.

=== Pros ===

 * Both developers and users can test your application without fear of it destroying data.
 * You can run the tests as many times as desired.
 * Your application gets to read real data from a production server.

=== Cons ===

 * Some application tests are not possible without writing data. You may not get 100% test coverage with this technique.
 * You must modify your code so that it gracefully handles the inability to write data to the server, and you must do so at every point where data writes happen. This can be tedious to retrofit into an existing script, and it complicates your code.

=== An Example ===

A convenient trick for enforcing read-only access is to log in to the API anonymously: anonymous users are not allowed to write data to Launchpad. If you do this, any point in your code that tries to write to the web service will raise an error; you can then modify the code to handle the read-only mode gracefully.

As before, we will use a command-line switch to make the functionality accessible to both developers and operators.

{{{#!python
import sys
from optparse import OptionParser

from launchpadlib.launchpad import Launchpad

client_name = 'just testing'
service = 'production'


def main(argv):
    parser = OptionParser()
    parser.add_option('-d', '--dry-run', action='store_true',
        default=False, dest='readonly',
        help="Only pretend to write data to the server.")
    options, args = parser.parse_args(argv[1:])

    print "Connecting to the Launchpad API {0} service.".format(service)
    if options.readonly:
        api_login = Launchpad.login_anonymously
    else:
        api_login = Launchpad.login_with
    launchpad = api_login(client_name, service)

    me = launchpad.people['my-user-name']
    me.display_name = 'A user who edits through the Launchpad web service.'

    # This would raise an Unauthorized error if we were logged in
    # anonymously:
    #   me.lp_save()

    # This is the correct way to do it:
    if not options.readonly:
        me.lp_save()
    else:
        print "Skipped saving display name: {0.display_name}".format(me)


if __name__ == '__main__':
    main(sys.argv)
}}}

----
<<Anchor(generatetestdata)>>
== Have your application generate its own test data on the staging server ==

This is an advanced technique that builds upon '[[#usestaging|test your application against staging]]'. You can have your application generate its own test data on the staging server, then run its tests against said data.

=== Pros ===

 * Your application can read and write test data to a live server.
 * You are not limited in how many times in a row you can run the tests.
 * The changes you make to the site data are not permanent.

=== Cons ===

 * Not all of the data-creation functions in Launchpad are available through the API (working with SSH keys and Project Groups are two notable examples). You may be able to work around this by employing a mix of well-known existing data and generated sample data.

=== An Example ===

The creation and deletion of test data can be automated using the Python unittest framework. We will also make use of the `testtools` Python helper library to save us some work.

{{{#!python
from launchpadlib.launchpad import Launchpad
from testtools import TestCase

client_name = 'just testing'


class DataWritingTestCase(TestCase):

    def setUp(self):
        super(DataWritingTestCase, self).setUp()

        # Create a launchpadlib instance that has access to staging.
        self.launchpad = Launchpad.login_with(client_name, 'staging')
        self.ubuntu = self.launchpad.distributions['ubuntu']

        # Create some sample data.  We will use a unique string for the
        # bug's description, conveniently provided by
        # testtools.TestCase.getUniqueString().
        self.bug = self.launchpad.bugs.createBug(
            description=self.getUniqueString(),
            target=self.ubuntu,
            title=self.getUniqueString(),
            tags=['my-tag-for-testing'])

        # Clean up our sample data when the test is over.  Remove the tag
        # so the bug no longer shows up in search-by-tag queries.  We have
        # to do this because bugs can't be deleted!  Cleanups run last-in,
        # first-out, so the tag is cleared first, then the change is saved.
        self.addCleanup(self.bug.lp_save)
        self.addCleanup(setattr, self.bug, 'tags', [])

    def test_my_bugs(self):
        mybugs = list(self.ubuntu.searchTasks(tags=['my-tag-for-testing']))
        self.assertEqual(1, len(mybugs))
}}}

=== Notes ===

When writing your test case, make '''absolutely sure''' that the Launchpad instance used by the test suite is staging; hard-code the value if you have to. You do ''not'' want your test suite to accidentally modify production data!

Keep in mind that automated tests written this way will be slow because they go over the network. If you are using a test runner that supports conditional running of tests, you may want to protect these test suites from normal command-line test runs.

= Real examples =

The code snippets above give you a sense of how each of the techniques is used individually. There are also real-world examples in which a mix of those techniques is used to add tests to two scripts used by Canonical's OEM team.

In the [[https://bazaar.launchpad.net/~mars/oem-launchpad-scripts/oem-launchpad-scripts/annotate/head%3A/milestone-scanner/milestones.py|milestone scanner script]], we have an example of the first technique in use. A --selftest option was added on lines 212 to 215 to provide a way to connect to the staging web service and grab the results from there. A test was added inside the _selftest() function (lines 177 to 193) asserting that the output returned by the script matches the expectation. A small change was also made to scan_milestones() to accommodate the test execution: rather than printing to the standard output, the function now allows printing to a file, which is then used to grab the results back from the test execution. That's all.

In [[https://launchpad.net/lp-scanner|lp-scanner]], a --selftest option is also added, but it uses the unittest library to define test cases and run them. You can see that the test case takes advantage of a mix of the first technique and the third one: it connects to the staging web service, creates a few objects needed by the tests, and then calls the security_scanner() function. The results are then compared against the sample report provided with the script.
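
If you want to follow the same pattern in your own scripts, the shared shape of the two examples above can be sketched roughly as follows. This is a minimal sketch, not code from either script: the run_scan() function and its output are invented for illustration, and it assumes the script's real work has been factored into a function that can write to an arbitrary file object.

{{{#!python
import sys
import unittest
from StringIO import StringIO
from optparse import OptionParser

from launchpadlib.launchpad import Launchpad

client_name = 'just testing'


def run_scan(launchpad, out):
    # Hypothetical stand-in for the script's real work: read some data
    # from the web service and write a report to the given file object.
    ubuntu = launchpad.distributions['ubuntu']
    print >> out, "Scanned: {0}".format(ubuntu.display_name)


class SelfTest(unittest.TestCase):

    def test_scan_output(self):
        # Hard-code 'staging' so the self-test can never touch production,
        # and log in anonymously so it cannot write data either.
        launchpad = Launchpad.login_anonymously(client_name, 'staging')
        out = StringIO()
        run_scan(launchpad, out)
        self.assertTrue(out.getvalue().startswith('Scanned:'))


def main(argv):
    parser = OptionParser()
    parser.add_option('--selftest', action='store_true', default=False,
        help="Run the script's test suite against staging and exit.")
    options, args = parser.parse_args(argv[1:])

    if options.selftest:
        # Hand unittest an argument list stripped of our own options.
        unittest.main(argv=argv[:1])
    else:
        launchpad = Launchpad.login_with(client_name, 'production')
        run_scan(launchpad, sys.stdout)


if __name__ == '__main__':
    main(sys.argv)
}}}

Factoring the script's work into a function that takes the launchpadlib object and an output file is the key design choice: it lets the self-test inject a staging connection and capture the output for comparison, which is essentially what the two scripts above do.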