Diff for "PolicyAndProcess/DatabaseSchemaChangesProcess"


Differences between revisions 1 and 91 (spanning 90 versions)
Revision 1 as of 2009-11-29 01:33:52
Size: 4069
Editor: brianfromme
Comment:
Revision 91 as of 2022-07-29 14:22:13
Size: 13444
Editor: lgp171188
Comment: Updated the new location of the dbpatches repository
Deletions are marked like this. Additions are marked like this.
Line 3: Line 3:
 * '''Process Name:''' Database Schema Changes
 * '''Process Owner:''' Kiko
 * '''Parent Process/Activity:''' None
= Step-by-step procedure =
Line 7: Line 5:
== Process Overview ==
 1. Prepare a branch containing just [[#Making_a_database_patch|your database patch]] for review.
  * The patch must either be a hot patch (function / trigger / index) or a cold patch (model change / model change + function/trigger). If you need both hot and cold changes, you require multiple branches.
Line 9: Line 8:
=== Process Description ===
 1. Submit a merge proposal for your branch, requesting a '''db''' review from launchpad-reviewers. For hot patches the target branch is `master`, but for cold patches it should be `db-devel`.
Line 11: Line 10:
This process explains how to create and submit Database Schema Changes.
 1. Iterate on the branch as needed. The DB review process can sometimes require significant changes to achieve acceptable performance on either the patch application or queries / updates to the resulting schema.
Line 13: Line 12:
=== Rationale ===
 1. The schema change is approved for landing when you have an 'Approved' vote from a DB reviewer (unless the reviewer in question explicitly sets a higher barrier).
Line 15: Line 14:
 * PQM runs the launchpad test suite
 * The Launchpad test suite runs against sample data
 * However, there are problems that can occur with schema changes or data update scripts, running against production data, which do not occur running them against staging data. Examples are:
  * The update does not run correctly because of integrity errors.
  * The update takes a very long time to run.
 * So, we test new database schemas (patch levels above production) out on the staging server, which uses a daily-updated copy of the production database.
  1. We take a copy of the production database, then run whatever schema changes and updates are needed to bring it up to the patch level currently on mainline.
  1. Then we run mainline code against that new staging database.
 * The fact that staging is actually running, and that it got updated in a reasonable amount of time, shows that an update from production to the code in mainline is actually possible. The output from the staging update process shows the time it took to process each database patch.
 1. Wait until there are '''no''' blockers to deploying the patch. One common blocker is code landed before the patch that is still sitting in stable and not yet deployed to all affected service instances.
Line 25: Line 16:
 * Running on staging still doesn't catch all the errors we may find when actually doing a production database schema update; although the data will be the same, the organisation within the database may be different. For example, tables may need vacuuming, or be stored in a way that causes scripts to take a long time to run when they did not take so long when run on the same data on staging. This is because when we move the data from production to staging, we're moving just the data, not the exact internal arrangement of data in the database.
 1. Land the branch on `master` (for hot patches) or `db-devel` (for cold patches) by setting the merge proposal to Approved as usual, unless someone has stated it is being landed on your behalf. (Only Canonical staff can do the landing step.)

 1. [Cold patches] Wait until the branch reaches staging. Use the [[https://deployable.ols.canonical.com/project/launchpad-db|db-stable deployment report]] to check this.

 1. [Cold patches] After the branch reaches staging, check how long the patch took to apply by rsyncing `pamola.internal::staging-logs/dbupgrade.log` from carob. If it took more than 15 seconds, mark the revision bad and revert it.

 1. [Cold patches] QA the patch as usual and check that things still work on staging.

 1. [Hot patches] Ask a member of IS to manually apply the patch to qastaging, and check that things still work. ''Note: This is a temporary exception from normal deployment rules, to be reviewed when buildbot is faster.''

 1. Request a deployment per the internal [[https://wiki.canonical.com/Launchpad/PolicyandProcess/ProductionChange|production change process]].

 1. Now land and deploy the Python code that builds on the schema change, as usual.

= Background =

We change the schema through database patches. These live in branches that go through review and landing, just like code changes.

Schema patches are either applied while the system has no activity ('cold patches') or while the system is under load ('hot patches'). Some things, like changing a table '''and''' adding an index, will need to be split into two separate patches - one hot and one cold.
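
As a sketch only, with hypothetical table, column, and index names, a combined "add a column and index it" change might be split into two patch files, each in its own branch and each with its own claimed patch number:

{{{
-- Cold patch (hypothetical): the model change, proposed against db-devel.
ALTER TABLE FooBar ADD COLUMN summary text;

INSERT INTO LaunchpadDatabaseRevision VALUES (XXXX, YY, Z);
}}}

{{{
-- Hot patch (hypothetical): the index on the new column, proposed against master.
CREATE INDEX foobar__summary__idx ON FooBar (summary);

INSERT INTO LaunchpadDatabaseRevision VALUES (XXXX, YY, Z);
}}}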

Schema patches must '''not''' be combined with changes to the Launchpad Python code - any branch landing on devel or db-devel must be one or the other. Test-only code may be included if absolutely necessary. Exceptions to this rule require approval from the project lead, because deploying them will require a complete downtime window of more than an hour.

Cold schema patches always land on `db-devel`. After they are made live they will be promoted to `master` as part of the go-live process.

== Hot Patches ==

[[Database/LivePatching]] explains how hot-patching works and what sorts of things we can hot-patch. It is the authoritative reference; we may be able to hot-patch more as our tooling improves.
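
As an illustration of the function case, a hot patch that redefines a helper function in place might look like the following sketch (the function name and body are hypothetical):

{{{
-- Hypothetical hot patch: replace a database function without downtime.
CREATE OR REPLACE FUNCTION foobar_row_count() RETURNS bigint
LANGUAGE sql STABLE AS
$$
    SELECT count(*) FROM FooBar;
$$;

INSERT INTO LaunchpadDatabaseRevision VALUES (XXXX, YY, Z);
}}}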

== Cold Patches ==

Anything that is not a hot patch is a cold patch and can only be applied while the appservers and so on are disconnected from the database cluster.

For QA on a cold patch, check the application time from the staging log - it must be under 15 seconds (even with a cold disk cache), or we will exceed our downtime window. Exceeding that window requires signoff by the technical architect or project lead.

If some parts of a change can be applied as a hot patch, it is a good idea to do it that way, in order to keep application time minimal. For example, adding an index to a table of thousands of rows may be okay, but tens of thousands of rows is typically too slow for a cold patch.
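
When judging whether an index is small enough to include in a cold patch, it can help to check the approximate row count on staging first. A minimal sketch (the table name is hypothetical; `reltuples` is the planner's estimate, which is cheap to read even on large tables):

{{{
SELECT reltuples::bigint AS approx_rows
FROM pg_class
WHERE relname = 'foobar';
}}}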

= Deploying patches =

After successful QA on a patch, request a deploy via the internal [[https://wiki.canonical.com/Launchpad/PolicyandProcess/ProductionChange|production change process]].

= Reviews =

All schema changes should have reviews of type "db" requested from the regular review team (`launchpad-reviewers`). An approve vote from any DB reviewer is sufficient to land the patch.

As schema changes have no appserver code changes landed at the same time, no other reviews are needed (unless an exception to the no-mixing rule has been granted, in which case a normal review is also needed).

Changes to the permissions in `database/schema/security.cfg` or the comments in `database/schema/comments.sql` are not schema patches and do not require db review when done on their own. If they are included in a schema patch then the db reviewer will review them.
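
For reference, the entries in `database/schema/comments.sql` are plain SQL `COMMENT ON` statements, roughly along these lines (names hypothetical):

{{{
COMMENT ON TABLE FooBar IS 'Mapping between foos and bars.';
COMMENT ON COLUMN FooBar.summary IS 'A short human-readable summary of the link.';
}}}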

= Patch ids =

The schema application code needs a unique id for each patch. This is allocated by editing a shared document stored in the [[https://code.launchpad.net/~launchpad/+git/dbpatches|dbpatches repository]]. If you are in `~launchpad` please allocate this yourself. Other developers can ask any ~launchpad member to allocate a patch number for them.

As we no longer synchronise appserver deployments with schema deployments, '''no-one should use a -0 patch.'''

Instructions for choosing a patch number are in the docs in the dbpatches repository.
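
If you want to check which patches a particular database already has applied (for example, your local development database), the revision table can be queried directly. A sketch, assuming the column names `major`, `minor`, and `patch` corresponding to the three values each patch file inserts:

{{{
SELECT major, minor, patch
FROM LaunchpadDatabaseRevision
ORDER BY major, minor, patch;
}}}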

= Making a database patch =

You need to run these steps whenever you make a schema change, regardless of
whether you intend to delete sample data or not. For example, if you are
adding a new column to the `Person` table, these steps ensure that the sample
data will include this new column.

 1. '''Run `make schema` to get a pristine database of sample data.'''

 1. Claim a patch number in [[https://code.launchpad.net/~launchpad/+git/dbpatches|the dbpatches repository]] (be sure to commit and push back to the `main` branch).

 1. Create a SQL file in `database/schema/` containing the changes you want. It should look like this: {{{
-- Copyright 2011 Canonical Ltd. This software is licensed under the
-- GNU Affero General Public License version 3 (see the file LICENSE).

SET client_min_messages=ERROR;

-- put your changes in here.

INSERT INTO LaunchpadDatabaseRevision VALUES (XXXX, YY, Z);
}}}
 1. Run your new SQL patch on the development database to ensure that it works. Do this by running `psql launchpad_dev -1 -f your-patch.sql`

 1. Run `psql launchpad_ftest_playground -f your-patch.sql`, as the ftest playground db is used to regenerate sampledata snapshots in the following step. (Also be sure you ran `psql launchpad_dev -f your-patch.sql` in the previous step -- this updates the dev database's sampledata. Note that this is ''not'' sufficient to let the test suite see your changes: for that, you'll need to update `launchpad_ftest_template`, though it's simplest to run `make schema` or `make -C database/schema test` as described below.)

 1. You may wish to run `make newsampledata`, although it isn't critical; this will let you see what changes your patch would make to initial setups.

    1. This will produce a lot of noise. Feel free to ignore it.

 1. Review the sample data changes that occurred using `git diff database/sampledata`. This diff can be hard to review as-is; a graphical diff viewer like `kompare` or `meld` will make it easier. Make sure that you understand all the changes you see.

 1. Move your pending SQL file into `database/schema/` with a name like `patch-xx-yy-zz.sql` (where ''xx'' matches the existing patches), ending with the line `INSERT INTO LaunchpadDatabaseRevision VALUES (xx, yy, zz);` (see the complete sketch after this list).

 1. If you have removed or renamed a table or column, update `database/schema/comments.sql`.

 1. '''Run `make schema` again to ensure that it works, and that you now have a pristine database with the new sample data.''' If you don't want to blow away your `launchpad_dev` database, then you can use `make -C database/schema test` instead to update only the test databases.

 1. Comments on new tables and columns need to be added to `database/schema/comments.sql`.

 1. Make any necessary changes to `database/schema/fti.py` and `database/schema/security.cfg`.

 1. '''Run the full test suite to ensure that your new schema doesn't break any existing tests/code by side effect.'''

 1. Commit without the sample data changes, then push and propose for merging to `db-devel`.
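
Putting the steps above together, a finished patch file would read something like the sketch below; the change is hypothetical and XXXX, YY, Z stand for the patch number you claimed:

{{{
-- Copyright 2011 Canonical Ltd. This software is licensed under the
-- GNU Affero General Public License version 3 (see the file LICENSE).

SET client_min_messages=ERROR;

-- Hypothetical change: record an optional nickname for FooBar rows.
ALTER TABLE FooBar ADD COLUMN nickname text;

-- The three values are the patch number claimed in the dbpatches repository.
INSERT INTO LaunchpadDatabaseRevision VALUES (XXXX, YY, Z);
}}}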
Line 28: Line 118:
=== Triggers ===
== Rules for patches ==
Line 30: Line 120:
 1. Developer needs a database schema change implemented as part of his/her development activities
 1. Developer needs a database schema change implemented as part of his/her bug fixing activities
 1. Don't use the TRUNCATE or DROP TABLE statements as they don't work with Slony-I replication.

 1. To drop a table, move it into the `todrop` namespace using a statement like `ALTER TABLE FooBar SET SCHEMA todrop`. Then `upgrade.py` will automatically drop these tables during the downtime. Be careful about foreign keys: the drop order is undefined, so foreign keys between the tables must be dropped explicitly, and foreign keys handled specially by application code (most Branch and Person foreign keys) may need to be dropped first in a separate patch (see the sketch after this list).

 1. Do not migrate data in schema patches unless the data size is extraordinarily small (no more than a few hundred rows).

 1. Similarly, new columns must default NULL unless the data size is extraordinarily small (no more than a few hundred rows).

 1. When changing existing DB functions, start your patch with the original version (`SELECT pg_get_functiondef(oid) FROM pg_proc WHERE proname IN ('foofunc', 'barfunc') ORDER BY proname;`). This makes it much easier to review the diff.
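
To illustrate a couple of these rules with hypothetical table, constraint, and column names:

{{{
-- Dropping a table: drop cross-table foreign keys explicitly, then move the
-- table into the todrop schema for upgrade.py to remove during the downtime.
ALTER TABLE FooBarLink DROP CONSTRAINT foobarlink__foobar__fk;
ALTER TABLE FooBar SET SCHEMA todrop;

-- Adding a column: let it default to NULL so existing rows are not rewritten.
ALTER TABLE BazQuux ADD COLUMN notes text;
}}}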

== Sample data ==

Let's say your branch needs to make changes to the database schema. You need
to follow the steps on this page to ensure that the sample data is updated to
match your schema changes.

We have deprecated sample data. That means that you should never ''add'' new rows to the sample data.

In fact, there are now two sets of sampledata that need to be updated.

We use sample data to provide well-known baseline data for the test suite, and to populate a developer's Launchpad instance so that `launchpad.dev` can display interesting stuff. There are some guidelines and recommendations you should be aware of before you make changes to the test suite sample data, or you may break the tests for yourself or others.

Please note that sample data is for developers' instances only. It would make
no sense to use the sample data on production systems!

If your tests require new data, you should create the data
in your test's harness instead of adding new sample data. This will often
make the tests themselves more readable because you're not relying on magical
values in the sample database. Doing it this way also reduces the chance
that your changes will break other tests by side-effect. Add the new data in
your test's `setUp()` or in the narrative of your doctest. Because the test
suite uses the `launchpad_ftest_template` database, there is no chance that running the
test suite will accidentally alter the sample data.

However, if you interact with the web UI for `launchpad.dev`, your changes
will end up in the `launchpad_dev` database. This database is used to create
the new sample data, so it is imperative that you run `make schema` to start
with a pristine database before generating new sample data. If you do want
the effects of your UI interactions to land in the new sample data, then
the general process is to
 * run `make schema`
 * interact with `launchpad.dev`
 * follow the `make newsampledata` steps above.

'''Be aware though that your generation of new sample data will probably have
an effect on tests not related to your changes!''' For example, if you
generate new karma events, you will break the `karma_sample_data`
tests because they expect all karma events to be dated prior to the year 2002.
If you make changes to the sample data, you '''must''' run the full test suite
and ensure that you get no failures, otherwise there is a very high likelihood
that you will break the trunk.
Line 34: Line 173:
=== Inputs ===
=== Resolving schema conflicts ===
Line 36: Line 175:
 1. Developer originated schema change concept
Resolving conflicts in `current.sql` manually is usually more trouble than it's worth. Instead, first resolve any conflicts in `comments.sql`, then: {{{
cd database/schema/
mv {patch-in-question}-0.sql comments.sql pending/
cp {parent branch, e.g. rocketfuel}/database/schema/comments.sql ./
cp ../sampledata/current.sql.OTHER ../sampledata/current.sql
make
psql launchpad_dev -f pending/patch-xx-99-0.sql
make newsampledata
mv pending/{patch-in-question}-0.sql pending/comments.sql ./
make # Just to make sure everything works
cd ../..
bzr resolve database/sampledata/current.sql
}}}
Line 38: Line 189:
=== Outputs ===
= Notes on Changing security.cfg =
Line 40: Line 191:
 1. SQL Patch File
It is possible to land changes to security.cfg on `master` rather than `db-devel`. These changes are deployed during nodowntime rollouts.
Line 42: Line 193:
=== Participants ===

 1. Developer
 2. DBA (in our case, Stuart)

== Subprocesses ==

 N/A.

== Standard Path Events/Activities ==

 1. When a developer has a change they want to make to the database schema, they write a database patch (see database/schema in the source tree).
 1. They can give themselves a provisional patch number on their development tree.
 1. When they're sure about what they want, they should add their branch details to the DBA section of PendingReviews to have their database patch approved. This works a bit like a code review, but just on the database SQL patch (including comments).
 1. When it has passed review, Stuart gives them a database patch number, which becomes the filename of the SQL.
 1. Stuart (or the acting DBA) will also issue further instructions on how to proceed. This generally involves adding the branch details to PendingDatabasePatches, where they will be landed in a batch, or landing the branch immediately on rocketfuel/launchpad/devel.

Notes:

 * Because each database patch number is unique, various database schema changes can be worked on by different members of the team in parallel, and successfully merged together in different ways.
 * This is most often used just for schema changes.
 * Sometimes schema changes have accompanying data updates.
 * See supporting documentation section for Stuart's official documentation

== Alternative Path Events/Activities ==

 * If code changes need to land in tandem with a database patch, these need to be discussed on a case-by-case basis with StuartBishop.

== Supporting Documentation ==
 * DatabaseSchemaChanges - schema change HOWTO

== Comments ==
Note that adding new users requires manual DB reconfiguration, so you need to file an RT ticket to grant access to relevant machines and make sure it is resolved '''before landing the branch''' that needs them.
