Toward Online Schema Evolution for Non-Stop Systems
Amol Deshpande, University of Maryland
Michael Hicks, University of Maryland
Schemas Evolve!
Changing business logic
For improved performance
Current Approach (??)
Modify the application(s) to use new schema.
Shut down the applications, and stop access to the database.
Extract the DDL and authorizations for the table and
dependent objects (views, indexes, etc.).
Manually create the new table, re-create dependent
objects, and populate the table.
Drop the old table.
Restart the application(s).
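The offline procedure above can be sketched end to end. This is a minimal illustration using SQLite with a hypothetical `address` table (the table, column, and the truncating conversion are assumptions, not from the slides):

```python
import sqlite3

# Hypothetical starting state: an address table with a wide street column.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE address (id INTEGER PRIMARY KEY, street TEXT)")
conn.execute("INSERT INTO address VALUES (1, '123 Main Street Apt 4B')")

# 1. Applications are shut down; no concurrent access from here on.
# 2. Manually create the new table with the changed schema.
conn.execute("CREATE TABLE address_new (id INTEGER PRIMARY KEY, street TEXT)")
# 3. Populate it, applying any needed conversion (here: truncate to 10 chars).
conn.execute("INSERT INTO address_new SELECT id, substr(street, 1, 10) FROM address")
# 4. Drop the old table and rename the new one into place.
conn.execute("DROP TABLE address")
conn.execute("ALTER TABLE address_new RENAME TO address")
# 5. Applications are restarted against the new schema.

print(conn.execute("SELECT street FROM address").fetchone()[0])  # -> 123 Main S
```

The whole table is copied while the database is offline, which is exactly the availability cost the following slides argue against.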
Problems with this approach
Time-consuming (mostly) manual process
Database unavailable during the process
Application soft state (e.g. caches) is lost
What we would like
Near-instantaneous online schema changes
– Without stopping access to the database
– Without killing the applications using the DB
• Even if the applications are affected by the change
DB2 for z/OS V8 provides some support
Provide both by combining:
– Dynamic software updating
• Suite of techniques developed for dynamically
updating running applications
– Lazy data migration
• To effect immediate schema changes by
delaying population of the new tables
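The dynamic-software-updating half can be sketched as an indirection that a patch rewrites at runtime. A toy illustration (all names here are hypothetical, not from the slides):

```python
# The running application calls its query logic through an indirection;
# a dynamic patch swaps in the new-schema version without a restart.
def query_v1(db):
    return db["Address"]["Street"]        # written against the old schema

def query_v2(db):
    return db["Address"]["StreetShort"]   # written against the new schema

current_query = query_v1                  # the indirection the patch rewrites

def apply_patch_at_safe_point(new_fn):
    """Install the patched function; invoked only once all threads sit at safe points."""
    global current_query
    current_query = new_fn

apply_patch_at_safe_point(query_v2)
db = {"Address": {"StreetShort": "123 Main S"}}
assert current_query(db) == "123 Main S"
```

Real dynamic-updating systems patch compiled code rather than a function pointer, but the safe-point discipline is the same: the swap happens only when no in-flight code still assumes the old schema.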
Modify the application(s) to use new schema
Create a dynamic patch using the old and new copies of the application
When all applications reach “safe points”:
– Apply the patches to dynamically change the (running) applications
– Change the schema, and create skeleton tables
Convert the data to conform to new schema lazily
– May need to use conversion functions for changes
– E.g. changing char(15) to char(10)
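A conversion function for the char(15)-to-char(10) example might look like the following sketch; truncation is an assumed policy here, and in practice the administrator would supply the function alongside the schema change:

```python
# Hypothetical conversion function for narrowing char(15) to char(10).
def convert_street(old_value: str) -> str:
    """Map a char(15) value into the new char(10) column by truncation."""
    return old_value[:10]

assert convert_street("123 Main Street") == "123 Main S"
```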
– Safe points are detected through static analysis
– Based on putting constraints on changes that can be made
to the schema at various points in applications
• Function f() performs an SQL query on the Street column of the Address table
• Can’t change Address to change/remove the Street column while f() is executing
– An application typically has many safe points
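The safe-point constraint in the f() example can be made concrete as follows. This is an illustrative sketch (the dependency map and function names are assumptions): static analysis records which columns each function touches, and a schema change is applied only when no active function depends on an affected column.

```python
# Per-function column dependencies, as a static analysis might compute them.
COLUMN_DEPS = {"f": {("Address", "Street")}}   # f() queries Address.Street

def is_safe_point(active_functions, changed_columns):
    """True iff no currently-active function depends on a changed column."""
    for fn in active_functions:
        if COLUMN_DEPS.get(fn, set()) & changed_columns:
            return False
    return True

# While f() is executing, dropping Address.Street is not safe...
assert not is_safe_point(["main", "f"], {("Address", "Street")})
# ...but once f() has returned, the same change can be applied.
assert is_safe_point(["main"], {("Address", "Street")})
```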
Static analysis also helps in detecting affected application code
Lazy data migration:
– Empty skeleton tables are created initially
– New tuples are directly inserted into the new tables
– When a query is posed, sufficient data is accessed from the
old table to populate the new table
• Use caching techniques to reason about contents of tables
– “Drastic” option of converting the whole table remains available at any time
– Need to worry about:
• Integrity constraints?
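The lazy-migration steps above can be sketched in a few lines. In this toy model (all names and the truncating conversion are illustrative), in-memory dicts stand in for the old and new tables:

```python
# Old table holds char(15) values; the skeleton new table starts empty.
old_table = {1: "123 Main Street", 2: "42 Elm Avenue!!"}
new_table = {}            # skeleton: created empty at schema-change time
migrated = set()          # tracks which keys have been pulled over

def insert(key, street):
    """New tuples are directly inserted into the new table."""
    new_table[key] = street[:10]
    migrated.add(key)

def lookup(key):
    """On a query, lazily migrate just the rows the query touches."""
    if key not in migrated and key in old_table:
        new_table[key] = old_table[key][:10]   # apply the conversion function
        migrated.add(key)
    return new_table.get(key)

insert(3, "7 Short St")
assert lookup(1) == "123 Main S"   # migrated on first access
assert lookup(3) == "7 Short St"   # inserted after the change
assert 2 not in new_table          # untouched rows stay unmigrated
```

The `migrated` set is a crude stand-in for the caching techniques mentioned above that reason about which parts of the new table are already populated; the “drastic” option corresponds to looping `lookup` over every remaining key.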