Transaction Management Overview
Chapter 16
Transactions
A transaction is the DBMS’s abstract view of a user
program: a sequence of reads and writes.
Concurrent execution of user programs is essential for good
DBMS performance.
Increasing system throughput (the number of completed transactions per
unit of time) by overlapping I/O and CPU operations
Decreasing response time (the time to complete a transaction) by
avoiding short transactions getting stuck behind long ones
A user’s program may carry out many (in-memory)
operations on the data retrieved from the database, but the
DBMS is only concerned about what data is read/written
from/to the database.
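For instance, the following sketch (assuming a hypothetical Accounts(acctno, balance) table; BEGIN/COMMIT syntax varies slightly across systems) is, from the DBMS's point of view, just a read of A followed by a write of A, no matter how much in-memory work the application does in between:
BEGIN;
SELECT balance FROM Accounts WHERE acctno = 'A';            -- the read the DBMS sees
-- (arbitrary in-memory computation in the application program)
UPDATE Accounts SET balance = 1060.00 WHERE acctno = 'A';   -- the write the DBMS sees
COMMIT;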
The ACID Properties (in a Nutshell)
Atomicity: Either all actions of the transaction
are executed or none at all.
Consistency: Any transaction that starts executing
in a consistent database state must leave it in a
consistent state upon completion.
Isolation: A transaction is protected from effects
of concurrently running transactions.
Durability: Effects of committed transactions
must persist and overcome any system failure
(system crash/media failure).
Consistency and Isolation
Users submit transactions, and can think of each
transaction as executing by itself.
Concurrency is achieved by the DBMS, which interleaves
actions (reads/writes of DB objects) of various transactions. The
net effect is guaranteed to be the same as executing the
transactions serially, one after the other.
Each transaction must leave the database in a consistent state if
the DB is consistent when the transaction begins.
• The DBMS will enforce some integrity constraints (ICs), depending on the
ICs declared in CREATE TABLE statements (see the sketch below).
• Beyond this, the DBMS does not really understand the
semantics of the data. (e.g., it does not understand how the
interest on a bank account is computed).
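For instance, the ICs the DBMS itself enforces are the ones declared with the schema; a minimal sketch using a hypothetical Accounts table:
CREATE TABLE Accounts (
    acctno   CHAR(10) PRIMARY KEY,        -- an IC: account numbers are unique
    balance  DECIMAL(12,2) NOT NULL,
    CHECK (balance >= 0)                  -- an IC the DBMS checks on every update
);
Application-level rules beyond such declarations (e.g. how interest is computed) remain invisible to the DBMS.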
Issue: Coping with effects of interleaving transactions
(Concurrency control).
Atomicity and Durability
A transaction might commit after completing all its actions,
or it could terminate unsuccessfully:
It could abort (or be aborted by the DBMS) after executing some
actions.
The system may crash while transactions are in progress.
There could be a media failure preventing reads/writes.
How does the DBMS achieve atomicity and durability of all
transactions?
Atomicity: the DBMS logs all actions so that it can undo the actions
of aborted transactions.
Durability: the effects of committed transactions are written to disk, or (after
a crash) the system redoes the actions of committed Xacts that had
not yet reached disk.
Issue: Coping with effects of crashes (Recovery).
Scheduling Transactions
Schedule: Interleaving of the actions of different
transactions.
Serial schedule: Schedule that does not interleave the
actions of different transactions.
Equivalent schedules: For any database state, the
effect (on the set of objects in the database) of
executing the first schedule is identical to the effect
of executing the second schedule.
Serializable schedule: A schedule that is equivalent to
some serial execution of the transactions.
(Note: If each transaction preserves consistency,
every serializable schedule preserves consistency.)
Concurrency Control: Example
Consider two transactions:
T1: BEGIN A=A+100, B=B-100 END
T2: BEGIN A=1.06*A, B=1.06*B END
Intuitively, the first transaction is transferring $100
from B’s account to A’s account. The second is
crediting both accounts with a 6% interest payment.
There is no guarantee that T1 will execute before T2 or
vice-versa, if both are submitted together. However,
the net effect must be equivalent to these two
transactions running serially in some order.
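In SQL over a hypothetical Accounts(acctno, balance) table, the two programs might be written as follows; to the DBMS, each UPDATE is simply a read followed by a write of the same object:
-- T1: transfer $100 from B's account to A's account
BEGIN;
UPDATE Accounts SET balance = balance + 100 WHERE acctno = 'A';
UPDATE Accounts SET balance = balance - 100 WHERE acctno = 'B';
COMMIT;
-- T2: credit both accounts with 6% interest
BEGIN;
UPDATE Accounts SET balance = balance * 1.06 WHERE acctno = 'A';
UPDATE Accounts SET balance = balance * 1.06 WHERE acctno = 'B';
COMMIT;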
Example (Contd.)
Consider a possible interleaving (schedule / history):
T1: A=A+100,            B=B-100, C1
T2:           A=1.06*A,              B=1.06*B, C2
This is serializable. But the following is not:
T1: A=A+100,                                  B=B-100, C1
T2:           A=1.06*A, B=1.06*B, C2
The DBMS’s view of the second schedule:
T1: R(A), W(A),                               R(B), W(B), C1
T2:              R(A), W(A), R(B), W(B), C2
Anomalies with Interleaved Execution
Reading Uncommitted Data (WR Conflicts,
“dirty reads”):
T1: R(A), W(A),                      R(B), W(B), Abort
T2:              R(A), W(A), C
Unrepeatable Reads (RW Conflicts):
T1: R(A),                   R(A), W(A), C
T2:        R(A), W(A), C
Anomalies (Continued)
Overwriting Uncommitted Data (WW
Conflicts, “lost updates”):
T1: W(A),                  W(B), C
T2:        W(A), W(B), C
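To connect this notation to concrete SQL, here is how the WR conflict (“dirty read”) above could arise with two concurrent sessions running at READ UNCOMMITTED isolation; the Accounts table is hypothetical, and the statements of the two sessions interleave from top to bottom:
-- Session T1:
BEGIN;
UPDATE Accounts SET balance = balance - 100 WHERE acctno = 'A';   -- W(A), not yet committed
-- Session T2 (concurrently):
BEGIN;
SELECT balance FROM Accounts WHERE acctno = 'A';                  -- R(A): sees T1's uncommitted change
COMMIT;
-- Session T1 (later):
ROLLBACK;   -- T1 aborts; T2 has already acted on a value that was never committed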
Lock-Based Concurrency Control
A DBMS ensures that only serializable schedules are
allowed by using a suitable CC protocol.
Strict Two-phase Locking (Strict 2PL) Protocol:
Each transaction must obtain a S (shared) lock on object before
reading, and an X (exclusive) lock on object before writing.
All locks held by a transaction are released when the transaction
completes.
If a transaction holds an X lock on an object, no other transaction can
get a lock (S or X) on that object.
Strict 2PL does, however, allow deadlocks, i.e., cycles of
transactions waiting for each other's locks to be released. A DBMS must
either prevent or detect (and resolve) deadlocks.
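Locking is normally transparent to the application, but many systems also let a transaction request an exclusive row lock explicitly, e.g. with SELECT ... FOR UPDATE; a sketch over a hypothetical Accounts table (syntax and exact lock behavior vary by DBMS):
BEGIN;
SELECT balance FROM Accounts WHERE acctno = 'A' FOR UPDATE;   -- acquire an exclusive lock on A's row
UPDATE Accounts SET balance = balance - 100 WHERE acctno = 'A';
COMMIT;   -- under Strict 2PL, the lock is released only here, at commit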
Aborting a Transaction
If a transaction Ti is aborted, all its actions have to be
undone. Not only that, if Tj reads an object last
written by Ti, Tj must be aborted as well!
Most systems avoid such cascading aborts by releasing
a transaction’s locks only at commit time.
If Ti writes an object, Tj can read this only after Ti commits.
In order to undo the actions of an aborted transaction,
the DBMS maintains a log in which every write is
recorded. The log is also used to recover from system
crashes: all active transactions at the time of the crash
are aborted when the system comes back up.
Support for Transactions in SQL
SQL provides users with support to specify transaction-level behavior.
A transaction is started automatically when a user SQL statement (other
than CONNECT) is executed and no transaction is in progress; it is terminated
with either a COMMIT or a ROLLBACK command.
SQL99 supports transactions with savepoints and chained transactions.
One defines a savepoint and may later selectively roll back to it: e.g.,
SAVEPOINT point1 ... ROLLBACK TO SAVEPOINT point1.
One may commit/rollback and immediately initiate another transaction: e.g.
SELECT * FROM Sailors COMMIT AND CHAIN
SELECT * FROM Students ROLLBACK
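Putting the two features together, a sketch assuming the usual Sailors(sid, sname, rating, age) schema (the inserted values are hypothetical, and not every DBMS supports AND CHAIN):
INSERT INTO Sailors VALUES (101, 'Dustin', 7, 45.0);
SAVEPOINT point1;
INSERT INTO Sailors VALUES (102, 'Lubber', 8, 55.5);
ROLLBACK TO SAVEPOINT point1;   -- undoes only the work done after point1
COMMIT AND CHAIN;               -- commit, then immediately start a new transaction
SELECT * FROM Sailors;          -- runs in the chained transaction
COMMIT;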
A DBMS can lock DB objects at different granularities: row-level
granularity vs. table-level granularity.
Row-level granularity is not immune to the phantom phenomenon, i.e. a
transaction that evaluates the same condition twice may see a different
collection of objects (tuples) each time. The vanishing/new tuples are called phantoms.
Support for Transactions in SQL (Contd.)
SQL provides support for controlling the access mode to DB
objects, the diagnostics size, and the isolation level of
transactions.
Diagnostics size: # of error conditions recordable by the
DBMS
Access mode: READ ONLY / READ WRITE
Isolation level: controls what other transactions see of a given
transaction.
Level              Dirty read   Unrepeatable read   Phantom
READ UNCOMMITTED   Possible     Possible            Possible
READ COMMITTED     No           Possible            Possible
REPEATABLE READ    No           No                  Possible
SERIALIZABLE       No           No                  No
Example:
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE READ WRITE
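Conversely, a report that only reads data and can tolerate unrepeatable reads and phantoms (see the table above) might be run as, for example (exact clause separators vary by DBMS):
SET TRANSACTION ISOLATION LEVEL READ COMMITTED READ ONLY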
Crash Recovery: Stealing / Forcing Pages
The recovery manager ensures atomicity (by undoing actions of
uncommitted transactions) and durability (by guaranteeing that
committed transactions survive crashes/failures) of transactions.
The transaction manager ensures serializability and consistency of
transactions by using an appropriate locking protocol.
Alternatives in managing buffers:
Steal approach: the changes made to an object O by a transaction T may be
written to disk before T commits. This arises when the buffer manager
chooses the frame containing O for replacement, e.g. to bring in a page
needed by a different transaction.
Force approach: Force all writes of a committed transaction to disk.
Most systems use a steal, no-force approach, which is the realistic choice:
If any dirty frame is chosen for replacement, the page it contains is written to
disk (steal).
Dirty pages in buffer are not forced to disk at commit times of associated
transactions (no-force).
Crash Recovery: the Log
The following actions are recorded in the log:
Ti writes an object: the old value and the new value are recorded.
• Log record must go to disk before the changed page (write-ahead logging, WAL)!
Ti commits/aborts: a log record indicating this action.
Log records are chained together by transaction id, so it’s easy
to undo a specific transaction.
The log is often duplexed and archived on stable storage.
All log related activities (and in fact, all concurrency control
related activities such as lock/unlock, dealing with deadlocks
etc.) are handled transparently by the DBMS.
Crash Recovery: the ARIES Approach
A recovery algorithm designed for a steal, no-force approach; it proceeds in 3 phases:
Analysis: Scan the log forward (from the most recent
checkpoint) to identify
• all transactions that were active, and
• all dirty pages in the buffer pool at the time of the
crash.
Redo: Redoes all updates to dirty pages in the buffer pool,
as needed, to ensure that all logged updates are in fact
carried out and written to disk.
Undo: Undoes writes of all transactions that were active at
the crash time (by restoring the before value of the update,
which is in the log record for the update), working
backwards in the log.
Summary
Concurrency control and recovery are among the
most important functions provided by a DBMS.
Users need not worry about concurrency.
System automatically inserts lock/unlock requests and
schedules actions of different Xacts in such a way as to
ensure that the resulting execution is equivalent to
executing the Xacts one after the other in some order.
Write-ahead logging (WAL) is used to undo the
actions of aborted transactions and to restore the
system to a consistent state after a crash.
Consistent state: Only the effects of committed Xacts are seen.