
PHARMACOECONOMIC EVALUATIONS &
METHODS
MARKOV MODELING IN DECISION ANALYSIS
FROM THE PHARMACOECONOMICS ON THE INTERNET® SERIES
©Paul C Langley 2004
Maimon Research LLC
OBJECTIVES
• To describe the place of Markov models in
pharmacoeconomic analysis
• To provide an introduction to Matrix
methods
• To illustrate the steps required to set up a
Markov model
• To consider the limitations of Markov
models
THE PLACE OF MARKOV
MODELS
• Markov models represent a variant of
decision analysis for pharmacoeconomic
evaluations where the treatment pathways
and options may be both complex and
repetitive
• Markov models can also be used in situations where prevalence- as well as incidence-based impact assessments are required.
TYPES OF MARKOV MODEL
• In representing complex decision
processes in simple and convenient
mathematical form, we can use two types
of Markov model:
– regular Markov chain models
– absorbing Markov chain models
ABSORBING MARKOV
CHAINS
• This is the most widely used form of Markov model in pharmacoeconomics
• They are used to estimate time spent by a
patient group in a particular disease state
when all patients eventually leave that
disease state by recovery or death
• They can be used to estimate both the time spent in a disease state and the time spent within a given budget period
MATRIX ALGEBRA
• The principal obstacle to utilizing Markov models is a lack of familiarity with the rules of matrix manipulation and their application to Markov processes
• These rules are summarized in the
downloadable text which includes a brief
introduction to Markov processes and the
mathematics of absorbing Markov chains
WHAT DO I NEED TO KNOW?
• What are matrices and vectors?
• What are the rules of matrix manipulation?
• What is an identity matrix?
• What is matrix inversion?
• What is a transition matrix?
• What is a fundamental matrix?
MATRICES AND VECTORS
• Matrices and vectors are arrays or ordered
collections of real numbers
• Vectors, which can be row vectors or column vectors, are designated by lower case bold letters (e.g., u = [5 6 7] is a row vector with three real numbers as components)
• Matrices are rectangular or square arrays of real numbers designated by upper case bold letters (e.g., N, I, or Q)
MATRIX NOTATION
• When we describe a matrix it is in terms of
the number of rows (the i th row) and the
number of columns (the j th column)
• We can describe matrices in terms of their size by using i and j (i.e., size i × j), where a square matrix is the special case of i = j
• Row vectors would be written (1 × j) and column vectors (i × 1)
IDENTITY MATRIX
• In order to manipulate matrices and apply
Markov models we require a special type
of square matrix which we call an identity
matrix
• By construction an identity matrix has
ones in its leading diagonal and zeros
everywhere else
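As a quick illustration of these objects, here is a minimal sketch in Python (numpy), with purely hypothetical values:

    import numpy as np

    u = np.array([5, 6, 7])        # a row vector with three real components
    A = np.array([[0.6, 0.2],
                  [0.1, 0.7]])     # a 2 x 2 square matrix
    I = np.eye(2)                  # a 2 x 2 identity matrix: ones on the leading diagonal, zeros elsewhere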
MATRIX ADDITION AND
SUBTRACTION
• In standard arithmetic we can add and
subtract one number from another
• In matrix manipulation, where we are
dealing with arrays of numbers, we can
similarly add or subtract one matrix from
another by operating on the corresponding
components
• However, to do addition and subtraction
matrices must be of the same size
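A minimal numpy sketch (hypothetical values): addition and subtraction operate on corresponding components and require matrices of the same size:

    import numpy as np

    A = np.array([[1.0, 2.0], [3.0, 4.0]])
    B = np.array([[0.5, 0.5], [1.0, 1.0]])
    C = A + B        # component-wise addition:    [[1.5, 2.5], [4.0, 5.0]]
    D = A - B        # component-wise subtraction: [[0.5, 1.5], [2.0, 3.0]]
    # A + np.eye(3) would raise an error: the matrices are not of the same size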
MATRIX MULTIPLICATION
• The process of matrix multiplication is
somewhat more complex
• Note also that the matrices do not need to be of the same size or order, but they must be conformable: for two matrices of size (m × n) and (n × k), we can only multiply if the inner dimension n is common, i.e., (m × n)(n × k) = (m × k), giving a product matrix of size (m × k)
• The mechanics of multiplication involves
combining row and column components
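A short numpy sketch of the conformability rule, with hypothetical values: a (2 × 3) matrix times a (3 × 2) matrix gives a (2 × 2) product:

    import numpy as np

    A = np.array([[1, 2, 3],
                  [4, 5, 6]])    # size (2 x 3)
    B = np.array([[1, 0],
                  [0, 1],
                  [1, 1]])       # size (3 x 2)
    E = A @ B                    # the inner dimension (3) is common, so the product exists
    print(E.shape)               # (2, 2)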
OPERATIONS NOTATION
• Addition: A + B = C
• Subtraction: A - B = D
• Multiplication: AB = E, but note that multiplication is not commutative; the result depends upon the order of multiplication (as a result of the rules of matrix multiplication)
• Hence we use the terms pre-multiplication and post-multiplication, so that BA ≠ E
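A quick numpy check of non-commutativity (hypothetical values): pre-multiplication and post-multiplication generally give different products:

    import numpy as np

    A = np.array([[1, 2], [3, 4]])
    B = np.array([[0, 1], [1, 0]])
    print(A @ B)    # [[2 1] [4 3]] - post-multiplying by B swaps the columns of A
    print(B @ A)    # [[3 4] [1 2]] - pre-multiplying by B swaps the rows of A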
MATRIX DIVISION
• There is no division as such in matrix algebra: we cannot divide the components in one matrix by the components in another
• Rather, we talk about multiplying by a matrix which is the inverse of that matrix
• We talk about an operation analogous to division: if we have two matrices A and B and we are told B is the product of A and some unknown matrix X, where AX = B, then we solve for X
SOLUTION
• To perform our operation and solve for X we pre-multiply by the inverse of A to give
A⁻¹AX = A⁻¹B
since A⁻¹A = I (by construction) and IX = X, we have X = A⁻¹B (provided A has an inverse)
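A minimal numpy sketch of this analogue of division, with hypothetical values:

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 3.0]])
    B = np.array([[5.0],
                  [10.0]])
    X = np.linalg.inv(A) @ B    # X = A^-1 B, provided A has an inverse
    # np.linalg.solve(A, B) returns the same X and is numerically preferable
    print(X)                    # [[1.] [3.]]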
TRANSITION MATRIX
• The only other element of matrix algebra
we require is to define a transition matrix
• This is a square matrix where the
components are non-negative real
numbers expressed as probabilities and
the sum of each row is equal to unity
• Usually denoted by P
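A hypothetical three-state transition matrix in numpy; each row contains non-negative probabilities and sums to unity:

    import numpy as np

    # Hypothetical states: 0 = sick, 1 = recovered, 2 = dead
    P = np.array([[0.70, 0.20, 0.10],
                  [0.05, 0.90, 0.05],
                  [0.00, 0.00, 1.00]])
    print(P.sum(axis=1))    # each row sums to 1.0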
MARKOV CHAINS
• There are two types of Markov chain which are of interest in pharmacoeconomic modeling and which we will now consider:
– regular Markov chains
– absorbing Markov chains
REGULAR MARKOV CHAIN
• Regular Markov chains illustrate an
interesting property of the behavior of
transition matrices
• If we multiply the transition matrix P by itself we get the probability of being in a particular state after 2 periods (P × P = P²)
• Eventually Pⁿ converges to a fixed point matrix solution (a W matrix) where each of the rows w is an identical probability vector
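A small numpy sketch of this convergence, with a hypothetical regular transition matrix:

    import numpy as np

    P = np.array([[0.9, 0.1],
                  [0.4, 0.6]])
    Pn = np.linalg.matrix_power(P, 50)    # P multiplied by itself 50 times
    print(Pn)    # both rows have converged to (approximately) the same probability vector w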
ABSORBING MARKOV
CHAINS
• Our principal interest is in absorbing
Markov chains
• There are three steps in applying
absorbing Markov chains to a health care
decision problem
– identifying mutually exclusive treatment states
– specifying a transition matrix for these states
– solving for the fundamental matrix of the
absorbing Markov chain
TREATMENT STATES
• Defining mutually exclusive and exhaustive
disease states through which a patient might
move relies upon the clinical knowledge of
the analyst
• These are most easily thought of as treatment
stages which eventually result in the patient
leaving the system via death or recovery,
although the patient does not have to move
through each of them and some transitions
may be barred
TRANSITION
PROBABILITIES
• Each row of the transition matrix summarizes
the probability of persons moving between
treatment states
• Transition probabilities are defined for a fixed
time interval (e.g., month, quarter)
• The rows sum to one including the probability
of moving to the absorbing state
• Matrix manipulation focuses on the square transition matrix; because the chain is absorbing, everyone eventually leaves the system
FUNDAMENTAL MATRIX
• The part of the transition matrix that is
manipulated to derive the fundamental
matrix (denoted N) is called the Q matrix
• N can be obtained as the sum of the infinite series I + Q + Q² + Q³ + … or, equivalently, as N = (I − Q)⁻¹
• Each element of the N matrix is the mean number of times that the chain is in state aⱼ given that it started in state aᵢ
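A minimal numpy sketch using a hypothetical absorbing chain (states: sick, recovering, dead, with death absorbing); Q is the block of transitions among the non-absorbing states:

    import numpy as np

    P = np.array([[0.6, 0.3, 0.1],     # sick       -> sick, recovering, dead
                  [0.0, 0.8, 0.2],     # recovering -> sick, recovering, dead
                  [0.0, 0.0, 1.0]])    # dead is the absorbing state
    Q = P[:2, :2]                      # transitions among the non-absorbing states
    N = np.linalg.inv(np.eye(2) - Q)   # fundamental matrix N = (I - Q)^-1
    # N[i, j] is the mean number of cycles spent in state j, given the chain started in state i
    print(N)                           # [[2.5, 3.75], [0.0, 5.0]]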
APPLICATION (I)
• This final property of the fundamental matrix means that if we add up all of the elements in any row of the matrix and multiply this sum by the fixed time interval, we get the survival time of patients who entered the system in that row's state
• Hence the fundamental matrix generates
survival times and time spent in each
treatment state
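Using the same hypothetical Q as in the sketch above (repeated so this example runs on its own), the row sums of N multiplied by the fixed time interval give expected survival times:

    import numpy as np

    Q = np.array([[0.6, 0.3],
                  [0.0, 0.8]])                 # hypothetical non-absorbing block
    N = np.linalg.inv(np.eye(2) - Q)
    cycle_length_months = 3.0                  # the fixed time interval (e.g., one quarter)
    survival = N.sum(axis=1) * cycle_length_months
    # survival[i] = expected time to absorption for a patient who enters the system in state i
    print(survival)    # [18.75, 15.0] months for patients entering 'sick' and 'recovering'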
APPLICATION (II)
• Rather than solve for the fundamental matrix, we can sum any number of terms of the geometric sequence and generate the time spent in treatment states over a given time period
• This means that the absorbing Markov
process can generate both incidence and
prevalence based estimates
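A sketch of the finite-horizon alternative, with hypothetical values: summing the first k terms of the geometric sequence I + Q + Q² + … gives the expected time in each state over a fixed budget period rather than over the full survival horizon:

    import numpy as np

    Q = np.array([[0.6, 0.3],
                  [0.0, 0.8]])    # hypothetical non-absorbing block
    k = 4                         # number of cycles in the budget period
    S = sum(np.linalg.matrix_power(Q, t) for t in range(k))    # I + Q + Q^2 + Q^3
    # S[i, j] = expected cycles spent in state j during the first k cycles, starting in state i
    print(S)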
APPLICATION (III)
• In order to apply the absorbing Markov
chain to the impact of new therapies all we
need to do is to vary the transition
probabilities (while maintaining
consistency in the probabilities)
• Hence we can compare new drug impacts
on time spent in treatment states
APPLICATION (IV)
• Estimates of time spent in treatment states
are the basis for costs and outcomes impact
assessments
• If we estimate resources and costs used to
support fixed interval treatment times in the
various treatment states we can estimate
treatment costs for the time spent in each
• If we have outcome measures (e.g., QALYs)
for each state we can estimate outcomes in
QALY terms
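A closing sketch, with all values hypothetical, of how time-in-state estimates translate into costs and outcomes: multiply the expected cycles in each state by that state's per-cycle cost and utility:

    import numpy as np

    Q = np.array([[0.6, 0.3],
                  [0.0, 0.8]])                    # hypothetical non-absorbing block
    N = np.linalg.inv(np.eye(2) - Q)              # expected cycles in each state
    cost_per_cycle = np.array([2000.0, 500.0])    # hypothetical cost per cycle in each state
    qaly_per_cycle = np.array([0.15, 0.20])       # hypothetical QALYs accrued per cycle
    expected_cost = N @ cost_per_cycle            # expected lifetime cost, by starting state
    expected_qalys = N @ qaly_per_cycle           # expected lifetime QALYs, by starting state
    print(expected_cost, expected_qalys)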
OVERVIEW
• Markov models are potentially a major
contribution to decision analysis
• Even so, they still embody assumptions of
constant cost and complete therapy
switching
• They are very demanding of data and are
difficult to populate