Transcript of Dr. Aiken's PowerPoint slides.
Analysis of Count Outcomes:
Single Level and Multilevel Models for
Unclustered and Clustered
Count Data
Leona S. Aiken
Department of Psychology
Arizona State University
Goals for Today’s Presentation
• Introduce Generalized Linear Model (GLM)
• Develop GLM for count outcomes
• Introduce the Generalized Linear Mixed Model
(GLMM)
• Illustrate GLMM for clustered count outcomes
• Illuminate challenges and complexities in use of
GLM and GLMM
Outline
• Definition of unbounded and bounded counts
• Problems with using OLS regression for count
outcomes
• Introduce Generalized Linear Model (GLM)
• Develop Poisson regression for count outcomes
• Address over-dispersion and excess zeros
• Clustered data and multilevel (i.e., mixed) models
• Generalized Linear Mixed Model (GLMM) for
Clustered Count Outcomes
• Conditional versus marginal models in GLMM
References
Coxe, S., West, S. G., & Aiken, L. S. (2009). The analysis of count
data: A gentle introduction to Poisson regression and its
alternatives. Journal of Personality Assessment, 91(2), 121-136.
Aiken, L. S., Mistler, S. A., Coxe, S., & West, S. G. (2015). Analyzing count variables in individuals and groups: Single level and multilevel models. Group Processes and Intergroup Relations, 18(3), 290-314.
Hilbe, J. M. (2014). Modeling count data. New York, NY: Cambridge University Press.
Blevins, D. P., Tsang, E. W. K., & Spain, S. M. (2015). Count-based research in management: Suggestions for improvement. Organizational Research Methods, 18(1), 47-69.
Count Outcomes
Count Outcomes
• Unbounded counts (no upper bound)
– Number of discrete events in a fixed time period
– Range upward from minimum of zero: 0, 1, 2… + ∞
– Focus today, counts with no upper bound
• Examples, no upper bound (our focus today)
– Number of emails received from 8:30-8:45 on Monday
– Number of new work assignments in a week
• Bounded Counts (upper bound)
– Number of days exercised last week (7 day upper bound)
– Require different statistical treatment than unbounded counts
Characteristics of Count Outcomes
(no upper bound)
• Count outcomes often have many zeros
• Count outcomes are often positively skewed
• Count outcomes exhibit heteroscedasticity:
variance of a count outcome increases with mean
– Mean of 1 count, very low variance of scores to
produce a mean of 1
– Mean of 5 counts, much higher variance of
individual scores is possible
Traditional OLS regression and
unbounded count outcomes
Traditional Approach:
ln(Y) and OLS
• Traditional approach to count outcomes, use OLS
regression with logarithm of count ln(Y) as
dependent variable*
• The following problems remain:
– Excess number of very low scores
– Nonnormality and heteroscedasticity of residuals
– Biased and inconsistent OLS regression
coefficients
*For zero counts—"started logs" (Tukey, 1977): add .1 to all counts and then analyze ln(Y + .1)
Generalized Linear Model
Family of regression models for the analysis of "limited dependent variables" (i.e., not continuous and normally distributed)
Generalized Linear Model (GLM)
• A family of highly flexible regression models that
handle a wide range of forms of “limited dependent
variables”
– Binary variables (logistic, probit regression)
– Multiple unordered categories (multinomial
logit/multinomial probit regression)
– Ordered categories (ordinal logistic/probit
regression)
– Counts (Poisson, binomial regression)
– Time to failure (survival models)
Generalized Linear Model Family of
Regression Models:
Three Components
of each Regression Model
• Component One: Error structure
the random component
• Component Two: Linear predictor
the fixed component
• Component Three: Link Function
Generalized Linear Model Family of
Regression Models: Three Components
Component One: Error Structure
• Random component (error structure)—
distribution of residuals, depends on the
particular dependent variable structure
– Variety of probability distributions beyond the normal distribution (e.g., binomial, gamma, beta, chi square, Wishart, Poisson, negative binomial)
– All of these probability distributions belong to the exponential family
Generalized Linear Model
Component Two: Linear Predictor
• The regression models in the generalized
linear model family* have a nonlinear
relationship of the predictors to criterion
• For each regression model, the regression
equation can be written in linear form if the
predicted score is a transformed version of
the units of the dependent variable
*One exception—OLS regression with its identity link—discussed shortly
Generalized Linear Model
Component Two: Linear Predictor
Linear predictor: the regression equation expressed in linear form; the predicted score is a transformed version of the units of the observed score.
Generalized Linear Model
Component Two: Linear Predictor
Generalized Linear Model
Component Three: Link Function
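The equations for these two components did not survive transcription from the slides; the following is a standard reconstruction of the GLM linear predictor and link function (the symbols eta, g, and mu are conventional GLM notation, not taken from the slides):

\[ \eta_i = b_0 + b_1 X_{1i} + b_2 X_{2i} + \dots + b_p X_{pi} \qquad \text{(linear predictor)} \]
\[ g(\mu_i) = \eta_i \qquad \text{(link function: transforms the mean of } Y_i \text{ to the scale of the linear predictor)} \]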
OLS Regression as one of the
Generalized Linear Model Family
• Error Structure: Normal
• Linear Predictor:
• Link Function: Identity function, since the
predicted score and observed criterion score
are in the same units
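As a sketch of how this slide's missing equations would read, OLS regression written in the three-component GLM form (standard notation, not copied from the slides) is:

\[ Y_i = \mu_i + \varepsilon_i, \quad \varepsilon_i \sim N(0, \sigma^2) \qquad \text{(normal error structure)} \]
\[ \hat{Y}_i = b_0 + b_1 X_{1i} + \dots + b_p X_{pi} \qquad \text{(linear predictor)} \]
\[ g(\mu_i) = \mu_i \qquad \text{(identity link)} \]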
Poisson Regression for Unbounded
Count Dependent Variables
Poisson Regression—History I
Siméon Denis Poisson, French, 1781-1840
Student of Laplace and Lagrange
Professor of pure mathematics; electromagnetic theory, probability, celestial mechanics, definite integrals, Fourier integrals, law of large numbers
Invented the Poisson distribution (1831) as an approximation to the binomial distribution, modeling the probability of criminal and civil verdicts (a sideline)
Context: Johann Carl Friedrich Gauss, 1777-1855
http://www.britannica.com/biography/Simeon-Denis-Poisson
Poisson Regression—Component 1: Error Structure
The Poisson probability distribution, for counts y = 0, 1, 2, …
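The probability function itself appeared as an image on the slide; its standard form, which serves as the random component (error structure), is:

\[ P(Y = y) = \frac{e^{-\mu}\,\mu^{y}}{y!}, \quad y = 0, 1, 2, \dots \qquad \text{with } E(Y) = \mathrm{Var}(Y) = \mu \]

The single parameter µ is both the mean and the variance, which is why the variance of a count outcome increases with its mean.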
Poisson Distribution as a Function of µ, the arithmetic mean count (rate parameter)
[Figure: Poisson distributions for several values of µ; after mun.ca]
Why Poisson won't work for bounded counts: for example, for number of days exercised in the past week with µ = 3, the Poisson probability of 8 days of exercise is greater than zero
Poisson Regression—History II
Ladislaus Bortkiewicz, b. 1868 St. Petersburg, d. 1931 Berlin
Economist and statistician; Professor of Statistics, U. Berlin
The Law of Small Numbers (1898), a book about the Poisson distribution
Numbers of deaths by horse kick in the Prussian army over 20 years, in 14 cavalry corps, follow a Poisson distribution (many zeros; max = 4)
J. J. O'Connor and E. F. Robertson, St. Andrews, Scotland
http://www-history.mcs.st-andrews.ac.uk/Biographies/Bortkiewicz.html
Poisson Distribution as a Function of µ, the arithmetic mean count
[Figure: Poisson distributions for increasing values of µ; umass.edu]
Note: as µ increases, the distribution approaches normality rather than remaining right skewed
Poisson Regression-Component 2
Linear Predictor: Regression equation in
linear form
Logarithm of Predicted Count (can take
antilog and get back to predicted count)
It is not the predicted logarithm of the count—using ln(count) as the dependent variable is wrong!!
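A compact way to state the distinction (standard notation, reconstructed rather than copied from the slide): the model is for the log of the expected count, not the expected log count,

\[ \ln(\hat{\mu}_i) = b_0 + b_1 X_{1i} + \dots + b_p X_{pi}, \qquad \ln\big(E(Y_i \mid X)\big) \neq E\big(\ln Y_i \mid X\big) \]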
Poisson Regression-Component 2
Linear Predictor: Regression equation in
linear form
Poisson Regression-Component 3
Link Function: Natural logarithm
Two Forms of Poisson Regression Equation
Linear Predictor
Exponentiated Form
or, equivalently
Two Forms of Poisson Regression Equation
The natural logarithm of the predicted count is an
additive function of the predictors
The predicted count is a
multiplicative function of the predictors
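Written out with two predictors (a standard reconstruction of the slide's missing equations, with b's as regression coefficients):

\[ \text{Linear predictor form: } \ln(\hat{\mu}_i) = b_0 + b_1 X_{1i} + b_2 X_{2i} \]
\[ \text{Exponentiated form: } \hat{\mu}_i = e^{\,b_0 + b_1 X_{1i} + b_2 X_{2i}} = e^{b_0}\, e^{b_1 X_{1i}}\, e^{b_2 X_{2i}} \]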
Hypothetical Example:
In-Group Out-Group Bias
• Undergraduates at Purdue University
• Quantify student identification with Purdue
(individual difference variable)
1-7 point scale, 7 = strongly identify with Purdue
• View recruitment video of another university—
famous faculty, brilliant students, outstanding research
centers, high graduate employment in prestigious positions
• Write 100-word narrative of impressions
of the other university in the video (n=500 students)
Acknowledgments: Steve Mistler, Stefany Coxe, Steve West
Competition Manipulation
Write a 100-word narrative—impressions of
the other university in the recruitment video
High Competition
Low Competition
University of Hawaii
Dependent Variable
• Count of the number of derogatory (negative) remarks about the other university in the 100-word theme
• "They think they are the Cambridge U. of the Midwest"
• "They are all just waiting for the big wave"
[Figure: Histograms of the observed distribution and the Poisson distribution with µ set to the observed mean, for the number of derogatory remarks (y-axis: proportion; x-axis: number of derogatory remarks). Panels: full sample (n = 500), High Competition (n = 250), Low Competition (n = 250).]
Note—Variances exceed means: "overdispersion" (ignore for now)
[Figure: Observed and Poisson histograms of the number of derogatory remarks by condition (y-axis: proportion; x-axis: number of derogatory remarks).]
Analysis in Generalized Linear Model
• Estimation: maximum likelihood (maximizing the likelihood function)
• Model fit and predictor contribution
– Deviance (badness of fit), D
– Contribution of individual predictors
and sets of predictors assessed through
reduction in deviance by addition of
predictors, requiring nested models
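In symbols (standard GLM results, not copied from the slides), the deviance and the likelihood ratio test comparing nested models are:

\[ D_{\text{model}} = -2\big[\ln L_{\text{model}} - \ln L_{\text{saturated}}\big], \qquad \chi^2_{LR} = D_{\text{reduced}} - D_{\text{full}} \;\sim\; \chi^2(\Delta df) \]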
Analysis in Generalized Linear Model
Two alternative test statistics:
• Likelihood ratio (LR) chi-square test, based on the reduction in deviance
• Wald test of predictor contribution (asymptotic)
The Wald test, being asymptotic, may exhibit lower power for individual predictors; it is familiar as the z test in structural equation modeling (SEM)
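For reference, the Wald statistic for a single coefficient (standard form, reconstructed rather than taken from the slide) is:

\[ \chi^2_{\text{Wald}} = \left(\frac{b_j}{SE_{b_j}}\right)^{2} \sim \chi^2(1), \qquad z = \frac{b_j}{SE_{b_j}} \]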
Numerical Example—Sequence of Models

Analysis Model                            Deviance Dp   LR chi square (1 df)   Proportional reduction in deviance
A. Intercept                              913.504       ---                    ---
B. Intercept, Identification              766.777       146.73 (B vs A)        .161
C. Intercept, Ident, Cond                 593.333       173.44 (C vs B)        .226
D. Intercept, Ident, Cond, Interaction    569.757        23.58 (D vs C)        .040

Note: The deviance Dp reflects prediction from the included predictors; the LR chi square (the reduction in deviance) and the proportional reduction in deviance reflect the contribution of each added predictor.
Pseudo-R2 Index of Fit (Predictor Contribution)

Analysis Model                  Deviance Dp   LR chi square (1 df)   Proportional reduction in deviance
A. Intercept                    913.504       ---                    ---
B. Intercept, Identification    766.777       146.73 (B vs A)        .161
C. Intercept, Ident, Cond       593.333       173.44 (C vs B)        .226

A versus B, contribution of Identification:
Proportional reduction in deviance = Pseudo-R2
Pseudo-R2 = (Dp-1 - Dp) / Dp-1 = (913.504 - 766.777) / 913.504 = .161
Tests of Individual Coefficients
(SPSS GENLIN, SAS GENMOD, R glm)

Parameter     B      Std. Error   95% Wald CI Lower   95% Wald CI Upper   Wald Chi-Square   df
(Intercept)   .624   .0346        .557                .692                326.284           1
identc        .162   .0167        .129                .194                 93.838           1
condc         .731   .0691        .595                .866                111.686           1
Interaction   .163   .0334        .097                .228                 23.763           1
(Scale)       1 (fixed)
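As an illustration of the R glm routine named in the slide title, here is a minimal sketch of how a model like the one above might be fit; the data frame and variable names (dat, remarks, identc, condc) are hypothetical stand-ins for the study variables, not the authors' actual code or data.

# Hypothetical data frame 'dat' containing:
#   remarks - count of derogatory remarks (outcome)
#   identc  - identification with Purdue, centered at the grand mean
#   condc   - competition condition coded +.5 (high) / -.5 (low)
fit <- glm(remarks ~ identc * condc,
           family = poisson(link = "log"),
           data   = dat)

summary(fit)               # Wald z tests for each coefficient
exp(coef(fit))             # exponentiated coefficients: multiplicative effects on the count
anova(fit, test = "LRT")   # sequential likelihood ratio (deviance) tests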
Model Equation including Interaction
Condition coded +.5 for High Competition and -.5 for Low Competition
Identification is centered at sample grand mean
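The slide's equation image did not survive transcription; reconstructed from the coefficient table on the previous slide, the fitted model is approximately:

\[ \ln(\hat{\mu}) = 0.624 + 0.162\,(\text{identc}) + 0.731\,(\text{condc}) + 0.163\,(\text{identc} \times \text{condc}) \]
\[ \hat{\mu} = e^{0.624}\, e^{0.162\,\text{identc}}\, e^{0.731\,\text{condc}}\, e^{0.163\,\text{identc}\times\text{condc}} \]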
Poisson Regression Equations, No Interaction
[Figure: Predicted count (exponentiated form) and ln(predicted count) (linear predictor form) plotted against Identification, with separate lines for high (+) and low (•) competition.]
Poisson Regression, with/without Interaction
[Figure: Predicted count and ln(predicted count) as a function of Identification, shown without and with the interaction.]
[Figure: Poisson predicted count versus OLS predicted count, without and with the interaction.]
Count Outcomes: Issues, Alternative Models
Overdispersion: the observed variance exceeds the mean, but the Poisson distribution assumes variance = mean
Over-dispersion produces negatively biased standard errors
Sources of overdispersion:
– Non-independent events: contagion (one event increases the probability of the next event); clustering of cases (e.g., a 100+ car pileup on the Pa. turnpike)
– Missing predictor
• Quasi-Poisson: represents excess variance as a multiplicative function of the rate parameter μ, that is, scaled variance = φμ, where φ = χ2Pearson/df; φ = 1 for Poisson
• Poisson regression with robust standard errors
• Negative binomial (NB) regression: combines the Poisson and gamma distributions; NB variance σ2NB = μ + aμ2; a = 0 for Poisson
• Blevins et al. (2015), Organizational Research Methods: a 10-year review of 10 management journals finds much over-dispersion in management data
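A minimal R sketch of these alternatives, continuing the hypothetical variable names used earlier (dat, remarks, identc, condc); the packages shown (MASS, sandwich, lmtest) are common choices, not necessarily the ones used by the presenter.

library(MASS)       # glm.nb for negative binomial regression
library(sandwich)   # robust (sandwich) covariance estimators
library(lmtest)     # coeftest

# Quasi-Poisson: same coefficients as Poisson; standard errors scaled by the dispersion phi
fit_qp <- glm(remarks ~ identc * condc, family = quasipoisson, data = dat)

# Poisson regression with robust standard errors
fit_p <- glm(remarks ~ identc * condc, family = poisson, data = dat)
coeftest(fit_p, vcov = vcovHC(fit_p, type = "HC0"))

# Negative binomial: Var(Y) = mu + mu^2/theta, i.e., a = 1/theta in the slide's notation
fit_nb <- glm.nb(remarks ~ identc * condc, data = dat)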
Problems with zeros1
The Poisson distribution assumes that some zero counts will occur
[Figure: Poisson distributions showing the expected proportion of zero counts; umass.edu]
1Hilbe (2014). Modeling count data. Chapter 7
Multiple ways in which zero counts don’t fit
standard probability distributions
1. Zero-Truncated Distribution
Zero count is impossible (e.g., hospital data of
length of hospital stay—hospitals start counting a day
at admission)
2. Distribution with Excess Zeros
More zeros than expected from Poisson (or
other) distribution (e.g., number of drinks per attendee
at a reception—both nondrinkers and drinkers who just
didn’t drink at that occasion generate zero counts)
Acknowledgment: Milica Miočević
Zero-Truncated Models
Special probability distributions are used to model zero-truncated data
The probability distribution on which the analysis is based is modified to eliminate the probability of zero
Common models:
Zero-truncated Poisson
Zero-truncated Negative Binomial
Caveat: With a mean count > 5, the probability of a zero is extremely low; a zero-truncated model may not be needed.
Excess Zeros – Two-Part Hurdle Models
Two classes of zeros:
Structural zeros (e.g., nondrinkers at a reception)
Random/sampling zeros (e.g., drinkers who did not drink at the reception)
Hurdle models treat all zero scores as structural
(e.g., days hospitalized this year; any admission counts as at least 1 day)
Two parts to hurdle models (mixture models):
1. Binary model: zero versus non-zero (e.g., not hospitalized versus hospitalized)
2. Zero-truncated count model (e.g., number of days hospitalized)
The parts may be modeled separately or together
Zero-Inflated Mixture Models
Overlapping zeros: structural and random/sampling
Two-part model that estimates the proportion of structural versus random zeros:
Binary model (e.g., logit, probit): yields the predicted probability of a structural zero
Count model (e.g., Poisson)
The two parts must be estimated simultaneously, given the overlapping zeros in the two parts
Zero-inflated Poisson (ZIP)
Zero-inflated Negative Binomial (ZINB)
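A hedged R sketch of these two-part models using the pscl package and the reception example; the data frame and variables (reception, drinks, age, sex) are hypothetical illustrations, not the presenter's data.

library(pscl)

# Hurdle model: binary part (zero vs. nonzero) plus zero-truncated count part;
# formula syntax is  count_part | zero_part
fit_hurdle <- hurdle(drinks ~ age + sex | age + sex,
                     data = reception, dist = "poisson")

# Zero-inflated Poisson (ZIP): the two parts are estimated simultaneously
fit_zip <- zeroinfl(drinks ~ age + sex | age + sex,
                    data = reception, dist = "poisson")

# Zero-inflated negative binomial (ZINB)
fit_zinb <- zeroinfl(drinks ~ age + sex | age + sex,
                     data = reception, dist = "negbin")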
Transition to
Clustered
Counts
(e.g. number of ideas by
each individual, working
in groups)
Multilevel Model in the
Generalized Linear Model Framework
Multilevel Model
• Variety of names of the approach to analysis
– Hierarchical linear models
– Multilevel models
– Mixed models (mixture of fixed and random
components—from statistics literature)
• Data structure: data gathered at multiple levels (e.g., individual employees in work groups)
• Data are clustered: scores from individuals within a group are correlated
– Scores from individuals within a group are more similar to one another than are randomly selected scores
– Clustering of data must be accounted for
Generalized Linear Mixed Model (GLMM)
• Combine regression models from the
Generalized Linear Model (GLM) framework with
• Multilevel (Mixed model) framework
For example, model the number of ideas per individual
as a function of group characteristics and characteristics
of the individuals themselves.
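As a sketch in standard multilevel notation (the symbols are conventional, not taken from the slides), a random-intercept Poisson GLMM for this example might be written:

\[ \ln(\mu_{ij}) = \gamma_{00} + \gamma_{10} X_{ij} + \gamma_{01} W_{j} + u_{0j}, \qquad u_{0j} \sim N(0, \tau_{00}), \qquad Y_{ij} \sim \text{Poisson}(\mu_{ij}) \]

where X_ij is an individual-level predictor, W_j a group-level predictor, and u_0j the random intercept for group j.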
Two Classes of Parameter Estimates
Fixed Regression Coefficients: exactly the same as in
OLS regression and in the Generalized Linear Model,
weight assigned to each predictor in predicting some
outcome
Variance Components: measure of the variance among
the groups on the dependent variable.
Work group example:
a) compute the mean number of ideas per group
b) the variance component is the variance of these group means, reflecting group differences
Generalized Linear Mixed Models (GLMM)
Estimation
• Maximum likelihood estimation
(maximizing a likelihood that integrates over the random effects)
• Estimation is computationally intensive
• The solution requires integration of the likelihood over the random-effects distributions
• Estimation is much slower than for the GLM, or computationally infeasible
Generalized Linear Mixed Models (GLMM)
Estimation
• Simplified approaches to estimation are required
• Approximations for evaluating the integral over random
effects are employed
• Broad classes of approaches
– Linearization methods
– Integral Approximation
– Also Markov chain Monte Carlo (MCMC)
Generalized Linear Mixed Models (GLMM)
Estimation Approximations I
Linearization Methods
• Penalized quasilikelihood (PQL) or
pseudolikelihood (PL)
• Uses a linear transformation of the nonlinear outcome
variable for parameter estimation
• Thus, nonlinear estimation converted into simpler
linear estimation
• There are problems…
GLMM: Penalized quasilikelihood (PQL) or
pseudolikelihood (PL) estimation
• Does not produce true likelihood estimates
• Regression coefficients and variance components
may be biased toward zero
• Does not work well with Poisson models with low
mean counts
• May not yield parameter estimates when count
data contain many zeros
• The only estimation procedure available in SPSS for GLMMs; the default in SAS for GLMMs
(GLMM) Estimation Approximations II
Integral Approximation
Integral approximation methods work by approximating the area under complex curves with a series of rectangles
Integral approximation methods in GLMM estimate the actual likelihood
(GLMM) Estimation Approximations II
Integral Approximation
Approximates the true GLMM likelihood
• Laplace approximation
– Slower, more accurate than Pseudolikelihood (PL)
• Adaptive Gauss-Hermite Quadrature
– “adaptive” - controls approximation error by improving
steps during estimation
– Slower, more accurate than Laplace approximation
– Limited to models with few random effects
Inference in GLMM, Fixed Effects
• Two approaches to testing individual coefficients
– Wald Chi Square Tests
– Likelihood Ratio (LR) Chi Square Tests
• If pseudolikelihood (PL) estimation is used, we do not have an estimate of the true likelihood, so no LR tests
– Wald Chi Square Tests only
• With complex models for which integral approximation is computationally slow, Wald tests are often used
Inference in GLMM, Variance Components
• Inference for variance components does
not exist with PL estimation
• Wald statistics (available with PL
estimation) are grossly conservative,
should not be used
• Likelihood ratio testing of variance
components can only be carried out with
integral approximation
Back to the experimental study—modified design
High Competition
Low Competition
University of Hawaii
After viewing the video, students meet in randomly constituted experimental groups of n = 5 to discuss the recruitment video
Write a 100-word narrative—impressions of the other university in the recruitment video
The data set is NOT the same data set as
presented for the unclustered data
Please don’t try to compare effects across
the two data sets
Examine Impact of Clustering
Poisson regression has no intraclass correlation (ICC); a two-step analysis is used to estimate the clustering effect

Analysis                        Fixed Effect   Variance Components   Model Deviance   LR chi square (1 df)   Proportional reduction in deviance
A. No cluster, intercept only   Intercept      ----                  4775.77          -----                  -----
B. With cluster                 Intercept      Random Intercept      2763.64          2012.13                B versus A: .42 due to clustering
                                                                                                              [4775.77 - 2763.64] / 4775.77 = .42

A. GLM (Poisson); B. GLMM (Poisson, with clustering modeled)
Note: estimation by adaptive Gauss-Hermite quadrature
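A minimal R sketch of this two-step comparison with the lme4 package; the variable names (dat, remarks, group) are hypothetical, and the deviance here is taken as -2 log-likelihood for both models (deviance definitions differ across programs), so the numbers would not necessarily reproduce the slide exactly.

library(lme4)

# Step A: single-level Poisson GLM, intercept only (clustering ignored)
m_a <- glm(remarks ~ 1, family = poisson, data = dat)

# Step B: Poisson GLMM with a random intercept for group,
#         estimated by adaptive Gauss-Hermite quadrature (nAGQ > 1)
m_b <- glmer(remarks ~ 1 + (1 | group), family = poisson, data = dat, nAGQ = 25)

# Proportional reduction in deviance attributable to clustering
d_a <- -2 * as.numeric(logLik(m_a))
d_b <- -2 * as.numeric(logLik(m_b))
(d_a - d_b) / d_a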
Sequential Tests of Nested Model Effects

Fixed Effects Model                    Variance Components   Deviance   LR chi square (1 df)   Proportional reduction in deviance   Effect added
B. Intercept only                      Intercept             2763.64    2012.13**              B versus A = .42                     Clustering
C. Intercept, ID (= Identification)    Intercept             2149.29     614.35**              C versus B = .29                     Identification
D. Intercept, ID, Cond.                Intercept             2143.10       6.19*               D versus C = .002                    Condition
E. Intercept, ID, Cond., ID x Cond     Intercept             2142.77        .33                E versus D = .00015                  Interaction
Final Model of Fixed Effects
(output format different from GLM)

Fixed Effect                   Coefficient Estimate   Standard Error   df    t value   p
Intercept                      1.1029                 .0768             98   14.35     <.0001
Identification                  .2948                 .0132            398   22.32     <.0001
Condition                       .3701                 .1527             98    2.42      .0172
Identification x Condition      .0151                 .0265            398    .57       .5698

Note: We might expect a Wald χ2(1) for each coefficient (as in the GLM output); the GLMM output reports t tests instead.
Variance component: Intercept variance τ00 = .4998, standard error = .0836
ANOVA Format for Fixed Effects Tests
Type III F Tests of Fixed Effects

Effect                          Numerator df   Denominator df   F value   p value
Identification                  1              398              498.02    <.0001
Condition                       1               98                5.87    0.0172
Identification by Condition     1              398                0.33    0.5689

BETWEEN: 50 groups per condition, m = 2 conditions; m(k - 1) = 98 df
WITHIN: n = 500 cases, minus m = 2 condition means, minus 2k = 100 group means = 398 df
Back to the ANOVA framework
Mixed model developments lie outside familiar fixed-effects models and are treated in the context of random-effects ANOVA
GLM—regression format: Wald χ2(1) or Wald z tests for coefficients (as in SEM)
GLMM—ANOVA format: t tests for coefficients, or Type III F tests for coefficients
Recall that t approaches z asymptotically, and t2(df) = F(1, df)
Common Software for
Generalized Linear Model and Generalized Linear Mixed Model

Software   Generalized Linear Model (GLM)   Generalized Linear Mixed Model (GLMM)   Notes
SAS        GENMOD, FMM                      GLIMMIX
SPSS       GENLIN                           GENLINMIXED                             PL estimation only (SPSS 23)
R          glm                              glmm
Conditional versus Marginal Models
Conditional versus Marginal Models
Two aspects of a data structure in combination
lead us to a new level of complexity
(1) Categorical or limited dependent variables
as outcomes (binary, counts, proportions,
survival times) plus
(2) Clustered data
We have a choice of two models for
characterizing the fixed effects of predictors—
Conditional versus Marginal Models
Distinct approaches to estimation of fixed effects
Acknowledgment: Stefany Coxe
Conditional versus Marginal Coefficient Estimates
• Marginal estimates: estimates of parameters in the total
population ignoring clustering (i.e., ignoring the random
effects*)
– Conceptually, estimating the fixed effects, say the
Level 1 intercept, by pooling all the data regardless of
cluster
– In ANOVA, this would be like calculating the estimate
of the population grand mean by averaging all the
individual cases
*correlated errors are accounted for, not ignored; statistically proper
Conditional versus Marginal Coefficient Estimates
• Conditional estimates: estimates of
parameters that take into account the
random effects (i.e., taking into account
cluster values)
– Conceptually, estimating the fixed effects,
say the Level 1 intercept, by averaging the
intercepts in each of the clusters
– In ANOVA, this would be like calculating the
estimate of the grand mean as the weighted
average of the means of all the clusters
Conditional versus Marginal Coefficient Estimates—
Why has this not come up before?
• In normal theory based statistical models,
including usual multilevel modeling of
continuous outcomes
– the model is linear, and therefore
– the conditional and marginal estimates are
identical
– the mean of the cluster intercepts estimates
the intercept of the “average cluster” in the
population
– the mean of the cluster intercepts also
estimates the overall population intercept
Conditional versus Marginal Coefficient Estimates—
Why is this coming up now?
• GLMMs have non-linear relationships of the
predictors to the outcomes, and
• Estimation of fixed effects in GLMMs involves integrating over the random effects
• Therefore the marginal and the conditional
estimates are not equal
• Conditional and marginal models produce
different effects and conclusions;
• Neither is “correct”
• Choice of model depends on study design and
research questions
Conditional versus Marginal Coefficient Estimates—
Interpretation of Coefficients
• Conditional Effects:
Effects in the population
of all clusters:
the average cluster
• Marginal Effects: Effects
in the population of all
individual cases:
the average case
Generalized Linear Mixed Models (GLMM): used for estimating conditional models
Generalized Estimating Equations (GEE): used for estimating marginal models
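For completeness, a hedged R sketch of a marginal (GEE) counterpart to the conditional GLMM, using the geepack package and the same hypothetical variable names (dat, remarks, identc, condc, group):

library(geepack)

# Marginal (population-averaged) Poisson model for clustered counts,
# with an exchangeable working correlation within groups
fit_gee <- geeglm(remarks ~ identc * condc,
                  id     = group,
                  family = poisson,
                  corstr = "exchangeable",
                  data   = dat)
summary(fit_gee)   # robust (sandwich) standard errors for the marginal effects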
In closing
The GLM and the GLMM permit us to
examine a far wider range of outcome
variables than do classic normal theory
based models.
All the models I presented today
are available in familiar statistical
software.
I hope you will find them of use in
your own research.
Thank you
Arizona State Flower: Saguaro Cactus Blossom
Arizona State Bird: Cactus Wren