Transcript Lecture 24

Research Methodology
Lecture No. 24
Recap Lecture
In the last lecture we discussed:
• Frequencies
• Bar charts and pie charts
• Histogram
• Stem and leaf display
• Pareto diagram
• Box plot
• SPSS cross tabulation
Lecture Objectives
Getting a feel for the data
• Measure of central tendency
• Measure of Dispersion
• Relationship Between Variables
• χ² Test
Lecture Objectives Cont.
Testing the goodness of data
Reliability
• Cronbach’s alpha
• Split half
Validity
• Factorial
• Criterion
• Convergent
• Discriminant
Measure of Central Tendency
There are three measures of central tendency
1. The mean
2. The median
3. The mode
Measure of Central Tendency Cont.
The mean
• The mean, or average, is a measure of central tendency that offers a general picture of the data.
• The mean or average of a set of, say, ten observations is the sum of the ten individual observations divided by ten (the total number of observations).
• Example: (54 + 50 + 35 + 67 + 50) / 5 = 51.2
Measure of Central Tendency Cont.
The median
• The median is the central item in a group of
observations when they are arrayed in either an
ascending or a descending order.
• 35, 50, 50, 54, 67 → median = 50
Measure of Central Tendency Cont.
The mode
• In some cases, a set of observations does not
lend itself to meaningful representation through
either the mean or the median, but can be
signified by the most frequently occurring
phenomenon.
• 54, 50, 35, 67, 50 → mode = 50
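A minimal sketch, not part of the original slides, that reproduces the three measures for the example data using Python's standard-library statistics module:

from statistics import mean, median, mode

scores = [54, 50, 35, 67, 50]
print(mean(scores))    # (54 + 50 + 35 + 67 + 50) / 5 = 51.2
print(median(scores))  # middle item of 35, 50, 50, 54, 67 -> 50
print(mode(scores))    # most frequently occurring value -> 50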
Measure of Dispersion
• Dispersion is the variability that exists in a set of
observations.
• Two sets of data might have the same mean, but
the dispersion could be different.
Set 1: 54, 50, 50, 35, 67 → mean = 51.2, standard deviation ≈ 11.43
Set 2: 50, 34, 50, 35, 87 → mean = 51.2, standard deviation ≈ 21.46
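As a quick check, not in the slides, the two sets above can be verified in Python; statistics.stdev computes the sample standard deviation (dividing by n − 1), which matches the values shown:

from statistics import mean, stdev

set_1 = [54, 50, 50, 35, 67]
set_2 = [50, 34, 50, 35, 87]
print(mean(set_1), stdev(set_1))  # 51.2, approx. 11.43
print(mean(set_2), stdev(set_2))  # 51.2, approx. 21.46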
Measure of Dispersion Cont.
The three measures of dispersion connected with
the mean are:
1. The range
2. The variance
3. The standard deviation
Measure of Dispersion Cont.
The range
• Range refers to the spread between the extreme values (the minimum and the maximum) in a set of observations.
• 54, 50, 35, 67, 50 → (35, 67)
Measure of Dispersion Cont.
The variance
• The variance is calculated by subtracting the mean from each of the observations in the data set, squaring each of these differences, and dividing the total of the squared differences by the number of observations.
Measure of Dispersion Cont.
The standard deviation
• The standard deviation, another measure of dispersion for interval- and ratio-scaled data, offers an index of the spread of a distribution or the variability in the data.
• It is a very commonly used measure of dispersion and is simply the square root of the variance.
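A minimal sketch of all three dispersion measures for the example data. Note that the slide's variance definition divides by the number of observations (population form), whereas the standard deviations in the table earlier divide by n − 1 (sample form); both are shown:

from statistics import pvariance, pstdev, variance, stdev

data = [54, 50, 35, 67, 50]
# Range: the extreme (minimum and maximum) values of the set.
print(min(data), max(data))         # 35, 67
# Variance as defined on the slide: mean of squared deviations (divide by n).
print(pvariance(data))              # 104.56
# Standard deviation: square root of that variance.
print(pstdev(data))                 # approx. 10.23
# Sample versions (divide by n - 1), as typically reported by packages such as SPSS.
print(variance(data), stdev(data))  # 130.7, approx. 11.43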
Relationship Between Variables
• Parametric tests for testing relationships between variables, such as the Pearson correlation, use interval and ratio scales.
• Nonparametric tests are available to assess the
relationship between variables measured on a
nominal or an ordinal scale.
• Spearman’s rank correlation and Kendall’s rank correlation are used to examine relationships between ordinal variables.
Pearson Correlation
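Purely as an added illustration (hypothetical scores, SciPy's pearsonr rather than SPSS), the Pearson correlation and its p-value for two interval-scale variables can be obtained as follows:

from scipy.stats import pearsonr

# Hypothetical interval/ratio-scale scores on two variables.
x = [54, 50, 35, 67, 50, 61, 45]
y = [58, 52, 40, 70, 49, 65, 47]
r, p = pearsonr(x, y)
print(r, p)  # r near +1 or -1 = strong association; the sign gives the direction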
Rank Correlations
• To test the strength and direction of
association that exists between two
variables
• The variables are measured on an ordinal scale.
• E.g., students’ scores in two different exams, i.e., English and Math
• Correlations (SPSS)
» Bivariate
» Spearman
– Check the values of r and p
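A parallel sketch for the rank-correlation case, using hypothetical exam scores and SciPy's spearmanr; this mirrors the Bivariate/Spearman steps listed above:

from scipy.stats import spearmanr

# Hypothetical scores of the same students in two exams.
english = [88, 72, 95, 60, 81, 77]
math    = [84, 70, 90, 65, 79, 80]
rho, p = spearmanr(english, math)
print(rho, p)  # check the value of r (rho) and p, as noted on the slide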
Relationship Between Nominal Variables:
χ² Test
• Sometimes we want to know if there is a
relationship between two nominal variables or
whether they are independent of each other.
• The χ² test compares the expected frequencies (based on probability) with the observed frequencies.
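A minimal sketch of the χ² test of independence on a hypothetical 2 × 2 cross-tabulation, using scipy.stats.chi2_contingency, which compares the observed frequencies with those expected under independence:

from scipy.stats import chi2_contingency

# Hypothetical observed frequencies: two nominal variables, two levels each.
observed = [[30, 20],
            [10, 40]]
chi2, p, dof, expected = chi2_contingency(observed)
print(chi2, p)   # a small p-value suggests the two variables are related
print(expected)  # frequencies expected if the variables were independent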
Testing Goodness of Data
Goodness of data can be tested by two measures
• Reliability
• Validity
Reliability
• The reliability of a measure is established by
testing for both consistency and stability.
• Consistency indicates how well the items measuring a concept hang together as a set.
Reliability Cont.
• Cronbach’s alpha is a reliability coefficient that
indicates how well the items in a set are
positively correlated to one another.
• Cronbach’s alpha is computed in terms of the
average intercorrelations among the items
measuring the concept.
• The closer Cronbach’s alpha is to one, the
higher the internal consistency reliability.
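SPSS reports Cronbach’s alpha through its Reliability Analysis procedure; purely as an illustrative sketch, the coefficient can also be computed from its standard formula, alpha = k/(k − 1) × (1 − Σ item variances / variance of the summed score), on a hypothetical item matrix:

import numpy as np

def cronbach_alpha(items):
    # items: respondents x items matrix of scores.
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses of five people to a four-item scale.
scale = [[4, 5, 4, 4],
         [3, 3, 4, 3],
         [5, 5, 5, 4],
         [2, 3, 2, 3],
         [4, 4, 5, 4]]
print(cronbach_alpha(scale))  # closer to 1 = higher internal consistency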
Reliability Cont.
• Another measure of consistency reliability used in specific situations is the split-half reliability coefficient.
• Split-half reliability is obtained to test for consistency when more than one scale, dimension, or factor is assessed.
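A sketch of the split-half idea on hypothetical data: sum the odd- and even-numbered items separately, correlate the two halves, and (a common addition not on the slide) adjust the result with the Spearman–Brown formula:

import numpy as np
from scipy.stats import pearsonr

# Hypothetical respondents x items matrix for a six-item measure.
items = np.array([[4, 5, 4, 4, 5, 4],
                  [3, 3, 4, 3, 3, 3],
                  [5, 5, 5, 4, 5, 5],
                  [2, 3, 2, 3, 2, 3],
                  [4, 4, 5, 4, 4, 4]])
half_1 = items[:, ::2].sum(axis=1)   # odd-numbered items
half_2 = items[:, 1::2].sum(axis=1)  # even-numbered items
r, _ = pearsonr(half_1, half_2)
print(r, 2 * r / (1 + r))            # raw half-correlation and Spearman-Brown estimate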
Validity
• Factorial validity can be established by
submitting the data for factor analysis.
• Factor analysis reveals whether the dimensions
are indeed tapped by the items in the measure,
as theorized.
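Not how SPSS runs it internally, but as a rough sketch of the same idea: fit a factor model to a hypothetical (synthetic) respondents × items matrix with scikit-learn’s FactorAnalysis and inspect whether the items load on the theorized dimensions:

import numpy as np
from sklearn.decomposition import FactorAnalysis

# Synthetic data: six items designed to tap two underlying dimensions.
rng = np.random.default_rng(0)
latent = rng.normal(size=(100, 2))                     # two latent factors
loadings = np.array([[1.0, 0.9, 0.8, 0.0, 0.0, 0.0],   # items 1-3 tap factor 1
                     [0.0, 0.0, 0.0, 1.0, 0.9, 0.8]])  # items 4-6 tap factor 2
X = latent @ loadings + rng.normal(scale=0.3, size=(100, 6))

fa = FactorAnalysis(n_components=2, random_state=0).fit(X)
print(fa.components_.round(2))  # estimated loadings: items should group as theorized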
Validity Cont.
• Criterion-related validity can be established by testing for the power of the measure to differentiate individuals who are known to be different.
Validity Cont.
• Convergent validity can be established when there is a high degree of correlation between two different sources responding to the same measure.
• Example: Both supervisors and subordinates
respond similarly to a perceived reward system
measure administered to them.
Validity Cont.
• Discriminant validity can be established when two distinctly different concepts are not correlated to each other.
• Example: Courage and honesty, leadership and
motivation, attitudes and behaviors.
SPSS
• Cronbach Alpha (Reliability)
• Factor Analysis (Validity)
Recap
• Goodness of data is measured by reliability and
validity.
• Three measures of central tendency: mean,
median and mode.
• Dispersion is the variability in a set of observations.
• Three measures of dispersion are: range,
variance and standard deviation.
• Correlation
• SPSS: Cronbach’s alpha (reliability) and factor analysis (validity)