Lecture notes


Statistics and Linear Algebra
(the real thing)
Vector
Definitions


A vector is a rectangular arrangement of numbers in several rows and one
column. A vector is denoted by a lowercase bold letter: x
A vector element is the value (a scalar) in a given row.
The number of elements gives the vector's dimensionality.

    x = [6, 1]ᵀ, so x1 = 6 and x2 = 1

Since there are two elements, the dimensionality of x is 2.
(Each element can be thought of as a subject's score.)
The transpose of a vector, denoted ᵀ (or '), is a rotation of the columns and rows:

    x = [6]        xᵀ = [6 1]
        [1]
Vector


If the dimensionality of a vector is less than or equal to 3, it can be represented
graphically.
A vector is represented as an arrow (it has an orientation and a length).

    x = [6, 1]ᵀ
Vector
Operations: scalar multiplication


When a vector is multiplied by a scalar, each element of the vector is multiplied by
the scalar.
When a vector is multiplied by a scalar (number), its length is scaled by
the factor of the scalar.

    s = 1.5

    s·x = 1.5·[6, 1]ᵀ = [9, 1.5]ᵀ
Vector
Operations: -1 multiplication

When a vector is multiplied by -1, its direction is reversed.

    s = -1

    s·x = -1·[6, 1]ᵀ = [-6, -1]ᵀ
Vector
Operations: Addition of two vectors


When two vectors are added together, we sum their corresponding elements.
Graphically, we place the beginning of the second vector at the end of the first.

    x = [6, 1]ᵀ and y = [2, 5]ᵀ

    x + y = [6, 1]ᵀ + [2, 5]ᵀ = [8, 6]ᵀ
Vector
Operations: Subtraction of two vectors


When one vector is subtracted from another, we subtract the corresponding elements.
Graphically, we place the beginning of the second vector at the end of the first one,
once the second vector has been multiplied by -1.

    x = [6, 1]ᵀ and y = [2, 5]ᵀ

    x - y = [6, 1]ᵀ - [2, 5]ᵀ = [6, 1]ᵀ + [-2, -5]ᵀ = [4, -4]ᵀ

What will 2x - 3y give?
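One way to check the exercise above with NumPy (a sketch; the variable names are mine):

```python
import numpy as np

x = np.array([6.0, 1.0])
y = np.array([2.0, 5.0])

# Combine scalar multiplication with subtraction, element by element.
result = 2 * x - 3 * y
print(result)  # -> [  6. -13.]
```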
Matrices
Definitions



A matrix can be viewed as a collection of vectors.
A matrix is denoted by an uppercase bold letter: M
The matrix dimension is given by its number of rows and columns.
Example: 2 rows and 4 columns: 2×4.
A vector can be viewed as a matrix with many rows and one column.
A matrix is called "square" if its dimension is n×n.
A matrix element is located at the intersection of a given row and column, denoted
Mij.

    M = [1 2 5 6]
        [2 4 5 6]

    M13 = 5
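The same element lookup in NumPy (note that the slide's Mij notation is 1-based, while NumPy indexing is 0-based):

```python
import numpy as np

M = np.array([[1, 2, 5, 6],
              [2, 4, 5, 6]])

print(M.shape)  # -> (2, 4): 2 rows and 4 columns
# The slide's M13 (row 1, column 3) is M[0, 2] with 0-based indexing.
print(M[0, 2])  # -> 5
```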
Matrices
Operation: multiplication of matrix by a scalar

When a matrix is multiplied by a scalar, each element is multiplied by the scalar.

    s = 1.5

    s·M = 1.5·[1 2 5 6] = [1.5 3 7.5 9]
              [2 4 5 6]   [3   6 7.5 9]
Matrices
Operation: Product of two vectors

If two vectors have the same dimension, then they can be multiplied together.
There are two possible results: a) a scalar or b) a matrix.

Scalar (inner product, dot product)

Two vectors will output a scalar if the first vector is transposed before being multiplied
with the second vector (of equal dimension). Each element of the first vector is multiplied by
the corresponding element of the second vector, and the resulting products are summed up.

    x = [x1, x2, x3]ᵀ and y = [y1, y2, y3]ᵀ

    xᵀy = [x1 x2 x3] [y1]  = x1·y1 + x2·y2 + x3·y3
                     [y2]
                     [y3]

    x = [4, 6]ᵀ and y = [3, 5]ᵀ

    xᵀy = ?

If we divide xᵀy by the corresponding degrees of freedom (n-1), we obtain the
covariance between the two variables (if the means are zero).
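The inner product in NumPy (a sketch using the example vectors; names are mine):

```python
import numpy as np

x = np.array([4.0, 6.0])
y = np.array([3.0, 5.0])

# Inner (dot) product: multiply corresponding elements, then sum.
inner = x @ y       # same as (x * y).sum()
print(inner)        # -> 42.0  (4*3 + 6*5)

# For mean-zero variables, inner / (n - 1) would be their covariance.
n = len(x)
print(inner / (n - 1))
```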
Matrices


Operation: Product of two vectors
Matrix (outer product)
Two vectors will output a matrix if the second vector is transposed before being
multiplied. Each element of the first vector's column is multiplied by the
corresponding element of the second vector's row.

    x = [x1, x2, x3]ᵀ and y = [y1, y2, y3]ᵀ

    xyᵀ = [x1] [y1 y2 y3]  = [x1·y1  x1·y2  x1·y3]
          [x2]               [x2·y1  x2·y2  x2·y3]
          [x3]               [x3·y1  x3·y2  x3·y3]

    x = [4, 6]ᵀ and y = [3, 5]ᵀ

    xyᵀ = ?
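The outer product in NumPy (a sketch; names are mine):

```python
import numpy as np

x = np.array([4, 6])
y = np.array([3, 5])

# Outer product: a column vector times a row vector gives a matrix.
outer = np.outer(x, y)
print(outer)
# [[12 20]
#  [18 30]]
```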
Matrices
Operation: Product of two matrices

Two matrices can be multiplied together if the number of columns of the first
matrix is equal to the number of rows of the second matrix.
Ex: If A is an m×3 matrix, then B must be a 3×n matrix. The resulting matrix C will
be an m×n matrix.
The matrix product is not commutative: AB ≠ BA.

    A = [ 2 -3 1]  and  B = [3 1]
        [-1  4 0]           [4 2]
                            [5 3]

    C = AB = [ (2·3)+(-3·4)+(1·5)   (2·1)+(-3·2)+(1·3)]  = [-1 -1]
             [(-1·3)+(4·4)+(0·5)   (-1·1)+(4·2)+(0·3) ]    [13  7]

    X = [1 2 3 4]  and  Y = [9 8 7]  ;  XY = ?
        [5 6 7 8]           [6 5 4]
                            [3 2 1]
                            [0 9 8]
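A NumPy check of the worked product and the exercise (a sketch; A, B, X and Y are entered as I read them off the slide):

```python
import numpy as np

A = np.array([[2, -3, 1],
              [-1, 4, 0]])
B = np.array([[3, 1],
              [4, 2],
              [5, 3]])

# A is 2x3 and B is 3x2, so the product C = AB is 2x2.
C = A @ B
print(C)
# [[-1 -1]
#  [13  7]]

# The exercise: X (2x4) times Y (4x3) gives a 2x3 matrix.
X = np.array([[1, 2, 3, 4],
              [5, 6, 7, 8]])
Y = np.array([[9, 8, 7],
              [6, 5, 4],
              [3, 2, 1],
              [0, 9, 8]])
print(X @ Y)
```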
Matrices
Identity matrix



There is a special kind of matrix that plays the role of the arithmetic multiplication
by one: 5·1 = 5.
This matrix is called the identity matrix, denoted by I; all its diagonal elements are
set to one and the remaining elements to 0.
Since this matrix has the same number of columns and rows: AI = A and IA = A.

    I = [1 0 0 0]   and   A = [1 2 3 4]
        [0 1 0 0]             [5 6 7 8]
        [0 0 1 0]             [9 0 1 2]
        [0 0 0 1]             [3 4 5 6]

    IA = [1 0 0 0] [1 2 3 4]  = [1 2 3 4]
         [0 1 0 0] [5 6 7 8]    [5 6 7 8]
         [0 0 1 0] [9 0 1 2]    [9 0 1 2]
         [0 0 0 1] [3 4 5 6]    [3 4 5 6]
Matrices
Addition-Operator Vector

A vector whose elements are all equal to 1. It is denoted by 1 (with as many
elements as the product requires).

    1 = [1, 1, ..., 1]ᵀ

    X = [30 15]
        [25 10]
        [28 12]
        [32 14]
        [22 13]

    X1 = [30 15] [1]  = [45]
         [25 10] [1]    [35]
         [28 12]        [40]
         [32 14]        [46]
         [22 13]        [35]

Here 1 has two elements (one per column of X), so X1 gives the row sums.
Matrices
The Norm-Operation of a vector
6 
x 
1 
1
6
By Pythagoras, x  62  12  37
In vector notation, x  x T x 

6.08276
6 
  37
1
 
6 1 
If the norm is divided by the degrees of freedom (n-1), then the standard
deviation (if the mean is zero) is obtained.
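The norm computed both ways in NumPy (a sketch; names are mine):

```python
import numpy as np

x = np.array([6.0, 1.0])

# The norm is the square root of x'x (Pythagoras).
norm = np.sqrt(x @ x)
print(norm)               # sqrt(37), about 6.08276
print(np.linalg.norm(x))  # NumPy's built-in norm gives the same value

# For mean-zero data, norm / sqrt(n - 1) is the standard deviation.
n = len(x)
print(norm / np.sqrt(n - 1))
```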
Matrices
The Determinant of Matrix



The determinant is a function that associates a scalar, det(A), with every n×n square matrix A.
It can be interpreted as the (signed) volume spanned by the matrix.
In 2D, it is the area of the parallelogram spanned by the two column vectors.

    A = [4 3]
        [2 5]
Matrices
The Determinant of Matrix

The determinant is a function that associates a scalar, det(A), with every n×n square
matrix A.

[Figure: the parallelogram spanned by the columns of A, inscribed in a bounding
rectangle S4 and surrounded by triangles T1, T2 and rectangles S3.]

S4 = the area of the bounding rectangle, with S1 = T1 + T1 and S2 = T2 + T2:

    Area = S4 - 2T1 - 2T2 - 2S3
    Area = S4 - S1 - S2 - 2S3

    A = [4 3]
        [2 5]

    Area = (4+2)(5+3) - 4·3 - 2·5 - 2·(2·3) = 14
Matrices
The Determinant of Matrix

The same derivation for a general 2×2 matrix:

    A = [a b]
        [c d]

S4 = the area of the bounding rectangle, with S1 = T1 + T1 and S2 = T2 + T2:

    Area = S4 - 2T1 - 2T2 - 2S3 = S4 - S1 - S2 - 2S3
    Area = (a+c)(b+d) - ab - cd - 2bc
    Area = ab + ad + bc + cd - ab - cd - 2bc
    Area = ad - bc

    A = [4 3]
        [2 5]

    det(A) = 4·5 - 2·3 = 14
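The ad - bc formula checked against NumPy's general determinant routine (a sketch; names are mine):

```python
import numpy as np

A = np.array([[4.0, 3.0],
              [2.0, 5.0]])

# For a 2x2 matrix, det(A) = ad - bc.
det_by_hand = 4 * 5 - 2 * 3
print(det_by_hand)          # -> 14
print(np.linalg.det(A))     # the same value, up to floating-point error
```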
Matrices
The Inverse of Matrix

In analogy with the exponential notation for the reciprocal of a number (1/a = a⁻¹), the
inverse of a matrix is denoted A⁻¹.
If a matrix is square (and its determinant is not zero), then A⁻¹A = AA⁻¹ = I.
For a 2×2 matrix, the operation is:

    A = [a b]  ;  A⁻¹ = 1/(ad-bc) [ d -b]  = 1/det(A) [ d -b]
        [c d]                     [-c  a]             [-c  a]

    A = [1 4]  ;  A⁻¹ = 1/(2-12) [ 2 -4]  = [-0.2  0.4]
        [3 2]                    [-3  1]    [ 0.3 -0.1]

    AA⁻¹ = [1 4] [-0.2  0.4]  = [1 0]
           [3 2] [ 0.3 -0.1]    [0 1]

    A⁻¹A = [-0.2  0.4] [1 4]  = [1 0]
           [ 0.3 -0.1] [3 2]    [0 1]
Linear Algebra and Statistics
The Normalization of Vector


A vector is normalized if its length (norm) is equal to one.
Normalizing a vector is analogous to data standardization.

    z = x / ‖x‖,  so that  zᵀz = 1

    x = [3, 4]ᵀ ;  ‖x‖ = √25 = 5

    z = [3, 4]ᵀ / 5 = [0.6, 0.8]ᵀ
Linear Algebra and Statistics
Relation between two vectors

If two variables (u and v) have the same scores, then the two vectors are
superposed on each other. However, as the two variables differ from one
another, the angle between them increases.
Linear Algebra and Statistics
Relation between two vectors

The greater the angle between the two vectors, the less they share in common.
If the angle reaches 90°, then they have no common part.
Linear Algebra and Statistics
Relation between two vectors


The cosine of that angle is the correlation coefficient.
If the angle is null (or 180°), then the cosine is 1 (or -1), indicating a perfect
relation. If the angle is 90° (or 270°), then the cosine is 0, indicating an
absence of relation. For mean-zero variables:

    cos θ = uᵀv / (‖u‖·‖v‖) = (Σᵢ uᵢvᵢ) / (‖u‖·‖v‖) = cov_uv / (s_u·s_v) = r_uv
Linear Algebra and Statistics
Relation between two vectors
    cos θ = uᵀv / (‖u‖·‖v‖) = (Σᵢ uᵢvᵢ) / (‖u‖·‖v‖) = cov_uv / (s_u·s_v) = r_uv

    u = [6, 1]ᵀ and v = [2, 5]ᵀ

    uᵀv = 17

    ‖u‖ = √37 and ‖v‖ = √29

    cos θ = uᵀv / (‖u‖·‖v‖) = 17 / (√37·√29) ≈ 0.52 = r_uv

    uᵀv = cos θ · ‖u‖·‖v‖
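The cosine computation in NumPy (a sketch; names are mine):

```python
import numpy as np

u = np.array([6.0, 1.0])
v = np.array([2.0, 5.0])

# Cosine of the angle between u and v; for mean-zero data this
# is the correlation coefficient.
cos_theta = (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))
print(u @ v)                # -> 17.0
print(round(cos_theta, 2))  # -> 0.52
```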
Linear Algebra and Statistics
The Mean


From the previously defined addition-operator vector, the mean is straightforward.
Let us say that we have one variable with 5 participants.

    X = [30, 25, 28, 32, 22]ᵀ ;  x̄ = ?

    x̄ = 1ᵀX / n = [1 1 1 1 1] [30, 25, 28, 32, 22]ᵀ / 5 = 27.4
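The mean as an inner product with the ones vector (a NumPy sketch; names are mine):

```python
import numpy as np

X = np.array([30.0, 25.0, 28.0, 32.0, 22.0])
n = len(X)

# 1'X sums the elements; dividing by n gives the mean.
ones = np.ones(n)
mean = ones @ X / n
print(mean)  # -> 27.4
```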
Linear Algebra and Statistics
The Mean (several variables)

Let us say that we have two variables with 5 participants.

    M = [30 15]
        [25 10]
        [28 12]   ,  x̄ = [27.4  12.8]
        [32 14]
        [22 13]

Let us define a 5×2 mean-score matrix as

    X̄ = [27.4 12.8]
        [27.4 12.8]
        [27.4 12.8]
        [27.4 12.8]
        [27.4 12.8]
Linear Algebra and Statistics
Example of statistical Import

Deviation Score Matrix

    X = M - X̄

    M = [30 15]        X̄ = [27.4 12.8]
        [25 10]            [27.4 12.8]
        [28 12]   and      [27.4 12.8]
        [32 14]            [27.4 12.8]
        [22 13]            [27.4 12.8]

    X = M - X̄ = [30 15]   [27.4 12.8]   [ 2.6  2.2]
                [25 10] - [27.4 12.8] = [-2.4 -2.8]
                [28 12]   [27.4 12.8]   [ 0.6 -0.8]
                [32 14]   [27.4 12.8]   [ 4.6  1.2]
                [22 13]   [27.4 12.8]   [-5.4  0.2]
Linear Algebra and Statistics
Example of statistical Import

Sums of Squares and Cross Products (SSCP)

    SSCP = XᵀX, where X is a deviation score matrix

    X = [ 2.6  2.2]
        [-2.4 -2.8]
        [ 0.6 -0.8]
        [ 4.6  1.2]
        [-5.4  0.2]

    SSCP = XᵀX = [2.6 -2.4  0.6 4.6 -5.4] [ 2.6  2.2]  = [63.2 16.4]
                 [2.2 -2.8 -0.8 1.2  0.2] [-2.4 -2.8]    [16.4 14.8]
                                          [ 0.6 -0.8]
                                          [ 4.6  1.2]
                                          [-5.4  0.2]
Linear Algebra and Statistics
Example of statistical Import

Sums of Squares and Cross Products (SSCP)

Note: the SSCP matrix is square and symmetric.
If the SSCP is divided by the number of degrees of freedom, then we get the
variance and covariance information for all the data.

    SSCP = [63.2 16.4]
           [16.4 14.8]

    VARCOV = [63.2 16.4] / (5-1) = [15.8  4.1]
             [16.4 14.8]          [ 4.1  3.7]

The diagonal elements are the variances (s1·s1 = s1² = 15.8 for the first
variable and s2·s2 = s2² = 3.7 for the second); the off-diagonal elements are
the covariance (s1s2 = s2s1 = Cov12 = 4.1).
Linear Algebra and Statistics
Example of statistical Import

Simple regression

How can we find the weights that optimally describe the following function?

    v̂ = b0 + b1·u

The solution is to find the shadow (projection) of v on u that has the shortest
distance. The shortest distance is the one that crosses the vector u at 90°.
Linear Algebra and Statistics
Example of statistical Import

Simple regression

Therefore, the error e can be defined as:

    e = v - u·b1

where b1 is the value multiplying u that makes the shadow of v the shortest (90°).
Linear Algebra and Statistics
Example of statistical Import

Simple regression


As a consequence, the angle between u and e will also be 90°
Therefore, the correlation (and covariance) between u and e will be zero.
uT e  0

By substitution, we can isolate the b1 coefficient.
uTe  0
u T ( v  u  b1 )  0
u T v  u T u  b1  0
This is the least mean
squared method
u T v  u T u  b1
(u T u) 1 (u T v )  (u T u) 1 (u T u)  b1
(u T u) 1 (u T v )  1 b1  b1
Linear Algebra and Statistics
Example of statistical Import

Simple regression

With 2 variables this is identical to the solution given in textbooks:

    b1 = (uᵀu)⁻¹(uᵀv) = cov_uv / s_u²

Using the deviation score matrix

    X = [ 2.6  2.2]
        [-2.4 -2.8]
        [ 0.6 -0.8]
        [ 4.6  1.2]
        [-5.4  0.2]

whose first column is u and second column is v:

    b1 = ([2.6 -2.4 0.6 4.6 -5.4] [ 2.6])⁻¹ ([2.6 -2.4 0.6 4.6 -5.4] [ 2.2])
                                  [-2.4]                              [-2.8]
                                  [ 0.6]                              [-0.8]
                                  [ 4.6]                              [ 1.2]
                                  [-5.4]                              [ 0.2]

       = (63.2)⁻¹ · 16.4 ≈ 0.26
Linear Algebra and Statistics
Example of statistical Import

Simple regression

The constant b0 is obtained the usual way:

    v̄ = b0 + b1·ū  →  b0 = v̄ - b1·ū

    ū = 27.4 and v̄ = 12.8

    b0 = 12.8 - 0.26·27.4 ≈ 5.69

Therefore, the final regression equation is

    v̂ = 5.69 + 0.26·u
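The full regression recipe from these slides in NumPy, using the two variables from the earlier example (a sketch; names are mine):

```python
import numpy as np

u = np.array([30.0, 25.0, 28.0, 32.0, 22.0])  # predictor
v = np.array([15.0, 10.0, 12.0, 14.0, 13.0])  # outcome

# Center both variables to get deviation scores.
du = u - u.mean()
dv = v - v.mean()

# b1 = (u'u)^-1 (u'v) on the deviation scores; b0 the usual way.
b1 = (du @ dv) / (du @ du)
b0 = v.mean() - b1 * u.mean()
print(round(b1, 2))  # -> 0.26
print(round(b0, 2))  # -> 5.69
```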