eigendecomposition

metamong 2023. 2. 19.

 

๐Ÿคพ๐Ÿฝ‍โ™‚๏ธ ์ €๋ฒˆ ์‹œ๊ฐ„์— eigenvalue์™€ eigenvector์— ๋Œ€ํ•ด ํ•™์Šตํ–ˆ๋‹ค.

 

Previous post: eigenvalue & eigenvector (sh-avid-learner.tistory.com)

 

๐Ÿคพ๐Ÿฝ‍โ™‚๏ธ ์ด ๋‘ ๊ฐ€์ง€ ๊ฐœ๋…์„ ์‚ฌ์šฉํ•ด eigen decomposition์„ ๊ฐ„๋‹จํžˆ ์•Œ์•„๋ณด๋ ค ํ•จ


eigendecomposition์ด๋ž€?

๐Ÿคพ๐Ÿฝ‍โ™‚๏ธ ์ •์˜

$$A = V \Lambda V^{-1}$$

· A: an n×n square matrix

· V: the matrix whose columns are the eigenvectors of A

· $\Lambda$: the diagonal matrix whose diagonal entries are the eigenvalues ($\Lambda = \mathrm{diag}(\lambda_1, \lambda_2, \ldots, \lambda_n)$)

 

๐Ÿคพ๐Ÿฝ‍โ™‚๏ธ ex) A๋Š” 2x2 ์ •์‚ฌ๊ฐํ–‰๋ ฌ, eigenvector $v_1$๊ณผ $v_2$, ๊ทธ๋ฆฌ๊ณ  ๋‘ ๊ฐœ์˜ eigenvalue($\lambda_1, \lambda_2)$๋ฅผ ๋Œ€๊ฐ์„ฑ๋ถ„์œผ๋กœ ๊ฐ–๋Š” $\Lambda$

 

① Since $\lambda_1$ and $\lambda_2$ are eigenvalues of the matrix A,

$$Av_1 = \lambda_1 v_1, \quad Av_2 = \lambda_2 v_2$$

 

② Let V be the matrix whose columns are the eigenvectors $v_1$ and $v_2$:

$$v_1 = \begin{bmatrix} v_{11} \\ v_{12} \end{bmatrix}, \quad
v_2 = \begin{bmatrix} v_{21} \\ v_{22} \end{bmatrix}, \quad
V = \begin{bmatrix} v_{11} & v_{21} \\ v_{12} & v_{22} \end{bmatrix}$$

 

③ Combining ① and ②,

$$A \begin{bmatrix}
v_1 & v_2 \\
\end{bmatrix}
=
\begin{bmatrix}
\lambda_1v_1 & \lambda_2v_2 \\
\end{bmatrix} 
=
\begin{bmatrix}
v_1 & v_2 \\
\end{bmatrix}
\begin{bmatrix}
\lambda_1 & 0 \\
0 & \lambda_2 \\
\end{bmatrix} $$

 

④ That is,

$$AV = V\Lambda$$

Since $v_1$ and $v_2$ are linearly independent, V has rank 2, i.e., V is invertible.

 

$$A = V \Lambda V^{-1}$$

This is the eigendecomposition of the matrix A.

 

$$V^{-1}AV = \Lambda$$

This shows that A is a diagonalizable matrix, which requires that the n×n matrix A has n linearly independent eigenvectors.
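
A minimal numpy sketch of this check, reusing the example matrix A from the Python code section below: $V^{-1}AV$ comes out as the diagonal matrix of eigenvalues.

import numpy as np

A = np.array([[5., 1.],
              [3., 3.]])

eigVals, V = np.linalg.eig(A)        # eigenvalues and the eigenvector matrix V
Lam = np.linalg.inv(V) @ A @ V       # V^{-1} A V

print(np.round(Lam, 10))             # diag(6, 2): the eigenvalues on the diagonal
print(eigVals)                       # [6. 2.]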

 

๐Ÿคพ๐Ÿฝ‍โ™‚๏ธ ๊ธฐํ•˜ํ•™์  ์˜๋ฏธ (๋Œ๋ฆฌ๊ณ  - ๋Š˜๋ฆฌ๊ณ  ๋Œ๋ฆฌ๊ณ )

eigendecomposition์„ ์ด์šฉํ•œ ์—ฌ๋Ÿฌ ๊ณ„์‚ฐ

① Computing powers of A

$$A^k = (V \Lambda V^{-1})(V \Lambda V^{-1}) \cdots (V \Lambda V^{-1}) = V \Lambda^k V^{-1}$$

$$(\Lambda^k = \begin{bmatrix}
\lambda_1^k & 0 \\
0 & \lambda_2^k \\
\end{bmatrix})$$
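
A quick numpy sketch of ①, reusing the example matrix A from the code section below (k = 3 is an arbitrary choice):

import numpy as np

A = np.array([[5., 1.],
              [3., 3.]])
k = 3

eigVals, V = np.linalg.eig(A)

# A^k = V Lambda^k V^{-1}: only the diagonal eigenvalues are raised to the k-th power
Ak = V @ np.diag(eigVals**k) @ np.linalg.inv(V)

print(np.round(Ak))                     # via eigendecomposition
print(np.linalg.matrix_power(A, k))     # direct computation, same result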

 

② Computing the inverse of A

$$A^{-1} = (V \Lambda V^{-1})^{-1} = V \Lambda^{-1} V^{-1}$$
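
Likewise a small check of ② (same example matrix A; this only works when no eigenvalue is 0, i.e. A is invertible):

import numpy as np

A = np.array([[5., 1.],
              [3., 3.]])

eigVals, V = np.linalg.eig(A)

# A^{-1} = V Lambda^{-1} V^{-1}: just take the reciprocal of each eigenvalue
A_inv = V @ np.diag(1.0 / eigVals) @ np.linalg.inv(V)

print(np.round(A_inv, 6))                # via eigendecomposition
print(np.round(np.linalg.inv(A), 6))     # direct inverse, same result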

 

③ Computing det(A) (the product of all eigenvalues)

$$det(A)$$

$$= det(V \Lambda V^{-1}) = det(V)det(\Lambda)det(V^{-1})$$

$$= det(\Lambda) = \lambda_1 \lambda_2 \lambda_3 ... = \prod_{i=1}^{n} \lambda_i$$
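
A short numpy check of ③ with the same example matrix A:

import numpy as np

A = np.array([[5., 1.],
              [3., 3.]])

eigVals, _ = np.linalg.eig(A)

print(np.prod(eigVals))     # product of eigenvalues: 6 * 2 = 12
print(np.linalg.det(A))     # det(A) = 5*3 - 1*3 = 12 (up to floating-point error)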

 

④ Computing the trace tr(A) (the sum of all eigenvalues)

$$tr(A)$$

$$= tr(V\Lambda V^{-1}) = tr(\Lambda V^{-1} V)$$

$$= tr(\Lambda) = \lambda_1 + \lambda_2 + ... = \sum_{i=1}^{n} \lambda_i$$
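
And the corresponding check of ④:

import numpy as np

A = np.array([[5., 1.],
              [3., 3.]])

eigVals, _ = np.linalg.eig(A)

print(np.sum(eigVals))   # sum of eigenvalues: 6 + 2 = 8
print(np.trace(A))       # tr(A) = 5 + 3 = 8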

 

⑤ A rank-deficient A has det(A) = 0, so by ③ at least one eigenvalue must be 0.
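
A quick check of ⑤ with an assumed rank-deficient matrix B (its second row is twice the first):

import numpy as np

B = np.array([[1., 2.],
              [2., 4.]])

eigVals, _ = np.linalg.eig(B)

print(np.linalg.matrix_rank(B))   # 1 (rank-deficient)
print(np.linalg.det(B))           # 0.0 (up to floating-point error)
print(eigVals)                    # one eigenvalue is 0, the other is 5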

Python code

① Showing $AV = V\Lambda$

import numpy as np
A = np.array([[5,1],
              [3,3]])

eigVals, V = np.linalg.eig(A)   # eigenvalues and the matrix whose columns are the eigenvectors

v1 = V[:, 0]   # eigenvector for eigVals[0] (= 6)
v2 = V[:, 1]   # eigenvector for eigVals[1] (= 2)

np.dot(A, v1) #array([4.24264069, 4.24264069])
eigVals[0]*v1 #array([4.24264069, 4.24264069])

np.dot(A, v2) #array([-0.63245553,  1.8973666 ])
eigVals[1]*v2 #array([-0.63245553,  1.8973666 ])

np.linalg.matrix_rank(V) #2: the eigenvectors are linearly independent, so V is invertible

 

② Verifying $A = V\Lambda V^{-1}$

#eigendecomposition: rebuild A from V, Lambda, and V^{-1}
L = np.diag(eigVals)   # Lambda = diag(6, 2)

print(np.dot(np.dot(V, L), np.linalg.inv(V)))

'''
[[5. 1.]
 [3. 3.]]
'''

* Source 1) https://rfriend.tistory.com/183

* Source 2) 혁펜하임 💚 https://youtu.be/PP9VQXKvSCY

 
