[[File:Mona Lisa eigenvector grid.png|thumb|270px|In this [[shear mapping]] the red arrow changes direction but the blue arrow does not. The blue arrow is an eigenvector of this shear mapping, and since its length is unchanged its eigenvalue is 1.]]

An '''eigenvector''' of a [[matrix (mathematics)|square matrix]] is a non-zero [[vector (mathematics)|vector]] that, when [[matrix multiplication|multiplied]] by the matrix, yields a vector that differs from the original vector at most by a [[scalar multiplication|multiplicative scalar]]. For example, if three-element vectors are seen as arrows in three-dimensional space, an eigenvector of a 3 × 3 matrix A is an arrow whose direction is either preserved or exactly reversed after multiplication by A. The corresponding eigenvalue determines how the length and sense of the arrow are changed by the operation. Specifically, a non-zero [[column vector]] v is a ([[#Left_and_right_eigenvectors|right]]) '''eigenvector''' of a matrix A if (and only if) there exists a number \lambda such that A v = \lambda v. The number \lambda is called the '''eigenvalue''' corresponding to that vector. The set of all eigenvectors of a matrix, each paired with its corresponding eigenvalue, is called the '''eigensystem''' of that matrix. Wolfram Research, Inc. (2010) [http://mathworld.wolfram.com/Eigenvector.html ''Eigenvector'']. Accessed on 2010-01-29. William H. Press, Saul A. Teukolsky, William T. Vetterling, Brian P. Flannery (2007), [http://www.nr.com/ ''[[Numerical Recipes]]: The Art of Scientific Computing''], Chapter 11: ''Eigensystems'', pp. 563–597. Third edition, Cambridge University Press. ISBN 9780521880688. An '''eigenspace''' of A is the set of all eigenvectors with the same eigenvalue, together with the [[zero vector]]. An '''eigenbasis''' for A is any [[basis (linear algebra)|basis]] of the space of all vectors that consists entirely of eigenvectors of A. Not every [[real number|real]] matrix has real eigenvalues, but every [[complex number|complex]] matrix has at least one complex eigenvalue. The terms '''characteristic vector''', '''characteristic value''', and '''characteristic space''' are also used for these concepts. The prefix [[wiktionary:eigen|'''eigen-''']] is adopted from the [[German language|German]] word ''eigen'' for "self" or "proper".

These concepts are naturally extended to more general situations, where the set of real scale factors is replaced by any [[field (mathematics)|field]] of [[scalar (mathematics)|scalar]]s (such as [[algebraic numbers|algebraic]] or complex numbers); the set of [[Cartesian coordinates|Cartesian]] vectors \mathbb{R}^n is replaced by any [[vector space]] (such as the [[continuous function]]s, the [[polynomial]]s or the [[trigonometric series]]), and matrix multiplication is replaced by any [[linear operator]] that maps vectors to vectors (such as the [[derivative (calculus)|derivative]] from [[calculus]]). In such cases, the concept of "parallel to" is interpreted as "scalar multiple of", and the "vector" in "eigenvector" may be replaced by a more specific term, as in "[[eigenfunction]]", "[[eigenmode]]", "[[eigenface]]", "[[eigenstate]]", and "[[eigenfrequency]]". Thus, for example, the exponential function f(x) = a^x is an eigenfunction of the derivative operator " {}' ", with eigenvalue \ln a, since its derivative is f'(x) = (\ln a)a^x = (\ln a)f(x).

Eigenvalues and eigenvectors have many applications in both pure and applied mathematics.
They are used in [[matrix factorization]], in [[quantum mechanics]], and in many other areas.

==Definition==

===Eigenvectors and eigenvalues of a real matrix===
[[File:Eigenvalue equation.svg|thumb|right|250px|Matrix A acts by stretching the vector v, not changing its direction, so v is an eigenvector of A.]]

In many contexts, a vector can be assumed to be a list of real numbers; for example, the three coordinates of a point in three-dimensional space, relative to some [[Cartesian coordinates|Cartesian coordinate system]]. (It helps to think of such a vector as the tip of an arrow whose tail is at the origin of the coordinate system.) Two vectors are said to be [[parallel]] to each other (or [[collinearity| collinear]]) if every element of one is the corresponding element of the other times the same scaling factor. For example, the vectors
:u = \begin{bmatrix}1\\3\\4\end{bmatrix}\quad\quad\quad and \quad\quad\quad v = \begin{bmatrix}-20\\-60\\-80\end{bmatrix}
are parallel, because each component of v is -20 times the corresponding component of u. If u and v are arrows in three-dimensional space, the condition "u is parallel to v" means that their arrows lie on the same straight line, and may differ only in length and direction along that line.

The elements of a vector are usually arranged as a matrix with some number n of rows and a single column. If we [[matrix multiplication|multiply]] any square matrix A with n rows and n columns by such a vector v, the result will be another vector w = A v , also with n rows and one column. That is,
:\begin{bmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{bmatrix} \quad\quad is mapped to \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n \end{bmatrix} \;=\; \begin{bmatrix} A_{1,1} & A_{1,2} & \ldots & A_{1,n} \\ A_{2,1} & A_{2,2} & \ldots & A_{2,n} \\ \vdots & \vdots & \ddots & \vdots \\ A_{n,1} & A_{n,2} & \ldots & A_{n,n} \\ \end{bmatrix} \begin{bmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{bmatrix}
where, for each index i,
: w_i = A_{i 1} v_1 + A_{i 2} v_2 + \cdots + A_{i n} v_n = \sum_{j = 1}^{n} A_{i j} v_j

In general, if v is not all zeros, the vectors v and A v will not be parallel. When they ''are'' parallel (that is, when there is some real number \lambda such that A v = \lambda v) we say that v is an '''eigenvector''' of A. In that case, the scale factor \lambda is said to be the '''eigenvalue''' corresponding to that eigenvector.

In particular, multiplication by a 3 × 3 matrix A may change both the direction and the magnitude of an arrow v in three-dimensional space. However, if v is an eigenvector of A with eigenvalue \lambda, the operation may only change its length, and either keep its direction or [[point reflection|flip]] it (make the arrow point in the exact opposite direction). Specifically, the length of the arrow will increase if |\lambda| > 1, remain the same if |\lambda| = 1, and decrease if |\lambda| < 1. Moreover, the direction will be precisely the same if \lambda > 0, and flipped if \lambda < 0. If \lambda = 0, then the length of the arrow becomes zero.

===Examples===
[[Image:Eigenvectors.gif|frame|right|The transformation matrix \bigl[ \begin{smallmatrix} 2 & 1\\ 1 & 2 \end{smallmatrix} \bigr] preserves the direction of vectors parallel to \bigl[ \begin{smallmatrix} 1 \\ 1 \end{smallmatrix} \bigr] (in blue) and \bigl[ \begin{smallmatrix} 1 \\ -1 \end{smallmatrix} \bigr] (in violet). The points that lie on the line through the origin, parallel to an eigenvector, remain on the line after the transformation.
The vectors in red are not eigenvectors, therefore their direction is altered by the transformation. See also: [[:File:Eigenvectors-extended.gif|An extended version, showing all four quadrants]].]]

For the transformation matrix
:A = \begin{bmatrix} 3 & 1\\1 & 3 \end{bmatrix},
the vector
:\mathbf v = \begin{bmatrix} 4 \\ -4 \end{bmatrix}
is an eigenvector with eigenvalue 2. Indeed,
:A \mathbf v = \begin{bmatrix} 3 & 1\\1 & 3 \end{bmatrix} \begin{bmatrix} 4 \\ -4 \end{bmatrix} = \begin{bmatrix} 3 \cdot 4 + 1 \cdot (-4) \\ 1 \cdot 4 + 3 \cdot (-4) \end{bmatrix} = \begin{bmatrix} 8 \\ -8 \end{bmatrix} = 2 \cdot \begin{bmatrix} 4 \\ -4 \end{bmatrix}.
On the other hand, the vector
:\mathbf v = \begin{bmatrix} 0 \\ 1 \end{bmatrix}
is ''not'' an eigenvector, since
:\begin{bmatrix} 3 & 1\\1 & 3 \end{bmatrix} \begin{bmatrix} 0 \\ 1 \end{bmatrix} = \begin{bmatrix} 3 \cdot 0 + 1 \cdot 1 \\ 1 \cdot 0 + 3 \cdot 1 \end{bmatrix} = \begin{bmatrix} 1 \\ 3 \end{bmatrix},
and this vector is not a multiple of the original vector v.

For the matrix
:A= \begin{bmatrix} 0 & 1 & 0\\0 & 0 & 2\\ 0 & 0 & 3\end{bmatrix},
we have
:A \begin{bmatrix} 1\\0\\0 \end{bmatrix} = \begin{bmatrix} 0\\0\\0 \end{bmatrix} = 0 \cdot \begin{bmatrix} 1\\0\\0 \end{bmatrix},\quad\quad
and
:A \begin{bmatrix} 0\\0\\1 \end{bmatrix} = \begin{bmatrix} 0\\0\\3 \end{bmatrix} = 3 \cdot \begin{bmatrix} 0\\0\\1 \end{bmatrix}.
Therefore, the vectors [1,0,0]' and [0,0,1]' are eigenvectors of A, with eigenvalues 0 and 3, respectively. (Here the symbol {}' indicates [[transpose of a matrix|matrix transposition]], in this case turning the row vectors into column vectors.)

For the [[permutation matrix|cyclic permutation matrix]]
:A = \begin{bmatrix} 0 & 1 & 0\\0 & 0 & 1\\ 1 & 0 & 0\end{bmatrix},
we have
:A \begin{bmatrix} 5\\5\\5 \end{bmatrix} = \begin{bmatrix} 5\\5\\5 \end{bmatrix} = 1 \cdot \begin{bmatrix} 5\\5\\5 \end{bmatrix}
Moreover, let c be the complex number -1/2 + \mathbf{i}\sqrt{3}/2 (a primitive cube root of unity), where \mathbf{i}= \sqrt{-1} is the imaginary unit; and let c^* be its [[complex conjugate]], namely -1/2 - \mathbf{i}\sqrt{3}/2. Note that c\cdot c^* = 1 and c^2 = c^*. Then
:A \begin{bmatrix} 1 \\ c \\ c^* \end{bmatrix} = \begin{bmatrix} c\\c^*\\1 \end{bmatrix} = c \cdot \begin{bmatrix} 1\\c\\c^* \end{bmatrix} \quad\quad and \quad\quad A \begin{bmatrix} 1 \\ c^* \\ c \end{bmatrix} = \begin{bmatrix} c^*\\c\\1 \end{bmatrix} = c^* \cdot \begin{bmatrix} 1\\c^*\\c \end{bmatrix}
Therefore, the vectors [5,5,5]', [1,c,c^*]' and [1,c^*,c]' are eigenvectors of A, with eigenvalues 1, c, and c^*, respectively.

The [[identity matrix]] I (whose general element I_{i j} is 1 if i = j, and 0 otherwise) maps every vector to itself. Therefore, every vector is an eigenvector of I, with eigenvalue 1. More generally, if A is a [[diagonal matrix]] (with A_{i j} = 0 whenever i \neq j), and v is a vector parallel to axis i (that is, v_i \neq 0, and v_j = 0 if j \neq i), then A v = \lambda v where \lambda = A_{i i}. That is, the eigenvalues of a diagonal matrix are the elements of its main diagonal. This is trivially the case of ''any'' 1 × 1 matrix.

===General definition===
The concept of eigenvectors and eigenvalues extends naturally to abstract [[linear transformation]]s on abstract [[vector space]]s. Namely, let \mathbb{V} be any vector space over some [[field (algebra)|field]] \mathbb{K} of [[scalar (mathematics)|scalars]], and let T be a linear transformation mapping \mathbb{V} into \mathbb{V}.
We say that a non-zero vector v of \mathbb{V} is an '''eigenvector''' of T if (and only if) there is a scalar \lambda in \mathbb{K} such that
:T(v) = \lambda v.
This equation is called the [[eigenvalue equation]] for T, and the scalar \lambda is the '''eigenvalue''' of T corresponding to the eigenvector v. Note that T(v) means the result of applying the operator T to the vector v, while \lambda v means the product of the scalar \lambda by v. See {{Harvnb|Korn|Korn|2000|loc=Section 14.3.5a}}; {{Harvnb|Friedberg|Insel|Spence|1989|loc=p. 217}}

The matrix-specific definition is a special case of this abstract definition. Namely, the vector space \mathbb{V} is the set of all column vectors of a certain size n×1, and T is the linear transformation that consists in multiplying a vector by the given n × n matrix A.

Some authors allow v to be the [[zero vector]] in the definition of eigenvector.{{Citation|last=Axler|first= Sheldon |title=Linear Algebra Done Right|edition=2nd |chapter=Ch. 5|page= 77}} With that choice, however, every scalar would be an eigenvalue of any linear operator.

===Eigenspace and spectrum===
If v is an eigenvector of T, with eigenvalue \lambda, then any [[scalar multiplication|scalar multiple]] \alpha v of v with nonzero \alpha is also an eigenvector with eigenvalue \lambda, since T(\alpha v) = \alpha T(v) = \alpha(\lambda v) = \lambda(\alpha v). Moreover, if u and v are eigenvectors with the same eigenvalue \lambda, then u+v is also an eigenvector with the same eigenvalue \lambda. Therefore, the set of all eigenvectors with the same eigenvalue \lambda, together with the zero vector, is a [[linear subspace]] of \mathbb{V}, called the '''eigenspace''' of T associated to \lambda.{{Harvnb|Shilov|1977|loc=p. 109}}[[b:The Book of Mathematical Proofs/Algebra/Linear Transformations#Lemma for the eigenspace|Lemma for the eigenspace]] If that subspace has dimension 1, it is sometimes called an '''eigenline'''.''[http://books.google.com/books?id=pkESXAcIiCQC&pg=PA111 Schaum's Easy Outline of Linear Algebra]'', p. 111

The ''geometric multiplicity'' \gamma_T(\lambda) of an eigenvalue \lambda is the dimension of the eigenspace associated to \lambda, i.e., the number of [[linear independence|linearly independent]] eigenvectors with that eigenvalue. These eigenvectors can be chosen so that they are pairwise [[orthogonal]] and have unit length under any given [[inner product]] defined on \mathbb{V}. In other words, every eigenspace has an [[orthonormal basis]] of eigenvectors.

Conversely, any eigenvector with eigenvalue \lambda must be linearly independent from all eigenvectors that are associated with a different eigenvalue \lambda'. Therefore, a linear transformation T that operates on an n-[[dimension (mathematics)|dimensional space]] cannot have more than n distinct eigenvalues (or eigenspaces). For a proof of this lemma, see {{Harvnb|Roman|2008|loc=Theorem 8.2 on p. 186}}; {{Harvnb|Shilov|1977|loc=p. 109}}; {{Harvnb|Hefferon|2001|loc=p. 364}}; {{Harvnb|Beezer|2006|loc=Theorem EDELI on p. 469}}; and [[b:Famous_Theorems_of_Mathematics/Algebra/Linear_Transformations#Lemma_for_linear_independence_of_eigenvectors|Lemma for linear independence of eigenvectors]]

Any subspace spanned by eigenvectors of T is an [[invariant subspace]] of T.

The list of eigenvalues of T is sometimes called the '''spectrum''' of T. The order of this list is arbitrary, but the number of times that an eigenvalue \lambda appears is important.
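Concretely, for a square matrix A the eigenspace of \lambda is the null space of A - \lambda I, and its dimension is the geometric multiplicity. The following is only a minimal numerical sketch of that fact, assuming NumPy and SciPy are available; the diagonal matrix below is a made-up example, not one discussed elsewhere in this article.

<source lang="python">
import numpy as np
from scipy.linalg import null_space

# A made-up matrix with a repeated eigenvalue: lambda = 2 has a
# two-dimensional eigenspace.
A = np.array([[2.0, 0.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 5.0]])

lam = 2.0
# The eigenspace of lambda is the null space of (A - lambda*I);
# its dimension is the geometric multiplicity of lambda.
E = null_space(A - lam * np.eye(3))
print(E.shape[1])                     # geometric multiplicity: 2
print(np.allclose(A @ E, lam * E))    # every column satisfies A v = lambda v
</source>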
There is no unique way to choose a basis for an eigenspace of an abstract linear operator T based only on T itself, without some additional data such as a choice of coordinate basis for \mathbb{V}. Even for an eigenline the basis vector is indeterminate in both magnitude and orientation. If the scalar field \mathbb{K} is the real numbers \mathbb{R}, one can order the eigenspaces by their eigenvalues. Since the [[modulus (mathematics)|modulus]] |\lambda| of an eigenvalue is important in many applications, the eigenspaces are often ordered by that criterion.

===Eigenbasis===
An '''eigenbasis''' for a linear operator T that operates on a vector space \mathbb{V} is a basis for \mathbb{V} that consists entirely of eigenvectors of T (possibly with different eigenvalues). Such a basis may not exist.

Suppose \mathbb{V} has finite dimension n, and let \boldsymbol{\gamma}_T be the sum of the geometric multiplicities \gamma_T(\lambda_i) over all distinct eigenvalues \lambda_i of T. This integer is the maximum number of linearly independent eigenvectors of T, and therefore cannot exceed n. If \boldsymbol{\gamma}_T is exactly n, then T admits an eigenbasis; that is, there exists a basis for \mathbb{V} that consists of n eigenvectors. The matrix A that represents T relative to this basis is a diagonal matrix, whose diagonal elements are the eigenvalues associated to each basis vector. Conversely, if the sum \boldsymbol{\gamma}_T is less than n, then T admits no eigenbasis, and there is no choice of coordinates that will allow T to be represented by a diagonal matrix.

Note that \boldsymbol{\gamma}_T is at least equal to the number of ''distinct'' eigenvalues of T, but may be larger than that.{{Citation|first=Gilbert|last=Strang|title=Linear Algebra and Its Applications|edition=3rd|publisher=Harcourt|location= San Diego| year=1988}} For example, the identity operator I on \mathbb{V} has \boldsymbol{\gamma}_I = n, and any basis of \mathbb{V} is an eigenbasis of I; but its only eigenvalue is 1, with \gamma_I(1) = n.

==Eigenvalues and eigenvectors of matrices==

===Characteristic polynomial===
The eigenvalue equation for a matrix A is
: A v - \lambda v = 0,
which is equivalent to
: (A-\lambda I)v = 0,
where I is the n × n [[identity matrix]]. It is a fundamental result of linear algebra that an equation M v = 0 has a non-zero solution v if and only if the [[determinant]] \det(M) of the matrix M is zero. It follows that the eigenvalues of A are precisely the values of \lambda that satisfy the equation
: \det(A-\lambda I) = 0
The left-hand side of this equation can be seen (using [[Leibniz formula for determinants|Leibniz' rule]] for the determinant) to be a [[polynomial]] function of the variable \lambda. The [[degree of a polynomial|degree]] of this polynomial is n, the order of the matrix. Its [[coefficient]]s depend on the entries of A, except that its term of degree n is always (-1)^n\lambda^n. This polynomial is called the ''[[characteristic polynomial]]'' of A; and the above equation is called the ''characteristic equation'' (or, less often, the ''secular equation'') of A.
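For small matrices the coefficients of the characteristic polynomial can also be obtained numerically. The following is only a minimal sketch, assuming NumPy is available; the 2 × 2 matrix is a made-up example, not one used elsewhere in this article.

<source lang="python">
import numpy as np

# A small made-up symmetric matrix, for illustration only.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# np.poly(A) returns the coefficients of det(lambda*I - A), highest power first.
# This is the monic form of the characteristic polynomial; it differs from
# det(A - lambda*I) only by the factor (-1)^n, so it has the same roots.
coeffs = np.poly(A)          # array([ 1., -4.,  3.])  ->  lambda^2 - 4*lambda + 3
eigenvalues = np.roots(coeffs)
print(coeffs, eigenvalues)   # the roots 3 and 1 are the eigenvalues of A
</source>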
For example, let A be the matrix
:A = \begin{bmatrix} 2 & 0 & 0 \\ 0 & 3 & 4 \\ 0 & 4 & 9 \end{bmatrix}
The characteristic polynomial of A is
:\det (A-\lambda I) \;=\; \det \left(\begin{bmatrix} 2 & 0 & 0 \\ 0 & 3 & 4 \\ 0 & 4 & 9 \end{bmatrix} - \lambda \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}\right) \;=\; \det \begin{bmatrix} 2 - \lambda & 0 & 0 \\ 0 & 3 - \lambda & 4 \\ 0 & 4 & 9 - \lambda \end{bmatrix}
which is
: (2 - \lambda) \left( (3 - \lambda) (9 - \lambda) - 16 \right) = 22 - 35\lambda + 14\lambda^2 - \lambda^3
The roots of this polynomial are 2, 1, and 11. Indeed, these are the only three eigenvalues of A, corresponding to the eigenvectors [1,0,0]', [0,2,-1]', and [0,1,2]' (or any non-zero multiples thereof).

It follows that any n × n matrix has at most n eigenvalues. If the matrix has real entries, the coefficients of the characteristic polynomial are all real; but it may not have any real roots. For example, consider the 2 × 2 matrix
:R = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}
Multiplying R by any vector has the effect of [[rotation matrix|rotating]] it by 90 degrees; so this matrix has no eigenvectors with real entries. Indeed, its characteristic equation is \lambda^2 + 1 = 0, which has no real solutions. However, it has two ''complex'' solutions, namely \lambda = \mathbf{i} and \lambda = -\mathbf{i}. Indeed,
:\begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix} \begin{bmatrix} 1 \\ \mathbf{i} \end{bmatrix} = \begin{bmatrix} \mathbf{i} \\ -1 \end{bmatrix} = \mathbf{i} \begin{bmatrix} 1 \\ \mathbf{i} \end{bmatrix}
so the vector [1,\mathbf{i}]' is an eigenvector of R with eigenvalue \mathbf{i}.

In fact, the [[fundamental theorem of algebra]] implies that the characteristic polynomial of an n × n matrix A, being a polynomial of degree n, has exactly n complex [[root]]s. More precisely, it can be [[factorization|factored]] into the product of n linear terms,
: \det(A-\lambda I) = (\lambda_1 - \lambda )(\lambda_2 - \lambda)\cdots(\lambda_n - \lambda)
where each \lambda_i is a complex number. The numbers \lambda_1, \lambda_2, ... \lambda_n (which may not all be distinct) are the roots of the polynomial, and are precisely the eigenvalues of A.

If the entries of A are real numbers, the roots may still have non-zero imaginary parts (and the elements of the corresponding eigenvectors will therefore also have non-zero imaginary parts). Also, the eigenvalues may be [[irrational number]]s even if all the entries of A are [[rational number]]s, or all are integers. If the entries of A are [[algebraic number]]s, however, the eigenvalues will be algebraic numbers too.

The non-real roots of a polynomial with real coefficients can be grouped into pairs of [[complex conjugate]] values, namely with the two members of each pair having the same real part and imaginary parts that differ only in sign. If the degree is odd, at least one of the roots will be real. Therefore, any real matrix with odd order will have at least one real eigenvalue; whereas a real matrix with even order may have no real eigenvalues.

===Algebraic multiplicities===
Let \lambda_i be an eigenvalue of an n × n matrix A. The ''algebraic multiplicity'' \mu_A(\lambda_i) of \lambda_i is its [[multiplicity (mathematics)#Multiplicity of a root of a polynomial|multiplicity as a root]] of the characteristic polynomial, that is, the largest integer k such that (\lambda - \lambda_i)^k [[polynomial division|divides evenly]] that polynomial.
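Algebraic multiplicities can be read off from an exact factorization of the characteristic polynomial. The following is only a small symbolic sketch, assuming the SymPy library and a made-up 2 × 2 matrix chosen because its characteristic polynomial is (\lambda - 2)^2:

<source lang="python">
from sympy import Matrix

# Made-up example: characteristic polynomial (lambda - 2)^2, so the
# eigenvalue 2 has algebraic multiplicity 2.
A = Matrix([[2, 1],
            [0, 2]])

# eigenvals() returns a dictionary {eigenvalue: algebraic multiplicity}.
print(A.eigenvals())        # {2: 2}
# eigenvects() returns (eigenvalue, algebraic multiplicity, eigenvectors);
# the single eigenvector shows that the geometric multiplicity is only 1.
print(A.eigenvects())       # [(2, 2, [Matrix([[1], [0]])])]
</source>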
Like the geometric multiplicity \gamma_A(\lambda_i), the algebraic multiplicity is an integer between 1 and n; and the sum \boldsymbol{\mu}_A of \mu_A(\lambda_i) over all ''distinct'' eigenvalues also cannot exceed n. If complex eigenvalues are considered, \boldsymbol{\mu}_A is exactly n.

It can be proved that the geometric multiplicity \gamma_A(\lambda_i) of an eigenvalue never exceeds its algebraic multiplicity \mu_A(\lambda_i). Therefore, \boldsymbol{\gamma}_A is at most \boldsymbol{\mu}_A.

===Diagonalization and eigendecomposition===
If the sum \boldsymbol{\gamma}_A of the geometric multiplicities of all eigenvalues is exactly n, then A has a set of n linearly independent eigenvectors. Let Q be a square matrix whose columns are those eigenvectors, in any order. Then we will have A Q = Q\Lambda , where \Lambda is the diagonal matrix such that \Lambda_{i i} is the eigenvalue associated to column i of Q. Since the columns of Q are linearly independent, the matrix Q is invertible. Multiplying both sides on the left by Q^{-1}, we get Q^{-1}A Q = \Lambda. By definition, therefore, the matrix A is [[diagonalizable matrix|diagonalizable]].

Conversely, if A is diagonalizable, let Q be a non-singular square matrix such that Q^{-1} A Q is some diagonal matrix D. Multiplying both sides on the left by Q, we get A Q = Q D . Therefore, each column of Q must be an eigenvector of A, whose eigenvalue is the corresponding element on the diagonal of D. Since the columns of Q must be linearly independent, it follows that \boldsymbol{\gamma}_A = n. Thus \boldsymbol{\gamma}_A is equal to n if and only if A is diagonalizable.

If A is diagonalizable, the space of all n-element vectors can be decomposed into the direct sum of the eigenspaces of A. This decomposition is called the [[eigendecomposition of a matrix|eigendecomposition]] of A, and it is preserved under changes of coordinates.

A matrix that is not diagonalizable is said to be [[defective matrix|defective]]. For defective matrices, the notion of eigenvector can be generalized to [[generalized eigenvector]]s, and that of diagonal matrix to a [[Jordan form]] matrix. Over an algebraically closed field, any matrix A has a [[Jordan form]] and therefore admits a basis of generalized eigenvectors, and a decomposition into [[generalized eigenspace]]s.

===Further properties===
Let A be an arbitrary n × n matrix of complex numbers with eigenvalues \lambda_1, \lambda_2, ... \lambda_n. (Here it is understood that an eigenvalue with algebraic multiplicity \mu occurs \mu times in this list.) Then
* The [[trace (linear algebra)|trace]] of A, defined as the sum of its diagonal elements, is also the sum of all eigenvalues:
:\operatorname{tr}(A) = \sum_{i=1}^n A_{i i} = \sum_{i=1}^n \lambda_i = \lambda_1+ \lambda_2 +\cdots+ \lambda_n.
* The [[determinant]] of A is the product of all eigenvalues:
:\operatorname{det}(A) = \prod_{i=1}^n \lambda_i=\lambda_1\lambda_2\cdots\lambda_n.
* The eigenvalues of the kth power A^k of A, for any positive integer k, are \lambda_1^k,\lambda_2^k,\dots,\lambda_n^k.
* The matrix A is invertible if and only if all the eigenvalues \lambda_i are nonzero.
* If A is invertible, then the eigenvalues of A^{-1} are 1/\lambda_1,1/\lambda_2,\dots,1/\lambda_n.
* If A = L U is the [[LU decomposition|Gaussian decomposition]] of A into [[triangular matrix|triangular matrices]], with L lower triangular, U upper triangular, and U_{i i} = 1 for all i, then the product of the diagonal elements of L equals the determinant of A, and hence the product \lambda_1 \lambda_2 \cdots \lambda_n of the eigenvalues.
* If A is equal to its [[conjugate transpose]] A^* (in other words, if A is [[Hermitian matrix|Hermitian]]), then every eigenvalue is real. The same is true of any [[symmetric matrix|symmetric]] real matrix. If A is also [[Positive-definite matrix|positive-definite]], positive-semidefinite, negative-definite, or negative-semidefinite, then every eigenvalue is positive, non-negative, negative, or non-positive, respectively.
* Every eigenvalue of a [[unitary matrix]] has absolute value |\lambda|=1.

=== Left and right eigenvectors ===
The use of matrices with a single column (rather than a single row) to represent vectors is traditional in many disciplines. For that reason, the word "eigenvector" almost always means a '''right eigenvector''', namely a ''column'' vector that must be placed to the ''right'' of the matrix A in the defining equation
:A v = \lambda v.
There may also be single-''row'' vectors that are unchanged when they occur on the ''left'' side of a product with a square matrix A; that is, which satisfy the equation
:u A = \lambda u.
Any such row vector u is called a '''left eigenvector''' of A.

The left eigenvectors of A are transposes of the right eigenvectors of the transposed matrix A', since their defining equation is equivalent to
:A' u' = \lambda u'.
It follows that, if A is Hermitian, its left eigenvectors are the [[complex conjugate vector space|complex conjugate]] transposes of its right eigenvectors. In particular, if A is a real symmetric matrix, they are the same except for transposition.

==Infinite-dimensional spaces==
{{details|Spectral theorem}}
The definition of eigenvalue of a linear transformation T remains valid even if the underlying space \mathbb{V} is an infinite dimensional [[Hilbert space|Hilbert]] or [[Banach space]]. Namely, a scalar \lambda is an eigenvalue if and only if there is some nonzero vector v such that T(v) = \lambda v.

===Spectral theory===
If \lambda is an eigenvalue of T, then the operator T - \lambda I is not one-to-one, and therefore its inverse (T - \lambda I)^{-1} is not defined. However, the converse statement is not true: the operator T - \lambda I may not have an inverse, even if \lambda is not an eigenvalue.

For example, consider the Hilbert space \ell^2(\mathbb{Z}), which consists of all [[bi-infinite sequence]]s of real numbers
:v = (\ldots, v_{-2},v_{-1},v_0,v_1,v_2,\ldots)
that have a finite sum of squares \sum_{i=-\infty}^{+\infty} v_i^2. The [[bilateral shift]] operator T simply displaces every element of the sequence by one position; namely if u = T(v) then u_i = v_{i-1} for every integer i. The eigenvalue equation T(v) = \lambda v has no solution in this space, since it implies that all the values v_i have the same absolute value (if |\lambda| = 1) or that their absolute values form a geometric progression that is unbounded in one direction (if |\lambda| \neq 1); either way, the sum of their squares would not be finite. However, the operator T - \lambda I is not invertible if |\lambda| = 1. For example, the sequence u such that u_i = 1/(|i|+1) is in \ell^2(\mathbb{Z}); but there is no sequence v in \ell^2(\mathbb{Z}) such that (T - I)v = u (that is, v_{i-1} = u_i + v_i for all i).

For this reason, in [[functional analysis]] one defines the [[spectrum (functional analysis)|spectrum]] of a linear operator T as the set of all scalars \lambda for which (T - \lambda I)^{-1} is not defined; that is, such that the operator T - \lambda I has no [[bounded operator|bounded]] inverse. Thus the spectrum of an operator always contains all its eigenvalues, but is not limited to them.
In infinite-dimensional spaces, the spectrum of a [[bounded operator]] is always nonempty. This is also true for an unbounded [[self adjoint operator]]. Via its [[spectral measure]]s, one can define a [[decomposition of spectrum (functional analysis)|decomposition of the spectrum]] of any self adjoint operator, bounded or otherwise, into absolutely continuous, pure point, and singular parts. The [[hydrogen atom]] provides an example of this decomposition. The eigenfunctions of the [[molecular Hamiltonian|hydrogen atom Hamiltonian]] are called '''eigenstates''' and are grouped into two categories. The [[bound state]]s of the hydrogen atom correspond to the discrete part of the spectrum (they have a discrete set of eigenvalues that can be computed by the [[Rydberg formula]]) while the [[ionization]] processes are described by the continuous part (the energy of the collision/ionization is not quantized).

===Eigenfunctions===
{{Main|Eigenfunction}}
A widely used class of linear operators acting on infinite dimensional spaces is formed by the [[differential operator]]s on [[function space]]s. As an example, on the space \mathbf{C^\infty} of infinitely [[derivative|differentiable]] real functions of a real argument t, the process of differentiation is a linear operator since
: \displaystyle\frac{d}{dt}(af+bg) = a \frac{df}{dt} + b \frac{dg}{dt},
for any functions f and g in \mathbf{C^\infty}, and any real numbers a and b.

The eigenvalue equation for a linear differential operator D in \mathbf{C^\infty} is then a [[differential equation]]
:D f = \lambda f.
The functions that satisfy this equation are commonly called '''eigenfunctions'''. For the derivative operator d/dt, an eigenfunction is a function that, when differentiated, yields a constant times the original function. That is,
: \displaystyle\frac{d}{dt} f(t) = \lambda f(t)
for all t. This equation can be solved for any value of \lambda. If \lambda is zero, the general solution is
: f(t) = C,\,
where C is any constant. If \lambda is non-zero, the solution is an [[exponential function]]
: f(t) = Ae^{\lambda t}.\

The derivative operator is defined also for complex-valued functions of a complex argument. In the complex version of the space \mathbf{C^\infty}, the eigenvalue equation has a solution for any complex constant \lambda. The spectrum of the operator d/dt is therefore the whole [[complex plane]]. This is an example of a [[continuous spectrum]].

====Example: waves on a string====
[[File:Standing wave.gif|thumb|270px|The shape of a standing wave in a string fixed at its boundaries is an example of an eigenfunction of a differential operator. The admissible eigenvalues are governed by the length of the string and determine the frequency of oscillation.]]
The displacement, h(x,t), of a stressed elastic string fixed at both ends, like the [[vibrating string]]s of a [[string instrument]], satisfies the [[wave equation]]
: \frac{\partial^2 h}{\partial t^2} = c^2\frac{\partial^2 h}{\partial x^2},
which is a linear [[partial differential equation]], where c is the constant wave speed. The normal method of solving such an equation is [[separation of variables]]. If we assume that h(x,t) can be written as a product of the form X(x)T(t), we can form a pair of ordinary differential equations:
: \frac{d^2}{dx^2}X=-\frac{\omega^2}{c^2}X\quad\quad\quad and \quad\quad\quad\frac{d^2}{dt^2}T=-\omega^2 T.\
Each of these is an eigenvalue equation, for eigenvalues -\omega^2/c^2 and -\omega^2, respectively.
For any values of \omega and c, the equations are satisfied by the functions
: X(x) = \sin \left(\frac{\omega x}{c} + \phi \right)\quad\quad\quad and \quad\quad\quad T(t) = \sin(\omega t + \psi),\
where \phi and \psi are arbitrary real constants. If we impose boundary conditions (that the ends of the string are fixed with X(x) = 0 at x = 0 and x = L, for example), we can constrain the eigenvalues. For those [[Boundary value problem|boundary conditions]], we find
: \sin(\phi) = 0\ ,
and so the phase angle \phi=0\ and
: \sin\left(\frac{\omega L}{c}\right) = 0.\
Thus, the constant \omega is constrained to take one of the values \omega_n = n c\pi/L, where n is any integer. Therefore, the clamped string supports a family of standing waves of the form
: h(x,t) = \sin(n\pi x/L)\sin(\omega_n t).\
From the point of view of our musical instrument, the frequency \omega_n\ is the frequency of the nth [[harmonic]], which is called the (n-1)th [[overtone]].

===Associative algebras and representation theory===
{{main|Representation theory|Weight (representation theory)}}
More algebraically, rather than generalizing the vector space to an infinite dimensional space, one can generalize the algebraic object that is acting on the space, replacing a single operator acting on a vector space with an [[algebra representation]] – an [[associative algebra]] acting on a module. The study of such actions is the field of [[representation theory]].

To understand these representations, one breaks them into [[indecomposable representation]]s, and, if possible, into [[irreducible representation]]s; these correspond respectively to generalized eigenspaces and eigenspaces, or rather the indecomposable and irreducible components of these. While a single operator on a vector space can be understood in terms of eigenvectors – 1-dimensional invariant subspaces – in general in representation theory the building blocks (the irreducible representations) are higher-dimensional. A closer analog of eigenvalues is given by the notion of a ''[[Weight (representation theory)|weight]],'' with the analogs of eigenvectors and eigenspaces being ''weight vectors'' and ''weight spaces.''

For an associative algebra \mathcal{A} over a field \mathbb{F}, the analog of an eigenvalue is a one-dimensional representation \lambda \colon \mathcal{A} \to \mathbb{F} (a map of algebras; a [[linear functional]] that is also multiplicative), called the ''weight,'' rather than a single scalar. A map of algebras is used because if a vector is an eigenvector for two elements of an algebra, then it is also an eigenvector for any linear combination of these, and the eigenvalue is the corresponding linear combination of the eigenvalues, and likewise for multiplication. This is related to the classical eigenvalue as follows: a single operator T corresponds to the algebra \mathbb{F}[T] (the polynomials in T), and a map of algebras \mathbb{F}[T] \to \mathbb{F} is determined by its value on the generator T; this value is the eigenvalue. A vector v on which the algebra acts by this weight (i.e., by scalar multiplication, with the scalar determined by the weight) is called a ''weight vector,'' and other concepts generalize similarly. The generalization of a diagonalizable matrix (having an eigenbasis) is a ''[[weight module]]''.
Because a weight is a map to a field, which is commutative, the map factors through the abelianization of the algebra \mathcal{A} – equivalently, it vanishes on the [[derived algebra]] – in terms of matrices, if v is a common eigenvector of operators T and U, then T U v = U T v (because in both cases it is just multiplication by scalars), so common eigenvectors of an algebra must be in the set on which the algebra acts commutatively (which is annihilated by the derived algebra). Thus of central interest are the free commutative algebras, namely the [[polynomial algebra]]s. In this particularly simple and important case of the polynomial algebra \mathbb{F}[T_1,\dots,T_k] in a set of commuting matrices, a weight vector of this algebra is a [[simultaneous eigenvector]] of the matrices, while a weight of this algebra is simply a k-tuple of scalars \lambda = (\lambda_1,\dots,\lambda_k) corresponding to the eigenvalue of each matrix, and hence geometrically to a point in k-space. These weights – in particular their geometry – are of central importance in understanding the [[representation theory of Lie algebras]], specifically the [[Lie algebra representation#Finite-dimensional representations of semisimple Lie algebras|finite-dimensional representations of semisimple Lie algebras]].

As an application of this geometry, given an algebra that is a quotient of a polynomial algebra on k generators, it corresponds geometrically to an [[algebraic variety]] in k-dimensional space, and the weight must fall on the variety – i.e., it satisfies defining equations for the variety. This generalizes the fact that eigenvalues satisfy the characteristic polynomial of a matrix in one variable.

==Calculation==

===Computing the eigenvalues===
The eigenvalues of a matrix A can be determined by finding the roots of the characteristic polynomial. Explicit [[algebraic solution|algebraic formulas]] for the roots of a polynomial exist only if the degree n is 4 or less. According to the [[Abel–Ruffini theorem]] there is no general, explicit and exact algebraic formula for the roots of a polynomial with degree 5 or more. It turns out that any polynomial with degree n is the characteristic polynomial of some [[companion matrix]] of order n. Therefore, for matrices of order 5 or more, the eigenvalues and eigenvectors cannot be obtained by an explicit algebraic formula, and must therefore be computed by approximate [[numerical method]]s.

In theory, the coefficients of the characteristic polynomial can be computed exactly, since they are sums of products of matrix elements; and there are algorithms that can find all the roots of a polynomial of arbitrary degree to any required [[accuracy]]. However, this approach is not viable in practice because the coefficients would be contaminated by unavoidable [[round-off error]]s, and the roots of a polynomial can be an extremely sensitive function of the coefficients (as exemplified by [[Wilkinson's polynomial]]).{{Citation|first1=Lloyd N. |last1=Trefethen |first2= David|last2= Bau|title=Numerical Linear Algebra|publisher=SIAM|year=1997}}

Efficient, accurate methods to compute eigenvalues and eigenvectors of arbitrary matrices were not known until the advent of the [[QR algorithm]] in 1961.
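In everyday practice one therefore calls a library routine that implements such iterative methods. The following is only a minimal sketch, assuming NumPy is available (its eig routine wraps LAPACK solvers based on Hessenberg reduction followed by QR-type iteration); it uses the same 2 × 2 matrix that reappears in the eigenvector example below.

<source lang="python">
import numpy as np

A = np.array([[4.0, 1.0],
              [6.0, 3.0]])

# Returns the eigenvalues and a matrix whose columns are the eigenvectors.
eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)          # 6 and 1, in some order

# Verify A v = lambda v for each eigenpair.
for lam, v in zip(eigenvalues, eigenvectors.T):
    print(np.allclose(A @ v, lam * v))   # True, True
</source>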
Combining the [[Householder transformation]] with the LU decomposition results in an algorithm with better convergence than the QR algorithm.[http://l1032265.myweb.hinet.net/xeigenval.htm LU Householder Transformation] For large [[Hermitian matrix|Hermitian]] [[sparse matrix|sparse matrices]], the [[Lanczos algorithm]] is one example of an efficient [[iterative method]] to compute eigenvalues and eigenvectors, among several other possibilities.

===Computing the eigenvectors===
Once the (exact) value of an eigenvalue is known, the corresponding eigenvectors can be found by finding non-zero solutions of the eigenvalue equation, which becomes a [[linear system|system of linear equations]] with known coefficients. For example, once it is known that 6 is an eigenvalue of the matrix
:A = \begin{bmatrix} 4 & 1\\6 & 3 \end{bmatrix}
we can find its eigenvectors by solving the equation A v = 6 v, that is
:\begin{bmatrix} 4 & 1\\6 & 3 \end{bmatrix}\begin{bmatrix}x\\y\end{bmatrix} = 6 \cdot \begin{bmatrix}x\\y\end{bmatrix}
This matrix equation is equivalent to two [[linear equation]]s
: \left\{\begin{matrix} 4x + \phantom{1}y &{}= 6x\\6x + 3y &{}=6 y\end{matrix}\right. \quad\quad\quad that is \left\{\begin{matrix} -2x+\phantom{1}y &{}=0\\+6x-3y &{}=0\end{matrix}\right.
Both equations reduce to the single linear equation y=2x. Therefore, any vector of the form [a,2a]', for any non-zero real number a, is an eigenvector of A with eigenvalue \lambda = 6.

The matrix A above has another eigenvalue \lambda=1. A similar calculation shows that the corresponding eigenvectors are the non-zero solutions of 3x+y=0, that is, any vector of the form [b,-3b]', for any non-zero real number b.

Some numerical methods that compute the eigenvalues of a matrix also determine a set of corresponding eigenvectors as a by-product of the computation.

==History==
Eigenvalues are often introduced in the context of [[linear algebra]] or [[matrix (mathematics)|matrix theory]]. Historically, however, they arose in the study of [[quadratic form]]s and [[differential equation]]s.

[[Leonhard Euler|Euler]] studied the rotational motion of a [[rigid body]] and discovered the importance of the [[Principal axis (mechanics)|principal axes]]. [[Lagrange]] realized that the principal axes are the eigenvectors of the inertia matrix. See {{Harvnb|Hawkins|1975|loc=§2}} In the early 19th century, [[Augustin Louis Cauchy|Cauchy]] saw how their work could be used to classify the [[quadric surface]]s, and generalized it to arbitrary dimensions. See {{Harvnb|Hawkins|1975|loc=§3}} Cauchy also coined the term ''racine caractéristique'' (characteristic root) for what is now called ''eigenvalue''; his term survives in ''[[Characteristic polynomial#Characteristic equation|characteristic equation]]''. See {{Harvnb|Kline|1972|loc=pp. 807–808}}

[[Joseph Fourier|Fourier]] used the work of Laplace and Lagrange to solve the [[heat equation]] by [[separation of variables]] in his famous 1822 book ''[[Théorie analytique de la chaleur]]''. See {{Harvnb|Kline|1972|loc=p. 673}} [[Jacques Charles François Sturm|Sturm]] developed Fourier's ideas further and brought them to the attention of Cauchy, who combined them with his own ideas and arrived at the fact that real symmetric matrices have real eigenvalues. This was extended by [[Charles Hermite|Hermite]] in 1855 to what are now called [[Hermitian matrix|Hermitian matrices]].
Around the same time, [[Francesco Brioschi|Brioschi]] proved that the eigenvalues of [[orthogonal matrix|orthogonal matrices]] lie on the [[unit circle]], and [[Alfred Clebsch|Clebsch]] found the corresponding result for [[skew-symmetric matrix|skew-symmetric matrices]]. Finally, [[Karl Weierstrass|Weierstrass]] clarified an important aspect in the [[stability theory]] started by Laplace by realizing that [[defective matrix|defective matrices]] can cause instability.

In the meantime, [[Joseph Liouville|Liouville]] studied eigenvalue problems similar to those of Sturm; the discipline that grew out of their work is now called ''[[Sturm–Liouville theory]]''. See {{Harvnb|Kline|1972|loc=pp. 715–716}} [[Hermann Schwarz|Schwarz]] studied the first eigenvalue of [[Laplace's equation]] on general domains towards the end of the 19th century, while [[Henri Poincaré|Poincaré]] studied [[Poisson's equation]] a few years later. See {{Harvnb|Kline|1972|loc=pp. 706–707}}

At the start of the 20th century, [[David Hilbert|Hilbert]] studied the eigenvalues of [[integral operator]]s by viewing the operators as infinite matrices. See {{Harvnb|Kline|1972|loc=p. 1063}} He was the first to use the [[German language|German]] word ''eigen'' to denote eigenvalues and eigenvectors in 1904, though he may have been following a related usage by [[Helmholtz]]. For some time, the standard term in English was "proper value", but the more distinctive term "eigenvalue" is standard today. See {{Harvnb|Aldrich|2006}}

The first numerical algorithm for computing eigenvalues and eigenvectors appeared in 1929, when [[Richard Edler von Mises|Von Mises]] published the [[power method]]. One of the most popular methods today, the [[QR algorithm]], was proposed independently by [[John G.F. Francis]]{{Citation|first=J. G. F. |last=Francis|title=The QR Transformation, I (part 1)|journal=The Computer Journal|volume= 4|issue= 3|pages =265–271 |year=1961|doi=10.1093/comjnl/4.3.265}}{{Citation|doi=10.1093/comjnl/4.4.332|first=J. G. F. |last=Francis|title=The QR Transformation, II (part 2)|journal=The Computer Journal|volume=4|issue= 4| pages= 332–345|year=1962}} and [[Vera Kublanovskaya]]{{Citation|first=Vera N. |last=Kublanovskaya|title=On some algorithms for the solution of the complete eigenvalue problem|journal=USSR Computational Mathematics and Mathematical Physics|volume= 3| pages= 637–657 |year=1961}}. Also published in: {{Citation|journal=Zhurnal Vychislitel'noi Matematiki i Matematicheskoi Fiziki|volume=1|issue=4| pages =555–570 |year=1961}} in 1961. See {{Harvnb|Golub|van Loan|1996|loc=§7.3}}; {{Harvnb|Meyer|2000|loc=§7.3}}

==Applications==

===Eigenvalues of geometric transformations===
The following table presents some example transformations in the plane along with their 2 × 2 matrices, eigenvalues, and eigenvectors.
{| class="wikitable" style="text-align:center; margin:1em auto 1em auto;"
|-
|
| [[Scaling (geometry)|scaling]]
| unequal scaling
| [[Rotation (geometry)|rotation]]
| [[Shear mapping|horizontal shear]]
| [[hyperbolic rotation]]
|-
| illustration
| [[File:Homothety in two dim.svg|100px|Equal scaling ([[homothety]])]]
| [[File:Unequal scaling.svg|100px|Vertical shrink (k_2 < 1) and horizontal stretch (k_1 > 1) of a unit square.]]
| [[File:Rot placeholder.svg|100px|Rotation by 30 degrees]]
| [[File:Shear.svg|100px|center|Horizontal shear mapping]]
| [[File:Squeeze r=1.5.svg|100px|e^t = \frac 3 2]]
|-
| matrix
| \begin{bmatrix}k & 0\\0 & k\end{bmatrix}
| \begin{bmatrix}k_1 & 0\\0 & k_2\end{bmatrix}
| \begin{bmatrix}c & -s \\ s & c\end{bmatrix} <br /> c=\cos\theta <br /> s=\sin\theta
| \begin{bmatrix}1 & k\\ 0 & 1\end{bmatrix}
| \begin{bmatrix} c & s \\ s & c \end{bmatrix} <br /> c=\cosh \varphi <br /> s=\sinh \varphi
|-
| characteristic polynomial
| \ (\lambda - k)^2
| (\lambda - k_1)(\lambda - k_2)
| \lambda^2 - 2c\lambda + 1
| (1 - \lambda)^2
| \lambda^2 - 2c\lambda + 1
|-
| eigenvalues \lambda_i
| \lambda_1 = \lambda_2 = k
| \lambda_1 = k_1 <br /> \lambda_2 = k_2
| \lambda_1 = e^{\mathbf{i}\theta}=c+s\mathbf{i} <br /> \lambda_2 = e^{-\mathbf{i}\theta}=c-s\mathbf{i}
| \lambda_1 = \lambda_2 = 1
| \lambda_1 = e^\varphi <br /> \lambda_2 = e^{-\varphi}
|-
| algebraic multipl. \mu_i=\mu(\lambda_i)
| \mu_1 = 2
| \mu_1 = 1 <br /> \mu_2 = 1
| \mu_1 = 1 <br /> \mu_2 = 1
| \mu_1 = 2
| \mu_1 = 1 <br /> \mu_2 = 1
|-
| geometric multipl. \gamma_i = \gamma(\lambda_i)
| \gamma_1 = 2
| \gamma_1 = 1 <br /> \gamma_2 = 1
| \gamma_1 = 1 <br /> \gamma_2 = 1
| \gamma_1 = 1
| \gamma_1 = 1 <br /> \gamma_2 = 1
|-
| eigenvectors
| All non-zero vectors
| u_1 = \begin{bmatrix}1\\0\end{bmatrix} <br /> u_2 = \begin{bmatrix}0\\1\end{bmatrix}
| u_1 = \begin{bmatrix}\phantom{+}1\\-\mathbf{i}\end{bmatrix} <br /> u_2 = \begin{bmatrix}\phantom{+}1\\ +\mathbf{i}\end{bmatrix}
| u_1 = \begin{bmatrix}1\\0\end{bmatrix}
| u_1 = \begin{bmatrix}\phantom{+}1\\\phantom{+}1\end{bmatrix} <br /> u_2 = \begin{bmatrix}\phantom{+}1\\-1\end{bmatrix}
|}

===Geometric description===

====Shear====
Shear in the plane is a transformation where all points along a given line remain fixed while other points are shifted parallel to that line by a distance proportional to their perpendicular distance from the line. (Definition according to Weisstein, Eric W. [http://mathworld.wolfram.com/Shear.html Shear] From MathWorld − A Wolfram Web Resource.) In the horizontal shear depicted above, a point P of the plane moves parallel to the x-axis to the place P' so that its coordinate y does not change while the x coordinate increments to become x' = x + k y, where k is called the shear factor, which is the [[cotangent]] of the shear angle \varphi. Repeatedly applying the shear transformation changes the direction of any vector in the plane closer and closer to the direction of the eigenvector.

====Uniform scaling and reflection====
Multiplying every vector with a constant real number k is represented by the [[diagonal matrix]] whose entries on the diagonal are all equal to k. Mechanically, this corresponds to stretching a rubber sheet equally in all directions such as a small area of the surface of an inflating balloon. All vectors originating at the [[origin (mathematics)|origin]] (i.e., the fixed point on the balloon surface) are stretched equally with the same scaling factor k while preserving their original direction. Thus, every non-zero vector is an eigenvector with eigenvalue k. Whether the transformation is stretching (elongation, extension, inflation), or shrinking (compression, deflation) depends on the scaling factor: if k > 1, it is stretching; if 0 < k < 1, it is shrinking. Negative values of k correspond to a reversal of direction, followed by a stretch or a shrink, depending on the absolute value of k.

====Unequal scaling====
For a slightly more complicated example, consider a sheet that is stretched unequally in two perpendicular directions along the coordinate axes, or, similarly, stretched in one direction, and shrunk in the other direction. In this case, there are two different scaling factors: k_1 for the scaling in direction x, and k_2 for the scaling in direction y. If a given eigenvalue is greater than 1, the vectors are stretched in the direction of the corresponding eigenvector; if less than 1, they are shrunk in that direction. Negative eigenvalues correspond to reflections followed by a stretch or shrink. In general, matrices that are [[diagonalizable]] over the real numbers represent scalings and reflections: the eigenvalues represent the scaling factors (and appear as the diagonal terms), and the eigenvectors are the directions of the scalings.

The figure shows the case where k_1 > 1 and 0 < k_2 < 1. The rubber sheet is stretched along the x axis and simultaneously shrunk along the y axis. After repeatedly applying this transformation of stretching/shrinking many times, almost any vector on the surface of the rubber sheet will be oriented closer and closer to the direction of the x axis (the direction of stretching). The exceptions are vectors along the y axis, which will gradually shrink away to nothing.

====Hyperbolic rotation====
The eigenvalues e^\varphi and e^{-\varphi} are [[multiplicative inverse]]s of each other.

====Rotation====
A [[Rotation (mathematics)|rotation]] in a [[Euclidean plane|plane]] is a transformation that describes motion of a vector, plane, coordinates, etc., around a fixed point. A rotation by any integer number of full turns (0°, 360°, 720°, etc.)
is just the identity transformation (a uniform scaling by +1), while a rotation by an odd number of half-turns (180°, 540°, etc.) is a [[point reflection]] (uniform scaling by -1). Clearly, except for these special cases, every non-zero vector in the real plane will have its direction changed, and thus there cannot be any real eigenvectors. Indeed, the characteristic equation is a [[quadratic equation]] with [[discriminant]] D = -4 (\sin\theta)^2, which is a negative number whenever \theta is not an integer multiple of 180°. Therefore, the two eigenvalues are complex numbers, \cos\theta \pm \mathbf{i}\sin\theta; and all eigenvectors have non-real entries.

===Schrödinger equation===
[[File:HAtomOrbitals.png|thumb|271px|The [[wavefunction]]s associated with the [[bound state]]s of an [[electron]] in a [[hydrogen atom]] can be seen as the eigenvectors of the [[hydrogen atom|hydrogen atom Hamiltonian]] as well as of the [[angular momentum operator]]. They are associated with eigenvalues interpreted as their energies (increasing downward: n = 1, 2, 3, … ) and [[angular momentum]] (increasing across: s, p, d, ...). The illustration shows the square of the absolute value of the wavefunctions. Brighter areas correspond to higher [[probability density function|probability density]] for a position [[measurement in quantum mechanics|measurement]]. The center of each figure is the [[atomic nucleus]], a [[proton]].]]
An example of an eigenvalue equation where the transformation T is represented in terms of a differential operator is the time-independent [[Schrödinger equation]] in [[quantum mechanics]]:
: H\psi_E = E\psi_E \,
where H, the [[Hamiltonian (quantum mechanics)|Hamiltonian]], is a second-order [[differential operator]] and \psi_E, the [[wavefunction]], is one of its eigenfunctions corresponding to the eigenvalue E, interpreted as its [[energy]].

However, in the case where one is interested only in the [[bound state]] solutions of the Schrödinger equation, one looks for \psi_E within the space of [[Square-integrable function|square integrable]] functions. Since this space is a [[Hilbert space]] with a well-defined [[scalar product]], one can introduce a [[Basis (linear algebra)|basis set]] in which \psi_E and H can be represented as a one-dimensional array and a matrix, respectively. This allows one to represent the Schrödinger equation in a matrix form.

[[Bra-ket notation]] is often used in this context. A vector in the Hilbert space of square integrable functions, representing a state of the system, is written |\Psi_E\rangle. In this notation, the Schrödinger equation is:
: H|\Psi_E\rangle = E|\Psi_E\rangle
where |\Psi_E\rangle is an '''eigenstate''' of H. The Hamiltonian H is a [[self adjoint operator]], the infinite dimensional analog of Hermitian matrices (''see [[Observable]]''). As in the matrix case, in the equation above H|\Psi_E\rangle is understood to be the vector obtained by application of the transformation H to |\Psi_E\rangle.

===Molecular orbitals===
In [[quantum mechanics]], and in particular in [[atomic physics|atomic]] and [[molecular physics]], within the [[Hartree–Fock]] theory, the [[atomic orbital|atomic]] and [[molecular orbital]]s can be defined by the eigenvectors of the [[Fock operator]]. The corresponding eigenvalues are interpreted as [[ionization potential]]s via [[Koopmans' theorem]]. In this case, the term eigenvector is used in a somewhat more general meaning, since the Fock operator is explicitly dependent on the orbitals and their eigenvalues.
If one wants to underline this aspect, one speaks of a nonlinear eigenvalue problem. Such equations are usually solved by an [[iteration]] procedure, called in this case the [[self-consistent field]] method. In [[quantum chemistry]], one often represents the Hartree–Fock equation in a non-[[orthogonal]] [[basis set (chemistry)|basis set]]. This particular representation is a [[generalized eigenvalue problem]] called [[Roothaan equations]].

===Geology and glaciology===
In [[geology]], especially in the study of [[glacial till]], eigenvectors and eigenvalues are used as a method by which a mass of information about the orientation and dip of a clast fabric's constituents can be summarized in a 3-D space by six numbers. In the field, a geologist may collect such data for hundreds or thousands of [[clasts]] in a soil sample, which can only be compared graphically, such as in a Tri-Plot (Sneed and Folk) diagram,{{Citation|doi=10.1002/1096-9837(200012)25:13<1473::AID-ESP158>3.0.CO;2-C|last1=Graham|first1=D.|last2=Midgley|first2= N.|title=Graphical representation of particle shape using triangular diagrams: an Excel spreadsheet method|year= 2000|journal= Earth Surface Processes and Landforms |volume=25|pages=1473–1477|issue=13}}{{Citation|doi=10.1086/626490|last1=Sneed|first1= E. D.|last2=Folk|first2= R. L.|year= 1958|title=Pebbles in the lower Colorado River, Texas, a study of particle morphogenesis|journal= Journal of Geology|volume= 66|issue=2|pages=114–150}} or as a Stereonet on a Wulff Net.{{Citation |doi=10.1016/S0098-3004(97)00122-2 |last1=Knox-Robinson |year=1998 |first1=C |pages=243 |volume=24 |journal=Computers & Geosciences|title= GIS-stereoplot: an interactive stereonet plotting module for ArcView 3.0 geographic information system |issue=3 |last2=Gardoll |first2=Stephen J}}

The output for the orientation tensor is in the three orthogonal (perpendicular) axes of space. The three eigenvectors are ordered v_1, v_2, v_3 by their eigenvalues E_1 \geq E_2 \geq E_3;[http://www.ruhr-uni-bochum.de/hardrock/downloads.htm Stereo32 software] v_1 then is the primary orientation/dip of the clasts, v_2 is the secondary, and v_3 is the tertiary, in terms of strength. The clast orientation is defined as the direction of the eigenvector, on a [[compass rose]] of [[turn (geometry)|360°]]. Dip is measured as the eigenvalue, the modulus of the tensor: this is valued from 0° (no dip) to 90° (vertical). The relative values of E_1, E_2, and E_3 are dictated by the nature of the sediment's fabric. If E_1 = E_2 = E_3, the fabric is said to be isotropic. If E_1 = E_2 > E_3, the fabric is said to be planar. If E_1 > E_2 > E_3, the fabric is said to be linear.{{Citation|last1=Benn|first1= D.|last2=Evans|first2=D.|year=2004|title= A Practical Guide to the study of Glacial Sediments|location= London|publisher=Arnold|pages=103–107}}

===Principal components analysis===
[[File:GaussianScatterPCA.png|thumb|right|PCA of the [[multivariate Gaussian distribution]] centered at (1, 3) with a standard deviation of 3 in roughly the (0.878, 0.478) direction and of 1 in the orthogonal direction. The vectors shown are unit eigenvectors of the (symmetric, positive-semidefinite) [[covariance matrix]] scaled by the square root of the corresponding eigenvalue.
(Just as in the one-dimensional case, the square root is taken because the [[standard deviation]] is more readily visualized than the [[variance]].)]]
{{Main|Principal components analysis}}
{{See also|Positive semidefinite matrix|Factor analysis}}
The [[Eigendecomposition_of_a_matrix#Symmetric_matrices|eigendecomposition]] of a [[symmetric matrix|symmetric]] [[positive semidefinite matrix|positive semidefinite]] (PSD) [[positive semidefinite matrix|matrix]] yields an [[orthogonal basis]] of eigenvectors, each of which has a nonnegative eigenvalue. The orthogonal decomposition of a PSD matrix is used in [[multivariate statistics|multivariate analysis]], where the [[sample variance|sample]] [[covariance matrix|covariance matrices]] are PSD. This orthogonal decomposition is called [[principal components analysis]] (PCA) in statistics. PCA studies [[linear relation]]s among variables. PCA is performed on the [[covariance matrix]] or the [[correlation matrix]] (in which each variable is scaled to have its [[sample variance]] equal to one). For the covariance or correlation matrix, the eigenvectors correspond to [[principal components analysis|principal components]] and the eigenvalues to the [[explained variance|variance explained]] by the principal components. Principal component analysis of the correlation matrix provides an [[orthogonal basis|orthonormal eigen-basis]] for the space of the observed data: In this basis, the largest eigenvalues correspond to the principal components that are associated with most of the covariability among a number of observed data.

Principal component analysis is used to study [[data mining|large]] [[data set]]s, such as those encountered in [[data mining]], [[chemometrics|chemical research]], [[psychometrics|psychology]], and in [[marketing]]. PCA is popular especially in psychology, in the field of [[psychometrics]]. In [[Q methodology]], the eigenvalues of the correlation matrix determine the Q-methodologist's judgment of ''practical'' significance (which differs from the [[statistical significance]] of [[hypothesis testing]]): The factors with eigenvalues greater than 1.00 are considered practically significant, that is, as explaining an important amount of the variability in the data, while eigenvalues less than 1.00 are considered practically insignificant, as explaining only a negligible portion of the data variability. More generally, principal component analysis can be used as a method of [[factor analysis]] in [[structural equation model]]ing.

===Vibration analysis===
[[File:beam mode 1.gif|thumb|225px|1st lateral bending (See [[vibration]] for more types of vibration)]]
{{Main|Vibration}}
Eigenvalue problems occur naturally in the vibration analysis of mechanical structures with many [[Degrees of freedom (mechanics)|degrees of freedom]]. The eigenvalues are used to determine the natural frequencies (or '''eigenfrequencies''') of vibration, and the eigenvectors determine the shapes of these vibrational modes. In particular, undamped vibration is governed by
:m\ddot x + kx = 0
or
:m\ddot x = -k x
that is, acceleration is proportional to position (i.e., we expect x to be sinusoidal in time). In n dimensions, m becomes a [[mass matrix]] and k a [[stiffness matrix]]. Admissible solutions are then a linear combination of solutions to the [[generalized eigenvalue problem]]
:k x = \omega^2 m x
where \omega^2 is the eigenvalue and \omega is the [[angular frequency]].
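As a small illustration of how such a generalized eigenvalue problem is solved in practice, the following sketch assumes SciPy is available and uses a made-up two-degree-of-freedom system; the mass and stiffness values are invented for illustration only.

<source lang="python">
import numpy as np
from scipy.linalg import eigh

# Hypothetical mass and stiffness matrices (symmetric, m positive definite).
m = np.array([[2.0, 0.0],
              [0.0, 1.0]])
k = np.array([[ 6.0, -2.0],
              [-2.0,  4.0]])

# eigh(k, m) solves the generalized symmetric problem k x = w2 * m x.
w2, modes = eigh(k, m)
omega = np.sqrt(w2)     # natural (angular) frequencies
print(omega)            # each column of `modes` is the corresponding mode shape
</source>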
Note that the principal vibration modes are different from the principal compliance modes, which are the eigenvectors of k alone. Furthermore, [[damped vibration]], governed by :m\ddot x + c \dot x + kx = 0 leads to a so-called [[quadratic eigenvalue problem]], :(\omega^2 m + \omega c + k)x = 0. This can be reduced to a generalized eigenvalue problem by [[Quadratic_eigenvalue_problem#Methods of Solution|clever algebra]] at the cost of solving a larger system. The orthogonality properties of the eigenvectors allow decoupling of the differential equations, so that the system can be represented as a linear summation of the eigenvectors. The eigenvalue problems of complex structures are often solved using [[finite element analysis]], but they neatly generalize the solution to scalar-valued vibration problems. ===Eigenfaces=== [[File:Eigenfaces.png|thumb|200px|[[Eigenface]]s as examples of eigenvectors]] {{Main|Eigenface}} In [[image processing]], processed images of [[face]]s can be seen as vectors whose components are the [[brightness]]es of each [[pixel]].{{Citation | last=Xirouhakis | first=A. | first2=G. | last2=Votsis | first3=A. | last3=Delopoulus | title=Estimation of 3D motion and structure of human faces | publisher=Online paper in PDF format, National Technical University of Athens | url=http://www.image.ece.ntua.gr/papers/43.pdf |format=PDF| year=2004 }} The dimension of this vector space is the number of pixels. The eigenvectors of the [[covariance matrix]] associated with a large set of normalized pictures of faces are called '''[[eigenface]]s'''; this is an example of [[principal components analysis]]. They are very useful for expressing any face image as a [[linear combination]] of some of them. In the [[Facial recognition system|facial recognition]] branch of [[biometrics]], eigenfaces provide a means of applying [[data compression]] to faces for [[Recognition of human individuals|identification]] purposes. Research has also been conducted on eigen vision systems for determining hand gestures. Similar to this concept, '''eigenvoices''' represent the general direction of variability in human pronunciations of a particular utterance, such as a word in a language. Based on a linear combination of such eigenvoices, a new pronunciation of the word can be constructed. These concepts have been found useful in automatic speech recognition systems for speaker adaptation. ===Tensor of moment of inertia=== In [[mechanics]], the eigenvectors of the [[moment of inertia#Inertia tensor|moment of inertia tensor]] define the [[principal axis (mechanics)|principal axes]] of a [[rigid body]]. The [[tensor]] of moment of [[inertia]] is a key quantity required to determine the rotation of a rigid body around its [[center of mass]]. ===Stress tensor=== In [[solid mechanics]], the [[stress (mechanics)|stress]] tensor is symmetric and so can be decomposed into a [[diagonal]] tensor with the eigenvalues on the diagonal and the eigenvectors as a basis. Because it is diagonal in this orientation, the stress tensor has no [[Shear (mathematics)|shear]] components; the components it does have are the principal components.
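As a small numerical illustration of this decomposition (the stress values below are arbitrary assumptions, not data for any particular material), the principal stresses and principal directions of a symmetric stress tensor can be obtained with NumPy's <code>numpy.linalg.eigh</code>:

<syntaxhighlight lang="python">
import numpy as np

# Hypothetical symmetric Cauchy stress tensor in MPa; the entries are illustrative.
stress = np.array([[ 50.0,  30.0,  0.0],
                   [ 30.0, -20.0,  0.0],
                   [  0.0,   0.0, 10.0]])

# eigh is the appropriate solver for a symmetric matrix: it returns real
# eigenvalues (in ascending order) and an orthonormal set of eigenvectors.
principal_stresses, principal_directions = np.linalg.eigh(stress)

# In the basis of principal directions the stress tensor is diagonal,
# i.e. it has no shear components.
diagonalized = principal_directions.T @ stress @ principal_directions
print("principal stresses (MPa):", principal_stresses)
print("stress tensor in the principal basis:")
print(np.round(diagonalized, 10))
</syntaxhighlight>

The columns of <code>principal_directions</code> are the principal axes, and the off-diagonal entries of the transformed tensor vanish up to rounding error, reflecting the absence of shear components in that orientation.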
===Eigenvalues of a graph=== In [[spectral graph theory]], an eigenvalue of a [[graph theory|graph]] is defined as an eigenvalue of the graph's [[adjacency matrix]] A, or (increasingly) of the graph's [[Laplacian matrix]] (see also [[Discrete Laplace operator]]), which is either T - A (sometimes called the ''combinatorial Laplacian'') or I - T^{-1/2}A T^{-1/2} (sometimes called the ''normalized Laplacian''), where T is a diagonal matrix with T_{i i} equal to the degree of vertex v_i, and in T^{-1/2}, the ith diagonal entry is 1/\sqrt{\operatorname{deg}(v_i)}. The kth principal eigenvector of a graph is defined as either the eigenvector corresponding to the kth largest or kth smallest eigenvalue of the Laplacian. The first principal eigenvector of the graph is also referred to merely as the principal eigenvector. The principal eigenvector is used to measure the [[eigenvector centrality|centrality]] of its vertices. An example is [[Google]]'s [[PageRank]] algorithm. The principal eigenvector of a modified [[adjacency matrix]] of the World Wide Web graph gives the page ranks as its components. This vector corresponds to the [[stationary distribution]] of the [[Markov chain]] represented by the row-normalized adjacency matrix; however, the adjacency matrix must first be modified to ensure a stationary distribution exists. The eigenvector corresponding to the second smallest eigenvalue of the Laplacian can be used to partition the graph into clusters, via [[spectral clustering]]. Other methods are also available for clustering. ===Basic reproduction number=== {{Main|Basic reproduction number}} The basic reproduction number (R_0) is a fundamental number in the study of how infectious diseases spread. If one infectious person is put into a population of completely susceptible people, then R_0 is the average number of people that this one infectious person will infect. The generation time of an infection is the time, t_G, from one person becoming infected to the next person becoming infected. In a heterogeneous population, the next generation matrix defines how many people in the population will become infected after time t_G has passed. R_0 is then the largest eigenvalue of the next generation matrix.{{Citation | author = Diekmann O, Heesterbeek JAP, Metz JAJ | year = 1990 | title = On the definition and the computation of the basic reproduction ratio R0 in models for infectious diseases in heterogeneous populations | journal = Journal of Mathematical Biology | volume = 28 | issue = 4 | pages =365–382 | pmid = 2117040 | doi = 10.1007/BF00178324 }}{{Citation | author = Odo Diekmann and J. A. P. Heesterbeek | title = Mathematical epidemiology of infectious diseases | series = Wiley series in mathematical and computational biology | publisher = John Wiley & Sons | location = West Sussex, England | year = 2000 }} ==See also== * [[Nonlinear eigenproblem]] * [[Quadratic eigenvalue problem]] * [[Introduction to eigenstates]] * [[Eigenplane]] * [[Jordan normal form]] * [[List of numerical analysis software]] * [[Antieigenvalue theory]] ==Notes== {{reflist|2}} ==References==
* {{Citation | last=Korn | first=Granino A. | first2=Theresa M. | last2=Korn | title=Mathematical Handbook for Scientists and Engineers: Definitions, Theorems, and Formulas for Reference and Review | publisher=1152 p., Dover Publications, 2 Revised edition | year=2000 | isbn=0-486-41147-8 | bibcode=1968mhse.book.....K | journal=New York: McGraw-Hill }}. * {{Citation | last = Lipschutz | first = Seymour | title = Schaum's outline of theory and problems of linear algebra | edition = 2nd | publisher = McGraw-Hill Companies | location = New York, NY | series = Schaum's outline series | year = 1991 | isbn = 0-07-038007-4 }}. * {{Citation | last = Friedberg | first = Stephen H. | first2 = Arnold J. | last2 = Insel | first3 = Lawrence E. | last3 = Spence | title = Linear algebra | edition = 2nd | publisher = Prentice Hall | location = Englewood Cliffs, NJ 07632 | year = 1989 | isbn = 0-13-537102-3 }}. * {{Citation | last = Aldrich | first = John | title = Earliest Known Uses of Some of the Words of Mathematics | url = http://jeff560.tripod.com/e.html | editor = Jeff Miller (Editor) | year = 2006 | chapter = Eigenvalue, eigenfunction, eigenvector, and related terms | chapterurl = http://jeff560.tripod.com/e.html | accessdate = 2006-08-22 }} * {{Citation | last=Strang | first=Gilbert | title=Introduction to linear algebra | publisher=Wellesley-Cambridge Press, Wellesley, MA | year=1993 | isbn=0-9614088-5-5 }}. * {{Citation | last=Strang | first=Gilbert | title=Linear algebra and its applications | publisher=Thomson, Brooks/Cole, Belmont, CA | year=2006 | isbn=0-03-010567-6 }}. * {{Citation | last=Bowen | first=Ray M. | first2=Chao-Cheng | last2=Wang | title=Linear and multilinear algebra | publisher=Plenum Press, New York, NY | year=1980 | isbn=0-306-37508-7 }}. * {{Citation | last = Cohen-Tannoudji | first = Claude | author-link = Claude Cohen-Tannoudji | title = Quantum mechanics | publisher = John Wiley & Sons | year = 1977 | chapter = Chapter II. The mathematical tools of quantum mechanics | isbn = 0-471-16432-1 }}. * {{Citation | last = Fraleigh | first = John B. | first2 = Raymond A. | last2 = Beauregard | title = Linear algebra | edition = 3rd | publisher = Addison-Wesley Publishing Company | year = 1995 | isbn = 0-201-83999-7 (international edition) }}. * {{Citation | last=Golub | first=Gene H. | authorlink1 = Gene_H._Golub | first2=Charles F. | last2=Van Loan | authorlink2 = Charles_F._Van_Loan | title=Matrix computations (3rd Edition) | publisher=Johns Hopkins University Press, Baltimore, MD | year=1996 | isbn=978-0-8018-5414-9 }}. * {{Citation | last = Hawkins | first = T. | title = Cauchy and the spectral theory of matrices | journal = Historia Mathematica | volume = 2 | pages = 1–29 | year = 1975 | doi = 10.1016/0315-0860(75)90032-4 }}. * {{Citation | last=Horn | first=Roger A. | first2=Charles F. | last2=Johnson | title=Matrix analysis | publisher=Cambridge University Press | year=1985 | isbn=0-521-30586-1 (hardback), ISBN 0-521-38632-2 (paperback) }}. * {{Citation | last=Kline | first=Morris | title=Mathematical thought from ancient to modern times | publisher=Oxford University Press | year=1972 | isbn=0-19-501496-0 }}. * {{Citation | last=Meyer | first=Carl D. | title=Matrix analysis and applied linear algebra | publisher=Society for Industrial and Applied Mathematics (SIAM), Philadelphia | year=2000 | isbn=978-0-89871-454-8 }}. * {{Citation | last=Brown | first=Maureen | title=Illuminating Patterns of Perception: An Overview of Q Methodology | date=October 2004 | isbn= }}. 
* {{Citation | last = Golub | first = Gene F. | first2 = Henk A. | last2 = van der Vorst | title = Eigenvalue computation in the 20th century | journal = Journal of Computational and Applied Mathematics | volume = 123 | pages = 35–65 | year = 2000 | doi = 10.1016/S0377-0427(00)00413-1 }}. * {{Citation | last=Akivis | first=Max A. | coauthors=Vladislav V. Goldberg | title=Tensor calculus | series=Russian | publisher=Science Publishers, Moscow | year=1969 }}. * {{Citation | last=Gelfand | first=I. M. | title=Lecture notes in linear algebra | series=Russian | publisher=Science Publishers, Moscow | year=1971 | isbn= }}. * {{Citation | last=Alexandrov | first=Pavel S. | title=Lecture notes in analytical geometry | series=Russian | publisher=Science Publishers, Moscow | year=1968 | isbn= }}. * {{Citation | last=Carter | first=Tamara A. | first2=Richard A. | last2=Tapia | first3=Anne | last3=Papaconstantinou | title=Linear Algebra: An Introduction to Linear Algebra for Pre-Calculus Students | publisher=Rice University, Online Edition | url=http://ceee.rice.edu/Books/LA/index.html | accessdate=2008-02-19 }}. * {{Citation | last=Roman | first=Steven | title=Advanced linear algebra | edition=3rd | publisher=Springer Science + Business Media, LLC | place=New York, NY | year=2008 | isbn=978-0-387-72828-5 }}. * {{Citation | last=Shilov | first=Georgi E. | title=Linear algebra | edition=translated and edited by Richard A. Silverman | publisher=Dover Publications | place=New York | year=1977 | isbn=0-486-63518-X }}. * {{Citation | last=Hefferon | first=Jim | title=Linear Algebra | publisher=Online book, St Michael's College, Colchester, Vermont, USA | url=http://joshua.smcvt.edu/linearalgebra/ | year=2001 | isbn= }}. * {{Citation | last=Kuttler | first=Kenneth | title=An introduction to linear algebra | publisher=Online e-book in PDF format, Brigham Young University | url=http://www.math.byu.edu/~klkuttle/Linearalgebra.pdf |format=PDF| year=2007 | isbn= }}. * {{Citation | last=Demmel | first=James W. | authorlink = James Demmel | title=Applied numerical linear algebra | publisher=SIAM | year=1997 | isbn=0-89871-389-7 }}. * {{Citation | last=Beezer | first=Robert A. | title=A first course in linear algebra | url=http://linear.ups.edu/ | publisher=Free online book under GNU licence, University of Puget Sound | year=2006 | isbn= }}. * {{Citation | last = Lancaster | first = P. | title = Matrix theory | series = Russian | publisher = Science Publishers | location = Moscow, Russia | year = 1973 }}. * {{Citation | last = Halmos | first = Paul R. | author-link = Paul Halmos | title = Finite-dimensional vector spaces | edition = 8th | publisher = Springer-Verlag | location = New York, NY | year = 1987 | isbn = 0-387-90093-4 }}. * Pigolkina, T. S. and Shulman, V. S., ''Eigenvalue'' (in Russian), In:Vinogradov, I. M. (Ed.), ''Mathematical Encyclopedia'', Vol. 5, Soviet Encyclopedia, Moscow, 1977. * {{Citation | last=Greub | first=Werner H. | title=Linear Algebra (4th Edition) | publisher=Springer-Verlag, New York, NY | year=1975 | isbn=0-387-90110-8 }}. * {{Citation | last=Larson | first=Ron | first2=Bruce H. | last2=Edwards | title=Elementary linear algebra | edition=5th | publisher=Houghton Mifflin Company | year=2003 | isbn=0-618-33567-6 }}. * [[Charles W. Curtis|Curtis, Charles W.]], ''Linear Algebra: An Introductory Approach'', 347 p., Springer; 4th ed. 1984. Corr. 7th printing edition (August 19, 1999), ISBN 0-387-90992-3. * {{Citation | last=Shores | first=Thomas S. 
| title=Applied linear algebra and matrix analysis | publisher=Springer Science+Business Media, LLC | year=2007 | isbn=0-387-33194-8 }}. * {{Citation | last=Sharipov | first=Ruslan A. | title=Course of Linear Algebra and Multidimensional Geometry: the textbook | year=1996 | isbn=5-7477-0099-5 | arxiv=math/0405323 }}. * {{Citation | last=Gohberg | first=Israel | first2=Peter | last2=Lancaster | first3=Leiba | last3=Rodman | title=Indefinite linear algebra and applications | publisher=Birkhäuser Verlag | place=Basel-Boston-Berlin | year=2005 | isbn=3-7643-7349-0 }}.
==External links== {{Wikibooks|Linear Algebra|Eigenvalues and Eigenvectors}} {{Wikibooks|The Book of Mathematical Proofs|Algebra/Linear Transformations}} * [http://www.physlink.com/education/AskExperts/ae520.cfm What are Eigen Values?] — non-technical introduction from PhysLink.com's "Ask the Experts" *[http://people.revoledu.com/kardi/tutorial/LinearAlgebra/EigenValueEigenVector.html Eigen Values and Eigen Vectors Numerical Examples] – Tutorial and Interactive Program from Revoledu. *[http://khanexercises.appspot.com/video?v=PhfbEr2btGQ Introduction to Eigen Vectors and Eigen Values] – lecture from Khan Academy '''Theory''' * {{springer|title=Eigen value|id=p/e035150}} * {{springer|title=Eigen vector|id=p/e035180}} * {{planetmath reference|id=4397|title=Eigenvalue (of a matrix)}} * [http://mathworld.wolfram.com/Eigenvector.html Eigenvector] — Wolfram [[MathWorld]] * [http://ocw.mit.edu/ans7870/18/18.06/javademo/Eigen/ Eigen Vector Examination working applet] * [http://web.mit.edu/18.06/www/Demos/eigen-applet-all/eigen_sound_all.html Same Eigen Vector Examination as above in a Flash demo with sound] * [http://www.sosmath.com/matrix/eigen1/eigen1.html Computation of Eigenvalues] * [http://www.cs.utk.edu/~dongarra/etemplates/index.html Numerical solution of eigenvalue problems] Edited by Zhaojun Bai, [[James Demmel]], Jack Dongarra, Axel Ruhe, and [[Henk van der Vorst]] * Eigenvalues and Eigenvectors on the Ask Dr. Math forums: [http://mathforum.org/library/drmath/view/55483.html], [http://mathforum.org/library/drmath/view/51989.html] '''Online calculators''' * [http://www.arndt-bruenner.de/mathe/scripts/engl_eigenwert.htm arndt-bruenner.de] * [http://www.bluebit.gr/matrix-calculator/ bluebit.gr] * [http://wims.unice.fr/wims/wims.cgi?session=6S051ABAFA.2&+lang=en&+module=tool%2Flinear%2Fmatrix.en wims.unice.fr] '''Demonstration applets''' * [http://scienceapplets.blogspot.com/2012/03/eigenvalues-and-eigenvectors.html Java applet about eigenvectors in the real plane] {{Linear algebra}} {{Mathematics-footer}} {{DEFAULTSORT:Eigenvalues And Eigenvectors}} [[Category:Mathematical physics]] [[Category:Abstract algebra]] [[Category:Linear algebra]] [[Category:Matrix theory]] [[Category:Singular value decomposition]] [[Category:Articles including recorded pronunciations]] [[Category:German loanwords]] {{Link FA|es}} {{Link FA|zh}} [[ar:القيم الذاتية والمتجهات الذاتية]] [[be-x-old:Уласныя лікі, вэктары й прасторы]] [[ca:Valor propi, vector propi i espai propi]] [[cs:Vlastní číslo]] [[da:Egenværdi, egenvektor og egenrum]] [[de:Eigenwertproblem]] [[es:Vector propio y valor propio]] [[eo:Ajgeno kaj ajgenvektoro]] [[fa:مقدار ویژه و بردار ویژه]] [[fr:Valeur propre, vecteur propre et espace propre]] [[ko:고유값]] [[it:Autovettore e autovalore]] [[he:ערך עצמי]] [[kk:Өзіндік функция]] [[lv:Īpašvērtības un īpašvektori]] [[lt:Tikrinių verčių lygtis]] [[hu:Sajátvektor és sajátérték]] [[nl:Eigenwaarde (wiskunde)]] [[ja:固有値]] [[no:Egenvektor]] [[nn:Eigenverdi, eigenvektor og eigerom]] [[pl:Wektory i wartości własne]] [[pt:Valor próprio]] [[ro:Vectori și valori proprii]] [[ru:Собственные векторы, значения и пространства]] [[simple:Eigenvectors and eigenvalues]] [[sl:Lastna vrednost]] [[fi:Ominaisarvo, ominaisvektori ja ominaisavaruus]] [[sv:Egenvärde, egenvektor och egenrum]] [[ta:ஐகென் மதிப்பு]] [[th:เวกเตอร์ลักษณะเฉพาะ]] [[uk:Власний вектор]] [[ur:ویژہ قدر]] [[vi:Vectơ riêng]] [[zh-yue:特徵向量]] [[zh:特征向量]]