The rubber sheet is stretched along the ''x'' axis and simultaneously shrunk along the ''y'' axis. After this stretching/shrinking transformation is applied many times, almost any vector on the surface of the rubber sheet will be oriented closer and closer to the direction of the ''x'' axis (the direction of stretching). The exceptions are vectors along the ''y'' axis, which will gradually shrink away to nothing.
====Hyperbolic rotation====
A hyperbolic rotation with hyperbolic angle <math>\varphi</math> is represented by the matrix <math>\begin{bmatrix}\cosh\varphi & \sinh\varphi \\ \sinh\varphi & \cosh\varphi\end{bmatrix}</math>. Its eigenvalues are <math>e^{\varphi}</math> and <math>e^{-\varphi}</math>, which are [[multiplicative inverse]]s of each other, with eigenvectors along the diagonals <math>(1, 1)</math> and <math>(1, -1)</math>.
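The reciprocal relationship between the two eigenvalues can be checked numerically; the hyperbolic angle below is an arbitrary illustrative value:

```python
import numpy as np

# Hyperbolic rotation by an (arbitrarily chosen) hyperbolic angle phi.
phi = 0.7
H = np.array([[np.cosh(phi), np.sinh(phi)],
              [np.sinh(phi), np.cosh(phi)]])

# The matrix is symmetric, so eigvalsh returns real eigenvalues in
# ascending order: e^-phi, then e^phi.
eigenvalues = np.linalg.eigvalsh(H)
print(eigenvalues)
print(eigenvalues[0] * eigenvalues[1])   # multiplicative inverses: product is 1
```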
====Rotation====
A [[Rotation (mathematics)|rotation]] in a [[Euclidean plane|plane]] is a transformation that describes motion of a vector, plane, coordinates, etc., around a fixed point. A rotation by an angle <math>\theta</math> about the origin is represented by the matrix <math>R = \begin{bmatrix}\cos\theta & -\sin\theta \\ \sin\theta & \cos\theta\end{bmatrix}</math>. A rotation by any integer number of full turns (0°, 360°, 720°, etc.) is just the identity transformation (a uniform scaling by +1), while a rotation by an odd number of half-turns (180°, 540°, etc.) is a [[point reflection]] (uniform scaling by −1). Except for these special cases, every vector in the real plane will have its direction changed, so there cannot be any real eigenvectors. Indeed, the characteristic equation is the [[quadratic equation]] <math>\lambda^2 - 2\lambda\cos\theta + 1 = 0</math> with [[discriminant]] <math>D = -4\sin^2\theta</math>, which is a negative number whenever <math>\theta</math> is not an integer multiple of 180°. Therefore the two eigenvalues are the complex numbers <math>\lambda = \cos\theta \pm i\sin\theta = e^{\pm i\theta}</math>, and all eigenvectors have non-real entries.
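That a rotation matrix has no real eigenvectors can be verified numerically; the angle below is an arbitrary illustrative value:

```python
import numpy as np

# Rotation of the plane by an (arbitrarily chosen) angle theta.
theta = np.deg2rad(30)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# For theta not a multiple of 180 degrees, the eigenvalues are the
# complex conjugate pair cos(theta) +/- i*sin(theta), of modulus 1.
eigenvalues = np.linalg.eigvals(R)
print(eigenvalues)
```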
===Schrödinger equation===
[[File:HAtomOrbitals.png|thumb|271px|The [[wavefunction]]s associated with the [[bound state]]s of an [[electron]] in a [[hydrogen atom]] can be seen as the eigenvectors of the [[hydrogen atom|hydrogen atom Hamiltonian]] as well as of the [[angular momentum operator]]. They are associated with eigenvalues interpreted as their energies (increasing downward: <math>n = 1, 2, 3, \ldots</math>) and [[angular momentum]] (increasing across: s, p, d, ...). The illustration shows the square of the absolute value of the wavefunctions. Brighter areas correspond to higher [[probability density function|probability density]] for a position [[measurement in quantum mechanics|measurement]]. The center of each figure is the [[atomic nucleus]], a [[proton]].]]
An example of an eigenvalue equation where the transformation is represented in terms of a differential operator is the time-independent [[Schrödinger equation]] in [[quantum mechanics]]:
:<math>H\psi_E = E\psi_E \,</math>
where <math>H</math>, the [[Hamiltonian (quantum mechanics)|Hamiltonian]], is a second-order [[differential operator]] and <math>\psi_E</math>, the [[wavefunction]], is one of its eigenfunctions corresponding to the eigenvalue <math>E</math>, interpreted as its [[energy]].
However, in the case where one is interested only in the [[bound state]] solutions of the Schrödinger equation, one looks for <math>\psi_E</math> within the space of [[Square-integrable function|square integrable]] functions. Since this space is a [[Hilbert space]] with a well-defined [[scalar product]], one can introduce a [[Basis (linear algebra)|basis set]] in which <math>\psi_E</math> and <math>H</math> can be represented as a one-dimensional array and a matrix, respectively. This allows one to represent the Schrödinger equation in a matrix form.
[[Bra-ket notation]] is often used in this context. A vector, which represents a state of the system, in the Hilbert space of square integrable functions is represented by <math>|\Psi_E\rangle</math>. In this notation, the Schrödinger equation is:
:<math>H|\Psi_E\rangle = E|\Psi_E\rangle</math>
where <math>|\Psi_E\rangle</math> is an '''eigenstate''' of <math>H</math>. Here <math>H</math> is a [[self adjoint operator]], the infinite-dimensional analog of Hermitian matrices (''see [[Observable]]''). As in the matrix case, in the equation above <math>H|\Psi_E\rangle</math> is understood to be the vector obtained by application of the transformation <math>H</math> to <math>|\Psi_E\rangle</math>.
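The matrix form can be illustrated with a sketch (not from the original article): discretizing the one-dimensional time-independent Schrödinger equation on a grid turns the Hamiltonian into an ordinary symmetric matrix, whose eigenvectors approximate the bound-state wavefunctions. A harmonic potential is assumed here because its exact energies are known:

```python
import numpy as np

# Discretize  -(1/2) psi'' + V(x) psi = E psi  (atomic units) with a
# harmonic potential V(x) = x^2/2 on a uniform grid. The Hamiltonian
# becomes a symmetric matrix; diagonalizing it gives the bound states.
n, L = 400, 10.0                      # grid points, box half-width
x = np.linspace(-L, L, n)
h = x[1] - x[0]

# Second derivative by central finite differences (tridiagonal matrix).
laplacian = (np.diag(np.full(n - 1, 1.0), -1)
             - 2.0 * np.eye(n)
             + np.diag(np.full(n - 1, 1.0), 1)) / h**2
H = -0.5 * laplacian + np.diag(0.5 * x**2)

energies, wavefunctions = np.linalg.eigh(H)   # ascending eigenvalues
# For the harmonic oscillator the exact energies are n + 1/2.
print(energies[:3])                           # close to [0.5, 1.5, 2.5]
```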
===Molecular orbitals===
In [[quantum mechanics]], and in particular in [[atomic physics|atomic]] and [[molecular physics]], within the [[Hartree–Fock]] theory, the [[atomic orbital|atomic]] and [[molecular orbital]]s can be defined by the eigenvectors of the [[Fock operator]]. The corresponding eigenvalues are interpreted as [[ionization potential]]s via [[Koopmans' theorem]]. In this case, the term eigenvector is used in a somewhat more general meaning, since the Fock operator is explicitly dependent on the orbitals and their eigenvalues. To underline this aspect, one speaks of a nonlinear eigenvalue problem. Such equations are usually solved by an [[iteration]] procedure, called in this case the [[self-consistent field]] method. In [[quantum chemistry]], one often represents the Hartree–Fock equation in a non-[[orthogonal]] [[basis set (chemistry)|basis set]]. This particular representation is a [[generalized eigenvalue problem]] called [[Roothaan equations]].
===Geology and glaciology===
In [[geology]], especially in the study of [[glacial till]], eigenvectors and eigenvalues are used as a method by which a mass of information about a clast fabric's constituents' orientation and dip can be summarized in a 3-D space by six numbers. In the field, a geologist may collect such data for hundreds or thousands of [[clasts]] in a soil sample, which can only be compared graphically, such as in a Tri-Plot (Sneed and Folk) diagram<ref>{{Citation|doi=10.1002/1096-9837(200012)25:13<1473::AID-ESP158>3.0.CO;2-C|last1=Graham|first1=D.|last2=Midgley|first2=N.|title=Graphical representation of particle shape using triangular diagrams: an Excel spreadsheet method|year=2000|journal=Earth Surface Processes and Landforms|volume=25|pages=1473–1477|issue=13}}</ref><ref>{{Citation|doi=10.1086/626490|last1=Sneed|first1=E. D.|last2=Folk|first2=R. L.|year=1958|title=Pebbles in the lower Colorado River, Texas, a study of particle morphogenesis|journal=Journal of Geology|volume=66|issue=2|pages=114–150}}</ref> or as a Stereonet on a Wulff Net.<ref>{{Citation|doi=10.1016/S0098-3004(97)00122-2|last1=Knox-Robinson|first1=C.|year=1998|pages=243|volume=24|journal=Computers & Geosciences|title=GIS-stereoplot: an interactive stereonet plotting module for ArcView 3.0 geographic information system|issue=3|last2=Gardoll|first2=Stephen J}}</ref>
The output for the orientation tensor is in the three orthogonal (perpendicular) axes of space. The three eigenvectors <math>v_1, v_2, v_3</math> are ordered by their eigenvalues <math>E_1 \geq E_2 \geq E_3</math>;<ref>[http://www.ruhr-uni-bochum.de/hardrock/downloads.htm Stereo32 software]</ref> <math>v_1</math> is then the primary orientation/dip of the clast, <math>v_2</math> the secondary and <math>v_3</math> the tertiary, in terms of strength. The clast orientation is defined as the direction of the eigenvector, on a [[compass rose]] of [[turn (geometry)|360°]]. Dip is measured as the eigenvalue, the modulus of the tensor: this is valued from 0° (no dip) to 90° (vertical). The relative values of <math>E_1</math>, <math>E_2</math>, and <math>E_3</math> are dictated by the nature of the sediment's fabric. If <math>E_1 = E_2 = E_3</math>, the fabric is said to be isotropic. If <math>E_1 = E_2 > E_3</math>, the fabric is said to be planar. If <math>E_1 > E_2 > E_3</math>, the fabric is said to be linear.<ref>{{Citation|last1=Benn|first1=D.|last2=Evans|first2=D.|year=2004|title=A Practical Guide to the study of Glacial Sediments|location=London|publisher=Arnold|pages=103–107}}</ref>
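The orientation-tensor idea can be sketched with synthetic data (the clast directions below are simulated, not field measurements): each clast axis is treated as a unit vector, the tensor is the normalized sum of their outer products, and its eigendecomposition summarizes the fabric.

```python
import numpy as np

# Synthetic fabric: 500 clast axes clustered around a preferred
# direction (1, 0, 0), normalized to unit vectors.
rng = np.random.default_rng(0)
axes = rng.normal([1.0, 0.0, 0.0], 0.2, size=(500, 3))
axes /= np.linalg.norm(axes, axis=1, keepdims=True)

T = axes.T @ axes / len(axes)          # 3x3 symmetric orientation tensor
E, v = np.linalg.eigh(T)               # eigenvalues in ascending order
E, v = E[::-1], v[:, ::-1]             # reorder so E1 >= E2 >= E3
print(E)                               # E1 dominates: a linear fabric
print(v[:, 0])                         # v1: the primary orientation
```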
===Principal components analysis===
[[File:GaussianScatterPCA.png|thumb|right|PCA of the [[multivariate Gaussian distribution]] centered at <math>(1, 3)</math> with a standard deviation of 3 in roughly the <math>(0.866, 0.5)</math> direction and of 1 in the orthogonal direction. The vectors shown are unit eigenvectors of the (symmetric, positive-semidefinite) [[covariance matrix]] scaled by the square root of the corresponding eigenvalue. (Just as in the one-dimensional case, the square root is taken because the [[standard deviation]] is more readily visualized than the [[variance]].)]]
{{Main|Principal components analysis}}
{{See also|Positive semidefinite matrix|Factor analysis}}
The [[Eigendecomposition_of_a_matrix#Symmetric_matrices|eigendecomposition]] of a [[symmetric matrix|symmetric]] [[positive semidefinite matrix|positive semidefinite]] (PSD) [[positive semidefinite matrix|matrix]] yields an [[orthogonal basis]] of eigenvectors, each of which has a nonnegative eigenvalue. The orthogonal decomposition of a PSD matrix is used in [[multivariate statistics|multivariate analysis]], where the [[sample variance|sample]] [[covariance matrix|covariance matrices]] are PSD. This orthogonal decomposition is called [[principal components analysis]] (PCA) in statistics. PCA studies [[linear relation]]s among variables. PCA is performed on the [[covariance matrix]] or the [[correlation matrix]] (in which each variable is scaled to have its [[sample variance]] equal to one). For the covariance or correlation matrix, the eigenvectors correspond to [[principal components analysis|principal components]] and the eigenvalues to the [[explained variance|variance explained]] by the principal components. Principal component analysis of the correlation matrix provides an [[orthogonal basis|orthonormal eigen-basis]] for the space of the observed data: in this basis, the largest eigenvalues correspond to the principal components associated with most of the covariability of the observed data.
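A minimal PCA sketch on synthetic data (the covariance values below are illustrative) shows the correspondence: eigenvectors of the sample covariance matrix are the principal components, and each eigenvalue is the variance explained along its component.

```python
import numpy as np

# Synthetic correlated 2-D data.
rng = np.random.default_rng(1)
data = rng.multivariate_normal([0, 0], [[9.0, 3.0], [3.0, 2.0]], size=2000)

centered = data - data.mean(axis=0)
cov = np.cov(centered, rowvar=False)          # symmetric PSD matrix
eigenvalues, eigenvectors = np.linalg.eigh(cov)
order = np.argsort(eigenvalues)[::-1]         # largest variance first
eigenvalues, eigenvectors = eigenvalues[order], eigenvectors[:, order]

scores = centered @ eigenvectors              # data in the orthonormal eigenbasis
print(eigenvalues)                            # variance explained per component
```

In the eigenbasis the components of the data are uncorrelated: the covariance matrix of `scores` is diagonal, with the eigenvalues on the diagonal.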
Principal component analysis is used to study [[data mining|large]] [[data set]]s, such as those encountered in [[data mining]], [[chemometrics|chemical research]], [[psychometrics|psychology]], and in [[marketing]]. PCA is especially popular in psychology, in the field of [[psychometrics]]. In [[Q methodology]], the eigenvalues of the correlation matrix determine the Q-methodologist's judgment of ''practical'' significance (which differs from the [[statistical significance]] of [[hypothesis testing]]): factors with eigenvalues greater than 1.00 are considered practically significant, that is, as explaining an important amount of the variability in the data, while eigenvalues less than 1.00 are considered practically insignificant, as explaining only a negligible portion of the data variability. More generally, principal component analysis can be used as a method of [[factor analysis]] in [[structural equation model]]ing.
===Vibration analysis===
[[File:beam mode 1.gif|thumb|225px|1st lateral bending (See [[vibration]] for more types of vibration)]]
{{Main|Vibration}}
Eigenvalue problems occur naturally in the vibration analysis of mechanical structures with many [[Degrees of freedom (mechanics)|degrees of freedom]]. The eigenvalues are used to determine the natural frequencies (or '''eigenfrequencies''') of vibration, and the eigenvectors determine the shapes of these vibrational modes. In particular, undamped vibration is governed by
:<math>m\ddot x + kx = 0</math>
or
:<math>\ddot x = -\frac{k}{m} x,</math>
that is, acceleration is proportional to position (i.e., we expect <math>x</math> to be sinusoidal in time). In <math>n</math> dimensions, <math>m</math> becomes a [[mass matrix]] and <math>k</math> a [[stiffness matrix]]. Admissible solutions are then a linear combination of solutions to the [[generalized eigenvalue problem]]
:<math>kx = \omega^2 mx,</math>
where <math>\omega^2</math> is the eigenvalue and <math>\omega</math> is the [[angular frequency]]. Note that the principal vibration modes are different from the principal compliance modes, which are the eigenvectors of <math>k</math> alone. Furthermore, [[damped vibration]], governed by
:<math>m\ddot x + c\dot x + kx = 0,</math>
leads to a so-called [[quadratic eigenvalue problem]],
:<math>\left(\omega^2 m + \omega c + k\right)x = 0.</math>
This can be reduced to a generalized eigenvalue problem by [[Quadratic_eigenvalue_problem#Methods of Solution|clever algebra]] at the cost of solving a larger system.
The orthogonality properties of the eigenvectors allow the differential equations to be decoupled, so that the system can be represented as a linear summation of the eigenvectors. The eigenvalue problem of complex structures is often solved using [[finite element analysis]], but the solution neatly generalizes that of scalar-valued vibration problems.
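The generalized eigenvalue problem above can be sketched for a two-degree-of-freedom mass–spring chain (the mass and stiffness values below are illustrative):

```python
import numpy as np

# Two-mass chain: solve the generalized problem  k x = omega^2 m x
# for natural frequencies (from the eigenvalues) and mode shapes.
m = np.diag([2.0, 1.0])                      # mass matrix
k = np.array([[ 30.0, -10.0],                # stiffness matrix
              [-10.0,  10.0]])

# With m diagonal, reduce to a standard symmetric problem for
# m^(-1/2) k m^(-1/2), keeping the eigenvalues real and positive.
m_inv_sqrt = np.diag(1.0 / np.sqrt(np.diag(m)))
omega2, modes = np.linalg.eigh(m_inv_sqrt @ k @ m_inv_sqrt)
frequencies = np.sqrt(omega2)                # angular frequencies omega
print(frequencies)
```

The mode shapes in the original coordinates are recovered as `m_inv_sqrt @ modes`, and each column satisfies <math>kx = \omega^2 mx</math> for its eigenvalue.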
===Eigenfaces===
[[File:Eigenfaces.png|thumb|200px|[[Eigenface]]s as examples of eigenvectors]]
{{Main|Eigenface}}
In [[image processing]], processed images of [[face]]s can be seen as vectors whose components are the [[brightness]]es of each [[pixel]].<ref>{{Citation
| last=Xirouhakis
| first=A.
| first2=G.
| last2=Votsis
| first3=A.
| last3=Delopoulus
| title=Estimation of 3D motion and structure of human faces
| publisher=Online paper in PDF format, National Technical University of Athens
| url=http://www.image.ece.ntua.gr/papers/43.pdf
| format=PDF
| year=2004
}}</ref> The dimension of this vector space is the number of pixels. The eigenvectors of the [[covariance matrix]] associated with a large set of normalized pictures of faces are called '''[[eigenface]]s'''; this is an example of [[principal components analysis]]. They are very useful for expressing any face image as a [[linear combination]] of some of them. In the [[Facial recognition system|facial recognition]] branch of [[biometrics]], eigenfaces provide a means of applying [[data compression]] to faces for [[Recognition of human individuals|identification]] purposes. Research has also been done on eigen vision systems for determining hand gestures.
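The eigenface computation can be sketched on synthetic "images" (random pixel data stand in for a real face dataset here); the leading eigenvectors of the pixel covariance matrix are obtained via the SVD of the centered data matrix:

```python
import numpy as np

# Each row is a flattened image; the right singular vectors of the
# centered data are the eigenvectors of the pixel covariance matrix.
rng = np.random.default_rng(2)
n_images, n_pixels = 100, 64 * 64
images = rng.random((n_images, n_pixels))     # synthetic stand-in data

mean_face = images.mean(axis=0)
centered = images - mean_face
u, s, vt = np.linalg.svd(centered, full_matrices=False)
eigenfaces = vt[:10]                          # 10 leading components

# Any face is then approximated by a linear combination of eigenfaces.
weights = centered @ eigenfaces.T             # coordinates of each image
reconstruction = mean_face + weights @ eigenfaces
```

Keeping only a few leading eigenfaces is what provides the data compression: each image is stored as a short weight vector instead of a full pixel array.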
Similar to this concept, '''eigenvoices''' represent the general direction of variability in human pronunciations of a particular utterance, such as a word in a language. Based on a linear combination of such eigenvoices, a new voice pronunciation of the word can be constructed. These concepts have been found useful in automatic speech recognition systems, for speaker adaptation.
===Tensor of moment of inertia===
In [[mechanics]], the eigenvectors of the [[moment of inertia#Inertia tensor|moment of inertia tensor]] define the [[principal axis (mechanics)|principal axes]] of a [[rigid body]]. The [[tensor]] of moment of [[inertia]] is a key quantity required to determine the rotation of a rigid body around its [[center of mass]].
===Stress tensor===
In [[solid mechanics]], the [[stress (mechanics)|stress]] tensor is symmetric and so can be decomposed into a [[diagonal]] tensor with the eigenvalues on the diagonal and eigenvectors as a basis. Because it is diagonal, in this orientation, the stress tensor has no [[Shear (mathematics)|shear]] components; the components it does have are the principal components.
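This diagonalization can be sketched numerically (the stress values below are illustrative): in the eigenvector basis the shear components vanish and the eigenvalues are the principal stresses.

```python
import numpy as np

# A symmetric stress tensor (illustrative values).
sigma = np.array([[ 50.0,  30.0,   0.0],
                  [ 30.0, -20.0,   0.0],
                  [  0.0,   0.0,  10.0]])

principal_stresses, directions = np.linalg.eigh(sigma)
# Expressed in the eigenbasis, the tensor is diagonal: no shear terms.
rotated = directions.T @ sigma @ directions
print(principal_stresses)
```

The same computation applies verbatim to the inertia tensor of the previous section, whose eigenvectors are the principal axes of the rigid body.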
===Eigenvalues of a graph===
In [[spectral graph theory]], an eigenvalue of a [[graph theory|graph]] is defined as an eigenvalue of the graph's [[adjacency matrix]] <math>A</math>, or (increasingly) of the graph's [[Laplacian matrix]] (see also [[Discrete Laplace operator]]), which is either <math>D - A</math> (sometimes called the ''combinatorial Laplacian'') or <math>I - D^{-1/2}AD^{-1/2}</math> (sometimes called the ''normalized Laplacian''), where <math>D</math> is a diagonal matrix with <math>D_{ii}</math> equal to the degree of vertex <math>v_i</math>, and in <math>D^{-1/2}</math>, the <math>i</math>th diagonal entry is <math>1/\sqrt{\deg(v_i)}</math>. The <math>k</math>th principal eigenvector of a graph is defined as either the eigenvector corresponding to the <math>k</math>th largest or <math>k</math>th smallest eigenvalue of the Laplacian. The first principal eigenvector of the graph is also referred to merely as the principal eigenvector.
The principal eigenvector is used to measure the [[eigenvector centrality|centrality]] of its vertices. An example is [[Google]]'s [[PageRank]] algorithm. The principal eigenvector of a modified [[adjacency matrix]] of the World Wide Web graph gives the page ranks as its components. This vector corresponds to the [[stationary distribution]] of the [[Markov chain]] represented by the row-normalized adjacency matrix; however, the adjacency matrix must first be modified to ensure a stationary distribution exists. The eigenvector associated with the second smallest eigenvalue of the Laplacian can be used to partition the graph into clusters, via [[spectral clustering]]. Other methods are also available for clustering.
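The PageRank idea can be sketched on a tiny hypothetical four-page web graph (the link structure and the conventional damping factor 0.85 are illustrative assumptions): power iteration on the damped, normalized link matrix converges to its principal eigenvector.

```python
import numpy as np

# A[i, j] = 1 if page j links to page i (hypothetical 4-page web).
A = np.array([[0, 0, 1, 1],
              [1, 0, 0, 0],
              [1, 1, 0, 1],
              [0, 1, 0, 0]], dtype=float)

n = A.shape[0]
M = A / A.sum(axis=0)                     # column-stochastic link matrix
G = 0.85 * M + 0.15 / n                   # damped matrix: stationary
                                          # distribution is guaranteed to exist

rank = np.full(n, 1.0 / n)
for _ in range(100):                      # power iteration
    rank = G @ rank                       # converges to the principal eigenvector
print(rank)                               # the page ranks
```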
===Basic reproduction number===
{{Main|Basic reproduction number}}
The basic reproduction number (<math>R_0</math>) is a fundamental number in the study of how infectious diseases spread. If one infectious person is put into a population of completely susceptible people, then <math>R_0</math> is the average number of people that one infectious person will infect. The generation time of an infection is the time, <math>t_G</math>, from one person becoming infected to the next person becoming infected. In a heterogeneous population, the next generation matrix defines how many people in the population will become infected after time <math>t_G</math> has passed. <math>R_0</math> is then the largest eigenvalue of the next generation matrix.<ref>{{Citation
| author = Diekmann O, Heesterbeek JAP, Metz JAJ
| year = 1990
| title = On the definition and the computation of the basic reproduction ratio R0 in models for infectious diseases in heterogeneous populations
| journal = Journal of Mathematical Biology
| volume = 28
| issue = 4
| pages = 365–382
| pmid = 2117040
| doi = 10.1007/BF00178324
}}</ref><ref>{{Citation
| author = Odo Diekmann and J. A. P. Heesterbeek
| title = Mathematical epidemiology of infectious diseases
| series = Wiley series in mathematical and computational biology
| publisher = John Wiley & Sons
| location = West Sussex, England
| year = 2000
}}</ref>
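This can be sketched with a hypothetical two-group next generation matrix (the entries below are illustrative, not epidemiological data): entry <math>(i, j)</math> is the expected number of new infections in group <math>i</math> caused by one infected individual in group <math>j</math>, and <math>R_0</math> is the spectral radius of the matrix.

```python
import numpy as np

# Hypothetical next generation matrix for a two-group population.
K = np.array([[2.0, 0.5],
              [1.0, 1.5]])

eigenvalues = np.linalg.eigvals(K)
R0 = max(abs(eigenvalues))            # spectral radius: the largest eigenvalue
print(R0)                             # R0 > 1 indicates epidemic growth
```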
==See also==
* [[Nonlinear eigenproblem]]
* [[Quadratic eigenvalue problem]]
* [[Introduction to eigenstates]]
* [[Eigenplane]]
* [[Jordan normal form]]
* [[List of numerical analysis software]]
* [[Antieigenvalue theory]]
==Notes==
{{reflist|2}}
==References==
* {{Citation
| last=Korn
| first=Granino A.
| first2=Theresa M.
| last2=Korn
| title=Mathematical Handbook for Scientists and Engineers: Definitions, Theorems, and Formulas for Reference and Review
| edition=2nd revised
| publisher=Dover Publications
| location=New York
| year=2000
| isbn=0-486-41147-8
| bibcode=1968mhse.book.....K
}}.
* {{Citation
| last = Lipschutz
| first = Seymour
| title = Schaum's outline of theory and problems of linear algebra
| edition = 2nd
| publisher = McGraw-Hill Companies
| location = New York, NY
| series = Schaum's outline series
| year = 1991
| isbn = 0-07-038007-4 }}.
* {{Citation
| last = Friedberg
| first = Stephen H.
| first2 = Arnold J.
| last2 = Insel
| first3 = Lawrence E.
| last3 = Spence
| title = Linear algebra
| edition = 2nd
| publisher = Prentice Hall
| location = Englewood Cliffs, NJ 07632
| year = 1989
| isbn = 0-13-537102-3 }}.
* {{Citation
| last = Aldrich
| first = John
| title = Earliest Known Uses of Some of the Words of Mathematics
| url = http://jeff560.tripod.com/e.html
| editor = Jeff Miller (Editor)
| year = 2006
| chapter = Eigenvalue, eigenfunction, eigenvector, and related terms
| chapterurl = http://jeff560.tripod.com/e.html
| accessdate = 2006-08-22 }}
* {{Citation
| last=Strang
| first=Gilbert
| title=Introduction to linear algebra
| publisher=Wellesley-Cambridge Press, Wellesley, MA
| year=1993
| isbn=0-9614088-5-5
}}.
* {{Citation
| last=Strang
| first=Gilbert
| title=Linear algebra and its applications
| publisher=Thomson, Brooks/Cole, Belmont, CA
| year=2006
| isbn=0-03-010567-6
}}.
* {{Citation
| last=Bowen
| first=Ray M.
| first2=Chao-Cheng
| last2=Wang
| title=Linear and multilinear algebra
| publisher=Plenum Press, New York, NY
| year=1980
| isbn=0-306-37508-7
}}.
* {{Citation
| last = Cohen-Tannoudji
| first = Claude
| author-link = Claude Cohen-Tannoudji
| title = Quantum mechanics
| publisher = John Wiley & Sons
| year = 1977
| chapter = Chapter II. The mathematical tools of quantum mechanics
| isbn = 0-471-16432-1 }}.
* {{Citation
| last = Fraleigh
| first = John B.
| first2 = Raymond A.
| last2 = Beauregard
| title = Linear algebra
| edition = 3rd
| publisher = Addison-Wesley Publishing Company
| year = 1995
| isbn = 0-201-83999-7 (international edition) }}.
* {{Citation
| last=Golub
| first=Gene H.
| authorlink1 = Gene_H._Golub
| first2=Charles F.
| last2=Van Loan
| authorlink2 = Charles_F._Van_Loan
| title=Matrix computations (3rd Edition)
| publisher=Johns Hopkins University Press, Baltimore, MD
| year=1996
| isbn=978-0-8018-5414-9
}}.
* {{Citation
| last = Hawkins
| first = T.
| title = Cauchy and the spectral theory of matrices
| journal = Historia Mathematica
| volume = 2
| pages = 1–29
| year = 1975
| doi = 10.1016/0315-0860(75)90032-4 }}.
* {{Citation
| last=Horn
| first=Roger A.
| first2=Charles F.
| last2=Johnson
| title=Matrix analysis
| publisher=Cambridge University Press
| year=1985
| isbn=0-521-30586-1 (hardback), ISBN 0-521-38632-2 (paperback)
}}.
* {{Citation
| last=Kline
| first=Morris
| title=Mathematical thought from ancient to modern times
| publisher=Oxford University Press
| year=1972
| isbn=0-19-501496-0
}}.
* {{Citation
| last=Meyer
| first=Carl D.
| title=Matrix analysis and applied linear algebra
| publisher=Society for Industrial and Applied Mathematics (SIAM), Philadelphia
| year=2000
| isbn=978-0-89871-454-8
}}.
* {{Citation
| last=Brown
| first=Maureen
| title=Illuminating Patterns of Perception: An Overview of Q Methodology
| date=October 2004
| isbn=
}}.
* {{Citation
| last = Golub
| first = Gene H.
| first2 = Henk A.
| last2 = van der Vorst
| title = Eigenvalue computation in the 20th century
| journal = Journal of Computational and Applied Mathematics
| volume = 123
| pages = 35–65
| year = 2000
| doi = 10.1016/S0377-0427(00)00413-1 }}.
* {{Citation
| last=Akivis
| first=Max A.
| coauthors=Vladislav V. Goldberg
| title=Tensor calculus
| series=Russian
| publisher=Science Publishers, Moscow
| year=1969
}}.
* {{Citation
| last=Gelfand
| first=I. M.
| title=Lecture notes in linear algebra
| series=Russian
| publisher=Science Publishers, Moscow
| year=1971
| isbn=
}}.
* {{Citation
| last=Alexandrov
| first=Pavel S.
| title=Lecture notes in analytical geometry
| series=Russian
| publisher=Science Publishers, Moscow
| year=1968
| isbn=
}}.
* {{Citation
| last=Carter
| first=Tamara A.
| first2=Richard A.
| last2=Tapia
| first3=Anne
| last3=Papaconstantinou
| title=Linear Algebra: An Introduction to Linear Algebra for Pre-Calculus Students
| publisher=Rice University, Online Edition
| url=http://ceee.rice.edu/Books/LA/index.html
| accessdate=2008-02-19
}}.
* {{Citation
| last=Roman
| first=Steven
| title=Advanced linear algebra
| edition=3rd
| publisher=Springer Science + Business Media, LLC
| place=New York, NY
| year=2008
| isbn=978-0-387-72828-5
}}.
* {{Citation
| last=Shilov
| first=Georgi E.
| title=Linear algebra
| edition=translated and edited by Richard A. Silverman
| publisher=Dover Publications
| place=New York
| year=1977
| isbn=0-486-63518-X
}}.
* {{Citation
| last=Hefferon
| first=Jim
| title=Linear Algebra
| publisher=Online book, St Michael's College, Colchester, Vermont, USA
| url=http://joshua.smcvt.edu/linearalgebra/
| year=2001
| isbn=
}}.
* {{Citation
| last=Kuttler
| first=Kenneth
| title=An introduction to linear algebra
| publisher=Online e-book in PDF format, Brigham Young University
| url=http://www.math.byu.edu/~klkuttle/Linearalgebra.pdf
|format=PDF| year=2007
| isbn=
}}.
* {{Citation
| last=Demmel
| first=James W. | authorlink = James Demmel
| title=Applied numerical linear algebra
| publisher=SIAM
| year=1997
| isbn=0-89871-389-7
}}.
* {{Citation
| last=Beezer
| first=Robert A.
| title=A first course in linear algebra
| url=http://linear.ups.edu/
| publisher=Free online book under GNU licence, University of Puget Sound
| year=2006
| isbn=
}}.
* {{Citation
| last = Lancaster
| first = P.
| title = Matrix theory
| series = Russian
| publisher = Science Publishers
| location = Moscow, Russia
| year = 1973 }}.
* {{Citation
| last = Halmos
| first = Paul R.
| author-link = Paul Halmos
| title = Finite-dimensional vector spaces
| edition = 8th
| publisher = Springer-Verlag
| location = New York, NY
| year = 1987
| isbn = 0-387-90093-4 }}.
* Pigolkina, T. S. and Shulman, V. S., ''Eigenvalue'' (in Russian), In:Vinogradov, I. M. (Ed.), ''Mathematical Encyclopedia'', Vol. 5, Soviet Encyclopedia, Moscow, 1977.
* {{Citation
| last=Greub
| first=Werner H.
| title=Linear Algebra (4th Edition)
| publisher=Springer-Verlag, New York, NY
| year=1975
| isbn=0-387-90110-8
}}.
* {{Citation
| last=Larson
| first=Ron
| first2=Bruce H.
| last2=Edwards
| title=Elementary linear algebra
| edition=5th
| publisher=Houghton Mifflin Company
| year=2003
| isbn=0-618-33567-6
}}.
* [[Charles W. Curtis|Curtis, Charles W.]], ''Linear Algebra: An Introductory Approach'', 347 p., Springer; 4th ed. 1984. Corr. 7th printing edition (August 19, 1999), ISBN 0-387-90992-3.
* {{Citation
| last=Shores
| first=Thomas S.
| title=Applied linear algebra and matrix analysis
| publisher=Springer Science+Business Media, LLC
| year=2007
| isbn=0-387-33194-8
}}.
* {{Citation
| last=Sharipov
| first=Ruslan A.
| title=Course of Linear Algebra and Multidimensional Geometry: the textbook
| year=1996
| isbn=5-7477-0099-5
| arxiv=math/0405323
}}.
* {{Citation
| last=Gohberg
| first=Israel
| first2=Peter
| last2=Lancaster
| first3=Leiba
| last3=Rodman
| title=Indefinite linear algebra and applications
| publisher=Birkhäuser Verlag
| place=Basel-Boston-Berlin
| year=2005
| isbn=3-7643-7349-0
}}.
==External links==
{{Wikibooks|Linear Algebra|Eigenvalues and Eigenvectors}}
{{Wikibooks|The Book of Mathematical Proofs|Algebra/Linear Transformations}}
* [http://www.physlink.com/education/AskExperts/ae520.cfm What are Eigen Values?] — non-technical introduction from PhysLink.com's "Ask the Experts"
*[http://people.revoledu.com/kardi/tutorial/LinearAlgebra/EigenValueEigenVector.html Eigen Values and Eigen Vectors Numerical Examples] – Tutorial and Interactive Program from Revoledu.
*[http://khanexercises.appspot.com/video?v=PhfbEr2btGQ Introduction to Eigen Vectors and Eigen Values] – lecture from Khan Academy
'''Theory'''
* {{springer|title=Eigen value|id=p/e035150}}
* {{springer|title=Eigen vector|id=p/e035180}}
* {{planetmath reference|id=4397|title=Eigenvalue (of a matrix)}}
* [http://mathworld.wolfram.com/Eigenvector.html Eigenvector] — Wolfram [[MathWorld]]
* [http://ocw.mit.edu/ans7870/18/18.06/javademo/Eigen/ Eigen Vector Examination working applet]
* [http://web.mit.edu/18.06/www/Demos/eigen-applet-all/eigen_sound_all.html Same Eigen Vector Examination as above in a Flash demo with sound]
* [http://www.sosmath.com/matrix/eigen1/eigen1.html Computation of Eigenvalues]
* [http://www.cs.utk.edu/~dongarra/etemplates/index.html Numerical solution of eigenvalue problems] Edited by Zhaojun Bai, [[James Demmel]], Jack Dongarra, Axel Ruhe, and [[Henk van der Vorst]]
* Eigenvalues and Eigenvectors on the Ask Dr. Math forums: [http://mathforum.org/library/drmath/view/55483.html], [http://mathforum.org/library/drmath/view/51989.html]
'''Online calculators'''
* [http://www.arndt-bruenner.de/mathe/scripts/engl_eigenwert.htm arndt-bruenner.de]
* [http://www.bluebit.gr/matrix-calculator/ bluebit.gr]
* [http://wims.unice.fr/wims/wims.cgi?session=6S051ABAFA.2&+lang=en&+module=tool%2Flinear%2Fmatrix.en wims.unice.fr]
'''Demonstration applets'''
* [http://scienceapplets.blogspot.com/2012/03/eigenvalues-and-eigenvectors.html Java applet about eigenvectors in the real plane]
{{Linear algebra}}
{{Mathematics-footer}}
{{DEFAULTSORT:Eigenvalues And Eigenvectors}}
[[Category:Mathematical physics]]
[[Category:Abstract algebra]]
[[Category:Linear algebra]]
[[Category:Matrix theory]]
[[Category:Singular value decomposition]]
[[Category:Articles including recorded pronunciations]]
[[Category:German loanwords]]
{{Link FA|es}}
{{Link FA|zh}}
[[ar:القيم الذاتية والمتجهات الذاتية]]
[[be-x-old:Уласныя лікі, вэктары й прасторы]]
[[ca:Valor propi, vector propi i espai propi]]
[[cs:Vlastní číslo]]
[[da:Egenværdi, egenvektor og egenrum]]
[[de:Eigenwertproblem]]
[[es:Vector propio y valor propio]]
[[eo:Ajgeno kaj ajgenvektoro]]
[[fa:مقدار ویژه و بردار ویژه]]
[[fr:Valeur propre, vecteur propre et espace propre]]
[[ko:고유값]]
[[it:Autovettore e autovalore]]
[[he:ערך עצמי]]
[[kk:Өзіндік функция]]
[[lv:Īpašvērtības un īpašvektori]]
[[lt:Tikrinių verčių lygtis]]
[[hu:Sajátvektor és sajátérték]]
[[nl:Eigenwaarde (wiskunde)]]
[[ja:固有値]]
[[no:Egenvektor]]
[[nn:Eigenverdi, eigenvektor og eigerom]]
[[pl:Wektory i wartości własne]]
[[pt:Valor próprio]]
[[ro:Vectori și valori proprii]]
[[ru:Собственные векторы, значения и пространства]]
[[simple:Eigenvectors and eigenvalues]]
[[sl:Lastna vrednost]]
[[fi:Ominaisarvo, ominaisvektori ja ominaisavaruus]]
[[sv:Egenvärde, egenvektor och egenrum]]
[[ta:ஐகென் மதிப்பு]]
[[th:เวกเตอร์ลักษณะเฉพาะ]]
[[uk:Власний вектор]]
[[ur:ویژہ قدر]]
[[vi:Vectơ riêng]]
[[zh-yue:特徵向量]]
[[zh:特征向量]]