Evalue-magic-tricks-handout (PDF)
Mathematics IA (The University of Adelaide)
Facts About Eigenvalues
By Dr David Butler

Definitions

Suppose $A$ is an $n \times n$ matrix.
An eigenvalue of $A$ is a number $\lambda$ such that $A\mathbf{v} = \lambda\mathbf{v}$ for some nonzero vector $\mathbf{v}$.
An eigenvector of $A$ is a nonzero vector $\mathbf{v}$ such that $A\mathbf{v} = \lambda\mathbf{v}$ for some number $\lambda$.

Terminology

Let $A$ be an $n \times n$ matrix.
- The determinant $|\lambda I - A|$ (for unknown $\lambda$) is called the characteristic polynomial of $A$. (The zeros of this polynomial are the eigenvalues of $A$.)
- The equation $|\lambda I - A| = 0$ is called the characteristic equation of $A$. (The solutions of this equation are the eigenvalues of $A$.)
- If $\lambda$ is an eigenvalue of $A$, then the subspace $E_\lambda = \{\mathbf{v} \mid A\mathbf{v} = \lambda\mathbf{v}\}$ is called the eigenspace of $A$ associated with $\lambda$. (This subspace contains all the eigenvectors with eigenvalue $\lambda$, together with the zero vector.)
- An eigenvalue $\lambda^*$ of $A$ is said to have multiplicity $m$ if, when the characteristic polynomial is factorised into linear factors, the factor $(\lambda - \lambda^*)$ appears $m$ times.

Theorems

Let $A$ be an $n \times n$ matrix.
- The matrix $A$ has $n$ eigenvalues (counting each according to its multiplicity).
- The sum of the $n$ eigenvalues of $A$ equals the trace of $A$ (that is, the sum of the diagonal elements of $A$).
- The product of the $n$ eigenvalues of $A$ equals the determinant of $A$.
- If $\lambda$ is an eigenvalue of $A$, then the dimension of $E_\lambda$ is at most the multiplicity of $\lambda$.
- A set of eigenvectors of $A$, each corresponding to a different eigenvalue of $A$, is a linearly independent set.
- If $\lambda^n + c_{n-1}\lambda^{n-1} + \cdots + c_1\lambda + c_0$ is the characteristic polynomial of $A$, then $c_{n-1} = -\mathrm{trace}(A)$ and $c_0 = (-1)^n|A|$.
- If $\lambda^n + c_{n-1}\lambda^{n-1} + \cdots + c_1\lambda + c_0$ is the characteristic polynomial of $A$, then $A^n + c_{n-1}A^{n-1} + \cdots + c_1A + c_0 I = O$. (The Cayley–Hamilton Theorem.)
Examples of Problems using Eigenvalues

Problem: If $\lambda$ is an eigenvalue of the matrix $A$, prove that $\lambda^2$ is an eigenvalue of $A^2$.

Solution: Since $\lambda$ is an eigenvalue of $A$, $A\mathbf{v} = \lambda\mathbf{v}$ for some $\mathbf{v} \neq \mathbf{0}$. Multiplying both sides by $A$ gives
$$A(A\mathbf{v}) = A(\lambda\mathbf{v})$$
$$A^2\mathbf{v} = \lambda A\mathbf{v} = \lambda\lambda\mathbf{v} = \lambda^2\mathbf{v}.$$
Therefore $\lambda^2$ is an eigenvalue of $A^2$. ∎

Problem: Prove that the $n \times n$ matrix $A$ and its transpose $A^T$ have the same eigenvalues.

Solution: Consider the characteristic polynomial of $A^T$: $|\lambda I - A^T| = |(\lambda I - A)^T| = |\lambda I - A|$ (since a matrix and its transpose have the same determinant). This is the characteristic polynomial of $A$, so $A^T$ and $A$ have the same characteristic polynomial, and hence they have the same eigenvalues. ∎

Problem: The matrix $A$ has $(1, 2, 1)^T$ and $(1, 1, 0)^T$ as eigenvectors, both with eigenvalue 7, and its trace is 2. Find the determinant of $A$.

Solution: The matrix $A$ is a $3 \times 3$ matrix, so it has 3 eigenvalues in total. The eigenspace $E_7$ contains the vectors $(1, 2, 1)^T$ and $(1, 1, 0)^T$, which are linearly independent, so $E_7$ has dimension at least 2, which implies that the eigenvalue 7 has multiplicity at least 2. Let the other eigenvalue be $\lambda$; then from the trace, $\lambda + 7 + 7 = 2$, so $\lambda = -12$. So the three eigenvalues are 7, 7 and $-12$. Hence, the determinant of $A$ is $7 \times 7 \times (-12) = -588$. ∎
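The worked examples above can be checked on a concrete matrix. The sketch below (a minimal pure-Python check; the matrix is an arbitrary upper-triangular example chosen so that its eigenvalues are simply its diagonal entries) verifies that squaring $A$ squares its eigenvalues, and that $A$ and $A^T$ share a characteristic polynomial.

```python
# Minimal check of the worked examples, using an upper-triangular matrix
# so that the eigenvalues can be read directly off the diagonal.

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def transpose(X):
    return [list(row) for row in zip(*X)]

def char_poly_2x2(X):
    # For a 2x2 matrix, |lambda I - X| = lambda^2 - tr(X) lambda + det(X).
    tr = X[0][0] + X[1][1]
    det = X[0][0] * X[1][1] - X[0][1] * X[1][0]
    return (1, -tr, det)

A = [[2, 1],
     [0, 3]]               # eigenvalues 2 and 3 (triangular matrix)

A2 = matmul(A, A)
print([A2[0][0], A2[1][1]])    # [4, 9]: the eigenvalues of A^2 are 2^2, 3^2
print(char_poly_2x2(A) == char_poly_2x2(transpose(A)))  # True
```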
The sum and product of eigenvalues

Theorem: If $A$ is an $n \times n$ matrix, then the sum of the $n$ eigenvalues of $A$ is the trace of $A$ and the product of the $n$ eigenvalues is the determinant of $A$.

Proof: Write
$$A = \begin{pmatrix} a_{11} & \dots & a_{1n} \\ \vdots & \ddots & \vdots \\ a_{n1} & \dots & a_{nn} \end{pmatrix}.$$
Also let the $n$ eigenvalues of $A$ be $\lambda_1, \dots, \lambda_n$. Finally, denote the characteristic polynomial of $A$ by
$$p(\lambda) = |\lambda I - A| = \lambda^n + c_{n-1}\lambda^{n-1} + \dots + c_1\lambda + c_0.$$
Note that since the eigenvalues of $A$ are the zeros of $p(\lambda)$, the polynomial can be factorised as $p(\lambda) = (\lambda - \lambda_1)\dots(\lambda - \lambda_n)$.

Consider the constant term of $p(\lambda)$, $c_0$. The constant term of $p(\lambda)$ is given by $p(0)$, which can be calculated in two ways. Firstly, $p(0) = (0 - \lambda_1)\dots(0 - \lambda_n) = (-1)^n\lambda_1\dots\lambda_n$. Secondly, $p(0) = |0I - A| = |-A| = (-1)^n|A|$. Therefore $c_0 = (-1)^n\lambda_1\dots\lambda_n = (-1)^n|A|$, and so $\lambda_1\dots\lambda_n = |A|$. That is, the product of the $n$ eigenvalues of $A$ is the determinant of $A$.

Consider the coefficient of $\lambda^{n-1}$, $c_{n-1}$. This is also calculated in two ways. Firstly, it can be calculated by expanding $p(\lambda) = (\lambda - \lambda_1)\dots(\lambda - \lambda_n)$. In order to get a $\lambda^{n-1}$ term, the $\lambda$ must be chosen from $n-1$ of the factors and the constant from the other. Hence, the $\lambda^{n-1}$ term is $-\lambda_1\lambda^{n-1} - \dots - \lambda_n\lambda^{n-1} = -(\lambda_1 + \dots + \lambda_n)\lambda^{n-1}$. Thus $c_{n-1} = -(\lambda_1 + \dots + \lambda_n)$.

Secondly, this coefficient can be calculated by expanding $|\lambda I - A|$:
$$|\lambda I - A| = \begin{vmatrix} \lambda - a_{11} & -a_{12} & \dots & -a_{1n} \\ -a_{21} & \lambda - a_{22} & \dots & -a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ -a_{n1} & -a_{n2} & \dots & \lambda - a_{nn} \end{vmatrix}$$
One way of calculating determinants is to multiply together the elements in positions $1j_1, 2j_2, \dots, nj_n$, for each possible permutation $j_1 \dots j_n$ of $1 \dots n$. If the permutation is odd, then the product is also multiplied by $-1$. All of these $n!$ products are then added together to produce the determinant. One of these products is $(\lambda - a_{11})\dots(\lambda - a_{nn})$. Every other possible product contains at most $n-2$ elements on the diagonal of the matrix, and so contains at most $n-2$ $\lambda$'s. Hence, when all of these other products are expanded, they produce a polynomial in $\lambda$ of degree at most $n-2$. Denote this polynomial by $q(\lambda)$. Hence, $p(\lambda) = (\lambda - a_{11})\dots(\lambda - a_{nn}) + q(\lambda)$. Since $q(\lambda)$ has degree at most $n-2$, it has no $\lambda^{n-1}$ term, and so the $\lambda^{n-1}$ term of $p(\lambda)$ must be the $\lambda^{n-1}$ term from $(\lambda - a_{11})\dots(\lambda - a_{nn})$. However, the argument above for $(\lambda - \lambda_1)\dots(\lambda - \lambda_n)$ shows that this term must be $-(a_{11} + \dots + a_{nn})\lambda^{n-1}$.

Therefore $c_{n-1} = -(\lambda_1 + \dots + \lambda_n) = -(a_{11} + \dots + a_{nn})$, and so $\lambda_1 + \dots + \lambda_n = a_{11} + \dots + a_{nn}$. That is, the sum of the $n$ eigenvalues of $A$ is the trace of $A$. ∎
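The theorem can be checked numerically. The sketch below (pure Python, with an arbitrary upper-triangular matrix so its eigenvalues are its diagonal entries) confirms that the eigenvalues sum to the trace and multiply to the determinant.

```python
# Check: for a triangular matrix the eigenvalues are the diagonal entries,
# so their sum should be the trace and their product the determinant.

def det3(M):
    # Cofactor expansion along the first row of a 3x3 matrix.
    a, b, c = M[0]
    d, e, f = M[1]
    g, h, i = M[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

A = [[2, 7, 1],
     [0, -3, 4],
     [0, 0, 5]]              # eigenvalues: 2, -3, 5

eigs = [A[i][i] for i in range(3)]
trace = sum(A[i][i] for i in range(3))
print(sum(eigs) == trace)                       # True: sum = trace
print(eigs[0] * eigs[1] * eigs[2] == det3(A))   # True: product = determinant
```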
The Cayley–Hamilton Theorem

Theorem: Let $A$ be an $n \times n$ matrix. If $\lambda^n + c_{n-1}\lambda^{n-1} + \dots + c_1\lambda + c_0$ is the characteristic polynomial of $A$, then $A^n + c_{n-1}A^{n-1} + \dots + c_1A + c_0I = O$.

Proof: Consider the matrix $\lambda I - A$. If this matrix is multiplied by its adjoint matrix, the result is its determinant multiplied by the identity matrix. That is,
$$(\lambda I - A)\,\mathrm{adj}(\lambda I - A) = |\lambda I - A|\,I. \qquad (1)$$

Consider the matrix $\mathrm{adj}(\lambda I - A)$. Each entry of this matrix is either the positive or the negative of the determinant of a smaller matrix produced by deleting one row and one column of $\lambda I - A$. The determinant of such a matrix is a polynomial in $\lambda$ of degree at most $n-1$ (since removing one row and one column is guaranteed to remove at least one $\lambda$). Let the polynomial in position $ij$ of $\mathrm{adj}(\lambda I - A)$ be $b_{ij0} + b_{ij1}\lambda + \dots + b_{ij(n-1)}\lambda^{n-1}$. Collecting, for each power of $\lambda$, the coefficients of that power into a single matrix, we may write
$$\mathrm{adj}(\lambda I - A) = B_0 + \lambda B_1 + \dots + \lambda^{n-1}B_{n-1},$$
where $B_k$ is the matrix whose $ij$ entry is $b_{ijk}$.

Then
$$(\lambda I - A)\,\mathrm{adj}(\lambda I - A) = (\lambda I - A)(B_0 + \lambda B_1 + \dots + \lambda^{n-1}B_{n-1})$$
$$= \lambda B_0 + \lambda^2B_1 + \dots + \lambda^nB_{n-1} - AB_0 - \lambda AB_1 - \dots - \lambda^{n-1}AB_{n-1}$$
$$= -AB_0 + \lambda(B_0 - AB_1) + \dots + \lambda^{n-1}(B_{n-2} - AB_{n-1}) + \lambda^nB_{n-1}.$$

Next consider $|\lambda I - A|\,I$. This is the characteristic polynomial of $A$, multiplied by $I$. That is,
$$|\lambda I - A|\,I = (\lambda^n + c_{n-1}\lambda^{n-1} + \dots + c_1\lambda + c_0)I = c_0I + \lambda(c_1I) + \dots + \lambda^{n-1}(c_{n-1}I) + \lambda^nI.$$

Substituting these two expressions into equation (1) gives
$$-AB_0 + \lambda(B_0 - AB_1) + \dots + \lambda^{n-1}(B_{n-2} - AB_{n-1}) + \lambda^nB_{n-1} = c_0I + \lambda(c_1I) + \dots + \lambda^{n-1}(c_{n-1}I) + \lambda^nI.$$

If the two sides of this equation were evaluated separately, each would be an $n \times n$ matrix with each entry a polynomial in $\lambda$. Since these two resulting matrices are equal, the entries in each position are also equal. That is, for each choice of $i$ and $j$, the polynomials in position $ij$ of the two matrices are equal, and since equal polynomials have equal coefficients, the coefficient of $\lambda^k$ in position $ij$ of one matrix equals the coefficient of $\lambda^k$ in position $ij$ of the other, for every choice of $i$, $j$ and $k$. Hence, when each side is written as a sum of coefficient matrices multiplied by powers of $\lambda$ (as was done above for $\mathrm{adj}(\lambda I - A)$), the matrix multiplying $\lambda^k$ in one expression must equal the matrix multiplying $\lambda^k$ in the other. In other words, we can equate the matrix coefficients of the powers of $\lambda$. This results in the following equations:
$$c_0I = -AB_0, \quad c_1I = B_0 - AB_1, \quad \dots, \quad c_{n-1}I = B_{n-2} - AB_{n-1}, \quad I = B_{n-1}.$$

Now right-multiply each equation by successive powers of $A$ (that is, the first is multiplied by $I$, the second by $A$, the third by $A^2$, and so on until the last is multiplied by $A^n$). This produces the following equations:
$$c_0I = -AB_0, \quad c_1A = AB_0 - A^2B_1, \quad \dots, \quad c_{n-1}A^{n-1} = A^{n-1}B_{n-2} - A^nB_{n-1}, \quad A^n = A^nB_{n-1}.$$

Adding all of these equations together, the right-hand sides telescope:
$$c_0I + c_1A + \dots + c_{n-1}A^{n-1} + A^n = -AB_0 + AB_0 - A^2B_1 + \dots + A^{n-1}B_{n-2} - A^nB_{n-1} + A^nB_{n-1} = O.$$ ∎
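The Cayley–Hamilton Theorem is easy to check for a $2 \times 2$ matrix, where the characteristic polynomial is $\lambda^2 - \mathrm{tr}(A)\lambda + |A|$. The sketch below verifies $A^2 - \mathrm{tr}(A)A + |A|I = O$ for an arbitrary example matrix.

```python
# Cayley-Hamilton check for a 2x2 matrix: A^2 - tr(A) A + det(A) I = O.

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[1, 2],
     [3, 4]]
tr = A[0][0] + A[1][1]                        # c_1 = -tr(A)
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]   # c_0 = det(A)
A2 = matmul(A, A)
I = [[1, 0], [0, 1]]

# A^2 + c_1 A + c_0 I, computed entry by entry
result = [[A2[i][j] - tr * A[i][j] + det * I[i][j] for j in range(2)]
          for i in range(2)]
print(result)   # [[0, 0], [0, 0]]
```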
Polynomials acting on matrices

Theorem: Let $A$ be an $n \times n$ matrix with eigenvalues $\lambda_1, \dots, \lambda_n$ (including multiplicity). Let $g(x) = a_0 + a_1x + \dots + a_kx^k$ be a polynomial, and let $g(A) = a_0I + a_1A + \dots + a_kA^k$. Then the eigenvalues of $g(A)$ are $g(\lambda_1), \dots, g(\lambda_n)$ (including multiplicity).

Proof: We will begin by showing that the determinant of $g(A)$ is $g(\lambda_1)\dots g(\lambda_n)$.

By the fundamental theorem of algebra, the polynomial $g(x)$ can be factorised into $k$ linear factors over the complex numbers. Hence we can write $g(x) = a_k(x - c_1)\dots(x - c_k)$ for some complex numbers $c_1, \dots, c_k$. Now a matrix commutes with all its powers, and with the identity, so it is also possible to write $g(A)$ as $g(A) = a_k(A - c_1I)\dots(A - c_kI)$.

Also, denote the characteristic polynomial of $A$ by $p(\lambda) = |\lambda I - A|$. Since the eigenvalues of $A$ are $\lambda_1, \dots, \lambda_n$, the characteristic polynomial can be factorised as $p(\lambda) = (\lambda - \lambda_1)\dots(\lambda - \lambda_n)$.

Consider the determinant of $g(A)$:
$$|g(A)| = |a_k(A - c_1I)\dots(A - c_kI)| = (a_k)^n|A - c_1I|\dots|A - c_kI|$$
$$= (a_k)^n|-(c_1I - A)|\dots|-(c_kI - A)| = (a_k)^n(-1)^{nk}|c_1I - A|\dots|c_kI - A|.$$
Now $|c_iI - A|$ is $|\lambda I - A|$ with $\lambda$ replaced by $c_i$; that is, it is the characteristic polynomial of $A$ evaluated at $\lambda = c_i$. Thus $|c_iI - A| = p(c_i) = (c_i - \lambda_1)\dots(c_i - \lambda_n)$. So,
$$|g(A)| = (a_k)^n(-1)^{nk}p(c_1)\dots p(c_k)$$
$$= (a_k)^n(-1)^{nk} \times (c_1 - \lambda_1)\dots(c_1 - \lambda_n) \times \dots \times (c_k - \lambda_1)\dots(c_k - \lambda_n)$$
$$= (a_k)^n \times (\lambda_1 - c_1)\dots(\lambda_n - c_1) \times \dots \times (\lambda_1 - c_k)\dots(\lambda_n - c_k)$$
$$= (a_k)^n \times (\lambda_1 - c_1)\dots(\lambda_1 - c_k) \times \dots \times (\lambda_n - c_1)\dots(\lambda_n - c_k)$$
$$= a_k(\lambda_1 - c_1)\dots(\lambda_1 - c_k) \times \dots \times a_k(\lambda_n - c_1)\dots(\lambda_n - c_k)$$
$$= g(\lambda_1) \times \dots \times g(\lambda_n).$$
The above argument shows that if $g(x)$ is any polynomial, then $|g(A)| = g(\lambda_1)\dots g(\lambda_n)$.
Now we will show that the eigenvalues of $g(A)$ are $g(\lambda_1), \dots, g(\lambda_n)$. Let $a$ be any number and consider the polynomial $h(x) = a - g(x)$. Then $h(A) = aI - g(A)$, and the argument above shows that $|h(A)| = h(\lambda_1)\dots h(\lambda_n)$. Substituting the formulas for $h(x)$ and $h(A)$ into this equation gives $|aI - g(A)| = (a - g(\lambda_1))\dots(a - g(\lambda_n))$. Since this is true for all possible $a$, it can be concluded that, as polynomials, $|\lambda I - g(A)| = (\lambda - g(\lambda_1))\dots(\lambda - g(\lambda_n))$. But $|\lambda I - g(A)|$ is the characteristic polynomial of $g(A)$, which has been fully factorised here, so this implies that the eigenvalues of $g(A)$ are $g(\lambda_1), \dots, g(\lambda_n)$. ∎

Some corollaries: Let $A$ be an $n \times n$ matrix with eigenvalues $\lambda_1, \dots, \lambda_n$. Then:
- $2A$ has eigenvalues $2\lambda_1, \dots, 2\lambda_n$.
- $A^2$ has eigenvalues $\lambda_1^2, \dots, \lambda_n^2$.
- $A + 2I$ has eigenvalues $\lambda_1 + 2, \dots, \lambda_n + 2$.
- If $p(\lambda)$ is the characteristic polynomial of $A$, then $p(A)$ has eigenvalues $0, \dots, 0$.
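The theorem can be checked on a small example. Below, $g(x) = x^2 + 2x + 5$ is an arbitrary polynomial and $A$ an arbitrary triangular matrix with eigenvalues 2 and 3; since $g(A)$ is then also triangular, its eigenvalues can be read off its diagonal and compared with $g(2)$ and $g(3)$.

```python
# Check: for g(x) = x^2 + 2x + 5 and a triangular A with eigenvalues 2, 3,
# g(A) = A^2 + 2A + 5I should have eigenvalues g(2) = 13 and g(3) = 20.

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[2, 1],
     [0, 3]]                      # eigenvalues 2 and 3
A2 = matmul(A, A)

# g(A) = A^2 + 2A + 5I, computed entry by entry
gA = [[A2[i][j] + 2 * A[i][j] + (5 if i == j else 0) for j in range(2)]
      for i in range(2)]

g = lambda x: x * x + 2 * x + 5
print([gA[0][0], gA[1][1]])       # [13, 20]: g(A) is triangular too,
print([g(2), g(3)])               # [13, 20]   with eigenvalues g(2), g(3)
```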
Similar matrices

Definition: Two matrices $A$ and $B$ are called similar if there exists an invertible matrix $X$ such that $A = X^{-1}BX$.

Theorem: Suppose $A$ and $B$ are similar matrices. Then $A$ and $B$ have the same characteristic polynomial and hence the same eigenvalues.

Proof: Consider the characteristic polynomial of $A$:
$$|\lambda I - A| = |\lambda I - X^{-1}BX| = |\lambda X^{-1}IX - X^{-1}BX| = |X^{-1}(\lambda I - B)X|$$
$$= |X^{-1}|\,|\lambda I - B|\,|X| = \frac{1}{|X|}|\lambda I - B|\,|X| = |\lambda I - B|.$$
This is the characteristic polynomial of $B$, so $A$ and $B$ have the same characteristic polynomial. Hence $A$ and $B$ have the same eigenvalues. ∎
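A quick numerical check: for $2 \times 2$ matrices the characteristic polynomial is determined by the trace and determinant, so it suffices to verify that $X^{-1}BX$ shares both with $B$. The matrices below are arbitrary examples, with $X^{-1}$ computed by hand.

```python
# Check that A = X^{-1} B X has the same trace and determinant as B;
# for 2x2 matrices these determine the characteristic polynomial
# lambda^2 - (trace) lambda + det.

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

B = [[1, 2],
     [3, 4]]
X = [[1, 1],
     [0, 1]]
Xinv = [[1, -1],
        [0, 1]]                    # inverse of X, found by hand

A = matmul(Xinv, matmul(B, X))     # A is similar to B

tr_det = lambda M: (M[0][0] + M[1][1], M[0][0]*M[1][1] - M[0][1]*M[1][0])
print(tr_det(A))   # (5, -2)
print(tr_det(B))   # (5, -2): same characteristic polynomial
```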
Multiplicity and the dimension of an eigenspace

Theorem: If $\lambda^*$ is an eigenvalue of $A$, then the multiplicity of $\lambda^*$ is at least the dimension of the eigenspace $E_{\lambda^*}$.

Proof: Suppose the dimension of $E_{\lambda^*}$ is $m$, and let $\mathbf{v}_1, \dots, \mathbf{v}_m$ form a basis for $E_{\lambda^*}$. It is possible to find $n - m$ other vectors $\mathbf{u}_{m+1}, \dots, \mathbf{u}_n$ so that $\mathbf{v}_1, \dots, \mathbf{v}_m, \mathbf{u}_{m+1}, \dots, \mathbf{u}_n$ form a basis for $\mathbb{R}^n$. Let $X$ be the $n \times n$ matrix with these $n$ basis vectors as its columns. This matrix $X$ is invertible, since its columns are linearly independent.

Consider the matrix $B = X^{-1}AX$. This matrix is similar to $A$ and so it has the same characteristic polynomial as $A$. In order to describe the entries of $B$, we will first investigate $AX$:
$$AX = A[\mathbf{v}_1|\dots|\mathbf{v}_m|\mathbf{u}_{m+1}|\dots|\mathbf{u}_n] = [A\mathbf{v}_1|\dots|A\mathbf{v}_m|A\mathbf{u}_{m+1}|\dots|A\mathbf{u}_n] = [\lambda^*\mathbf{v}_1|\dots|\lambda^*\mathbf{v}_m|A\mathbf{u}_{m+1}|\dots|A\mathbf{u}_n]$$
(since $\mathbf{v}_1, \dots, \mathbf{v}_m$ are eigenvectors of $A$ with eigenvalue $\lambda^*$). Hence
$$B = X^{-1}AX = [X^{-1}(\lambda^*\mathbf{v}_1)|\dots|X^{-1}(\lambda^*\mathbf{v}_m)|X^{-1}A\mathbf{u}_{m+1}|\dots|X^{-1}A\mathbf{u}_n] = [\lambda^*X^{-1}\mathbf{v}_1|\dots|\lambda^*X^{-1}\mathbf{v}_m|X^{-1}A\mathbf{u}_{m+1}|\dots|X^{-1}A\mathbf{u}_n].$$
Now consider $X^{-1}\mathbf{v}_i$. This is $X^{-1}$ multiplied by the $i$'th column of $X$, and so it is the $i$'th column of $X^{-1}X$. However $X^{-1}X = I$, so its $i$'th column is the $i$'th standard basis vector $\mathbf{e}_i$. Thus:
$$B = [\lambda^*\mathbf{e}_1|\dots|\lambda^*\mathbf{e}_m|X^{-1}A\mathbf{u}_{m+1}|\dots|X^{-1}A\mathbf{u}_n] = \begin{pmatrix} \lambda^* & \dots & 0 & b_{1(m+1)} & \dots & b_{1n} \\ \vdots & \ddots & \vdots & \vdots & & \vdots \\ 0 & \dots & \lambda^* & b_{m(m+1)} & \dots & b_{mn} \\ 0 & \dots & 0 & b_{(m+1)(m+1)} & \dots & b_{(m+1)n} \\ \vdots & & \vdots & \vdots & \ddots & \vdots \\ 0 & \dots & 0 & b_{n(m+1)} & \dots & b_{nn} \end{pmatrix}.$$
So,
$$\lambda I - B = \begin{pmatrix} \lambda - \lambda^* & \dots & 0 & -b_{1(m+1)} & \dots & -b_{1n} \\ \vdots & \ddots & \vdots & \vdots & & \vdots \\ 0 & \dots & \lambda - \lambda^* & -b_{m(m+1)} & \dots & -b_{mn} \\ 0 & \dots & 0 & \lambda - b_{(m+1)(m+1)} & \dots & -b_{(m+1)n} \\ \vdots & & \vdots & \vdots & \ddots & \vdots \\ 0 & \dots & 0 & -b_{n(m+1)} & \dots & \lambda - b_{nn} \end{pmatrix}.$$
If the determinant of this matrix $\lambda I - B$ is expanded progressively along the first $m$ columns, it results in the following:
$$|\lambda I - B| = (\lambda - \lambda^*)^m \begin{vmatrix} \lambda - b_{(m+1)(m+1)} & \dots & -b_{(m+1)n} \\ \vdots & \ddots & \vdots \\ -b_{n(m+1)} & \dots & \lambda - b_{nn} \end{vmatrix}.$$
Hence, the factor $(\lambda - \lambda^*)$ appears at least $m$ times in the characteristic polynomial of $B$ (it may appear more times, because of the part of the determinant that is as yet uncalculated). Since $A$ and $B$ have the same characteristic polynomial, the factor $(\lambda - \lambda^*)$ appears at least $m$ times in the characteristic polynomial of $A$. That is, the multiplicity of the eigenvalue $\lambda^*$ is at least $m$. ∎
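The inequality can be strict. The standard example below (not from the handout) has eigenvalue 5 with multiplicity 2 but a one-dimensional eigenspace, which the sketch checks by inspecting $A - 5I$.

```python
# Illustration: for A below, the eigenvalue 5 has multiplicity 2
# (characteristic polynomial (lambda - 5)^2), but the eigenspace E_5 is
# only one-dimensional, since A - 5I has rank 1.

A = [[5, 1],
     [0, 5]]
M = [[A[i][j] - (5 if i == j else 0) for j in range(2)] for i in range(2)]
print(M)    # [[0, 1], [0, 0]]

# (A - 5I)v = 0 forces the second coordinate of v to be 0, so
# E_5 = {(t, 0)} has dimension 1, while the multiplicity of 5 is 2.
# M is already in row-echelon form, so its rank is its number of
# nonzero rows.
nonzero_rows = sum(1 for row in M if any(row))
print(2 - nonzero_rows)   # dim E_5 = n - rank(A - 5I) = 1
```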
The product of two matrices

Theorem: Let $A$ be an $m \times n$ matrix and let $B$ be an $n \times m$ matrix, with $n \geq m$. Then the $n$ eigenvalues of $BA$ are the $m$ eigenvalues of $AB$, with the extra eigenvalues being 0.

Proof: Consider the $(m+n) \times (m+n)$ matrices
$$M = \begin{pmatrix} O_{n \times n} & O_{n \times m} \\ A & AB \end{pmatrix}, \qquad N = \begin{pmatrix} BA & O_{n \times m} \\ A & O_{m \times m} \end{pmatrix}.$$
Also let
$$X = \begin{pmatrix} I_{n \times n} & B \\ O_{m \times n} & I_{m \times m} \end{pmatrix}.$$
Then
$$XM = \begin{pmatrix} I & B \\ O & I \end{pmatrix}\begin{pmatrix} O & O \\ A & AB \end{pmatrix} = \begin{pmatrix} BA & BAB \\ A & AB \end{pmatrix}$$
and
$$NX = \begin{pmatrix} BA & O \\ A & O \end{pmatrix}\begin{pmatrix} I & B \\ O & I \end{pmatrix} = \begin{pmatrix} BA & BAB \\ A & AB \end{pmatrix}.$$
So $XM = NX$. Now $X$ is an upper triangular matrix with every entry on the diagonal equal to 1, so it is invertible. Hence we can multiply both sides of this equation by $X^{-1}$ to get $M = X^{-1}NX$. Thus $M$ and $N$ are similar and so have the same characteristic polynomial.

Consider the characteristic polynomial of each:
$$|\lambda I - M| = \begin{vmatrix} \lambda I_{n \times n} & O_{n \times m} \\ -A & \lambda I_{m \times m} - AB \end{vmatrix} = |\lambda I_{n \times n}|\,|\lambda I_{m \times m} - AB| = \lambda^n|\lambda I_{m \times m} - AB|,$$
$$|\lambda I - N| = \begin{vmatrix} \lambda I_{n \times n} - BA & O_{n \times m} \\ -A & \lambda I_{m \times m} \end{vmatrix} = |\lambda I_{n \times n} - BA|\,|\lambda I_{m \times m}| = \lambda^m|\lambda I_{n \times n} - BA|.$$
Since $M$ and $N$ have the same characteristic polynomial,
$$\lambda^n|\lambda I_{m \times m} - AB| = \lambda^m|\lambda I_{n \times n} - BA|,$$
and so
$$\lambda^{n-m}|\lambda I_{m \times m} - AB| = |\lambda I_{n \times n} - BA|.$$
So the characteristic polynomial of $BA$ is the same as the characteristic polynomial of $AB$, but multiplied by $\lambda^{n-m}$. Hence $BA$ has all of the eigenvalues of $AB$, but with $n - m$ extra zeros. ∎
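The smallest interesting case is $m = 1$, $n = 2$: then $AB$ is a $1 \times 1$ matrix and $BA$ is $2 \times 2$. The sketch below (arbitrary example vectors) checks that $BA$ picks up the single eigenvalue of $AB$ plus one extra zero, via its trace and determinant.

```python
# Check with m = 1, n = 2: AB is 1x1 with eigenvalue 11, so BA (2x2)
# should have eigenvalues 11 and 0, i.e. trace 11 and determinant 0.

A = [[1, 2]]        # 1 x 2
B = [[3], [4]]      # 2 x 1

AB = sum(A[0][k] * B[k][0] for k in range(2))   # the 1x1 product
BA = [[B[i][0] * A[0][j] for j in range(2)] for i in range(2)]

print(AB)                                        # 11
print(BA[0][0] + BA[1][1])                       # trace(BA) = 11 = 11 + 0
print(BA[0][0]*BA[1][1] - BA[0][1]*BA[1][0])     # det(BA) = 0 = 11 * 0
```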
A proof in finite geometry with a surprising use of eigenvalues

Preliminaries: A finite projective plane is a collection of finitely many points and finitely many lines such that:
- Every two points are contained in precisely one line.
- Every two lines share precisely one point.
- There are at least three points on every line.
- Not all the points are on the same line.

For a finite projective plane, there is a number $q$ called the order such that:
- There are $q + 1$ points on every line.
- There are $q + 1$ lines through every point.
- There are $q^2 + q + 1$ points in total.
- There are $q^2 + q + 1$ lines in total.

A polarity of a finite projective plane is a one-to-one map $\sigma$ which maps points to lines and lines to points, so that if the point $P$ is on the line $\ell$, then the point $\sigma(\ell)$ is on the line $\sigma(P)$, and also for any point or line $X$, $\sigma(\sigma(X)) = X$. An absolute point with respect to a polarity $\sigma$ of a projective plane is a point $P$ such that $P$ is on the line $\sigma(P)$.

Theorem: A polarity of a finite projective plane must have an absolute point.

Proof: Let $\sigma$ be a polarity of a finite projective plane of order $q$. Denote the points of the projective plane by $P_1, P_2, \dots, P_{q^2+q+1}$, and denote the line $\sigma(P_i)$ by $\ell_i$ for each $i = 1, \dots, q^2+q+1$. Note that since $\sigma$ is a polarity, $\sigma(\ell_i) = \sigma(\sigma(P_i)) = P_i$ for any $i$.

Create a $(q^2+q+1) \times (q^2+q+1)$ matrix $A$ as follows: if the point $P_i$ is on the line $\ell_j$, then put a 1 in entry $ij$ of the matrix $A$; otherwise, put a 0 in position $ij$. The matrix $A$ is called an incidence matrix of the projective plane. Since $\sigma$ is a polarity, if $P_i$ is on $\ell_j$, then $\sigma(P_i)$ is on $\sigma(\ell_j) = \sigma(\sigma(P_j)) = P_j$; that is, $P_j$ is on $\ell_i$. Hence if there is a 1 in position $ij$, then there is also a 1 in position $ji$. Thus the matrix $A$ is symmetric: $A^T = A$.

Now an absolute point is a point $P_i$ such that $P_i$ is on $\sigma(P_i)$; that is, an absolute point is a point such that $P_i$ is on $\ell_i$. Hence, if $\sigma$ has an absolute point, then there is a 1 on the diagonal of $A$. Therefore, the number of absolute points of $\sigma$ is the sum of the diagonal elements of $A$; that is, it is the trace of $A$. To find the trace of $A$, we may instead find the sum of the eigenvalues of $A$.

Firstly, note that every row of $A$ contains precisely $q+1$ entries equal to 1, since each point lies on $q+1$ lines. Hence the rows of $A$ all sum to $q+1$. Therefore, when $A$ is multiplied by $(1, 1, \dots, 1)^T$, the result is $(q+1, \dots, q+1)^T$. This means that $(1, \dots, 1)^T$ is an eigenvector of $A$ with eigenvalue $q+1$.

Consider the matrix $A^2$. If $A$ has eigenvalues $\lambda_1, \dots, \lambda_{q^2+q+1}$, then $A^2$ has eigenvalues $\lambda_1^2, \dots, \lambda_{q^2+q+1}^2$. Hence information about the eigenvalues of $A$ can be obtained from the eigenvalues of $A^2$.

Since $A = A^T$, we have $A^2 = AA = A^TA$. The $ij$ entry of $A^TA$ is the dot product of the $i$th row of $A^T$ with the $j$th column of $A$, which is the dot product of the $i$th column of $A$ with the $j$th column of $A$. Now each column of $A$ represents a line of the projective plane and has a 1 in the position of each of the points on that line. Two distinct lines share precisely one point, and so two distinct columns of $A$ are both 1 in precisely one position; hence the dot product of two distinct columns of $A$ is 1. On the other hand, any line contains $q+1$ points, and therefore any column of $A$ has $q+1$ entries equal to 1; hence the dot product of a column of $A$ with itself is $q+1$. Therefore the diagonal entries of $A^2$ are $q+1$ and the other entries of $A^2$ are 1. That is, $A^2 = J + qI$, where $J$ is the matrix with all of its entries equal to 1.

Now the matrix $J$ is equal to the product of two vectors: $J = (1, \dots, 1)^T(1, \dots, 1)$. Multiplying these vectors in the opposite order gives a $1 \times 1$ matrix: $(1, \dots, 1)(1, \dots, 1)^T = [q^2+q+1]$. This $1 \times 1$ matrix has eigenvalue $q^2+q+1$. Now for any two matrices such that both products are defined, the eigenvalues of the larger product are the same as the eigenvalues of the smaller product, with the extra eigenvalues all being zero. Thus the $q^2+q+1$ eigenvalues of $J$ must be $q^2+q+1, 0, \dots, 0$.

Consider the polynomial $g(x) = x + q$. The matrix $g(J) = J + qI = A^2$. Therefore the eigenvalues of $A^2$ are $g(q^2+q+1), g(0), \dots, g(0)$. That is, the eigenvalues of $A^2$ are $q^2+2q+1, q, \dots, q$.

Each eigenvalue of $A$ squares to an eigenvalue of $A^2$. One of the eigenvalues of $A$ is $q+1$ (squaring to $q^2+2q+1$), and so the remaining eigenvalues of $A$ must all be $\sqrt{q}$ or $-\sqrt{q}$. Suppose there are $k$ eigenvalues equal to $\sqrt{q}$ and $q^2+q-k$ equal to $-\sqrt{q}$. Then the trace of $A$ is equal to
$$k\sqrt{q} - (q^2+q-k)\sqrt{q} + q + 1 = -\sqrt{q}\,(q^2+q-2k) + q + 1.$$
If $q = p^2$ for some natural number $p$, then the trace is $-p(q^2+q-2k) + p^2 + 1$, which is one more than a multiple of $p$, and so it cannot be 0. If $q$ is not a square number, then $\sqrt{q}$ is irrational, and so if $q^2+q-2k \neq 0$ then the trace of $A$ must be irrational also. However, the entries of $A$ are all 0 or 1, so its trace is an integer. Hence $q^2+q-2k = 0$ and the trace is $q+1$.

In either case, the trace is not 0, so the polarity must have an absolute point. ∎
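The key identities in the proof can be checked on the smallest projective plane, the Fano plane ($q = 2$). The sketch below builds a symmetric incidence matrix from the planar difference set $\{1, 2, 4\}$ modulo 7, a standard construction that is an assumption here (one concrete choice corresponding to a polarity, not taken from the handout), and verifies that $A^2 = J + qI$ and that the trace is $q + 1 = 3$, not 0.

```python
# Check the key facts for the Fano plane (order q = 2), using the
# symmetric incidence matrix A[i][j] = 1 iff (i + j) mod 7 lies in the
# planar difference set {1, 2, 4}. (This construction is an assumption:
# one standard symmetric incidence matrix, i.e. one concrete polarity.)

q = 2
n = q * q + q + 1        # 7 points and 7 lines
D = {1, 2, 4}
A = [[1 if (i + j) % n in D else 0 for j in range(n)] for i in range(n)]

# A should be symmetric, and A^2 should equal J + qI.
A2 = [[sum(A[i][k] * A[k][j] for k in range(n)) for j in range(n)]
      for i in range(n)]
expected = [[(q + 1 if i == j else 1) for j in range(n)] for i in range(n)]
print(A2 == expected)                       # True: A^2 = J + qI

# The trace counts the absolute points; the proof says it is q + 1, not 0.
print(sum(A[i][i] for i in range(n)))       # 3 = q + 1 absolute points
```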