Geometrical Methods in Physics

Chapter 0: Introduction

0.1 We will study geometry as it is used in physics.
0.2 Mathematics is the most exact possible thought.
0.3 Physics is the most exact description of nature; the laws of physics are expressed in terms of mathematics.
0.4 The oldest branch of mathematics is geometry; its earliest scientific use was in astronomy.
0.5 Geometry was formulated axiomatically by Euclid in his Elements.
0.6 Newton used that as a model in his Principia to formulate the laws of mechanics. Newton proved most of his results using geometric methods.
0.7 Einstein showed that gravity can be understood as the curvature of the geometry of space-time.
0.8 The other forces of nature (the electromagnetic, weak and strong forces) are also explained in terms of the geometry of connections on fibre bundles.
0.9 We don't yet have a unified theory of all forces; all attempts to construct such a theory are based on geometric ideas.
0.10 No one ignorant of geometry can be a physicist.

Chapter 1: Vector Spaces

1.1 A vector space is a set on which the operations of addition and multiplication by a number are defined.
1.2 The numbers can be real or complex; then we get real or complex vector spaces respectively. We will (unless said otherwise) work with real vector spaces.
1.3 The elements of a vector space are called vectors; the numbers we multiply vectors by are often called scalars.
1.4 Addition of vectors must be commutative and associative; there must be a vector $0$ which, when added to any vector, returns that same vector.
1.5 Multiplication of a vector by a scalar must be distributive with respect to vector addition as well as scalar addition.
1.6 The smallest vector space is the set consisting of just the zero vector; this is called the trivial vector space and is denoted by $0$.
1.7 The set of real numbers is itself a real vector space, called $\mathbb{R}$.
1.8 The set of ordered $n$-tuples of real numbers is a vector space, with addition defined component-wise and the scalar multiplying each component.
1.9 The above vector space is said to have dimension $n$; we will see an abstract definition of dimension later.
1.10 There are also vector spaces of infinite dimension; the set of all real-valued functions on any set is a vector space. The dimension of this vector space is the cardinality of the set.
1.11 A map $f : V \to W$ between vector spaces is linear if $f(\alpha u + \beta v) = \alpha f(u) + \beta f(v)$.
1.12 If a linear map is one-one and onto, it is an isomorphism; the corresponding vector spaces are said to be isomorphic, $V \sim W$. 'Isomorphic' means 'having the same structure'.
1.13 The set $V'$ of linear maps from a vector space $V$ to $\mathbb{R}$ is called its dual. $V'$ is also, of course, a vector space. The elements of $V'$ are often called 1-forms.
1.14 A linear operator is a linear map from $V$ to itself. It makes sense to multiply linear operators: $LM(u) = L(M(u))$. Operator multiplication is associative but not always commutative.
1.15 An algebra is a vector space along with a bilinear multiplication $V \times V \to V$; i.e., $(\alpha u + \beta v)w = \alpha uw + \beta vw$ and $u(\alpha v + \beta w) = \alpha uv + \beta uw$.
1.16 An algebra is commutative if $uv = vu$ for all pairs; it is associative if $u(vw) = (uv)w$ for all triples.
1.17 The set of linear operators on a vector space is an associative algebra; it is commutative only when the vector space is either $0$ or $\mathbb{R}$.
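As an aside (not part of the original notes): once a basis is chosen, linear operators become matrices and operator multiplication becomes matrix multiplication, so the claims of 1.14–1.17 can be checked numerically. A minimal Python sketch, with arbitrarily chosen example matrices:

```python
import numpy as np

# Three linear operators on R^2, written as matrices in a chosen basis.
# The particular matrices are arbitrary examples illustrating 1.14-1.17.
L = np.array([[0.0, 1.0],
              [0.0, 0.0]])
M = np.array([[0.0, 0.0],
              [1.0, 0.0]])
N = np.array([[1.0, 2.0],
              [3.0, 4.0]])

# Operator multiplication is composition, LM(u) = L(M(u)), i.e. the matrix product.
print(np.allclose((L @ M) @ N, L @ (M @ N)))   # True: always associative
print(np.allclose(L @ M, M @ L))               # False: not commutative in general
```

Only for the spaces $0$ and $\mathbb{R}$, where operators are just numbers, does the second check succeed for every pair.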
1.18 There are two ways of combining two vector spaces to get a new one: the direct sum and the direct product.
1.19 The direct sum $V \oplus W$ of two vector spaces $V$ and $W$ is the set of ordered pairs $(v,w)$ with the obvious addition and scalar multiplication: $(v,w) + (v',w') = (v+v',\, w+w')$ and $\alpha(v,w) = (\alpha v, \alpha w)$.
1.20 Transposition, $(v,w) \mapsto (w,v)$, is a natural isomorphism between $V \oplus W$ and $W \oplus V$; also $U \oplus (V \oplus W)$ and $(U \oplus V) \oplus W$ can both be thought of as the space of triples $(u,v,w)$. Hence we will just write $U \oplus V \oplus W$ etc.
1.21 It is clear that $\mathbb{R}^{m+n} = \mathbb{R}^m \oplus \mathbb{R}^n$.
1.22 The direct or tensor product $V \otimes W$ of two vector spaces is the set of linear maps from $V'$ to $W$.
1.23 Again $V \otimes W \sim W \otimes V$; $U \otimes (V \otimes W) \sim (U \otimes V) \otimes W$.
1.24 Also, $V \otimes \mathbb{R} = V$; $\mathbb{R}^m \otimes \mathbb{R}^n = \mathbb{R}^{mn}$.
1.25 We can iterate the direct product $n$ times to get $V^{\otimes n} = V \otimes V \otimes \cdots \otimes V$. Its elements are called contravariant tensors of order $n$.
1.26 Since $V^{\otimes m} \otimes V^{\otimes n} = V^{\otimes (m+n)}$, it is natural to define $V^{\otimes 0} = \mathbb{R}$ and $V^{\otimes 1} = V$. Thus scalars are tensors of order zero while vectors are contravariant tensors of order one.
1.27 We can then take the direct sum of all of these to get the total tensor space $T(V) = \bigoplus_{n=0}^{\infty} V^{\otimes n} = \mathbb{R} \oplus V \oplus (V \otimes V) \oplus \cdots$.
1.28 $V'^{\otimes n}$ can also be viewed as the space of multilinear functions of $n$ vectors. Its elements are also called covariant tensors.
1.29 The direct product of two 1-forms, $u' \otimes v'$, can be defined by its action on a pair of vectors: $u' \otimes v'(u,v) = u'(u)\, v'(v)$. A general element of $V' \otimes V'$ is a linear combination of such 'factorizable' elements.
1.30 More generally we can define the direct product of two covariant tensors of order $m$ and $n$: $t \otimes \tilde{t}(u_1, \cdots, u_m, v_1, \cdots, v_n) = t(u_1, \cdots, u_m)\, \tilde{t}(v_1, \cdots, v_n)$. This turns $T(V')$ into an associative but in general not commutative algebra. It is commutative only if $V \sim 0, \mathbb{R}$.
1.31 An element $t \in V'^{\otimes n}$ is symmetric if it is invariant under permutations; e.g., $t(u,v) = t(v,u)$. The subspace of symmetric tensors is denoted by $S^n(V')$ and $S(V') = \bigoplus_{n=0}^{\infty} S^n(V')$.
1.32 Averaging over all possible orderings gives a projection $\sigma : V'^{\otimes n} \to S^n(V')$; e.g., $\sigma(t)(u,v) = \frac{1}{2}[t(u,v) + t(v,u)]$.
1.33 We can define a 'symmetrized multiplication' $s\tilde{s}$ of two symmetric tensors: $s\tilde{s} = \sigma(s \otimes \tilde{s})$. This turns $S(V')$ into a commutative associative algebra.
1.34 A tensor $t \in V'^{\otimes n}$ is antisymmetric if it changes sign under an odd permutation of its arguments; e.g., $t(u,v) = -t(v,u)$. A covariant antisymmetric tensor is also called a form; the space of antisymmetric tensors is denoted by $\Lambda^n(V')$.
1.35 Averaging over all permutations weighted by the sign of the permutation gives a projection $\lambda : V'^{\otimes n} \to \Lambda^n(V')$ to the space of antisymmetric tensors.
1.36 The wedge product or exterior product of forms is defined by $a \wedge b = \lambda(a \otimes b)$. It is skew-commutative: $a \wedge b = (-1)^{mn}\, b \wedge a$ if $a \in \Lambda^m(V')$ and $b \in \Lambda^n(V')$.
1.37 The wedge product turns $\Lambda(V') = \bigoplus_n \Lambda^n(V')$ into an associative algebra called the exterior algebra.
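An illustrative numerical aside (not in the original text): for tensors of order 2, the projections $\sigma$ and $\lambda$ of 1.32 and 1.35 are just the symmetric and antisymmetric parts of a matrix, and the wedge product of two 1-forms (1.36) can be computed from outer products. A minimal numpy sketch with randomly chosen tensors:

```python
import numpy as np

rng = np.random.default_rng(0)
t = rng.standard_normal((3, 3))        # an arbitrary covariant tensor of order 2

sym = 0.5 * (t + t.T)                  # sigma(t): average over orderings   (1.32)
asym = 0.5 * (t - t.T)                 # lambda(t): signed average          (1.35)
print(np.allclose(t, sym + asym))      # order-2 tensors split as S^2 + Lambda^2
print(np.allclose(sym, sym.T), np.allclose(asym, -asym.T))   # True True

# Wedge product of two 1-forms (1.36): a ^ b = lambda(a (x) b).
a = rng.standard_normal(3)
b = rng.standard_normal(3)
wedge_ab = 0.5 * (np.outer(a, b) - np.outer(b, a))
wedge_ba = 0.5 * (np.outer(b, a) - np.outer(a, b))
print(np.allclose(wedge_ab, -wedge_ba))  # skew-commutativity: a^b = (-1)^{1*1} b^a
```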
1.38 An inner product $\langle \cdot,\cdot \rangle$ on $V$ is a symmetric bilinear map $V \times V \to \mathbb{R}$ which is non-degenerate; i.e., $\langle u,v \rangle = \langle v,u \rangle$ and $\langle u,v \rangle = 0 \ \forall u \Rightarrow v = 0$. It is positive if $\langle u,u \rangle \geq 0$.
1.39 Clearly an inner product is a kind of covariant tensor; it is called the metric tensor. We may denote $\langle u,v \rangle = g(u,v)$.
1.40 A positive inner product gives a notion of length for every vector; the square of the length of a vector is just its inner product with itself.
1.41 An inner product can be thought of as an invertible linear map from $V$ to $V'$; i.e., it is an isomorphism between $V$ and $V'$.
1.42 The inverse of the above map gives an inner product on $V'$, given one on $V$.
1.43 The simplest example of an inner product is $\langle u,v \rangle = \sum_i u^i v^i$ in $\mathbb{R}^n$. The length of a vector is just its Euclidean distance from the origin.
1.44 A symplectic form is an antisymmetric non-degenerate bilinear map $V \times V \to \mathbb{R}$. A vector space along with a symplectic form is a symplectic vector space.
1.45 Just as the metric tensor measures the length of a vector, the symplectic form measures the area of the parallelogram formed by a pair of vectors. The sign of the area contains information about the relative orientation of the vectors: if we reverse the direction of one of them, the area changes sign.
1.46 A complex structure is a linear map $J : V \to V$ which satisfies $J^2 = -1$, where $1$ denotes the identity map.
1.47 Let $\zeta = \alpha + i\beta$ be a complex number. Define $\zeta u = \alpha u + \beta J u$. This turns a real vector space with a complex structure into a complex vector space. Every complex vector space can be obtained this way.
1.48 The definition of inner product on complex vector spaces is a bit different. We require the complex number $\langle u,v \rangle$ to be hermitean rather than symmetric: $\langle u,v \rangle = \langle v,u \rangle^*$. Moreover it is linear in the second argument but anti-linear in the first: $\langle \alpha u + \beta v, w \rangle = \alpha^* \langle u,w \rangle + \beta^* \langle v,w \rangle$.
1.49 A complex vector space with an inner product can be thought of as a real vector space with a complex structure and an inner product, with the inner product satisfying the additional condition that $\omega(u,v) = g(u,Jv)$ is antisymmetric; i.e., $\omega$ is a symplectic form.
1.50 Conversely, given a complex structure $J$ and a symplectic form $\omega$, we have a hermitean inner product if $g(u,v) = \omega(Ju,v)$ is symmetric.
1.51 The elements of the space $V^{\otimes m} \otimes V'^{\otimes n}$ are called tensors of order $(m,n)$. For example, a complex structure is a tensor of order $(1,1)$.
1.52 The multiplication rule of an algebra can also be viewed as a tensor. For, a bilinear map $V \times V \to V$ can be viewed as a trilinear map $m : V' \times V \times V \to \mathbb{R}$. The structure tensor $m$ is an element of $V \otimes V'^{\otimes 2}$; i.e., a tensor of order $(1,2)$.

Chapter 2: Index Notation for Tensors

2.1 Just as it is useful to represent numbers in decimal (or another base) to perform calculations, it is useful to represent vectors by their components with respect to a basis.
2.2 A set of vectors $e_1, \cdots, e_k \in V$ is linearly independent if $\sum_i \alpha^i e_i = 0 \Rightarrow \alpha^i = 0$.
2.3 A basis is a maximal set of linearly independent vectors.
2.4 That is, any vector $v \in V$ can be expressed as $v = \sum_i v^i e_i$ for some unique $n$-tuple of real numbers $(v^1, \cdots, v^n)$. These numbers are called the components of $v$ with respect to $e_i$.
2.5 We will see soon why we place the indices on the components as superscripts and not subscripts.
2.6 The maximum number of linearly independent vectors in $V$ is called the dimension of $V$. We will consider the case of finite-dimensional vector spaces unless stated otherwise.
2.7 Two bases $e_i, \tilde{e}_i$ are related by a linear transformation $\tilde{e}_i = A^j_i e_j$, where $A^j_i$ are the components of an invertible matrix.
2.8 The components of a vector with respect to these bases are also related by a linear transformation: $v = \sum_i v^i e_i = \sum_i \tilde{v}^i \tilde{e}_i$, with $v^i = A^i_j \tilde{v}^j$.
2.9 To simplify notation we will often drop the summation symbol. Any index that appears more than once in a factor will be assumed to be summed over. This convention was introduced by Einstein.
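As an aside: numpy's einsum uses exactly the repeated-index convention of 2.9, which makes it convenient for checking index computations. Here is a small sketch of the change-of-basis laws 2.7–2.8, using the standard basis of $\mathbb{R}^3$ and a randomly chosen invertible matrix $A$ (all particular choices are illustrative, not from the original notes):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3

# A change-of-basis matrix A^i_j (random example; almost surely invertible).
A = rng.standard_normal((n, n))
assert not np.isclose(np.linalg.det(A), 0.0)

e = np.eye(n)                            # old basis e_i, stored as rows e[i]
e_tilde = np.einsum('ji,jk->ik', A, e)   # new basis: e~_i = A^j_i e_j     (2.7)

v_tilde = rng.standard_normal(n)         # components v~^i in the new basis
v = np.einsum('ij,j->i', A, v_tilde)     # v^i = A^i_j v~^j                (2.8)

# The vector itself is basis-independent: expanding in either basis agrees.
print(np.allclose(np.einsum('i,ik->k', v, e),
                  np.einsum('i,ik->k', v_tilde, e_tilde)))   # True
```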
2.10 It will be convenient to introduce the Kronecker delta symbol: $\delta^i_j = 0$ if $i \neq j$ and $\delta^i_j = 1$ if $i = j$.
2.11 Given a basis $e_i$ in $V$, we can construct its dual basis in $V'$: $e'^i(e_j) = \delta^i_j$.
2.12 A form can be expanded in terms of this dual basis: $\phi = \phi_i e'^i$. These components transform in the opposite way to the components of a vector: $\tilde{\phi}_i = A^j_i \phi_j$.
2.13 Although the components depend on the basis, the scalar $\phi(u) = \phi_i u^i$ is invariant under changes of basis.
2.14 More generally, the collection $e_{i_1} \otimes \cdots \otimes e_{i_m} \otimes e'^{j_1} \otimes \cdots \otimes e'^{j_n}$ is a basis in the space of tensors of order $(m,n)$: $t = t^{i_1 \cdots i_m}{}_{j_1 \cdots j_n}\, e_{i_1} \otimes \cdots \otimes e_{i_m} \otimes e'^{j_1} \otimes \cdots \otimes e'^{j_n}$.
2.15 Each upper index transforms contravariantly under changes of basis; each lower index transforms covariantly. This is the reason for distinguishing between the two types of indices. A contraction of an upper index with a lower index is invariant under changes of basis.
2.16 The metric tensor of an inner product space has components that satisfy $g_{ij} = g_{ji}$. Moreover, the determinant of the matrix $g_{ij}$ is non-zero.
2.17 Given an inner product on $V$, we can construct one on $V'$; its components $g^{ij}$ form the matrix inverse to $g_{ij}$: $g^{ij} g_{jk} = \delta^i_k$.
2.18 We call $g^{ij}$ the contravariant components of the metric tensor and $g_{ij}$ the covariant components.
2.19 Because the metric tensor can be thought of as an isomorphism between a vector space and its dual, it can be used to convert a contravariant index into a covariant index and vice versa: $v^i \mapsto g_{ij} v^j$ or $w_i \mapsto g^{ij} w_j$. This is called lowering and raising of indices.
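Continuing the einsum illustration (again an aside, with an arbitrarily constructed metric rather than anything from the notes): raising and lowering indices as in 2.17–2.19 amounts to contracting with $g^{ij}$ or $g_{ij}$, and lowering followed by raising must be the identity.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3

# A randomly constructed symmetric non-degenerate metric g_ij (illustrative only).
B = rng.standard_normal((n, n))
g = B @ B.T + n * np.eye(n)              # symmetric and positive definite
g_inv = np.linalg.inv(g)                 # contravariant components g^ij   (2.17)

# g^ij g_jk = delta^i_k :
print(np.allclose(np.einsum('ij,jk->ik', g_inv, g), np.eye(n)))    # True

v = rng.standard_normal(n)               # contravariant components v^i
v_low = np.einsum('ij,j->i', g, v)       # lowering: v_i = g_ij v^j        (2.19)
v_up = np.einsum('ij,j->i', g_inv, v_low)  # raising: g^ij v_j
print(np.allclose(v, v_up))              # True: raising undoes lowering
```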
2.20 A symplectic structure $\omega$ has components $\omega_{ij}$ which form an invertible antisymmetric matrix. Since the determinant of an odd-dimensional antisymmetric matrix is zero, symplectic forms exist only on even-dimensional vector spaces.
2.21 A complex structure $J$ has components $J^i_j$ satisfying the condition $J^i_j J^j_k = -\delta^i_k$.
2.22 The multiplication rule of an algebra can be thought of as a tensor of order $(1,2)$. It has components $m^i_{jk}$; the components of the product of $u$ and $v$ are $m^i_{jk} u^j v^k$. The $m^i_{jk}$ are called the structure constants of the algebra.
2.23 If the algebra is commutative, the structure constants are symmetric in the two lower indices: $m^i_{jk} = m^i_{kj}$. If it is associative, they satisfy the quadratic relation $m^p_{ij} m^q_{pk} = m^q_{ip} m^p_{jk}$.
2.24 A symmetric tensor $t \in S^n(V')$ can be thought of as defining a polynomial of order $n$ in the components of a vector: $t(v,v,\cdots,v) = t_{i_1 i_2 \cdots i_n} v^{i_1} \cdots v^{i_n}$. In fact this polynomial completely determines the symmetric tensor. The multiplication law for symmetric tensors we defined earlier is just the ordinary multiplication of polynomials; this explains why it is commutative.
2.25 The wedge product of forms can be thought of in a similar way, as polynomials in some 'anti-commuting' variables. This is a suggestive notation, particularly favored by physicists.
2.26 We define some abstract variables (Grassmann variables) $\psi^i$ satisfying $\psi^i \psi^j = -\psi^j \psi^i$ for all $i,j$. In particular $(\psi^i)^2 = 0$. A polynomial of order $n$ in these variables will be $t_{i_1 \cdots i_n} \psi^{i_1} \cdots \psi^{i_n}$. The multiplication of these polynomials is equivalent to the wedge product of forms.
2.27 More precisely, the exterior algebra is the free algebra generated by the $\psi^i$, quotiented by the ideal generated by the relations $\psi^i \psi^j = -\psi^j \psi^i$.
2.28 The free algebra on $n$ variables is just the tensor algebra on an $n$-dimensional vector space.
2.29 No physicist can work with symmetric and antisymmetric tensors without being reminded of bosonic and fermionic quantum systems. Indeed, if $V$ is the space of states of a particle, the space of states of an $n$-particle system is $S^n(V)$ for bosons and $\Lambda^n(V)$ for fermions. The only point to remember is that the vector spaces of interest in quantum mechanics are over the field of complex numbers.
2.30 The components of a vector can be thought of as the Cartesian coordinates on $V$. Given a function $f : V \to \mathbb{R}$ and a basis $e_i$ in $V$, we can construct a function $\tilde{f} : \mathbb{R}^n \to \mathbb{R}$, $\tilde{f}(x^1, x^2, \cdots, x^n) = f(x^i e_i)$. We can now define $f$ to be differentiable if $\tilde{f}$ has continuous partial derivatives as a function on $\mathbb{R}^n$. This class of functions is called $C^1(V)$.
2.31 Analogously we define $C^k(V)$, $C^\infty(V)$, $C^\omega(V)$. They are, respectively, the class of functions whose $k$-th derivatives are continuous, those with continuous partial derivatives of any order, and finally functions which have a convergent Taylor series expansion. $C^\infty$ functions are called smooth and $C^\omega$ functions are called analytic.
2.32 Analytic functions are completely determined by their derivatives at the origin; these derivatives are symmetric tensors. Thus, each element of $T(V')$ for which the infinite series $\sum_{n=0}^{\infty} t_{i_1 \cdots i_n} x^{i_1} \cdots x^{i_n}$ converges characterizes an analytic function. There is no similar characterization of smooth or differentiable functions. For polynomials this series terminates.
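A closing illustration of 2.24 and 2.32 (an aside, with arbitrarily chosen data): for $f(x) = e^{a \cdot x}$ the symmetric tensors in the Taylor series are $t_{i_1 \cdots i_n} = a_{i_1} \cdots a_{i_n} / n!$, and contracting each with $x$ reproduces $f$ in the limit. A Python sketch:

```python
import numpy as np
from math import factorial

# For f(x) = exp(a . x), the order-n Taylor coefficient is the symmetric tensor
# t_{i1...in} = a_{i1} ... a_{in} / n!  (the n-th derivative at 0, divided by n!).
# The vector a, the point x, and the truncation order are arbitrary choices.
a = np.array([0.3, -0.7, 0.2])
x = np.array([0.5, 1.0, -0.25])

def term(n):
    """The scalar t_{i1...in} x^{i1} ... x^{in}, computed by explicit contraction."""
    if n == 0:
        return 1.0
    tensor = a
    for _ in range(n - 1):               # build the rank-n tensor a (x) ... (x) a
        tensor = np.multiply.outer(tensor, a)
    tensor = tensor / factorial(n)
    for _ in range(n):                   # contract every index with x
        tensor = np.tensordot(tensor, x, axes=([0], [0]))
    return float(tensor)

series = sum(term(n) for n in range(20))
print(series, np.exp(a @ x))             # the partial sums converge to f(x)
```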
