STOCHASTIC AIRY SEMIGROUP THROUGH TRIDIAGONAL MATRICES

VADIM GORIN AND MYKHAYLO SHKOLNIKOV

Abstract. We determine the operator limit for large powers of random tridiagonal matrices as the size of the matrix grows. The result provides a novel expression in terms of functionals of Brownian motions for the Laplace transform of the Airy$_\beta$ process, which describes the largest eigenvalues in the β ensembles of random matrix theory. Another consequence is a Feynman-Kac formula for the stochastic Airy operator of Ramírez, Rider, and Virág. As a side result, we find that the difference between the area underneath a standard Brownian excursion and one half of the integral of its squared local times is a Gaussian random variable.

1. Introduction

This article is about spectral properties of random matrices, and we refer to [AGZ], [PaS2], [Fo], [ABF] for modern general reviews. In random matrix ensembles one distinguishes a parameter β, which is typically equal to 1, 2, or 4 in full Hermitian matrix models (such as Wigner or Wishart ensembles) and corresponds to real, complex, or quaternion matrix elements. More generally, β can be taken to be an arbitrary positive number, in relation with Coulomb log-gases, the Calogero-Sutherland quantum many-body system, random tridiagonal matrices, Heckman-Opdam and Macdonald processes, see e.g. [ABF, Chapter 20 "Beta Ensembles"], [Dum], [BG2] for the details.

Here, we concentrate on edge limits of random matrix ensembles describing the asymptotic behavior of the largest eigenvalues (and the corresponding eigenvectors). At β = 2, i.e. for complex Hermitian matrices, there are many deep results in this direction. In particular, the properly centered and rescaled largest eigenvalue converges to the Tracy-Widom law $F_2$ [TW], and the point process describing all largest eigenvalues converges to the Airy point process, which is a part of the 2D Airy line ensemble [PrSp], [CH] (the latter can be obtained by considering the largest eigenvalues of corners of random matrices). All these results are very robust and have been proved rigorously in great generality, see e.g. [Sos], [Pe], [So1], [PaS1], [DG], [BEY], [KRV], [BFG]. Furthermore, the universality of these objects extends far beyond random matrix theory, see e.g. [BG1], [BP], [J2], [C] and references therein. Several descriptions of the limiting objects are known: tractable expressions for their correlation functions (see e.g. [TW], [PrSp]) and Laplace transforms (see e.g. [Sos], [Ok], [So2]), and a conjectural description through the so-called Brownian Gibbs property (see [CH, Section 3.2]).

While for β = 1 there are still many results parallel to the β = 2 case, there is much less understanding for general values of β > 0. For the general β analogues of $F_2$ and the Airy point process, the only known identification is via the spectrum of the stochastic Airy operator [ES], [RRV], and no analytic formulas for correlation functions or Laplace transforms are known. Moreover, even the existence of the Airy line ensemble for general β has not been established.

From the analytic point of view, the main difficulty for general β is that the determinantal/Pfaffian formulas for the correlation functions available for β = 1, 2, 4 are not known to extend to other values of β.
A recent alternative approach producing explicit formulas through Macdonald processes [BC], [BCGS] does work for β ensembles (see [BG2], [BG3]), but the edge limits are not yet accessible through these techniques. Another approach, which has proved to be very successful for β = 1, 2, 4, is the moments method.

In the present article we prove that the latter approach can be adapted to the study of the edge limits of general β ensembles. This leads to several outcomes. First, we prove that the Laplace transform of the point process of the rescaled largest eigenvalues in the Gaussian β ensemble (and more general random matrices) converges to the Laplace transform of the Airy$_\beta$ point process and establish a novel formula for the latter in terms of a functional of Brownian motion. This is closely related to our second result: the identification and proof of a Feynman-Kac formula for the stochastic Airy semigroup, the semigroup associated with the stochastic Airy operator.

It is known that Laplace transforms can be used to study various properties of the underlying point processes. For instance, by integrating one should be able to access linear statistics of the Airy$_\beta$ point process (in addition to their intrinsic interest, the latter are also important for the study of rigidity properties, cf. [GhPe]). On the other hand, by sending the parameter of the Laplace transform to infinity, one should be able to find the Tracy-Widom laws $F_\beta$. We postpone the discussion of these possible applications to future papers. Instead, we present a rather unexpected consequence: comparing our results with the literature for β = 2 we find a novel identity involving the Brownian excursion area and the local times of the same excursion.

From the technical point of view, our main result is the computation of the asymptotics of matrix elements of large powers of random tridiagonal matrices. More precisely, for a matrix of size $N \times N$ we deal with powers of the order $N^{2/3}$. In the case when the powers do not grow or grow slower than $N^{2/3}$, such asymptotics has been previously analyzed in [DE2], [Wo], [Duy], but for the analysis of the fast growing powers (directly related to the edge asymptotics of β ensembles) many new ideas are necessary. In particular, in our proofs we heavily rely on strong invariance principles, i.e. statements about the convergence of the trajectories of (conditioned) random walks to those of Brownian motions (or bridges) with a very precise control of errors, as in [Kh], [LTF], [BK2]. In addition, we use path transformations linking discrete local times of random walks to time-changed versions of the same random walk, see [AFP], and also [Je], [BY], [CSY] for the continuous analogues.

We proceed to a detailed exposition of our results.

Notation. In what follows C stands for a positive constant whose exact value is not important for us and might change from line to line.

Acknowledgement. We would like to thank Simone Warzel for suggesting the strategy for the proof of the trace formula (2.5), and Sasha Sodin for helpful discussions. V. G. was partially supported by the NSF grant DMS-1407562. M. S. was partially supported by the NSF grant DMS-1506290.
2. Setup and results

Given two sequences of independent random variables $a(m)$, $m \in \mathbb{N}$, and $b(m)$, $m \in \mathbb{N}$, we define for each $N \in \mathbb{N}$ the $N \times N$ symmetric tridiagonal matrix $M_N = (M_N[m,n])_{m,n=1}^N$ by setting $M_N[m,m] = a(m)$, $m = 1,2,\ldots,N$, and $M_N[m,m+1] = b(m)$, $m = 1,2,\ldots,N-1$:

(2.1) $$M_N = \begin{pmatrix} a(1) & b(1) & 0 & \cdots & 0 \\ b(1) & a(2) & b(2) & \ddots & \vdots \\ 0 & b(2) & a(3) & \ddots & 0 \\ \vdots & \ddots & \ddots & \ddots & b(N-1) \\ 0 & \cdots & 0 & b(N-1) & a(N) \end{pmatrix}$$

In this paper, we study $M_N$ in the asymptotic regime $N \to \infty$.¹

¹All our arguments extend to the case when the entries $a(m)$, $m = 1,2,\ldots,N$, and $b(m)$, $m = 1,2,\ldots,N-1$, vary with $N$. However, to keep the notation reasonable we have decided to work in the less general setup of no dependence on $N$.

The case when, for a fixed β > 0, all $a(m)$, $m \in \mathbb{N}$, have the normal distribution $N(0, 2/\beta)$, and the $b(m)$, $m \in \mathbb{N}$, are $\beta^{-1/2}$ multiples of χ-distributed random variables with parameters $\beta m$, $m \in \mathbb{N}$, is of particular interest. Here, the density of the χ distribution with parameter $a$ on $\mathbb{R}_{\ge 0}$ is
$$\frac{2^{1-a/2}}{\Gamma(a/2)}\, x^{a-1} e^{-x^2/2}, \qquad x > 0.$$
In this situation, the joint density of the $N$ eigenvalues $\lambda_1 \le \lambda_2 \le \cdots \le \lambda_N$ of $M_N$ is proportional to
$$\prod_{i<j} (\lambda_j - \lambda_i)^\beta \prod_{i=1}^N e^{-\frac{\beta}{4}\lambda_i^2},$$
and the corresponding joint distribution is usually referred to as the Gaussian β ensemble, see e.g. [DE]. More generally, we work with arbitrary sequences of independent random variables $a(m)$, $m \in \mathbb{N}$, and $b(m)$, $m \in \mathbb{N}$, satisfying the following assumption.

Assumption 2.1. The sequences $a(m)$, $m \in \mathbb{N}$, and $b(m)$, $m \in \mathbb{N}$, of independent random variables satisfy, with the notation
(2.2) $b(m) = \sqrt{m} + \xi(m)$, $m \in \mathbb{N}$,
(a) as $m \to \infty$, $\big|\mathbb{E}[a(m)]\big| = o(m^{-1/3})$ and $\big|\mathbb{E}[\xi(m)]\big| = o(m^{-1/3})$;
(b) there exist nonnegative constants $s_a$, $s_\xi$ such that $\frac{s_a^2}{4} + s_\xi^2 = \frac{1}{\beta}$ and, as $m \to \infty$, $\mathbb{E}[a(m)^2] = s_a^2 + o(1)$, $\mathbb{E}[\xi(m)^2] = s_\xi^2 + o(1)$;
(c) there exist constants $C > 0$ and $0 < \gamma < 2/3$ such that $\mathbb{E}\big[|a(m)|^\ell\big] \le C^\ell \ell^{\gamma\ell}$ and $\mathbb{E}\big[|\xi(m)|^\ell\big] \le C^\ell \ell^{\gamma\ell}$ for all $m, \ell \in \mathbb{N}$.

In particular, we have the following simple lemma.

Lemma 2.2. If all $a(m)$, $m \in \mathbb{N}$, are $N(0,2/\beta)$-distributed and the $\sqrt{\beta}\, b(m)$, $m \in \mathbb{N}$, are χ-distributed with parameters $\beta m$, $m \in \mathbb{N}$, respectively, then Assumption 2.1 holds with $s_a/2 = s_\xi = \frac{1}{\sqrt{2\beta}}$.

Proof. The result is immediate for $a(m)$, $m \in \mathbb{N}$. For $\xi(m)$, $m \in \mathbb{N}$, it follows from known tail estimates for χ random variables, see e.g. [LM, Section 4.1, Lemma 1]. □

We start the study of the $N \to \infty$ limit of $M_N$ by recalling the semicircle law. For each $N \in \mathbb{N}$, let $\lambda_N^1 \ge \lambda_N^2 \ge \cdots \ge \lambda_N^N$ denote the ordered eigenvalues of $M_N$. Consider the random probability measure
(2.3) $$\rho_N = \frac{1}{N}\sum_{i=1}^N \delta_{\lambda_N^i/\sqrt{N}}.$$

Proposition 2.3. Under Assumption 2.1, as $N \to \infty$, the random measures $\rho_N$ converge weakly, in probability, to the deterministic measure µ with density $\frac{1}{2\pi}\sqrt{4 - x^2}$, $-2 < x < 2$.

For the Gaussian β ensemble, Proposition 2.3 is well-known and can be proven in several ways, cf. [AGZ], [Dum]. For the sake of completeness, we provide a proof of our more general statement in the appendix.

Proposition 2.3 gives the leading order asymptotics of the normalized spectral measure $\rho_N$. Refinements of this statement are available in at least three different directions. The first one studies the higher order asymptotics of $\rho_N$ in the same coordinates, that is, the fluctuations of $\rho_N$ around the semicircle distribution µ. This is referred to as global asymptotics. In this direction, we prove in the appendix a Central Limit Theorem (CLT) for the joint fluctuations of multiple corners of $M_N$.
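
To make the setup concrete, here is a minimal numerical sketch (ours, not part of the paper) that samples the Gaussian β ensemble tridiagonal matrix of (2.1) and Lemma 2.2 and compares the empirical spectral measure (2.3) with the semicircle density of Proposition 2.3. The helper name `gaussian_beta_tridiagonal` and all parameter choices are ours.

```python
import numpy as np

def gaussian_beta_tridiagonal(N, beta, rng):
    """Sample the N x N matrix (2.1) with Gaussian beta ensemble entries (Lemma 2.2):
    a(m) ~ N(0, 2/beta) on the diagonal and b(m) = chi_{beta m} / sqrt(beta) off the diagonal."""
    a = rng.normal(0.0, np.sqrt(2.0 / beta), size=N)                    # a(1), ..., a(N)
    b = np.sqrt(rng.chisquare(beta * np.arange(1, N))) / np.sqrt(beta)  # b(1), ..., b(N-1)
    return np.diag(a) + np.diag(b, 1) + np.diag(b, -1)

# Empirical check of Proposition 2.3: the eigenvalues of M_N / sqrt(N) follow the semicircle law.
rng = np.random.default_rng(0)
N, beta = 2000, 4.0
evals = np.linalg.eigvalsh(gaussian_beta_tridiagonal(N, beta, rng)) / np.sqrt(N)
hist, edges = np.histogram(evals, bins=40, range=(-2.2, 2.2), density=True)
mids = 0.5 * (edges[1:] + edges[:-1])
semicircle = np.sqrt(np.clip(4.0 - mids ** 2, 0.0, None)) / (2.0 * np.pi)
print("max deviation from the semicircle density:", np.abs(hist - semicircle).max())
```
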
Note that the Gaussian nature of the fluctuations (and the corresponding covariance structure) for a single matrix is well-known, cf. [J1], [AGZ], [Dum].

The second refinement is the study of the local asymptotics of the eigenvalues in the bulk of the spectrum. A typical question in this direction is the asymptotic distribution of the rescaled spacing $\sqrt{N}\big(\lambda_N^{\lfloor N/2\rfloor} - \lambda_N^{\lfloor N/2\rfloor+1}\big)$. We do not address this limiting regime in the present paper and instead refer to [VV] for results of this type for random tridiagonal matrices.

The third refinement is the investigation of the asymptotics of the extreme eigenvalues of $M_N$ and the corresponding eigenvectors, which is known as edge asymptotics. In this direction, it is shown in [RRV] (see also [KRV]) that the random variable $N^{1/6}(\lambda_N^i - 2\sqrt{N})$ converges in distribution for every fixed $i \in \mathbb{N}$. The limit of the corresponding eigenvector is also studied therein. Our main results are closely related to this work.

Let us now present the main results of this paper. Fix a (possibly unbounded) interval $A \subset \mathbb{R}_{\ge 0}$, consider a probability space which supports a standard Brownian motion $W$, and consider for each $T > 0$ the following (random) kernel on $\mathbb{R}_{\ge 0} \times \mathbb{R}_{\ge 0}$:

(2.4) $$K_A(x,y;T) = \frac{1}{\sqrt{2\pi T}}\exp\Big(-\frac{(x-y)^2}{2T}\Big)\, \mathbb{E}_{B^{x,y}}\bigg[\mathbf{1}_{\{\forall t:\, B^{x,y}(t)\in A\}}\exp\Big(-\frac{1}{2}\int_0^T B^{x,y}(t)\,dt + \frac{1}{\sqrt{\beta}}\int_0^\infty L_a(B^{x,y})\,dW(a)\Big)\bigg].$$

Here, $B^{x,y}$ is a standard Brownian bridge starting at $x$ at time 0 and ending at $y$ at time $T$ which is independent of $W$; the $L_a(B^{x,y})$ are the local times accumulated by $B^{x,y}$ at level $a$ on $[0,T]$; and the expectation $\mathbb{E}_{B^{x,y}}$ is taken only with respect to $B^{x,y}$.

We define $\mathcal{U}_A(T)$, $T > 0$, as the integral operators on $\mathbb{R}_{\ge 0}$ with kernels $K_A(x,y;T)$, $T > 0$, respectively. In order to be able to make statements about multiple operators $\mathcal{U}_A(T)$, we use the same path of $W$ in (2.4) and define the stochastic integral with respect to $W$ therein according to the almost sure procedure described in [Ka]. For notational convenience, we let $\mathcal{U}_A(0)$ be the orthogonal projector from $L^2(\mathbb{R}_{\ge 0})$ onto $L^2(A)$; in particular, when $A = \mathbb{R}_{\ge 0}$, then $\mathcal{U}_A(0)$ is the identity operator.

Proposition 2.4. For each $T > 0$, $\mathcal{U}_A(T)$ is almost surely a symmetric non-negative trace class operator on $L^2(\mathbb{R}_{\ge 0})$ satisfying the trace formula
(2.5) $$\mathrm{Trace}(\mathcal{U}_A(T)) = \int_{\mathbb{R}_{\ge 0}} K_A(x,x;T)\,dx.$$

Proposition 2.5. The operators $\mathcal{U}_A(T)$, $T \ge 0$, have the almost sure semigroup property: for any $T_1, T_2 \ge 0$, it holds $\mathcal{U}_A(T_1)\,\mathcal{U}_A(T_2) = \mathcal{U}_A(T_1 + T_2)$ with probability one.

Proposition 2.6. The semigroup $\mathcal{U}_A(T)$, $T \ge 0$, is $L^2$-strongly continuous, that is, for any $T \ge 0$ and $f \in L^2(\mathbb{R}_{\ge 0})$, it holds $\lim_{t \to T} \mathbb{E}\big[\|\mathcal{U}_A(T)f - \mathcal{U}_A(t)f\|^2\big] = 0$.

Proposition 2.7. There exists an orthonormal basis of random vectors $v_A^1, v_A^2, \ldots \in L^2(A) \subset L^2(\mathbb{R}_{\ge 0})$ and random variables $\eta_A^1 \ge \eta_A^2 \ge \cdots$ defined on the same probability space as $\mathcal{U}_A(T)$, $T > 0$, such that, for each $T > 0$, the spectrum of $\mathcal{U}_A(T)$ (as an operator on $L^2(A)$) consists of the eigenvalues $\exp(T\eta_A^i/2)$, $i \in \mathbb{N}$, corresponding to the eigenvectors $v_A^i$, $i \in \mathbb{N}$, respectively.

The proofs of Propositions 2.4, 2.5, and 2.6 are given in Section 5, and Proposition 2.7 is established in Section 7.

Our interest in the operators $\mathcal{U}_A(T)$, $T > 0$, is based on their appearance in the $N \to \infty$ edge limit of the matrix $M_N$ and its submatrices. More specifically, let $\mathcal{S}$ denote the set of all locally integrable functions $f$ on $\mathbb{R}_{\ge 0}$ which grow subexponentially fast at infinity (that is, for which there exists a $\delta > 0$ such that $f(x) = O(\exp(x^{1-\delta}))$ as $x \to \infty$).
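
The kernel (2.4) lends itself to a direct Monte Carlo approximation, which can serve as a sanity check on the definitions. The sketch below (ours) takes $A = \mathbb{R}_{\ge 0}$, approximates the local times $L_a(B^{x,y})$ by occupation-time binning on a spatial grid, and uses a single fixed discretized path of $W$ for the stochastic integral; the name `kernel_mc`, the grids, and the sample sizes are our choices.

```python
import numpy as np

def kernel_mc(x, y, T, beta, dW, a_grid, n_steps=400, n_paths=2000, rng=None):
    """Monte Carlo sketch of K_A(x, y; T) from (2.4) with A = [0, infinity).
    dW holds the increments of one fixed Brownian path W over the bins of a_grid."""
    rng = rng if rng is not None else np.random.default_rng()
    dt = T / n_steps
    t = np.linspace(0.0, T, n_steps + 1)
    da = a_grid[1] - a_grid[0]
    # Brownian bridges from x (time 0) to y (time T), built from Brownian motions.
    incs = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
    bm = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(incs, axis=1)], axis=1)
    bridge = x + (y - x) * t / T + bm - (t / T) * bm[:, -1:]
    stays_in_A = (bridge >= 0.0).all(axis=1)                       # indicator {for all t: B(t) in A}
    area = 0.5 * (bridge[:, 1:] + bridge[:, :-1]).sum(axis=1) * dt  # int_0^T B^{x,y}(t) dt
    # Occupation-time approximation of the local times, then the integral against dW.
    occ = np.stack([((bridge >= a) & (bridge < a + da)).sum(axis=1) * dt for a in a_grid], axis=1)
    stoch_int = (occ / da) @ dW                                    # ~ int_0^infty L_a(B^{x,y}) dW(a)
    weights = stays_in_A * np.exp(-0.5 * area + stoch_int / np.sqrt(beta))
    return np.exp(-(x - y) ** 2 / (2.0 * T)) / np.sqrt(2.0 * np.pi * T) * weights.mean()

rng = np.random.default_rng(1)
a_grid = np.arange(0.0, 6.0, 0.05)
dW = rng.normal(0.0, np.sqrt(0.05), size=a_grid.size)              # one fixed realization of W
print(kernel_mc(x=1.0, y=1.5, T=1.0, beta=2.0, dW=dW, a_grid=a_grid, rng=rng))
```
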
For any $N \in \mathbb{N}$ and $f \in \mathcal{S}$, write $\pi_N f$ for the vector in $\mathbb{R}^N$ with components
$$N^{1/6}\int_{(N-i)/N^{1/3}}^{(N-i+1)/N^{1/3}} f(x)\,dx, \qquad i = 1,2,\ldots,N,$$
and $(\pi_N f)'$ for its transpose. In addition, define the $N \times N$ matrix
$$\mathcal{M}(T, A, N) = \frac{1}{2}\left(\left(\frac{M_{N;A}}{2\sqrt{N}}\right)^{\lfloor TN^{2/3}\rfloor} + \left(\frac{M_{N;A}}{2\sqrt{N}}\right)^{\lfloor TN^{2/3}\rfloor - 1}\right),$$
where $M_{N;A}$ is the restriction of $M_N$ onto $A$, so that the $(i,j)$-th entry of $M_{N;A}$ is equal to that of $M_N$ if $\frac{N-i+1/2}{N^{1/3}}, \frac{N-j+1/2}{N^{1/3}} \in A$ and zero otherwise. In particular, $M_{N;[0,\infty)} = M_N$.

Theorem 2.8. Under Assumption 2.1 we have
$$\lim_{N\to\infty} \mathcal{M}(T, A, N) = \mathcal{U}_A(T), \qquad T \ge 0,$$
in the following senses.
(a) Weak convergence: For any $f, g \in \mathcal{S}$ and $T \ge 0$, we have
$$\lim_{N\to\infty} (\pi_N f)'\,\mathcal{M}(T, A, N)\,(\pi_N g) = \int_{\mathbb{R}_{\ge 0}} \big(\mathcal{U}_A(T)f\big)(x)\,g(x)\,dx$$
in distribution and in the sense of moments.
(b) Convergence of traces: For any $T \ge 0$ we have
$$\lim_{N\to\infty} \mathrm{Trace}\big(\mathcal{M}(T, A, N)\big) = \mathrm{Trace}\big(\mathcal{U}_A(T)\big)$$
in distribution and in the sense of moments.
(c) The convergences in parts (a) and (b) also hold jointly for any finite collection of $T$'s, $A$'s, $f$'s, and $g$'s.

The Brownian motion $W$ in the definition of $\mathcal{U}_A(T)$ arises hereby as the following limit in distribution with respect to the Skorokhod topology:
(2.6) $$W(a) = \sqrt{\beta}\,\lim_{N\to\infty} N^{-1/6}\sum_{n=N-\lfloor N^{1/3}a\rfloor}^{N}\Big(\xi(n) + \frac{a(n)}{2}\Big), \qquad a \ge 0.$$
The proof of Theorem 2.8 is given in Section 4.3.

Remark 2.9. We recall that, for deterministic operators, weak convergence together with the convergence of their traces implies other stronger forms of convergence, in particular, the convergence in the trace-class norm, see e.g. [Si2, Section 2]. Yet, when we speak about the convergence of finite-dimensional distributions of random operators, sticking to the statements (a) and (b) seems quite natural.

The convergence of the traces $\mathrm{Trace}\big(\mathcal{M}(T, A, N)\big)$ as $N \to \infty$ implies the convergence of the eigenvalues of $M_{N;A}$ in the same limit. Let $\lambda_{N;A}^1 \ge \lambda_{N;A}^2 \ge \cdots \ge \lambda_{N;A}^N$ denote the eigenvalues of the matrix $N^{1/6}(M_{N;A} - 2\sqrt{N})$.

Corollary 2.10. In the notations of Proposition 2.7, one has the convergence in distribution
(2.7) $$\sum_{i=1}^N e^{T\lambda_{N;A}^i/2} \;\xrightarrow[N\to\infty]{}\; \sum_{i=1}^\infty e^{T\eta_A^i/2} = \mathrm{Trace}(\mathcal{U}_A(T))$$
jointly for any finitely many $T$'s and $A$'s. Therefore, one also has
(2.8) $$\lambda_{N;A}^i \;\xrightarrow[N\to\infty]{}\; \eta_A^i$$
jointly for any finitely many $i$'s and $A$'s.

Remark 2.11. If we replace $\mathcal{M}(T, A, N)$ by the analogous matrix built from $-M_{N;A}$, then limit theorems similar to Theorem 2.8 and Corollary 2.10 will hold for this new object (see Remark 4.21 for more details). The latter give the asymptotics of the smallest eigenvalues of $M_N$. Interestingly, while for the variance constants $s_a$, $s_\xi$ corresponding to the Gaussian β ensemble the limits of the largest and the smallest eigenvalues are independent, this is not true in general.

The proof of Corollary 2.10 is given in Section 6.

In order to compare with the previous work on the subject, we take $A = \mathbb{R}_{\ge 0}$ and omit it from the notations. In other words, we consider only the spectrum of the original matrix $M_N$. In this case, an alternative derivation of the edge limit theorem and another interpretation of the limits $\eta^i$ were given in [RRV]. There, the authors make sense of the stochastic Airy operator
$$\mathrm{SAO}_\beta = -\frac{d^2}{da^2} + a + \frac{2}{\sqrt{\beta}}\,W'(a)$$
on $L^2(\mathbb{R}_{\ge 0})$ with a Dirichlet boundary condition at zero by appropriately defining an orthonormal basis of its eigenfunctions and the corresponding eigenvalues $-\eta^i$, $i \in \mathbb{N}$ (see [RRV, Section 2], and also [Bl], [Mi]).
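
As a quick numerical illustration of Theorem 2.8(b) and Corollary 2.10 for $A = \mathbb{R}_{\ge 0}$, one can compare the trace of $\mathcal{M}(T, A, N)$ with the sum $\sum_i e^{T\lambda_{N;A}^i/2}$ over the edge-rescaled eigenvalues; already at moderate $N$ the two numbers should be close. The sketch below (ours) computes both from a single Gaussian β ensemble sample, working with the eigenvalues of $M_N$ rather than with explicit matrix powers; all names and parameters are our choices.

```python
import numpy as np

rng = np.random.default_rng(2)
N, beta, T = 1500, 2.0, 1.0
# One sample of the Gaussian beta ensemble tridiagonal matrix (2.1).
a = rng.normal(0.0, np.sqrt(2.0 / beta), size=N)
b = np.sqrt(rng.chisquare(beta * np.arange(1, N))) / np.sqrt(beta)
lam = np.linalg.eigvalsh(np.diag(a) + np.diag(b, 1) + np.diag(b, -1))

k = int(np.floor(T * N ** (2.0 / 3.0)))
mu = lam / (2.0 * np.sqrt(N))                               # eigenvalues of M_N / (2 sqrt(N))
trace_finite = 0.5 * np.sum(mu ** k + mu ** (k - 1))        # Trace(M(T, A, N)), computed spectrally
lam_edge = N ** (1.0 / 6.0) * (lam - 2.0 * np.sqrt(N))      # eigenvalues of N^{1/6}(M_N - 2 sqrt(N))
trace_limit = np.sum(np.exp(T * lam_edge / 2.0))            # left-hand side of (2.7)
print(trace_finite, trace_limit)                            # the two values should nearly agree
```
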
In addition, it is shown in [RRV] that the leading eigenvalues of $M_N$ (and the corresponding eigenvectors) converge to the leading eigenvalues (eigenvectors) of $\mathrm{SAO}_\beta$. Note that, since the white noise $W'(a)$, $a \ge 0$, is a generalized function, special care is required in defining the operator $\mathrm{SAO}_\beta$.

Corollary 2.12. Let $A = \mathbb{R}_{\ge 0}$ and, for any $T \ge 0$, define $e^{-\frac{T}{2}\mathrm{SAO}_\beta}$ as the unique operator on $L^2(\mathbb{R}_{\ge 0})$ with the same orthonormal basis of eigenfunctions as $\mathrm{SAO}_\beta$ and the corresponding eigenvalues $e^{T\eta^1/2} \ge e^{T\eta^2/2} \ge \cdots$. If one couples $e^{-\frac{T}{2}\mathrm{SAO}_\beta}$ with $\mathcal{U}(T)$ by identifying the Brownian motions $W$ in their respective definitions, then for each $T \ge 0$, the operators $e^{-\frac{T}{2}\mathrm{SAO}_\beta}$ and $\mathcal{U}(T)$ coincide with probability one.

The proof of Corollary 2.12 is given in Section 7. Proposition 2.5 and Corollary 2.12 lead to the name stochastic Airy semigroup for the operators $\mathcal{U}(T)$, $T \ge 0$.

The relationship between $\mathrm{SAO}_\beta$ and the operators $\mathcal{U}(T)$, $T > 0$, can be viewed as a variant of the Feynman-Kac formula for Schrödinger operators (see e.g. [Si1, Section 6, equation (6.6)] for the case when the potential is a deterministic function). However, since in the case of $\mathrm{SAO}_\beta$ the potential is given by the generalized function $a + \frac{2}{\sqrt{\beta}}W'(a)$, such a result seems to be beyond the scope of the previous literature. A notable exception is the "zero temperature case" $\beta = \infty$, which falls into the framework of the usual Feynman-Kac formula. In that case, a path transformation argument allows one to recast the Feynman-Kac identity
$$\mathrm{Trace}(\mathcal{U}(T)) = \int_0^\infty \frac{1}{\sqrt{2\pi T}}\,\mathbb{E}_{B^{x,x}}\bigg[\mathbf{1}_{\{\forall t:\, B^{x,x}(t)\ge 0\}}\exp\Big(-\frac{1}{2}\int_0^T B^{x,x}(t)\,dt\Big)\bigg]\,dx$$
for the trace of $\mathcal{U}(T)$ as
$$\mathrm{Trace}(\mathcal{U}(T)) = \sqrt{\frac{2}{\pi}}\,T^{-3/2}\,\mathbb{E}\bigg[\exp\Big(-\frac{T^{3/2}}{2}\int_0^1 e(t)\,dt\Big)\bigg],$$
where $e$ is a standard Brownian excursion on the time interval $[0,1]$ (see the proof of Proposition 2.14 for the details). Since $\mathrm{SAO}_\infty$ is the deterministic Airy operator, the latter formula is the well-known series representation for the Laplace transform of the Brownian excursion area $\int_0^1 e(t)\,dt$ (see e.g. [Ja, Section 13]).

We now turn to the special value β = 2, in which case the edge asymptotics is much better understood. In particular, there exist formulas for the moments of the limiting traces of the form (2.7). The first moment admits a particularly simple formula and is given (in our notation) by the following proposition from [Ok].

Proposition 2.13 ([Ok, Section 2.6.1]). Take β = 2 and $A = \mathbb{R}_{\ge 0}$. Then, for all $T > 0$,
$$\mathbb{E}\big[\mathrm{Trace}(\mathcal{U}(T))\big] = \mathbb{E}\bigg[\sum_{i\ge 1} e^{T\eta^i/2}\bigg] = \sqrt{\frac{2}{\pi}}\,T^{-3/2}\,e^{T^3/96}.$$

On the other hand, our expression for the kernel $K(x,y;T)$ and a suitable path transformation allow us to write the same trace in terms of a functional of a standard Brownian excursion $e$ on the time interval $[0,1]$.

Proposition 2.14. Take $A = \mathbb{R}_{\ge 0}$. Then, for all $T > 0$,
(2.9) $$\mathbb{E}\big[\mathrm{Trace}(\mathcal{U}(T))\big] = \mathbb{E}\bigg[\sum_{i\ge 1} e^{T\eta^i/2}\bigg] = \sqrt{\frac{2}{\pi}}\,T^{-3/2}\,\mathbb{E}\bigg[\exp\Big(-\frac{T^{3/2}}{2}\int_0^1 e(t)\,dt + \frac{T^{3/2}}{2\beta}\int_0^\infty (l_y)^2\,dy\Big)\bigg],$$
where $e$ is a standard Brownian excursion on the time interval $[0,1]$, and each $l_y$ is the total local time of $e$ at level $y$.

The proof of Proposition 2.14 is given in Section 7.

Comparing Propositions 2.13 and 2.14 one obtains the following corollary of independent interest.

Corollary 2.15. Let $e$ be a standard Brownian excursion on the time interval $[0,1]$ and, for each $y \ge 0$, let $l_y$ be the total local time of $e$ at level $y$.
Then,
(2.10) $$\int_0^1 e(t)\,dt - \frac{1}{2}\int_0^\infty (l_y)^2\,dy$$
is a Gaussian random variable of mean 0 and variance $\frac{1}{12}$.

The proof of Corollary 2.15 is given in Section 7. To the best of our knowledge, Corollary 2.15 is new and did not appear previously in the path transformation literature. However, it has been established in that literature (see [Je], [BY], [CSY, Theorem 2.1]) that the two terms in (2.10) have the same distribution. In particular, this implies that the expectation of the random variable in (2.10) indeed equals 0. It would be interesting to find an independent proof of Corollary 2.15 which does not rely on random matrix theory.

Proposition 2.14 also gives a partial explanation for the special role that the value β = 2 plays. Indeed, expanding the last exponential function in (2.9) into a power series we get
(2.11) $$\mathbb{E}\big[\mathrm{Trace}(\mathcal{U}(T))\big] = \sqrt{\frac{2}{\pi}}\,T^{-3/2} - \frac{1}{\sqrt{2\pi}}\,\mathbb{E}\bigg[\int_0^1 e(t)\,dt - \frac{1}{\beta}\int_0^\infty (l_y)^2\,dy\bigg] + \frac{1}{4\sqrt{2\pi}}\,T^{3/2}\,\mathbb{E}\bigg[\bigg(\int_0^1 e(t)\,dt - \frac{1}{\beta}\int_0^\infty (l_y)^2\,dy\bigg)^2\bigg] + \cdots.$$
In particular, β = 2 is the only case in which the second term in the expansion (2.11) vanishes.

3. Combinatorics of high powers of tridiagonal matrices

In this section, we give a sketch of the proof of Theorem 2.8 and, in particular, explain how the Brownian bridges $B^{x,y}$ and the Brownian motion $W$ in the definition of the kernel $K_A(x,y;T)$ arise in the study of high powers of the matrix $M_N$. The technical estimates required to justify the steps of this sketch are then presented in Section 4 below, culminating in the complete proof of the theorem in Section 4.3.

Our aim is to study the matrix elements and the trace of a high power of the matrix $M_N$ and its principal submatrices. For the sake of a cleaner notation, we consider only the full matrix $M_N$. To study its submatrices one only needs to restrict the (scaled) indices to the corresponding set $A$.

By definition,
(3.1) $$(M_N)^k[i,i'] = \sum M_N[i_0,i_1]\,M_N[i_1,i_2]\cdots M_N[i_{k-2},i_{k-1}]\,M_N[i_{k-1},i_k],$$
where the sum is taken over all sequences of integers $i_0, i_1, \ldots, i_k$ in $\{1,2,\ldots,N\}$ such that $i_0 = i$, $i_k = i'$, and $|i_j - i_{j-1}| \le 1$ for all $j = 1,2,\ldots,k$. Hereby, the factors of the form $M_N[m,m+1]$ or $M_N[m,m-1]$ (given by $b(m) = \sqrt{m} + \xi(m)$) are given by $\sqrt{m}$ at the leading order in $m$, whereas factors of the form $M_N[m,m]$ (given by $a(m)$) are of order 1 in $m$.

We are interested in $\mathcal{M}(T, A, N)$ and take first $k = \lfloor TN^{2/3}\rfloor$. Throughout the argument we assume that $k$ is even, with the odd case being very similar. Let us consider the sequences in (3.1) without "horizontal" segments $i_{j-1} = i_j$. Note that we need to assume that $i' - i$ is even, as otherwise the sum is empty. With the notation $a \wedge b$ for $\min(a,b)$, the corresponding part of the sum in (3.1) is
(3.2) $$\big(2\sqrt{N}\big)^k\cdot\frac{1}{2^k}\sum_{\substack{1\le i_0,i_1,\ldots,i_k\le N\\ |i_j - i_{j-1}| = 1\ \text{for all}\ j\\ i_0 = i,\ i_k = i'}}\;\prod_{l=1}^k \frac{\sqrt{i_l\wedge i_{l-1}}}{\sqrt{N}}\bigg(1 + \frac{\xi(i_l\wedge i_{l-1})}{\sqrt{i_l\wedge i_{l-1}}}\bigg).$$

The prefactor $(2\sqrt{N})^k$ corresponds to the scaling under which the limiting spectral interval is $[-1,1]$. It is also precisely the normalization of $M_N$ used in the definition of $\mathcal{M}(T, A, N)$, and we need to identify the $N \to \infty$ limit of the rest of the expression in (3.2).

Write $i_*$ for $\min(i_0, i_1, \ldots, i_k)$. It is not hard to see that the contribution of the sequences with $\frac{N - i_*}{N^{1/3}} \to \infty$ to the sum in (3.2) becomes negligible in the limit, so that one can restrict the attention to sequences with $\limsup_{N\to\infty}\frac{N - i_*}{N^{1/3}} < \infty$.
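
As a brute-force consistency check on the path expansion (3.1), the toy sketch below (ours) enumerates all nearest-neighbor index sequences for a small random symmetric tridiagonal matrix and compares the resulting sum of products with the corresponding entry of the matrix power; sizes and names are our choices, and indices are 0-based.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(3)
N, k, i, i_prime = 6, 5, 2, 4                      # toy sizes; 0-based row/column indices
a = rng.normal(size=N)
b = rng.normal(size=N - 1)
M = np.diag(a) + np.diag(b, 1) + np.diag(b, -1)

# Sum over all sequences i_0 = i, ..., i_k = i_prime with |i_j - i_{j-1}| <= 1, as in (3.1).
total = 0.0
for steps in product((-1, 0, 1), repeat=k):        # the increments i_j - i_{j-1}
    path = np.concatenate(([i], i + np.cumsum(steps)))
    if path[-1] == i_prime and path.min() >= 0 and path.max() < N:
        total += np.prod([M[path[j], path[j + 1]] for j in range(k)])

print(total, np.linalg.matrix_power(M, k)[i, i_prime])   # the two numbers coincide
```
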
In particular, we choose $i$, $i'$ such that
$$x := \lim_{N\to\infty}\frac{N-i}{N^{1/3}} < \infty, \qquad y := \lim_{N\to\infty}\frac{N-i'}{N^{1/3}} < \infty.$$
Note further that we are summing over the trajectories of a simple random walk bridge with $k$ steps connecting $i$ to $i'$ (that is, a simple random walk with $k$ steps conditioned on having the prescribed endpoints). Our aim is to prove that the normalized sum converges to an integral with respect to the law of the Brownian bridge connecting $x$ to $y$.

Each summand in (3.2) can be trivially rewritten as
(3.3) $$\exp\Bigg(\frac{1}{2}\sum_{l=1}^k \log\Big(1 - \frac{N - i_l\wedge i_{l-1}}{N}\Big) + \sum_{l=1}^k \log\Big(1 + \frac{\xi(i_l\wedge i_{l-1})}{\sqrt{i_l\wedge i_{l-1}}}\Big)\Bigg).$$
For terms with $\limsup_{N\to\infty}\frac{N-i_*}{N^{1/3}} < \infty$, the arguments of the logarithms in the first sum are close to 1, and one can use the approximation $\log(1+z) \approx z$ (here, $z$ is of the order $N^{-2/3}$, and there are order $N^{2/3}$ summands). Similarly, for the logarithms in the second sum, consider the Taylor expansion
$$\log\Big(1 + \frac{\xi(i_l\wedge i_{l-1})}{\sqrt{i_l\wedge i_{l-1}}}\Big) = \frac{\xi(i_l\wedge i_{l-1})}{\sqrt{i_l\wedge i_{l-1}}} - \frac{1}{2}\,\frac{\xi(i_l\wedge i_{l-1})^2}{i_l\wedge i_{l-1}} + \cdots$$
and note that already the second term is of order $N^{-1}$ (in expectation). Since there are order $N^{2/3}$ summands, only the first term can contribute to the $N \to \infty$ limit. Consequently, in that limit the expression from (3.3) can be replaced by
(3.4) $$\exp\Bigg(-\frac{1}{2N}\sum_{l=1}^k \big(N - i_l\wedge i_{l-1}\big) + \sum_{l=1}^k \frac{\xi(i_l\wedge i_{l-1})}{\sqrt{i_l\wedge i_{l-1}}}\Bigg).$$

At this stage, we observe that
(3.5) $$\sum_{l=1}^k \frac{\xi(i_l\wedge i_{l-1})}{\sqrt{i_l\wedge i_{l-1}}} = \sum_{h=i_*}^{N}\frac{\xi(h)}{\sqrt{h}}\,\big|\{l:\, i_l\wedge i_{l-1} = h\}\big|.$$

A typical trajectory of a simple random walk bridge with $k$ steps connecting $i$ to $i'$ visits an order of $k^{1/2}$ sites, and the corresponding "occupation times" $\big|\{l:\, i_l\wedge i_{l-1} = h\}\big|$ are of the order $k^{1/2}$. Therefore, for every such trajectory, the sum on the right-hand side of (3.5) is a sum of independent random variables with means of orders $o(h^{-2/3}k^{1/2}) = o(N^{-1/3})$ and variances of orders $O(h^{-1}k) = O(N^{-1/3})$. Since there are an order of $N^{1/3}$ summands, the limit of the sum is given by the Central Limit Theorem. More specifically, the described random walk bridge converges in the limit $N \to \infty$ to a standard Brownian bridge on $[0,T]$ connecting $x$ to $y$, its occupation times normalized by $N^{1/3}$ converge to the local times of the Brownian bridge, and so the variance of the limiting centered Gaussian random variable comes out to $s_\xi^2\int_0^\infty L_a(B^{x,y})^2\,da$ (see Assumption 2.1(b) and (2.4) for the notations). That random variable can be written more explicitly as
$$s_\xi\int_0^\infty L_a(B^{x,y})\,dW_\xi(a),$$
where the Brownian motion $s_\xi W_\xi(a)$ is the limit of $N^{-1/6}\sum_{h=N-\lfloor N^{1/3}a\rfloor}^{N}\xi(h)$.

Next, by a standard application of Stirling's formula we find that the number of random walk bridges of length $k = \lfloor TN^{2/3}\rfloor$ connecting $i$ to $i'$ behaves asymptotically as $2^k N^{-1/3}\sqrt{\frac{2}{\pi T}}\,e^{-(x-y)^2/(2T)}$. Since the expression in (3.2) can be viewed as a multiple of the expectation of a functional with respect to the law of such a random walk bridge, its asymptotic behavior is given by the same multiple of the corresponding functional of the Brownian bridge $B^{x,y}$:
$$\big(2\sqrt{N}\big)^k\,N^{-1/3}\sqrt{\frac{2}{\pi T}}\,e^{-(x-y)^2/(2T)}\cdot \mathbb{E}_{B^{x,y}}\bigg[\mathbf{1}_{\{\forall t:\, B^{x,y}(t)\ge 0\}}\exp\Big(-\frac{1}{2}\int_0^T B^{x,y}(t)\,dt + s_\xi\int_0^\infty L_a(B^{x,y})\,dW(a)\Big)\bigg].$$

Next, we turn to the sequences in (3.1) which have horizontal segments. We still work with an even $k$ and write $2n$ for the number of horizontal segments.
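
Before analyzing the horizontal segments, here is a short simulation sketch (ours) of the occupation-time scaling behind (3.5): a uniformly random ±1 walk bridge with $k = \lfloor TN^{2/3}\rfloor$ steps started near $N$ has occupation times which, after division by $N^{1/3}$, play the role of the local times $L_a(B^{x,y})$ in the spatial variable $a = (N-h)/N^{1/3}$. Endpoints, sizes, and names are our choices, and for simplicity we do not enforce the walk to stay below $N$.

```python
import numpy as np

rng = np.random.default_rng(4)
N, T = 4000, 1.0
x, y = 1.0, 0.5                                     # edge coordinates of the endpoints
k = int(T * N ** (2.0 / 3.0))
i0 = N - int(x * N ** (1.0 / 3.0))
i1 = N - int(y * N ** (1.0 / 3.0))
if (k + i1 - i0) % 2:                               # adjust parity so that a bridge exists
    i1 -= 1

ups = (k + (i1 - i0)) // 2                          # number of +1 steps forced by the endpoints
steps = np.array([1] * ups + [-1] * (k - ups))
rng.shuffle(steps)                                  # a uniformly random walk bridge from i0 to i1
path = np.concatenate(([i0], i0 + np.cumsum(steps)))

levels = np.minimum(path[1:], path[:-1])            # i_l ^ i_{l-1} for each step, as in (3.5)
h, occ = np.unique(levels, return_counts=True)      # occupation times |{l : i_l ^ i_{l-1} = h}|
a = (N - h) / N ** (1.0 / 3.0)                      # spatial variable of the limiting local times
local_time = occ / N ** (1.0 / 3.0)                 # approximates L_a(B^{x,y}) on the grid a
print("recovered total time:", np.sum(local_time) * N ** (-1.0 / 3.0), "vs T =", T)
```
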
The sequences with $2n$ horizontal segments can be thought of as follows: take a sequence of length $k - 2n$ with no horizontal segments and insert $2n$ horizontal segments at arbitrary spots. To analyze the effect of such an insertion we start with the case $n = 1$. The corresponding part of the sum in (3.1), normalized by $(2\sqrt{N})^k$, is given by
$$\frac{1}{2^{k-2}}\sum_{\substack{1\le i_0,i_1,\ldots,i_{k-2}\le N\\ |i_j - i_{j-1}| = 1\ \text{for all}\ j\\ i_0 = i,\ i_{k-2} = i'}}\;\prod_{l=1}^{k-2}\frac{\sqrt{i_l\wedge i_{l-1}}}{\sqrt{N}}\bigg(1 + \frac{\xi(i_l\wedge i_{l-1})}{\sqrt{i_l\wedge i_{l-1}}}\bigg)\cdot\bigg(\frac{1}{(2\sqrt{N})^2}\sum_{0\le j\le l\le k-2} a(i_j)\,a(i_l)\bigg).$$
Note that the last factor can be written as the sum of the terms $\frac{1}{2}\,\frac{1}{(2\sqrt{N})^2}\big(\sum_{j=0}^{k-2}a(i_j)\big)^2$ and $\frac{1}{2}\,\frac{1}{(2\sqrt{N})^2}\sum_{j=0}^{k-2}a(i_j)^2$. An analysis as for the left-hand side of (3.5) shows that the first term tends to $\frac{1}{2}$ times the square of a Gaussian random variable
