Disorder chaos in the spherical mean-field model

Wei-Kuo Chen∗, Hsi-Wei Hsieh†, Chii-Ruey Hwang‡, Yuan-Chung Sheu§

Abstract

We consider the problem of disorder chaos in the spherical mean-field model. It concerns the behavior of the overlap between two independently sampled spin configurations from two Gibbs measures with the same external parameters. The prediction states that if the disorders in the Hamiltonians are slightly decoupled, then the overlap will be concentrated near a constant value. Following Guerra's replica symmetry breaking scheme, we establish this at the level of the free energy as well as the Gibbs measure, irrespective of the presence or absence of the external field.

Keywords: Crisanti-Sommers formula, Disorder chaos, Replica symmetry breaking
Mathematics Subject Classification (2000): 60K35, 82B44

∗ Department of Mathematics, University of Chicago. Email: [email protected]
† Institute of Mathematics, Academia Sinica. Email: [email protected]
‡ Institute of Mathematics, Academia Sinica. Email: [email protected]
§ Department of Applied Mathematics, National Chiao Tung University. Email: [email protected]

1 Introduction and main results

This paper is concerned with the chaos problem in mean-field spin glasses. It arose from the discovery that in some models, a small perturbation of the external parameters results in a dramatic change of the overall energy landscape and of the organization of the pure states of the Gibbs measure. Over the past decades, physicists have intensively studied the chaos phenomenon at the free energy level using the replica method, mostly in models with Ising spins. We refer readers to the survey of Rizzo [9] and the references therein. Recently, mathematical results have also been obtained for the Ising-spin mixed even-spin model. Chaos in disorder without external field was considered in Chatterjee [1], and a more general situation with external field was handled in Chen [2]. Some special cases of temperature chaos were obtained in Chen and Panchenko [4] and Chen [5].

The aim of this work is to investigate the problem of disorder chaos in the spherical mean-field model. Our approach is based on Guerra's replica symmetry breaking bound for the coupled free energy with overlap constraint. This methodology was adapted in Chen [2] to establish chaos in disorder for the Ising-spin mixed even-spin model with external field, where many estimates were highly involved due to the nature of the Ising spin. In this paper, we first want to illustrate how the same method can be applied equally well to the spherical model and to clarify, with more explicit and simpler computations, several ideas behind the proof sketch for Research Problem 15.7.14 on the disorder chaos problem in Talagrand [11] and Chen [2]. Our results cover both situations, when the external field is present or absent. On the technical side, we intend to understand how far the current approach can reach. In Panchenko and Talagrand [8], the same approach as in the present paper was previously used to study the conjectures of ultrametricity and chaos in temperature for the spherical pure even-spin model, where it was pointed out that these problems cannot be settled at the level of the free energy. We show that chaos in disorder is indeed a much stronger effect and can still be established at the free energy level, even in the mixed even-spin model.

We now state our main results.
For each $N \in \mathbb{N}$, let $X_N$ be a centered Gaussian process indexed by the configuration space
\[
S_N = \Big\{\sigma = (\sigma_1,\ldots,\sigma_N) \in \mathbb{R}^N : \sum_{i\le N}\sigma_i^2 = N\Big\}
\]
and equipped with the covariance structure
\[
\mathbb{E}\, X_N(\sigma^1)X_N(\sigma^2) = N\xi(R_{1,2}),
\]
where $R_{1,2} = N^{-1}\sigma^1\cdot\sigma^2$ is called the overlap between the two configurations $\sigma^1,\sigma^2 \in S_N$ and $\xi:[-1,1]\to\mathbb{R}$ is an even convex function with $\xi''(x) > 0$ for $x > 0$ and $\xi'''(x) \ge 0$ for $x \ge 0$. The spherical model is defined on $S_N$ and its Hamiltonian takes the form
\[
-H_N(\sigma) = X_N(\sigma) + h\sum_{i=1}^N \sigma_i,
\]
where $h \in \mathbb{R}$ is the strength of the external field. Set the corresponding Gibbs measure
\[
dG_N(\sigma) = \frac{1}{Z_N}\exp\big(-H_N(\sigma)\big)\,d\lambda_N(\sigma),
\]
where $d\lambda_N$ is the uniform probability measure on $S_N$ and the normalizing factor $Z_N$ is called the partition function. An important example of $\xi$ is the mixed even-spin model, $\xi(x) = \sum_{p\ge 1}\beta_p^2 x^{2p}$ for some sequence of real numbers $(\beta_p)_{p\ge 1}$ with $\sum_{p\ge 1}2^p\beta_p^2 < \infty$. Denote by $p_N = N^{-1}\mathbb{E}\log Z_N$ the free energy. Probably the most important fact about the spherical model is the Crisanti-Sommers formula [6] for the limiting free energy,
\[
\lim_{N\to\infty} p_N = \inf_{x,b}\mathcal{P}(x,b). \tag{1.1}
\]
Here, for any distribution function $x$ on $[0,1]$ and $b > \max\big\{1, \int_0^1 \xi''(s)x(s)\,ds\big\}$,
\[
\mathcal{P}(x,b) := \frac{1}{2}\Big(\frac{h^2}{b-d(0)} + \int_0^1 \frac{\xi''(q)}{b-d(q)}\,dq + b - 1 - \log b - \int_0^1 q\,\xi''(q)x(q)\,dq\Big), \tag{1.2}
\]
where $d(q) := \int_q^1 \xi''(s)x(s)\,ds$. The formula (1.1) was first verified by Talagrand [10] and later generalized to the spherical mixed $p$-spin model including odd $p$ in Chen [3]. A key fact about the variational formula (1.1) is the existence and uniqueness of the optimizer, or functional order parameter, which is guaranteed by Talagrand [10, Theorem 1.2].

In the problem of disorder chaos, we are interested in understanding how the system behaves when the disorder is perturbed. To this end, we shall consider two copies $X_N^1$ and $X_N^2$ of $X_N$ with covariance
\[
\mathbb{E}\, X_N^1(\sigma^1)X_N^2(\sigma^2) = tN\xi(R_{1,2})
\]
for some $t \in [0,1]$. In the same manner as $H_N$, $G_N$ and $Z_N$, we denote by $H_N^1, H_N^2$ the Hamiltonians, $G_N^1, G_N^2$ the Gibbs measures and $Z_N^1, Z_N^2$ the partition functions corresponding to $(X_N^1, h)$ and $(X_N^2, h)$, respectively. Let $\langle\cdot\rangle$ denote the Gibbs expectation with respect to the product measure $dG_N^1(\sigma^1)\times dG_N^2(\sigma^2)$. If $t = 1$, the two systems are identical, in which case the limiting distribution of the overlap $R_{1,2}$ under the measure $\mathbb{E}\langle\cdot\rangle$ is typically non-trivial in the replica symmetry breaking region. In contrast to the situation $t = 1$, our main results on disorder chaos, stated in the following theorems, say that the system changes dramatically at the level of the free energy and of the Gibbs measure as soon as the two systems are decoupled, $0 < t < 1$.

Theorem 1.1. For $u \in [-1,1]$ and $\alpha > 0$, define the coupled partition function
\[
Z_{N,u,\alpha} = \int_{|R_{1,2}-u|<\alpha}\exp\big(-H_N^1(\sigma^1) - H_N^2(\sigma^2)\big)\,d\lambda_N(\sigma^1)\,d\lambda_N(\sigma^2)
\]
and set the coupled free energy
\[
p_{N,u,\alpha} = \frac{1}{N}\mathbb{E}\log Z_{N,u,\alpha}. \tag{1.3}
\]
If $0 < t < 1$, there exists some $u_* \in [0,1)$ such that for all $u \neq u_*$,
\[
\limsup_{\alpha\downarrow 0}\limsup_{N\to\infty} p_{N,u,\alpha} < 2\inf_{x,b}\mathcal{P}(x,b). \tag{1.4}
\]

In other words, there is a free energy cost whenever $u \neq u_*$ for $0 < t < 1$. Here the determination of $u_*$ is a technical issue, which is described through an equation related to the Crisanti-Sommers formula as well as the associated optimizer. We shall leave the details to Section 3. Roughly speaking, $u_*$ is equal to zero if $h = 0$ and it stays positive if $h \neq 0$. As an immediate application of the Gaussian concentration of measure, Theorem 1.1 yields the concentration of the overlap near the constant $u_*$, stated as Theorem 1.2 below.
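Before stating that result, we note that the variational formula (1.1) is straightforward to explore numerically. The following minimal sketch (not taken from the paper) evaluates the Crisanti-Sommers functional (1.2) for the pure 2-spin choice $\xi(s) = \beta^2 s^2$ and a replica-symmetric order parameter $x(q) = 1_{\{q\ge q_0\}}$, and then minimizes over $(q_0, b)$ by a crude grid search; the values of $\beta$ and $h$, the grid resolutions and the restriction to this one-step form of $x$ are all illustrative assumptions.

```python
import numpy as np

# Illustrative parameters (NOT from the paper): xi(s) = beta^2 * s^2 and field h,
# with the replica-symmetric ansatz x(q) = 1_{q >= q0}.
beta, h = 0.8, 0.3
n = 1001
q = np.linspace(0.0, 1.0, n)
dq = q[1] - q[0]

def trap(y):
    """Trapezoidal rule on the uniform grid q."""
    return float(np.sum((y[1:] + y[:-1]) * 0.5 * dq))

def xi_pp(s):
    """xi''(s) for xi(s) = beta^2 * s^2."""
    return 2.0 * beta**2 * np.ones_like(s)

def cs_functional(q0, b):
    """The Crisanti-Sommers functional P(x,b) of (1.2) with x(q) = 1_{q >= q0}."""
    x = (q >= q0).astype(float)
    g = xi_pp(q) * x                       # integrand of d(q) = int_q^1 xi''(s) x(s) ds
    cum = np.concatenate(([0.0], np.cumsum((g[1:] + g[:-1]) * 0.5 * dq)))
    d = cum[-1] - cum                      # d[i] = int_{q_i}^1 xi''(s) x(s) ds
    if b <= max(1.0, d[0]):                # admissibility: b > max{1, d(0)}
        return np.inf
    return 0.5 * (h**2 / (b - d[0])
                  + trap(xi_pp(q) / (b - d))
                  + b - 1.0 - np.log(b)
                  - trap(q * xi_pp(q) * x))

# Crude grid search over (q0, b) as a stand-in for the infimum in (1.1).
best = min((cs_functional(q0, b), q0, b)
           for q0 in np.linspace(0.0, 0.99, 50)
           for b in np.linspace(1.01, 6.0, 200))
print("RS upper bound for lim p_N: %.5f  (q0 = %.3f, b = %.3f)" % best)
```

Since the search is restricted to replica-symmetric order parameters, its output is only an upper bound on the infimum in (1.1); whenever the true optimizer $x$ is not of this one-step form, the bound need not be sharp.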
Theorem 1.2. If $0 < t < 1$, then there exists some $u_* \in [0,1)$ such that for any $\varepsilon > 0$,
\[
\mathbb{E}\big\langle 1_{\{|R_{1,2}-u_*|>\varepsilon\}}\big\rangle \le K\exp\Big(-\frac{N}{K}\Big) \tag{1.5}
\]
for all $N \ge 1$, where $K$ is a constant independent of $N$.

This paper is organized as follows. Our approach is based on a two-dimensional extension of the Guerra replica symmetry breaking bound for (1.3) and on a sketch of the proof of disorder chaos in the Ising-spin mixed even-spin model, as outlined in Talagrand [11, Section 15.7] and later implemented in Chen [2]. In Section 2, using Guerra's bound, we compute explicit and manageable upper bounds for the coupled free energy (1.3). These results are used in Section 3, where we first describe how to determine the constant $u_*$ and then conclude Theorem 1.1. Finally, we carry out the proof of Theorem 1.2.

2 Guerra's interpolation

The main goal of this section is to derive the following upper bound for the coupled free energy (1.3), which is an extended version of Proposition 7.8 in [10].

Proposition 2.1. For any distribution function $x$ on $[0,1]$, $\lambda \in \mathbb{R}$ and $b > \int_0^1 \xi''(s)x(s)\,ds + |\lambda|$, we have that for any $u \in [-1,1]$,
\[
\limsup_{\alpha\downarrow 0}\limsup_{N\to\infty} p_{N,u,\alpha} \le \mathcal{P}_u(x,b,\lambda), \tag{2.1}
\]
where the functional $\mathcal{P}_u(x,b,\lambda)$ is defined as follows. Let $\eta \in \{1,-1\}$ be such that $u = \eta|u|$. Set $d(q) = \int_q^1 \xi''(s)x(s)\,ds$ and
\[
\phi_u(q) = d(u) + \frac{1-t}{1+t}\big(d(q)-d(u)\big).
\]
Define
\[
\mathcal{P}_u(x,b,\lambda) :=
\begin{cases}
T_u(x,b,\lambda) + \dfrac{h^2}{b-\lambda-d(0)}, & \text{if } u \in [0,1],\\[1ex]
T_u(x,b,\lambda) + \dfrac{h^2}{b-\lambda-\phi_{|u|}(0)}, & \text{if } u \in [-1,0),
\end{cases}
\]
where
\[
\begin{aligned}
T_u(x,b,\lambda) = {}& \log\sqrt{\frac{b^2}{b^2-\lambda^2}} + \frac{1+t}{2}\int_0^{|u|}\frac{\xi''(s)}{b-\eta\lambda-d(s)}\,ds + \frac{1-t}{2}\int_0^{|u|}\frac{\xi''(s)}{b+\eta\lambda-\phi_{|u|}(s)}\,ds\\
&+ \frac{1}{2}\int_{|u|}^1\frac{\xi''(s)}{b-\lambda-d(s)}\,ds + \frac{1}{2}\int_{|u|}^1\frac{\xi''(s)}{b+\lambda-d(s)}\,ds\\
&- \lambda u + b - 1 - \log b - \int_0^1 q\,\xi''(q)x(q)\,dq.
\end{aligned} \tag{2.2}
\]

We mainly follow the procedure of the proof of Theorem 5.3 in [10] to prove Proposition 2.1. Fix $u \in [-1,1]$ and $\eta \in \{1,-1\}$ with $u = \eta|u|$. It suffices to prove (2.1) only for discrete $x$. For $k \ge 0$, consider two sequences of real numbers $\mathbf{m} = (m_\ell)_{0\le\ell\le k}$ and $\mathbf{q} = (q_\ell)_{0\le\ell\le k+1}$ that satisfy
\[
\begin{aligned}
0 &= m_0 \le m_1 \le \cdots \le m_{k-1} \le m_k = 1,\\
0 &= q_0 \le q_1 \le \cdots \le q_k \le q_{k+1} = 1.
\end{aligned} \tag{2.3}
\]
Let $x$ be a distribution function on $[0,1]$ associated to this triplet $(k,\mathbf{m},\mathbf{q})$, that is, $x(q) = m_\ell$ for $q \in [q_\ell,q_{\ell+1})$ and $0 \le \ell \le k$, and $x(1) = 1$. Without loss of generality, we may assume that $q_\tau = |u|$ for some $0 \le \tau \le k+1$. Define the sequence $\mathbf{n} = (n_p)_{0\le p\le k}$ by
\[
0 = n_0,\quad n_1 = \frac{m_1}{1+t},\ \ldots,\ n_{\tau-1} = \frac{m_{\tau-1}}{1+t},\quad n_\tau = m_\tau,\ \ldots,\ n_k = m_k. \tag{2.4}
\]
We further consider independent pairs of centered Gaussian random vectors $(y_p^1, y_p^2)_{0\le p\le k}$ with covariance
\[
\begin{aligned}
&\mathbb{E}(y_p^j)^2 = \xi'(q_{p+1}) - \xi'(q_p), && 0 \le p \le k,\ j = 1,2,\\
&\mathbb{E}\, y_p^1 y_p^2 = \eta t\big(\xi'(q_{p+1}) - \xi'(q_p)\big), && 0 \le p < \tau,\\
&y_p^1,\ y_p^2 \ \text{are independent}, && \tau \le p \le k.
\end{aligned} \tag{2.5}
\]
Let $(y_{i,p}^1, y_{i,p}^2)_{0\le p\le k}$, $1 \le i \le N$, be independent copies of $(y_p^1, y_p^2)_{0\le p\le k}$, independent of $X_N^1, X_N^2$. Following Guerra's scheme, we define the interpolated Hamiltonian $H_{N,a}(\sigma^1,\sigma^2)$ for $a \in [0,1]$,
\[
-H_{N,a}(\sigma^1,\sigma^2) = \sqrt{a}\big(X_N^1(\sigma^1) + X_N^2(\sigma^2)\big) + \sum_{j=1}^2\sum_{i=1}^N\Big(\sqrt{1-a}\sum_{0\le p\le k} y_{i,p}^j + h\Big)\sigma_i^j.
\]
Define
\[
F_{k+1}(a) = \log\int_{|R_{1,2}-u|<\alpha}\exp\big(-H_{N,a}(\sigma^1,\sigma^2)\big)\,d\lambda_N(\sigma^1)\,d\lambda_N(\sigma^2).
\]
Denote by $\mathbb{E}_p$ the expectation in the random variables $(y_{i,p}^1,y_{i,p}^2),\ldots,(y_{i,k}^1,y_{i,k}^2)$ and define recursively, for $0 \le p \le k$,
\[
F_p(a) =
\begin{cases}
\dfrac{1}{n_p}\log\mathbb{E}_p\exp n_p F_{p+1}(a), & \text{if } n_p \neq 0,\\[1ex]
\mathbb{E}_p F_{p+1}(a), & \text{if } n_p = 0.
\end{cases}
\]
Finally, set $\phi(a) = N^{-1}\mathbb{E} F_0(a)$ and denote $F_0 = \phi(0)$. Following essentially the same proof as either Theorem 5 in [8] or Theorem 7.1 in [10], one can prove that the interpolated free energy $\phi$ yields the following proposition.
Proposition 2.2. For any $\alpha > 0$ and $x$ corresponding to $(k,\mathbf{m},\mathbf{q})$, we have that
\[
p_{N,u,\alpha} \le F_0 - (1+t)\sum_{0\le p<\tau} n_p\big(\theta(q_{p+1}) - \theta(q_p)\big) - \sum_{\tau\le p\le k} n_p\big(\theta(q_{p+1}) - \theta(q_p)\big) + R, \tag{2.6}
\]
where $\theta(q) = q\xi'(q) - \xi(q)$ and $\limsup_{N\to\infty}|R| = 0$.

Substituting (2.4) into the right-hand side of (2.6), a direct computation gives
\[
\begin{aligned}
(1+t)\sum_{0\le p<\tau} n_p\big(\theta(q_{p+1}) - \theta(q_p)\big) + \sum_{\tau\le p\le k} n_p\big(\theta(q_{p+1}) - \theta(q_p)\big)
&= \sum_{0\le p\le k} m_p\big(\theta(q_{p+1}) - \theta(q_p)\big)\\
&= \int_0^1 q\,\xi''(q)x(q)\,dq.
\end{aligned} \tag{2.7}
\]
We now turn to the control of the quantity $F_0$. For $b > 1$, we denote by $\nu_N^b$ the law of $N$ i.i.d. Gaussian random variables with mean zero and variance $b^{-1}$, that is,
\[
d\nu_N^b(y) = \Big(\frac{b}{2\pi}\Big)^{N/2}\exp\Big(-\frac{b}{2}\|y\|^2\Big)\,dy.
\]
Let $\tau_N^b = -N^{-1}\log\nu_N^b\big(\{\sigma : \|\sigma\|^2 \ge N\}\big)$. Without ambiguity, we simply write $\nu^b$ for $\nu_1^b$. Given a number $\lambda$, we define the function
\[
B_{k+1}(x^1,x^2,\lambda) = \log\int\exp\big(x^1\cdot\sigma^1 + x^2\cdot\sigma^2 + \lambda\,\sigma^1\cdot\sigma^2\big)\,d\nu_N^b(\sigma^1)\,d\nu_N^b(\sigma^2)
\]
for $x^1, x^2 \in \mathbb{R}^N$ and, recursively for $1 \le p \le k$,
\[
B_p(x^1,x^2,\lambda) =
\begin{cases}
\dfrac{1}{n_p}\log\mathbb{E}\exp n_p B_{p+1}(x^1+y_p^1, x^2+y_p^2, \lambda), & \text{if } n_p \neq 0,\\[1ex]
\mathbb{E}\, B_{p+1}(x^1+y_p^1, x^2+y_p^2, \lambda), & \text{if } n_p = 0,
\end{cases}
\]
where $y_p^j = (y_{i,p}^j)_{1\le i\le N}$ for $j = 1,2$ and $0 \le p \le k$. Let $\mathbf{h} = (h,\ldots,h) \in \mathbb{R}^N$. Following the same argument as in the proof of Lemma 7.1 in [10], we obtain

Lemma 2.1. Let $u \in [-1,1]$, $\alpha > 0$ and $\lambda \in \mathbb{R}$. If $b > \int_0^1 \xi''(s)x(s)\,ds + |\lambda|$, then
\[
F_0 \le -\lambda u + |\lambda|\alpha + 2\tau_N^b + \frac{1}{N}\mathbb{E}\, B_1(\mathbf{h}+y_0^1, \mathbf{h}+y_0^2, \lambda). \tag{2.8}
\]

To compute the term $N^{-1}\mathbb{E} B_1(\mathbf{h}+y_0^1, \mathbf{h}+y_0^2, \lambda)$, we will need a technical lemma.

Lemma 2.2. For $x^1, x^2 \in \mathbb{R}$, define
\[
\begin{aligned}
J^1_{k+1}(x^1,x^2,\lambda) &= \log\int\exp\Big(\rho^1\,\frac{x^1+x^2}{\sqrt 2}\Big)\,d\nu_1^{b-\lambda}(\rho^1) = \frac{(x^1+x^2)^2}{4(b-\lambda)},\\
J^2_{k+1}(x^1,x^2,\lambda) &= \log\int\exp\Big(\rho^2\,\frac{x^1-x^2}{\sqrt 2}\Big)\,d\nu_1^{b+\lambda}(\rho^2) = \frac{(x^1-x^2)^2}{4(b+\lambda)},
\end{aligned} \tag{2.9}
\]
and recursively, for $1 \le p \le k$ and $j = 1,2$,
\[
J_p^j(x^1,x^2,\lambda) =
\begin{cases}
\dfrac{1}{n_p}\log\mathbb{E}\exp n_p J^j_{p+1}(x^1+y_p^1, x^2+y_p^2, \lambda), & \text{if } n_p \neq 0,\\[1ex]
\mathbb{E}\, J^j_{p+1}(x^1+y_p^1, x^2+y_p^2, \lambda), & \text{if } n_p = 0.
\end{cases}
\]
Then
\[
\frac{1}{N}\mathbb{E}\, B_1(\mathbf{h}+y_0^1, \mathbf{h}+y_0^2, \lambda) = \log\sqrt{\frac{b^2}{b^2-\lambda^2}} + \mathbb{E}\, J_1^1(h+y_0^1, h+y_0^2, \lambda) + \mathbb{E}\, J_1^2(h+y_0^1, h+y_0^2, \lambda). \tag{2.10}
\]

Proof. For $x^1, x^2 \in \mathbb{R}$ and $1 \le p \le k+1$, we define the functions
\[
\begin{aligned}
\Gamma_{k+1}(x^1,x^2,\lambda) &= \log\int\exp\big(x^1\sigma^1 + x^2\sigma^2 + \lambda\sigma^1\sigma^2\big)\,d\nu^b(\sigma^1)\,d\nu^b(\sigma^2),\\
\Gamma_p(x^1,x^2,\lambda) &= \frac{1}{n_p}\log\mathbb{E}\exp n_p\,\Gamma_{p+1}(x^1+y_p^1, x^2+y_p^2, \lambda), \qquad 1 \le p \le k,
\end{aligned}
\]
with the usual modification $\Gamma_p = \mathbb{E}\,\Gamma_{p+1}(x^1+y_p^1, x^2+y_p^2, \lambda)$ when $n_p = 0$. Since $(\sigma_1^1,\sigma_1^2),\ldots,(\sigma_N^1,\sigma_N^2)$ are independent under the measure $\nu_N^b\times\nu_N^b$, we see recursively that $B_p(x^1,x^2,\lambda) = \sum_{i\le N}\Gamma_p(x_i^1,x_i^2,\lambda)$, where $x^j = (x_i^j)_{i\le N}$ for $j = 1,2$. Consequently,
\[
\frac{1}{N}\mathbb{E}\, B_1(\mathbf{h}+y_0^1, \mathbf{h}+y_0^2, \lambda) = \mathbb{E}\,\Gamma_1(h+y_0^1, h+y_0^2, \lambda).
\]
Now, making the change of variables
\[
\sigma^1 = \frac{\rho^1+\rho^2}{\sqrt 2}, \qquad \sigma^2 = \frac{\rho^1-\rho^2}{\sqrt 2}
\]
and noting that $\rho^1, \rho^2$ are i.i.d. Gaussian with mean zero and variance $b^{-1}$, we obtain
\[
\begin{aligned}
\Gamma_{k+1}(x^1,x^2,\lambda)
&= \log\int\exp\Big(\frac{x^1+x^2}{\sqrt 2}\rho^1 + \frac{x^1-x^2}{\sqrt 2}\rho^2 + \lambda\frac{(\rho^1)^2}{2} - \lambda\frac{(\rho^2)^2}{2}\Big)\,d\nu^b(\rho^1)\,d\nu^b(\rho^2)\\
&= \log\int\exp\Big(\frac{x^1+x^2}{\sqrt 2}\rho^1 + \lambda\frac{(\rho^1)^2}{2}\Big)\,d\nu^b(\rho^1) + \log\int\exp\Big(\frac{x^1-x^2}{\sqrt 2}\rho^2 - \lambda\frac{(\rho^2)^2}{2}\Big)\,d\nu^b(\rho^2)\\
&= \log\sqrt{\frac{b^2}{b^2-\lambda^2}} + J^1_{k+1}(x^1,x^2,\lambda) + J^2_{k+1}(x^1,x^2,\lambda).
\end{aligned}
\]
Since $y_p^1+y_p^2$ and $y_p^1-y_p^2$ are independent, starting with (2.9), an iterative argument implies that $J^1_{p+1}(x^1+y_p^1, x^2+y_p^2, \lambda)$ and $J^2_{p+1}(x^1+y_p^1, x^2+y_p^2, \lambda)$ are independent of each other, which yields
\[
\Gamma_p(x^1,x^2,\lambda) = \log\sqrt{\frac{b^2}{b^2-\lambda^2}} + J_p^1(x^1,x^2,\lambda) + J_p^2(x^1,x^2,\lambda)
\]
for $1 \le p \le k+1$, and hence (2.10). ⊔⊓

Proof of Proposition 2.1. The proof is essentially based on an explicit calculation of the right-hand side of (2.10). To lighten notation, we set
\[
\begin{aligned}
v_p &= \xi'(q_{p+1}) - \xi'(q_p) && \text{if } 0 \le p \le k,\\
d'_p &= \sum_{p\le\ell\le\tau-1} n_\ell v_\ell = \frac{1}{1+t}\int_{q_p}^{q_\tau}\xi''(s)x(s)\,ds && \text{if } 0 \le p \le \tau-1, \qquad d'_\tau = 0,\\
d_p &= \sum_{p\le\ell\le k} n_\ell v_\ell = \int_{q_p}^{1}\xi''(s)x(s)\,ds && \text{if } \tau \le p \le k, \qquad d_{k+1} = 0.
\end{aligned}
\]
Recall (2.5).
It is straightforward to obtain that, for $\tau \le p \le k$,
\[
\mathbb{E}\Big(\frac{y_p^1+y_p^2}{\sqrt 2}\Big)^2 = \mathbb{E}\Big(\frac{y_p^1-y_p^2}{\sqrt 2}\Big)^2 = v_p
\]
and, for $0 \le p < \tau$,
\[
\mathbb{E}\Big(\frac{y_p^1+y_p^2}{\sqrt 2}\Big)^2 = (1+\eta t)v_p, \qquad \mathbb{E}\Big(\frac{y_p^1-y_p^2}{\sqrt 2}\Big)^2 = (1-\eta t)v_p.
\]
Combining these with the formula that, for a standard Gaussian random variable $z$, $nv < L$ and $y \in \mathbb{R}$,
\[
\frac{1}{n}\log\mathbb{E}\exp\frac{n}{2L}(y+\sqrt v z)^2 =
\begin{cases}
\dfrac{y^2}{2(L-nv)} + \dfrac{1}{2n}\log\dfrac{L}{L-nv}, & \text{if } n > 0,\\[1ex]
\dfrac{y^2}{2L} + \dfrac{v}{2L}, & \text{if } n = 0,
\end{cases}
\]
an iterative procedure leads to
\[
\begin{aligned}
\mathbb{E}\, J_1^1(h+y_0^1, h+y_0^2, \lambda) = {}& \frac{2h^2}{2\big(b-\lambda-(d_\tau+(1+\eta t)d'_0)\big)}\\
&+ \frac{1}{2}\sum_{p=0}^{\tau-1}\frac{1}{n_p}\log\frac{b-\lambda-(d_\tau+(1+\eta t)d'_{p+1})}{b-\lambda-(d_\tau+(1+\eta t)d'_p)} + \frac{1}{2}\sum_{p=\tau}^{k}\frac{1}{n_p}\log\frac{b-\lambda-d_{p+1}}{b-\lambda-d_p}
\end{aligned} \tag{2.11}
\]
and
\[
\mathbb{E}\, J_1^2(h+y_0^1, h+y_0^2, \lambda) = \frac{1}{2}\sum_{p=0}^{\tau-1}\frac{1}{n_p}\log\frac{b+\lambda-(d_\tau+(1-\eta t)d'_{p+1})}{b+\lambda-(d_\tau+(1-\eta t)d'_p)} + \frac{1}{2}\sum_{p=\tau}^{k}\frac{1}{n_p}\log\frac{b+\lambda-d_{p+1}}{b+\lambda-d_p}, \tag{2.12}
\]
where summands with $n_p = 0$ are understood via the $n = 0$ case of the Gaussian formula above, that is, as the limit $n_p \downarrow 0$. Now recall from the statement of Proposition 2.1 that $d(q) = \int_q^1\xi''(s)x(s)\,ds$ and
\[
\phi_{|u|}(q) = d(q_\tau) + \frac{1-t}{1+t}\big(d(q)-d(q_\tau)\big).
\]
Since $d'_p = \frac{d(q_p)-d(q_\tau)}{1+t}$ and
\[
d_\tau + (1\pm\eta t)d'_p = d(q_\tau) + \frac{1\pm\eta t}{1+t}\big(d(q_p)-d(q_\tau)\big) =
\begin{cases}
d(q_p), & \text{if } \pm\eta t = t,\\
\phi_{|u|}(q_p), & \text{if } \pm\eta t = -t,
\end{cases}
\]
we have
\[
\frac{2h^2}{2\big(b-\lambda-(d_\tau+(1+\eta t)d'_0)\big)} =
\begin{cases}
\dfrac{2h^2}{2(b-\lambda-d(0))}, & \text{if } u \ge 0,\\[1ex]
\dfrac{2h^2}{2(b-\lambda-\phi_{|u|}(0))}, & \text{if } u < 0.
\end{cases} \tag{2.13}
\]
Using the fundamental theorem of calculus, the fact that $x(s) = m_p$ for $s \in [q_p,q_{p+1})$ and the notation $d'(s) := \frac{d(s)-d(q_\tau)}{1+t}$ for $s \in [0,q_\tau]$,
\[
\begin{aligned}
\sum_{p=0}^{\tau-1}\frac{1}{n_p}\log\frac{b\mp\lambda-(d_\tau+(1\pm\eta t)d'_{p+1})}{b\mp\lambda-(d_\tau+(1\pm\eta t)d'_p)}
&= \sum_{p=0}^{\tau-1}\frac{1}{n_p}\Big(\log\big(b\mp\lambda-(d_\tau+(1\pm\eta t)d'_{p+1})\big) - \log\big(b\mp\lambda-(d_\tau+(1\pm\eta t)d'_p)\big)\Big)\\
&= \sum_{p=0}^{\tau-1}\frac{(1+t)(1\pm\eta t)}{(1+t)\,m_p}\int_{q_p}^{q_{p+1}}\frac{\xi''(s)x(s)}{b\mp\lambda-(d_\tau+(1\pm\eta t)d'(s))}\,ds\\
&= (1\pm\eta t)\int_0^{q_\tau}\frac{\xi''(s)}{b\mp\lambda-(d_\tau+(1\pm\eta t)d'(s))}\,ds,
\end{aligned} \tag{2.14}
\]
so that, by the identity above, the first sums in (2.11) and (2.12) are equal, in one order or the other according to the sign of $u$, to
\[
(1+t)\int_0^{q_\tau}\frac{\xi''(s)}{b-\eta\lambda-d(s)}\,ds \qquad\text{and}\qquad (1-t)\int_0^{q_\tau}\frac{\xi''(s)}{b+\eta\lambda-\phi_{|u|}(s)}\,ds.
\]
Similarly,
\[
\begin{aligned}
\sum_{p=\tau}^{k}\frac{1}{n_p}\log\frac{b\mp\lambda-d_{p+1}}{b\mp\lambda-d_p}
&= \sum_{p=\tau}^{k}\frac{1}{n_p}\Big(\log\big(b\mp\lambda-d(q_{p+1})\big) - \log\big(b\mp\lambda-d(q_p)\big)\Big)\\
&= \sum_{p=\tau}^{k}\frac{1}{m_p}\int_{q_p}^{q_{p+1}}\frac{\xi''(s)x(s)}{b\mp\lambda-d(s)}\,ds\\
&= \int_{q_\tau}^{1}\frac{\xi''(s)}{b\mp\lambda-d(s)}\,ds.
\end{aligned} \tag{2.15}
\]
Plugging (2.13), (2.14) and (2.15) into (2.11) and (2.12), the equation (2.2), Lemmas 2.1 and 2.2 and Proposition 2.2 together complete our proof, by taking $N \to \infty$ and $\alpha \downarrow 0$ in (2.8) and noting the usual large deviation principle $\lim_{N\to\infty}\tau_N^b = 2^{-1}(b-1-\log b)$. ⊔⊓

3 Proofs of main results

Now we are ready to prove our main results. Throughout this section, $(x,b)$ stands for the optimizer in (1.1). Denote $d(q) = \int_q^1\xi''(s)x(s)\,ds$ and
\[
\phi_u(q) = d(u) + \frac{1-t}{1+t}\big(d(q)-d(u)\big).
\]
First of all, we start with a proposition that is used to determine the value $u_*$ stated in Theorems 1.1 and 1.2. Let $u_x$ be the smallest value of the support of $x$. A crucial fact about $u_x$ is that it must satisfy the equation
\[
\frac{h^2+\xi'(u_x)}{(b-d(0))^2} = u_x. \tag{3.1}
\]
This can be seen from the proof of Theorem 7.2 in [10]. In particular, (3.1) implies $u_x > 0$ if $h \neq 0$.

Proposition 3.1. For $t \in (0,1)$, define the function
\[
f(u) = \frac{h^2+t\xi'(u)}{(b-d(0))^2} - u \tag{3.2}
\]
for $u \in [-u_x, u_x]$. Then $f(u) = 0$ has a unique solution $u_*$. Moreover, $u_* = 0$ when $h = 0$ and $u_* \in (0,u_x)$ when $h \neq 0$.

Proof. Note that $\xi'''$ is an odd function. This implies that $f$ is convex on $[0,u_x]$ and concave on $[-u_x,0]$. Assume that $h \neq 0$. In this case, since $f(0) > 0$ and $f(u_x) < 0$ by (3.1), the intermediate value theorem and the convexity of $f$ on $[0,u_x]$ imply that $f(u) = 0$ has a unique solution $u_*$ on $[0,u_x]$, and it satisfies $u_* \in (0,u_x)$. In addition, since, from (3.1),
\[
f(-u_x) = \frac{h^2-t\xi'(u_x)}{(b-d(0))^2} + u_x > -\frac{h^2+\xi'(u_x)}{(b-d(0))^2} + u_x = 0,
\]
the concavity of $f$ on $[-u_x,0]$ and $f(0) > 0$ imply that $f(u) = 0$ has no solution on $[-u_x,0]$. This finishes the proof for the case $h \neq 0$. The situation for $h = 0$ is essentially identical.
If $u_x = 0$, obviously $u_* = 0$. If $u_x \neq 0$, we still have $f(-u_x) > 0 > f(u_x)$, but now $f(0) = 0$. The convexity and concavity of $f$ on $[0,u_x]$ and $[-u_x,0]$, respectively, imply that $0$ is the unique solution to $f(u) = 0$ on $[-u_x,u_x]$. ⊔⊓

Proof of Theorem 1.1. Note that
\[
\mathcal{P}_u(x,b,0) =
\begin{cases}
T_u(x,b,0) + \dfrac{h^2}{b-d(0)}, & \text{if } u \in [0,1],\\[1ex]
T_u(x,b,0) + \dfrac{h^2}{b-\phi_{|u|}(0)}, & \text{if } u \in [-1,0),
\end{cases} \tag{3.3}
\]
where, from (2.2),
\[
\begin{aligned}
T_u(x,b,0) := {}& \frac{1+t}{2}\int_0^{|u|}\frac{\xi''(s)}{b-d(s)}\,ds + \frac{1-t}{2}\int_0^{|u|}\frac{\xi''(s)}{b-\phi_{|u|}(s)}\,ds\\
&+ \int_{|u|}^1\frac{\xi''(s)}{b-d(s)}\,ds + b - 1 - \log b - \int_0^1 q\,\xi''(q)x(q)\,dq.
\end{aligned} \tag{3.4}
\]
Consider first the case $|u| > u_x$. Since $x(q) > 0$ for $q \in (u_x,|u|)$, we have, for all $s \in [0,|u|)$,
\[
d(s) - \phi_{|u|}(s) = \frac{2t}{1+t}\big(d(s)-d(|u|)\big) = \frac{2t}{1+t}\int_s^{|u|}\xi''(q)x(q)\,dq > 0 \tag{3.5}
\]
and, from (3.3),
\[
\mathcal{P}_u(x,b,0) \le T_u(x,b,0) + \frac{h^2}{b-d(0)}
\]
for any $u \in [-1,1]$. In addition, from (3.5), the sum of the first two integrals in (3.4) is strictly bounded above by
\[
\frac{1+t}{2}\int_0^{|u|}\frac{\xi''(s)}{b-d(s)}\,ds + \frac{1-t}{2}\int_0^{|u|}\frac{\xi''(s)}{b-d(s)}\,ds = \int_0^{|u|}\frac{\xi''(s)}{b-d(s)}\,ds,
\]
and as a result these inequalities, together with the equation (1.2), lead to
\[
\mathcal{P}_u(x,b,0) < 2\mathcal{P}(x,b).
\]
This completes the proof of (1.4) for $|u| > u_x$ by using Proposition 2.1. As for the case $|u| \le u_x$, since $x(q) = 0$ for $q \in [0,|u|)$, we have, for all $s \in [0,|u|]$,
\[
d(s) = \int_{u_x}^1\xi''(q)x(q)\,dq = \phi_{|u|}(s).
\]
This allows us to write
\[
\begin{aligned}
\mathcal{P}_u(x,b,\lambda) = {}& \log\sqrt{\frac{b^2}{b^2-\lambda^2}} + \frac{h^2}{b-\lambda-d(0)}\\
&+ \frac{1+t}{2}\int_0^{|u|}\frac{\xi''(s)}{b-\eta\lambda-d(s)}\,ds + \frac{1-t}{2}\int_0^{|u|}\frac{\xi''(s)}{b+\eta\lambda-d(s)}\,ds\\
&+ \frac{1}{2}\int_{|u|}^1\frac{\xi''(s)}{b-\lambda-d(s)}\,ds + \frac{1}{2}\int_{|u|}^1\frac{\xi''(s)}{b+\lambda-d(s)}\,ds\\
&- \lambda u + b - 1 - \log b - \int_0^1 q\,\xi''(q)x(q)\,dq
\end{aligned}
\]
for all $u \in [-u_x,u_x]$. A direct computation gives
\[
\mathcal{P}_u(x,b,0) = 2\mathcal{P}(x,b), \qquad \partial_\lambda\mathcal{P}_u(x,b,0) = f(u),
\]
and moreover, for $\lambda$ in a small open neighborhood of $0$,
\[
|\partial_{\lambda\lambda}\mathcal{P}_u(x,b,\lambda)| \le L,
\]
where $L$ is a positive constant independent of $\lambda$. Consequently, applying Taylor's theorem and taking $\lambda = -\delta f(u)/L$ for sufficiently small $\delta > 0$, if $u \in [-u_x,u_x]$ and $u \neq u_*$, then Proposition 3.1 yields
\[
\begin{aligned}
\limsup_{\alpha\downarrow 0}\limsup_{N\to\infty} p_{N,u,\alpha} &\le \mathcal{P}_u(x,b,0) + \partial_\lambda\mathcal{P}_u(x,b,0)\,\lambda + \frac{L}{2}\lambda^2\\
&= 2\mathcal{P}(x,b) - \frac{\delta f(u)^2}{L}\Big(1-\frac{\delta}{2}\Big)\\
&< 2\mathcal{P}(x,b).
\end{aligned}
\]
This proves (1.4) for $|u| \le u_x$ with $u \neq u_*$. ⊔⊓

At the end of this section, we prove Theorem 1.2. It will need an inequality of Gaussian concentration of measure from the appendix of [7], stated below.
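Before turning to that inequality, we record a brief numerical aside to Proposition 3.1. The constant $u_*$ is simply the unique zero of the one-dimensional function $f$ in (3.2), so once the optimizer $(x,b)$ of (1.1) is known, and with it $b$, $d(0)$ and $u_x$, it can be located by bisection. The sketch below is a rough illustration, not an implementation of the paper's variational problem: it uses hand-picked stand-in values for these quantities and the hypothetical mixture $\xi(s) = \beta^2 s^2$.

```python
# Illustrative stand-in values (NOT the optimizer of (1.1)); in the paper, b, d(0)
# and u_x are determined by the unique optimizer (x, b) and the equation (3.1).
beta, h, t = 0.8, 0.3, 0.5
b_opt, d0, u_x = 2.2, 1.0, 0.6

def xi_prime(u):
    """xi'(u) for the hypothetical mixture xi(u) = beta^2 * u^2."""
    return 2.0 * beta**2 * u

def f(u):
    """The function (3.2): f(u) = (h^2 + t*xi'(u)) / (b - d(0))^2 - u."""
    return (h**2 + t * xi_prime(u)) / (b_opt - d0)**2 - u

# On [0, u_x] the function f is convex with f(0) >= 0, and f(u_x) < 0 whenever the
# inputs are consistent with (3.1), so its unique zero u_* can be found by bisection.
lo, hi = 0.0, u_x
assert f(lo) >= 0 > f(hi), "inputs inconsistent with the sign change used in Prop. 3.1"
for _ in range(80):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
u_star = 0.5 * (lo + hi)
print("u_* ~= %.6f,  f(u_*) ~= %.2e" % (u_star, f(u_star)))
```

Setting $h = 0$ with the same stand-in values drives the output to $u_* \approx 0$, in line with Proposition 3.1.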