Randomly biased walks on subcritical trees

Gérard Ben Arous∗, Alan Hammond†

January 24, 2011

arXiv:1101.4041v1 [math.PR] 20 Jan 2011

∗ Courant Institute of Mathematical Sciences, New York University. Research supported in part by the National Science Foundation under grants DMS-0806180 and OISE-0730136.
† Department of Statistics, Oxford University. Research supported in part by NSF grants DMS-0806180 and OISE-0730136 and by EPSRC grant EP/I004378/1. This research was initiated while the second author was at N.Y.U.

Abstract

As a model of trapping by biased motion in random structure, we study the time taken for a biased random walk to return to the root of a subcritical Galton-Watson tree. We do so for trees in which these biases are randomly chosen, independently for distinct edges, according to a law that satisfies a logarithmic non-lattice condition. The mean return time of the walk is in essence given by the total conductance of the tree. We determine the asymptotic decay of this total conductance, finding it to have a pure power-law decay. In the case of the conductance associated to a single vertex at maximal depth in the tree, this asymptotic decay may be analysed by the classical defective renewal theorem, due to the non-lattice edge-bias assumption. However, the derivation of the decay for total conductance requires computing an additional constant multiple outside the power-law that allows for the contribution of all vertices close to the base of the tree. This computation entails a detailed study of a convenient decomposition of the tree, under conditioning on the tree having high total conductance. As such, our principal conclusion may be viewed as a development of renewal theory in the context of random environments.

For randomly biased random walk on a supercritical Galton-Watson tree with positive extinction probability, our main results may be regarded as a description of the slowdown mechanism caused by the presence of subcritical trees adjacent to the backbone that may act as traps that detain the walker. Indeed, this conclusion is exploited in [18] to obtain a stable limiting law for walker displacement in such a tree.

1 Introduction

The inquiry into drift and trapping for biased random walks on random structures has been pursued both by physicists and mathematicians. It was noted long ago in the physics literature [7] that, in disordered media, due to trapping in dead-end branches, the mean velocity would not be a monotone function of the bias. It was then pointed out that, in fact, strong biases could even produce sub-ballistic regimes, i.e., with zero velocity (see [11], [12], or [19] for a physics survey of the general phenomena of anomalous diffusion). This hypothesis was confirmed for biased random walks on random trees by Lyons, Pemantle and Peres [25], and on supercritical percolation clusters, independently by Berger, Gantert and Peres in dimension two [6], and by Sznitman in all dimensions d ≥ 2 [28]. For d ≥ 2, the existence of a critical bias separating the sub-ballistic and ballistic regimes will be demonstrated in [17]. These studies naturally raise the question of understanding more deeply the means by which the walk is slowed down by trapping structures in this sub-ballistic regime.
A. Sznitman mentions in [29] that this mechanism seems similar to that responsible for aging in Bouchaud's trap model ([4], [8]), whose main features are that the mean trapping times have a power-law distribution and that the tail of the distribution of the trapping times, conditionally on their mean, is exponential.

For random walks on supercritical Galton-Watson trees whose bias away from the root is constant, such a study of the trapping phenomenon was undertaken in [3]. In this context, the dead-ends responsible for the slowing down of the walk are simply subcritical Galton-Watson trees that hang off the backbone of the supercritical tree. Thus, the trapping times are related to return times to the root for biased random walks on subcritical trees. The paper [3] analyses a log-periodicity phenomenon which is very reminiscent of the classical lattice effect for random walks in random environments [21], [13], [14], [30]. This lattice effect, which has been discussed in the physics literature [27], causes holding times in traps to cluster around powers of the bias parameter. That is to say, although it may seem reasonable that such a walk has a scaling limit similar to the one obtained for Bouchaud trap models, as discussed in [4] and [5], the lattice effect is a discrete inhomogeneity in walker displacement that is persistent on all time scales and that prevents the existence of such a scaling limit.

In this paper, we study the question of how long it takes a biased random walk in a subcritical Galton-Watson tree to return to the root of the tree. In the context of biased walks on supercritical trees, our inquiry amounts to an investigation of the nature of the delaying mechanism of the subcritical trees that act as potential traps for the walk. We undertake the inquiry in the case that the biases on edges that determine the distribution of the walk are random, rather than identically equal, imposing a non-lattice condition on this randomness that serves to eliminate the log-periodic effect from the return-time distribution.

This paper is the first of two. Indeed, its sequel [18] exploits the understanding of the trapping mechanism that we obtain in the present article to prove a stable scaling limit for the randomly biased random walk on supercritical Galton-Watson trees with leaves; in this way, the two papers form a counterpart to [3], in that they show how edge-bias randomization dissipates the persistent discrete inhomogeneity that obtains in the case of constant bias.

Randomly biased random walks on infinite trees have been studied by [24], who provide a criterion for recurrence, or transience, valid for a very broad class of trees. In the case of supercritical Galton-Watson trees, randomly biased walk is studied in the recurrent regime by [15], who investigate the high-n asymptotic of the maximal displacement of the walk during [0,n]. In [1], the same model is analysed in the case of zero extinction probability and in the transient regime, the author presenting criteria for zero speed and for positive speed. The tree being leafless in [1], this work is not concerned with trapping, but rather is a contribution to understanding how increasing bias causes more rapid progress for the walk on the backbone of a supercritical Galton-Watson tree. Regarding this last question, the monotonicity of speed as a function of bias is not known even when the bias is a constant: see Question 2.1 of [26].

We now define the general class of biased random walks on finite rooted trees in which we are interested.
Let T be a finite rooted tree, with vertex set V(T), edge set E(T), and root φ. Such a tree will be called weighted if, to each unoriented edge e ∈ E(T), is assigned a number β_e ∈ [q,Q] which we will call the bias of the edge e. Here, Q ≥ q > 1 are fixed constants. In this paper, in referring to a tree, we will generally mean a weighted tree. We define the law P_{T,β} of the β-biased random walk {X_i : i ∈ N} on the set of vertices of the tree T as a Markov chain on V(T) with the following transition rules.

Definition 1 If a vertex v ∈ V(T), v ≠ φ, has offspring v_1, ..., v_k, then, for each n ∈ N,

\[ P_{T,\beta}\bigl( X_{n+1} = \overleftarrow{v} \,\big|\, X_n = v \bigr) = \frac{1}{1 + \sum_{i=1}^{k} \beta_{v,v_i}}, \]

\[ P_{T,\beta}\bigl( X_{n+1} = v_j \,\big|\, X_n = v \bigr) = \frac{\beta_{v,v_j}}{1 + \sum_{i=1}^{k} \beta_{v,v_i}}, \qquad \text{for } 1 \le j \le k, \]

where ←v denotes the parent of v, i.e., the neighbour of v which is the closest to the root. The jump-law from the root is given by

\[ P_{T,\beta}\bigl( X_{n+1} = \phi_j \,\big|\, X_n = \phi \bigr) = \frac{\beta_{\phi,\phi_j}}{\sum_{i=1}^{k(\phi)} \beta_{\phi,\phi_i}}, \qquad \text{for } 1 \le j \le k(\phi). \]

For v ∈ V(T), we write P^v_{T,β} for the law of P_{T,β} given that X(0) = v. We write E^v_{T,β} for the expectation value under P^v_{T,β}.

It is easy to see that these biased random walks are reversible and to compute their invariant measures. Indeed, they fall in the general class of "random walks on weighted graphs" (see [22], and [2] for recent expositions).

Definition 2 For every vertex v ∈ V(T), define P_{φ,v} to be the unique simple path from the root to v, and ω(v) to be the product of the biases β_e of the edges e along the path P_{φ,v}. If e = (v,w) is an edge of T, with v = ←w, we define the weight (or conductance) of e by µ(e) = µ(v,w) = ω(w). Finally, we define the unnormalized measure µ on V(T) by setting µ(v) equal to the sum of the conductances of the edges adjacent to the vertex v ∈ V(T).

It is clear that the measure µ is reversible and thus invariant for the Markov chain X_n. Moreover, by the general theory of random walks on weighted graphs, the transition rules between the adjacent vertices v and w can simply be rewritten in terms of the conductance of the edge between v and w as well as the measure µ(v):

\[ P_{T,\beta}\bigl( X_{n+1} = w \,\big|\, X_n = v \bigr) = \frac{\mu(v,w)}{\mu(v)}. \tag{1} \]

We will be interested in the first return time to the root of the walk that starts at the root, where the return time is naturally defined as inf{n > 1 : X_n = φ} under the law P^φ_{T,β}.

As we have mentioned, the random trees that we consider are subcritical Galton-Watson trees.

Definition 3 Let h = {h_i : i ∈ N}, Σ_{i=0}^∞ h_i = 1, denote an offspring distribution. We write P_h for the Galton-Watson tree with offspring distribution h, that is, for the law on rooted trees in which each vertex has an independent and h-distributed number of offspring.

We will make the following assumption on the offspring distribution.

Hypothesis 1 Let {h_i : i ∈ N}, Σ_{k=1}^∞ k h_k < 1, be a subcritical offspring distribution for which there exists c > 0 such that Σ_{l≥k} h_l ≤ exp{−ck} for each k ∈ N. We write m_h = Σ_{k=1}^∞ k h_k for the mean number of offspring.

We now describe the law of the random biases.

Definition 4 Let ν be a probability distribution on (1,∞). For a tree T sampled from P_h, let (β_e)_{e∈E(T)} be independent and identically distributed random variables, with common distribution ν. We write P_{h,ν} for the Galton-Watson tree with offspring distribution h equipped with this set of random biases.
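To make the model concrete, here is a minimal simulation sketch (our illustration, not code from the paper). It samples a Galton-Watson tree with independent edge biases, computes the weights ω(v) of Definition 2, and runs the β-biased walk of Definition 1 from the root until its first return. The offspring law Poisson(0.6) and the bias law Uniform[2, 3] are hypothetical choices compatible with Hypothesis 1 and with a bias support [q, Q] = [2, 3]; all function names are ours.

```python
import math
import random

def sample_tree(rng, mean_offspring=0.6, q=2.0, Q=3.0, max_vertices=10000):
    """Sample a weighted Galton-Watson tree; vertex 0 is the root phi.
    Returns children lists, parent pointers, bias[w] = beta on the edge
    (parent(w), w), and omega[w] = product of biases on the path from the root."""
    children, parent, bias, omega = [[]], [None], [None], [1.0]
    queue = [0]
    while queue and len(parent) < max_vertices:
        v = queue.pop()
        for _ in range(poisson(rng, mean_offspring)):
            w = len(parent)
            children[v].append(w)
            children.append([])
            parent.append(v)
            b = rng.uniform(q, Q)          # the random bias beta_e of Definition 4
            bias.append(b)
            omega.append(omega[v] * b)     # omega(w) = omega(v) * beta_{v,w}
            queue.append(w)
    return children, parent, bias, omega

def poisson(rng, lam):
    """Knuth's Poisson sampler; adequate for small lam."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def step(rng, v, children, parent, bias):
    """One move of the walk: child v_i has weight beta_{v,v_i}, and the parent
    (when it exists) has weight 1, as in Definition 1."""
    nbrs = list(children[v])
    wts = [bias[w] for w in nbrs]
    if parent[v] is not None:
        nbrs.append(parent[v])
        wts.append(1.0)
    r = rng.random() * sum(wts)
    for nbr, wt in zip(nbrs, wts):
        r -= wt
        if r <= 0.0:
            return nbr
    return nbrs[-1]

def return_time(rng, children, parent, bias):
    """H_phi = inf{n >= 1 : X_n = phi} for the walk started at the root."""
    v, n = 0, 0
    while True:
        v, n = step(rng, v, children, parent, bias), n + 1
        if v == 0:
            return n

rng = random.Random(1)
children, parent, bias, omega = sample_tree(rng)
while not children[0]:                     # insist that the root has offspring
    children, parent, bias, omega = sample_tree(rng)
print("vertices:", len(omega), "  omega(T) =", sum(omega))
trials = 20000
mean_H = sum(return_time(rng, children, parent, bias) for _ in range(trials)) / trials
print("empirical mean return time:", mean_H)
```

The quantity sum(omega) is the total weight ω(T) of (3) below; its relation to the mean return time is recorded in formula (4) and is the subject of Theorems 1 and 2.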
We make the following important non-lattice assumption on the distribution ν.

Hypothesis 2 There exist Q > q > 1 such that the support of the measure ν is contained in [q,Q]. Moreover, the support of ν ◦ log^{−1} is non-lattice. That is, the Z-linear span of log supp(ν) is dense in R.

The fact that the distribution of the bias is supported in (1,∞) means that the walk is biased away from the root. We now need to introduce the following important exponent.

Definition 5 Let χ > 0 be the unique value satisfying

\[ \int_1^\infty y^\chi \, \nu(dy) = \frac{1}{m_h}. \tag{2} \]

Our first result shows that, under the assumptions above, the mean return time to the root has a pure power-law tail. The regularity of the tail of the random variable contrasts with the case of constant bias studied in [3].

Definition 6 Under the law P^φ_{T,β}, let H_φ = inf{n ≥ 1 : X_n = φ} denote the hitting time of the root.

Here and throughout, by A(x) ∼ B(x) is meant A(x)/B(x) → 1 as x → ∞.

Theorem 1 Assume Hypotheses 1 and 2. Then there exists a constant c_1 ∈ (0,∞) such that

\[ P_{h,\nu}\Bigl( E^\phi_{T,\beta}(H_\phi) > x \Bigr) \sim c_1 x^{-\chi}, \]

where c_1 is a constant that will be specified after Theorem 2.

The electrical resistance theory of random walks, or, equivalently, the theory of random walk on weighted graphs (see [2] or [22]), may be used to compute mean return times in terms of the total conductance (or weight) of the tree. Indeed, defining ω(T) to be this total weight Σ_{e∈E(T)} c(e), we see that

\[ \omega(T) = \sum_{v \in V(T)} \omega(v). \tag{3} \]

The mean return time is then given by

\[ E^\phi_{T,\beta}(H_\phi) = \frac{2}{N}\bigl( \omega(T) - 1 \bigr), \tag{4} \]

where N is the number of offspring of the root. This formula arises by averaging the return time over the offspring of the root visited at the first step of the walk, with the commute-time formula of electrical resistance theory being used to express the mean return time from offspring to root. See the upcoming (7) for the relevant expression for this mean return time.

Using (4), Theorem 1 is a simple consequence of the following pure power-law tail for the quantity ω(T).

Theorem 2 Assume Hypotheses 1 and 2. There exists a constant d_1 ∈ (0,∞) such that

\[ P_{h,\nu}\bigl( \omega(T) > u \bigr) \sim d_1 u^{-\chi}. \]

Remark. The two constants c_1 and d_1 in Theorems 1 and 2 are explicitly given by

\[ c_1 = 2^{-\chi} d_1 \sum_{k=1}^{\infty} h_k k^{1-\chi} \tag{5} \]

and, denoting by D(T) = sup{d(φ,v) : v ∈ V(T)} the depth of the tree T,

\[ d_1 = \frac{1}{\chi m_h} \Bigl( \int_1^\infty y^\chi \log(y)\, \nu(dy) \Bigr)^{-1} \lim_{k \to \infty} E_{h,\nu}\Bigl( \omega(T)^\chi \mathbf{1}_{D(T)=k} \Bigr). \tag{6} \]
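For concreteness, the following short sketch (an illustration under assumed parameters, not drawn from the paper) solves equation (2) for χ by bisection in the hypothetical case ν = Uniform[2, 3] and m_h = 0.6; by Theorems 1 and 2, this χ is the common power-law exponent of the tails of ω(T) and of the mean return time.

```python
def biased_moment(chi, q=2.0, Q=3.0):
    """E[y^chi] for y ~ Uniform[q, Q]; the integral in equation (2) in closed form."""
    return (Q ** (chi + 1.0) - q ** (chi + 1.0)) / ((chi + 1.0) * (Q - q))

def solve_chi(m_h, lo=0.0, hi=50.0, tol=1e-12):
    """Bisection for the unique chi > 0 with E[y^chi] = 1/m_h.  The left-hand
    side is continuous, strictly increasing in chi, and equals 1 < 1/m_h at
    chi = 0 (because m_h < 1), so there is exactly one root."""
    target = 1.0 / m_h
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if biased_moment(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

chi = solve_chi(m_h=0.6)
print("chi =", chi)                                  # roughly 0.56 for these choices
print("check:", biased_moment(chi), "vs", 1.0 / 0.6)
```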
It is natural to ask not only about the mean of the return time but also about how it is distributed. We will present here an asymptotically accurate description of this distribution in the case that the tree is large (in a natural sense). This description plays a central role in the analysis of randomly biased walk on a supercritical Galton-Watson tree undertaken in [18]. We will see that the behaviour of the return time distribution has two entirely distinct regimes, according to whether or not the walker has the opportunity to visit the base of the tree (and so explore most of the tree) before its return to the root. We will condition on the events that the tree is large and that the walker makes such a deep visit, and conclude that the tail of the return time is then exponential, completing the analogy with the Bouchaud trap model that we have mentioned.

Firstly, we introduce notation for conditioning that a tree be "large" enough. A tree T will be regarded as large if ω(T) is large.

Definition 7 For any u > 0 we set P_{h,ν,u} = P_{h,ν}(· | ω(T) > u).

We then introduce our notation for the distribution of the random walks. We sample a tree T equipped with its biases (β_e), choose a vertex v ∈ V(T), and run the Markov chain on V(T), started at v, that is specified in Definition 1. We denote by P_{h,ν} × P^v_{T,β} the joint distribution of the tree, the biases and the random walk started at v. We will also need to sample the weighted tree from the conditioned measure P_{h,ν,u}. Naturally, we will then denote the joint distribution of the tree, the biases and the walk by P_{h,ν,u} × P^v_{T,β}.

A technical point is that, for a random tree T, we need to specify a selection rule for the starting point v ∈ V(T) of the walk. A selection rule can be easily defined using the classical formal coding of Galton-Watson trees (see [23], Section 1.1). In Appendix A, we review this coding and the definition of a selection rule.

We also need to introduce notation for conditioning that the random walk explore enough of the tree. In the next definition, we again use the coding of trees in Appendix A to specify a lexicographical ordering on the set of vertices of any Galton-Watson tree.

Definition 8 We call v_base the lexicographically minimal vertex of maximal depth in V(T).

Definition 9 Under the law P^φ_{T,β}, the walk X is said to make a deep excursion into T if it hits v_base before returning to the root. We write DE = {H_{v_base} < H_φ} for the event that X makes a deep excursion into T. We will write P^φ_{T,β,DE} for the distribution of the random walk conditioned by the event that it makes a deep excursion.

We remark that, for a high choice of u, it is reasonable to suppose that, under the law P_{h,ν,u}, the tree is typically long and thin, consisting of a long path connecting the root to the base vertex v_base, with only small subtrees hanging off this path. Indeed, in Theorem 6, we will present a result to this effect. For a typical large tree, then, a walk from φ that makes a deep excursion into T will be forced after reaching v_base to make a geometrically distributed number of highly ineffectual attempts to reach the root from v_base, so that H_φ will be approximately exponentially distributed in this case. This is the content of our next result.

Theorem 3 Assume Hypothesis 1, and that there exist Q > q > 1 such that the support of the measure ν is contained in [q,Q]. For all t > 0,

\[ \bigl( P_{h,\nu,u} \times P^\phi_{T,\beta,\mathrm{DE}} \bigr)\biggl( \frac{H_\phi}{E^\phi_{T,\beta,\mathrm{DE}}(H_\phi)} > t \biggr) \to \exp\{-t\} \]

as u → ∞.

We also present an asymptotically accurate expression for the mean return time E^φ_{T,β,DE}(H_φ). To do so, we introduce some notation.

Definition 10 Given a tree T and v ∈ V(T), we define the descendent tree T_v of v to be the subgraph induced by the set of all descendents of v (among which, we include v). The root of T_v is taken to be v.

Definition 11 Let T be a weighted tree. Set v_child to be the neighbour of the root lying in the path P_{φ,v_base} from the root to v_base. The descendent tree T_{v_child} is itself a weighted tree with root v_child. We set ω_* equal to the weight of this tree, as defined by the formula (3), where, to compute the summand on the right-hand side, we note that, for each element v ∈ V(T_{v_child}), the weight of v is computed with respect to the root being v_child.

We further write p_de = P^{v_child}_{T,β}(DE) for the probability that the walk starting at v_child makes a deep excursion into T.

The significance of ω_* is its appearance in the mean hitting time formula

\[ E^{v_{\mathrm{child}}}_{T,\beta}(H_\phi) = 2\omega_* - 1, \tag{7} \]

which follows from the commute-time formula in a reversible network, originally proved in [9], and presented as Theorem 3.3 in [2].
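As a sanity check on the commute-time identity (7), here is a small self-contained sketch (a hypothetical four-vertex example of our own, not taken from the paper). The root φ has a single child v_child, which in turn has two leaf children; ω_* is computed directly and 2ω_* − 1 is compared with the simulated mean of H_φ for the walk started at v_child.

```python
import random

# Hypothetical weighted tree: root phi = 0, v_child = 1, and two leaves 2, 3.
parent = {1: 0, 2: 1, 3: 1}
children = {0: [1], 1: [2, 3], 2: [], 3: []}
bias = {1: 2.0, 2: 2.5, 3: 1.5}       # bias[w] is beta on the edge (parent[w], w)

# omega_*: weight of the descendant tree of v_child, rooted at v_child,
# namely 1 (for v_child itself) + 2.5 + 1.5 = 5.0, so (7) predicts 2*5 - 1 = 9.
omega_star = 1.0 + bias[2] + bias[3]

def hit_root_time(rng, start=1):
    """Run the walk of Definition 1 from `start` until it first hits the root."""
    v, n = start, 0
    while v != 0:
        nbrs = children[v] + [parent[v]]
        wts = [bias[w] for w in children[v]] + [1.0]   # the parent carries weight 1
        r = rng.random() * sum(wts)
        for nbr, wt in zip(nbrs, wts):
            r -= wt
            if r <= 0.0:
                v = nbr
                break
        n += 1
    return n

rng = random.Random(0)
trials = 200000
emp = sum(hit_root_time(rng) for _ in range(trials)) / trials
print("2*omega_* - 1 =", 2.0 * omega_star - 1.0)      # 9.0
print("empirical mean of H_phi from v_child:", emp)   # close to 9.0
```

A first-step calculation on this particular tree gives E^{v_child}(H_φ) = 9 exactly, matching 2ω_* − 1.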
Theorem 4 Assume the hypotheses of Theorem 3. For each ε > 0,

\[ P_{h,\nu,u}\biggl( \frac{E^\phi_{T,\beta,\mathrm{DE}}(H_\phi)}{2\omega_* p_{\mathrm{de}}^{-1}} \in \bigl(1-\varepsilon,\, 1+\varepsilon\bigr) \biggr) \to 1 \]

as u → ∞. For all t > 0,

\[ \bigl( P_{h,\nu,u} \times P^\phi_{T,\beta,\mathrm{DE}} \bigr)\biggl( \frac{H_\phi}{2\omega_* p_{\mathrm{de}}^{-1}} > t \biggr) \to \exp\{-t\} \]

as u → ∞.

The new information presented in Theorem 4 beyond that of Theorem 3 is that, for a typical large tree, under P^φ_{T,β}, if DE does not occur, then H_φ is negligible. Indeed, that H_φ is negligible on the event DE^c is equivalent to saying that most of the mean E^φ_{T,β}(H_φ) arises on the event DE. It is easy to see that this in turn is the same as saying that E^φ_{T,β,DE}(H_φ) is well approximated by 2ω_* p_de^{−1}. (The key ingredients to see the latter equivalence are (7), and the fact that, under DE, the walk from φ must immediately travel to v_child.)

1.1 The structure of the paper

We begin in Section 2 by introducing some necessary tools from classical defective renewal theory and the extension needed for the proof of Theorem 2, which forms the core of the argument for Theorem 1. We will illustrate how the defective renewal theory can be used straightforwardly to show that, for a vertex v ∈ V(T) of maximal depth, ω(v) has a pure power-law tail. This power-law differs by a constant from the one given in Theorem 2 for ω(T), the reason being that several vertices near the base of the tree contribute to ω(T) on roughly equal terms. In order to better approximate ω(T) and thus prove Theorem 2, we introduce a convenient decomposition of the tree in Section 3. The main result about this decomposition, Theorem 6, states that the components are small conditionally on ω(T) being large. The proof of Theorem 6 is fairly involved and in places delicate, and it is deferred to Appendix B. In Section 4, we prove Theorem 2. In Section 5, we study the exponential tail of the return time, and prove Theorems 3 and 4.

In its exploration of time spent in traps, the paper [18] requires a different decomposition from the one introduced in Section 3. In Appendix C, we introduce this new decomposition of the tree, which we call the renewal decomposition. The renewal decomposition has components that enjoy more independence than the corresponding objects in the decompositions used in this paper. The analogue for the renewal decomposition of Theorem 6 is needed in [18]. Its proof also appears in Appendix C.

2 A first use of defective renewal theory

We recall and improve slightly on the classical non-lattice defective renewal theorem [16]. Suppose that a succession of light bulbs has been produced. Independently of the earlier ones, a given light bulb is permanent with probability 1−p, in which case it will shine eternally. A light bulb which is not permanent is called temporary; given that a bulb is temporary (and independently of the status of the preceding bulbs), it has a lifetime distributed according to the law µ. At time zero, the first bulb is installed and begins to shine. A bulb is replaced whenever it fails, until a permanent one is installed. The defective renewal theorem is concerned with the law of the time at which the final replacement is made (after which, a bulb will eternally shine). This is the time Z in the next lemma, whose first part is the classical defective renewal theorem.
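Before the lemma is stated, here is a minimal Monte Carlo sketch of the light-bulb model just described, under hypothetical parameters (p = 0.6, temporary lifetimes uniform on [0.5, 1.5], a bounded non-lattice law). It simulates the final-replacement time Z and compares the empirical exponential decay rate of its tail with the exponent κ determined by E(e^{κX}) = 1/p, as in the first part of the lemma below.

```python
import math
import random

P_TEMP = 0.6                 # a bulb is temporary with probability p = 0.6
A, B = 0.5, 1.5              # temporary lifetimes are Uniform[A, B]

def sample_Z(rng):
    """Total burning time of temporary bulbs before the first permanent one:
    Z = X_1 + ... + X_Y with P(Y = j) = p^j (1 - p)."""
    z = 0.0
    while rng.random() < P_TEMP:      # the installed bulb turned out to be temporary
        z += rng.uniform(A, B)        # add its lifetime; it is then replaced
    return z

def mgf(kappa):
    """E[e^{kappa X}] for X ~ Uniform[A, B]."""
    return (math.exp(kappa * B) - math.exp(kappa * A)) / (kappa * (B - A))

def solve_kappa(tol=1e-12):
    """Bisection for kappa > 0 with E[e^{kappa X}] = 1/p; the left-hand side
    increases continuously from 1 to infinity, so the root is unique."""
    lo, hi = 1e-9, 50.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if mgf(mid) < 1.0 / P_TEMP else (lo, mid)
    return 0.5 * (lo + hi)

rng = random.Random(0)
samples = [sample_Z(rng) for _ in range(500000)]

def tail(u):
    return sum(1 for z in samples if z >= u) / len(samples)

kappa = solve_kappa()
print("kappa =", kappa)
for u in (2.0, 4.0, 6.0):
    # For an exponential tail, log P(Z >= u) - log P(Z >= u+1) should be close to kappa.
    print("u =", u, " empirical decay rate:", math.log(tail(u) / tail(u + 1.0)))
```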
Lemma 1 On a probability space (Ω,P), let {X_i : i ∈ N} be a sequence of independent and identically distributed random variables. The common distribution µ is assumed to have a non-negative, bounded and non-lattice support. Let Y denote an independent random variable taking values in the positive integers. Set Z = Σ_{i=1}^Y X_i.

1. Suppose that, under P, Y has the law of a geometric random variable of parameter p ∈ (0,1): that is, for each j ≥ 0, P(Y = j) = p^j(1−p). Define κ > 0 as the unique value such that E(e^{κX}) = 1/p, and let

\[ c_2 = \frac{1-p}{\kappa\, p\, E\bigl(X e^{\kappa X}\bigr)}. \]

Then

\[ P(Z \ge u) \sim c_2 \exp\{-\kappa u\}, \]

where recall that f(u) ∼ g(u) denotes lim_{u→∞} f(u)/g(u) = 1.

2. Suppose that there exist p ∈ (0,1), c ∈ (0,∞) and k_0 ∈ N such that, for k ≥ k_0, P(Y = k) = c p^k. Then

\[ P(Z \ge u) \sim c_3 \exp\{-\kappa u\}, \]

with c_3 = c c_2/(1−p).

3. Suppose that, for such p and c, we assume merely that P(Y = k) ∼ c p^k as k → ∞. In this case also, P(Z ≥ u) ∼ c_3 exp{−κu}.

Proof. The first statement is (6.16) on page 377 of [16]. (Note that there is a typographical error in (6.16) and that µ should be replaced by µ^♯.) In the second statement, Y assumes high values according to a law whose density is adjusted from that of a geometric law by multiplication by a factor of α = c/(1−p) ∈ (0,∞). Suppose that α ∈ (0,1]. Then we may equip the probability space (Ω,P) with a geometric random variable V and an independent event A satisfying P(A) = α in such a way that Y 1_{Y≥k_0} = V 1_{V≥k_0} 1_A. Writing Q for the supremum of the support of µ, we have that, if u ≥ k_0 Q, then Z ≥ u implies that Y ≥ k_0. We find then that, for such u, P(Y ≥ u) = P(V ≥ u) P(A) = α P(V ≥ u), so that the second statement of the lemma follows from the first one in this case.

In the case that α > 1, we similarly equip (Ω,P) with a geometric random variable V and an independent event A for which P(A) = α^{−1} such that V 1_{V≥k_0} = Y 1_{Y≥k_0} 1_A. We may then treat this case analogously to the preceding one.

The third statement may be reduced to the second one by a coupling argument. Let Y′ denote a random variable satisfying P(Y′ = k) = c p^k for
