Random Trees in Electrical Networks

Hariharan N.
Department of Computer Science, University of Chicago
email: [email protected]

Abstract

This paper contains results relating currents and voltages in resistive networks to appropriate random trees or forests in those networks. Since each resistive network has a reversible Markov chain equivalent, we obtain equivalent results for the latter as well. We describe a way of obtaining a harmonic function on a weighted graph, given the boundary values, by choosing random forests of the graph. As applications of the theorems discussed (which give formulae of the Kirchhoff tree kind), we obtain an expression for the expected transit time from one state to another in a reversible Markov chain in terms of its arborescences. The methods of this paper can also be used to give alternative proofs of the Kirchhoff tree formula.

1 Introduction

The first formula involving trees in electrical networks is due to Kirchhoff (see [1]). He gave an expression for the equivalent conductance between two nodes of a resistive (or, as we shall henceforth say, conductive) network. Kirchhoff's formula states that the equivalent conductance between two nodes of a network is $\frac{\sum_i t_i}{\sum_j f_j}$, where $t_i$ is the product of the conductances of the $i$th tree and $f_j$ is the product of the conductances of the $j$th forest that separates the nodes in question.

In recent years, formulae involving trees have been discovered for general Markov chains too (see [2], [3]).

The physical model used to explain Ohm's law involves Brownian motions of charge carriers, so it is not surprising that a conductive network may be mapped onto a Markov chain with very similar properties. Trees seem to figure in Markov chains as "partial histories". In a Markov chain started at time $-\infty$ and ended at time $\tau$, the set of directed edges corresponding to the final visits to each state forms a directed tree rooted at the state reached at time $\tau$.
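This "final visits" picture can be illustrated directly. The sketch below is our own illustration, not from the paper: it runs a finite walk on an arbitrarily chosen three-state chain, records the last exit taken from each state, and checks that following those edges from any state leads to the state where the walk ended, without cycles, as the tree description predicts (the run is assumed long enough to visit every state).

```python
import random

def last_exit_tree(P, steps, seed=0):
    """Run a chain with transition matrix P for `steps` moves; return the
    last exit edge taken from each visited state, and the final state."""
    rng = random.Random(seed)
    n = len(P)
    cur = 0
    last_exit = {}
    for _ in range(steps):
        nxt = rng.choices(range(n), weights=P[cur])[0]
        last_exit[cur] = nxt      # overwrite: only the final exit survives
        cur = nxt
    return last_exit, cur

# a small irreducible chain (weights chosen arbitrarily)
P = [[0.0, 0.5, 0.5],
     [0.3, 0.0, 0.7],
     [0.6, 0.4, 0.0]]
edges, root = last_exit_tree(P, steps=200)

# every non-final state keeps one outgoing last-exit edge; following these
# edges the last-exit times strictly increase, so we must reach the root
for s in range(3):
    if s != root:
        node, seen = s, set()
        while node != root:
            assert node not in seen   # no cycles among last-exit edges
            seen.add(node)
            node = edges[node]
```

The function name and the example chain are ours; the structural assertions are exactly the "directed tree rooted at the final state" property described above.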
The evolution of these histories as the Markov chain progresses is another Markov chain, which has arborescences (i.e. rooted trees) as states. This Markov chain has interesting properties, and can be regarded as underlying the results that appear in this paper.

The formulae presented here are multiterminal extensions of the Kirchhoff tree formula. They all have equivalent forms in reversible Markov chains, which can be written out by simply substituting for injected current, branch current and node voltage their Markov chain equivalents given in Section 2. We obtain Kirchhoff's formula for the equivalent conductance between two nodes from the result we have called the VJ Theorem, by applying a unit voltage across them. Since the proof of the VJ Theorem is purely graph-theoretic, we get a new derivation of the Kirchhoff conductance formula as well, quite different in nature from the usual proof using the Binet-Cauchy Theorem.

Although it is not the concern of this paper, we note that these formulae can be implemented in a natural way using randomized algorithms to give fast approximate solutions to network problems. The essential idea is that we make numerous "observations" of the network, each of which is computationally light, and the "overall picture" that we get is an approximation to the network's actual behaviour. The more observations we make, the better our approximation is expected to be. The critical step in each observation is that of picking a random tree (or forest), the chance of picking a particular tree (forest) being proportional to the product of its branch conductances.

Picking random trees has been found useful in other contexts as well, such as optimizing server positions in a computer network and finding Euler paths in a graph, and this problem has been well studied (see [4], [5], [10], [13]). There are a variety of efficient algorithms available that perform this task.

Here is a preview of three of the theorems in network form.
The VV Theorem (special case): In a conductive network with nodes $S_1,\ldots,S_n$, apply an external voltage source of 1 volt across $S_1$ and $S_2$, and choose $S_2$ as the ground. Let $v_k$ be the voltage at $S_k$ (thus $v_1 = 1$, $v_2 = 0$). Then $v_k$ is given as follows. Consider all those forests $f$ of the network which have $n-2$ branches (consisting of two disjoint trees whose union touches every vertex of the network), such that $S_1$ and $S_2$ are not vertices of the same tree in $f$. Call this set $F_{12}$. Choose a forest randomly out of $F_{12}$, with probability proportional to the product of its branch conductances. The voltage $v_k$ at node $S_k$ is the probability that, in the chosen forest $f$, $S_k$ is a node of the same tree as $S_1$. This theorem is really a result on harmonic functions on weighted graphs, since but for the "poles" $S_1$ and $S_2$, the voltage at any vertex is the weighted mean of its neighbours' voltages.

The JI Theorem: In the network mentioned above, let currents $J_1,\ldots,J_n$ be injected externally into nodes $S_1,\ldots,S_n$. Choose a random spanning tree $t$ of the network, the probability of choosing $t$ being proportional to the product of its branch conductances. Setting all conductances other than those of $t$ to 0, we get a certain current distribution, according to which the current in any branch not in $t$ is 0. The actual current distribution is the expected distribution under a random choice of tree $t$, that is, the average of the distributions taken over all trees (the weight of a distribution being the product of the conductances of the corresponding tree).

The IV Theorem: Let the $J_k$ be as above. Suppose we are given any current distribution in the branches of the graph that is consistent with the injected currents, but which does not necessarily satisfy the Kirchhoff voltage law. Choose a random tree $t$ as in the JI Theorem. Take any vertex as ground (potential zero). Calculate the node potential of a vertex by adding the branch voltages along the unique path in $t$ from the ground to that vertex.
A correct node voltage distribution is given by the expected value of the node potentials under the choice of a random tree $t$.

The JI and IV Theorems above involve choosing random trees, while the VV Theorem involves choosing a random forest. In each case, the probability of a choice is proportional to the product of its conductances. There is a well-known algorithm using Markov chains for choosing random trees with this probability distribution (see [4], [5]). As we shall see in the section on Theorem VV, this algorithm can be modified to choose random forests too.

2 The network-Markov equivalence

We now describe the correspondence between electrical networks and reversible Markov chains. This is well known (see [6], [7], [8], [9]), but has been included for the sake of completeness.

Consider a conductive network with nodes $S_1,\ldots,S_n$. Let $g_{kj} = g_{jk}$ be the branch conductance between the nodes $S_j$ and $S_k$. If $j = k$, the conductance obviously has no effect on the electrical behaviour of the network; however, because in our Markov analogue these conductances do contribute, we allow $g_{jj}$ to be positive. Let $v_k$ be the voltage of $S_k$, let $J_k$ be the current injected from outside the network into the node $S_k$, and let $i_{kl}$ be the branch current flowing from $S_k$ to $S_l$ in the conductance $g_{kl}$. (Conductance is the inverse of resistance. In this paper, where there is no ambiguity, we shall use the word conductance for a branch as well.) The Kirchhoff current law is expressed in the following equation:
$$J_k = \sum_{m=1}^{n} g_{mk}(v_k - v_m).$$
Let the injected current vector be called $J$, the vector of node voltages $V$, and the branch current matrix $I$. Thus $J = (J_1,\ldots,J_n)$, $V = (v_1,\ldots,v_n)$, and $I = \{i_{kl}\}_{n\times n}$.

Consider the Markov chain with states $S_1,\ldots,S_n$ (these will correspond to the nodes of our network anyway, so there is no clash of notation). Let the transition probability be $p_{kl} = \frac{g_{kl}}{g_k}$, where $g_k = \sum_{m=1}^{n} g_{km}$. Let us further assume that
$$\sum_{k=1}^{n}\sum_{l=1}^{n} g_{kl} = 1.$$
This is just a scaling of the conductances, but is convenient.
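As a concrete sketch of this construction (the example conductance matrix is our own, not from the paper), the following scales a symmetric conductance matrix so that $\sum_{k,l} g_{kl} = 1$, forms $p_{kl} = g_{kl}/g_k$, and checks that the row sums $g_k$ then constitute a stationary distribution:

```python
import numpy as np

# symmetric conductance matrix of a small example network (arbitrary values);
# a positive diagonal g_kk is allowed and matters only in the Markov analogue
G = np.array([[0.0, 2.0, 1.0],
              [2.0, 0.5, 3.0],
              [1.0, 3.0, 0.0]])

G = G / G.sum()               # scale so that sum_{k,l} g_kl = 1
g = G.sum(axis=1)             # g_k = sum_m g_km
P = G / g[:, None]            # transition probabilities p_kl = g_kl / g_k

assert np.allclose(P.sum(axis=1), 1.0)   # rows of P are stochastic
assert np.allclose(g @ P, g)             # pi = (g_1,...,g_n) is stationary
assert np.isclose(g.sum(), 1.0)          # and normalized, thanks to scaling
```

The stationarity check is just the symmetry $g_{kl} = g_{lk}$: $\sum_k g_k p_{kl} = \sum_k g_{kl} = g_l$, which is the detailed-balance (reversibility) property in disguise.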
We assume that the network is connected, and therefore that the corresponding Markov chain has a path of positive probability from any state to any other, i.e. the Markov chain is strongly connected. If it is also aperiodic, it has a unique stationary distribution, which, with the scaled conductances, is easily verified to be $\pi = (g_1,\ldots,g_n)$.

Let the initial probability distribution be $p^{(0)}$. Suppose we have a stopping rule under which the expected run-time is finite for some (and hence, since the Markov chain is strongly connected and finite, every) initial probability distribution. Let the probability of the walk terminating at $S_k$ be $p^{(u)}_k$, and the corresponding vector be $p^{(u)}$. Let $e_k$ be the expected number of times the walker visits state $S_k$, where the last move of the walk, at which the walker stops, is not counted as a visit. Thus, for example, if the walk began at $S_1$ and the stopping rule declares that the walk ends at the first visit to $S_k$, then according to us the number of visits to $S_k$ is always 0. With this notation, the following statement can easily be verified for each $k$:
$$p^{(0)}_k - p^{(u)}_k = e_k - \sum_{m=1}^{n} p_{mk}\, e_m.$$
This essentially says that "initial probability $-$ final probability $=$ net outflow $-$ net inflow". In our present situation $p_{mk} = \frac{g_{mk}}{g_m}$, so we have
$$p^{(0)}_k - p^{(u)}_k = e_k - \sum_{m=1}^{n} \frac{g_{mk}}{g_m}\, e_m.$$
In anticipation, let $\frac{e_m}{g_m}$ be called $v_m$ (this will actually behave like voltage), and let $J_k$ denote the L.H.S. (which will play the role of injected current). Our equation then becomes $J_k = g_k v_k - \sum_{m=1}^{n} g_{mk} v_m$, or $J_k = \sum_{m=1}^{n} g_{mk}(v_k - v_m)$, which is identical to the Kirchhoff current equation we had earlier. This verifies the correspondence. In a network, $i_{kl} = (v_k - v_l)\,g_{kl}$, which therefore becomes $e_k\,p_{kl} - e_l\,p_{lk}$ in the Markov chain. Therefore the Markov analogue of the current $i_{kl}$ is the expected difference between the number of transitions from $S_k$ to $S_l$ and the number of transitions from $S_l$ to $S_k$.
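The accounting identity above can be checked exactly when the stopping rule is "halt on first arrival in an absorbing set": the visit counts $e_k$ then solve a linear system. The sketch below is our own construction (the chain is an arbitrary example, and reversibility is not needed for the identity itself); it uses the paper's convention that the final arrival is not counted as a visit.

```python
import numpy as np

P = np.array([[0.1, 0.4, 0.3, 0.2],
              [0.3, 0.0, 0.3, 0.4],
              [0.2, 0.5, 0.1, 0.2],
              [0.4, 0.1, 0.4, 0.1]])
absorb = [3]                      # stop on first arrival at the last state
trans = [0, 1, 2]
p0 = np.array([1.0, 0.0, 0.0, 0.0])

Q = P[np.ix_(trans, trans)]       # transitions among non-stopping states
e = np.zeros(4)                   # expected visit counts; e = p0 + Q^T e
e[trans] = np.linalg.solve(np.eye(3) - Q.T, p0[trans])
# e_k = 0 for stopping states: the final arrival is not counted as a visit

pu = np.zeros(4)                  # termination probabilities: last-step inflow
for r in absorb:
    pu[r] = sum(e[m] * P[m, r] for m in trans)

# "initial prob - final prob = net outflow - net inflow", for every state
for k in range(4):
    inflow = sum(P[m, k] * e[m] for m in range(4))
    assert np.isclose(p0[k] - pu[k], e[k] - inflow)
```

For stopping states the identity reduces to $-p^{(u)}_k = -\sum_m p_{mk} e_m$, which is exactly how `pu` was computed; for the others it is the defining equation of `e`.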
In the network results that follow, whenever we use the phrase "a node $S_k$ whose voltage is fixed externally", we mean that the injected current $J_k$ at $S_k$ is not necessarily 0. For a node $S_l$ whose voltage is not fixed externally, however, $J_l$ is necessarily 0.

3 Preliminaries

An arborescence of a directed graph is a tree in which a node has been singled out as root, and all branches are so directed that from any node of the graph to the root there is a unique directed path. A Markov chain is a process consisting of a succession of events, where the probability of an event happening is a function of the preceding event that has occurred in the process. In this paper, we only consider Markov chains which are finite and strongly connected. By strongly connected, we mean that if $E_1$ and $E_2$ are any events in our sample space, and our process launches itself with $E_1$, the probability that $E_2$ occurs $n$ events later is non-zero for some integer $n$. If a Markov chain is aperiodic and strongly connected, it is a standard result that, whatever the probability distribution with which the process begins, the probabilities of the various events tend to a "stationary probability distribution" if we wait sufficiently long. We call the events from the sample space of a Markov chain states, and, given that $E_i$ has just occurred, we call the probability that $E_j$ occurs next the transition probability from $E_i$ to $E_j$, which we denote by $p_{ij}$. Let the stationary probability distribution of a finite, irreducible Markov chain be $\{\pi_1,\pi_2,\ldots\}$, corresponding to the events $\{E_1,E_2,\ldots\}$. If $\pi_i\, p_{ij} = \pi_j\, p_{ji}$, the Markov chain is called reversible.

Let $R$ be a subset of $S = \{S_1,\ldots,S_n\}$. We then denote by $F_R$ the set of all maximal forests of the network that "separate" the states in $R$. Thus $F_R$ consists of all those subgraphs $f$ of the network for which every $S_k$ in $S$ is connected by a unique (conducting) path in $f$ to some state in $R$ (a state is always regarded as connected to itself by the null path).
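On small networks, the set $F_R$ can be enumerated by brute force. The following sketch is our own code (the four-node example graph is an arbitrary choice); it lists all such maximal forests, using the fact that an acyclic subgraph with $|S| - |R|$ branches has exactly $|R|$ trees, each of which must then contain exactly one state of $R$:

```python
import itertools

def separating_forests(n, edges, R):
    """All branch subsets forming a forest in F_R: acyclic, with n - |R|
    branches, and with no two states of R in the same tree."""
    out = []
    for sub in itertools.combinations(range(len(edges)), n - len(R)):
        parent = list(range(n))      # union-find over the n states
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        acyclic = True
        for i in sub:
            a, b = edges[i]
            ra, rb = find(a), find(b)
            if ra == rb:             # adding this branch closes a cycle
                acyclic = False
                break
            parent[ra] = rb
        # n - |R| acyclic branches => exactly |R| trees; each needs one root
        if acyclic and len({find(r) for r in R}) == len(R):
            out.append(sub)
    return out

# triangle plus a pendant vertex (arbitrary example)
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]
F = separating_forests(4, edges, R={0, 1})
```

For this example the two members of $F_{\{0,1\}}$ are $\{(0,2),(2,3)\}$ and $\{(1,2),(2,3)\}$: every other pair of branches puts states 0 and 1 in the same tree.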
It follows from the condition of connectivity in $f$ that the states of $S$ are partitioned into $|R|$ blocks, such that each block has exactly one state of $R$. If $S_k$ belongs to $R$ and $f$ to $F_R$, we shall denote by $B_f(k)$ the set of states which are connected to $S_k$ by a path in $f$. We will often perform algebraic operations with a forest $f$ in $F_R$. In every such case, $f$ is interpreted as the product of the conductances of the forest $f$. This is a helpful abuse of notation; it creates no ambiguity, but does simplify our expressions. (For example, $f_1 + f_2$ represents the sum of the products of conductances of the forests $f_1$ and $f_2$.)

We define $S_f(k)$, for $f$ in $F_R$, to be the unique state of $R$ that $S_k$ is connected to by a path in $f$ (we henceforth use "state" synonymously with node and vertex, since nodes become states of the related Markov chain). $v_f(k)$ is taken to be the voltage of the state $S_f(k)$. It is true that a forest $f$ may belong to both $F_R$ and $F_Q$, where $R$ and $Q$ are different vertex sets, but in all our expressions we talk of forests $f$ in a particular $F_R$, and so $S_f(k)$ and $v_f(k)$ are well defined by the context in which they appear.

4 The VJ Theorem

We now give a formula for the injected currents $J_k$ in terms of the externally fixed voltages of the network. When $J$ is the vector $(-1,1,0,\ldots,0)$, we get Kirchhoff's formula for equivalent conductance.

Theorem VJ: Let $R$ be a nonempty subset of $S$ not containing $S_1$. Let $Q = R \cup \{S_1\}$. Let the voltages at nodes of $Q$ be the only ones fixed externally. Then
$$J_1 = \frac{\sum_{h\in F_R} (v_1 - v_h(1))\, h}{\sum_{f\in F_Q} f}.$$

Proof: We say that a forest $f_1$ is contained in another forest $f_2$ if every branch of $f_1$ is also a branch of $f_2$. If $f$ belongs to $F_Q$, $h$ belongs to $F_R$ and $f$ is contained in $h$, then $h$ has exactly one branch that $f$ doesn't, which we shall denote by $h/f$. If we look at the partition of $S$ induced by the forest $f$ (in which, we recall, every block has exactly one representative of $Q$), we see that $h/f$ has exactly one endpoint in $B_f(1)$.
Let $v(h/f)$ be the voltage across $h/f$, taken directed outward from $B_f(1)$. We observe that the net current leaving $B_f(1)$ (and entering the states outside) is $J_1$ for any $f$ in $F_Q$, since the only state in $B_f(1)$ that can possibly have non-zero injected current is $S_1$. Therefore,
$$J_1 = \sum_{\substack{(k,l)\ \mathrm{s.t.}\ S_f(k)=S_1\neq S_f(l)}} g_{kl}\,(v_k - v_l),$$
i.e.
$$J_1\Big(\sum_{f\in F_Q} f\Big) = \sum_{f\in F_Q}\ \sum_{\substack{(k,l)\ \mathrm{s.t.}\ S_f(k)=S_1\neq S_f(l)}} f\, g_{kl}\,(v_k - v_l).$$
This may be rewritten as
$$\sum_{\substack{f\in F_Q,\ h\in F_R\ \mathrm{s.t.}\ f\subset h}} h\times v(h/f) \;=\; \sum_{h\in F_R} h\times\Big(\sum_{\substack{f\in F_Q\ \mathrm{s.t.}\ f\subset h}} v(h/f)\Big).$$
But for a fixed $h$ in $F_R$,
$$\sum_{\substack{f\in F_Q\ \mathrm{s.t.}\ f\subset h}} v(h/f)$$
is the sum of the branch voltages along the path from $S_1$ to $S_h(1)$ contained in $h$, and this is just $v_1 - v_h(1)$. Therefore
$$J_1\Big(\sum_{f\in F_Q} f\Big) = \sum_{h\in F_R} h\times(v_1 - v_h(1)).$$
This proves the theorem.

The VJ Theorem has a probabilistic interpretation. In what follows, whenever we say "choose a random forest in $F_Q$", it is implicit that the probability of choosing $f$ is proportional to the product of its conductances. Similarly, by a random branch we mean one chosen with probability proportional to its conductance. Let $g$ be the sum of all conductances of our network. Choose a random forest $f$ in $F_Q$. Choose a random branch $g_{kl}$ from the network. Then $S_f(k)$ and $S_f(l)$ are states in $Q$. Let the number of branches in the unique path from $S_m$ to $S_f(m)$ in $f$ be called $d_f(m)$, for each $m$ and each $f$. Consider the current vector $J(f,kl)$ given by
$$J_m = 0 \quad \text{if } S_m \text{ is neither } S_f(k) \text{ nor } S_f(l);$$
$$J_m = \frac{g\,(v_f(k) - v_f(l))}{1 + d_f(k) + d_f(l)} \quad \text{if } S_m = S_f(k);$$
$$J_m = \frac{g\,(v_f(l) - v_f(k))}{1 + d_f(k) + d_f(l)} \quad \text{if } S_m = S_f(l).$$
Then the theorem says that the expected value of $J(f,kl)$, when $f$ and $g_{kl}$ are random, is the actual $J$ that is injected into the network. Here $1 + d_f(k) + d_f(l)$ is just the length of the connecting path via $g_{kl}$ (in $f$) between $S_f(k)$ and $S_f(l)$ when these are distinct; it was used to simplify the expression. To see why the two forms of the VJ Theorem are identical, let us find the expected value of $J_1(f,kl)$, the first component of $J(f,kl)$.
If $S_1$ is not in $Q$, the result is obvious, since then $J_1(f,kl)$ is always 0; so we assume that $S_1$ belongs to $Q$ and let $R = Q - \{S_1\}$. Then
$$E[J_1(f,kl)] = \frac{\sum_{f\in F_Q}\ \sum_{\substack{(k,l)\ \mathrm{s.t.}\ S_k\in B_f(1)}} f\, g_{kl}\,(v_1 - v_f(l))\,\dfrac{1}{1 + d_f(k) + d_f(l)}}{\sum_{f\in F_Q} f}$$
(the factor $g$ in the definition of $J(f,kl)$ cancels against the probability $g_{kl}/g$ of choosing the branch). We observe that if $S_l$ is not in $B_f(1)$, then $f\times g_{kl}$ corresponds to a forest $h$ in $F_R$, while if it is, $v_1 - v_f(l)$ is 0. Further, the number of pairs of the form $(f, g_{kl})$ that lead to a given forest $h$ in $F_R$ is the number of branches in the unique path from $S_1$ to $S_h(1)$ in $h$. This is the same as $1 + d_f(k) + d_f(l)$ for any pair $\{f, g_{kl}\}$ that gives rise to $h$. Therefore the above expression is equal to
$$\frac{\sum_{h\in F_R}(v_1 - v_h(1))\times h}{\sum_{f\in F_Q} f},$$
which is $J_1$.

5 The VV Theorem

Theorem VV: Let $R$ be a non-empty subset of $S$, not containing $S_1$. Let the states of $R$ be the only ones whose voltages are fixed externally (i.e. for all $S_l$ in $S - R$, let $J_l$ be 0). Then
$$v_1 = \frac{\sum_{h\in F_R} v_h(1)\times h}{\sum_{h\in F_R} h}.$$

Proof: Let $Q = R\cup\{S_1\}$. By the VJ Theorem,
$$J_1 = \frac{\sum_{h\in F_R}(v_1 - v_h(1))\times h}{\sum_{f\in F_Q} f}.$$
$J_1 = 0$, and so
$$0 = \sum_{h\in F_R} (v_1 - v_h(1))\times h,$$
which implies the stated theorem.

All that was needed in proving the above VV Theorem is that for any state $S_l$ in $S - R$,
$$v_l = \frac{\sum_{m=1}^{n} g_{lm}\, v_m}{\sum_{m=1}^{n} g_{lm}}$$
holds, which is a condition of harmonicity outside $R$.

Here is the equivalent probabilistic version: choose a random forest $h$ out of the set $F_R$, with probability proportional to the product of its conductances. The voltage at $S_1$ is then the expected value of the voltage of the unique state of $R$ that $S_1$ is connected to via $h$.

A Markov chain having the elements of $F_R$ as states, with the stationary probability of $f$ proportional to the product of its conductances, can be obtained from the Markov chain that gives random trees. Take the network, and fuse all states of $R$. We now have a new conductive network whose branches all come from the original (though some may have become parallel).
It is clear that the forests in $F_R$ consist of precisely those collections of branches that form the trees of the new network, and that this correspondence is bijective. Now simply use the tree Markov chain to give trees of the new network (this method works perfectly well even when we have parallel branches), and pick the corresponding forests from $F_R$. This gives random forests of $F_R$ with the right probabilities.

6 The JI Theorem

Here is a theorem that expresses the current distribution in a network in terms of the injected currents. Let $T$ be the set of all (spanning) trees. Let $I$ be the current matrix $\{i_{kl}\}_{n\times n}$, where $i_{kl}$ is the current flowing in the conductance from $k$ to $l$. Let $J$ be the vector of injected currents. If we were to set all conductances other than those in a particular tree $t$ to 0, we would have a current distribution that is 0 in all branches except those of $t$. Let this current distribution, in matrix form, be denoted $I_t$.

Theorem JI:
$$I = \frac{\sum_{t\in T} t\times I_t}{\sum_{t\in T} t}$$
(where, as in our previous theorems, when $t$ appears in an algebraic expression it is taken to be the product of the conductances in $t$).

Proof: The current matrix $I$ is a linear function of the vector of injected currents $J$. A vector is a valid $J$ if and only if the sum of its components $J_1,\ldots,J_n$ is 0. Such vectors form a vector space, and it is sufficient to prove the theorem for a basis of this vector space. It is clear that the vectors $z_k$, $k = 2$ to $n$, form a basis, where $z_k$ is the vector with $(z_k)_1 = -1$, $(z_k)_k = 1$, and all other entries $(z_k)_l = 0$. For simplicity of notation, we shall prove the theorem for $J = z_2$. Let $S_1$ be taken as ground (i.e. $v_1 = 0$). Let $R$ be $\{S_1, S_2\}$. We need to prove that
$$i_{lk} = \frac{\sum_{t\in T}(I_t)_{lk}\times t}{\sum_{t\in T} t}$$
for each pair $\{S_l, S_k\}$. We know that $i_{lk} = g_{lk}\times(v_l - v_k)$, which, using the VV Theorem, is
$$g_{lk}\times\frac{\big(\sum_{h\in F_R} h\times v_h(l)\big) - \big(\sum_{h\in F_R} h\times v_h(k)\big)}{\sum_{h\in F_R} h}.$$
This is
$$\frac{g_{lk}\,\sum_{h\in F_R} h\,(v_h(l) - v_h(k))}{\sum_{h\in F_R} h}.$$
Now, the possible values of $v_h(l) - v_h(k)$ over varying forests $h$ are $-v_2$, $v_2$ and 0, which arise respectively from the cases
(a) $S_l$ is in $B_h(1)$ and $S_k$ is in $B_h(2)$;
(b) $S_l$ is in $B_h(2)$ and $S_k$ is in $B_h(1)$;
(c) both belong to the same block with respect to $h$.
Therefore,
$$i_{lk} = \frac{g_{lk}}{\sum_{h\in F_R} h}\times\Big\{\sum_{\substack{h\in F_R\ \mathrm{s.t.}\ S_l\in B_h(2),\ S_k\in B_h(1)}} h\, v_2 \;+\; \sum_{\substack{h\in F_R\ \mathrm{s.t.}\ S_l\in B_h(1),\ S_k\in B_h(2)}} h\,(-v_2)\Big\}.$$
From Theorem VJ,
$$J_2 = \frac{\sum_{t\in T} t\times(v_2 - v_1)}{\sum_{h\in F_R} h}.$$
So
$$1 = \frac{\sum_{t\in T} t\times v_2}{\sum_{h\in F_R} h},$$
and therefore
$$v_2 = \frac{\sum_{h\in F_R} h}{\sum_{t\in T} t}.$$
This is actually just Kirchhoff's formula for equivalent resistance; we have it as a special case of Theorem VJ. Substituting this for $v_2$,
$$i_{lk} = \frac{g_{lk}}{\sum_{t\in T} t}\times\Big(\sum_{\substack{h\in F_R\ \mathrm{s.t.}\ S_l\in B_h(2),\ S_k\in B_h(1)}} h \;-\; \sum_{\substack{h\in F_R\ \mathrm{s.t.}\ S_l\in B_h(1),\ S_k\in B_h(2)}} h\Big).$$
Given a tree $t$, if $g_{lk}$ is in $t$, call $h$ the forest obtained by removing $g_{lk}$ from $t$ (which therefore "separates" $S_l$ and $S_k$). Then $(I_t)_{lk}$ is
(a) 0 if $h$ is not in $F_R$;
(b) 1 if $h$ is in $F_R$, $S_h(l) = S_2$ and $S_h(k) = S_1$;
(c) $-1$ if $h$ is in $F_R$, $S_h(l) = S_1$ and $S_h(k) = S_2$.
These exhaust all possibilities. Therefore,
$$\frac{\sum_{t\in T}(I_t)_{lk}\times t}{\sum_{t\in T} t} = \frac{g_{lk}}{\sum_{t\in T} t}\times\Big\{\sum_{\substack{h\in F_R\ \mathrm{s.t.}\ S_l\in B_h(2),\ S_k\in B_h(1)}} h \;+\; \sum_{\substack{h\in F_R\ \mathrm{s.t.}\ S_l\in B_h(1),\ S_k\in B_h(2)}} (-h)\Big\},$$
which, from what we just saw, is $i_{lk}$. This proves our theorem.

The JI Theorem has a compact probabilistic interpretation. Choose a random tree $t$ (with probability proportional to $t$), and calculate the branch current matrix after all conductances not in $t$ have been set to 0. Then the expected value of the current distribution that we would get is the actual distribution.

7 The IV Theorem

This theorem gives the node voltages for a particular $J$. Like the previous theorem, it involves picking a random tree. Let $J$ be the vector of injected currents. Let $I$ be a current matrix $\{i_{lm}\}_{n\times n}$ such that $\sum_{m=1}^{n} i_{lm} = J_l$. We note that this need not be the actual current distribution that results from $J$ in the network we are working with.
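A current matrix supported on a single tree, as in the JI Theorem, can be computed from cuts: the current on a tree branch equals the net injected current on the side of the cut that the branch separates from the rest. The sketch below is our own code (the example network and $J$ are arbitrary choices); it computes $I_t$ this way and confirms the JI Theorem by exhaustive enumeration of spanning trees:

```python
import itertools
import numpy as np

def tree_current(n, tedges, J):
    """Current matrix when only the tree's branches conduct: the current on
    a branch equals the net injected current on one side of its cut."""
    I = np.zeros((n, n))
    for (a, b) in tedges:
        adj = {k: [] for k in range(n)}
        for (x, y) in tedges:
            if (x, y) != (a, b):      # cut the branch (a, b)
                adj[x].append(y)
                adj[y].append(x)
        comp, stack = {a}, [a]        # component of a after the cut
        while stack:
            for w in adj[stack.pop()]:
                if w not in comp:
                    comp.add(w)
                    stack.append(w)
        flow = sum(J[k] for k in comp)  # must leave a's side through (a, b)
        I[a, b], I[b, a] = flow, -flow
    return I

def ji_average(n, edges, J):
    """Weighted average of I_t over all spanning trees (the JI Theorem)."""
    tot, acc = 0.0, np.zeros((n, n))
    for sub in itertools.combinations(edges, n - 1):
        parent = list(range(n))       # union-find spanning-tree test
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        is_tree = True
        for a, b, _ in sub:
            ra, rb = find(a), find(b)
            if ra == rb:
                is_tree = False
                break
            parent[ra] = rb
        if not is_tree:
            continue
        w = np.prod([g for _, _, g in sub])   # conductance product of t
        tot += w
        acc += w * tree_current(n, [(a, b) for a, b, _ in sub], J)
    return acc / tot

def exact_current(n, edges, J):
    """Reference: solve the node equations and apply Ohm's law."""
    L = np.zeros((n, n))
    for a, b, g in edges:
        L[a, a] += g; L[b, b] += g
        L[a, b] -= g; L[b, a] -= g
    A, rhs = L.copy(), np.array(J, dtype=float)
    A[0], rhs[0] = 0.0, 0.0
    A[0, 0] = 1.0                     # ground the first node
    v = np.linalg.solve(A, rhs)
    I = np.zeros((n, n))
    for a, b, g in edges:
        I[a, b] = g * (v[a] - v[b])
        I[b, a] = -I[a, b]
    return I

edges = [(0, 1, 1.0), (0, 2, 2.0), (1, 2, 3.0), (1, 3, 1.0), (2, 3, 2.0)]
J = [-1.0, 1.0, 0.0, 0.0]
assert np.allclose(ji_average(4, edges, J), exact_current(4, edges, J))
```

The same `tree_current` helper also produces, for any single tree, a valid current matrix of the kind the IV Theorem below asks for.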
We could, for example, obtain a valid $I$ of this kind by taking some tree of the network and finding the current distribution if all other conductances were set to 0. This is (in general) a much easier task than solving for the branch currents in our network, which could have as many as $\frac{n(n-1)}{2}$ branches.

Theorem IV: Choose a random tree $t$ in $T$ (with the notation from Theorem JI), with probability proportional to the product of its conductances. Choose $S_1$ as ground (this choice is arbitrary). For each $S_k$, walk along the unique path in tree $t$ from $S_k$ to $S_1$, and find
$$\sum_{\substack{\text{all } g_{lm} \text{ traversed from } l \text{ to } m \text{ in that path}}} \frac{i_{lm}}{g_{lm}}.$$
Call this $v_t(k)$, and call the voltage vector $(0, v_t(2),\ldots,v_t(n))$ $V_t$. Then the actual voltage vector $V$ (with $S_1$ as ground) is the expected value of the voltage vectors $V_t$, i.e.
$$V = \frac{\sum_{t\in T} t\times V_t}{\sum_{t\in T} t}.$$

Proof: Consider the $\frac{n(n-1)}{2}$ current matrices $I[k,l]$ corresponding to ordered pairs $(k,l)$, $k > l$, such that in $I[k,l]$, $i_{kl} = 1$, $i_{lk} = -1$, and all other entries are 0. These form a basis of the vector space of all possible current matrices. Given a current matrix $I$, there is a unique injected current vector $J$ corresponding to it, which is a linear function of $I$. Given $J$ and the values of the conductances, taking $S_1$ as ground, there is a single (node) voltage vector $V$. All of these are related linearly, so it is enough to prove the theorem in the case where $I$ has $i_{21} = 1$, $i_{12} = -1$, and all other entries 0. If we prove this case, it proves the theorem for any of our basis matrices; the fact that $S_1$ has been chosen as ground does not affect generality, as it only causes a constant shift in the voltages. The $I$ we have chosen corresponds to $J = (-1,1,0,\ldots,0)$. It clearly suffices to prove that the theorem gives the correct values of $v_3$ and $v_2$.

To prove the $v_3$ case, we partition the set of all trees into three classes:
$T_0$, the set of all trees $t$ which do not contain the branch $g_{12}$;
$T_1$, the set of all trees $t$ which contain $g_{12}$ and have the property that the path joining $S_3$ and $S_1$ in $t$ does not go through $S_2$;
$T_2$, the set of all trees $t$ which contain $g_{12}$ and have the property that the path joining $S_3$ and $S_1$ in $t$ goes through $S_2$.
If we choose a random tree $t$, $(V_t)_3$ is 0 if $t$ is in $T_0$ or $T_1$. If $t$ is in $T_2$, $(V_t)_3$ is $\frac{1}{g_{12}}$.

Let $R = \{S_1, S_2\}$. Then $t$ is in $T_2$ if and only if $g_{12}$ is in $t$ and the forest $h$ obtained by removing $g_{12}$ from $t$ (which is always in $F_R$) is such that $S_h(3) = S_2$. Therefore,
$$\sum_{t\in T}(V_t)_3\, t \;=\; \sum_{\substack{h\in F_R,\ S_h(3)=S_2}} \frac{1}{g_{12}}\times(h\times g_{12}) \;=\; \sum_{\substack{h\in F_R,\ S_h(3)=S_2}} h.$$
Using the VJ Theorem, we have proved that if $J = (-1,1,0,\ldots,0)$, then $v_2 = \frac{\sum_{h\in F_R} h}{\sum_{t\in T} t}$, where $S_1$ is ground. This proves the theorem for $v_2$. From this,
$$\frac{1}{\sum_{t\in T} t}\times\sum_{\substack{h\in F_R\ \mathrm{s.t.}\ S_h(3)=S_2}} h \;=\; \frac{v_2}{\sum_{h\in F_R} h}\times\sum_{\substack{h\in F_R,\ S_h(3)=S_2}} h,$$
which, from the VV Theorem, is $v_3$. This completes our proof.

8 Applications to reversible Markov chains

Using the equivalence that was mentioned in the beginning, all of the network theorems can be translated quite literally into theorems for reversible Markov chains. Here, however, we shall only consider some interesting special cases. We start with a reversible Markov chain given by a transition matrix $P$ with states $S_1,\ldots,S_n$. Construct the equivalent electrical network in which the nodes are the states $S_k$ and the conductance is $g_{kl} = p_{kl}\,\pi_k = p_{lk}\,\pi_l$, $\pi$ being the stationary distribution. Then $p_{kl} = \frac{g_{kl}}{g_k}$, where $g_k = \sum_{m=1}^{n} g_{km}$, and $g_k$ becomes the stationary probability at $S_k$. We will need to manipulate clumps of arborescences, or "orchards", of the weighted directed graph represented by our Markov chain, so we need some more notation (here the weight of the directed edge from $k$ to $l$ is taken as $p_{kl}$). An orchard is a rooted forest: if we take a forest $f$, choose a root for every connected component of it, and direct all the edges in each connected component so that there is a directed path from every state to the root of the component it is in, we get an orchard.
An orchard is fully determined by the forest that is its imprint in the graph and by the set of nodes (or states) that are its roots. The orchard corresponding to a forest $f$ and root set $R$ will be denoted by $[f,R]$. We shall denote the product of its edge transition probabilities by $o[f,R]$. Let $R$ be a subset of $S$. Assume for convenience that $S_1$ is not in $R$, but that $S_2$ is.

Consider a random walk that originates at $S_1$ at time 0. Impose a stopping rule according to which we stop the walk the first time the walker reaches a state in $R$. We shall now give formulae for the expected duration of the walk (which we shall henceforth call $\tau_R$), and for the probability that the walk terminates at $S_2$. The electrical equivalent of this problem is as follows (from what we did in Section 2): the voltages of the states in $Q = R\cup\{S_1\}$ are fixed externally, with the voltages of the states in $R$ set to 0. We do not know $v_1$ (which is the expected number of visits to $S_1$ divided by $g_1$), but we do know that $J_1 = 1$, since the walker is known to start from $S_1$ with probability 1. From the VJ Theorem, we have
$$1 = J_1 = \frac{\sum_{h\in F_R} h\,(v_1 - v_h(1))}{\sum_{f\in F_Q} f}.$$
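To make this electrical translation concrete, the sketch below (our own code; the five-state reversible chain is an arbitrary example built from a symmetric conductance matrix) computes the probability that the walk from $S_1$ terminates at $S_2$ in two ways: from the absorption equations of the chain, and as the harmonic voltage on the network with 1 volt at $S_2$ and 0 on the rest of $R$. The two computations are the same linear system up to multiplication by the $g_k$.

```python
import numpy as np

# symmetric conductances of a connected example network (arbitrary values)
G = np.array([[0.0, 1.0, 2.0, 0.0, 1.0],
              [1.0, 0.0, 1.0, 2.0, 0.0],
              [2.0, 1.0, 0.0, 1.0, 1.0],
              [0.0, 2.0, 1.0, 0.0, 2.0],
              [1.0, 0.0, 1.0, 2.0, 0.0]])
g = G.sum(axis=1)
P = G / g[:, None]               # the reversible chain of the correspondence

start, R, target = 0, [3, 4], 3  # walk from state 0, stop on reaching R
free = [0, 1, 2]

# (i) absorption probabilities from the chain: p = Q p + b
Q = P[np.ix_(free, free)]
b = P[np.ix_(free, [target])].ravel()
p_absorb = np.linalg.solve(np.eye(len(free)) - Q, b)

# (ii) harmonic voltage on the network: 1 volt at `target`, 0 on the rest of R
hR = np.array([1.0, 0.0])        # boundary values on R = [3, 4]
L = np.diag(g) - G               # weighted Laplacian of the network
h_free = np.linalg.solve(L[np.ix_(free, free)], G[np.ix_(free, R)] @ hR)

# the probability of terminating at `target` is the voltage at the start
assert np.allclose(p_absorb, h_free)
```

Multiplying the absorption equation for state $k$ by $g_k$ turns $p = Qp + b$ into exactly the harmonic (Kirchhoff current law) system solved in step (ii), which is why the assertion holds identically rather than approximately.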