Asymptotics for the normalized error of the Ninomiya-Victoir scheme

A. Al Gerbi, B. Jourdain* and E. Clément†

February 4, 2016

arXiv:1601.05268v2 [math.PR] 3 Feb 2016

* Université Paris-Est, Cermics (ENPC), INRIA, F-77455, Marne-la-Vallée, France, e-mails: [email protected], [email protected]. This research benefited from the support of the "Chaire Risques Financiers", Fondation du Risque.
† Université Paris-Est, LAMA (UMR 8050), UPEMLV, UPEC, CNRS, F-77454, Marne-la-Vallée, France, e-mail: [email protected].

In [1] we proved strong convergence with order 1/2 of the Ninomiya-Victoir scheme $X^{NV,\eta}$ with time step $T/N$ to the solution $X$ of the limiting SDE. In this paper we check that the normalized error defined by $\sqrt{N}\left(X - X^{NV,\eta}\right)$ converges to an affine SDE with source terms involving the Lie brackets between the Brownian vector fields. The limit does not depend on the Rademacher random variables $\eta$. This result can be seen as a first step to adapt to the Ninomiya-Victoir scheme the central limit theorem of Lindeberg-Feller type derived in [2] for the multilevel Monte Carlo estimator based on the Euler scheme. When the Brownian vector fields commute, the limit vanishes. This suggests that the rate of convergence is greater than 1/2 in this case, and we actually prove strong convergence with order 1.

1 Introduction

We consider a general $n$-dimensional stochastic differential equation, driven by a $d$-dimensional standard Brownian motion $W = \left(W^1,\dots,W^d\right)$, of the form

$$dX_t = b(X_t)\,dt + \sum_{j=1}^{d}\sigma^j(X_t)\,dW^j_t,\quad t\in[0,T],\qquad X_0 = x, \qquad (1.1)$$

where $x\in\mathbb{R}^n$ is the starting point, $b:\mathbb{R}^n\longrightarrow\mathbb{R}^n$ is the drift coefficient and $\sigma^j:\mathbb{R}^n\longrightarrow\mathbb{R}^n$, $j\in\{1,\dots,d\}$, are the Brownian vector fields. We are interested in the study of the normalized error process for the Ninomiya-Victoir scheme. To do so, we consider throughout the paper a regular time discretization of the interval $[0,T]$ with time step $h = T/N$. We introduce some notation to define the Ninomiya-Victoir scheme.

• Let $(t_k = kh)_{k\in[\![0;N]\!]}$ be the subdivision of $[0,T]$ with equal time step $h$.
• $\Delta W^j_s = W^j_s - W^j_{t_k}$, for $s\in(t_k,t_{k+1}]$ and $j\in\{1,\dots,d\}$.
• $\Delta s = s - t_k$, for $s\in(t_k,t_{k+1}]$.
• Let $\eta = (\eta_k)_{k\ge1}$ be a sequence of independent, identically distributed Rademacher random variables independent of $W$.

For $V:\mathbb{R}^n\longrightarrow\mathbb{R}^n$ Lipschitz continuous, $\exp(tV)x_0$ denotes the solution, at time $t\in\mathbb{R}$, of the following ordinary differential equation in $\mathbb{R}^n$:

$$\frac{dx(t)}{dt} = V(x(t)),\qquad x(0) = x_0. \qquad (1.2)$$

To deal with the Ninomiya-Victoir scheme, it is more convenient to rewrite the stochastic differential equation (1.1) in Stratonovich form. Assuming $\mathcal{C}^1$ regularity of the vector fields, the Stratonovich form of (1.1) is given by

$$dX_t = \sigma^0(X_t)\,dt + \sum_{j=1}^{d}\sigma^j(X_t)\circ dW^j_t,\qquad X_0 = x, \qquad (1.3)$$

where $\sigma^0 = b - \frac{1}{2}\sum_{j=1}^{d}\partial\sigma^j\sigma^j$ and $\partial\sigma^j$ is the Jacobian matrix of $\sigma^j$ defined as follows:

$$\partial\sigma^j = \left(\left(\partial\sigma^j\right)_{ik}\right)_{i,k\in[\![1;n]\!]} = \left(\partial_{x_k}\sigma^{ij}\right)_{i,k\in[\![1;n]\!]}. \qquad (1.4)$$

Now we present the Ninomiya-Victoir scheme, introduced in [11].

• Starting point: $X^{NV,\eta}_{t_0} = x$.
• For $k\in\{0,\dots,N-1\}$, if $\eta_{k+1} = 1$:

$$X^{NV,\eta}_{t_{k+1}} = \exp\left(\frac{h}{2}\sigma^0\right)\exp\left(\Delta W^d_{t_{k+1}}\sigma^d\right)\dots\exp\left(\Delta W^1_{t_{k+1}}\sigma^1\right)\exp\left(\frac{h}{2}\sigma^0\right)X^{NV,\eta}_{t_k}, \qquad (1.5)$$

and if $\eta_{k+1} = -1$:

$$X^{NV,\eta}_{t_{k+1}} = \exp\left(\frac{h}{2}\sigma^0\right)\exp\left(\Delta W^1_{t_{k+1}}\sigma^1\right)\dots\exp\left(\Delta W^d_{t_{k+1}}\sigma^d\right)\exp\left(\frac{h}{2}\sigma^0\right)X^{NV,\eta}_{t_k}. \qquad (1.6)$$
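Each update (1.5)-(1.6) is thus a composition of ODE flows: a half-step of the drift flow $\exp\left(\frac{h}{2}\sigma^0\right)$, the Brownian flows $\exp\left(\Delta W^j\sigma^j\right)$ applied in ascending order of $j$ when $\eta_{k+1}=1$ and in descending order when $\eta_{k+1}=-1$, and a final half-step of the drift flow. The following minimal Python sketch of one step assumes the flows are supplied in closed form; all names are illustrative and not taken from the paper.

```python
import numpy as np

def nv_step(x, h, dW, eta, flow0, flows):
    """One Ninomiya-Victoir step (1.5)-(1.6).

    x     : current state, numpy array of shape (n,)
    h     : time step T/N
    dW    : Brownian increments over the step, shape (d,)
    eta   : Rademacher variable, +1 or -1
    flow0 : callable (t, x) -> exp(t sigma^0) x, the Stratonovich drift flow
    flows : list of d callables, flows[j](t, x) -> exp(t sigma^{j+1}) x
    """
    y = flow0(0.5 * h, x)
    order = range(len(flows)) if eta == 1 else reversed(range(len(flows)))
    for j in order:               # fields act in ascending order when eta = +1
        y = flows[j](dW[j], y)    # Brownian flow evaluated at the increment
    return flow0(0.5 * h, y)

# Example with closed-form flows: dX = mu X dt + s X dW (geometric Brownian
# motion), for which sigma^0(x) = (mu - s^2/2) x.
mu, s = 0.1, 0.3
flow0 = lambda t, x: x * np.exp((mu - 0.5 * s ** 2) * t)
flow1 = lambda t, x: x * np.exp(s * t)
x_next = nv_step(np.array([1.0]), 0.01, np.array([0.05]), 1, flow0, [flow1])
```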
The strong convergence properties of a numerical scheme which approximates the diffusion (1.1) are useful to control the variance of the multilevel Monte Carlo estimator based on this scheme (see [5] and [9]). This motivated our study of the strong convergence of the Ninomiya-Victoir scheme in [1]. More precisely, under some regularity assumptions on the coefficients of the SDE, we proved strong convergence with order 1/2:

$$\forall p\ge1,\ \exists C_{NV}\in\mathbb{R}^*_+,\ \forall N\in\mathbb{N}^*,\quad \mathbb{E}\left[\max_{0\le k\le N}\left\|X_{t_k} - X^{NV,\eta}_{t_k}\right\|^{2p}\right] \le C_{NV}\left(1 + \|x\|^{2p}\right)h^p. \qquad (1.7)$$

In the present paper, we focus on the convergence in law of the normalized error defined by $\sqrt{N}\left(X - X^{NV,\eta}\right)$. The asymptotic distribution of the normalized error for the continuous-time Euler scheme was established by Kurtz and Protter in [8]. The asymptotic behavior of the normalized error process for the continuous-time Milstein scheme [10], which is known to exhibit strong convergence with order 1, was studied by Yan in [14]. In both cases, the normalized error converges to the solution of an affine SDE with a source term involving additional randomness, given by a Brownian motion independent of the one driving both the SDE and the scheme.

This paper is organized as follows. In Section 2, we recall basic facts about the theory of stable convergence in law, introduced by Rényi [12] and developed by Jacod [6] and Jacod-Protter [7]. In Section 3, we discuss the interpolation between time grid points and then derive the asymptotic error distribution for the Ninomiya-Victoir scheme in the general case. More precisely, we prove the stable convergence in law of $\sqrt{N}\left(X - X^{NV,\eta}\right)$ to the solution of the following SDE:

$$V_t = \sqrt{\frac{T}{2}}\sum_{j=1}^{d}\sum_{m=1}^{j-1}\int_0^t\left[\sigma^j,\sigma^m\right](X_s)\,dB^{j,m}_s + \int_0^t\partial b(X_s)V_s\,ds + \sum_{j=1}^{d}\int_0^t\partial\sigma^j(X_s)V_s\,dW^j_s,$$

where $\left[\sigma^j,\sigma^m\right] = \partial\sigma^m\sigma^j - \partial\sigma^j\sigma^m$, for $j,m\in\{1,\dots,d\}$, $m<j$, denotes the Lie bracket between the Brownian vector fields $\sigma^j$ and $\sigma^m$, $\partial b$ is the Jacobian matrix of $b$, defined analogously to (1.4), and $(B_t)_{0\le t\le T}$ is a standard $\frac{d(d-1)}{2}$-dimensional Brownian motion independent of $W$. This result ensures that the strong convergence rate is actually 1/2. Moreover, it can be seen as a first step to adapt to the Ninomiya-Victoir scheme the central limit theorem of Lindeberg-Feller type derived by Ben Alaya and Kebaier in [2] for the multilevel Monte Carlo estimator based on the Euler scheme. Their approach leads to an accurate description of the optimal choice of the parameters for the multilevel Monte Carlo estimator. When the Brownian vector fields commute, the limit vanishes, which suggests that the rate of convergence is greater than 1/2. In Section 4, we focus on the commutative case and provide a suitable interpolation between time grid points to show strong convergence with order 1.
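The Lie brackets $\left[\sigma^j,\sigma^m\right] = \partial\sigma^m\sigma^j - \partial\sigma^j\sigma^m$ appearing in the source term of the limit can be checked symbolically for a given model; when all brackets vanish (the commutative case), the limit above is identically zero. Below is a small sketch using sympy, with example vector fields that are purely illustrative.

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
X = sp.Matrix([x1, x2])

def lie_bracket(sigma_j, sigma_m, X):
    # [sigma^j, sigma^m] = (Jacobian of sigma^m) sigma^j - (Jacobian of sigma^j) sigma^m
    return sp.simplify(sigma_m.jacobian(X) * sigma_j - sigma_j.jacobian(X) * sigma_m)

sigma1 = sp.Matrix([x1, 0])        # example Brownian vector field
sigma2 = sp.Matrix([0, x1 * x2])   # example Brownian vector field
print(lie_bracket(sigma1, sigma2, X))   # nonzero: these two fields do not commute
```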
2 Stable convergence

We start with the definition of stable convergence in law, which is stronger than convergence in law.

Definition 2.1 Let $\left(Z^N\right)_{N\in\mathbb{N}}$ be a sequence of random variables, all defined on the same probability space $(\Omega,\mathcal{F},\mathbb{P})$ and with values in a metric space $(E,d)$. Let $(\Omega^*,\mathcal{F}^*,\mathbb{P}^*)$ be an "extension" of $(\Omega,\mathcal{F},\mathbb{P})$, and let $Z$ be an $E$-valued random variable on this extension. The sequence $\left(Z^N\right)_{N\in\mathbb{N}}$ stably converges in law to $Z$, and we write this convergence as

$$Z^N \overset{\text{stably}}{\underset{N\to+\infty}{\Longrightarrow}} Z,$$

if, and only if, for all bounded continuous $f:E\longrightarrow\mathbb{R}$ and for all bounded random variables $\Xi$ on $(\Omega,\mathcal{F},\mathbb{P})$:

$$\mathbb{E}\left[f\left(Z^N\right)\Xi\right]\underset{N\to+\infty}{\longrightarrow}\mathbb{E}^*\left[f(Z)\Xi\right].$$

We do not go into the details of the definition of an "extension" (see [6] for more information). The purpose of this section is to recall basic facts about stable convergence in order to study a sequence of stochastic differential equations in $\mathbb{R}^n$ of the form

$$U^N_t = R^N_t + J^N_t + \int_0^t H^{0,N}_s U^N_s\,ds + \sum_{j=1}^{d}\int_0^t H^{j,N}_s U^N_s\,dW^j_s, \qquad (2.1)$$

where $H^{j,N}$, $j\in\{0,\dots,d\}$, take values in $\mathbb{R}^n\otimes\mathbb{R}^n$, $R^N$ is a remainder term and $J^N$ a source term. This is motivated by the decomposition (3.25) of the error process. The following fundamental proposition will be used to study the stable convergence in law of a sequence of couples of random variables (see Section 2-1 in [6]).

Proposition 2.2 Let $\left(\Lambda^N\right)_{N\in\mathbb{N}}$ and $\left(\Gamma^N\right)_{N\in\mathbb{N}}$ be two sequences of random variables, all defined on the same probability space $(\Omega,\mathcal{F},\mathbb{P})$, with values in a metric space $(E,d)$, and let $\Lambda$ be a random variable on an extension, with values in $(E,d)$. Let $\left(\Theta^N\right)_{N\in\mathbb{N}}$ be a sequence of random variables and $\Theta$ a random variable, all defined on $(\Omega,\mathcal{F},\mathbb{P})$, with values in another metric space $(E',d')$. Then

(i) if $\Lambda^N \overset{\text{stably}}{\underset{N\to+\infty}{\Longrightarrow}} \Lambda$ and $d\left(\Lambda^N,\Gamma^N\right)\overset{\mathbb{P}}{\underset{N\to+\infty}{\longrightarrow}}0$, then $\Gamma^N\overset{\text{stably}}{\underset{N\to+\infty}{\Longrightarrow}}\Lambda$, (2.2)

(ii) if $\Lambda^N \overset{\text{stably}}{\underset{N\to+\infty}{\Longrightarrow}} \Lambda$ and $d'\left(\Theta^N,\Theta\right)\overset{\mathbb{P}}{\underset{N\to+\infty}{\longrightarrow}}0$, then $\left(\Lambda^N,\Theta^N\right)\overset{\text{stably}}{\underset{N\to+\infty}{\Longrightarrow}}(\Lambda,\Theta), (2.3)$

for the product topology on $E\times E'$.

In the following, we work on the filtered probability space $(\Omega,\mathcal{F},\mathbb{F},\mathbb{P})$, where $\mathbb{F} = \left(\mathcal{F}_t = \sigma(\eta, W_s, s\le t)\right)_{t\in[0,T]}$. We consider the metric space $E = \mathcal{C}([0,T],\mathbb{R}^n)$ equipped with the supremum norm. The following theorem, dedicated to the convergence of a sequence of semimartingales, is a simplified version of Theorem 2.1 in [6].

Theorem 2.3 Let $\left(Y^N\right)_{N\in\mathbb{N}}$ be a sequence of continuous semimartingales with values in $\mathbb{R}^p$ such that $Y^N = M^N + A^N$, $\forall N\in\mathbb{N}$, where $M^N$ is a sequence of continuous $\mathbb{F}$-local martingales null at $t=0$ and $A^N$ is a sequence of $\mathbb{F}$-predictable continuous processes with finite variation. Assume that there exist $A$ and $f$ such that:

1. $\sup_{t\le T}\left\|A^N_t - A_t\right\|\overset{\mathbb{P}}{\underset{N\to+\infty}{\longrightarrow}}0$, (2.4)

2. $\forall i,j\in\{1,\dots,p\},\ \forall t\in[0,T],\quad \left\langle M^{i,N},M^{j,N}\right\rangle_t\overset{\mathbb{P}}{\underset{N\to+\infty}{\longrightarrow}}F^{ij}_t = \int_0^t f^{ij}_s\,ds$, (2.5)

3. $\forall i\in\{1,\dots,p\},\ \forall k\in\{1,\dots,d\},\ \forall t\in[0,T],\quad \left\langle M^{i,N},W^k\right\rangle_t\overset{\mathbb{P}}{\underset{N\to+\infty}{\longrightarrow}}0$. (2.6)

Then

$$Y^N\overset{\text{stably}}{\underset{N\to+\infty}{\Longrightarrow}}Y, \qquad (2.7)$$

where

$$Y_t = A_t + \int_0^t (f_s)^{\frac12}\,dB_s, \qquad (2.8)$$

$(f_s)^{\frac12}$ is the square root of the positive semi-definite matrix $f_s = \left(f^{ij}_s\right)_{i,j\in[\![1;p]\!]}$ and $B$ is a $p$-dimensional standard Brownian motion defined on a Wiener space $\left(\Omega^B,\mathcal{F}^B,\mathbb{P}^B\right)$ and independent of $W$. The stable convergence takes place in the canonical Wiener extension of $W$, denoted by $(\Omega^*,\mathcal{F}^*,\mathbb{P}^*)$ and defined as follows:

$$\Omega^* = \Omega\times\Omega^B,\qquad \mathcal{F}^* = \mathcal{F}\otimes\mathcal{F}^B,\qquad \mathbb{P}^* = \mathbb{P}\otimes\mathbb{P}^B.$$

In comparison with Theorem 2.1 of [6], the assumption $\left\langle M^{i,N},N\right\rangle_t = 0$, $\forall i\in\{1,\dots,p\}$ and $N$ a bounded martingale orthogonal to $W$, is obvious, since we can write $M^{i,N}$ as an Itô integral with respect to the Brownian motion $W$ by using the martingale representation theorem.
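In a numerical illustration of Theorem 2.3, simulating the limiting process $Y$ requires the square root $(f_s)^{\frac12}$ of the positive semi-definite matrix $f_s$ at each time. One possible way to compute it, sketched below as an assumption rather than a prescription of the paper, is through an eigendecomposition.

```python
import numpy as np

def psd_sqrt(f):
    """Symmetric square root of a symmetric positive semi-definite matrix f."""
    w, V = np.linalg.eigh(f)
    w = np.clip(w, 0.0, None)   # guard against tiny negative eigenvalues
    return V @ np.diag(np.sqrt(w)) @ V.T

f = np.array([[2.0, 1.0], [1.0, 2.0]])
R = psd_sqrt(f)
print(np.allclose(R @ R, f))    # True: R is a square root of f
```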
We will use Theorem 2.3, together with the following proposition, to study the source term $J^N$ in the decomposition (2.1). This proposition is a consequence of Theorem 2.3 in [7] (see the proof of Theorem 2.5 (c) in [7]).

Proposition 2.4 Let $\left(Y^N\right)_{N\in\mathbb{N}}$ be a sequence of continuous semimartingales with values in $\mathbb{R}^p$ such that $Y^N_t = Y^N_0 + M^N_t + A^N_t$, $\forall N\in\mathbb{N}$, $\forall t\in[0,T]$, where $M^N$ is a sequence of continuous $\mathbb{F}$-local martingales null at $t=0$ and $A^N$ is a sequence of $\mathbb{F}$-predictable continuous processes with finite variation null at $t=0$. Assume that the sequence $\left(\left\langle M^N\right\rangle_T + \int_0^T\left|dA^N_s\right|\right)_{N\in\mathbb{N}}$ is tight. Then, for any sequence $\left(K^N\right)_{N\in\mathbb{N}}$ of $\mathbb{F}$-predictable, right-continuous processes with left-hand limits, with values in $\mathbb{R}^q\otimes\mathbb{R}^p$, such that the sequence $\left(K^N,Y^N\right)$ stably converges in law to a limit $(K,Y)$, we have the following result: $Y$ is a semimartingale with respect to the filtration generated by the limit process $(K,Y)$ and

$$\left(K^N,Y^N,\int K^N\,dY^N\right)\overset{\text{stably}}{\underset{N\to+\infty}{\Longrightarrow}}\left(K,Y,\int K\,dY\right), \qquad (2.9)$$

where $\int K^N\,dY^N = \left(\int_0^t K^N_s\,dY^N_s\right)_{t\in[0,T]}$ and $\int K\,dY = \left(\int_0^t K_s\,dY_s\right)_{t\in[0,T]}$.

The following theorem deals with a sequence of stochastic differential equations in $\mathbb{R}^n$ of the form

$$U^N_t = R^N_t + J^N_t + \sum_{j=0}^{d}\int_0^t H^{j,N}_s U^N_s\,dW^j_s, \qquad (2.10)$$

where, by convention, $dW^0_s = ds$, $\left(J^N\right)_{N\in\mathbb{N}}$ is a sequence of continuous adapted processes, and, for $j\in\{0,\dots,d\}$, $\left(H^{j,N}\right)_{N\in\mathbb{N}}$ is a sequence of $\mathbb{F}$-predictable, right-continuous processes with left-hand limits, with values in $\mathbb{R}^n\otimes\mathbb{R}^n$.

Theorem 2.5 Assume that there exist $\left(H^j\right)_{0\le j\le d}$ and $J$ such that:

• $\forall j\in\{0,\dots,d\},\quad \sup_{t\le T}\left\|H^{j,N}_t - H^j_t\right\|\overset{\mathbb{P}}{\underset{N\to+\infty}{\longrightarrow}}0$,
• $J^N\overset{\text{stably}}{\underset{N\to+\infty}{\Longrightarrow}}J$,
• $\sup_{t\le T}\left\|R^N_t\right\|\overset{\mathbb{P}}{\underset{N\to+\infty}{\longrightarrow}}0$.

Then $U^N$ stably converges in law towards $U$, where $U$ is the unique solution of the following affine stochastic differential equation:

$$U_t = J_t + \sum_{j=0}^{d}\int_0^t H^j_s U_s\,dW^j_s. \qquad (2.11)$$

Proof: On the one hand, denoting by

$$V^N_t = \sum_{j=0}^{d}\int_0^t H^{j,N}_s\,dW^j_s,$$

the first assumption ensures that

$$\sup_{t\le T}\left\|V^N_t - V_t\right\|\overset{\mathbb{P}}{\underset{N\to+\infty}{\longrightarrow}}0,$$

where

$$V_t = \sum_{j=0}^{d}\int_0^t H^j_s\,dW^j_s.$$

On the other hand, (2.2) from Proposition 2.2 gives us

$$R^N + J^N\overset{\text{stably}}{\underset{N\to+\infty}{\Longrightarrow}}J.$$

Then, applying (2.3) from Proposition 2.2, we have

$$\left(R^N+J^N,V^N\right)\overset{\text{stably}}{\underset{N\to+\infty}{\Longrightarrow}}(J,V).$$

Finally, since $\left(\sup_{t\le T}\left\|H^N_t\right\|\right)_{N\in\mathbb{N}^*}$ is tight, we get the desired result using Theorem 2.5 (c) in [7].

3 Asymptotic error distribution for the Ninomiya-Victoir scheme in the general case

3.1 Main result

In order to study the stable convergence in law of the normalized error process, we have to build an interpolated scheme. Let us first introduce some more notation.

• Let $\hat\tau_s$ be the last discretization time before $s\in[0,T]$, i.e. $\hat\tau_s = t_k$ if $s\in(t_k,t_{k+1}]$, and for $s = t_0 = 0$ we set $\hat\tau_0 = t_0$.
• Let $\check\tau_s$ be the first discretization time after $s\in[0,T]$, i.e. $\check\tau_s = t_{k+1}$ if $s\in(t_k,t_{k+1}]$, and for $s = t_0 = 0$ we set $\check\tau_0 = 0$.
• By a slight abuse of notation, we set $\eta_s = \eta_{k+1}$ if $s\in(t_k,t_{k+1}]$.

A natural and adapted interpolation, at time $t\in[0,T]$, for the Ninomiya-Victoir scheme could be defined as follows:

$$h_{\eta_t}\left(\frac{\Delta t}{2},\Delta W_t,\frac{\Delta t}{2}; X^{NV,\eta}_{\hat\tau_t}\right), \qquad (3.1)$$

where $\Delta W_t = \left(\Delta W^1_t,\dots,\Delta W^d_t\right)$,

$$h_{-1}(t_0,\dots,t_{d+1};x) = \exp\left(t_0\sigma^0\right)\exp\left(t_1\sigma^1\right)\dots\exp\left(t_d\sigma^d\right)\exp\left(t_{d+1}\sigma^0\right)x, \qquad (3.2)$$

and

$$h_1(t_0,\dots,t_{d+1};x) = \exp\left(t_0\sigma^0\right)\exp\left(t_d\sigma^d\right)\dots\exp\left(t_1\sigma^1\right)\exp\left(t_{d+1}\sigma^0\right)x. \qquad (3.3)$$
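When the flows $\exp\left(t\sigma^j\right)$ appearing in (3.2)-(3.3), and in the scheme itself, admit no closed form, they can be approximated by a standard ODE integrator; note that the Brownian flows are evaluated at the possibly negative times $\Delta W^j$. The sketch below rests on that assumption and uses scipy; the helper name is illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

def ode_flow(V, t, x0, rtol=1e-10, atol=1e-12):
    """Approximate exp(t V) x0, the time-t solution of x' = V(x), x(0) = x0.
    Negative t is handled by integrating the reversed field x' = -V(x)."""
    x0 = np.asarray(x0, dtype=float)
    if t == 0.0:
        return x0
    sign = 1.0 if t > 0 else -1.0
    sol = solve_ivp(lambda s, x: sign * V(x), (0.0, abs(t)), x0, rtol=rtol, atol=atol)
    return sol.y[:, -1]

# Illustration: for a linear field V(x) = A x the flow is the matrix exponential exp(t A) x0.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
print(ode_flow(lambda x: A @ x, 0.5, np.array([1.0, 0.0])))
```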
Here, to compute the Itô decomposition of $\left(h_{\eta_t}\left(\frac{\Delta t}{2},\Delta W_t,\frac{\Delta t}{2};X^{NV,\eta}_{\hat\tau_t}\right)\right)_{t\in[0,T]}$, the main difficulty is to make explicit the derivatives of $h_\eta$. In the general case, the computation of the derivatives of this function is quite complicated. For this reason, in this paper we use the interpolation of the Ninomiya-Victoir scheme introduced in [1]:

$$dX^{NV,\eta}_t = \sum_{j=1}^{d}\sigma^j\left(\bar X^{j,\eta}_t\right)dW^j_t + \frac12\sum_{j=1}^{d}\partial\sigma^j\sigma^j\left(\bar X^{j,\eta}_t\right)dt + \frac12\left(\sigma^0\left(\bar X^{0,\eta}_t\right)+\sigma^0\left(\bar X^{d+1,\eta}_t\right)\right)dt,\qquad X^{NV,\eta}_0 = x, \qquad (3.4)$$

where, for $s\in(t_k,t_{k+1}]$:

$$\bar X^{0,\eta}_s = \exp\left(\frac{\Delta s}{2}\sigma^0\right)\left(X^{NV,\eta}_{t_k}\mathbf 1_{\{\eta_{k+1}=1\}} + \bar X^{1,\eta}_{t_{k+1}}\mathbf 1_{\{\eta_{k+1}=-1\}}\right), \qquad (3.5)$$

$$\text{for } j\in\{1,\dots,d\},\quad \bar X^{j,\eta}_s = \exp\left(\Delta W^j_s\sigma^j\right)\left(\bar X^{j-1,\eta}_{t_{k+1}}\mathbf 1_{\{\eta_{k+1}=1\}} + \bar X^{j+1,\eta}_{t_{k+1}}\mathbf 1_{\{\eta_{k+1}=-1\}}\right), \qquad (3.6)$$

$$\bar X^{d+1,\eta}_s = \exp\left(\frac{\Delta s}{2}\sigma^0\right)\left(\bar X^{d,\eta}_{t_{k+1}}\mathbf 1_{\{\eta_{k+1}=1\}} + X^{NV,\eta}_{t_k}\mathbf 1_{\{\eta_{k+1}=-1\}}\right). \qquad (3.7)$$

Although the stochastic processes $\left(\bar X^{j,\eta}_t\right)_{t\in[0,T]}$, $j\in\{1,\dots,d\}$, are not adapted to the filtration $\mathbb{F}$, each stochastic integral in (3.4) is well defined. Indeed, $\left(\bar X^{j,\eta}_t\right)_{t\in[0,T]}$ is adapted with respect to the filtration $\left(\sigma\left(\eta, W^j_s, s\le t\right)\vee\sigma\left(W^k_s, s\le T, k\ne j\right)\right)_{t\in[0,T]}$, for $j\in\{1,\dots,d\}$. Then, by independence, $W^j$ is also a Brownian motion with respect to this filtration and the stochastic integral $\int_0^t\sigma^j\left(\bar X^{j,\eta}_s\right)dW^j_s$ is well defined for all $t\in[0,T]$. Using this interpolation, we proved in [1] the strong convergence with order 1/2. More precisely:

Theorem 3.1 Assume that:

• $\forall j\in\{1,\dots,d\}$, $\sigma^j\in\mathcal{C}^1(\mathbb{R}^n,\mathbb{R}^n)$,
• $\sigma^0$, $\sigma^j$ and $\partial\sigma^j\sigma^j$, $\forall j\in\{1,\dots,d\}$, are Lipschitz continuous functions.

Then, $\forall p\ge1$, $\exists C_{NV}\in\mathbb{R}^*_+$, $\forall N\in\mathbb{N}^*$:

$$\mathbb{E}\left[\sup_{t\le T}\left\|X_t - X^{NV,\eta}_t\right\|^{2p}\right]\le C_{NV}\left(1+\|x\|^{2p}\right)h^p. \qquad (3.8)$$

Then, the normalized error process is defined as follows:

$$V^N = \sqrt{N}\left(X - X^{NV,\eta}\right). \qquad (3.9)$$

In this section, we check that the normalized error $V^N$ converges to the solution of an affine SDE with source terms. Here is the main result.

Theorem 3.2 Assume that:

• $\sigma^0\in\mathcal{C}^2(\mathbb{R}^n,\mathbb{R}^n)$ is a Lipschitz continuous function with polynomially growing second-order derivatives,
• $\forall j\in\{1,\dots,d\}$, $\sigma^j\in\mathcal{C}^2(\mathbb{R}^n,\mathbb{R}^n)$ is Lipschitz continuous together with its first-order derivatives,
• $\forall j,m\in\{1,\dots,d\}$, $\partial\sigma^j\sigma^m$ is Lipschitz continuous,
• $\forall j\in\{1,\dots,d\}$, $\partial\sigma^j\sigma^j\in\mathcal{C}^2(\mathbb{R}^n,\mathbb{R}^n)$ with polynomially growing second-order derivatives.

Then:

$$V^N\overset{\text{stably}}{\underset{N\to+\infty}{\Longrightarrow}}V, \qquad (3.10)$$

where $V$ is the unique solution of the following affine equation:

$$V_t = \sqrt{\frac{T}{2}}\sum_{j=1}^{d}\sum_{m=1}^{j-1}\int_0^t\left[\sigma^j,\sigma^m\right](X_s)\,dB^{j,m}_s + \int_0^t\partial b(X_s)V_s\,ds + \sum_{j=1}^{d}\int_0^t\partial\sigma^j(X_s)V_s\,dW^j_s, \qquad (3.11)$$

with $\left[\sigma^j,\sigma^m\right] = \partial\sigma^m\sigma^j - \partial\sigma^j\sigma^m$, and $(B_t)_{0\le t\le T}$ a standard $\frac{d(d-1)}{2}$-dimensional Brownian motion independent of $W$.
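Theorem 3.2 lends itself to a numerical illustration: the pair $(X,V)$ solves a system driven by $W$ and by the extra Brownian motion $B$, and a plain Euler-Maruyama discretization of (1.1) and (3.11) already captures the limit behaviour. The sketch below is only illustrative; all callables (drift, diffusion, Jacobians and bracket evaluations) are user-supplied assumptions and their names are not taken from the paper.

```python
import numpy as np

def simulate_X_and_V(b, sigma, jac_b, jac_sigma, bracket, x0, T, n_steps, d, rng):
    """Euler-Maruyama sketch for X (equation (1.1)) and the limit error V (3.11).

    sigma(x, j)      -> sigma^j(x), shape (n,)
    jac_b(x)         -> Jacobian of b at x, shape (n, n)
    jac_sigma(x, j)  -> Jacobian of sigma^j at x, shape (n, n)
    bracket(x, j, m) -> [sigma^j, sigma^m](x), shape (n,), for m < j
    """
    h = T / n_steps
    X = np.array(x0, dtype=float)
    V = np.zeros_like(X)
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(h), size=d)
        dB = rng.normal(0.0, np.sqrt(h), size=d * (d - 1) // 2)
        dV = jac_b(X) @ V * h
        for j in range(d):
            dV = dV + jac_sigma(X, j) @ V * dW[j]
        idx = 0
        for j in range(d):
            for m in range(j):
                dV = dV + np.sqrt(T / 2.0) * bracket(X, j, m) * dB[idx]
                idx += 1
        dX = b(X) * h
        for j in range(d):
            dX = dX + sigma(X, j) * dW[j]
        X, V = X + dX, V + dV
    return X, V
```

With $X^{NV,\eta}$ computed on the same Brownian path, $\sqrt{N}\left(X_{t_k} - X^{NV,\eta}_{t_k}\right)$ should then be close in law to $V_{t_k}$ for large $N$.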
3.2 Discrete scheme

To compute the asymptotic error distribution, the method consists in writing the normalized error in the form (2.1). Since the interpolation (3.4) is not adapted to the natural filtration of the Brownian motion $W$, we were not able to derive a decomposition of the form (2.1) with $V^N$ replacing $U^N$. To get around this problem, we build an adapted approximation $\hat X^{D,\eta}$ of $X^{NV,\eta}$, with order $1-\epsilon$ for every $\epsilon>0$, and introduce $U^N = \sqrt{N}\left(X - \hat X^{D,\eta}\right)$. Then we obtain a decomposition of the form (2.1) (see (3.25)) and study the stable convergence in law of $U^N$ to deduce the convergence of $V^N$. The approximation is defined as follows:

$$\hat X^{D,\eta}_t = \hat X^{D,\eta}_{\hat\tau_t} + b\left(\hat X^{D,\eta}_{\hat\tau_t}\right)\Delta t + \sum_{j=1}^{d}\sigma^j\left(\hat X^{D,\eta}_{\hat\tau_t}\right)\Delta W^j_t + \frac12\sum_{j=1}^{d}\partial\sigma^j\sigma^j\left(\hat X^{D,\eta}_{\hat\tau_t}\right)\left(\left(\Delta W^j_t\right)^2 - \Delta t\right) + \sum_{\eta_t m<\eta_t j}\partial\sigma^j\sigma^m\left(\hat X^{D,\eta}_{\hat\tau_t}\right)\Delta W^m_t\Delta W^j_t,\qquad \hat X^{D,\eta}_0 = x. \qquad (3.12)$$

In the following proposition, we compare $X^{NV,\eta}$ and $\hat X^{D,\eta}$.

Proposition 3.3 Under the assumptions of Theorem 3.2:

$$\forall p\ge1,\ \forall\epsilon>0,\ \exists C_D\in\mathbb{R}^*_+,\ \forall N\in\mathbb{N}^*,\quad \mathbb{E}\left[\sup_{t\le T}\left\|X^{NV,\eta}_t - \hat X^{D,\eta}_t\right\|^{2p}\right]\le C_D\frac{1}{N^{2p-\epsilon}}. \qquad (3.13)$$

The proof of this proposition is postponed to the Appendix. The next lemma gives estimates of the moments of $\hat X^{D,\eta}$ and of its increments. Its hypotheses are consequences of those of Theorem 3.2. We omit its standard proof.

Lemma 3.4 Assume that:

• $b\in\mathcal{C}^0(\mathbb{R}^n,\mathbb{R}^n)$ has affine growth,
• $\forall j\in\{1,\dots,d\}$, $\sigma^j$ has affine growth,
• $\forall j,m\in\{1,\dots,d\}$, $\partial\sigma^j\sigma^m$ has affine growth.

Then, $\forall p\ge1$, $\exists\hat C_D\in\mathbb{R}^*_+$, $\forall N\in\mathbb{N}^*$:

(i) $$\mathbb{E}\left[\sup_{t\le T}\left\|\hat X^{D,\eta}_t\right\|^{2p}\right]\le\hat C_D, \qquad (3.14)$$

(ii) $$\forall t\in[0,T],\quad \mathbb{E}\left[\left\|\hat X^{D,\eta}_t - \hat X^{D,\eta}_{\hat\tau_t}\right\|^{2p}\right]\le\hat C_D h^p. \qquad (3.15)$$
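For a concrete implementation, one grid step of (3.12) only requires the coefficients $b$, $\sigma^j$ and the products $\partial\sigma^j\sigma^m$; the condition $\eta_t m<\eta_t j$ selects the pairs $(j,m)$ with $m<j$ when $\eta_t=1$ and $m>j$ when $\eta_t=-1$, i.e. the order in which the Brownian fields act in (1.5)-(1.6). Below is a hedged sketch with illustrative names, where jac_sigma_sigma(x, j, m) stands for $\partial\sigma^j\sigma^m(x)$.

```python
import numpy as np

def discrete_step(x, h, dW, eta, b, sigma, jac_sigma_sigma, d):
    """Advance the adapted approximation hat{X}^{D,eta} of (3.12) over one grid step.
    x: state at t_k, dW: Brownian increments over (t_k, t_{k+1}], eta: +1 or -1."""
    y = x + b(x) * h
    for j in range(d):
        y = y + sigma(x, j) * dW[j]
        y = y + 0.5 * jac_sigma_sigma(x, j, j) * (dW[j] ** 2 - h)
    for j in range(d):
        for m in range(d):
            if eta * m < eta * j:   # m acts before j in the eta-dependent order
                y = y + jac_sigma_sigma(x, j, m) * dW[m] * dW[j]
    return y
```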
3.3 Proof of the stable convergence

We recall that $U^N = \sqrt{N}\left(X - \hat X^{D,\eta}\right)$. By Proposition 3.3, $\sup_{t\le T}\sqrt{N}\left\|\hat X^{D,\eta}_t - X^{NV,\eta}_t\right\|$ converges in probability to 0 as $N$ goes to $+\infty$. Since $V^N - U^N = \sqrt{N}\left(\hat X^{D,\eta} - X^{NV,\eta}\right)$, (2.3) from Proposition 2.2 ensures that Theorem 3.2 is a consequence of the following proposition, dedicated to the stable convergence in law of $U^N$.

Proposition 3.5 Under the assumptions of Theorem 3.2:

$$U^N\overset{\text{stably}}{\underset{N\to+\infty}{\Longrightarrow}}V, \qquad (3.16)$$

where $V$ is the unique solution of (3.11).

Proof: We begin by describing the limiting process for $U^N = \sqrt{N}\left(X - \hat X^{D,\eta}\right)$. The differential of $U^N$ can be written as:

$$dU^N_t = \sqrt{N}\left(\left(b(X_t) - b\left(\hat X^{D,\eta}_t\right)\right)dt + \sum_{j=1}^{d}\left(\sigma^j(X_t) - \sigma^j\left(\hat X^{D,\eta}_t\right)\right)dW^j_t\right)$$
$$\qquad + \sqrt{N}\left(\left(b\left(\hat X^{D,\eta}_t\right) - b\left(\hat X^{D,\eta}_{\hat\tau_t}\right)\right)dt + \sum_{j=1}^{d}\left(\sigma^j\left(\hat X^{D,\eta}_t\right) - \sigma^j\left(\hat X^{D,\eta}_{\hat\tau_t}\right)\right)dW^j_t\right)$$
$$\qquad - \sqrt{N}\left(\sum_{j=1}^{d}\partial\sigma^j\sigma^j\left(\hat X^{D,\eta}_{\hat\tau_t}\right)\Delta W^j_t\,dW^j_t + \sum_{\eta_t m<\eta_t j}\partial\sigma^j\sigma^m\left(\hat X^{D,\eta}_{\hat\tau_t}\right)\left(\Delta W^m_t\,dW^j_t + \Delta W^j_t\,dW^m_t\right)\right). \qquad (3.17)$$

Then, the proof will go through several steps.

Step 1: linearisation of the two terms in the first line of the right-hand side of (3.17),

$$\sqrt{N}\left(\left(b(X_t) - b\left(\hat X^{D,\eta}_t\right)\right)dt + \sum_{j=1}^{d}\left(\sigma^j(X_t) - \sigma^j\left(\hat X^{D,\eta}_t\right)\right)dW^j_t\right). \qquad (3.18)$$

Let $j\in\{1,\dots,d\}$ and $i\in\{1,\dots,n\}$. By the mean value theorem, we get:

$$\sigma^{ij}(X_t) - \sigma^{ij}\left(\hat X^{D,\eta}_t\right) = \nabla\sigma^{ij}\left(\xi^{ij}_t\right)\cdot\left(X_t - \hat X^{D,\eta}_t\right), \qquad (3.19)$$

where $\xi^{ij}_t = \alpha^{ij}_t X_t + \left(1-\alpha^{ij}_t\right)\hat X^{D,\eta}_t$ for some $\alpha^{ij}_t\in[0,1]$. Using a compact matrix notation, we can write:

$$\sigma^j(X_t) - \sigma^j\left(\hat X^{D,\eta}_t\right) = \partial\sigma^{j,N}_t\left(X_t - \hat X^{D,\eta}_t\right), \qquad (3.20)$$

where

$$\partial\sigma^{j,N}_t = \left(\partial_{x_m}\sigma^{ij}\left(\xi^{ij}_t\right)\right)_{i,m}. \qquad (3.21)$$

Then, we obtain

$$\sqrt{N}\sum_{j=1}^{d}\left(\sigma^j(X_t) - \sigma^j\left(\hat X^{D,\eta}_t\right)\right)dW^j_t = \sum_{j=1}^{d}\partial\sigma^{j,N}_t U^N_t\,dW^j_t. \qquad (3.22)$$

In the same way:

$$\sqrt{N}\left(b(X_t) - b\left(\hat X^{D,\eta}_t\right)\right)dt = \partial b^N_t U^N_t\,dt, \qquad (3.23)$$

where

$$\partial b^N_t = \left(\partial_{x_m}b^i\left(\xi^{i0}_t\right)\right)_{i,m} \qquad (3.24)$$

and $\xi^{i0}_t = \alpha^{i0}_t X_t + \left(1-\alpha^{i0}_t\right)\hat X^{D,\eta}_t$ for some $\alpha^{i0}_t\in[0,1]$.

Step 2: decomposition of $U^N$. Writing the fourth term in the right-hand side of (3.17), $\sigma^j\left(\hat X^{D,\eta}_t\right) - \sigma^j\left(\hat X^{D,\eta}_{\hat\tau_t}\right)$, as the sum of the dominant contribution

$$\sum_{m=1}^{d}\partial\sigma^j\sigma^m\left(\hat X^{D,\eta}_{\hat\tau_t}\right)\Delta W^m_t$$
