arXiv:1701.07491v1 [math.OC] 25 Jan 2017

On Lipschitz continuous optimal stopping boundaries

Tiziano De Angelis* & Gabriele Stabile†

January 27, 2017

Abstract. We obtain a probabilistic proof of the local Lipschitz continuity for the optimal stopping boundary of a class of problems with state space [0,T] × R^d, d ≥ 1. To the best of our knowledge this is the only existing proof that relies exclusively upon stochastic calculus, all the other proofs making use of PDE techniques and integral equations. Thanks to our approach we obtain our result for a class of diffusions whose associated second order differential operator is not necessarily uniformly elliptic. The latter condition is normally assumed in the related PDE literature.

Keywords: optimal stopping, free boundary problems, Lipschitz free boundaries.

MSC2010 subject classification: 60G40, 35R35.

1 Introduction

In this work we deal with optimal stopping problems of the form

    v(t,x) = \sup_{0 \le \tau \le T-t} E\Big[ \int_0^{\tau} h(t+s, X^x_s)\,ds + 1_{\{\tau < T-t\}}\, f(t+\tau, X^x_{\tau}) + 1_{\{\tau = T-t\}}\, g(X^x_{\tau}) \Big]        (1.1)

where E denotes the expectation operator. For d ≥ 1, given a suitable R^d-valued function μ and a d×d matrix σ, the process X ∈ R^d follows the dynamics

    X^x_t = x + \int_0^t \mu(X^x_s)\,ds + \sigma B_t,        t ≥ 0,

with B an R^d-valued Brownian motion. The main focus of our study is the analysis of the regularity of the optimal stopping boundary, i.e. the boundary of the set in [0,T) × R^d where v = f.

Under mild assumptions on μ, f, g and h we provide a probabilistic representation of the gradient of v. The latter is used, along with more technical requirements on f, g and h, to prove that the optimal stopping boundary may be expressed in terms of a locally Lipschitz continuous function b : [0,T] × R^{d-1} → R. One of the main features of our work is that we do not assume uniform non-degeneracy of the diffusion, so that standard results based on PDE theory cannot be easily applied.
It is well known that optimal stopping theory goes hand in hand with the theory of free boundary problems in PDE, and the question of regularity of optimal stopping boundaries (free boundaries) has been the object of intensive study. The one-dimensional case d = 1 attracted the interest of several mathematicians who developed approaches ranging from probability to analysis. Early contributions to the topic were made in [19], [22] and [33], among others. In [19] and [33] it was proven that the free boundary b is differentiable in the open interval (0,T) for a certain class of problems involving one-dimensional Brownian motion or solutions of one-dimensional SDEs with regular coefficients. Other papers employing PDE methods are, for example, [6] and [28], where infinite differentiability of the free boundary in the Stefan problem is proved, and [15], where C^1 regularity of the boundary is obtained for a certain class of variational problems. The study of the optimal boundary of the American put option is perhaps one of the most renowned examples in this field, and for an overview of existing results one may refer to [2], [7], [8], [13], [16], [18], [20] and [25], among others. Finally, it is worth recalling that a thorough discussion of analytical methods for free boundary problems on [0,T] × R related to the heat operator may be found in the monograph [5] (see also [14, Ch. 8]). In the latter, as well as in several of the above references, the first step in the analysis of the regularity of the free boundary is to prove that it is Lipschitz continuous, or at least Hölder continuous with exponent α > 1/2.

* School of Mathematics, University of Leeds, Woodhouse Lane LS2 9JT, Leeds, United Kingdom. [email protected]
† Dipartimento di Metodi e Modelli per l'Economia, il Territorio e la Finanza, Sapienza University of Rome, Via del Castro Laurenziano 9, 00161, Rome, Italy. [email protected]
There is also a large body of literature addressing similar questions in higher dimensions. Accounting in full for these results is a difficult task and falls outside the reach of our work. However, for our purposes it is interesting to recall the following fact: Lipschitz regularity for the free boundary of certain Stefan problems (with d ≥ 1) can be upgraded to C^{1,α} regularity for some α ∈ (0,1), and eventually to C^∞ regularity, under suitable technical conditions. Detailed derivations of this informal statement may be found in the monographs [4] and [23] and references therein (see also [21] for the study of American options written on several assets and with convex payoff). In the literature on optimal stopping, the vast majority of papers studying problems of the form (1.1) with d = 1 addresses the question of continuity of the boundary without looking at higher regularity (of course with the exception of the works mentioned above; see [10] for some results and further references). Moreover, even the question of continuity becomes difficult to handle for d > 1, and there seem to be very few works in this setting (for d = 2 and T = ∞ one may refer, for example, to [12] and [26]). Notably, Shreve and Soner [30, 32] address a problem of singular stochastic control which is equivalent to one of optimal stopping of the form (1.1), and characterise the optimal boundary as a real-valued, Lipschitz continuous function on [0,T] × R^{d-1}, d ≥ 1. In their work they employ the equivalence between the problem of singular control and the one of optimal stopping, and study the latter purely by means of PDE methods similar to those in [1]. Regularity of the free boundary is used in [30, 32] to obtain a classical solution to a variational problem with gradient constraint related to the singular control problem. It is worth mentioning that the same authors had previously shown C^2 regularity for the optimal boundary of a two-dimensional singular control problem on an infinite time horizon [31].
However, in the setting of [31] we are not aware of any direct link to an optimal stopping problem, and therefore it is harder to draw a parallel with our work. From the above discussion we learn that a reasonable attempt towards the study of regularity for optimal boundaries in optimal stopping theory should start from establishing their Lipschitz continuity. Of course this can be achieved in several instances by the PDE methods illustrated in the references above, but we aim instead at finding a fully probabilistic approach. Under assumptions similar to those adopted in [30, 32], our work not only serves the purpose of bridging the PDE literature and the probabilistic one, but it also contributes new results. One of our main contributions is to prove that for d > 1 local Lipschitz continuity of the optimal boundary can be obtained without requiring uniform ellipticity of the operator σσ^⊤ (see Theorems 4.9 and 4.10, and Example 2 in Section 5). Relaxing this requirement makes it difficult to apply standard PDE results (including [4] and [23]), and the methods used in [30, 32] are no longer valid. In the special case d = 1 (see Theorem 4.3) we are able to localize the assumptions made in [30, 32], and in particular the one relative to the running cost, i.e. our function h. Such relaxation allows us to apply our results to a wider class of examples than the one previously covered. For instance, we can apply them in problems of irreversible capacity expansion where the running profit is expressed by a Cobb-Douglas-type production function (see, e.g., [9] and Example 1 in Section 5). A more detailed comparison between our setting and the one in [30, 32] is provided in Remark 4.11. We also notice that our functional (1.1) allows a rather generic time-space dependence of the functions f, g and h, while at the same time the dynamics of X allows state-dependent drifts and correlations between the driving noises (i.e. σ is not necessarily diagonal).
For d = 1 a generic time dependence of f and h makes it extremely hard, and often impossible, to establish monotonicity of the optimal boundary as a function of time. The latter is normally a key feature in the study of the boundary's continuity. One advantage of our approach is that we do not need such monotonicity to establish Lipschitz continuity. Moreover, if the boundary is Lipschitz then v ∈ C^1([0,T) × R) (see Remark 4.4). Our method consists of two main steps which we can formally summarise as follows. In the first step we find a probabilistic representation of the time/space derivatives of the value function. The latter is then used in the second step, along with the implicit function theorem, to obtain bounds on the gradient of the optimal boundary. Notice that, while the second step is somewhat in line with ideas in [32], the first step is entirely new. It is important to remark that, despite the technical assumptions that we make, one of the main contributions of our work is the methodology. As is often the case in optimal stopping and free boundary problems, in order to give general results one has to impose fairly strong conditions on the problem data. However, when considering specific examples it is possible to find ways around the technicalities and still apply the same methods. This is indeed true also for the theory that we develop here, and in Section 5 we provide some examples of such extensions. The rest of the paper is organised as follows. In Section 2 we provide a rigorous formulation of the problem outlined in (1.1), along with the standing assumptions. In Section 3 we obtain a probabilistic representation formula for the gradient ∇_x v and for the bounds of the time derivative ∂_t v (see Theorem 3.1). Some other technical estimates are performed before passing to Section 4, where we finally give our main results regarding the existence of a locally Lipschitz continuous optimal boundary for problem (1.1).
This result is given under three different sets of assumptions: in Theorem 4.3 for d = 1, and in Theorems 4.9 and 4.10 for d ≥ 2. In Section 5 we show some applications of our results and their extensions in specific examples.

2 Setup and problem formulation

Consider a complete probability space (Ω, F, P) equipped with the natural filtration F := (F_t)_{t≥0} generated by an R^d-valued Brownian motion (B_t)_{t≥0}. Assume that F is completed with P-null sets and let X ∈ R^d evolve according to

    X^x_t = x + \int_0^t \mu(X^x_s)\,ds + \sigma B_t,        t ≥ 0,        (2.1)

where μ ∈ C^1(R^d; R^d) with sub-linear growth and σ is a d×d matrix. We denote by ⟨·,·⟩ the scalar product in R^d and by ‖·‖_d the Euclidean norm in R^d. Notice that σσ^⊤ is assumed to be non-negative but not necessarily uniformly elliptic. This means that there may exist ξ ∈ R^d such that ⟨σσ^⊤ξ, ξ⟩ = 0.

Throughout the paper we will often use P_{t,x}(·) = P(·|X_t = x) and P_x = P_{0,x}, so that E_{t,x} f(X_s) = E f(X^{t,x}_s), s ≥ t, for any function f which is Borel-measurable and integrable. With no loss of generality we will assume Ω = C([0,T]; R^d), so that t ↦ ω(t) is the canonical process and θ_· is the shifting operator such that θ_s ω(t) = ω(t+s).

For T ∈ (0,+∞) we consider optimal stopping problems of the form

    v(t,x) = \sup_{0 \le \tau \le T-t} E\Big[ \int_0^{\tau} h(t+s, X^x_s)\,ds + 1_{\{\tau < T-t\}}\, f(t+\tau, X^x_{\tau}) + 1_{\{\tau = T-t\}}\, g(X^x_{\tau}) \Big]        (2.2)

where f, g and h are real-valued with f ∈ C^{1,2}([0,T]×R^d), h ∈ C^{1,1}([0,T]×R^d) and g ∈ C^2(R^d). In the infinite horizon case, i.e. T = +∞, we consider

    v(t,x) = \sup_{\tau \ge 0} E\Big[ \int_0^{\tau} h(t+s, X^x_s)\,ds + f(t+\tau, X^x_{\tau}) \Big]        (2.3)

with f and h as above and, according to [29, Ch. 3], we set

    1_{\{\tau = +\infty\}}\, f(t+\tau, X^x_{\tau}) := \limsup_{s \to \infty} f(s, X^x_s),        P-a.s.

In what follows, conditions at T for the terminal value g(X_T) are understood to hold only for T < +∞ and can always be neglected for T = +∞. From now on we assume that for all (t,x) ∈ [0,T]×R^d it holds

    E\Big[ \int_0^{T-t} |h(t+s, X^x_s)|\,ds + |g(X^x_{T-t})| + \sup_{0 \le s \le T-t} |f(t+s, X^x_s)| \Big] < +\infty.        (2.4)
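The objects in (2.1) and (2.2) are easy to explore numerically. The sketch below is our own illustration, not part of the paper: the drift, volatility, payoffs, cost level and threshold rule are all hypothetical choices, and the time dependence of h and f is dropped for brevity. It simulates (2.1) with an Euler scheme and evaluates the functional in (2.2) for one fixed threshold stopping rule, which by definition of the supremum yields a lower bound on v(t,x).

```python
import numpy as np

def euler_paths(x0, mu, sigma, T, n_steps, n_paths, rng):
    """Euler scheme for dX = mu(X) dt + sigma dB, as in (2.1), with d = 1."""
    dt = T / n_steps
    X = np.empty((n_paths, n_steps + 1))
    X[:, 0] = x0
    for i in range(n_steps):
        dB = rng.normal(0.0, np.sqrt(dt), n_paths)
        X[:, i + 1] = X[:, i] + mu(X[:, i]) * dt + sigma * dB
    return X

def payoff_lower_bound(X, dt, h, f, g, stop_level):
    """Monte Carlo value of the functional in (2.2) for the (suboptimal)
    rule 'stop when X first drops below stop_level': a lower bound on v."""
    n_paths, n1 = X.shape
    T_idx = n1 - 1
    total = 0.0
    for path in X:
        hit = np.where(path <= stop_level)[0]
        k = hit[0] if hit.size else T_idx       # stopping index, tau = k*dt
        running = h(path[:k]).sum() * dt        # approximates int_0^tau h ds
        terminal = f(path[k]) if k < T_idx else g(path[k])
        total += running + terminal
    return total / n_paths

rng = np.random.default_rng(0)
# Hypothetical data: Ornstein-Uhlenbeck drift, constant running cost h = -0.1,
# put-type payoff f = g = (1 - x)^+, stopping rule 'stop when X <= 0'.
X = euler_paths(x0=1.0, mu=lambda x: -x, sigma=1.0, T=1.0,
                n_steps=200, n_paths=2000, rng=rng)
lb = payoff_lower_bound(X, dt=1.0 / 200,
                        h=lambda x: -0.1 * np.ones_like(x),
                        f=lambda x: max(1.0 - x, 0.0),
                        g=lambda x: max(1.0 - x, 0.0),
                        stop_level=0.0)
print(lb)
```

Replacing the threshold rule by better rules and maximising over them is the content of the supremum in (2.2); this sketch only evaluates one admissible τ.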
Moreover, if T = +∞ then we also assume

    1_{\{\tau = +\infty\}}\, f(t+\tau, X^x_{\tau}) = 0,        P-a.s.        (2.5)

Both assumptions are fulfilled in the examples of Section 5.

Remark 2.1. Notice that the dynamics (2.1) and the optimisation problem (2.2) are general enough to include, for example, models involving geometric Brownian motion and Ornstein-Uhlenbeck processes.

To avoid further technicalities we also assume that v is a lower semi-continuous function. Often such regularity (or even continuity) is easy to check in specific examples (e.g., those in Section 5). There also exist mild sufficient conditions that guarantee lower semi-continuity of v in more general settings (see for instance [29, Ch. 3]; see also Remark 2.10 and eq. (2.2.80) in [27, Ch. I, Sec. 2]).

The continuation set C and the stopping set S are given by

    C := {(t,x) ∈ [0,T)×R^d : v(t,x) > f(t,x)}        (2.6)
    S := {(t,x) ∈ [0,T)×R^d : v(t,x) = f(t,x)} ∪ ({T}×R^d).        (2.7)

From standard optimal stopping theory we know that, in our setting, (2.4) and lower semi-continuity of v are sufficient for the optimality of

    \tau_*(t,x) = \inf\{ s ∈ [0, T-t] : (t+s, X^x_s) ∈ S \}        (2.8)

provided that f(T,x) ≤ g(x), if T < +∞ (see [27, Ch. I, Sec. 2, Cor. 2.9]). For the infinite horizon case notice that if P_{t,x}(τ_* < +∞) < 1, then there is no optimal stopping time and τ_* is an (optimal) Markov time (according to the terminology in [29, Ch. 3, Thm. 3]). However, the methods used in the next sections work for both finite and infinite values of τ_* thanks to (2.5).

For arbitrary (t,x) ∈ [0,T]×R^d let

    Y_s := v(t+s, X^x_s) + \int_0^s h(t+u, X^x_u)\,du.

Since v is lower semi-continuous, using standard results in optimal stopping (see [27, Ch. I, Sec. 2, Thm. 2.4]) we have that (Y_s)_{0≤s≤T-t} is P-a.s. right-continuous and

    (Y_s) is a supermartingale for s ∈ [0, T-t],        (2.9)
    (Y_{s∧τ_*}) is a martingale for s ∈ [0, T-t].        (2.10)

Notice in particular that since Y is right-continuous, the process s ↦ v(t+s, X^x_s) is P-a.s. right-continuous as well.
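For intuition about C, S and τ_* in (2.6)-(2.8), one can compute v by dynamic programming in a toy example. The sketch below is our own illustration with hypothetical data: d = 1, μ = 0, σ = 1, f = g = (K - x)^+ and a constant negative running cost h ≡ -c, so that waiting is costly and early stopping can be optimal. Backward induction on a binomial lattice approximates (2.2), and the stopping set is read off as {v = f}.

```python
import numpy as np

# Toy data (our own hypothetical choices, for illustration only).
K, c, T, N = 1.0, 0.5, 1.0, 100           # strike, waiting cost, horizon, steps
dt = T / N
dx = np.sqrt(dt)                          # binomial step: X moves by +-dx per dt

x = np.arange(-N, N + 1) * dx             # spatial lattice
payoff = np.maximum(K - x, 0.0)           # f = g = (K - x)^+

v = payoff.copy()                         # terminal condition v(T, .) = g
in_S = np.zeros((N, x.size), dtype=bool)  # stopping-set indicator per time layer

# Backward induction for (2.2): v(t,x) = max( f(x), -c*dt + E[v(t+dt, x +- dx)] ).
for i in range(N - 1, -1, -1):
    cont = np.empty_like(v)
    cont[1:-1] = -c * dt + 0.5 * (v[2:] + v[:-2])
    cont[0], cont[-1] = payoff[0], payoff[-1]   # crude patch at lattice edges
    v = np.maximum(payoff, cont)
    in_S[i] = v <= payoff + 1e-12         # where v = f, i.e. the set S of (2.7)

# tau_* of (2.8) is the first time layer at which the path's node lies in S;
# the point x = 0 is deep in the money, so it belongs to S already at t = 0.
print("x = 0 in S at t = 0:", bool(in_S[0, N]))
```

On this lattice one can see a boundary near the strike separating the continuation region C (around the kink of the payoff, where waiting has value) from the stopping region S; v ≥ f holds everywhere by construction.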
We denote by L the infinitesimal generator associated to X; in particular we have

    LF(x) = \frac{1}{2} \sum_{i,j=1}^d (\sigma\sigma^\top)_{i,j} \frac{\partial^2 F}{\partial x_i \partial x_j}(x) + \sum_{i=1}^d \mu_i(x) \frac{\partial F}{\partial x_i}(x),        F ∈ C^2(R^d; R).        (2.11)

For future frequent use we also introduce the following notation

    m(t,x) := (\partial_t f + Lf)(t,x)   and   n(x) := Lg(x).        (2.12)

Since μ ∈ C^1(R^d; R^d), the flow x ↦ X^x is differentiable ([24], Chapter V.7). Here we denote the initial point in (2.1) by x = (x_1, ..., x_d), the i-th component of X^x by X^{x,i}, the partial derivative with respect to x_k by ∂_k = ∂/∂x_k, and the derivative of X^x with respect to the initial point x by ∂_k X^x = (∂_k X^{x,1}, ..., ∂_k X^{x,d}). We define the process ∂X^x as a d×d matrix with entries ∂_k X^{x,j} for j,k = 1,...,d; the maps t ↦ ∂_k X^{x,j}_t are P-a.s. continuous with dynamics given by

    \partial_k X^{x,j}_t = \delta_{j,k} + \int_0^t \sum_{l=1}^d \partial_l \mu_j(X^x_s)\, \partial_k X^{x,l}_s\, ds = \delta_{j,k} + \int_0^t \langle \nabla_x \mu_j(X^x_s), \partial_k X^x_s \rangle\, ds.        (2.13)

In what follows we also assume that for any compact U ⊂ R^d it holds

    \sup_{x \in U} E\Big[ \sup_{0 \le t \le T} \|\partial_k X^x_t\|_d^2 \Big] < +\infty        for all k = 1,...,d.        (2.14)

The next will be a standing assumption throughout the paper.

Assumption 2.2 (Regularity of f, g, h). For each (t,x) ∈ [0,T]×R^d we have

    E\Big[ \int_0^{T-t} \|\nabla_x h(t+s, X^x_s)\|_d^2\, ds + \sup_{0 \le s \le T-t} \|\nabla_x f(t+s, X^x_s)\|_d^2 + \|\nabla_x g(X^x_{T-t})\|_d^2 \Big] < +\infty,

    E\Big[ \int_0^{T-t} |\partial_t h(t+s, X^x_s)|^2\, ds + \sup_{0 \le s \le T-t} |\partial_t f(t+s, X^x_s)|^2 + |h(T, X^x_{T-t}) + n(X^x_{T-t})|^2 \Big] < +\infty.

Moreover the bounds are uniform over compact subsets of [0,T]×R^d.

3 Properties of the value function

In this section we provide useful bounds for the gradient of the value function v and some other technical results. These are obtained by making repeated use of the following condition.

(A) Terminal value. If T < +∞ we have g(x) ≥ f(T,x) and ∂_1 g(x) ≥ ∂_1 f(T,x).
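Equation (2.13) says that ∂X^x solves a linear pathwise ODE driven by ∇_x μ along the trajectory of X^x. A small numerical check (ours, not from the paper, using a hypothetical Ornstein-Uhlenbeck drift μ(x) = -x with d = 1): there ∂_x μ ≡ -1, so (2.13) gives ∂X^x_t = e^{-t} deterministically, against which an Euler discretisation can be compared.

```python
import numpy as np

def flow_derivative(x0, mu, dmu, sigma, T, n_steps, rng):
    """Jointly Euler-discretise dX = mu(X) dt + sigma dB (eq. (2.1)) and the
    first-variation equation d(DX) = mu'(X) DX dt, DX_0 = 1 (eq. (2.13), d = 1)."""
    dt = T / n_steps
    X, DX = x0, 1.0
    for _ in range(n_steps):
        dB = rng.normal(0.0, np.sqrt(dt))
        # Both updates use the pre-step value of X, as in the Euler scheme.
        X, DX = X + mu(X) * dt + sigma * dB, DX + dmu(X) * DX * dt
    return X, DX

rng = np.random.default_rng(1)
# Ornstein-Uhlenbeck drift: mu(x) = -x, mu' = -1, hence DX_t = exp(-t) exactly.
_, DX = flow_derivative(x0=0.5, mu=lambda x: -x, dmu=lambda x: -1.0,
                        sigma=1.0, T=1.0, n_steps=2000, rng=rng)
print(DX, np.exp(-1.0))
```

Note that DX is deterministic here only because ∇_x μ is constant; for a general drift (2.13) depends on the whole random path of X^x.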
Before stating the next theorem it is useful to introduce the functions

    \bar{v}(t,x) = E\Big[ \int_0^{\tau_*} \partial_t h(t+s, X^x_s)\, ds + 1_{\{\tau_* < T-t\}}\, \partial_t f(t+\tau_*, X^x_{\tau_*}) - 1_{\{\tau_* = T-t\}} \big( h(T, X^x_{T-t}) + n(X^x_{T-t}) \big) \Big]        (3.1)

and

    \underline{v}(t,x) = E\Big[ \int_0^{\tau_*} \partial_t h(t+s, X^x_s)\, ds + 1_{\{\tau_* < T-t\}}\, \partial_t f(t+\tau_*, X^x_{\tau_*}) - 1_{\{\tau_* = T-t\}} \big( |h(T, X^x_{T-t}) + n(X^x_{T-t})| + |\partial_t f(T, X^x_{T-t})| \big) \Big].        (3.2)

Theorem 3.1. Assume condition (A). Then the value function v is locally Lipschitz continuous on [0,T]×R^d and for a.e. (t,x) we have

    \partial_k v(t,x) = E\Big[ \int_0^{\tau_*} \langle \nabla_x h(t+s, X^x_s), \partial_k X^x_s \rangle\, ds + 1_{\{\tau_* < T-t\}} \langle \nabla_x f(t+\tau_*, X^x_{\tau_*}), \partial_k X^x_{\tau_*} \rangle + 1_{\{\tau_* = T-t\}} \langle \nabla_x g(X^x_{T-t}), \partial_k X^x_{T-t} \rangle \Big]        (3.3)

and

    \underline{v}(t,x) \le \partial_t v(t,x) \le \bar{v}(t,x).        (3.4)

Proof. Step 1. (Spatial derivative). First we show that v(t,·) is locally Lipschitz and that (3.3) holds. Fix (t,x) ∈ [0,T]×R^d and take ε > 0. For an arbitrary k we write for simplicity x_ε = (x_1, ..., x_k + ε, ..., x_d), and consider the processes X^{x_ε} = (X^{x_ε,1}, ..., X^{x_ε,d}) and X^x = (X^{x,1}, ..., X^{x,d}). We remark that all components of the vector process X^{x_ε} are affected by the shift in the initial point.

We denote by τ = τ_*(t,x) the optimal stopping time (independent of ε) for the problem with value function v(t,x). Using such optimality we first obtain

    v(t,x_ε) - v(t,x) \ge E\Big[ \int_0^{\tau} \big( h(t+s, X^{x_ε}_s) - h(t+s, X^x_s) \big)\, ds + 1_{\{\tau < T-t\}} \big( f(t+\tau, X^{x_ε}_{\tau}) - f(t+\tau, X^x_{\tau}) \big) \Big] + E\Big[ 1_{\{\tau = T-t\}} \big( g(X^{x_ε}_{T-t}) - g(X^x_{T-t}) \big) \Big].

Dividing both sides of the above expression by ε, and recalling Assumption 2.2 and (2.14), we can pass to the limit as ε → 0 and use dominated convergence to conclude that

    \liminf_{ε \to 0} \frac{v(t,x_ε) - v(t,x)}{ε} \ge E\Big[ \int_0^{\tau} \langle \nabla_x h(t+s, X^x_s), \partial_k X^x_s \rangle\, ds + 1_{\{\tau < T-t\}} \langle \nabla_x f(t+\tau, X^x_{\tau}), \partial_k X^x_{\tau} \rangle \Big] + E\Big[ 1_{\{\tau = T-t\}} \langle \nabla_x g(X^x_{T-t}), \partial_k X^x_{T-t} \rangle \Big].        (3.5)

Now let τ_ε = τ_*(t,x_ε) be optimal for the problem with value function v(t,x_ε).
By analogous arguments to those above, and using Assumption 2.2 and (2.14), we also find

    v(t,x_ε) - v(t,x) \le E\Big[ \int_0^{\tau_ε} \big( h(t+s, X^{x_ε}_s) - h(t+s, X^x_s) \big)\, ds + 1_{\{\tau_ε < T-t\}} \big( f(t+\tau_ε, X^{x_ε}_{\tau_ε}) - f(t+\tau_ε, X^x_{\tau_ε}) \big) \Big] + E\Big[ 1_{\{\tau_ε = T-t\}} \big( g(X^{x_ε}_{T-t}) - g(X^x_{T-t}) \big) \Big] \le c(x)\, ε,

for some c(x) > 0 depending only on x. Notice that for the last inequality we have used

    \| X^{x_ε}_{\tau_ε} - X^x_{\tau_ε} \|_d \le ε \cdot \sum_k \sup_{0 \le s \le T} \| \partial_k X^z_s \|_d

by the mean value theorem, with suitable z ∈ R^d such that ‖z - x‖_d ≤ ε.

Since the same argument may be repeated for any direction x_k, we have that v(t,·) is locally Lipschitz. For a.e. x ∈ R^d one has that ∇_x v(t,x) exists, and (3.5) provides a lower bound for ∂_k v. Now we want to show that the upper bound for ∂_k v is the same, so that (3.3) holds.

Let x be a point of differentiability of v(t,·), pick δ > 0 and denote x_δ = (x_1, ..., x_k - δ, ..., x_d) and X^{x_δ} = (X^{x_δ,1}, ..., X^{x_δ,d}). Since τ is optimal in v(t,x) and sub-optimal in v(t,x_δ), we have

    v(t,x) - v(t,x_δ) \le E\Big[ \int_0^{\tau} \big( h(t+s, X^x_s) - h(t+s, X^{x_δ}_s) \big)\, ds + 1_{\{\tau < T-t\}} \big( f(t+\tau, X^x_{\tau}) - f(t+\tau, X^{x_δ}_{\tau}) \big) \Big] + E\Big[ 1_{\{\tau = T-t\}} \big( g(X^x_{T-t}) - g(X^{x_δ}_{T-t}) \big) \Big].

Dividing both sides by δ, taking limits and using dominated convergence again we obtain

    \limsup_{δ \to 0} \frac{v(t,x) - v(t,x_δ)}{δ} \le E\Big[ \int_0^{\tau} \langle \nabla_x h(t+s, X^x_s), \partial_k X^x_s \rangle\, ds + 1_{\{\tau < T-t\}} \langle \nabla_x f(t+\tau, X^x_{\tau}), \partial_k X^x_{\tau} \rangle \Big] + E\Big[ 1_{\{\tau = T-t\}} \langle \nabla_x g(X^x_{T-t}), \partial_k X^x_{T-t} \rangle \Big],        (3.6)

which together with (3.5) implies (3.3), because ∂_k v(t,x) exists by assumption.

Step 2. (Time derivative). Next we show that t ↦ v(t,x) is locally Lipschitz and that (3.4) holds. We start by providing bounds for the left and right derivatives of v(·,x). Fix (t,x) ∈ [0,T]×R^d and let ε > 0. Then, letting τ = τ_*(t,x) be optimal for the problem with value function v(t,x), we notice that τ is admissible for the problem with value function v(t-ε,x). Using (2.9) and (2.10) we obtain the following upper bound.
    v(t,x) - v(t-ε,x) \le E\Big[ \int_0^{\tau} \big( h(t+s, X^x_s) - h(t-ε+s, X^x_s) \big)\, ds + v(t+\tau, X^x_{\tau}) - v(t-ε+\tau, X^x_{\tau}) \Big].        (3.7)

Now we notice that since v ≥ f on [0,T]×R^d and v = f in S, by right-continuity of v(t+·, X^x_·) one has

    v(t+\tau, X^x_{\tau}) - v(t-ε+\tau, X^x_{\tau}) \le f(t+\tau, X^x_{\tau}) - f(t-ε+\tau, X^x_{\tau})   on {τ < T-t},
    v(t+\tau, X^x_{\tau}) - v(t-ε+\tau, X^x_{\tau}) \le g(X^x_{T-t}) - v(T-ε, X^x_{T-t})   on {τ = T-t}.

Moreover, from (2.2) we also have

    v(T-ε, X^x_{T-t}) \ge E_{X^x_{T-t}}\Big[ \int_0^{ε} h(T-ε+s, X_s)\, ds + g(X_ε) \Big] = g(X^x_{T-t}) + E_{X^x_{T-t}}\Big[ \int_0^{ε} \big( h(T-ε+s, X_s) + n(X_s) \big)\, ds \Big],

where the equality follows from Dynkin's formula applied to g, recalling n = Lg from (2.12). Collecting the above estimates and using the mean value theorem we conclude

    \frac{1}{ε} \big( v(t,x) - v(t-ε,x) \big) \le E\Big[ \int_0^{\tau} \partial_t h(t-ε'_s+s, X^x_s)\, ds + 1_{\{\tau < T-t\}}\, \partial_t f(t-ε''+\tau, X^x_{\tau}) \Big] - E\Big[ 1_{\{\tau = T-t\}}\, E_{X^x_{T-t}}\Big[ \frac{1}{ε} \int_0^{ε} \big( h(T-ε+s, X_s) + n(X_s) \big)\, ds \Big] \Big]        (3.8)

for suitable ε'_s and ε'' in [0,ε]. Letting ε → 0 we get

    \limsup_{ε \to 0} \frac{v(t,x) - v(t-ε,x)}{ε} \le E\Big[ \int_0^{\tau} \partial_t h(t+s, X^x_s)\, ds + 1_{\{\tau < T-t\}}\, \partial_t f(t+\tau, X^x_{\tau}) \Big] - E\Big[ 1_{\{\tau = T-t\}} \big( h(T, X^x_{T-t}) + n(X^x_{T-t}) \big) \Big].        (3.9)

To prove a reverse inequality we notice that τ ∧ (T-t-ε) is admissible for the problem with value v(t+ε,x), so that by using (2.9) and (2.10) and arguing as above we obtain

    v(t+ε,x) - v(t,x) \ge E\Big[ \int_0^{\tau \wedge (T-t-ε)} \big( h(t+ε+s, X^x_s) - h(t+s, X^x_s) \big)\, ds - 1_{\{\tau > T-t-ε\}} \int_{T-t-ε}^{\tau} h(t+s, X^x_s)\, ds \Big] + E\Big[ 1_{\{\tau \le T-t-ε\}} \big( f(t+ε+\tau, X^x_{\tau}) - f(t+\tau, X^x_{\tau}) \big) \Big] + E\Big[ 1_{\{\tau > T-t-ε\}} \big( g(X^x_{T-t-ε}) - v(t+\tau, X^x_{\tau}) \big) \Big].        (3.10)

We can collect the two terms with the indicator of {τ > T-t-ε}, use iterated conditioning and the martingale property (2.10) to get

    E\Big[ 1_{\{\tau > T-t-ε\}} \Big( g(X^x_{T-t-ε}) - v(t+\tau, X^x_{\tau}) - \int_{T-t-ε}^{\tau} h(t+s, X^x_s)\, ds \Big) \Big]
    = E\Big[ 1_{\{\tau > T-t-ε\}} \Big( g(X^x_{T-t-ε}) - E\Big[ v(t+\tau, X^x_{\tau}) + \int_{T-t-ε}^{\tau} h(t+s, X^x_s)\, ds \,\Big|\, F_{T-t-ε} \Big] \Big) \Big]
    = E\Big[ 1_{\{\tau > T-t-ε\}} \big( g(X^x_{T-t-ε}) - v(T-ε, X^x_{T-t-ε}) \big) \Big].
To estimate the last term we argue as follows:

    v(T-ε, X^x_{T-t-ε}) = \esssup_{0 \le \sigma \le ε} E_{X^x_{T-t-ε}}\Big[ \int_0^{\sigma} h(T-ε+s, X_s)\, ds + 1_{\{\sigma < ε\}}\, f(T-ε+\sigma, X_{\sigma}) + 1_{\{\sigma = ε\}}\, g(X_ε) \Big]
    = \esssup_{0 \le \sigma \le ε} E_{X^x_{T-t-ε}}\Big[ \int_0^{\sigma} h(T-ε+s, X_s)\, ds + g(X_{\sigma}) + 1_{\{\sigma < ε\}} \big( f(T-ε+\sigma, X_{\sigma}) - g(X_{\sigma}) \big) \Big]
    = g(X^x_{T-t-ε}) + \esssup_{0 \le \sigma \le ε} E_{X^x_{T-t-ε}}\Big[ \int_0^{\sigma} \big( h(T-ε+s, X_s) + n(X_s) \big)\, ds + 1_{\{\sigma < ε\}} \big( f(T, X_{\sigma}) - g(X_{\sigma}) \big) - 1_{\{\sigma < ε\}} \int_{T-ε+\sigma}^{T} \partial_t f(u, X_{\sigma})\, du \Big].

Using that f(T,x) ≤ g(x) by condition (A), we get

    v(T-ε, X^x_{T-t-ε}) \le g(X^x_{T-t-ε}) + \esssup_{0 \le \sigma \le ε} E_{X^x_{T-t-ε}}\Big[ \int_0^{\sigma} \big( h(T-ε+s, X_s) + n(X_s) \big)\, ds + \int_{T-ε+\sigma}^{T} |\partial_t f(u, X_{\sigma})|\, du \Big]        (3.11)
    \le g(X^x_{T-t-ε}) + E_{X^x_{T-t-ε}}\Big[ \int_0^{ε} \big( |h(T-ε+s, X_s) + n(X_s)| + |\partial_t f(T-ε+s, X_{\sigma})| \big)\, ds \Big].

Plugging the estimates above into (3.10) we then obtain

    \frac{1}{ε} \big( v(t+ε,x) - v(t,x) \big) \ge E\Big[ \int_0^{\tau \wedge (T-t-ε)} \partial_t h(t+ε'_s+s, X^x_s)\, ds + 1_{\{\tau \le T-t-ε\}}\, \partial_t f(t+ε''+\tau, X^x_{\tau}) \Big] - E\Big[ 1_{\{\tau > T-t-ε\}}\, E_{X^x_{T-t-ε}}\Big[ \frac{1}{ε} \int_0^{ε} \big( |h(T-ε+s, X_s) + n(X_s)| + |\partial_t f(T-ε+s, X_{\sigma})| \big)\, ds \Big] \Big]

for suitable ε'_s and ε'' in [0,ε]. Taking limits as ε → 0 we conclude

    \liminf_{ε \to 0} \frac{v(t+ε,x) - v(t,x)}{ε} \ge E\Big[ \int_0^{\tau} \partial_t h(t+s, X^x_s)\, ds + 1_{\{\tau < T-t\}}\, \partial_t f(t+\tau, X^x_{\tau}) \Big] - E\Big[ 1_{\{\tau = T-t\}} \big( |h(T, X^x_{T-t}) + n(X^x_{T-t})| + |\partial_t f(T, X^x_{T-t})| \big) \Big].        (3.12)

So far we have established a lower bound for the right-derivative and an upper bound for the left-derivative. Now, if we prove that v(·,x) is indeed locally Lipschitz, then (3.9) and (3.12) will imply (3.4). For the Lipschitz property we set τ_ε := τ_*(t-ε,x) and notice that τ_ε ∧ (T-t) is admissible for the problem with value function v(t,x). Therefore, arguing as in (3.10), we get

    v(t,x) - v(t-ε,x) \ge E\Big[ \int_0^{\tau_ε \wedge (T-t)} \big( h(t+s, X^x_s) - h(t-ε+s, X^x_s) \big)\, ds - 1_{\{\tau_ε > T-t\}} \int_{T-t}^{\tau_ε} h(t-ε+s, X^x_s)\, ds \Big] + E\Big[ 1_{\{\tau_ε \le T-t\}} \big( f(t+\tau_ε, X^x_{\tau_ε}) - f(t-ε+\tau_ε, X^x_{\tau_ε}) \big) \Big] + E\Big[ 1_{\{\tau_ε > T-t\}} \big( g(X^x_{T-t}) - v(t-ε+\tau_ε, X^x_{\tau_ε}) \big) \Big].        (3.13)
Repeating step by step the arguments that follow (3.10) we obtain

    \frac{1}{ε} \big( v(t,x) - v(t-ε,x) \big) \ge E\Big[ \int_0^{\tau_ε \wedge (T-t)} \partial_t h(t-ε'_s+s, X^x_s)\, ds + 1_{\{\tau_ε \le T-t\}}\, \partial_t f(t-ε''+\tau_ε, X^x_{\tau_ε}) \Big] - E\Big[ 1_{\{\tau_ε > T-t\}}\, E_{X^x_{T-t}}\Big[ \frac{1}{ε} \int_0^{ε} \big( |h(T-ε+s, X_s) + n(X_s)| + |\partial_t f(T-ε+s, X_{\sigma})| \big)\, ds \Big] \Big].

Using Assumption 2.2 and the above expression, it is clear that we can find c(x) > 0 depending only on x and such that v(t,x) - v(t-ε,x) ≥ -c(x)ε. The latter, together with (3.9), implies that |v(t,x) - v(t-ε,x)| ≤ ĉ(x)ε for some other ĉ(x) > 0 depending only on x. A symmetric argument can be used to obtain an analogous bound for |v(t+ε,x) - v(t,x)|, and therefore v(·,x) is indeed locally Lipschitz. □

Remark 3.2. It is important to notice that the results of Theorem 3.1 hold in the same form when considering a state-dependent diffusion coefficient σ(x) in (2.1), provided that σ_{ij} ∈ C^1(R^d; R). Indeed the proof remains exactly the same, as we have never used the specific form of the dynamics of X in (2.1).

There is a simple and useful corollary to the theorem.

Corollary 3.3. Assume T < +∞. Let condition (A) and one of the two conditions below hold:

(i) g(x) = f(T,x), x ∈ R^d;
(ii) ∃ c > 0 such that h(T,x) + n(x) ≥ -∂_t f(T,x) - c, for x ∈ R^d.

Then for a.e. (t,x) ∈ [0,T]×R^d and τ_* = τ_*(t,x) we have

    \partial_t v(t,x) \le E\Big[ \int_0^{\tau_*} \partial_t h(t+s, X^x_s)\, ds + \partial_t f(t+\tau_*, X^x_{\tau_*}) \Big] + c\, P(\tau_* = T-t),        (3.14)

where c = 0 if (i) holds.

Proof. Under (ii) the claim is trivial since ∂_t v ≤ \bar{v} and due to (3.1). Under (i), instead, we notice that (3.7) in the proof of Theorem 3.1 may be bounded as follows:

    v(t,x) - v(t-ε,x) \le E\Big[ \int_0^{\tau} \big( h(t+s, X^x_s) - h(t-ε+s, X^x_s) \big)\, ds + v(t+\tau, X^x_{\tau}) - v(t-ε+\tau, X^x_{\tau}) \Big]
    \le E\Big[ \int_0^{\tau} \big( h(t+s, X^x_s) - h(t-ε+s, X^x_s) \big)\, ds + f(t+\tau, X^x_{\tau}) - f(t-ε+\tau, X^x_{\tau}) \Big].

Then dividing by ε and taking limits as ε → 0 we obtain (3.14). □

Before concluding the section we provide two simple technical lemmas which will be useful in the next section.

Lemma 3.4.
For k = 1,...,d one has, P-almost surely,

    \sup_{0 \le t \le T} \| \partial_k X^x_t \|_d^2 \le 2 \exp\Big( 2T \int_0^T \sum_{j=1}^d \| \nabla_x \mu_j(X^x_s) \|_d^2\, ds \Big).        (3.15)

Proof. By using |a+b|^2 ≤ 2(|a|^2 + |b|^2) and Hölder's inequality applied to (2.13) we get

    \| \partial_k X^x_t \|_d^2 = \sum_{j=1}^d \Big( \delta_{j,k} + \int_0^t \langle \nabla_x \mu_j(X^x_s), \partial_k X^x_s \rangle\, ds \Big)^2 \le 2 \Big( 1 + T \sum_{j=1}^d \int_0^t \| \nabla_x \mu_j(X^x_s) \|_d^2\, \| \partial_k X^x_s \|_d^2\, ds \Big).

That is, u(t) := ‖∂_k X^x_t‖_d^2 satisfies u(t) ≤ 2 + \int_0^t b(s) u(s)\, ds with b(s) := 2T \sum_{j=1}^d \| \nabla_x \mu_j(X^x_s) \|_d^2, so an application of Gronwall's inequality, u(t) ≤ 2 \exp(\int_0^t b(s)\, ds), concludes the proof. □
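As an aside (ours, not part of the paper), the representation (3.3) of Theorem 3.1 can be sanity-checked by Monte Carlo in a degenerate case. Take d = 1, μ ≡ 0, σ = 1, h ≡ 0 and f(t,x) = g(x) = x²; then X^x_t = x + B_t, the flow derivative of (2.13) is identically 1, X² is a submartingale so τ_* = T - t, v(t,x) = x² + (T - t), and (3.3) reduces to ∂_x v(t,x) = E[2 X^x_{T-t} · 1] = 2x.

```python
import numpy as np

# Sanity check of (3.3) in a degenerate case (our own illustration):
# d = 1, mu = 0, sigma = 1, h = 0, f(t,x) = g(x) = x^2, so tau_* = T - t
# and v(t,x) = x^2 + (T - t), whence the exact gradient is 2x.
rng = np.random.default_rng(2)
n_paths, T, t, x0 = 20_000, 1.0, 0.0, 0.3

X_T = x0 + rng.normal(0.0, np.sqrt(T - t), n_paths)   # X^x at time T - t

# Representation (3.3): d v/dx = E[ g'(X_{T-t}) * (dX/dx) ] with dX/dx = 1.
grad_mc = np.mean(2.0 * X_T * 1.0)
grad_exact = 2.0 * x0
print(grad_mc, grad_exact)
```

In non-degenerate examples τ_* is not known in closed form, which is precisely why the estimates of Theorem 3.1 and Lemma 3.4 are needed.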
