LOCAL MARTINGALES IN DISCRETE TIME

VILMOS PROKAJ AND JOHANNES RUF

Abstract. For any discrete-time P–local martingale S there exists a probability measure Q ∼ P such that S is a Q–martingale. A new proof for this result is provided. This proof also yields that, for any ε > 0, the measure Q can be chosen so that dQ/dP ≤ 1 + ε.

1. Introduction and related literature

Let (Ω, F, P) denote a probability space equipped with a discrete-time filtration (F_t)_{t∈N_0}, where F_t ⊂ F. Moreover, let S = (S_t)_{t∈N_0} denote a d-dimensional P–local martingale, where d ∈ N. Then there exists a probability measure Q, equivalent to P, such that S is a Q–martingale. This follows from more general results that relate appropriate no-arbitrage conditions to the existence of an equivalent martingale measure; see Dalang et al. (1990) and Schachermayer (1992) for the finite-horizon case and Schachermayer (1994) for the infinite-horizon case. These results are sometimes baptized fundamental theorems of asset pricing.

More recently, Kabanov (2008) and Prokaj and Rásonyi (2010) have provided a direct proof for the existence of such a measure Q; see also Section 2 in Kabanov and Safarian (2009). The proof in Kabanov (2008) relies on deep functional analytic results, e.g., the Krein–Šmulian theorem. The proof in Prokaj and Rásonyi (2010) avoids functional analysis but requires non-trivial measurable selection techniques.

As this note demonstrates, in one dimension, an important but special case, the Radon–Nikodym derivative Z_∞ = dQ/dP can be explicitly constructed. Moreover, in higher dimensions, the measurable selection results can be simplified. This is done here by appropriately modifying an ingenious idea of Rogers (1994).

More precisely, the following theorem will be proved in Section 3.

Theorem 1. For all ε > 0, there exists a uniformly integrable P–martingale Z = (Z_t)_{t∈N_0}, bounded from above by 1 + ε, with Z_∞ = lim_{t↑∞} Z_t > 0, such that ZS is a P–martingale and such that E_P[Z_t |S_t|^p] < ∞ for all t, p ∈ N_0.

The fact that the bound on Z can be chosen arbitrarily close to 1 seems to be a novel observation. Considering a standard random walk S directly yields that there is no hope for a stronger version of Theorem 1 which would assert that ZS is not only a P–martingale but also a P–uniformly integrable martingale.

A similar version of the following corollary is formulated in Prokaj and Rásonyi (2010); it would also be a direct consequence of Kabanov and Stricker (2001). To state it, let us introduce the total variation norm ‖·‖ for two equivalent probability measures Q_1, Q_2 as

    ‖Q_1 − Q_2‖ = E_{Q_1}[|dQ_2/dQ_1 − 1|].

Corollary 2. For all ε > 0, there exists a probability measure Q, equivalent to P, such that S is a Q–martingale, ‖P − Q‖ < ε, and E_Q[|S_t|^p] < ∞ for all t, p ∈ N_0.

Date: January 17, 2017. 2010 Mathematics Subject Classification. Primary: 60G42; 60G48. We thank Yuri Kabanov for many helpful comments. J.R. is grateful for the generous support provided by the Oxford-Man Institute of Quantitative Finance at the University of Oxford.

To reformulate Corollary 2 in more abstract terms, let us introduce the spaces

    M_loc = {Q ∼ P : S is a Q–local martingale};
    M^p = {Q ∼ P : S is a Q–martingale with E_Q[|S_t|^p] < ∞ for all t ∈ N_0}, p > 0.

Then Corollary 2 states that the space ⋂_{p>0} M^p is dense in M_loc with respect to the total variation norm ‖·‖.

Proof of Corollary 2. Consider the P–uniformly integrable martingale Z of Theorem 1, with ε replaced by ε/2. Then the probability measure Q, given by dQ/dP = Z_∞, satisfies the conditions of the assertion. Indeed, we only need to observe that

    E_P[|Z_∞ − 1|] = 2 E_P[(Z_∞ − 1) 1_{Z_∞ > 1}] ≤ ε,

where we used that E_P[Z_∞ − 1] = 0, and the assertion follows. □
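The estimate in this proof uses only that Z_∞ has P-expectation one and is bounded from above by 1 + ε/2. The following minimal numerical check of that step is not part of the paper; the finite uniform sample space, the seed, and all numbers are illustrative.

```python
import numpy as np

# Check that E_P[|Z - 1|] = 2 E_P[(Z - 1)^+] <= eps whenever E_P[Z] = 1 and Z <= 1 + eps/2.
rng = np.random.default_rng(0)
eps = 0.1
x = rng.uniform(size=100_000)
z_inf = 1.0 + (eps / 2.0) * (x - x.mean())       # mean one, values inside (1 - eps/2, 1 + eps/2)

lhs = np.abs(z_inf - 1.0).mean()                 # E_P[|Z_inf - 1|]
rhs = 2.0 * np.maximum(z_inf - 1.0, 0.0).mean()  # 2 E_P[(Z_inf - 1)^+]
print(lhs, rhs, lhs <= eps)                      # the two agree and stay below eps
```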
2. Generalized conditional expectation and local martingales

For sake of completeness, we review the relevant facts related to local martingales in discrete time. To start, note that for a sigma algebra G ⊂ F and a nonnegative random variable Y, not necessarily integrable, we can define the so-called generalized conditional expectation

    E_P[Y | G] = lim_{k↑∞} E_P[Y ∧ k | G].

Next, for a general random variable W with E_P[|W| | G] < ∞, but not necessarily integrable, we can define the generalized conditional expectation

    E_P[W | G] = E_P[W^+ | G] − E_P[W^− | G].

For a stopping time τ and a stochastic process X we write X^τ to denote the process obtained from stopping X at time τ.

Definition 3. A stochastic process S = (S_t)_{t∈N_0} is
• a P–martingale if E_P[|S_t|] < ∞ and E_P[S_{t+1} | F_t] = S_t for all t ∈ N_0;
• a P–local martingale if there exists a sequence (τ_n)_{n∈N} of stopping times such that lim_{n↑∞} τ_n = ∞ and S^{τ_n} 1_{τ_n > 0} is a P–martingale;
• a P–generalized martingale if E_P[|S_{t+1}| | F_t] < ∞ and E_P[S_{t+1} | F_t] = S_t for all t ∈ N_0.

Proposition 4. Any P–local martingale is a P–generalized martingale.

This proposition dates back to Theorem II.42 in Meyer (1972); see also Theorem VII.1 in Shiryaev (1996). Its reverse direction would also be true but will not be used below. A direct corollary of the proposition is that a P–local martingale S with E_P[|S_t|] < ∞ for all t ∈ N_0 is indeed a P–martingale. For sake of completeness, we will provide a proof of the proposition here.

Proof of Proposition 4. Let S denote a P–local martingale. Fix t ∈ N_0 and a localization sequence (τ_n)_{n∈N}. For each n ∈ N, we have, on the event {τ_n > t},

    E_P[|S_{t+1}| | F_t] = lim_{k↑∞} E_P[|S_{t+1}| ∧ k | F_t] = lim_{k↑∞} E_P[|S^{τ_n}_{t+1}| ∧ k | F_t] = E_P[|S^{τ_n}_{t+1}| | F_t] < ∞.

Since lim_{n↑∞} τ_n = ∞, we get E_P[|S_{t+1}| | F_t] < ∞.

The next step we only argue for the case d = 1, for sake of notation, but the general case follows in the same manner. As above, again for fixed n ∈ N, on the event {τ_n > t}, we get

    E_P[S_{t+1} | F_t] = lim_{k↑∞} ( E_P[S^+_{t+1} ∧ k | F_t] − E_P[S^−_{t+1} ∧ k | F_t] )
                       = lim_{k↑∞} E_P[(S^{τ_n}_{t+1} ∧ k) ∨ (−k) | F_t] = S_t.

Thanks again to lim_{n↑∞} τ_n = ∞, the assertion follows. □

Example 5. Assume that (Ω, F, P) supports two independent random variables U and θ such that U is uniformly distributed on [0,1], and P[θ = −1] = 1/2 = P[θ = 1]. Moreover, let us assume that F_0 = {∅, Ω}, F_1 = σ(U), and F_t = σ(U, θ) for all t ∈ N \ {1}. Then the stochastic process S = (S_t)_{t∈N_0}, given by S_t = (θ/U) 1_{t≥2}, is easily seen to be a P–generalized martingale and a P–local martingale with localization sequence (τ_n)_{n∈N} given by

    τ_n = 1 × 1_{1/U > n} + ∞ × 1_{1/U ≤ n}.

However, we have E_P[|S_2|] = E_P[1/U] = ∞; hence S is not a P–martingale.

Now, consider the process Z = (Z_t)_{t∈N_0}, given by Z_t = 1_{t=0} + 2U 1_{t≥1}. A simple computation shows that Z is a strictly positive P–uniformly integrable martingale. Moreover, since Z_t S_t = 2θ 1_{t≥2}, we have E_P[Z_t |S_t|] ≤ 2 for all t ∈ N_0 and ZS is a P–martingale. If we require the Radon–Nikodym derivative to be bounded by a constant 1 + ε ∈ (1, 2], we could consider Ẑ = (Ẑ_t)_{t∈N_0} with Ẑ_t = 1_{t=0} + (U ∧ ε)/(ε − ε²/2) 1_{t≥1}. This illustrates the validity of Theorem 1 in the context of this example.
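As a sanity check of Example 5, the following Monte Carlo sketch may help; it is not part of the paper, and the sample size, seed, and the choice ε = 1/2 are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000
u = rng.uniform(size=n)
theta = rng.choice([-1.0, 1.0], size=n)

s2 = theta / u                     # S_2 = theta / U; E_P[|S_2|] = E_P[1/U] = infinity
z2 = 2.0 * u                       # Z_2 = 2U, the density process at time 2

print(np.abs(s2).mean())           # keeps growing with n: the mean has no finite limit
print((z2 * np.abs(s2)).mean())    # exactly 2, since Z_2 |S_2| = 2
print((z2 * s2).mean())            # ~ 0, consistent with ZS being a P-martingale

eps = 0.5                          # the bounded variant hat-Z_t = (U ∧ eps)/(eps - eps^2/2), t >= 1
zhat = np.minimum(u, eps) / (eps - eps**2 / 2.0)
print(zhat.mean(), zhat.max())     # mean ~ 1 and maximum 1/(1 - eps/2) <= 1 + eps
```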
To see a difficulty in proving Theorem 1, let us consider a local martingale S′ = (S′_t)_{t∈N_0} with two jumps instead of one; for example, let us define

    S′_t = (1_{U > 1/2} − 1_{U < 1/2}) 1_{t≥1} + (θ/U) 1_{t≥2}.

Again, it is simple to see that this specification makes S′ indeed a P–local and P–generalized martingale. However, now we have E_P[Z_1 S′_1] = 1/2 ≠ 0; hence ZS′ is not a P–martingale. Similarly, neither is ẐS′. Nevertheless, as Theorem 1 states, there exists a uniformly integrable P–martingale Z′ such that Z′S′ is a P–martingale. More details on the previous example are provided in Ruf (2017).

3. Proof of Theorem 1

We start with three preparatory lemmata.

Lemma 6. Let Q denote some probability measure on (Ω, F), let G, H be sigma algebras with G ⊂ H ⊂ F, and let W denote an H–measurable d-dimensional random vector with

    E_Q[|W| | G] < ∞ and E_Q[W | G] = 0.    (3.1)

Suppose that (α_k)_{k∈N} is a bounded family of H–measurable random variables with lim_{k↑∞} α_k = 1. Then for any ε > 0 there exists a family (V_k)_{k∈N} of random variables such that
(i) V_k is H–measurable and takes values in (1 − ε, 1) for each k ∈ N;
(ii) lim_{k↑∞} 1_{E_Q[V_k α_k W | G] = 0} = 1.

We shall provide two proofs of this lemma; the first one applies only to the case d = 1, but avoids the technicalities necessary for the general case.

Proof of Lemma 6 in the one-dimensional case. With the convention 0/0 := 1, define, for each k ∈ N, the random variable

    C_k = E_Q[α_k W^+ | G] / E_Q[α_k W^− | G]

and note that

    lim_{k↑∞} |C_k − 1| = |E_Q[W^+ | G] / E_Q[W^− | G] − 1| = (1/E_Q[W^− | G]) |E_Q[W^+ | G] − E_Q[W^− | G]| = 0.

Next, set

    V_k = (1 − ε) ∨ ( 1_{W ≥ 0} (1 ∧ C_k^{−1}) + 1_{W < 0} (1 ∧ C_k) ),

and note that on the event {1 − ε ≤ C_k ≤ 1/(1 − ε)} ∈ G we indeed have E_Q[V_k α_k W | G] = 0, which concludes the proof. □
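To get a concrete feel for this construction, here is a minimal sketch on a two-point sample space with trivial G, so that conditional expectations reduce to plain expectations under Q; the numbers and the particular truncations α_k are illustrative, not the paper's.

```python
import numpy as np

eps = 0.2
w = np.array([-1.0, 3.0])       # W with E_Q[W] = 0.75 * (-1) + 0.25 * 3 = 0
q = np.array([0.75, 0.25])      # the Q-probabilities of the two points

for k in (1, 2, 3):
    alpha = np.minimum(1.0, k / np.abs(w))                     # truncations with alpha_k -> 1
    c = (q * alpha * np.maximum(w, 0.0)).sum() / (q * alpha * np.maximum(-w, 0.0)).sum()
    v = np.where(w >= 0.0, min(1.0, 1.0 / c), min(1.0, c))     # 1 ∧ C_k^{-1} resp. 1 ∧ C_k
    v = np.maximum(1.0 - eps, v)                               # keep V_k above 1 - eps
    print(k, round(c, 3), round((q * v * alpha * w).sum(), 3))
# As soon as C_k lies in [1 - eps, 1/(1 - eps)] the floor is inactive and
# E_Q[V_k alpha_k W] = 0, which is property (ii) of Lemma 6.
```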
Proof of Lemma 6 in the general case. The proof is similar to the proof of the Dalang–Morton–Willinger theorem based on utility maximisation; see Rogers (1994) and Delbaen and Schachermayer (2006, Section 6.6) for a detailed exposition. But instead of using the exponential utility, we choose a strictly convex function (the negative of the utility) which is smooth and whose derivative takes values in (1 − ε, 1). Indeed, in what follows we fix the convex function

    f(a) = a (1 + (ε/π)(arctan(a) − π/2)), a ∈ R.

Then f is smooth and a direct computation shows that f is convex with derivative f′ taking values in the interval (1 − ε, 1).

We formulated the statement with generalized conditional expectations. However, changing the probability appropriately with a G–measurable density we can assume, without loss of generality, that W ∈ L¹(Q). Indeed, the probability measure Q′, given by

    dQ′/dQ = e^{−E_Q[|W| | G]} / E_Q[e^{−E_Q[|W| | G]}],

satisfies that W ∈ L¹(Q′). Moreover, the (generalized) conditional expectations with respect to G are the same under Q and Q′. So in what follows we assume that |W| is an integrable random variable.

For W there is a maximal G–measurable orthogonal projection R of R^d such that RW = 0 almost surely. For a proof, see Proposition 2.4 in Rogers (1994) or Section 6.2 in Delbaen and Schachermayer (2006). The orthocomplement of the range of R is called the predictable range of W.

Let B denote the d–dimensional Euclidean unit ball and set α_∞ = 1. For each k ∈ N ∪ {∞}, consider the random function (or field) h_k over B, defined by the formula

    h_k(u, ·) = h_k(u) = E_Q[f(α_k W · u) | G] + |Ru|² for all u ∈ B.

Since f is continuous, for each k ∈ N ∪ {∞}, h_k has a version that is continuous in u for each ω ∈ Ω; see Lemma 9 below. Then for each compact subset C of B and each k ∈ N ∪ {∞} there is a G–measurable random vector U^C_k taking values in C such that

    h_k(U^C_k) = min_{u∈C} h_k(u).

This is a kind of measurable selection; for sake of completeness we give an elementary proof below in Lemma 11.

Next, for each k ∈ N, let U_k be a G–measurable minimiser of h_k in the unit ball B and define

    V_k = f′(α_k W · U_k).

With this definition, (i) follows directly. For (ii) we prove below that

    E_Q[V_k α_k W | G] + 2RU_k = 0 on {|U_k| < 1}, k ∈ N;    (3.2)
    lim_{k↑∞} U_k = 0, almost surely.    (3.3)

Then, on the event {|U_k| < 1}, (3.2) and the G–measurability of R yield

    |E_Q[V_k α_k W | G]|² = −2 E_Q[V_k α_k W | G] · RU_k = −2 E_Q[V_k α_k RW | G] · U_k = 0,

giving us (ii). Thus, in order to complete the proof it suffices to argue (3.2)–(3.3).

For (3.2), note that h_k is continuously differentiable almost surely for each k ∈ N, see Lemma 10 below; moreover, its derivative at the minimum point U_k, which equals the left-hand side of (3.2), must be zero when U_k is inside the ball B.

For (3.3), observe that h_∞ has a unique minimiser over B, which is the zero vector. To see this, observe that

    h_∞(u) = E_Q[f(W · (I − R)u) | G] + |Ru|²,

where I denotes the d-dimensional identity matrix. So to see that the zero vector is the unique minimiser it is enough to show that inf_{|u|≥δ} h_∞(u) > 0 = h_∞(0) almost surely for any δ ∈ (0, 1]. Let U be a G–measurable minimiser of h_∞ over {u : |u| ∈ [δ, 1]}. Then

    E_Q[f(W · (I − R)U) | G] > 0 on {(I − R)U ≠ 0};
    |RU|² ≥ δ² > 0 on {(I − R)U = 0}.

The first part follows from the strict convexity of f in conjunction with Jensen's inequality, taking into account that E_Q[W | G] = 0 and that W · (I − R)U has non-trivial conditional law on {(I − R)U ≠ 0} by the maximality of R. Whence inf_{|u|≥δ} h_∞(u) > 0 = h_∞(0), as required.

Finally, as lim_{k↑∞} α_k = 1 and f is Lipschitz continuous we have

    lim_{k↑∞} sup_{u∈B} |h_k(u) − h_∞(u)| = lim_{k↑∞} sup_{u∈B∩Q^d} |h_k(u) − h_∞(u)| = 0 almost surely.

Hence, any G–measurable sequence (U_k)_{k∈N} of minimisers of h_k converges to zero, the unique minimiser of h_∞, almost surely. This shows (3.3) and completes the proof. □

Lemma 7. Let Q denote some probability measure on (Ω, F), let G, H be sigma algebras with G ⊂ H ⊂ F, let Y denote a one-dimensional random variable with Y ≥ 0 and E_Q[Y | H] < ∞, and let W denote an H–measurable d-dimensional random vector such that (3.1) holds. Then, for any ε > 0, there exists a random variable z such that
(i) z is H–measurable and takes values in (0, 1 + ε);
(ii) Q[z < 1 − ε] < ε;
(iii) E_Q[z | G] = 1;
(iv) E_Q[zW | G] = 0;
(v) E_Q[zY | G] < ∞.

Proof. For each k ∈ N, define the (0, 1]–valued, H–measurable random variable

    α_k = 1_{E_Q[Y | H] ≤ k} + (1/E_Q[Y | H]) 1_{E_Q[Y | H] > k}

and note that lim_{k↑∞} α_k = 1. Lemma 6 now yields the existence of a family (V_k)_{k∈N} of H–measurable random variables such that V_k ∈ (1/(1 + ε/2), 1) and lim_{k↑∞} 1_{E_Q[V_k α_k W | G] = 0} = 1. Note that this yields a G–measurable random variable K, taking values in N, such that E_Q[V_K α_K W | G] = 0, E_Q[V_K α_K | G] > 1/(1 + ε), and Q[E_Q[Y | H] > K] < ε. Setting now

    z = V_K α_K / E_Q[V_K α_K | G]

yields a random variable with the claimed properties. □
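The convex-minimisation step behind the general case of Lemma 6 can also be tried out numerically. The sketch below is illustrative, not the paper's: it assumes a trivial σ-algebra G, a four-point measure Q, and a two-dimensional W whose predictable range is all of R² (so the projection R vanishes), and it replaces the measurable selection of a minimiser by plain gradient descent.

```python
import numpy as np

eps = 0.3

def f_prime(a):
    # derivative of f(a) = a * (1 + (eps/pi) * (arctan(a) - pi/2)); its values lie in (1 - eps, 1)
    return 1.0 + (eps / np.pi) * (np.arctan(a) - np.pi / 2.0) + (eps / np.pi) * a / (1.0 + a * a)

# Four equally likely values of W with E_Q[W] = 0; alpha is a truncation with E_Q[alpha W] != 0,
# so a genuine correction V is needed.
w = np.array([[1.0, 1.0], [1.0, -1.0], [-1.0, 1.0], [-1.0, -1.0]])
q = np.full(4, 0.25)
alpha = np.array([0.9, 1.0, 1.0, 1.0])

def grad_h(u):
    # gradient of h(u) = E_Q[f(alpha W . u)]; the |Ru|^2 term is absent since R = 0 here
    a = alpha * (w @ u)
    return (q * f_prime(a) * alpha) @ w

u = np.zeros(2)
for _ in range(20_000):
    u -= 0.1 * grad_h(u)                       # h is smooth and convex, so this converges

v = f_prime(alpha * (w @ u))                   # V = f'(alpha W . U), as in the proof
print(u, np.linalg.norm(u) < 1.0)              # the minimiser sits inside the unit ball
print(v.min() > 1.0 - eps, v.max() < 1.0)      # property (i) of Lemma 6
print((q * v * alpha) @ w)                     # E_Q[V alpha W] ~ (0, 0): the first-order condition (3.2)
```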
Lemma 8. Fix n ∈ N_0, let Q denote some probability measure on (Ω, F) such that S is a Q–local martingale, and let Y denote a nonnegative random variable with E_Q[Y | F_n] < ∞. Then, for each ε > 0, there exists a probability measure Q′, equivalent to Q, with density Z^{(n)} = dQ′/dQ such that
(i) Z^{(n)} ∈ (0, 1 + ε);
(ii) Q[Z^{(n)} < 1 − ε] < ε;
(iii) S is a Q′–local martingale;
(iv) E_{Q′}[Y] < ∞.

Proof. In this proof, we use the convention F_{−1} = {∅, Ω} and ΔS_0 = 0. Let ε̃ > 0 be sufficiently small such that

    (n + 1) ε̃ ≤ ε, (1 + ε̃)^{n+1} ≤ 1 + ε, (1 − ε̃)^{n+1} ≥ 1 − ε.

With ε replaced by ε̃ and with G = F_{n−1}, H = F_n, and W = ΔS_n, hence E_Q[|W| | G] < ∞ and E_Q[W | G] = 0 by Proposition 4, let z_n denote the corresponding random variable of Lemma 7. If n = 0, we define Q′ by dQ′/dQ = z_0, and the lemma is proven.

If n > 0, we proceed iteratively. Consider t ∈ {0, ..., n − 1} and assume that we have random variables z_{t+1}, ..., z_n such that, in particular, E_Q[Y ∏_{i=t+1}^n z_i | F_t] < ∞. We now obtain a random variable z_t by again applying Lemma 7, with ε replaced by ε̃ and with G = F_{t−1}, H = F_t, W = ΔS_t, and Y replaced by Y ∏_{i=t+1}^n z_i.

With the family (z_0, ..., z_n) now given, let us define Z^{(n)} = ∏_{i=0}^n z_i and Q′ by dQ′/dQ = Z^{(n)}. To argue that S is a Q′–local martingale, we may consider any sequence of stopping times that localizes S under Q. Since all other assertions follow directly from the construction of Z^{(n)} and the choice of ε̃, the lemma is proven. □

Proof of Theorem 1. We inductively construct a sequence (Q^{(n)})_{n∈N_0} of probability measures, equivalent to P, and a sequence (ε^{(n)})_{n∈N_0} of positive reals using Lemma 8. To start, set Q^{(−1)} = P. Now, fix n ∈ N_0 for the moment and suppose that we have Q^{(n−1)} and (ε^{(m)})_{0≤m<n} such that ∏_{m=0}^{n−1} (1 + ε^{(m)}) < 1 + ε. Choose ε^{(n)} to be sufficiently small such that ∏_{m=0}^{n} (1 + ε^{(m)}) < 1 + ε, and such that for any A ∈ F with Q^{(n−1)}[A] ≤ ε^{(n)} we have P[A] < 2^{−n}. Then apply Lemma 8 with ε replaced by ε^{(n)}, and with Q = Q^{(n−1)} and Y = e^{|S_n|} to obtain a probability measure Q^{(n)} with density Z^{(n)}, that is,

    dQ^{(n)} = Z^{(n)} dQ^{(n−1)} = ( ∏_{m=0}^{n} Z^{(m)} ) dP.

Due to the fact

    P[|1 − Z^{(n)}| > ε^{(n)}] ≤ 2^{−n} as Q^{(n−1)}[|1 − Z^{(n)}| > ε^{(n)}] ≤ ε^{(n)},

the Borel–Cantelli lemma yields ∑_{n∈N_0} |1 − Z^{(n)}| < ∞ P–almost surely; hence the infinite product Z_∞ = ∏_{n=0}^{∞} Z^{(n)} converges and is positive P–almost surely. It is clear that Z_∞ ≤ 1 + ε.

We define the probability measure Q by dQ/dP = Z_∞ and denote the corresponding density process by Z_t = E_P[Z_∞ | F_t], for each t ∈ N_0. As ∏_{m>t} Z^{(m)} < 1 + ε we have Q ≤ (1 + ε) Q^{(t)} and as a result

    E_P[Z_t e^{|S_t|}] = E_Q[e^{|S_t|}] ≤ (1 + ε) E_{Q^{(t)}}[e^{|S_t|}] < ∞

by the choice of Q^{(t)}; hence E_P[Z_t |S_t|^p] < ∞ for all t, p ∈ N_0.

It remains to argue that ZS is a P–martingale or, equivalently, that S is a Q–martingale. Since we have already established E_Q[|S_t|] < ∞ for all t ∈ N_0, it suffices to fix t ∈ N and to prove that E_Q[S_t | F_{t−1}] = S_{t−1}. To this end, recall that S is a Q^{(n)}–local martingale for each n ∈ N_0 by Lemma 8(iii) and note that dominated convergence, the Bayes formula, and Proposition 4 yield

    E_Q[S_t | F_{t−1}] Z_{t−1} = E_P[S_t Z_∞ | F_{t−1}] = lim_{n↑∞} E_P[ S_t ∏_{m=0}^{n} Z^{(m)} | F_{t−1} ]
        = lim_{n↑∞} E_{Q^{(n)}}[S_t | F_{t−1}] E_P[ dQ^{(n)}/dP | F_{t−1} ]
        = S_{t−1} lim_{n↑∞} E_P[ ∏_{m=0}^{n} Z^{(m)} | F_{t−1} ] = S_{t−1} Z_{t−1}.

This completes the proof. □
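The last display rests on the Bayes rule E_Q[X | F_{t−1}] E_P[dQ/dP | F_{t−1}] = E_P[X dQ/dP | F_{t−1}]. A toy verification on a six-point sample space, with purely illustrative numbers that are not from the paper, reads as follows.

```python
import numpy as np

p = np.array([0.10, 0.20, 0.10, 0.25, 0.15, 0.20])   # the measure P on six atoms
z = np.array([1.20, 0.80, 1.10, 0.90, 1.00, 1.05])   # a candidate density dQ/dP
z /= (p * z).sum()                                   # normalise so that E_P[z] = 1
q = p * z                                            # the measure Q
x = np.array([3.0, -1.0, 0.5, 2.0, 4.0, -2.5])       # any random variable, e.g. S_t

for block in (slice(0, 3), slice(3, 6)):             # F_{t-1} generated by the partition {0,1,2}, {3,4,5}
    e_q_x = (q[block] * x[block]).sum() / q[block].sum()               # E_Q[X | block]
    e_p_z = (p[block] * z[block]).sum() / p[block].sum()               # E_P[dQ/dP | block]
    e_p_xz = (p[block] * x[block] * z[block]).sum() / p[block].sum()   # E_P[X dQ/dP | block]
    print(np.isclose(e_q_x * e_p_z, e_p_xz))          # True on each block
```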
Appendix A

In this appendix, we provide some measurability results necessary for the proof of Lemma 6.

Lemma 9. Let G be a sigma algebra with G ⊂ F and let ξ be a random element in C(K), where (K, m) is a compact metric space. Suppose that E_P[sup_{u∈K} |ξ(u)|] < ∞ and let η(u) = E_P[ξ(u) | G] for all u ∈ K. Then (η(u))_{u∈K} has a continuous modification.

Proof. Let D be a countable dense subset of K. We show that there is Ω′ ∈ G with full probability such that (η(u))_{u∈D} is uniformly continuous over D on Ω′. Then we can define

    η̃(u) = lim_{u_n → u, u_n ∈ D} η(u_n) on Ω′, and η̃(u) = 0 otherwise.

It is a routine exercise to check that η̃ is well defined and a continuous modification of η.

One way to get Ω′ is the following. Let µ be the modulus of continuity of ξ, that is,

    µ(δ) = sup_{u, u′ ∈ K, m(u,u′) ≤ δ} |ξ(u) − ξ(u′)|, δ > 0.

Obviously µ(δ) → 0 everywhere as δ ↓ 0. Dominated convergence, in conjunction with the bound µ ≤ 2 sup_{u∈K} |ξ(u)|, yields µ̃(δ) = E[µ(δ) | G] → 0 as δ ↓ 0 almost surely. Now define

    Ω′ = { lim_{n↑∞} µ̃(1/n) = 0 } ∩ ⋂_{n∈N} ⋂_{u,u′∈D, m(u,u′) ≤ 1/n} { |η(u) − η(u′)| ≤ µ̃(1/n) }.

Clearly Ω′ has full probability and the claim is proved. □

In the setting of Lemma 9, when K ⊂ R^d and ξ is a random element in C¹(K), then under mild conditions η(u) = E[ξ(u) | G] has a version taking values in C¹(K). This is the content of the next lemma. Recall that a function f defined on K belongs to C¹(K) if f is continuous and there is a continuous R^d–valued function on K which agrees with the gradient f′ of f in the interior of K.

Lemma 10. Let G be a sigma algebra with G ⊂ F and let ξ be a random element in C¹(K), where K ⊂ R^d is a compact set. Suppose that

    E_P[sup_{u∈K} |ξ(u)|] + E_P[sup_{u∈K} |ξ′(u)|] < ∞

and let η(u) = E_P[ξ(u) | G] for all u ∈ K. Then (η(u))_{u∈K} has a version taking values in C¹(K) and the continuous version of (E[ξ′(u) | G])_{u∈K} gives the gradient of η almost surely.

Proof. By Lemma 9 both η(u) = E[ξ(u) | G] and η′(u) = E[ξ′(u) | G] have continuous versions. We prove that, apart from a null set, η′ is indeed the gradient of η. To this end, let D be a countable dense subset of the interior of K and denote by I(a, b) a directed segment going from a to b, for each a, b ∈ K. Then, by assumption, for a, b ∈ D with I(a, b) ⊂ int K we get

    η(b) − η(a) = E[ξ(b) − ξ(a) | G] = E[ ∫_{I(a,b)} ξ′(u) du | G ] = ∫_{I(a,b)} η′(u) du, almost surely.

Hence, there exists an event Ω′ ∈ G with P[Ω′] = 1 such that

    η(b, ω) − η(a, ω) = ∫_{I(a,b)} η′(u, ω) du, for all a, b ∈ D with I(a, b) ⊂ int K and ω ∈ Ω′.

By continuity this identity extends to all a, b ∈ int K with I(a, b) ⊂ int K on Ω′. Using again the continuity of η′(·, ω) yields that η′ is indeed the gradient of η on Ω′. □

Lemma 11. Let (K, m) be a compact metric space and η a random element in C(K). Then there is a measurable minimiser of η, that is, a random element U in K such that

    η(U) = min_{u∈K} η(u).

Proof. To shorten the notation, for each x ∈ K and δ ≥ 0, let

    B(x, δ) = {u ∈ K : m(u, x) ≤ δ}, η(x, n) = min{η(u) : u ∈ B(x, 2^{−n})}.

For each n ∈ N let D_n be a finite 2^{−n}-net in K; that is, K ⊂ ⋃_{x∈D_n} B(x, 2^{−n}). For each n ∈ N fix an order of the finite set D_n. We shall use the fact that for any closed set F the minimum over F, that is, min_{u∈F} η(u), is a random variable.

We define a sequence (U_n)_{n∈N} of random elements in K by recursion, such that
• η(U_n, n) = min_{u∈K} η(u), and
• m(U_n, U_{n+1}) ≤ 2^{−n} + 2^{−(n+1)}.
Then (U_n)_{n∈N} has a limit U which is a measurable minimiser of η over K.

For n = 1 let U_1 be the first element in

    {v ∈ D_1 : η(v, 1) = min_{u∈K} η(u)}.

Since this set is not empty, U_1 is well defined. If U_1, ..., U_n are defined for some n ∈ N, set U_{n+1} to be the first element in

    {v ∈ D_{n+1} : η(v, n+1) = min_{u∈K} η(u), m(v, U_n) ≤ 2^{−n} + 2^{−(n+1)}}.

This set is not empty as

    B(U_n, 2^{−n}) ⊂ ⋃_{v∈D_{n+1}, m(v,U_n) ≤ 2^{−n} + 2^{−(n+1)}} B(v, 2^{−(n+1)}),

so U_{n+1} is well defined. We conclude that the sequence with the above properties exists and its limit is a measurable minimiser. □
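For a single fixed ω the recursion of Lemma 11 is easy to run. The sketch below is illustrative and not from the paper: it takes K = [0, 1], one sample path of η, and approximates minima over closed balls on a fine dyadic reference grid.

```python
import numpy as np

grid = np.arange(4097) / 4096.0                # fine dyadic grid standing in for K = [0, 1]
eta = np.cos(9.0 * grid) + 0.5 * grid          # one continuous "realisation" of eta
global_min = eta.min()

def ball_min(x, n):
    # min of eta over the closed ball B(x, 2^-n), evaluated on the reference grid
    return eta[np.abs(grid - x) <= 2.0 ** (-n)].min()

u_prev = None
for n in range(1, 10):
    net = np.arange(0.0, 1.0 + 1e-9, 2.0 ** (-n))   # the finite 2^-n-net D_n, in its natural order
    for v in net:                                   # take the first admissible element, as in the proof
        near = u_prev is None or abs(v - u_prev) <= 2.0 ** (-(n - 1)) + 2.0 ** (-n)
        if near and ball_min(v, n) == global_min:
            u_prev = v
            break
    print(n, u_prev)                                # U_n stabilises at a minimiser of eta
```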
References

Dalang, R. C., A. Morton, and W. Willinger (1990). Equivalent martingale measures and no-arbitrage in stochastic securities market models. Stochastics Stochastics Rep. 29(2), 185–201.
Delbaen, F. and W. Schachermayer (2006). The Mathematics of Arbitrage. Springer.
Kabanov, Y. (2008). In discrete time a local martingale is a martingale under an equivalent probability measure. Finance Stoch. 12(3), 293–297.
Kabanov, Y. and M. Safarian (2009). Markets with Transaction Costs. Springer-Verlag, Berlin.
Kabanov, Y. and C. Stricker (2001). On equivalent martingale measures with bounded densities. In Séminaire de Probabilités, XXXV, Volume 1755 of Lecture Notes in Math., pp. 139–148. Springer, Berlin.
Meyer, P.-A. (1972). Martingales and Stochastic Integrals. I. Lecture Notes in Mathematics, Vol. 284. Springer-Verlag, Berlin-New York.
Prokaj, V. and M. Rásonyi (2010). Local and true martingales in discrete time. Teor. Veroyatn. Primen. 55(2), 398–405.
Rogers, L. C. G. (1994). Equivalent martingale measures and no-arbitrage. Stochastics Stochastics Rep. 51(1-2), 41–49.
Ruf, J. (2017). Local martingales and their uniform integrability. A mini course.
Schachermayer, W. (1992). A Hilbert space proof of the fundamental theorem of asset pricing in finite discrete time. Insurance Math. Econom. 11(4), 249–257.
Schachermayer, W. (1994). Martingale measures for discrete-time processes with infinite horizon. Math. Finance 4(1), 25–55.
Shiryaev, A. N. (1996). Probability (Second ed.), Volume 95 of Graduate Texts in Mathematics. Springer-Verlag, New York. Translated from the first (1980) Russian edition by R. P. Boas.

Vilmos Prokaj, Department of Probability Theory and Statistics, Eötvös Loránd University, Budapest
E-mail address: [email protected]

Johannes Ruf, Department of Mathematics, London School of Economics and Political Science
E-mail address: [email protected]
