Distribution of eigenvalues of sample covariance matrices with tensor product samples

D. Tieplova

arXiv:1601.07443v2 [math-ph] 22 May 2016

Abstract

We consider $n^2 \times n^2$ real symmetric and hermitian matrices $M_n$ which are equal to the sum of $m_n$ tensor products of vectors $X^\mu = B(Y^\mu \otimes Y^\mu)$, $\mu = 1, \dots, m_n$, where the $Y^\mu$ are i.i.d. random vectors from $\mathbb{R}^n$ ($\mathbb{C}^n$) with zero mean and unit variance of components, and $B$ is an $n^2 \times n^2$ positive definite non-random matrix. We prove that if $m_n/n^2 \to c \in [0, +\infty)$ and the Normalized Counting Measure of eigenvalues of $BJB$, where $J$ is defined below in (2.6), converges weakly, then the Normalized Counting Measure of eigenvalues of $M_n$ converges weakly in probability to a non-random limit, and its Stieltjes transform can be found from a certain functional equation.

1 Introduction

Sample covariance matrices appeared initially in statistics in the 1920s–1930s. Nowadays these random matrices are widely used in statistical mechanics, probability theory and statistics, combinatorics, operator theory and theoretical computer science within mathematics, and also in telecommunication theory, quantitative finance, structural mechanics, etc. (see e.g. [2]).

We consider sample covariance matrices of the form
\[
M_n = \frac{1}{n} X T X^*, \tag{1.1}
\]
where $X$ is an $n \times m$ matrix whose entries are i.i.d. random variables such that
\[
\mathbf{E}\{X_{ij}\} = 0, \qquad \mathbf{E}\{X_{ij}^2\} = 1, \tag{1.2}
\]
and $T$ is an $m \times m$ positive definite matrix. One of the first questions in the study of ensembles of random matrices concerns their Normalized Counting Measure of eigenvalues, defined by the formula
\[
N_n(\Delta) = \mathrm{Card}\{ i \in [1, n] : \lambda_i \in \Delta \}/n,
\]
where
\[
-\infty < \lambda_1 \le \dots \le \lambda_n < \infty
\]
are the eigenvalues of $M_n$. Also let $\sigma_m$ be the Normalized Counting Measure of the eigenvalues $\{\tau_i\}_{i=1}^m$ of $T$.
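As a quick numerical illustration of the model (1.1) and of the Normalized Counting Measure (our sketch, not part of the paper; the diagonal choice of $T$ and all sizes are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 200, 400
X = rng.standard_normal((n, m))        # i.i.d. entries: E{X_ij} = 0, E{X_ij^2} = 1, cf. (1.2)
T = np.diag(rng.uniform(0.5, 1.5, m))  # an arbitrary m x m positive definite T
M = X @ T @ X.T / n                    # M_n = n^{-1} X T X^*, formula (1.1)

eigs = np.sort(np.linalg.eigvalsh(M))  # lambda_1 <= ... <= lambda_n

def ncm(a, b):
    # Normalized Counting Measure: N_n([a, b]) = Card{i : lambda_i in [a, b]} / n
    return np.count_nonzero((eigs >= a) & (eigs <= b)) / n
```

By construction `ncm` of the whole spectrum equals 1; as $n, m \to \infty$ with $m/n \to c$ the measure approaches the non-random limit discussed next.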
The first rigorous result on the model (1.1) was obtained in [9], where it was proved that if $\{m_n\}$ is a sequence of positive integers such that
\[
m_n \to +\infty, \quad n \to +\infty, \quad c_n = m_n/n \to c \in [0, +\infty),
\]
and the sequence $\sigma_m$ converges weakly to a probability measure $\sigma$,
\[
\lim_{m \to \infty} \sigma_m = \sigma,
\]
then the Normalized Counting Measure $N_n$ of eigenvalues of $M_n$ converges weakly in probability to a non-random measure $N$ ($N(\mathbb{R}) = 1$). The Stieltjes transform $f$ of $N$,
\[
f(z) = \int \frac{N(d\lambda)}{\lambda - z}, \quad \Im z \ne 0,
\]
is uniquely determined by the equation
\[
f(z) = \Big( c \int \frac{\tau\, \sigma(d\tau)}{1 + \tau f(z)} - z \Big)^{-1}.
\]
Since then many ensembles have been considered. We mention two versions of ensembles of sample covariance matrices similar to (1.1). The first is
\[
B X X^* B, \tag{1.3}
\]
where $X$ is an $n \times m$ matrix whose entries are i.i.d. random variables satisfying (1.2) and $B$ is an $n \times n$ matrix. Note that while studying the eigenvalues of (1.3) we can instead consider the matrices $X^* B^2 X$, which coincide with (1.1) for $T = B^2$. The second version is
\[
(R_n + a X_n)(R_n + a X_n)^*, \tag{1.4}
\]
where $X_n$ is an $n \times m$ matrix whose entries are i.i.d. random variables satisfying (1.2), $a > 0$ is a constant, and $R_n$ is an $n \times m$ random matrix independent of $X_n$.

Numerous results and references on the eigenvalue distribution of these random matrices can be found in [3], [4].

The paper is organized as follows. In Section 2 we present our result. In Section 3 we give the proof of the main theorem, and in Section 4 we prove all the technical results used in Section 3. We denote by $C$, $c$, etc. various constants appearing below, which can be different in different formulas.

2 Problem and main results

Let us define multi-indices $i = (i_1, i_2)$, where $i_1, i_2 = \overline{1, n}$, and the inversion of multi-indices $\bar{i} = (i_2, i_1)$. Let
\[
B = B_n = \{B_{i,j}\} \tag{2.1}
\]
be an $n^2 \times n^2$ real symmetric or hermitian matrix. We consider real symmetric or hermitian random matrices
\[
M_n = \frac{1}{n^2} \sum_{\mu=1}^{m} X^\mu \otimes \bar{X}^\mu, \tag{2.2}
\]
where the vectors $X^\mu$ are given by (cf. (1.3))
\[
X^\mu = B(Y^\mu \otimes Y^\mu), \quad \mu = 1, \dots, m, \tag{2.3}
\]
and $Y^\mu = \{Y_i^\mu\}_{i=1}^n$, $\mu = 1, \dots, m$, are vectors of $\mathbb{R}^n$ (or $\mathbb{C}^n$) such that $\{Y_i^\mu\}$ (or $\{\Re Y_i^\mu, \Im Y_i^\mu\}$) are i.i.d. random variables for all $i = \overline{1, n}$, $\mu = \overline{1, m}$, and
\[
\mathbf{E}\{Y_i^\mu\} = 0, \qquad \mathbf{E}\{Y_i^\mu Y_k^\nu\} = \delta_{ik}\delta_{\mu\nu} \tag{2.4}
\]
in the real symmetric case, and
\[
\mathbf{E}\{Y_i^\mu\} = \mathbf{E}\{Y_i^\mu Y_k^\nu\} = 0, \qquad \mathbf{E}\{Y_i^\mu \bar{Y}_k^\mu\} = \delta_{ik} \tag{2.5}
\]
in the hermitian case. Introduce the $n^2 \times n^2$ matrix
\[
J_{p,q} = \delta_{pq} + \delta_{p\bar{q}}, \tag{2.6}
\]
and denote by $N_n$ and $\sigma_n$ the Normalized Counting Measures of eigenvalues of $M_n$ and $BJB$ respectively. In what follows, by saying that a matrix is bounded we mean that its euclidean (or hermitian) norm satisfies $|\cdot| < c$ for some constant $c$. The main result of the paper is

Theorem 1. Let $M_n$ be a random matrix defined by (2.1)–(2.2). Assume that the sequence $\sigma_n$ converges weakly to a probability measure $\sigma$,
\[
\lim_{n \to \infty} \sigma_n = \sigma,
\]
$B$ is bounded uniformly in $n$, and $\{m_n\}$ is a sequence of positive integers such that
\[
m_n \to +\infty, \quad n \to +\infty, \quad c_n = m_n/n^2 \to c \in [0, +\infty).
\]
Then the Normalized Counting Measures $N_n$ of eigenvalues of $M_n$ converge weakly in probability to a non-random probability measure $N$, and if $f^{(0)}$ is the Stieltjes transform of $\sigma$, then the Stieltjes transform $f$ of $N$ is uniquely determined by the equation
\[
f(z) = f^{(0)}\Big( \frac{z}{c - z f(z) - 1} \Big)\big( c - z f(z) - 1 \big)^{-1}
\]
in the class of Stieltjes transforms of probability measures.

3 Proof of the main result

We will prove the theorem for the technically simpler case of hermitian matrices; the case of real symmetric matrices is analogous. The next proposition establishes the one-to-one correspondence between finite nonnegative measures and their Stieltjes transforms.

Proposition 1. Let $f$ be the Stieltjes transform of a finite nonnegative measure $m$.
Then:
(i) $f$ is analytic in $\mathbb{C} \setminus \mathbb{R}$, and $f(\bar{z}) = \overline{f(z)}$;
(ii) $\Im f(z)\, \Im z > 0$ for $\Im z \ne 0$;
(iii) $|f(z)| \le m(\mathbb{R})/|\Im z|$; in particular, $\lim_{\eta \to +\infty} \eta |f(i\eta)| \le m(\mathbb{R})$;
(iv) for any function $f$ possessing the above properties there exists a nonnegative finite measure $m$ on $\mathbb{R}$ such that $f$ is its Stieltjes transform and
\[
\lim_{\eta \to +\infty} \eta |f(i\eta)| = m(\mathbb{R}); \tag{3.1}
\]
(v) if $\Delta$ is an interval of $\mathbb{R}$ whose edges are not atoms of the measure $m$, then we have the Stieltjes–Perron inversion formula
\[
m(\Delta) = \lim_{\varepsilon \to +0} \frac{1}{\pi} \int_\Delta \Im f(\lambda + i\varepsilon)\, d\lambda;
\]
(vi) the above one-to-one correspondence between finite nonnegative measures and their Stieltjes transforms is continuous if we use the uniform convergence of analytic functions on compact sets of infinite cardinality of $\mathbb{C} \setminus \mathbb{R}$ for Stieltjes transforms, and for measures the vague convergence in general, or the weak convergence of probability measures if the r.h.s. of (3.1) is 1.

For the proofs of these assertions see e.g. [1, Section 59] and [5]. Now recall some facts from linear algebra on the resolvent of a real symmetric or hermitian matrix:

Proposition 2. Let $M$ be a real symmetric (hermitian) matrix and
\[
G_M(z) = (M - z)^{-1}, \quad \Im z \ne 0,
\]
be its resolvent. We have:
(i)
\[
|G_M(z)| \le |\Im z|^{-1}; \tag{3.2}
\]
(ii) if $G_1(z)$ and $G_2(z)$ are resolvents of real symmetric (hermitian) matrices $M_1$ and $M_2$ respectively, then
\[
G_2(z) = G_1(z) - G_1(z)(M_2 - M_1)G_2(z); \tag{3.3}
\]
(iii) if $Y \in \mathbb{R}^n$ ($\mathbb{C}^n$), then
\[
G_{M + Y \otimes \bar{Y}} = G_M - \frac{G_M (Y \otimes \bar{Y}) G_M}{1 + (G_M Y, Y)}, \quad \Im z \ne 0. \tag{3.4}
\]

In what follows we need
\[
Y_i^\mu(\tau) = Y_i^\mu\, \mathbf{1}_{|Y_i^\mu| \le \tau\sqrt{n}}, \qquad Y_i^\mu(\tau)^\circ = Y_i^\mu(\tau) - \mathbf{E}\{Y_i^\mu(\tau)\}.
\]
It is easy to see that these random variables satisfy the conditions
\[
\mathbf{E}\{Y_i^\mu(\tau)^\circ\} = \mathbf{E}\{(Y_i^\mu(\tau)^\circ)^2\} = 0, \quad \mathbf{E}\{|Y_i^\mu(\tau)^\circ|^2\} = 1 + o(1), \quad n \to +\infty, \tag{3.5}
\]
\[
\mathbf{E}\{|Y_i^\mu(\tau)^\circ|^k\} \le n^{(k-2)/2} \tau^{k-2}. \tag{3.6}
\]
Similarly to $X^\mu$ and $M_n$ we can define
\[
X^\mu(\tau) = B(Y^\mu(\tau)^\circ \otimes Y^\mu(\tau)^\circ), \qquad M_n^\tau = \frac{1}{n^2} \sum_{\mu=1}^{m} X^\mu(\tau) \otimes \bar{X}^\mu(\tau).
\]
Consider the $n^2 \times n^2$ matrices
\[
K_n = \frac{1}{n^2} \sum_{\mu=1}^{m} C^\mu \otimes \bar{C}^\mu, \qquad \hat{K}_n = \frac{1}{n^2} \sum_{\mu=1}^{m} C^\mu \otimes \bar{X}^\mu,
\]
where
\[
C_i^\mu = \sum_p B_{i,p} \big( Y_{p_1}^\mu Y_{p_2}^\mu (1 - \delta_{p_1,p_2}) + Y_{p_1}^\mu(\tau)^\circ Y_{p_2}^\mu(\tau)^\circ \delta_{p_1,p_2} \big). \tag{3.7}
\]
Here and below $\sum_p = \sum_{p_1=1}^n \sum_{p_2=1}^n$.

We need the following simple fact, a version of the min-max principle of linear algebra (see e.g. [7], Section I.6.10).

Proposition 3. Let $M_1$ and $M_2$ be $n \times n$ hermitian matrices and let $N_1$ and $N_2$ be the Normalized Counting Measures of their eigenvalues. Then for any interval $\Delta \subset \mathbb{R}$
\[
|N_1(\Delta) - N_2(\Delta)| \le \mathrm{rank}(M_1 - M_2)/n. \tag{3.8}
\]

Let $N_n$, $N_n^{(1)}$ and $\hat{N}_n^{(1)}$ be the Normalized Counting Measures of eigenvalues of the matrices $M_n$, $K_n$ and $\hat{K}_n$ respectively. Then, according to (3.8) and (3.7),
\begin{align*}
|N_n - N_n^{(1)}| &\le |N_n - \hat{N}_n^{(1)}| + |\hat{N}_n^{(1)} - N_n^{(1)}|
\le \mathrm{rank}(M_n - \hat{K}_n)/n^2 + \mathrm{rank}(\hat{K}_n - K_n)/n^2 \\
&\le \frac{1}{n^2}\Big( \mathrm{rank}\Big\{ \sum_{\mu=1}^m \big(Y_{p_1}^\mu(\tau)^\circ Y_{p_2}^\mu(\tau)^\circ - Y_{p_1}^\mu Y_{p_2}^\mu\big)\delta_{p_1,p_2} \bar{X}_q^\mu \Big\}_{p,q} \\
&\qquad\quad + \mathrm{rank}\Big\{ \sum_{\mu=1}^m C_p^\mu \big(\bar{Y}_{q_1}^\mu(\tau)^\circ \bar{Y}_{q_2}^\mu(\tau)^\circ - \bar{Y}_{q_1}^\mu \bar{Y}_{q_2}^\mu\big)\delta_{q_1,q_2} \Big\}_{p,q} \Big) \le \frac{2}{n},
\end{align*}
since the factor $\delta_{p_1,p_2}$ (resp. $\delta_{q_1,q_2}$) leaves at most $n$ nonzero rows (resp. columns), and multiplication by the matrices built from $B$ does not increase the rank.

Lemma 1. Let $G^{(1)}(z)$ and $G^\tau(z)$ be the resolvents of the matrices $K_n$ and $M_n^\tau$ respectively. Then
\[
\frac{1}{n^2} \big| \mathbf{E}\{ \mathrm{Tr}(G^{(1)}(z) - G^\tau(z)) \} \big| = o(1), \quad n \to +\infty.
\]

Proof. Consider the $(n^2 + m) \times (n^2 + m)$ block matrices $\tilde{M}_n$ and $\tilde{M}_n^\tau$ such that
\[
\tilde{M}_n = \begin{pmatrix} 0 & A^* \\ A & 0 \end{pmatrix}, \qquad \tilde{M}_n^\tau = \begin{pmatrix} 0 & (A^\tau)^* \\ A^\tau & 0 \end{pmatrix}, \tag{3.9}
\]
where $A$, $A^\tau$ are $n^2 \times m$ matrices and
\[
A_{i,\mu} = n^{-1} C_i^\mu, \qquad A_{i,\mu}^\tau = n^{-1} X_i^\mu(\tau).
\]
Denote by $\tilde{G}(z)$ and $\tilde{G}^\tau(z)$ the resolvents of the matrices $\tilde{M}_n$ and $\tilde{M}_n^\tau$ respectively. Using the formula for the inversion of a block matrix, we get
\[
z\, \mathrm{Tr}(G^{(1)}(z^2) - G^\tau(z^2)) = \frac{1}{2}\, \mathrm{Tr}(\tilde{G}(z) - \tilde{G}^\tau(z)). \tag{3.10}
\]
Now we should estimate the last expression. From (3.3) we have
\[
|\mathrm{Tr}(\tilde{G} - \tilde{G}^\tau)| = |\mathrm{Tr}(\tilde{G}\tilde{G}^\tau(\tilde{M}_n - \tilde{M}_n^\tau))|
\le \big(\mathrm{Tr}(\tilde{G}\tilde{G}^\tau \tilde{G}^* \tilde{G}^{\tau*})\big)^{1/2}\big(\mathrm{Tr}((\tilde{M}_n - \tilde{M}_n^\tau)(\tilde{M}_n^* - \tilde{M}_n^{\tau*}))\big)^{1/2}.
\]
Here and below we drop the argument $z$.
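The block-matrix step (3.9)–(3.10) rests on a standard consequence of the Schur complement: for the hermitian dilation of an $N \times m$ matrix $A$ one has $\mathrm{Tr}\,\tilde{G}(z) = 2z\,\mathrm{Tr}(AA^* - z^2)^{-1} - (m - N)/z$, so the extra term cancels in the difference in (3.10). A small numerical check of this identity (our illustration; the sizes and the point $z$ are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(3)
N, m = 40, 25                        # arbitrary sizes for the check
A = rng.standard_normal((N, m))
z = 0.7 + 0.3j                       # any nonreal point

Mt = np.block([[np.zeros((N, N)), A],
               [A.T, np.zeros((m, m))]])        # hermitian dilation, cf. (3.9)
Gt = np.linalg.inv(Mt - z * np.eye(N + m))      # resolvent of the block matrix
G1 = np.linalg.inv(A @ A.T - z**2 * np.eye(N))  # resolvent of A A^* at z^2

err = abs(np.trace(Gt) - (2 * z * np.trace(G1) - (m - N) / z))
```

The error `err` is at the level of machine precision, since the identity is exact.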
Relations (3.2) and (3.9) imply
\begin{align*}
|\mathrm{Tr}(\tilde{G} - \tilde{G}^\tau)|
&\le \frac{n}{|\Im z|^2}\Big(\mathrm{Tr}\big(2(A - A^\tau)(A^* - (A^\tau)^*)\big)\Big)^{1/2}
= \frac{\sqrt{2}}{|\Im z|^2}\Big(\sum_{\mu=1}^m \sum_i (C_i^\mu - X_i^\mu(\tau))(\bar{C}_i^\mu - \bar{X}_i^\mu(\tau))\Big)^{1/2} \\
&= \frac{\sqrt{2}}{|\Im z|^2}\Big(\sum_{\mu=1}^m \sum_{i,p,q} B_{i,p}(1 - \delta_{p_1,p_2})\big(Y_{p_1}^\mu Y_{p_2}^\mu - Y_{p_1}^\mu(\tau)^\circ Y_{p_2}^\mu(\tau)^\circ\big)
B_{q,i}(1 - \delta_{q_1,q_2})\big(\bar{Y}_{q_1}^\mu \bar{Y}_{q_2}^\mu - \bar{Y}_{q_1}^\mu(\tau)^\circ \bar{Y}_{q_2}^\mu(\tau)^\circ\big)\Big)^{1/2} \\
&= \frac{\sqrt{2}}{|\Im z|^2}\Big(\sum_{\mu=1}^m \sum_{\substack{p_1 \ne p_2 \\ q_1 \ne q_2}} B^2_{q,p}\big(Y_{p_1}^\mu Y_{p_2}^\mu \bar{Y}_{q_1}^\mu \bar{Y}_{q_2}^\mu
- Y_{p_1}^\mu(\tau)^\circ Y_{p_2}^\mu(\tau)^\circ \bar{Y}_{q_1}^\mu \bar{Y}_{q_2}^\mu
- Y_{p_1}^\mu Y_{p_2}^\mu \bar{Y}_{q_1}^\mu(\tau)^\circ \bar{Y}_{q_2}^\mu(\tau)^\circ
+ Y_{p_1}^\mu(\tau)^\circ Y_{p_2}^\mu(\tau)^\circ \bar{Y}_{q_1}^\mu(\tau)^\circ \bar{Y}_{q_2}^\mu(\tau)^\circ\big)\Big)^{1/2}.
\end{align*}
Notice that in view of (3.5) and (2.5) the expectations of the terms in which one of the indices $\{p_1, p_2, q_1, q_2\}$ is different from all the others vanish, so only the terms with $p = q$ or $\bar{p} = q$ (and $p_1 \ne p_2$) have to be kept. Relations (3.5) and (2.5) imply
\[
\mathbf{E}\big\{|Y_{p_1}^\mu|^2 |Y_{p_2}^\mu|^2 - Y_{p_1}^\mu(\tau)^\circ Y_{p_2}^\mu(\tau)^\circ \bar{Y}_{p_1}^\mu \bar{Y}_{p_2}^\mu - Y_{p_1}^\mu Y_{p_2}^\mu \bar{Y}_{p_1}^\mu(\tau)^\circ \bar{Y}_{p_2}^\mu(\tau)^\circ + |Y_{p_1}^\mu(\tau)^\circ|^2 |Y_{p_2}^\mu(\tau)^\circ|^2\big\}
= 1 - (1 + o(1)) - (1 + o(1)) + (1 + o(1)) = o(1).
\]
Combining all of the above we get
\[
\frac{1}{n^2}\big|\mathbf{E}\{\mathrm{Tr}(\tilde{G} - \tilde{G}^\tau)\}\big| \le \frac{\big(2m\,\mathrm{Tr}(JB^2)\, o(1)\big)^{1/2}}{N |\Im z|^2} < \frac{\sqrt{2m}}{n |\Im z|^2}\, o(1) = o(1).
\]
Finally, in view of (3.10),
\[
\frac{1}{n^2}\big|\mathbf{E}\{\mathrm{Tr}(G^{(1)}(z) - G^\tau(z))\}\big| < \frac{\sqrt{m}}{\sqrt{2}\, n |\Im z|}\, o(1) = o(1). \qquad \Box
\]

It follows from Lemma 1 that for our purposes it suffices to prove Theorem 1 for the matrix $M_n^\tau$. Hence below we assume that $M_n$ is replaced by $M_n^\tau$. To simplify notation we drop the index $\tau$ and denote
\[
G(z) = (M_n - z)^{-1}, \qquad G^\mu(z) = G|_{X^\mu = 0}, \qquad N = n^2.
\]
In the proof of the main theorem we need some auxiliary results.

Lemma 2. If $F$ is a non-random $N \times N$ matrix such that $|F| \le c$, then
(i)
\[
\mathbf{E}\{(F G^\mu X^\mu, X^\mu)\} = \mathrm{Tr}(F G^\mu B J B), \qquad \mathrm{Var}\{N^{-1}(F G^\mu X^\mu, X^\mu)\} = o(1), \quad n \to +\infty; \tag{3.11}
\]
(ii)
\[
\frac{1}{N}\big|\mathrm{Tr}\, F(G - G^\mu)\big| = O(N^{-1}); \tag{3.12}
\]
(iii)
\[
\mathrm{Var}\{N^{-1}\mathrm{Tr}(F G)\} \le \frac{c}{N}. \tag{3.13}
\]
The proof of the lemma is given in Section 4. According to (3.4), we have
\[
G_{i,j} = G^\mu_{i,j} - N^{-1}\, \frac{(G^\mu X^\mu)_i\, (G^\mu \bar{X}^\mu)_j}{1 + N^{-1}(G^\mu X^\mu, X^\mu)}.
\]
Hence
\[
(G X^\mu)_i = \frac{(G^\mu X^\mu)_i}{1 + N^{-1}(G^\mu X^\mu, X^\mu)}.
\]
Take any $N \times N$ bounded matrix $K$. Then
\[
\frac{1}{N}\mathrm{Tr}(K G M) = \frac{1}{N^2}\sum_{\mu=1}^m \sum_{i,j} K_{j,i} (G X^\mu)_i \bar{X}^\mu_j
= \frac{1}{N^2}\sum_{\mu=1}^m \sum_j \frac{(K G^\mu X^\mu)_j \bar{X}^\mu_j}{1 + N^{-1}(G^\mu X^\mu, X^\mu)}
= \frac{1}{N^2}\sum_{\mu=1}^m \frac{(K G^\mu X^\mu, X^\mu)}{1 + N^{-1}(G^\mu X^\mu, X^\mu)}. \tag{3.14}
\]
To analyze the r.h.s. of (3.14), let us show first that if $\mathcal{C}$ and $\mathcal{D}$ are random variables such that $\mathbf{E}\{|\mathcal{C}|^2 + |\mathcal{D}|^2\} < c$ and
\[
\bar{\mathcal{C}} = \mathbf{E}\{\mathcal{C}\}, \quad \mathcal{C}^\circ = \mathcal{C} - \bar{\mathcal{C}}, \quad \bar{\mathcal{D}} = \mathbf{E}\{\mathcal{D}\}, \quad \mathcal{D}^\circ = \mathcal{D} - \bar{\mathcal{D}},
\]
then
\[
\mathbf{E}\Big\{\frac{\mathcal{C}}{\mathcal{D}}\Big\} = \frac{\bar{\mathcal{C}}}{\bar{\mathcal{D}}} + O\Big(\mathbf{E}\Big\{\frac{|\mathcal{C}^\circ|^2}{|\bar{\mathcal{D}}|^2} + \frac{|\mathcal{D}^\circ|^2}{|\bar{\mathcal{D}}|^2}\Big\}\Big). \tag{3.15}
\]
Indeed,
\[
\frac{\mathcal{C}}{\mathcal{D}} = \frac{\bar{\mathcal{C}} + \mathcal{C}^\circ}{\bar{\mathcal{D}}} - \frac{(\bar{\mathcal{C}} + \mathcal{C}^\circ)\mathcal{D}^\circ}{\bar{\mathcal{D}}^2} + O\Big(\Big(\frac{\mathcal{D}^\circ}{\bar{\mathcal{D}}}\Big)^3\Big).
\]
Thus
\[
\mathbf{E}\Big\{\frac{\mathcal{C}}{\mathcal{D}}\Big\} = \frac{\bar{\mathcal{C}}}{\bar{\mathcal{D}}} + \mathbf{E}\Big\{\frac{\mathcal{C}^\circ \mathcal{D}^\circ}{\bar{\mathcal{D}}^2}\Big\} + O\Big(\frac{|\mathcal{D}^\circ|^3 \bar{\mathcal{C}}}{\bar{\mathcal{D}}^3}\Big) \le \frac{\bar{\mathcal{C}}}{\bar{\mathcal{D}}} + \mathbf{E}\Big\{\frac{|\mathcal{C}^\circ|^2}{|\bar{\mathcal{D}}|^2} + \frac{c\,|\mathcal{D}^\circ|^2}{|\bar{\mathcal{D}}|^2}\Big\}.
\]
The last inequality implies (3.15). Let
\[
\mathcal{C}_\mu = N^{-1}(K G^\mu X^\mu, X^\mu), \qquad \mathcal{D}_\mu = 1 + N^{-1}(G^\mu X^\mu, X^\mu).
\]
Since the matrix $K$ is bounded, it follows from (3.11) that
\[
\mathbf{E}\{|\mathcal{C}_\mu^\circ|^2\} = \mathbf{E}\{|\mathcal{D}_\mu^\circ|^2\} = o(1), \quad n \to +\infty.
\]
This, (3.14) and (3.15) imply
\[
\frac{1}{N}\mathbf{E}\{\mathrm{Tr}(K G M)\} = \frac{1}{N}\sum_{\mu=1}^m \Big(\mathbf{E}\Big\{\frac{N^{-1}\mathrm{Tr}(K G^\mu B J B)}{1 + N^{-1}\mathrm{Tr}(G^\mu B J B)}\Big\} + o(1)\Big). \tag{3.16}
\]
In the r.h.s. of (3.16), the result (3.12) allows us to replace $G^\mu$ with $G$:
\[
\frac{1}{N}\mathbf{E}\{\mathrm{Tr}(K G M)\} = c_n\, \mathbf{E}\Big\{\frac{N^{-1}\mathrm{Tr}(K G B J B)}{1 + N^{-1}\mathrm{Tr}(G B J B)}\Big\} + o(1). \tag{3.17}
\]
The last step is to replace $N^{-1}\mathrm{Tr}(K G B J B)$ and $N^{-1}\mathrm{Tr}(G B J B)$ in (3.17) by their expectations. We use (3.15) again with $\mathcal{C} = N^{-1}\mathrm{Tr}(K G B J B)$, $\mathcal{D} = 1 + N^{-1}\mathrm{Tr}(G B J B)$. It follows from (3.17) and (3.13) that
\[
\frac{1}{N}\mathbf{E}\{\mathrm{Tr}(K G M)\} = c_n\, \frac{N^{-1}\mathbf{E}\{\mathrm{Tr}(K G B J B)\}}{1 + N^{-1}\mathbf{E}\{\mathrm{Tr}(G B J B)\}} + o(1). \tag{3.18}
\]
Note that
\[
\frac{1}{N}\mathbf{E}\{\mathrm{Tr}(K G M)\} = \frac{1}{N}\mathbf{E}\{\mathrm{Tr}(K(G(M - z) + G z))\} = \frac{1}{N}\mathbf{E}\{\mathrm{Tr}\,K\} + \frac{z}{N}\mathbf{E}\{\mathrm{Tr}(K G)\}.
\]
This and (3.18) imply that for any bounded matrix $K$
\[
\frac{1}{N}\mathbf{E}\{\mathrm{Tr}\,K\} = \frac{1}{N}\mathbf{E}\{\mathrm{Tr}(K G (c_n b_n^{-1} B J B - z))\} + o(1), \tag{3.19}
\]
where
\[
b_n = 1 + N^{-1}\mathbf{E}\{\mathrm{Tr}(G B J B)\}. \tag{3.20}
\]
Taking $K = (c_n b_n^{-1} B J B - z)^{-1}$, we obtain
\[
\frac{1}{N}\mathbf{E}\{\mathrm{Tr}(c_n b_n^{-1} B J B - z)^{-1}\} = f_n(z) + o(1), \tag{3.21}
\]
where
\[
g_n(z) = \frac{1}{N}\mathrm{Tr}(G(z)), \qquad f_n(z) = \mathbf{E}\{g_n(z)\}.
\]
It follows from (3.19) with $K = I$ that
\[
\frac{1}{N}\mathbf{E}\{\mathrm{Tr}(I + z G)\} = \frac{c_n}{b_n}(b_n - 1) + o(1).
\]
Then we get
\[
1 + z f_n(z) = c_n\Big(1 - \frac{1}{b_n}\Big) + o(1).
\]
Now we can find $b_n$:
\[
b_n = \frac{c_n}{c_n - z f_n(z) - 1 + o(1)}. \tag{3.22}
\]
This and (3.21) yield
\[
f_n(z) = f_n^{(0)}\Big(\frac{z}{c_n - z f_n(z) - 1}\Big)\big(c_n - z f_n(z) - 1\big)^{-1} + o(1), \tag{3.23}
\]
where
\[
f_n^{(0)}(z) = \frac{1}{N}\mathbf{E}\{\mathrm{Tr}(B J B - z)^{-1}\}.
\]
The sequence $\{f_n\}$ consists of functions analytic and uniformly bounded in $n$ and $z$. Hence there exist a function $f$, analytic in $\mathbb{C} \setminus \mathbb{R}$, and a subsequence $\{f_{n_j}\}$ that converges to $f$ uniformly on any compact set of $\mathbb{C} \setminus \mathbb{R}$. In addition we have $\Im f_n(z)\,\Im z > 0$ for $\Im z \ne 0$; thus $\Im f(z)\,\Im z \ge 0$, $\Im z \ne 0$. By Proposition 1(vi) and the hypothesis of the theorem on the weak convergence of the sequence $\sigma_n$ to $\sigma$, the sequence $f_n^{(0)}$ of their Stieltjes transforms consists of functions analytic in $\mathbb{C} \setminus \mathbb{R}$ that converge uniformly on compact sets of $\mathbb{C} \setminus \mathbb{R}$ to the Stieltjes transform $f^{(0)}$ of the limiting counting measure $\sigma$ of the matrices $B J B$. This allows us to pass to the limit $n \to +\infty$ in (3.23) and to obtain that the limit $f$ of any converging subsequence of $\{f_n\}$ satisfies the functional equation
\[
f(z) = f^{(0)}\Big(\frac{z}{c - z f(z) - 1}\Big)\big(c - z f(z) - 1\big)^{-1}, \tag{3.24}
\]
and $\Im f(z)\,\Im z \ge 0$, $\Im z \ne 0$. The proof of the uniqueness of the solution of this equation in the class of functions analytic for $\Im z \ne 0$ and such that $\Im f(z)\,\Im z \ge 0$, $\Im z \ne 0$, is analogous to [9]. Hence the whole sequence $f_n$ converges uniformly on compact sets of $\mathbb{C} \setminus \mathbb{R}$ to the unique solution $f$ of the equation.

Let us show that the solution possesses the properties $\Im f(z)\,\Im z \ge 0$, $\Im z \ne 0$, and $\lim_{\eta \to +\infty} \eta |f(i\eta)| = 1$. Assume that $\Im f(z_0) = 0$ for some $\Im z_0 \ne 0$. Then (3.24) implies that
\[
\Im f^{(0)}(\tilde{z}) = C\, \Im \int \frac{d\sigma(\lambda)}{(c - 1)\lambda - z_0 (f(z_0) - 1)} = 0,
\]
where $C$ is some real constant and $\Im \tilde{z} \ne 0$. This is impossible because, according to Proposition 1(ii), $\Im f^{(0)}(z)\,\Im z$ is strictly positive for any nonreal $z$.
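To see the limiting equation (3.24) in action, it can be solved numerically by a damped fixed-point iteration once $f^{(0)}$ is known in closed form. A minimal sketch under the assumption $B = I$, for which $BJB = J$ has limiting spectral measure $\sigma = (\delta_0 + \delta_2)/2$ (eigenvalue $2$ on symmetric, $0$ on antisymmetric tensors), so $f^{(0)}(z) = \frac{1}{2}(-1/z + 1/(2 - z))$; the damping factor and iteration count are ad hoc numerical choices, not part of the paper:

```python
import numpy as np

def f0(z):
    # Stieltjes transform of sigma = (delta_0 + delta_2)/2 (the case B = I)
    return 0.5 * (-1.0 / z + 1.0 / (2.0 - z))

def solve_f(z, c, n_iter=8000, damp=0.85):
    # Damped fixed-point iteration for f = f0(z/a)/a with a = c - z f - 1, cf. (3.24)
    f = -1.0 / z                     # large-|z| asymptotics of a Stieltjes transform
    for _ in range(n_iter):
        a = c - z * f - 1.0
        f = damp * f + (1.0 - damp) * f0(z / a) / a
    return f

z, c = 1.0 + 0.5j, 2.0
f = solve_f(z, c)
a = c - z * f - 1.0
residual = abs(f - f0(z / a) / a)
```

A small `residual` indicates the iteration has settled on a solution with $\Im f > 0$, consistent with Proposition 1(ii).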
Since $\Im f(i\eta) < \eta^{-1}$, we have
\[
\lim_{\eta \to +\infty} \eta |f(i\eta)| = \lim_{\eta \to +\infty} \Big|\int \frac{\eta\, d\sigma(\lambda)}{(c - 1 - i\eta f(i\eta))\lambda - i\eta}\Big| = 1.
\]
This and Proposition 1(iv) imply that $f$ is the Stieltjes transform of a probability measure. $\Box$

4 Proof of Lemma 2

(i) It follows from (2.5) that
\[
\mathbf{E}_\mu\{(F G^\mu X^\mu, X^\mu)\} = \mathrm{Tr}(F G^\mu B J B),
\]
where $\mathbf{E}_\mu\{\cdot\}$ denotes the expectation with respect to $Y^\mu$. Denote
\[
r_n^\mu = (F G^\mu X^\mu, X^\mu) - \mathrm{Tr}(F G^\mu B J B).
\]
We need to show that $\mathbf{E}_\mu\{(N^{-1} r_n^\mu)^2\} = o(1)$, $n \to +\infty$. Rewrite
\begin{align*}
r_n^\mu &= \sum_{i,j,p,q} (F G^\mu)_{i,j} B_{j,p} B_{q,i} \big(Y_{p_1}^\mu Y_{p_2}^\mu \bar{Y}_{q_1}^\mu \bar{Y}_{q_2}^\mu - J_{p,q}\big) \\
&= \sum_{i,j} (F G^\mu)_{i,j} \Big(\sum_p B_{j,p} B_{p,i}\big(|Y_{p_1}^\mu|^2 |Y_{p_2}^\mu|^2 - 1\big) + \sum_p B_{j,p} B_{\bar{p},i}\big(|Y_{p_1}^\mu|^2 |Y_{p_2}^\mu|^2 - 1\big)
+ \sum_{\substack{p \ne q \\ \bar{p} \ne q}} B_{j,p} Y_{p_1}^\mu Y_{p_2}^\mu B_{q,i} \bar{Y}_{q_1}^\mu \bar{Y}_{q_2}^\mu\Big) \\
&= \sum_{i,j} (F G^\mu)_{i,j} \Big(\sum_p B_{j,p} (J B)_{p,i}\big(|Y_{p_1}^\mu|^2 |Y_{p_2}^\mu|^2 - 1\big)
+ \sum_{\substack{p \ne q \\ \bar{p} \ne q}} B_{j,p} Y_{p_1}^\mu Y_{p_2}^\mu B_{q,i} \bar{Y}_{q_1}^\mu \bar{Y}_{q_2}^\mu\Big).
\end{align*}
Since $G^\mu$ is independent of $Y^\mu$, we obtain
\begin{align*}
\mathbf{E}_\mu\{(N^{-1} r_n^\mu)^2\}
&= \frac{1}{N^2}\mathbf{E}_\mu\Big\{\sum_{i,j}\sum_{i',j'} (F G^\mu)_{i,j} (\bar{F}\bar{G}^\mu)_{i',j'}
\sum_{\substack{p \ne q,\ \bar{p} \ne q \\ p' \ne q',\ \bar{p}' \ne q'}} B_{j,p} Y_{p_1}^\mu Y_{p_2}^\mu B_{q,i} \bar{Y}_{q_1}^\mu \bar{Y}_{q_2}^\mu\,
\bar{B}_{j',p'} \bar{Y}_{p'_1}^\mu \bar{Y}_{p'_2}^\mu \bar{B}_{q',i'} Y_{q'_1}^\mu Y_{q'_2}^\mu\Big\} \\
&\quad + \frac{1}{N^2}\mathbf{E}_\mu\Big\{\sum_{i,j}\sum_{i',j'} (F G^\mu)_{i,j} (\bar{F}\bar{G}^\mu)_{i',j'}
\sum_p B_{j,p} (J B)_{p,i}\big(|Y_{p_1}^\mu|^2 |Y_{p_2}^\mu|^2 - 1\big)
\sum_{p'} \bar{B}_{j',p'} (\overline{J B})_{p',i'}\big(|Y_{p'_1}^\mu|^2 |Y_{p'_2}^\mu|^2 - 1\big)\Big\} \\
&\quad + \frac{2}{N^2}\mathbf{E}_\mu\Big\{\sum_{i,j}\sum_{i',j'} (F G^\mu)_{i,j} (\bar{F}\bar{G}^\mu)_{i',j'}
\sum_p B_{j,p} (J B)_{p,i}\big(|Y_{p_1}^\mu|^2 |Y_{p_2}^\mu|^2 - 1\big)
\sum_{\substack{p' \ne q' \\ \bar{p}' \ne q'}} \bar{B}_{j',p'} \bar{Y}_{p'_1}^\mu \bar{Y}_{p'_2}^\mu \bar{B}_{q',i'} Y_{q'_1}^\mu Y_{q'_2}^\mu\Big\}
=: \frac{1}{N^2}(R_1 + R_2 + R_3).
\end{align*}
Denote $H = B F G^\mu B$,
