On the maximum entropy principle and the minimization of the Fisher information in Tsallis statistics

Shigeru Furuichi$^{1*}$
$^1$Department of Computer Science and System Analysis, College of Humanities and Sciences, Nihon University, 3-25-40, Sakurajyousui, Setagaya-ku, Tokyo, 156-8550, Japan
$^*$E-mail: furuichi@chs.nihon-u.ac.jp

arXiv:1001.1383v1 [cond-mat.stat-mech] 8 Jan 2010

Abstract. We give a new proof of the theorems on the maximum entropy principle in Tsallis statistics. That is, we show that the $q$-canonical distribution attains the maximum value of the Tsallis entropy subject to the constraint on the $q$-expectation value, and that the $q$-Gaussian distribution attains the maximum value of the Tsallis entropy subject to the constraint on the $q$-variance, as applications of the nonnegativity of the Tsallis relative entropy and without using the Lagrange multipliers method. In addition, we define a $q$-Fisher information and then prove a $q$-Cramér-Rao inequality showing that the $q$-Gaussian distribution with special $q$-variances attains the minimum value of the $q$-Fisher information.

Keywords: Tsallis entropy, Tsallis relative entropy, maximum entropy principle, Gaussian distribution, Fisher information and Cramér-Rao inequality

2000 Mathematics Subject Classification: 94A17, 46N55, 62B10

1 Introduction

The problems on the maximum entropy principle in Tsallis statistics [1, 2] have been studied in classical and quantum systems [3, 4, 5, 6]. Such problems have been solved by the use of the Lagrange multipliers formalism. Here we give a new proof for such problems; that is, we prove them by applying the nonnegativity of the Tsallis relative entropy, without the Lagrange multipliers formalism. Moreover, we derive a one-parameter extended Cramér-Rao inequality involving a one-parameter extended Fisher information.

We denote the $q$-logarithmic function by
$$\ln_q x \equiv \frac{x^{1-q}-1}{1-q} \quad (q\in\mathbb{R},\ q\neq 1,\ x>0)$$
and the $q$-exponential function by
$$\exp_q(x) \equiv \begin{cases} \left(1+(1-q)x\right)^{\frac{1}{1-q}}, & \text{if } 1+(1-q)x>0,\\ 0, & \text{otherwise} \end{cases} \qquad (q\in\mathbb{R},\ q\neq 1,\ x\in\mathbb{R}).$$
The functions $\exp_q(x)$ and $\ln_q x$ converge to $\exp(x)$ and $\log x$ as $q\to 1$, respectively. Note that we have the following relations:
$$\exp_q\{x+y+(1-q)xy\} = \exp_q(x)\exp_q(y), \qquad \ln_q xy = \ln_q x+\ln_q y+(1-q)\ln_q x\ln_q y. \tag{1}$$

In the rest of this section, we define the Tsallis entropy and the Tsallis relative entropy for probability density functions. The set of all probability density functions on $\mathbb{R}$ is represented by
$$D \equiv \left\{ f:\mathbb{R}\to\mathbb{R} \,:\, f(x)\geq 0,\ \int_{-\infty}^{\infty}f(x)\,dx=1 \right\}.$$
Then the Tsallis entropy [1] is defined by
$$H_q(\phi(x)) \equiv -\int_{-\infty}^{\infty}\phi(x)^q\ln_q\phi(x)\,dx \tag{2}$$
for any nonnegative real number $q\neq 1$ and a probability density function $\phi(x)\in D$. In addition, the Tsallis relative entropy is defined by
$$D_q(\phi(x)|\psi(x)) \equiv \int_{-\infty}^{\infty}\phi(x)^q\left(\ln_q\phi(x)-\ln_q\psi(x)\right)dx \tag{3}$$
for any nonnegative real number $q\neq 1$ and two probability density functions $\phi(x)\in D$ and $\psi(x)\in D$. Taking the limit $q\to 1$, the Tsallis entropy and the Tsallis relative entropy converge to the Shannon entropy $H_1(\phi(x)) \equiv -\int_{-\infty}^{\infty}\phi(x)\log\phi(x)\,dx$ and the Kullback-Leibler divergence $D_1(\phi(x)|\psi(x)) \equiv \int_{-\infty}^{\infty}\phi(x)\left(\log\phi(x)-\log\psi(x)\right)dx$, respectively. See [7] for fundamental properties of the Tsallis relative entropy.

We define two sets involving the constraints on the normalized $q$-expectation value and the $q$-variance:
$$C_q^{(c)} \equiv \left\{ f\in D \,:\, \frac{1}{c_q}\int_{-\infty}^{\infty}x f(x)^q dx=\mu_q \right\}$$
and
$$C_q^{(g)} \equiv \left\{ f\in C_q^{(c)} \,:\, \frac{1}{c_q}\int_{-\infty}^{\infty}(x-\mu_q)^2 f(x)^q dx=\sigma_q^2 \right\},$$
where $c_q \equiv \int_{-\infty}^{\infty}f(x)^q dx$ is a normalization factor.
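Before turning to the maximum entropy theorems, a quick numerical illustration may be useful (this is our own sketch, not part of the original paper; the function names and test values are arbitrary). It implements $\ln_q$ and $\exp_q$ as defined above and checks the deformed product rules (1) together with the $q\to 1$ limits:

```python
import numpy as np

def ln_q(x, q):
    # q-logarithm: ln_q(x) = (x^(1-q) - 1) / (1 - q), recovering log x as q -> 1
    return np.log(x) if q == 1.0 else (x**(1.0 - q) - 1.0) / (1.0 - q)

def exp_q(x, q):
    # q-exponential: (1 + (1-q)x)^(1/(1-q)) where 1 + (1-q)x > 0, and 0 otherwise
    if q == 1.0:
        return np.exp(x)
    base = 1.0 + (1.0 - q) * x
    return np.where(base > 0.0, np.clip(base, 0.0, None)**(1.0 / (1.0 - q)), 0.0)

q, x, y = 0.7, 1.3, 0.4
# Relations (1): exp_q(x + y + (1-q)xy) = exp_q(x) exp_q(y)
print(np.isclose(exp_q(x + y + (1 - q) * x * y, q), exp_q(x, q) * exp_q(y, q)))
# ln_q(xy) = ln_q(x) + ln_q(y) + (1-q) ln_q(x) ln_q(y)
print(np.isclose(ln_q(x * y, q),
                 ln_q(x, q) + ln_q(y, q) + (1 - q) * ln_q(x, q) * ln_q(y, q)))
# q -> 1 limits recover exp and log
print(np.isclose(exp_q(x, 1.0 - 1e-9), np.exp(x)),
      np.isclose(ln_q(x, 1.0 - 1e-9), np.log(x)))
```

All four checks print True, which is exactly the deformed algebra that the proofs below exploit.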
Then the $q$-canonical distribution $\phi_q^{(c)}(x)\in D$ and the $q$-Gaussian distribution $\phi_q^{(g)}(x)\in D$ were formulated in [3, 4, 5, 6, 8, 9] by
$$\phi_q^{(c)}(x) \equiv \frac{1}{Z_q^{(c)}}\exp_q\left\{-\beta_q^{(c)}(x-\mu_q)\right\}, \qquad Z_q^{(c)} \equiv \int_{-\infty}^{\infty}\exp_q\left\{-\beta_q^{(c)}(x-\mu_q)\right\}dx$$
and
$$\phi_q^{(g)}(x) \equiv \frac{1}{Z_q^{(g)}}\exp_q\left(-\frac{\beta_q^{(g)}(x-\mu_q)^2}{\sigma_q^2}\right), \qquad Z_q^{(g)} \equiv \int_{-\infty}^{\infty}\exp_q\left(-\frac{\beta_q^{(g)}(x-\mu_q)^2}{\sigma_q^2}\right)dx,$$
respectively, where $\beta_q^{(c)}$ and $\beta_q^{(g)}$ are constants depending on the parameter $q$, and we often use $\beta_q^{(g)}=\frac{1}{3-q}$.

2 Tsallis maximum entropy principle

In this section, we revisit the maximum entropy principle in nonextensive statistical physics. The maximum entropy principles in Tsallis statistics have been studied and modified in many works [3, 4, 5, 6, 8]. Here we prove two theorems that maximize the Tsallis entropy under two different constraints, by the use of the nonnegativity of the Tsallis relative entropy instead of the Lagrange multipliers method.

Lemma 2.1 For $q\neq 1$, we have
$$D_q(\phi(x)|\psi(x)) \geq 0,$$
with equality if and only if $\phi(x)=\psi(x)$ for all $x$.

Proof: Since we have $\ln_q x \leq x-1$, with equality if and only if $x=1$, for any $q\in\mathbb{R},\ q\neq 1$, we have
$$D_q(\phi(x)|\psi(x)) = -\int_{-\infty}^{\infty}\phi(x)\ln_q\frac{\psi(x)}{\phi(x)}dx \geq -\int_{-\infty}^{\infty}\phi(x)\left(\frac{\psi(x)}{\phi(x)}-1\right)dx = 0,$$
with equality if and only if $\phi(x)=\psi(x)$ for all $x$.

Theorem 2.2 If $\phi\in C_q^{(c)}$, then
$$H_q(\phi(x)) \leq -c_q\ln_q\frac{1}{Z_q^{(c)}},$$
with equality if and only if
$$\phi(x) = \frac{1}{Z_q^{(c)}}\exp_q\left\{-\beta_q^{(c)}(x-\mu_q)\right\},$$
where $\beta_q^{(c)}$ is a constant depending on the parameter $q$, $Z_q^{(c)} \equiv \int_{-\infty}^{\infty}\exp_q\left\{-\beta_q^{(c)}(x-\mu_q)\right\}dx$ and $c_q \equiv \int_{-\infty}^{\infty}\phi(x)^q dx$.

Proof: Putting
$$\psi(x) = \frac{1}{Z_q^{(c)}}\exp_q\left(-\beta_q^{(c)}(x-\mu_q)\right), \qquad Z_q^{(c)} \equiv \int_{-\infty}^{\infty}\exp_q\left(-\beta_q^{(c)}(x-\mu_q)\right)dx,$$
and taking into account $\ln_q\frac{y}{x} = \ln_q y + y^{1-q}\ln_q\frac{1}{x}$ and $\ln_q\frac{1}{x} = -x^{q-1}\ln_q x$, we have
$$\int_{-\infty}^{\infty}\phi(x)^q\ln_q\psi(x)\,dx = \int_{-\infty}^{\infty}\phi(x)^q\ln_q\left\{\frac{1}{Z_q^{(c)}}\exp_q\left(-\beta_q^{(c)}(x-\mu_q)\right)\right\}dx$$
$$= \int_{-\infty}^{\infty}\phi(x)^q\left\{-\beta_q^{(c)}(x-\mu_q)+\exp_q\left(-\beta_q^{(c)}(x-\mu_q)\right)^{1-q}\ln_q\frac{1}{Z_q^{(c)}}\right\}dx$$
$$= -\beta_q^{(c)}\int_{-\infty}^{\infty}(x-\mu_q)\phi(x)^q dx+\ln_q\frac{1}{Z_q^{(c)}}\int_{-\infty}^{\infty}\phi(x)^q\left\{1-\beta_q^{(c)}(1-q)(x-\mu_q)\right\}dx$$
$$= c_q\ln_q\frac{1}{Z_q^{(c)}},$$
since $\int_{-\infty}^{\infty}(x-\mu_q)\phi(x)^q dx=0$ for $\phi\in C_q^{(c)}$. Thus we have
$$H_q(\phi(x)) \equiv -\int_{-\infty}^{\infty}\phi(x)^q\ln_q\phi(x)\,dx \leq -\int_{-\infty}^{\infty}\phi(x)^q\ln_q\psi(x)\,dx = -c_q\ln_q\frac{1}{Z_q^{(c)}}$$
by the nonnegativity of the Tsallis relative entropy. From the equality condition of the Tsallis relative entropy, we see that the maximum is attained if and only if
$$\phi(x) = \psi(x) = \frac{1}{Z_q^{(c)}}\exp_q\left(-\beta_q^{(c)}(x-\mu_q)\right).$$

Remark 2.3 The generalized free energy takes its minimum:
$$F_q \equiv \mu_q-\frac{1}{\beta_q^{(c)}}H_q(\phi(x)) \geq \mu_q+\frac{c_q}{\beta_q^{(c)}}\ln_q\frac{1}{Z_q^{(c)}}$$
if and only if $\phi(x)=\frac{1}{Z_q^{(c)}}\exp_q\left(-\beta_q^{(c)}(x-\mu_q)\right)$, due to Theorem 2.2.

Corollary 2.4 If $\phi\in C_1^{(c)}$, then $H_1(\phi(x)) \leq \log Z_1^{(c)}$, with equality if and only if $\phi(x)=\frac{1}{Z_1^{(c)}}\exp\left\{-\beta_1^{(c)}(x-\mu)\right\}$.

Proof: Take the limit $q\to 1$ in Theorem 2.2.
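Since Lemma 2.1 does all the work in these proofs, it may help to see it numerically. The following sketch (our own illustration; the two Gaussian densities and the grid are arbitrary choices) evaluates the Tsallis relative entropy (3) on a grid and confirms $D_q(\phi|\psi)\geq 0$, with equality when $\phi=\psi$:

```python
import numpy as np
from scipy.integrate import trapezoid

def ln_q(x, q):
    # q-logarithm for q != 1
    return (x**(1.0 - q) - 1.0) / (1.0 - q)

def D_q(phi, psi, x, q):
    # Tsallis relative entropy, Eq. (3), evaluated with the trapezoid rule
    return trapezoid(phi**q * (ln_q(phi, q) - ln_q(psi, q)), x)

x = np.linspace(-12.0, 12.0, 40001)
gaussian = lambda m, s: np.exp(-(x - m)**2 / (2 * s**2)) / (np.sqrt(2 * np.pi) * s)
phi, psi = gaussian(0.0, 1.0), gaussian(0.5, 1.5)

for q in (0.3, 0.8, 1.5, 2.5):
    print(q, D_q(phi, psi, x, q))               # strictly positive in every case
print(np.isclose(D_q(phi, phi, x, 0.8), 0.0))   # zero when phi = psi, as in Lemma 2.1
```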
By the condition for the existence of the $q$-variance $\sigma_q$ (i.e., the convergence condition of the integral $\int x^2\exp_q(-x^2)\,dx$), we consider $q\in\mathbb{R}$ such that $0\leq q<3$ and $q\neq 1$.

Theorem 2.5 For $q\in\mathbb{R}$ such that $0\leq q<3$ and $q\neq 1$, if $\phi\in C_q^{(g)}$, then
$$H_q(\phi(x)) \leq -c_q\ln_q\frac{1}{Z_q^{(g)}}+c_q\beta_q^{(g)}Z_q^{(g)\,q-1},$$
with equality if and only if
$$\phi(x) = \frac{1}{Z_q^{(g)}}\exp_q\left(-\frac{\beta_q^{(g)}(x-\mu_q)^2}{\sigma_q^2}\right),$$
where $Z_q^{(g)} \equiv \int_{-\infty}^{\infty}\exp_q\left(-\beta_q^{(g)}(x-\mu_q)^2/\sigma_q^2\right)dx$ with $\beta_q^{(g)}=1/(3-q)$.

Proof: Putting
$$\psi(x) = \frac{1}{Z_q^{(g)}}\exp_q\left(-\frac{\beta_q^{(g)}(x-\mu_q)^2}{\sigma_q^2}\right), \qquad Z_q^{(g)} \equiv \int_{-\infty}^{\infty}\exp_q\left(-\frac{\beta_q^{(g)}(x-\mu_q)^2}{\sigma_q^2}\right)dx,$$
and taking into account $\ln_q\frac{y}{x}=\ln_q y+y^{1-q}\ln_q\frac{1}{x}$ and $\ln_q\frac{1}{x}=-x^{q-1}\ln_q x$, we have
$$\int_{-\infty}^{\infty}\phi(x)^q\ln_q\psi(x)\,dx = \int_{-\infty}^{\infty}\phi(x)^q\ln_q\left\{\frac{1}{Z_q^{(g)}}\exp_q\left(-\frac{\beta_q^{(g)}(x-\mu_q)^2}{\sigma_q^2}\right)\right\}dx$$
$$= \int_{-\infty}^{\infty}\phi(x)^q\left\{-\frac{\beta_q^{(g)}(x-\mu_q)^2}{\sigma_q^2}+\exp_q\left(-\frac{\beta_q^{(g)}(x-\mu_q)^2}{\sigma_q^2}\right)^{1-q}\ln_q\frac{1}{Z_q^{(g)}}\right\}dx$$
$$= -\frac{\beta_q^{(g)}}{\sigma_q^2}\int_{-\infty}^{\infty}(x-\mu_q)^2\phi(x)^q dx+\ln_q\frac{1}{Z_q^{(g)}}\int_{-\infty}^{\infty}\phi(x)^q\left\{1-\frac{\beta_q^{(g)}(1-q)(x-\mu_q)^2}{\sigma_q^2}\right\}dx$$
$$= -\beta_q^{(g)}c_q+c_q\ln_q\frac{1}{Z_q^{(g)}}-\beta_q^{(g)}c_q(1-q)\ln_q\frac{1}{Z_q^{(g)}}$$
$$= -\beta_q^{(g)}c_q Z_q^{(g)\,q-1}+c_q\ln_q\frac{1}{Z_q^{(g)}}.$$
Thus we have
$$H_q(\phi(x)) \equiv -\int_{-\infty}^{\infty}\phi(x)^q\ln_q\phi(x)\,dx \leq -\int_{-\infty}^{\infty}\phi(x)^q\ln_q\psi(x)\,dx = c_q\beta_q^{(g)}Z_q^{(g)\,q-1}-c_q\ln_q\frac{1}{Z_q^{(g)}}$$
by the nonnegativity of the Tsallis relative entropy. From the equality condition of the Tsallis relative entropy, we see that the maximum is attained if and only if
$$\phi(x) = \psi(x) = \frac{1}{Z_q^{(g)}}\exp_q\left(-\frac{\beta_q^{(g)}(x-\mu_q)^2}{\sigma_q^2}\right).$$

Corollary 2.6 If $\phi\in C_1^{(g)}$, then $H_1(\phi(x)) \leq \log\sqrt{2\pi e}\,\sigma_1$, with equality if and only if $\phi(x)=\frac{1}{\sqrt{2\pi}\sigma}\exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)$.

Proof: Take the limit $q\to 1$ in Theorem 2.5.

3 Minimization of q-Fisher information

The theorems in the previous section and the fact that the Gaussian distribution minimizes the Fisher information lead us to study whether the Tsallis distribution ($q$-Gaussian distribution) minimizes a $q$-Fisher information, as a one-parameter extension. We prepare some definitions for this purpose. In what follows, we abbreviate $\beta_q^{(g)}$ and $Z_q^{(g)}$ to $\beta_q$ and $Z_q$, respectively.

Definition 3.1 For a random variable $X$ with probability density function $f(x)$, we define the $q$-score function $s_q(x)$ and the $q$-Fisher information $J_q(X)$ by
$$s_q(x) \equiv \frac{d\ln_q f(x)}{dx}, \tag{4}$$
$$J_q(X) \equiv E_q\left[s_q(x)^2\right], \tag{5}$$
where the normalized $q$-expectation value $E_q$ is defined by $E_q[g(X)] \equiv \frac{\int g(x)f(x)^q dx}{\int f(x)^q dx}$ for the random variable $g(X)$, for any continuous function $g(x)$ and probability density function $f(x)$. Note that our definition of a $q$-Fisher information is different from those in several other works [10, 11, 12, 13, 14, 15, 16].

Example 3.2 For the random variable $G$ obeying the $q$-Gaussian distribution
$$\phi_q^{(g)}(x) \equiv \frac{1}{Z_q}\exp_q\left(-\frac{\beta_q(x-\mu_q)^2}{\sigma_q^2}\right),$$
where $\beta_q\equiv\frac{1}{3-q}$ and the $q$-partition function is $Z_q\equiv\int_{-\infty}^{\infty}\exp_q\left(-\frac{\beta_q(x-\mu_q)^2}{\sigma_q^2}\right)dx$, the $q$-score function is calculated as
$$s_q(x) = -\frac{2\beta_q Z_q^{q-1}}{\sigma_q^2}(x-\mu_q).$$
Thus we can calculate the $q$-Fisher information as
$$J_q(G) = \frac{4\beta_q^2 Z_q^{2q-2}}{\sigma_q^2}. \tag{6}$$
Note that we have
$$\lim_{q\to 1}J_q(G) = \frac{1}{\sigma_1^2}. \tag{7}$$

Theorem 3.3 Given a random variable $X$ with probability density function $p(x)$, the $q$-expectation value $\mu_q\equiv E_q[X]$ and the $q$-variance $\sigma_q^2\equiv E_q\left[(X-\mu_q)^2\right]$, we have a $q$-Cramér-Rao inequality:
$$J_q(X) \geq \frac{1}{\sigma_q^2}\left(\frac{2}{\int p(x)^q dx}-1\right) \quad \text{for } q\in[0,1)\cup(1,3). \tag{8}$$
Immediately we have
$$J_q(X) \geq \frac{1}{\sigma_q^2} \quad \text{for } q\in(1,3). \tag{9}$$

Proof: Here we assume that $\lim_{x\to\pm\infty}f(x)p(x)^q=0$ for any $q\geq 0$, any probability density function $p(x)$ and any smooth function $f$ which is suitably well-behaved at $\pm\infty$. Since $p(x)^q s_q(x)=p'(x)$, we have
$$E_q\left[(X-\mu_q)s_q(x)\right] = \frac{\int(x-\mu_q)p(x)^q s_q(x)\,dx}{\int p(x)^q dx} = \frac{\int(x-\mu_q)p'(x)\,dx}{\int p(x)^q dx} = \frac{-1}{\int p(x)^q dx}.$$
Thus we have
$$0 \leq E_q\left[\left\{s_q(x)+\frac{X-\mu_q}{\sigma_q^2}\right\}^2\right] = J_q(X)+\frac{2}{\sigma_q^2}E_q\left[(X-\mu_q)s_q(x)\right]+\frac{E_q\left[(X-\mu_q)^2\right]}{\sigma_q^4} = J_q(X)-\frac{2}{\sigma_q^2\int p(x)^q dx}+\frac{1}{\sigma_q^2},$$
which implies the $q$-Cramér-Rao lower bound given in (8).
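As a sanity check (our own, not in the paper), the following sketch builds the $q$-Gaussian of Example 3.2 on a grid for $q=1.5$, computes the $q$-score by numerical differentiation, and verifies the closed form (6) together with the $q$-Cramér-Rao bound (8); the grid range, the value $q=1.5$ and the tolerances are arbitrary choices:

```python
import numpy as np
from scipy.integrate import trapezoid

q, mu, sigma = 1.5, 0.0, 1.0
beta = 1.0 / (3.0 - q)                            # beta_q = 1/(3-q)

x = np.linspace(-60.0, 60.0, 400001)
u = -beta * (x - mu)**2 / sigma**2
f_un = (1.0 + (1.0 - q) * u)**(1.0 / (1.0 - q))   # exp_q(u); positive everywhere for 1<q<3
Z = trapezoid(f_un, x)                            # q-partition function Z_q
f = f_un / Z                                      # q-Gaussian density

c_q = trapezoid(f**q, x)
E_q = lambda g: trapezoid(g * f**q, x) / c_q      # normalized q-expectation value
mu_q = E_q(x)
var_q = E_q((x - mu_q)**2)                        # q-variance; equals sigma^2 here

s_q = f**(-q) * np.gradient(f, x)                 # q-score s_q = d ln_q f / dx = f^{-q} f'
J_q = E_q(s_q**2)                                 # q-Fisher information, Eq. (5)

print(np.isclose(var_q, sigma**2, rtol=1e-3))                             # True
print(np.isclose(J_q, 4 * beta**2 * Z**(2*q - 2) / sigma**2, rtol=1e-3))  # Eq. (6)
print(J_q >= (2.0 / c_q - 1.0) / var_q)                                   # bound (8): True
```

With $\sigma=1$ the bound (8) holds strictly; equality requires the special $q$-variances of Proposition 3.4 below.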
Proposition 3.4 The equality in the $q$-Cramér-Rao inequality (8) holds if the probability density function $p(x)$ is the $q$-Gaussian density function $\phi_q^{(g)}(x)$ with the $q$-variance
$$\sigma_q = \frac{2^{\frac{q}{1-q}}\,(3-q)^{\frac{q+1}{2(q-1)}}\,(1-q)^{\frac{1}{2}}}{B\left(\frac{1}{2},\frac{1}{1-q}\right)} \quad (0\leq q<1) \tag{10}$$
or
$$\sigma_q = \frac{2^{\frac{1}{1-q}}\,(3-q)^{\frac{3-q}{2(q-1)}}\,(q-1)^{\frac{1}{2}}}{B\left(\frac{1}{q-1}-\frac{1}{2},\frac{1}{2}\right)} \quad (1<q<3). \tag{11}$$

Proof: We show that the following inequality holds for $0\leq q<3$ and $q\neq 1$:
$$J_q(G) \geq \frac{1}{\sigma_q^2}\left(\frac{2}{\int\phi_q^{(g)}(x)^q dx}-1\right), \tag{12}$$
with equality if the $q$-variance is given by (10) or (11).

(i) For the case $0\leq q<1$, we first calculate
$$Z_q \equiv \int_{-\infty}^{\infty}\exp_q\left(-\frac{\beta_q(x-\mu_q)^2}{\sigma_q^2}\right)dx = \int_{-\infty}^{\infty}\exp_q\left(-\frac{\beta_q y^2}{\sigma_q^2}\right)dy$$
$$= 2\sigma_q\int_0^{\sqrt{\frac{3-q}{1-q}}}\left(1-\frac{1-q}{3-q}z^2\right)^{\frac{1}{1-q}}dz = 2\sigma_q\sqrt{\frac{3-q}{1-q}}\int_0^1\left(1-t^2\right)^{\frac{1}{1-q}}dt = \sigma_q\left(\frac{3-q}{1-q}\right)^{\frac{1}{2}}B\left(\frac{1}{2},\frac{1}{1-q}+1\right)$$
and
$$\int_{-\infty}^{\infty}\phi_q^{(g)}(x)^q dx = \frac{1}{Z_q^q}\int_{-\infty}^{\infty}\exp_q\left(-\frac{\beta_q(x-\mu_q)^2}{\sigma_q^2}\right)^q dx = \frac{2\sigma_q}{Z_q^q}\sqrt{\frac{3-q}{1-q}}\int_0^1\left(1-t^2\right)^{\frac{q}{1-q}}dt = \frac{\sigma_q}{Z_q^q}\left(\frac{3-q}{1-q}\right)^{\frac{1}{2}}B\left(\frac{1}{2},\frac{1}{1-q}\right).$$
Then the L.H.S. and the R.H.S. of (12), each multiplied by $\sigma_q^2$, are calculated as
$$2^{2q}\sigma_q^{2q-2}(3-q)^{-(q+1)}(1-q)^{1-q}B\left(\frac{1}{2},\frac{1}{1-q}\right)^{2q-2}$$
and
$$2^{q+1}\sigma_q^{q-1}(3-q)^{-\frac{q+1}{2}}(1-q)^{\frac{1-q}{2}}B\left(\frac{1}{2},\frac{1}{1-q}\right)^{q-1}-1,$$
respectively. Then we have the inequality
$$\text{L.H.S.}-\text{R.H.S.} = 2^{2q}\sigma_q^{2q-2}(3-q)^{-(q+1)}(1-q)^{1-q}B\left(\frac{1}{2},\frac{1}{1-q}\right)^{2q-2}-2^{q+1}\sigma_q^{q-1}(3-q)^{-\frac{q+1}{2}}(1-q)^{\frac{1-q}{2}}B\left(\frac{1}{2},\frac{1}{1-q}\right)^{q-1}+1$$
$$= \left(2^q\sigma_q^{q-1}(3-q)^{-\frac{q+1}{2}}(1-q)^{\frac{1-q}{2}}B\left(\frac{1}{2},\frac{1}{1-q}\right)^{q-1}-1\right)^2 \geq 0,$$
with equality if Eq. (10) holds.

(ii) For the case $1<q<3$, we similarly calculate
$$Z_q = \sigma_q\left(\frac{3-q}{q-1}\right)^{\frac{1}{2}}B\left(\frac{1}{q-1}-\frac{1}{2},\frac{1}{2}\right)$$
and
$$\int_{-\infty}^{\infty}\phi_q^{(g)}(x)^q dx = \frac{\sigma_q}{Z_q^q}\left(\frac{3-q}{q-1}\right)^{\frac{1}{2}}B\left(\frac{q}{q-1}-\frac{1}{2},\frac{1}{2}\right).$$
Then the L.H.S. and the R.H.S. of (12), each multiplied by $\sigma_q^2$, are calculated as
$$4\sigma_q^{2q-2}(3-q)^{q-3}(q-1)^{1-q}B\left(\frac{1}{q-1}-\frac{1}{2},\frac{1}{2}\right)^{2q-2}$$
and
$$4\sigma_q^{q-1}(3-q)^{\frac{q-3}{2}}(q-1)^{\frac{1-q}{2}}B\left(\frac{1}{q-1}-\frac{1}{2},\frac{1}{2}\right)^{q-1}-1,$$
respectively. Then we have the inequality
$$\text{L.H.S.}-\text{R.H.S.} = 4\sigma_q^{2q-2}(3-q)^{q-3}(q-1)^{1-q}B\left(\frac{1}{q-1}-\frac{1}{2},\frac{1}{2}\right)^{2q-2}-4\sigma_q^{q-1}(3-q)^{\frac{q-3}{2}}(q-1)^{\frac{1-q}{2}}B\left(\frac{1}{q-1}-\frac{1}{2},\frac{1}{2}\right)^{q-1}+1$$
$$= \left(2\sigma_q^{q-1}(3-q)^{\frac{q-3}{2}}(q-1)^{\frac{1-q}{2}}B\left(\frac{1}{q-1}-\frac{1}{2},\frac{1}{2}\right)^{q-1}-1\right)^2 \geq 0,$$
with equality if Eq. (11) holds.

Note that we have $J_1(X)\geq\frac{1}{\sigma_1^2}$ in the limit $q\to 1$. Proposition 3.4 also shows that the $q$-Gaussian with a $q$-variance satisfying Eq. (10) or Eq. (11) minimizes the $q$-Fisher information. In addition, we note the limit $q\to 1$ of the $q$-variances $\sigma_q$ given in Eq. (10) and Eq. (11). The following results were checked by computer software:
$$\lim_{q\to 1-0}\sigma_q = \lim_{q\to 1-0}\frac{2^{\frac{q}{1-q}}(3-q)^{\frac{q+1}{2(q-1)}}(1-q)^{\frac{1}{2}}}{B\left(\frac{1}{2},\frac{1}{1-q}\right)} = \lim_{r\to+0}\frac{2^{\frac{1-r}{r}}(2+r)^{\frac{r-2}{2r}}\,r^{\frac{1}{2}}}{B\left(\frac{1}{2},\frac{1}{r}\right)} = \frac{1}{\sqrt{2e\pi}}$$
and
$$\lim_{q\to 1+0}\sigma_q = \lim_{q\to 1+0}\frac{2^{\frac{1}{1-q}}(3-q)^{\frac{3-q}{2(q-1)}}(q-1)^{\frac{1}{2}}}{B\left(\frac{1}{q-1}-\frac{1}{2},\frac{1}{2}\right)} = \frac{1}{\sqrt{2e\pi}}.$$
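The quoted limits are easy to reproduce. The following sketch (our own; the paper only states that the limits were checked by computer software) evaluates the optimal $q$-variances (10) and (11) near $q=1$ with SciPy's Beta function and compares them with $1/\sqrt{2e\pi}$:

```python
import numpy as np
from scipy.special import beta as B   # Euler Beta function B(a, b)

def sigma_q(q):
    # Optimal q-variance: Eq. (10) for 0 <= q < 1, Eq. (11) for 1 < q < 3
    if q < 1.0:
        return (2**(q / (1 - q)) * (3 - q)**((q + 1) / (2 * (q - 1)))
                * np.sqrt(1 - q) / B(0.5, 1 / (1 - q)))
    return (2**(1 / (1 - q)) * (3 - q)**((3 - q) / (2 * (q - 1)))
            * np.sqrt(q - 1) / B(1 / (q - 1) - 0.5, 0.5))

limit = 1 / np.sqrt(2 * np.e * np.pi)   # claimed limit 1/sqrt(2 e pi) = 0.24197...
for q in (0.9, 0.99, 0.999, 1.001, 1.01, 1.1):
    print(f"q = {q:6.3f}   sigma_q = {sigma_q(q):.6f}   |diff| = {abs(sigma_q(q) - limit):.2e}")
```

The printed values approach $1/\sqrt{2e\pi}\approx 0.24197$ from both sides, consistent with the two limits above.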
Remark 3.5 In our previous paper [17], we gave a rough meaning of the parameter $q$ from the information-theoretical viewpoint. In [17], we showed that the Tsallis entropies for $q\geq 1$ have the subadditivity, and therefore several information-theoretical properties hold in the case $q\geq 1$; however, the Tsallis entropies for $q<1$ do not have such properties. Similarly to the case of the Tsallis entropies, in the present paper we have found that the $q$-Fisher information is in quite the same situation: we have $J_q(X)\geq\frac{1}{\sigma_q^2}$ for $q\geq 1$, whereas for $q<1$ we do not have any relation between $J_q(X)$ and $\frac{1}{\sigma_q^2}$ other than the inequality (8). Therefore these results exhibit the difference between the $q$-Fisher information $J_q(X)$ for $q\in[0,1)$ and for $q\in(1,3)$, just as the Tsallis entropies did in [17]. Summarizing these results, we may conclude that the Tsallis entropies and the $q$-Fisher information are meaningful in the case $q\geq 1$ in our setting.

4 Concluding remarks

Throughout the present paper, we adopted the normalized $q$-expectation value as a one-parameter generalization of the standard expectation value. In this section, we consider our results obtained in Sections 2 and 3 for different expectation values. The normalized $q$-expectation value $E_q[X]\equiv\frac{\int x f(x)^q dx}{\int f(x)^q dx}$ adopted in the present paper has mathematically desirable properties, so that it has been used in many works on Tsallis statistics, and it can be rewritten with the standard expectation value as $E_q[X] = \int x\,h(x)\,dx$, where $h(x)\equiv\frac{f(x)^q}{\int f(x)^q dx}$ is often called the escort density function.

If we adopt the constraints $C_1^{(c)}$ or $C_1^{(g)}$ given by the standard expectation value $E_1[X]\equiv\int x f(x)\,dx$, Theorem 2.2 and Theorem 2.5 cannot be derived by the use of the nonnegativity of the Tsallis relative entropy, as is easily seen from the processes of their proofs. However, for the standard expectation value we have Corollary 2.4 and Corollary 2.6. That is, it was reconfirmed that the standard expectation value $E_1$ corresponds to the Shannon entropy and the Kullback-Leibler information.
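To make the distinction concrete, here is a small sketch (our own illustration, on an arbitrarily chosen Gaussian density) of the escort rewriting $E_q[X]=\int x\,h(x)\,dx$; it also evaluates the unnormalized variant $\int x f(x)^q dx$ considered next, which differs from $E_q[X]$ exactly by the factor $c_q=\int f(x)^q dx$:

```python
import numpy as np
from scipy.integrate import trapezoid

x = np.linspace(-12.0, 12.0, 40001)
f = np.exp(-(x - 1.0)**2 / 2.0) / np.sqrt(2.0 * np.pi)   # a sample density with mean 1
q = 0.8

c_q = trapezoid(f**q, x)                       # c_q = \int f^q dx
h = f**q / c_q                                 # escort density h(x) = f^q / c_q
E_q_norm = trapezoid(x * h, x)                 # normalized q-expectation E_q[X]
E_q_unnorm = trapezoid(x * f**q, x)            # unnormalized q-expectation \int x f^q dx

print(np.isclose(trapezoid(h, x), 1.0))        # the escort density integrates to 1
print(E_q_norm)                                # 1.0 here: f^q of a Gaussian keeps its mean
print(np.isclose(E_q_unnorm, c_q * E_q_norm))  # unnormalized = c_q * normalized
```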
As a one-parameter generalization of the standard expectation value $E_1$, the following $q$-expectation value may also be considered:
$$\widetilde{E}_q[X] \equiv \int_{-\infty}^{\infty}x f(x)^q dx.$$
For this $q$-expectation value, we have the following results as well. Define the constraints:
$$\widetilde{C}_q^{(c)} \equiv \left\{f\in D \,:\, \int_{-\infty}^{\infty}x f(x)^q dx=\mu_q\right\},$$
$$\widetilde{C}_q^{(g)} \equiv \left\{f\in\widetilde{C}_q^{(c)} \,:\, \int_{-\infty}^{\infty}(x-\mu_q)^2 f(x)^q dx=\sigma_q^2\right\}.$$
Then we have the following results in a similar way to Theorem 2.2 and Theorem 2.5.

Theorem 4.1 (1) If $\phi\in\widetilde{C}_q^{(c)}$, then
$$H_q(\phi(x)) \leq -c_q\ln_q\frac{1}{Z_q^{(c)}},$$
with equality if and only if
$$\phi(x) = \frac{1}{Z_q^{(c)}}\exp_q\left\{-\beta_q^{(c)}(x-\mu_q)\right\},$$
where $\beta_q^{(c)}$ and $Z_q^{(c)}$ are the same constants as in Theorem 2.2.
(2) For $q\in\mathbb{R}$ such that $0\leq q<3$ and $q\neq 1$, if $\phi\in\widetilde{C}_q^{(g)}$, then
$$H_q(\phi(x)) \leq -c_q\ln_q\frac{1}{Z_q^{(g)}}+\beta_q^{(g)}Z_q^{(g)\,q-1},$$
with equality if and only if
$$\phi(x) = \frac{1}{Z_q^{(g)}}\exp_q\left(-\frac{\beta_q^{(g)}(x-\mu_q)^2}{\sigma_q^2}\right),$$
where $\beta_q^{(g)}$ and $Z_q^{(g)}$ are the same constants as in Theorem 2.5.

Moreover, we may define a $q$-Fisher information by the $q$-expectation value $\widetilde{E}_q[X]$ as $\widetilde{J}_q(X) \equiv \widetilde{E}_q\left[s_q(x)^2\right]$, where $s_q(x)$ is the same score function as in Eq. (4). Then we have the following result in a similar way to Theorem 3.3.

Theorem 4.2 Given a random variable $X$ with probability density function $p(x)$, the $q$-expectation value $\mu_q\equiv\widetilde{E}_q[X]$ and the $q$-variance $\sigma_q^2\equiv\widetilde{E}_q\left[(X-\mu_q)^2\right]$, we have a $q$-Cramér-Rao inequality:
$$\widetilde{J}_q(X) \geq \frac{1}{\sigma_q^2} \quad \text{for } q\in[0,1)\cup(1,3). \tag{13}$$
In addition, the equality holds if $p(x)=\phi_q^{(g)}(x)$ and $\sigma_q$ is given by Eq. (10) or Eq. (11).

Thus we can see that Theorem 4.1 and Theorem 4.2 are almost similar to the results obtained in Theorem 2.2, Theorem 2.5 and Theorem 3.3, except for the normalization factor. We also find that Theorem 4.2 has a slightly modified form compared with Theorem 3.3, because the one-parameter generalized Cramér-Rao inequality (13) holds for any $q\in\mathbb{R}$ such that $0\leq q<3$ and $q\neq 1$, while the inequality (9) holds for $1<q<3$.

We close this section with a comment on a possible application of our $q$-Fisher informations. The central limit theorem, one of the most important theorems in probability theory, states that the distribution function of the standardized sum of an independent sequence of random variables converges to the Gaussian distribution under certain assumptions. The classical central limit theorem is usually proved by means of the characteristic function. However, it is known that the Fisher information can be applied to prove the classical central limit theorem [18, 19, 20]. In addition, quite recently, the $q$-central limit theorem for $q\geq 1$ was proved in [21] by introducing such new notions as $q$-independence, $q$-convergence, $q$-Fourier transformation and $q$-characteristic function. Therefore we may expect that a new proof of the $q$-central limit theorem will be given by applying the $q$-Fisher information in the future.

Acknowledgement

This work was supported by the Japanese Ministry of Education, Science, Sports and Culture, Grant-in-Aid for Encouragement of Young Scientists (B), 20740067 and Grant-in-Aid for Scientific Research (B), 18300003.

References

[1] C. Tsallis, Possible generalization of Boltzmann-Gibbs statistics, J. Stat. Phys., Vol. 52 (1988), pp. 479-487.
[2] C. Tsallis et al., Nonextensive Statistical Mechanics and Its Applications, edited by S. Abe and Y. Okamoto (Springer-Verlag, Heidelberg, 2001); see also the comprehensive list of references at http://tsallis.cat.cbpf.br/biblio.htm.
[3] S. Martínez, F. Nicolás, F. Pennini and A. Plastino, Tsallis' entropy maximization procedure revisited, Physica A, Vol. 286 (2000), pp. 489-502.
[4] C. Tsallis, R. S. Mendes and A. R. Plastino, The role of constraints within generalized nonextensive statistics, Physica A, Vol. 261 (1998), pp. 534-554.
[5] S. Abe, S. Martínez, F. Pennini and A. Plastino, Nonextensive thermodynamic relations, Phys. Lett. A, Vol. 281 (2001), pp. 126-130.
[6] S. Abe, Heat and entropy in nonextensive thermodynamics: transmutation from Tsallis theory to Rényi-entropy-based theory, Physica A, Vol. 300 (2001), pp. 417-423.
