The Application of the Fast Fourier Transform to Jacobi Polynomial Expansions

Akil C. Narayan and Jan S. Hesthaven

Abstract. We observe that the exact connection coefficient relations transforming the modal coefficients of one Jacobi polynomial class into the modal coefficients of certain other classes are sparse. Because of this, when one of the classes corresponds to the Chebyshev case, the Fast Fourier Transform can be used to quickly compute modal coefficients for Jacobi polynomial expansions of class $(\alpha,\beta)$ when $2\alpha$ and $2\beta$ are both odd integers. In addition, we present an algorithm for computing Jacobi spectral expansions that is more robust than Jacobi-Gauss quadrature. Numerical results are presented that illustrate the computational efficiency and accuracy advantages of our method over standard quadrature methods.

1 Introduction

The classical Jacobi polynomials $P_n^{(\alpha,\beta)}$ are a family of orthogonal polynomials [22] that have been used extensively in many applications for their ability to approximate general classes of functions. They encompass the Chebyshev, Legendre, and Gegenbauer/ultraspherical polynomials, and they have a very close connection to the associated Legendre functions that are widely used in spherical harmonic expansions. Jacobi polynomial expansions appear in a variety of applications, among them the resolution of the Gibbs phenomenon [14], electrocardiogram data compression [23], and the solution of differential equations [8]. Given this range of applications, it is desirable to perform spectral expansion computations as quickly and accurately as possible. In this paper, we show that for a variety of (but not all) classes of Jacobi polynomials, one can exploit the Fast Fourier Transform (FFT) to perform spectral transformations; this reduces the unoptimized $O(N^2)$ cost of the direct method to an asymptotic $O(N\log N)$ cost, which is significant when $N$ is large.
In addition, we show that the computational savings are apparent even for small $N$. Our method is applicable to discrete polynomial transforms for an expansion in $\left\{P_n^{(\alpha,\beta)}\right\}_{n=0}^{N-1}$ whenever $2\alpha$ and $2\beta$ are both odd integers. Notably, this does not include the Legendre case, where $(\alpha,\beta)=(0,0)$.

The idea of using an FFT to compute Jacobi spectral transformations is not new. That the FFT can be used to compute Chebyshev modes via nodal evaluations at the Chebyshev quadrature points is well known [16]. Using the FFT for other Jacobi polynomial transformations has also been studied. For Legendre polynomial transforms, Alpert [2] employed a submatrix decomposition of the modal connection matrix to travel between Chebyshev modes and Legendre modes. Orszag [19] used the WKB approximation to propose an algorithm for a broad class of eigenfunction transforms (including Legendre polynomial transforms). Driscoll [9], [10] provides an $O(N\log N)$ algorithm for general discrete polynomial transforms as long as the three-term recurrence for the polynomial family is known. The method relies in part on a Toeplitz-product factorization of the interpolating Vandermonde matrix, for which an asymptotic $O(N\log N)$ algorithm for matrix-vector multiplication exists [20].

Our method is based upon two main observations: first, the transformation from function evaluations located at the Chebyshev quadrature nodes to Chebyshev polynomial expansion coefficients can be implemented with the FFT. Second, we show that the exact connection coefficient relation between Chebyshev polynomials and several other Jacobi polynomial classes can be computed and implemented robustly at an inexpensive $O(N)$ cost. The connection coefficient relation is simple enough that it does not require the evaluation of any transcendental or higher-order functions, requires very little preprocessing, and needs only $O(N)$ storage.
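The first of these observations — that Chebyshev modes can be recovered from nodal values by an FFT — can be sketched concretely. The following is a minimal illustration and not the authors' implementation; it assumes NumPy and SciPy are available, and the helper name `chebyshev_modes` is ours. A type-II discrete cosine transform (an FFT variant) maps the $N$ values of $f$ at the Chebyshev-Gauss nodes to the $N$ Chebyshev coefficients in $O(N\log N)$ operations.

```python
import numpy as np
from scipy.fft import dct

def chebyshev_modes(f_vals):
    """Chebyshev coefficients c_k with f(x) = sum_k c_k T_k(x), given values
    of f at the Chebyshev-Gauss nodes x_j = cos(pi*(2j+1)/(2N)).  A DCT-II
    (an FFT variant) yields all N coefficients in O(N log N) operations."""
    N = len(f_vals)
    c = dct(f_vals, type=2) / N   # y_k = 2 * sum_j f_j * cos(pi*k*(2j+1)/(2N))
    c[0] /= 2.0                   # T_0 carries a different normalization
    return c

N = 16
j = np.arange(N)
x = np.cos(np.pi * (2 * j + 1) / (2 * N))   # Chebyshev-Gauss nodes
f = 3.0 * (2 * x**2 - 1) + 0.5              # f = 3*T_2 + 0.5*T_0
c = chebyshev_modes(f)
```

Running this on the test function above recovers the coefficients $c_0 = 0.5$ and $c_2 = 3$ to machine precision, with all other coefficients at roundoff level.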
This should be compared to the previously mentioned methods, which are approximate in the computation, require more preprocessing or storage, and/or are significantly more complicated to implement. The disadvantage of our method is that it can only be applied to certain classes of Jacobi polynomials and, as already mentioned, excludes the Legendre case.

In section 2 we give an overview of Jacobi polynomials and the relevant properties needed for our discussion. Section 3 is devoted to a theoretical description of the method and includes most of the major results. Section 4 is a special application of the results of section 3 to Chebyshev-like systems, where the FFT may be exploited. Finally, section 5 gives some numerical examples.

2 Jacobi Polynomials

For a comprehensive treatment of Jacobi polynomials and their properties, [22], [11], and [1] prove to be excellent references. All properties and results that follow in this section are taken from these references. Jacobi polynomials are one of the two linearly independent solutions to the linear, second-order, singular Sturm-Liouville differential equation

$$-\frac{d}{dr}\left[(1-r)^{\alpha+1}(1+r)^{\beta+1}\,\frac{d\rho}{dr}\right] - n(n+\alpha+\beta+1)\,(1-r)^{\alpha}(1+r)^{\beta}\,\rho = 0, \qquad r\in[-1,1]. \qquad (2.1)$$

The parameters $\alpha$, $\beta$ are restricted to take real values in the interval $(-1,\infty)$. The monic Jacobi polynomial of order $n$, as a solution to (2.1), is written as $P_n^{(\alpha,\beta)}(r)$. We define the dimension-$N$ space $\mathcal{B}_N = \operatorname{span}\{r^n : 0\le n\le N-1\}$. For any $\alpha,\beta>-1$, the Jacobi polynomials of class $(\alpha,\beta)$ are orthogonal under a weighted $L^2$ inner product:

$$\int_{-1}^{1} P_m^{(\alpha,\beta)}\,P_n^{(\alpha,\beta)}\,(1-r)^{\alpha}(1+r)^{\beta}\,dr = h_n^{(\alpha,\beta)}\,\delta_{m,n}, \qquad (2.2)$$

where $\delta_{m,n}$ is the Kronecker delta function, and $h_n^{(\alpha,\beta)}$ is a normalization constant given in the appendix, equation (A.1). We take the integral (2.2) in the Lebesgue sense. Define the weight function

$$\omega^{(\alpha,\beta)}(r) = (1-r)^{\alpha}(1+r)^{\beta}, \qquad (2.3)$$

and denote by angled brackets $\langle\cdot,\cdot\rangle_{(\alpha,\beta)}$ the inner product

$$\langle f,g\rangle_{(\alpha,\beta)} = \int_{-1}^{1} f(r)\,g(r)\,\omega^{(\alpha,\beta)}\,dr.$$
This inner product induces a norm $\|\cdot\|_{(\alpha,\beta)}$ on the space $L^2_{(\alpha,\beta)} = \left\{f:[-1,1]\to\mathbb{R} : f \text{ Lebesgue measurable},\ \|f\|_{(\alpha,\beta)} < \infty\right\}$. The Jacobi polynomials of class $(\alpha,\beta)$ are complete and orthogonal in $L^2_{(\alpha,\beta)}$.

The monic Jacobi polynomials satisfy various relations and can be expressed in terms of the hypergeometric function ${}_2F_1$; they also satisfy a generalized Rodrigues relation for all $m\le n$:

$$\omega^{(\alpha,\beta)}\,P_n^{(\alpha,\beta)} = \frac{(-1)^m\,(n-m)!}{n!}\,\frac{\binom{2n+\alpha+\beta}{n-m}}{\binom{2n+\alpha+\beta}{n}}\,\frac{d^m}{dx^m}\left[\omega^{(\alpha+m,\beta+m)}\,P_{n-m}^{(\alpha+m,\beta+m)}\right]. \qquad (2.4)$$

Although the formulae for the monic orthogonal polynomials are often easier to write down than those for other normalizations, we shall prefer to work with the $L^2_{(\alpha,\beta)}$-normalized polynomials. To this end, we define

$$\tilde P_n^{(\alpha,\beta)} = \frac{P_n^{(\alpha,\beta)}}{\sqrt{h_n^{(\alpha,\beta)}}},$$

which are orthonormal under the weight $\omega^{(\alpha,\beta)}$. All orthogonal polynomials satisfy a three-term recurrence relation, from which the kernel polynomials and the Christoffel-Darboux identity can be derived. These last two properties yield the following promotions and demotions of the Jacobi polynomial class parameters $(\alpha,\beta)$:

$$(1-r)\,\tilde P_n^{(\alpha,\beta)} = \mu_{n,0}^{(\alpha,\beta)}\,\tilde P_n^{(\alpha-1,\beta)} - \mu_{n,1}^{(\alpha,\beta)}\,\tilde P_{n+1}^{(\alpha-1,\beta)}, \qquad (2.5)$$

$$(1+r)\,\tilde P_n^{(\alpha,\beta)} = \mu_{n,0}^{(\beta,\alpha)}\,\tilde P_n^{(\alpha,\beta-1)} + \mu_{n,1}^{(\beta,\alpha)}\,\tilde P_{n+1}^{(\alpha,\beta-1)}, \qquad (2.6)$$

$$\tilde P_n^{(\alpha,\beta)} = \nu_{n,0}^{(\alpha,\beta)}\,\tilde P_n^{(\alpha+1,\beta)} - \nu_{n,-1}^{(\alpha,\beta)}\,\tilde P_{n-1}^{(\alpha+1,\beta)}, \qquad (2.7)$$

$$\tilde P_n^{(\alpha,\beta)} = \nu_{n,0}^{(\beta,\alpha)}\,\tilde P_n^{(\alpha,\beta+1)} + \nu_{n,-1}^{(\beta,\alpha)}\,\tilde P_{n-1}^{(\alpha,\beta+1)}, \qquad (2.8)$$

where $\mu_{n,0/1}^{(\alpha,\beta)}$ and $\nu_{n,0/-1}^{(\alpha,\beta)}$ are constants for which we derive explicit formulae in appendix A. The formulae (2.5)-(2.8) are the main ingredients for our results.

Lastly we cover the spectral expansion of a function $f(r)$ in Jacobi polynomials. For any $f\in L^2_{(\alpha,\beta)}$ we define the modal coefficients

$$\hat f_n^{(\alpha,\beta)} = \left\langle f,\ \tilde P_n^{(\alpha,\beta)}\right\rangle_{(\alpha,\beta)}.$$

We also have a Parseval-type relation:

$$\|f\|_{(\alpha,\beta)}^2 = \sum_{n=0}^{\infty}\left[\hat f_n^{(\alpha,\beta)}\right]^2.$$

Naturally, $\hat f_n^{(a,b)}$ is well-defined if $a\ge\alpha$ and $b\ge\beta$ because of the inclusion $L^2_{(\alpha,\beta)}\subseteq L^2_{(a,b)}$. We define the projection operator

$$\mathcal{P}_N^{(\alpha,\beta)} f = \sum_{n=0}^{N-1} \hat f_n^{(\alpha,\beta)}\,\tilde P_n^{(\alpha,\beta)}.$$
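The two-term structure of the promotion relations (2.5)-(2.8) can be verified numerically. The sketch below (assuming SciPy's `roots_jacobi` and `eval_jacobi`; the helper `normalized_eval` and the parameter choices are ours) expands $(1-r)\,\tilde P_n^{(\alpha,\beta)}$ in the class $(\alpha-1,\beta)$ by Gauss-Jacobi quadrature and confirms that, as (2.5) asserts, only the coefficients of orders $n$ and $n+1$ survive.

```python
import numpy as np
from scipy.special import roots_jacobi, eval_jacobi

def normalized_eval(n, a, b, x):
    """L2-normalized Jacobi polynomial P-tilde_n^(a,b); the squared norm is
    obtained by Gauss-Jacobi quadrature, which is exact for polynomials."""
    r, w = roots_jacobi(n + 1, a, b)
    h = np.sum(w * eval_jacobi(n, a, b, r) ** 2)
    return eval_jacobi(n, a, b, x) / np.sqrt(h)

alpha, beta, n = 1.5, 0.5, 4
# Expand (1-r) * P-tilde_n^(alpha,beta) in class (alpha-1, beta): the
# expansion coefficients are inner products under the (alpha-1, beta) weight.
r, w = roots_jacobi(n + 3, alpha - 1, beta)   # exact for all degrees below
lhs = (1 - r) * normalized_eval(n, alpha, beta, r)
coeffs = np.array([np.sum(w * lhs * normalized_eval(m, alpha - 1, beta, r))
                   for m in range(n + 4)])
# Per (2.5), only the coefficients of orders n and n+1 are nonzero.
```

The check works for any admissible $\alpha>0$, $\beta>-1$; the values $\mu_{n,0}^{(\alpha,\beta)}$ and $\mu_{n,1}^{(\alpha,\beta)}$ recovered in `coeffs` are exactly the constants of appendix A, up to sign convention.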
By completeness and orthogonality, the projection operator satisfies the relations

$$\left\| f - \mathcal{P}_n^{(\alpha,\beta)} f\right\|_{(\alpha,\beta)} \to 0, \qquad n\to\infty,$$

$$\left\langle f - \mathcal{P}_n^{(\alpha,\beta)} f,\ \phi\right\rangle_{(\alpha,\beta)} = 0, \qquad \phi\in\mathcal{B}_n.$$

2.1 Connection Coefficients

The problem of rewriting an expansion in one class of Jacobi polynomials as an expansion in a different class can be cast as the problem of determining the connection coefficients $\lambda_{n,m}^{(\alpha,\beta,\gamma,\delta)}$ satisfying

$$\tilde P_n^{(\alpha,\beta)} = \sum_{m=0}^{n} \lambda_{n,m}^{(\alpha,\beta,\gamma,\delta)}\,\tilde P_m^{(\gamma,\delta)}. \qquad (2.9)$$

We clearly have by orthogonality that

$$\lambda_{n,m}^{(\alpha,\beta,\gamma,\delta)} = \left\langle \tilde P_n^{(\alpha,\beta)},\ \tilde P_m^{(\gamma,\delta)}\right\rangle_{(\gamma,\delta)}. \qquad (2.10)$$

These connection coefficients are explicitly known for a very large class of problems. Askey [4] compiles earlier formulae originated by Gegenbauer to give explicit formulae for the $\lambda_{m,n}^{(\alpha,\beta,\gamma,\delta)}$ in the cases (a) $\alpha=\gamma$, (b) $\beta=\delta$, and (c) $\alpha=\beta$ and $\gamma=\delta$. These relations can in general be bootstrapped to determine the general coefficients for any $(\alpha,\beta,\gamma,\delta)$. In terms of the hypergeometric function ${}_3F_2$, an explicit expression for the connection coefficients can also be derived from the Rodrigues relation (2.4); see e.g. [3].

An alternative expression has been derived by Maroni and da Rocha [17], utilizing a recurrence relation satisfied by the connection coefficients in conjunction with significant symbolic computational arithmetic. The disadvantage of both of these methods is that the computation of these connection coefficients requires a relatively sophisticated algorithm. Askey's method does not actually give the connection coefficients explicitly when $\alpha$ differs from $\gamma$ and $\beta$ differs from $\delta$, and the coefficients that are given require the computation of falling factorials and possibly Gamma functions; of course one may evaluate the function ${}_3F_2$, but that is likewise relatively complicated.
Maroni's results give explicit formulae for the $\lambda$ coefficients, but require the computation of several products and sums of Gamma functions for each coefficient.

Even if the coefficients $\lambda$ can be computed quickly and accurately, in order to transfer modal coefficients of class $(\alpha,\beta)$ into those of class $(\gamma,\delta)$, the sum

$$\hat f_n^{(\gamma,\delta)} = \sum_{m=n}^{\infty} \lambda_{m,n}^{(\gamma,\delta,\alpha,\beta)}\,\hat f_m^{(\alpha,\beta)} \qquad (2.11)$$

must be computed in some fashion if the exact modes $\hat f_n^{(\gamma,\delta)}$ are desired. If we wish simply to approximate the modes, we may assume that $\hat f_m^{(\alpha,\beta)} = 0$ for $m > N$ for some large $N$. In this case, we must still perform the operation (2.11) (with the upper limit truncated to $N$) for each $n = 0, 1, 2, \ldots, N-1$, which takes $O(N^2)$ operations as written.

The main contribution of this paper to the problem of finding connection coefficients is given in section 3. We will assume certain restrictions on the values of $\alpha$, $\beta$, $\gamma$, and $\delta$ and show that the infinite sum (2.11) is actually a finite sum (independent of the truncation order $N$), allowing us in theory to exactly compute the modes $\hat f_n^{(\gamma,\delta)}$ from a finite collection of modes $\hat f_n^{(\alpha,\beta)}$; moreover, the computation of each mode can be done in an operation count that is $n$-independent. In practice, we must still compute the connection coefficients $\lambda_{m,n}^{(\gamma,\delta,\alpha,\beta)}$. While e.g. Askey's or Maroni's formulae may in principle be used to accomplish this, we also present a simple algorithm for robustly computing the connection coefficients assuming restrictions (to be given in the next section) on the quartet $(\alpha,\beta,\gamma,\delta)$.

3 Jacobi-Jacobi transformations

The purpose of this section is to form a relationship between the expansions $\mathcal{P}_N^{(\alpha,\beta)} f$ and $\mathcal{P}_N^{(\gamma,\delta)} f$ for $\alpha\neq\gamma$ and/or $\beta\neq\delta$. More than that, we will determine a relationship that will allow us to travel between the two expansions robustly and with very little effort.
Of course, the speed of this transformation comes with a natural price: this can only be done when $|\delta-\beta|$ and $|\gamma-\alpha|$ are integers.

We begin by proving a lemma that is an inductive application of equations (2.5) and (2.6):

Lemma 3.1. For any $A,B\in\mathbb{N}$ and $\alpha,\beta>-1$, the following promotion relations hold:

$$(1-r)^A\,\tilde P_n^{(\alpha+A,\beta)} = \sum_{m=0}^{A} M_{m,n}^{(\alpha,A,\beta)}\,\tilde P_{n+m}^{(\alpha,\beta)}, \qquad (3.1)$$

$$(1+r)^B\,\tilde P_n^{(\alpha,\beta+B)} = \sum_{m=0}^{B} (-1)^m\,M_{m,n}^{(\beta,B,\alpha)}\,\tilde P_{n+m}^{(\alpha,\beta)}, \qquad (3.2)$$

where the $M_{m,n}^{(\alpha,A,\beta)}$ are constants. Similarly, we can demote the class parameters $\alpha+A$ and $\beta+B$ down to $\alpha$ and $\beta$ as well:

$$\tilde P_n^{(\alpha,\beta)} = \sum_{m=0}^{A} N_{m,n}^{(\alpha,A,\beta)}\,\tilde P_{n-m}^{(\alpha+A,\beta)}, \qquad (3.3)$$

$$\tilde P_n^{(\alpha,\beta)} = \sum_{m=0}^{B} (-1)^m\,N_{m,n}^{(\beta,B,\alpha)}\,\tilde P_{n-m}^{(\alpha,\beta+B)}. \qquad (3.4)$$

Proof. The proofs of (3.1) and (3.2) are accomplished via the demotion relations (2.5) and (2.6): repeated application of (2.5) to the left-hand side of (3.1), and repeated application of (2.6) to the left-hand side of (3.2), yields the desired result. The relations (3.3) and (3.4) are proven in exactly the same fashion using the promotion relations (2.7) and (2.8). $\Box$

Note that we did not give the formulae for the constants $M_{m,n}^{(\alpha,A,\beta)}$ and $N_{m,n}^{(\alpha,A,\beta)}$ in Lemma 3.1. It would be possible for us to derive such formulae in terms of the $\mu_{n,i}^{(\alpha,\beta)}$ and $\nu_{n,i}^{(\alpha,\beta)}$, but it is of little value. The main use of Lemma 3.1 is in the proof of the following theorem:

Theorem 3.2. For $A,B\in\mathbb{N}_0$ and $\alpha,\beta>-1$, let $f\in L^2_{(\alpha,\beta)}$. Then

$$\hat f_n^{(\alpha+A,\beta+B)} = \sum_{m=0}^{A+B} \lambda_{n+m,n}^{(\alpha+A,\beta+B,\alpha,\beta)}\,\hat f_{n+m}^{(\alpha,\beta)}, \qquad n\ge 0, \qquad (3.5)$$

where the $\lambda_{n+m,n}^{(\alpha+A,\beta+B,\alpha,\beta)}$ are the connection coefficients (2.10).

Proof. For ease of exposition, we drop the superscripts on $\lambda_{n+m,n}^{(\alpha+A,\beta+B,\alpha,\beta)}$ in this proof. We use Lemma 3.1 twice on $(1-r)^A(1+r)^B\,\tilde P_n^{(\alpha+A,\beta+B)}$ to show that there exist constants $\Lambda_{n+m,n}$ such that

$$\omega^{(A,B)}\,\tilde P_n^{(\alpha+A,\beta+B)} = \sum_{m=0}^{A+B}\Lambda_{n+m,n}\,\tilde P_{n+m}^{(\alpha,\beta)}.$$

Following this, we note that $\hat f_n^{(\alpha,\beta)}$ is well-defined because $f\in L^2_{(\alpha,\beta)}$, and $\hat f_n^{(\alpha+A,\beta+B)}$ is well-defined because of the inclusion of $L^2_{(\alpha,\beta)}$ in $L^2_{(\alpha+A,\beta+B)}$.
Finally, we have

$$\begin{aligned} \hat f_n^{(\alpha+A,\beta+B)} &= \left\langle f,\ \tilde P_n^{(\alpha+A,\beta+B)}\right\rangle_{(\alpha+A,\beta+B)} \\ &= \left\langle f,\ (1-r)^A(1+r)^B\,\tilde P_n^{(\alpha+A,\beta+B)}\right\rangle_{(\alpha,\beta)} \\ &= \left\langle f,\ \sum_{m=0}^{A+B}\Lambda_{n+m,n}\,\tilde P_{n+m}^{(\alpha,\beta)}\right\rangle_{(\alpha,\beta)} \\ &= \sum_{m=0}^{A+B}\Lambda_{n+m,n}\left\langle f,\ \tilde P_{n+m}^{(\alpha,\beta)}\right\rangle_{(\alpha,\beta)} = \sum_{m=0}^{A+B}\Lambda_{n+m,n}\,\hat f_{n+m}^{(\alpha,\beta)}. \end{aligned}$$

Furthermore, it is easy to show by orthogonality that $\Lambda_{n+m,n}\equiv\lambda_{n+m,n}$, where the $\lambda_{n+m,n}$ are the Jacobi connection coefficients (2.10). $\Box$

Remark 3.3. Theorem 3.2 may also be proven by utilizing equations (2.7)-(2.8) and by noting that the projection operator $\mathcal{P}_N^{(\alpha,\beta)}$ is the identity operator on the space $\mathcal{B}_N$ of polynomials of degree at most $N-1$, for any pair $(\alpha,\beta)$. In this case it is trivial to see that $\Lambda_{n+m,n}\equiv\lambda_{n+m,n}$.

Theorem 3.2 gives us a special version of the connection coefficient relation (2.11). In the language of section 2.1, we have derived a result for $\gamma=\alpha+A$ and $\delta=\beta+B$. It says that we can express the modes of a higher-class Jacobi expansion as a finite linear combination of the modes of a lower-class expansion. This relation is exact. Roughly speaking, in order to convert an expansion of class $(\alpha,\beta)$ to one of class $(\alpha+A,\beta+B)$, we require about $O(A+B)$ computations per mode, as long as $A$ and $B$ are both natural numbers.

The above result is likely known to many authors in this field, given the voluminous literature on Jacobi polynomials. In particular, the connection coefficient relations presented by e.g. Askey [4] and Maroni [17] can no doubt be simplified under the assumptions of Theorem 3.2 to produce the result. However, we have not seen this observation made in the literature, and so we present the above theorem.

One deficiency of the theorem above is that we do not present explicit values of the connection coefficients $\Lambda_{m,n}^{(A,B,\alpha,\beta)}$. While this is a notable loss (especially since, in principle, such formulae, albeit cumbersome, are present in the existing literature), it is of little consequence, as we present a rather straightforward algorithm in the next section for computing these connection coefficients.
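The sparsity asserted by Theorem 3.2 can be observed directly. The sketch below (assuming SciPy; the helper `ptilde` is ours) assembles the matrix of inner products between the normalized polynomials of classes $(\alpha,\beta)$ and $(\alpha+1,\beta+1)$, taken under the $(\alpha+1,\beta+1)$ weight, by exact Gauss-Jacobi quadrature. Only the $A+B+1 = 3$ diagonals predicted by the theorem are nonzero.

```python
import numpy as np
from scipy.special import roots_jacobi, eval_jacobi

def ptilde(n, a, b, x):
    """L2-normalized Jacobi polynomial P-tilde_n^(a,b); the norm h_n is
    computed by Gauss-Jacobi quadrature instead of a Gamma-function formula."""
    r, w = roots_jacobi(n + 1, a, b)
    h = np.sum(w * eval_jacobi(n, a, b, r) ** 2)
    return eval_jacobi(n, a, b, x) / np.sqrt(h)

alpha, beta, A, B, K = 0.5, -0.5, 1, 1, 8
# Inner products of P-tilde_i^(alpha,beta) with P-tilde_j^(alpha+A,beta+B)
# under the (alpha+A, beta+B) weight; the quadrature is exact here.
r, w = roots_jacobi(2 * K, alpha + A, beta + B)
L = np.array([[np.sum(w * ptilde(i, alpha, beta, r)
                        * ptilde(j, alpha + A, beta + B, r))
               for j in range(K)] for i in range(K)])
i, j = np.indices(L.shape)
outside_band = (i - j < 0) | (i - j > A + B)   # theorem predicts zeros here
```

Entries outside the band $0\le i-j\le A+B$ vanish to quadrature precision, while the diagonal entries are strictly positive, consistent with the finite sum in (3.5).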
This algorithm is at worst slightly higher in operation count than if we were to derive explicit formulae. Furthermore, the numerical results in section 5 show that the one-time overhead cost we incur by computing these coefficients is insignificant compared to the savings we gain from being able to use the FFT, when applicable, for certain spectral mode calculations.

3.1 Computing the transformation

We collect the modes $\left\{\hat f_n^{(\alpha,\beta)}\right\}_{n=0}^{N+A+B-1}$ into the vector $\hat f^{(\alpha,\beta)}$. Theorem 3.2 makes it clear that there exists an $N\times(N+A+B)$ connection coefficient matrix $\hat C$, dependent on $\alpha$, $\beta$, $A$, and $B$, such that

$$\hat f^{(\alpha+A,\beta+B)} = \hat C\,\hat f^{(\alpha,\beta)}. \qquad (3.6)$$

(We assume that $A,B\in\mathbb{N}_0$.) This relation transports $N+A+B$ modes of an expansion in the polynomials $\tilde P^{(\alpha,\beta)}$ to $N$ modes of an expansion in $\tilde P^{(\alpha+A,\beta+B)}$. The entries of the matrix $\hat C$ are the connection coefficients $\hat C_{m,n} = \lambda_{n,m}^{(\alpha+A,\beta+B,\alpha,\beta)}$, but we only have relatively complicated expressions for the explicit entries of $\hat C$ (see e.g. [4], [17]). In order to more easily construct the matrix $\hat C$, we first note that it is a sparse matrix: banded upper-triangular, with about $(A+B)(N+A+B)$ non-zero entries.

To construct $\hat C$, we essentially write out the proof of Lemma 3.1 via induction. To this end, let us define the following set of matrices: let $U^{(\alpha,\beta)}$ and $V^{(\alpha,\beta)}$ be bidiagonal matrices with entries defined by

$$U_{n,n}^{(\alpha,\beta)} = \mu_{n,0}^{(\alpha,\beta)}, \qquad U_{n,n+1}^{(\alpha,\beta)} = -\mu_{n,1}^{(\alpha,\beta)},$$

$$V_{n,n}^{(\alpha,\beta)} = \mu_{n,0}^{(\beta,\alpha)}, \qquad V_{n,n+1}^{(\alpha,\beta)} = -\mu_{n,1}^{(\beta,\alpha)},$$

for $n = 1, 2, \ldots, N+A+B-1$, together with the terminal entries

$$U_{N+A+B,N+A+B}^{(\alpha,\beta)} = \mu_{N+A+B,0}^{(\alpha,\beta)}, \qquad V_{N+A+B,N+A+B}^{(\alpha,\beta)} = \mu_{N+A+B,0}^{(\beta,\alpha)}.$$

The matrix $U^{(\alpha,\beta)}$ transforms the modes of an $(\alpha-1,\beta)$ expansion to those of an $(\alpha,\beta)$ expansion whenever $\alpha>0$. Similarly, $V^{(\alpha,\beta)}$ transforms the modes of an $(\alpha,\beta-1)$ expansion into those of an $(\alpha,\beta)$ expansion whenever $\beta>0$. To define the entries of $U$ and $V$ we have used the demotion relations (2.5) and (2.6).
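The sparsity of products of such bidiagonal factors is what makes this construction cheap: multiplying $A+B$ upper-bidiagonal matrices yields an upper-triangular matrix supported on at most $A+B+1$ diagonals. A small sketch (NumPy only; the random entries are placeholders standing in for the $\mu$ coefficients of appendix A, which are not reproduced here):

```python
import numpy as np

def upper_bidiagonal(diag, superdiag):
    """Upper-bidiagonal matrix from its main and first superdiagonal."""
    return np.diag(diag) + np.diag(superdiag, k=1)

rng = np.random.default_rng(0)
N, A, B = 10, 2, 1
M = N + A + B
# Random positive diagonals stand in for the mu coefficients of appendix A;
# only the sparsity pattern matters for this structural check.
factors = [upper_bidiagonal(rng.uniform(0.5, 1.5, M),
                            rng.uniform(-1.0, 1.0, M - 1))
           for _ in range(A + B)]
C = np.linalg.multi_dot(factors)
i, j = np.indices(C.shape)
outside_band = (j < i) | (j > i + A + B)   # product is zero outside the band
```

Since the diagonal of the product is the product of the (positive) diagonals, the resulting matrix also has strictly positive eigenvalues, previewing Proposition 3.4 below.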
Note, however, that the last mode $N+A+B$ will be incorrect, because we require information about mode $N+A+B+1$ (which we don't have) to determine it. Thus, the last output mode will be corrupted. However, given that we are merely computing connection coefficients, this 'corruption' can simply be characterized as a type of filtering of the terminal modes that reproduces the original finite-degree polynomial instead of the true set of modes $\hat f_n^{(\alpha+A,\beta+B)}$, for which information is embedded in the unavailable high-degree modes $\hat f_n^{(\alpha,\beta)}$.

We can now define a square $(N+A+B)\times(N+A+B)$ matrix $C$ as

$$C = \prod_{b=1}^{B} V^{(\alpha+A,\beta+b)}\ \prod_{a=1}^{A} U^{(\alpha+a,\beta)}. \qquad (3.7)$$

The matrix $C$ has some nice properties. As the product of banded bidiagonal matrices, we also know that $C$ has non-zero entries on at most $A+B+1$ diagonals, and is upper-triangular.

Proposition 3.4. The matrix $C$ defined by equation (3.7) is invertible and positive-definite.

Proof. $C$ is upper-triangular, and thus the diagonal entries are the eigenvalues. By utilizing equation (3.7) and the expressions (A.2) in the appendix, we have, for all $n = 1, 2, \ldots, N+A+B$,

$$C_{n,n} = \prod_{b=1}^{B} \mu_{n,0}^{(\beta+b,\alpha+A)}\ \prod_{a=1}^{A} \mu_{n,0}^{(\alpha+a,\beta)} > 0. \qquad (3.8)$$

$\Box$

See also e.g. [3]. Proposition 3.4 tells us that the matrix $C$ is invertible, which means we can travel back and forth between the modes $\hat f_n^{(\alpha+A,\beta+B)}$ and $\hat f_n^{(\alpha,\beta)}$. Clearly this is the case; we did not need Proposition 3.4 to be aware of this, in light of the well-posedness of the general connection coefficient problem. In practice, inverting the conventional connection coefficient problem may be difficult because of either the need to generate an additional set of connection coefficients, or the need to invert a full linear system. In our case, the inversion can be accomplished via back-substitution because $C$ is banded upper-triangular. The back-substitution has a sequential computational cost $O(N(A+B))$, similar to the cost of the formation or application of the matrix.
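This banded back-substitution can be sketched generically: for an upper-triangular matrix with bandwidth $A+B$, each solve touches only $A+B$ superdiagonal entries per row, giving the $O(N(A+B))$ sequential cost. A sketch assuming NumPy; the function name `banded_back_substitution` and the random test matrix are ours:

```python
import numpy as np

def banded_back_substitution(C, y, bw):
    """Solve C x = y where C is upper-triangular with bandwidth bw
    (C[i, j] = 0 unless 0 <= j - i <= bw), in O(N * bw) operations."""
    N = len(y)
    x = np.zeros(N)
    for i in range(N - 1, -1, -1):
        hi = min(N, i + bw + 1)
        x[i] = (y[i] - C[i, i + 1:hi] @ x[i + 1:hi]) / C[i, i]
    return x

rng = np.random.default_rng(1)
N, bw = 12, 3
C = np.triu(rng.uniform(0.5, 1.5, (N, N)))
C[np.triu_indices(N, k=bw + 1)] = 0.0       # keep only bw superdiagonals
y = rng.standard_normal(N)
x = banded_back_substitution(C, y, bw)
```

In the notation of this section, applying this routine with `bw = A + B` to the matrix of (3.7) realizes the demotion of modes described next.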
Thus, promoting the parameters $(\alpha,\beta)$ by $(A,B)$ requires a (parallelizable) $O(N)$ multiplication by $C$, while the demotion by $(A,B)$ requires a sequential $O(N)$ application of $C^{-1}$.

The positive-definiteness of $C$ is not necessary for our results, but it hearkens to similar questions in the literature about the positivity of the Jacobi connection coefficients $\lambda$, e.g. [3]. Note, however, that not all the entries in the matrix $C$ are positive; the above proposition only deals with the main diagonal.

With the explicit expression (3.8) for the eigenvalues $C_{n,n}$, we can show that the $N\times N$ matrix $C$ satisfies

$$\lambda_{\min}(C) \sim 2^{-(A+B)/2}, \qquad N\to\infty,$$

where $\lambda_{\min}(C)$ denotes the minimum eigenvalue of $C$. Since the maximum eigenvalue $C_{1,1}$ is $O(1)$, this provides a bound for the ratio between the maximum and minimum eigenvalues of $C$. We note, however, that experimentally the ratio of extremal singular values of $C$ (the condition number) seems to be much worse, and by objective standards one would judge the conditioning of $C$ very poor. However, we show in section 5.2 that in practice using the connection matrix is one of the more stable and robust methods available for computing modal expansions.

We tangentially note that it is not entirely necessary to form the matrix $C$; one can merely store the coefficients $\mu_{n,i}^{(\alpha,\beta)}$ required for the formation of $C$ and view the application (inversion) of $C$ as $A+B$ sequential applications (inversions) of invertible bidiagonal matrices.

Returning to the goal presented at the beginning of this section, we define the non-square matrix $\hat C$ as the first $N$ rows of $C$; this is exactly the matrix we were looking for. Thus, the formation of $\hat C$ (or $C$) requires $A+B$ multiplications between (sparse) bidiagonal matrices.

However, in contrast to the relation presented in (3.6), we prefer an invertible transformation. The reason is that we will eventually use this transform as an ingredient in a modal/nodal transformation.
The matrix $\hat C$ is not invertible (it is not even square), but of course $C$ is invertible, so we shall frequently use that matrix. (As mentioned before, one should be aware that the last $A+B$ modes are not the true modes, but some version of them.)

3.2 Properties of the transformation

In this section we assume that we are given a collection of modes $\left\{\hat f_n^{(\alpha,\beta)}\right\}_{n=0}^{\infty}$ defining a function $f = \sum_{n=0}^{\infty} \hat f_n^{(\alpha,\beta)}\,\tilde P_n^{(\alpha,\beta)}$, and that we wish to obtain some finite number $N$ of the $(\alpha+A,\beta+B)$ modes. We can use the results from the previous section to compute an $N\times N$ matrix $C$ and an $N\times(N+A+B)$ matrix $\hat C$. Define the length-$N$ vectors $\hat f^{(1)}$ and $\hat f^{(2)}$ as

$$\hat f^{(1)} = C\,\hat f^{(\alpha,\beta)}, \qquad \hat f^{(2)} = \hat C\,\hat f^{(\alpha,\beta)}.$$

The vector $\hat f^{(\alpha,\beta)}$ is of length $N$ in the first expression, and of length $N+A+B$ in the second expression. Note that, due to the definition of $C$ (a submatrix of $\hat C$), we have that $\hat f_n^{(1)} = \hat f_n^{(2)}$ for $0\le n\le N-A-B-1$. Define the corresponding polynomials

$$f^{(1)} = \sum_{n=0}^{N-1} \hat f_n^{(1)}\,\tilde P_n^{(\alpha+A,\beta+B)}, \qquad f^{(2)} = \sum_{n=0}^{N-1} \hat f_n^{(2)}\,\tilde P_n^{(\alpha+A,\beta+B)}.$$

Application (inversion) of the matrix $C$ is equivalent to performing the connection coefficient problem (2.11) with $\gamma=\alpha+A$ and $\delta=\beta+B$ (respectively, $\gamma=\alpha-A$, $\delta=\beta-B$). This is easy to see if one accepts the statement of Remark 3.3 (namely that Theorem 3.2 is just a connection coefficient problem). Therefore, it is clear that

$$f^{(1)} = \mathcal{P}_N^{(\alpha,\beta)} f.$$

The function $f^{(2)}$, the application of $\hat C$, is a little more subtle, but the result is that

$$f^{(2)} = \mathcal{P}_N^{(\alpha+A,\beta+B)}\,\mathcal{P}_{N+A+B}^{(\alpha,\beta)}\,f = \mathcal{P}_N^{(\alpha+A,\beta+B)} f.$$

The latter equality follows from Theorem 3.2, which proves the finite termination of the modal connection expression (2.11). Therefore, the two functions $f^{(1)}$ and $f^{(2)}$ are not equal; they differ in the last $A+B$ modes. Both functions can essentially be computed in the same computational time (although it does take about $\frac{1}{2}(A+B)(A+B+1)$ more operations to compute the $N$ modes $\hat f_n^{(2)}$), but they are different projections.
Note that the difference between these two functions is indeed important. For example, in the resolution of the Gibbs phenomenon [14] the function required is exactly the projection $\mathcal{P}^{(\alpha+A,\beta+B)} f$; the lower-order projection $\mathcal{P}^{(\alpha,\beta)}$ will not accomplish the task.

4 Exploiting the FFT

Up until this point we have not discussed any implementation issues. In this section we explore the use of the FFT for performing spectral transformations based on Jacobi polynomials. Section 4.1 introduces some necessary notation for future discussion, and section 4.2 provides the methodology for using the FFT to compute Jacobi spectral expansions.

4.1 Quadrature and interpolation

All the results in this subsection are well known, and we point to the given references for details.

We shall call an $N$-point quadrature rule Gaussian under a particular weight function $\omega^{(\alpha,\beta)}$ if it exactly integrates any polynomial in the space $\mathcal{B}_{2N}$ under the weight $\omega^{(\alpha,\beta)}$. Such a quadrature rule always exists and is unique if the weight function is non-negative. A quadrature rule is defined by $N$ nodes and weights $\{r_n, w_n\}_{n=1}^{N}$, and we shall write the Gaussian quadrature rule evaluation under the weight $\omega^{(\alpha,\beta)}$ as

$$\int_{-1}^{1} f(r)\,\omega^{(\alpha,\beta)}\,dr \simeq Q_N^{(\alpha,\beta)}[f] := \sum_{n=1}^{N} f(r_n)\,w_n.$$

With this notation, we can write the definition of a Gaussian quadrature rule as one that satisfies

$$Q_N^{(\alpha,\beta)}[\phi] = \int_{-1}^{1} \phi\,\omega^{(\alpha,\beta)}\,dr, \qquad \phi\in\mathcal{B}_{2N}.$$

We recall a fundamental result due to Golub and Welsch [12]: the determination of the nodes and weights $\{r_n, w_n\}$ can be accomplished via the computation of the eigenvalues and eigenvectors of a symmetric tridiagonal matrix. A more efficient way is to compute only the eigenvalues (which are the nodes $r_n$) and then use known formulae [5] to compute the weights $w_n$. This brings the cost of computation down to $O(N^2)$ operations. We refer to the nodes and weights corresponding to the Gaussian quadrature rule $Q_N^{(\alpha,\beta)}[\cdot]$ as $\left\{r_n^{(\alpha,\beta)}, w_n^{(\alpha,\beta)}\right\}$.
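The eigenvalue form of the Golub-Welsch construction can be sketched directly. The recurrence coefficients used below are those of the Chebyshev case ($\alpha=\beta=-1/2$), for which the resulting nodes and weights are known in closed form; the function name `golub_welsch` and the NumPy-based implementation are our choices, a sketch rather than the authors' code.

```python
import numpy as np

def golub_welsch(a, b, mu0):
    """Gauss quadrature from three-term recurrence coefficients: the nodes
    are the eigenvalues of the symmetric tridiagonal Jacobi matrix, and the
    weights are mu0 times the squared first components of the normalized
    eigenvectors, where mu0 is the integral of the weight function."""
    J = np.diag(a) + np.diag(b, k=1) + np.diag(b, k=-1)
    nodes, vecs = np.linalg.eigh(J)
    return nodes, mu0 * vecs[0, :] ** 2

# Chebyshev weight (alpha = beta = -1/2): x*T_n = (T_{n+1} + T_{n-1})/2,
# so the off-diagonals are 1/2 except for the first, which is sqrt(1/2).
N = 8
a = np.zeros(N)
b = np.full(N - 1, 0.5)
b[0] = np.sqrt(0.5)
nodes, weights = golub_welsch(a, b, np.pi)   # integral of the weight is pi
```

For this weight the computed nodes match $\cos(\pi(2k-1)/(2N))$ and all weights equal $\pi/N$, the classical Chebyshev-Gauss rule.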
In many computations we cannot exactly compute the modal coefficients $\hat f_n^{(\alpha,\beta)}$, because we cannot compute the integral exactly, or because we can only evaluate $f$ and do not know an analytic form for it. Instead, we can use quadrature rules to approximate the modal coefficients:

$$\hat f_n^{(\alpha,\beta)} \simeq \tilde f_n^{(\alpha,\beta)} := Q_N^{(\alpha,\beta)}\left[f\,\tilde P_n^{(\alpha,\beta)}\right].$$

We can then form the following approximation to $\mathcal{P}_N^{(\alpha,\beta)} f$:

$$\mathcal{I}_N^{(\alpha,\beta)} f := \sum_{n=0}^{N-1} \tilde f_n^{(\alpha,\beta)}\,\tilde P_n^{(\alpha,\beta)}.$$

Due to the exactness of the Gauss quadrature rule, $\mathcal{I}_N^{(\alpha,\beta)} f = \mathcal{P}_N^{(\alpha,\beta)} f$ for any $f\in\mathcal{B}_N$. Because of the Christoffel-Darboux identity, the expansion $\mathcal{I}_N^{(\alpha,\beta)} f$ is the unique polynomial interpolant of degree $N-1$ of $f$ at the nodes $\left\{r_n^{(\alpha,\beta)}\right\}_{n=1}^{N}$. For $f\notin\mathcal{B}_N$, the difference between the interpolation $\mathcal{I}_N^{(\alpha,\beta)} f$ and the projection $\mathcal{P}_N^{(\alpha,\beta)} f$ is called the aliasing error [16], and arises due to the error in the quadrature rule.

Let $A,B\in\mathbb{N}_0$. We define the following discrete approximations to the $N$ modal coefficients $\left\{\hat f_n^{(\alpha+A,\beta+B)}\right\}_{n=0}^{N-1}$:

$$\tilde f_{n;\mathrm{GQ}}^{(\alpha+A,\beta+B)} = Q_N^{(\alpha+A,\beta+B)}\left[f\,\tilde P_n^{(\alpha+A,\beta+B)}\right], \qquad (4.1)$$

$$\tilde f_{n;\mathrm{C}}^{(\alpha+A,\beta+B)} = \sum_{m=0}^{A+B} \lambda_{n+m,n}^{(\alpha+A,\beta+B,\alpha,\beta)}\,\tilde f_{n+m;\mathrm{GQ}}^{(\alpha,\beta)}, \qquad (4.2)$$

$$\tilde f_{n;\mathrm{DQ}}^{(\alpha+A,\beta+B)} = Q_N^{(\alpha,\beta)}\left[f\,\tilde P_n^{(\alpha+A,\beta+B)}\,\omega^{(A,B)}\right]. \qquad (4.3)$$
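The quadrature approximation $\tilde f_n^{(\alpha,\beta)}$ underlying (4.1) can be exercised numerically. The sketch below (assuming SciPy; the helper `ptilde` is ours) approximates modal coefficients with an $N$-point Gauss-Jacobi rule, and, because the test function lies in $\mathcal{B}_N$, the exactness of the rule on $\mathcal{B}_{2N}$ guarantees that the discrete modes equal the exact modes.

```python
import numpy as np
from scipy.special import roots_jacobi, eval_jacobi

def ptilde(n, a, b, x):
    """L2-normalized Jacobi polynomial; norm computed by quadrature."""
    r, w = roots_jacobi(n + 1, a, b)
    h = np.sum(w * eval_jacobi(n, a, b, r) ** 2)
    return eval_jacobi(n, a, b, x) / np.sqrt(h)

alpha, beta, N = 0.5, 1.5, 6
r, w = roots_jacobi(N, alpha, beta)     # N-point Gauss rule: exact on B_2N
f = 2.0 * ptilde(1, alpha, beta, r) + 0.3 * ptilde(3, alpha, beta, r)
modes = np.array([np.sum(w * f * ptilde(n, alpha, beta, r))
                  for n in range(N)])
# Because f lies in B_N, the quadrature modes equal the exact modes.
```

For $f\notin\mathcal{B}_N$ the same computation would incur the aliasing error described above.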