UNCLASSIFIED
Defense Technical Information Center Compilation Part Notice ADP012001

TITLE: H-Bases II: Applications to Numerical Problems
DISTRIBUTION: Approved for public release, distribution unlimited

This paper is part of the following report: International Conference on Curves and Surfaces [4th], Saint-Malo, France, 1-7 July 1999. Proceedings, Volume 2. Curve and Surface Fitting. To order the complete compilation report, use: ADA399401. The component part is provided here to allow users access to individually authored sections of proceedings, annals, symposia, etc. However, the component should be considered within the context of the overall compilation report and not as a stand-alone technical report. The following component part numbers comprise the compilation report: ADP011967 thru ADP012009.

H-Bases II: Applications to Numerical Problems

H. Michael Möller and Thomas Sauer

Curve and Surface Fitting: Saint-Malo 1999, Albert Cohen, Christophe Rabut, and Larry L. Schumaker (eds.), pp. 333-342. Copyright 2000 by Vanderbilt University Press, Nashville, TN. ISBN 0-8265-1357-3.

Abstract. We show how H-bases can be applied to polynomial interpolation and to the solution of systems of nonlinear equations. We give an example of a system of polynomial equations where the H-basis leads to more stable computations than the Gröbner basis.

§1. Introduction

In the preceding paper [12], we introduced the notion of H-bases for polynomial ideals and showed how to construct H-bases in the numerically most interesting case of a zero dimensional ideal. In this paper we consider two problems from Numerical Analysis, namely polynomial interpolation and solving systems of polynomial equations, and point out how H-bases can be applied to both. More precisely, in both cases the computation of normal forms with respect to an ideal plays a crucial role, and with the basic results from [12] available, H-bases yield a perfect replacement for the Gröbner bases which are normally and frequently used to do this job [8]. Finally, we consider an example where a properly chosen H-basis leads to a significant stabilization of the computations in comparison with the use of Gröbner bases.

§2. Interpolation

A finite set $\Theta \subset \Pi'$ of linearly independent functionals on $\Pi$ is said to define an ideal interpolation scheme if its kernel, $\ker\Theta \subset \Pi$, is an ideal in $\Pi$. Given an ideal interpolation scheme $\Theta$ and a polynomial $f \in \Pi$, the interpolation problem consists of finding $p \in \Pi$ such that $\Theta(p) = \Theta(f)$, i.e.,

  $\vartheta(p) = \vartheta(f), \qquad \vartheta \in \Theta.$   (1)

So far, we have put no restrictions on $p$; hence, there are infinitely many solutions to (1). More precisely, if $p$ is any solution of (1), hence $f - p \in \ker\Theta$, then the set of all solutions is the equivalence class

  $[p] = p + \ker\Theta = f + \ker\Theta = [f].$

We denote the linear space of all equivalence classes by $\Pi / \ker\Theta$, and remark that $\dim(\Pi / \ker\Theta) = \#\Theta$. Of course, in order to compute interpolation polynomials, we must find a way to choose a specific element from the equivalence class $[f]$. A "natural" choice is to take the normal form $\mathrm{NF}(f, \mathcal{H})$, where $\mathcal{H}$ is an H-basis for $\ker\Theta$. Since $[f] = [g]$ implies that $f - g \in \langle\mathcal{H}\rangle$, and since $\mathrm{NF}(\cdot, \mathcal{H})$ is a linear operator, we have that

  $[f] = [g] \;\Rightarrow\; \mathrm{NF}(f, \mathcal{H}) = \mathrm{NF}(g, \mathcal{H}) + \mathrm{NF}(f - g, \mathcal{H}) = \mathrm{NF}(g, \mathcal{H}).$

Hence, $\mathrm{NF}([f], \mathcal{H}) = \mathrm{NF}(f, \mathcal{H})$, that is, the normal form is the same for any element of the same equivalence class.
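To make the normal form interpolation concrete, here is a minimal Python/SymPy sketch. The four nodes, the polynomial $f$ and the use of a Gröbner basis (rather than an H-basis) for the reduction are illustrative assumptions; any ideal basis that supports a normal form computation would serve the same purpose.

```python
from sympy import symbols, groebner

x, y = symbols('x y')

# Ideal interpolation at the four nodes (0,0), (1,0), (0,1), (1,1):
# their vanishing ideal is generated by x^2 - x and y^2 - y (assumed example).
G = groebner([x**2 - x, y**2 - y], x, y, order='grevlex')

f = x**3 + 2*x**2*y + 3          # polynomial to be interpolated
_, p = G.reduce(f)               # interpolant p = NF(f, basis); here p = x + 2*x*y + 3

# p agrees with f at every node, and the result does not depend on which
# representative of the equivalence class [f] we started from.
for node in [(0, 0), (1, 0), (0, 1), (1, 1)]:
    assert p.subs({x: node[0], y: node[1]}) == f.subs({x: node[0], y: node[1]})
print(p)
```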
This algebraic approach also allows for the interpolation of functionals which are only given implicitly, that is, by an ideal $\mathcal{I} \subset \Pi$: compute an H-basis $\mathcal{H}$ for $\mathcal{I}$, and the interpolation operator is the "remainder of division" $\mathrm{NF}(\cdot, \mathcal{H})$. It is worthwhile to remark that one of the oldest papers on multivariate interpolation, namely [6], starts with implicitly given interpolation nodes.

Another approach is to look for a polynomial space $\mathcal{P} \subset \Pi$ which allows for unique interpolation with respect to $\Theta$; to restrict the number of solutions to this problem, one usually demands the interpolation operator $L_{\mathcal{P}} : \Pi \to \mathcal{P}$ to be degree reducing [3], that is,

  $\deg L_{\mathcal{P}} f \le \deg f, \qquad f \in \Pi.$

Such an interpolation space with a degree reducing interpolation operator is called a minimal degree interpolation space. The most prominent minimal degree interpolation space is the least interpolation space introduced by de Boor and Ron in [2]; it is the unique degree reducing interpolation space which satisfies the additional condition

  $\mathcal{P} = \bigcap_{q \in \ker\Theta} \ker \Lambda(q)(D),$

where $\Lambda(q)$ denotes the leading homogeneous form of $q$ and $\Lambda(q)(D)$ the associated differential operator. On the other hand, it is obvious that the operator $\mathrm{NF}(\cdot, \mathcal{H})$ is degree reducing, linear and interpolating; hence all the spaces $\mathcal{P} = \mathrm{NF}(\Pi, \mathcal{H})$, for any H-basis $\mathcal{H}$, are minimal degree interpolation spaces with interpolation operator $L_{\mathcal{P}} = \mathrm{NF}(\cdot, \mathcal{H})$. Moreover, it is even possible to recover known minimal degree interpolation spaces by this algebraic process.

Theorem 1. [15] The least interpolation space is given as $\mathrm{NF}(\Pi, \mathcal{H})$, where $\mathcal{H}$ is an orthogonal H-basis with respect to the inner product

  $(p, q) = (p(D)\,q)(0), \qquad p, q \in \Pi.$

§3. Polynomial System Solving

Probably the best-known and most frequent use of Gröbner bases is for solving polynomial systems of equations, where they form a core part of literally all available computer algebra systems. Such systems of equations arise naturally in a geometric context, for example when solving geometric constraints (any Euclidean distance constraint yields a quadratic equation) or "simply" when computing the intersection of algebraic curves or surfaces given in implicit form. So, given any finite set $\mathcal{F} \subset \Pi$, one wants to find the associated algebraic variety $X \subset \bar{\mathbb{K}}^n$ (where $\bar{\mathbb{K}}$ is some algebraic closure of our underlying field $\mathbb{K}$) such that

  $\mathcal{F}(X) = 0,$   (2)

that is, $f(x) = 0$, $x \in X$, $f \in \mathcal{F}$. Note that the emphasis here is not on finding one solution (which could, at least in the case that $\#\mathcal{F} = n$, be done by a Newton method), but on finding all solutions and obtaining structural information about the variety. It is easy to see that the variety is not a property of the specific set $\mathcal{F}$, but of the ideal $\langle\mathcal{F}\rangle$:

  $\mathcal{F}(X) = 0 \;\Longleftrightarrow\; \langle\mathcal{F}\rangle(X) = 0.$

Therefore, it may be helpful to find particular bases for $\langle\mathcal{F}\rangle$ which allow for an efficient solution of (2). The "classical" implementation in most computer algebra systems relies on the computation of elimination ideals, which means the computation of a basis for the subideals

  $\langle\mathcal{F}\rangle_k = \langle\mathcal{F}\rangle \cap \mathbb{K}[x_1, \dots, x_k], \qquad k = 1, \dots, n,$

where $\langle\mathcal{F}\rangle_n = \langle\mathcal{F}\rangle$. In fact, this corresponds to transforming the original problem $\mathcal{F}(X) = 0$ into a triangular system

  $g_1(x_1) = 0,$
  $g_2(x_1, x_2) = 0,$
  $\;\;\vdots$
  $g_n(x_1, x_2, \dots, x_n) = 0.$   (3)

Once such a triangular system is available, the solution strategy is obvious: determine the zeros of the univariate polynomial $g_1(x_1)$ and substitute them into $g_2(x_1, x_2)$, which is now, for any such zero, again a univariate polynomial in $x_2$, and continue with this procedure.
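A minimal Python sketch of this back substitution strategy follows; the toy system, the use of SymPy's Gröbner basis routine for the elimination, and the numerical root finding with NumPy are illustrative choices, not taken from the paper.

```python
from sympy import symbols, groebner, Poly
import numpy as np

x, y = symbols('x y')
F = [x**2 + y**2 - 5, x*y - 2]        # assumed toy system with four solutions

# A lexicographic Groebner basis with y > x eliminates y from one generator,
# so the basis is already a triangular system g1(x) = 0, g2(x, y) = 0.
G = groebner(F, y, x, order='lex')
g1 = [g for g in G.exprs if not g.has(y)][0]     # univariate polynomial in x
g2 = [g for g in G.exprs if g.has(y)][0]

# Back substitution: find the roots of g1 numerically, then for every such
# root solve g2, which has become a univariate polynomial in y.
for xr in np.roots([float(c) for c in Poly(g1, x).all_coeffs()]):
    yroots = np.roots([complex(c) for c in Poly(g2.subs(x, complex(xr)), y).all_coeffs()])
    print("x =", xr, " y =", yroots)
```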
Moreover, such a triangular basis can indeed be computed: the reduced Gröbner basis $\mathcal{G}$ for $\langle\mathcal{F}\rangle$ with respect to the lexicographical term order where $x_1 \prec x_2 \prec \cdots \prec x_n$ has the property that

  $\mathcal{G}_k = \mathcal{G} \cap \mathbb{K}[x_1, \dots, x_k] \subset \langle\mathcal{F}\rangle_k$

is a Gröbner basis for $\langle\mathcal{F}\rangle_k$ (cf. [4, p. 114]).

However, as nice as this idea of successive elimination of variables sounds, there are numerous drawbacks:
(i) The complexity of computing a lexicographical Gröbner basis is tremendous, and even relatively "simple" problems still exceed the limitations of existing computing facilities.
(ii) There are often several polynomials in a certain number of variables, that is, the system is not as triangular as one would want it to be.
(iii) The degree of the polynomial $g_1$ is usually very high. This makes it impossible to compute its zeros exactly.
(iv) The tempting idea to find the zeros of $g_1$ approximately and to substitute these values will not lead very far, since it is well known that the zeros of a polynomial are usually quite ill-conditioned with respect to its coefficients (cf. [5,17]).

So the summary is fairly disappointing: elimination methods do not provide a good tool to tackle polynomial systems of equations. In particular, they rely too much on symbolic methods (with exact computations) to become a useful tool in numerical applications.

A different approach has been proposed quite recently by Stetter [16] (see also [10]; in [7] this method is partly attributed to Stickelberger), which is based on transforming the nonlinear system of equations into an eigenvalue problem, for which a huge library of powerful routines is available. For that purpose, let us assume that the set of solutions $X$ is finite (that is, the associated ideal $\langle\mathcal{F}\rangle$ is zero dimensional) and that all the common zeros are simple. The latter restriction is made to keep the presentation simple; details on how to handle multiplicities can be found in [10]. We first note that for any $f \in \Pi$, the mapping

  $\Phi_f : \Pi/\langle\mathcal{F}\rangle \to \Pi/\langle\mathcal{F}\rangle, \qquad [p] \mapsto [f \cdot p],$

is a homomorphism on the $\#X$-dimensional linear space $\Pi/\langle\mathcal{F}\rangle$. Now, suppose for a moment that we know $X$. Then there are polynomials $p_x \in \Pi$, $x \in X$, defined by $p_x(x') = \delta_{x,x'}$, $x, x' \in X$, which form a basis for $\Pi/\langle\mathcal{F}\rangle$, i.e., $\Pi/\langle\mathcal{F}\rangle = \mathrm{span}\{ [p_x] : x \in X \}$. Obviously, for any $x \in X$, the polynomial $g_x = (f - f(x))\,p_x$ satisfies $g_x(X) = 0$, and therefore

  $[0] = [g_x] = [(f - f(x))\,p_x] = \Phi_f[p_x] - f(x)\,[p_x].$

What we have proved with this simple argument is the following crucial theorem.

Theorem 2. The polynomials $p_x$, $x \in X$, are joint eigenvectors of all homomorphisms $\Phi_f$, $f \in \Pi$, with respect to the eigenvalue $f(x)$.

This result again suggests a strategy to solve polynomial systems of equations: compute a set of representers for $\Pi/\langle\mathcal{F}\rangle$, that is, a finite set $\mathcal{P} \subset \Pi$ of linearly independent polynomials such that $\Pi/\langle\mathcal{F}\rangle = \mathrm{span}\{ [p] : p \in \mathcal{P} \}$, and compute the matrix $M_f$ which describes the action of $\Phi_f$ with respect to the basis $\mathcal{P}$. The eigenvalues of such matrices yield, when combined appropriately, the solutions $X$. We remark that the (transpose of the) matrix $M_f$ is called the multiplication table for $f$ with respect to $\mathcal{P}$, and that the original goal of Buchberger's doctoral thesis (supervised by Gröbner) was not the invention of Gröbner bases but the computation of multiplication tables.
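The following Python sketch carries out this strategy for the same assumed toy system as above: it builds the multiplication tables of the coordinate functions from normal forms (computed here with a Gröbner basis, since that is what standard libraries provide; the paper's point is that an H-basis could be used instead) and recovers the common zeros from eigenvalues and joint eigenvectors. The quotient basis $1, x, y, y^2$ is specific to this particular system.

```python
from sympy import symbols, groebner, Poly
import numpy as np

x, y = symbols('x y')
F = [x**2 + y**2 - 5, x*y - 2]                 # assumed toy system, four simple zeros

# With respect to a graded order, the monomials 1, x, y, y^2 represent a basis
# of Pi/<F> for this system (the standard monomials of the Groebner basis).
G = groebner(F, x, y, order='grevlex')
P = [1, x, y, y**2]
exps = [(0, 0), (1, 0), (0, 1), (0, 2)]        # exponent tuples of the basis monomials

def mult_table(f):
    """Matrix of Phi_f: column j holds the coordinates of NF(f * P[j]) in the basis P."""
    cols = []
    for p in P:
        _, rem = G.reduce(f * p)               # normal form of f*p modulo <F>
        poly = Poly(rem, x, y)
        cols.append([float(poly.nth(*e)) for e in exps])
    return np.array(cols).T

Mx, My = mult_table(x), mult_table(y)

# By Theorem 2 the eigenvalues of Mx are the x-coordinates of the zeros; the
# joint eigenvectors "glue" them to the matching eigenvalues of My.
vals, vecs = np.linalg.eig(Mx)
for xv, v in zip(vals, vecs.T):
    k = np.argmax(np.abs(v))
    yv = (My @ v)[k] / v[k]
    print("common zero:", complex(xv), complex(yv))
```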
Of course, the most natural approach is to compute the multiplication tables $M_{x_j}$, $j = 1, \dots, n$, for the coordinate functions and thus to obtain the respective coordinates of the elements of $X$ as the eigenvalues of these multiplication tables. Note that the different components are finally "glued together" by the requirement that they must correspond to the same eigenvector.

What we now have is the possibility of reducing the search for the solutions of a polynomial system of equations to an eigenvalue problem, provided that we are able to perform two operations:
(i) Given a basis $\mathcal{F}$ for an ideal $\langle\mathcal{F}\rangle$, compute a basis $\mathcal{P}$ of representers for $\Pi/\langle\mathcal{F}\rangle$.
(ii) Having this basis available and given any $f \in \Pi$, compute the multiplication table $M_f$ with respect to $\mathcal{P}$.

Fortunately, this is where [12] enters: the answer is normal forms. If $\mathcal{H}$ is an H-basis for $\langle\mathcal{F}\rangle$, then any basis for $\mathrm{NF}(\Pi, \mathcal{H})$ is exactly the desired $\mathcal{P}$, and the action of $\Phi_f$ can be computed by expanding $\mathrm{NF}(f \cdot p, \mathcal{H})$ for all $p \in \mathcal{P}$, which yields the multiplication table $M_f$. The remaining question is "why H-bases?", and this question is justified, since the computation of normal forms and thus of multiplication tables is perfectly possible with the help of Gröbner bases as well. To give a partial answer to this question, we look at an example.

§4. When Two Ellipses Meet

In this section we consider a simple example which shows that the eigenvalue method, too, can encounter serious obstacles, in particular when Gröbner bases are involved. The important thing here is simplicity, as it will not be too surprising if extremely complicated and difficult examples cause problems. We consider the two ellipses

  $f(x, y) = x^2 + 2y^2 - 3, \qquad g(x, y) = 2x^2 + y^2 - 3.$

Fig. 1. The two ellipses.

Clearly, these two ellipses intersect in the four well-separated points $(\pm 1, \pm 1)$, as can be seen in Fig. 1. Now, we perturb $g$ a little bit and replace it by $g_\phi = g(A_\phi(x, y))$, where $A_\phi$ denotes the rotation

  $A_\phi(x, y) = \begin{bmatrix} \cos\phi & \sin\phi \\ -\sin\phi & \cos\phi \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix}.$

Note that we have in mind small values of $\phi$, so the intersections should still be close to $(\pm 1, \pm 1)$ and the problem should still be well-conditioned. Recalling that lexicographic Gröbner bases are known as troublemakers, we first try some "better" Gröbner basis, namely the one which is based on the graded lexicographic term order with $x \prec y$. Note that this ideal basis is not only a Gröbner basis, but also an H-basis. In this case the Gröbner basis $\mathcal{G}_\phi$ consists, for $\phi \ne 0$, of the three polynomials

  $4\sin\phi\, xy + 3\cos\phi\, x^2 - 3\cos\phi,$
  $x^2 + 2y^2 - 3,$
  $\cos\phi\,(\cos^2\phi + 8)\, x^3 - 3\cos\phi\,(8 - 5\cos^2\phi)\, x + 12\sin\phi\cos^2\phi\, y,$

while $\mathcal{G}_0 = \{x^2 - 1,\; y^2 - 1\}$. Here we already observe that some singularity must appear for $\phi = 0$, since $\mathcal{G}_0$ is not just the limit $\phi \to 0$ of $\mathcal{G}_\phi$, although the basis changes continuously with respect to $\phi$. The singularity becomes more apparent if we look at the normal form bases, which are

  $\mathcal{P}_\phi = \{1, x, y, x^2\}$ if $\phi \ne 0$,  and  $\mathcal{P}_0 = \{1, x, y, xy\}$.

Finally, the multiplication table for multiplication by $x$ takes the form

  $M_{x,0} = \begin{bmatrix} 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{bmatrix}$

for $\phi = 0$, while the multiplication table

  $M_{x,\phi} = \begin{bmatrix} 0 & 0 & \frac{3\cos\phi}{4\sin\phi} & 0 \\ 1 & 0 & 0 & \frac{3(8 - 5\cos^2\phi)}{\cos^2\phi + 8} \\ 0 & 0 & 0 & \frac{-12\sin\phi\cos\phi}{\cos^2\phi + 8} \\ 0 & 1 & -\frac{3\cos\phi}{4\sin\phi} & 0 \end{bmatrix}$

provides us with difficulties. Not only does this matrix not converge to $M_{x,0}$ for $\phi \to 0$, but some entries in this matrix even diverge to $\pm\infty$.
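The blow-up of the entries can be inspected numerically. The following NumPy sketch evaluates $M_{x,\phi}$ with the entries displayed above (column $j$ holding the coordinates of the normal form of $x$ times the $j$-th basis monomial, an illustrative convention) and prints its largest entry together with the computed eigenvalues; in exact arithmetic these eigenvalues are the $x$-coordinates of the four intersection points, i.e. values close to $\pm 1$.

```python
import numpy as np

def Mx(phi):
    """Multiplication table for x in the basis {1, x, y, x^2} of graded-lex normal forms."""
    s, c = np.sin(phi), np.cos(phi)
    a = 3 * c / (4 * s)                       # this entry diverges as phi -> 0
    b = 3 * (8 - 5 * c**2) / (c**2 + 8)
    d = -12 * s * c / (c**2 + 8)
    return np.array([[0, 0,  a, 0],
                     [1, 0,  0, b],
                     [0, 0,  0, d],
                     [0, 1, -a, 0]])

for phi in (1e-2, 1e-5, 1e-8):
    M = Mx(phi)
    vals = np.linalg.eigvals(M)
    print("phi =", phi, " max|entry| =", np.abs(M).max(), " eigenvalues =", np.round(vals, 8))
```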
Indeed, if one tries to compute the eigenvalues and eigenvectors of this matrix for small values of $\phi$, things become disastrous: a Maple computation with 10 digits worked until about $\phi \approx 10^{-5}$, where an error message reported that the QR algorithm did not work. For smaller values, like $\phi \approx 10^{-6}$, Maple invented complex zeros with an imaginary part of magnitude $0.5 \times 10^{-5}$, which by far exceeds any negligible machine number. On the other hand, Octave, a free Matlab clone whose linear algebra facilities are based on LAPACK [1], reproduced the eigenvalues correctly, but gave eigenvectors which were practically 0. Hence, we end up with some kind of paradox which is due to a singularity at $\phi = 0$: though the original problem of solving the polynomial system of equations is very well-conditioned, the graded lexicographical Gröbner basis is extremely sensitive to very small perturbations ($|\phi| \ll 10^{-5}$), but by far not so sensitive to relatively "large" perturbations ($|\phi| > 10^{-5}$).

Similar problems appear when we replace the graded lexicographical Gröbner basis by a purely lexicographical one with $x \prec y$, which yields the normal form bases

  $\mathcal{P}_\phi = \{1, x, x^2, x^3\}$ if $\phi \ne 0$,  and  $\mathcal{P}_0 = \{1, x, y, xy\}$.

Though the components of the multiplication table $M_{x,\phi}$ at least are continuous functions of $\phi$ and remain bounded in this case, the limit for $\phi \to 0$ again is not $M_{x,0}$. But the multiplication table $M_{y,\phi}$ with respect to the purely lexicographical Gröbner basis is even worse: its entries are either zero or diverge for $\phi \to 0$.

The behavior of the Gröbner bases at $\phi = 0$ raises the question of whether this singularity is systematic, that is, intrinsic to the problem, or whether it is a representation singularity generated by the Gröbner bases. Systematic singularities appear, for example, if several zeros "collapse" into one multiple zero, which leads to extremely intricate problems in the multivariate case [9]. Here, however, the good separation of the zeros suggests the conjecture that we only face a representation singularity.

Indeed, since H-bases leave more degrees of freedom, we can try another one, which is now based on orthogonalization. For this purpose, we use the inner product $(p, q) = (p(D)\,q)(0)$ and recall from [11, Theorem 5.3] that the set $\{f, g_\phi\}$ is already an H-basis. Moreover, the normal form space, which is, according to Theorem 1, the least interpolation space, is spanned by

  $\mathcal{P}^*_\phi = \{\,1,\; x,\; y,\; 2\sin\phi\, x^2 - 3\cos\phi\, xy - \sin\phi\, y^2\,\}$

and depends continuously on $\phi$ with $\lim_{\phi \to 0} \mathcal{P}^*_\phi = \mathcal{P}_0 = \{1, x, y, xy\}$. Then one can compute the respective multiplication table $M^*_{x,\phi}$, whose entries differ from those of $M_{x,0}$ only by continuous functions $\varepsilon_j(\phi)$, $j = 1, \dots, 8$, which vanish at the origin. In particular, $M^*_{x,\phi} \to M_{x,0}$ as $\phi \to 0$, and the computation of eigenvalues and eigenvectors of $M^*_{x,\phi}$ can now be done with sufficient accuracy. However, we remark that the fact that the matrices $M^*_{x,\phi}$ and $M^*_{y,\phi}$ have two approximately double eigenvalues $\pm 1$ requires some extra care when connecting these individual values in the final determination of the intersections.

§5. Summary

We have given examples of numerical applications which can be reduced to the computation of normal forms with respect to a certain polynomial ideal, an operation which is usually performed using a Gröbner basis. On the other hand, H-bases can be used for normal form computations as well, and their greater flexibility may yield stabilizing effects which are highly desired in numerical computations.

Acknowledgments.
The second author was supported by the Deutsche Forschungsgemeinschaft with a Heisenberg fellowship, Grant Sa-627/6.

References

1. Anderson, E., Z. Bai, C. Bischof, J. Demmel, J. Dongarra, J. Du Croz, A. Greenbaum, S. Hammarling, A. McKenney, S. Ostrouchov and D. Sorensen, LAPACK User's Guide, Second Edition, SIAM, 1995.
2. de Boor, C. and A. Ron, On multivariate polynomial interpolation, Constr. Approx. 6 (1990), 287-302.
3. de Boor, C. and A. Ron, The least solution for the polynomial interpolation problem, Math. Z. 210 (1992), 347-378.
4. Cox, D., J. Little and D. O'Shea, Ideals, Varieties and Algorithms, Springer, 1992.
5. Gautschi, W., Questions of numerical conditions related to polynomials, in Studies in Numerical Analysis, G. H. Golub (ed.), The Mathematical Association of America, 1984, 140-177.
6. Kronecker, L., Über einige Interpolationsformeln für ganze Funktionen mehrerer Variabeln (Lecture at the academy of sciences, December 21, 1865), in L. Kroneckers Werke, volume I, H. Hensel (ed.), 133-141, Teubner, 1895; reprinted by Chelsea Publishing Company, 1968.
7. Gonzalez-Vega, L., F. Rouillier and M.-F. Roy, Symbolic recipes for polynomial system solving, in Some Tapas of Computer Algebra, A. M. Cohen, H. Cuypers and M. Sterk (eds.), Springer, 1999, 34-65.
8. Möller, H. M., Gröbner bases and numerical analysis, in Gröbner Bases and Applications (Proc. of the Conf. 33 Years of Gröbner Bases), B. Buchberger and F. Winkler (eds.), Cambridge University Press, 1998, 159-178.
9. Marinari, M. G., H. M. Möller and T. Mora, On multiplicities in polynomial system solving, Trans. Amer. Math. Soc. 348 (1996), 3283-3321.
10. Möller, H. M. and H. J. Stetter, Multivariate polynomial equations with multiple zeros solved by matrix eigenproblems, Numer. Math. 70 (1995), 311-329.
11. Möller, H. M. and T. Sauer, H-bases for polynomial interpolation and system solving, Advances Comput. Math., to appear.
12. Möller, H. M. and T. Sauer, H-Bases I: The foundation, in Curve and Surface Fitting: Saint-Malo 1999, Albert Cohen, Christophe Rabut, and Larry L. Schumaker (eds.), Vanderbilt University Press, Nashville, 2000, 325-332.
13. Sauer, T., Polynomial interpolation of minimal degree, Numer. Math. 78 (1997), 59-85.
14. Sauer, T., Polynomial interpolation of minimal degree and Gröbner bases, in Gröbner Bases and Applications (Proc. of the Conf. 33 Years of Gröbner Bases), B. Buchberger and F. Winkler (eds.), Cambridge University Press, 1998, 483-494.
