APPROXIMATE KALMAN FILTERING

Series in Approximations and Decompositions
Editor-in-Chief: CHARLES K. CHUI

Vol. 1: Wavelets: An Elementary Treatment in Theory and Applications
        Tom H. Koornwinder, ed.
Vol. 2: Approximate Kalman Filtering
        Guanrong Chen, ed.

Series in Approximations and Decompositions - Vol. 2

APPROXIMATE KALMAN FILTERING

edited by
Guanrong Chen
University of Houston

World Scientific
Singapore - New Jersey - London - Hong Kong

Published by
World Scientific Publishing Co. Pte. Ltd.
P O Box 128, Farrer Road, Singapore 9128
USA office: Suite 1B, 1060 Main Street, River Edge, NJ 07661
UK office: 73 Lynton Mead, Totteridge, London N20 8DH

Library of Congress Cataloging-in-Publication Data
Approximate Kalman filtering / edited by Guanrong Chen.
    p. cm. (Series in approximations and decompositions; vol. 2)
    Includes index.
    ISBN 981021359X
    1. Kalman filtering. 2. Approximation theory. I. Chen, Guanrong. II. Series.
    QA402.3.A67 1994
    003'.76'0115-dc20    93-23176 CIP

Copyright © 1993 by World Scientific Publishing Co. Pte. Ltd.
All rights reserved. This book, or parts thereof, may not be reproduced in any form or by any means, electronic or mechanical, including photocopying, recording or any information storage and retrieval system now known or to be invented, without written permission from the Publisher.

For photocopying of material in this volume, please pay a copying fee through the Copyright Clearance Center, Inc., 27 Congress Street, Salem, MA 01970, USA.

Printed in Singapore by Utopia Press.

Approximations and Decompositions

During the past decade, Approximation Theory has reached out to encompass the approximation-theoretic and computational aspects of several exciting areas in applied mathematics such as wavelets, fractals, neural networks, and computer-aided geometric design, as well as the modern mathematical development in science and technology.
The objective of this book series is to capture this exciting development in the form of monographs, lecture notes, reprint volumes, textbooks, edited review volumes, and conference proceedings. Approximate Kalman Filtering, the second volume of this series, represents one of the engineering aspects of Approximation Theory. This is an important subject devoted to the study of efficient algorithms for solving many real-world problems to which the classical Kalman filter does not directly apply. The series editor would like to congratulate Professor Guanrong Chen for his excellent job in editing this volume and is grateful to the authors for their fine contributions.

World Scientific Series in
APPROXIMATIONS AND DECOMPOSITIONS
Editor-in-Chief: CHARLES K. CHUI
Texas A&M University, College Station, Texas

Preface

Kalman Filtering: from "exact" to "approximate" filters

As has been true for the last three decades and still is today, the term Kalman filter evokes favorable responses and applause from engineers, scientists, and mathematicians, researchers and practitioners alike.

The history of the development of the Kalman filter, or more precisely, the Kalman filtering algorithm, has been fairly long. Ever since the fundamental concept of least-squares for signal estimation was introduced by Gauss at the age of eighteen in 1795, first published by Legendre in his book Nouvelles methodes pour la determination des orbites des cometes in 1806, and later also appearing in Gauss' book Theoria Motus Corporum Coelestium in 1809, no significant improvement was achieved in the next hundred years, not until 1912, when R. A. Fisher published the celebrated maximum likelihood method, which had been anticipated by Gauss but unfortunately also rejected by Gauss himself much earlier.
Then, a little later, Kolmogorov in 1941 and Wiener in 1942 independently developed the important fundamental theory of linear minimum mean-square estimation. All of these together, as well as the strong motivation from astronomical studies and the exciting stimulus from computational mathematics, provided the necessary and sufficient background for the subsequent development of the Kalman filtering algorithm, a milestone for modern systems theory and technology.

The Kalman filter, mainly attributed to R. E. Kalman (1960), may be considered in very general terms as an efficient computational algorithm for the discrete-time linear least-squares estimation method of Gauss-Kolmogorov-Wiener, which was extended to the continuous-time setting by Kalman himself and to more generality by Bucy about a year later.

To briefly describe the discrete-time Kalman filtering algorithm, we consider a stochastic state-space system of the form

    x_{k+1} = A_k x_k + ξ_k,
    v_k     = C_k x_k + η_k,    k = 0, 1, ...,

where {x_k} is the sequence of state vectors of the system, with an initial state vector x_0, {v_k} the sequence of measurement (or observation) data, {ξ_k} and {η_k} two noise sequences, and {A_k} and {C_k} two sequences of the time-varying system and measurement matrices. In this linear state-space system, the first and second equations are usually called the dynamic and measurement equations, respectively.
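As a concrete illustration (not taken from the book), the state-space model above can be simulated in a few lines of Python/NumPy. The particular matrices A, C, Q, R below are hypothetical choices, taken time-invariant for brevity:

```python
import numpy as np

# Hypothetical 2-state, 1-measurement instance of the linear model above;
# A, C, Q, R are illustrative choices, held time-invariant for brevity.
rng = np.random.default_rng(0)

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])      # dynamic (system) matrix A_k
C = np.array([[1.0, 0.0]])      # measurement matrix C_k
Q = 0.01 * np.eye(2)            # Cov(xi_k), dynamic-noise covariance
R = np.array([[0.25]])          # Cov(eta_k), measurement-noise covariance

x = np.array([0.0, 1.0])        # initial state x_0
states, data = [], []
for k in range(50):
    states.append(x)
    # measurement equation: v_k = C x_k + eta_k
    data.append(C @ x + rng.multivariate_normal(np.zeros(1), R))
    # dynamic equation: x_{k+1} = A x_k + xi_k
    x = A @ x + rng.multivariate_normal(np.zeros(2), Q)
```

The lists `states` and `data` then play the roles of {x_k} and {v_k} in the discussion that follows.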
The problem is to find and calculate, for each k = 0, 1, ..., the optimal estimate x̂_k of the unknown state vector x_k of the system, using the available measurement data {v_1, v_2, ..., v_k}, under the criterion that the estimation error covariance

    Cov(x̂_k - x_k) = min

over all possible linear and unbiased estimators that use the aforementioned available measurement data, where the linearity is in terms of the data and the unbiasedness is in the sense that E{x̂_k} = E{x_k}. It turns out that optimal solutions exist under, for example, the following additional conditions:

(1) the initial state vector is Gaussian, with its mean E{x_0} and covariance Cov(x_0) both being given; and
(2) the two noise sequences {ξ_k} and {η_k} are stationary, mutually independent Gaussian, and mutually independent of x_0, with known covariances Cov(ξ_k) = Q_k and Cov(η_k) = R_k, respectively.

In addition, the optimal solutions x̂_k can be calculated recursively by

    x̂_0 = E{x_0},
    x̂_k = A_{k-1} x̂_{k-1} + G_k (v_k - C_k A_{k-1} x̂_{k-1}),    k = 1, 2, ...,

where the G_k are the Kalman gains, successively calculated by

    P_{0,0}   = Cov(x_0),
    P_{k,k-1} = A_{k-1} P_{k-1,k-1} A_{k-1}^T + Q_{k-1},
    G_k       = P_{k,k-1} C_k^T (C_k P_{k,k-1} C_k^T + R_k)^{-1},
    P_{k,k}   = (I - G_k C_k) P_{k,k-1},

in which P_{k,k-1} is actually the prediction error covariance Cov(x_k - A_{k-1} x̂_{k-1}), k = 1, 2, ....

The entire set of these formulas comprises what we have been calling the Kalman filter or the Kalman filtering algorithm. This algorithm can be derived via several different methods. For more information about this recursive estimation algorithm, for example its detailed derivations, statistical and geometrical interpretations, relations with other estimation techniques, and broad range of applications, the reader is referred to the existing texts listed in the section entitled Further Reading in this book.
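The recursion above translates almost line-for-line into code. The following is a minimal NumPy sketch (not from the book); the function name is ours, and the matrices A, C, Q, R are assumed time-invariant for brevity:

```python
import numpy as np

def kalman_filter(A, C, Q, R, x0_mean, P0, data):
    """Run the Kalman recursion over the data {v_k}, returning the estimates x_hat_k."""
    x_hat, P = x0_mean, P0            # x_hat_0 = E{x_0}, P_{0,0} = Cov(x_0)
    n = len(x0_mean)
    estimates = []
    for v in data:
        # prediction: x_pred = A x_hat_{k-1}, P_{k,k-1} = A P_{k-1,k-1} A^T + Q
        x_pred = A @ x_hat
        P_pred = A @ P @ A.T + Q
        # Kalman gain: G_k = P_{k,k-1} C^T (C P_{k,k-1} C^T + R)^{-1}
        # (the single matrix inversion per step)
        G = P_pred @ C.T @ np.linalg.inv(C @ P_pred @ C.T + R)
        # correction: x_hat_k = x_pred + G_k (v_k - C x_pred)
        x_hat = x_pred + G @ (v - C @ x_pred)
        P = (np.eye(n) - G @ C) @ P_pred
        estimates.append(x_hat)
    return estimates
```

For instance, applied to noisy direct measurements of a constant scalar state (A = C = I, small Q), the successive estimates settle near the underlying constant, behaving much like a running average of the data.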
A few remarkable advantageous features of the above recursive computational scheme can easily be observed. First, starting with the initial estimate x̂_0 = E{x_0}, each optimal estimate x̂_k, obtained in the subsequent calculations, requires only the previous (one-step-behind) estimate and a single (current) datum v_k. The essential advantage of such a simple "step-by-step" structure of the computational scheme is that there is no need to keep all the old state estimates and measurement data for each update of the state estimate, and this saves a huge amount of computer memory and CPU time, especially in real-time (on-line) applications to very large scale (high-dimensional) systems with intensive measurement data. Second, all the recursive formulas of the algorithm are straightforward and linear, consisting of only matrix multiplications and additions, and a single matrix inversion in the calculation of the Kalman gain. This special structure makes the scheme feasible for parallel implementation using advanced computer architectures such as systolic arrays to further speed up its computation. Moreover, the Kalman filtering algorithm is the optimal estimator over all possible linear ones under the aforementioned conditions, and gives unbiased estimates of the unknown state vectors, although this characteristic is not unique to the Kalman filter.

The Kalman filter is ideal in an ideal world. This is merely to say that the Kalman filter is "perfect" if the real world is ideal: offering linear mathematical models for describing physical phenomena, providing accurate initial conditions for the model established, guaranteeing that the exact models and their accurate parameters are not disturbed or changed throughout the process, and producing Gaussian white noise (if there should be any) with complete information about its means and variances, etc.
Unfortunately, nothing is ideal in the real world, and this makes the ideal Kalman filter often impractical. As a result, various modified versions of the standard Kalman filter, called approximate Kalman filters, become undoubtedly necessary.

To be more specific, recall that the standard (ideal) Kalman filter requires the following conditions: the dynamic and measurement equations of the system both have to be linear; all the system parameters (matrices) must be given exactly and be fixed without any perturbation (uncertainty); the mean E{x_0} and covariance Cov(x_0) of the Gaussian initial state vector need to be specified; and the two noise sequences, {ξ_k} and {η_k}, are both stationary, mutually independent, Gaussian, and mutually independent of x_0, with known covariances Cov(ξ_k) = Q_k and Cov(η_k) = R_k, respectively. If any of these conditions is not fulfilled, the Kalman filtering algorithm is not efficient: it will not give optimal, often not even satisfactory, estimation results. In most applications, the following questions are raised:

(1) "What if the state-space system is nonlinear?"
(2) "What if the initial conditions are unknown, or only partially given?"
(3) "What if the noise statistics (means and/or covariances) are unknown, or only partially given, or changing?"
(4) "What if the noise sequences are not Gaussian?"
(5) "What if the system parameters (matrices) involve uncertainties?"
(6) "What if ... ?"

The objective of this book is to help answer at least some of these questions. However, it is appropriate to remark that within the very modest size of this tutorial volume, we neither intend (nor, indeed, could we ever manage) to cover too many interesting topics, nor can the reader expect to gain very deep insight into the issues that we have chosen to discuss.
The motivation for the authors of the chapters to present their overview and commentary articles in this book is basically to promote more effort and endeavor devoted to the stimulating and promising research direction of approximate Kalman filtering theories and their real-time applications. We would like to mention that a good complementary volume is the collection of research papers on the standard (ideal) Kalman filter and its applications, entitled Kalman Filtering: Theory and Application, edited by H. W. Sorenson and published by IEEE Press in 1985.

The first topic in this tutorial volume is the extended Kalman filter. As has been widely experienced, a mathematical model describing a physical system can rarely be linear, so that the standard Kalman filtering algorithm cannot be applied directly to yield optimal estimates. In the first article, Bullock and Moorman give an introduction to an extension of the linear Kalman filtering theory to estimating the unknown states of a nonlinear system, possibly forced by additive white noise, using measurement data which are the values of certain nonlinear functions of the state vectors corrupted by additive Gaussian white noise. In the second part of their article, some possible choices for linearization are discussed, leading to the standard and ideal extended Kalman filters, while some modified extended Kalman filters are also described. Then, in part three of the article, Moorman and Bullock study how to use the a priori state estimate sequence to perform the linearization, and analyze the bias that occurs in this modified extended Kalman filter.

The second issue which we are concerned with in this book is the initialization of the Kalman filter. If the initial conditions, namely the mean E{x_0} and covariance Cov(x_0) of the initial state vector, are unknown or only partially given, the Kalman filtering algorithm cannot even start operating.
Catlin offers an introduction to the classical Fisher estimation scheme, used to initialize the Kalman filter when prior information is completely unknown, and then further extends it to the more general case where the measurements may be ill-conditioned, making no assumption on the invertibility of any matrix involved in the estimation process. In the joint article of Gomez and Maravall, several successful approaches to initializing the Kalman filter with incompletely specified initial conditions, which work well even for nonstationary time series, are reviewed. In particular, they describe a simple solution, based on a trivial extension of the Kalman filter, to the problem of optimal estimation, forecasting, and interpolation for a general class of linear systems.

Next, adaptive Kalman filters are discussed. Basically, adaptive Kalman filters are those modified Kalman filters that can adapt either to unknown noise statistics or to changing system parameters (or changing noise statistics). Under the assumption that all the noises are Gaussian, although with un-