Control Theory and Design. An RH₂ and RH∞ Viewpoint
373 pages · 1997 · English

Preface & Acknowledgments

Robust control theory has been the object of much of the research activity developed over the last fifteen years within the context of linear systems control. At this stage, the results of these efforts constitute a fairly well established part of the scientific community's background, so that the relevant techniques can reasonably be exploited for practical purposes. Indeed, despite their complex derivation, these results are simple to implement and capable of accounting for a number of interesting real-life applications. The demand for including these topics in control engineering courses is therefore both timely and suitable, and it motivated the birth of this book, which covers the basic facts of robust control theory as well as more recent achievements, such as robust stability and robust performance in the presence of parameter uncertainties. The book has been primarily conceived for graduate students and for people first entering this research field. However, the particular care dedicated to didactic matters makes the book suitable also for undergraduate students who are already acquainted with basic systems and control theory. Indeed, the required mathematical background is supplied where necessary. Part of the material collected here has been structured according to the textbook Controllo in RH2-RH∞ (in Italian) by the authors, who are deeply indebted to the publisher Pitagora for having kindly permitted its use. The first five chapters introduce the basic results of RH₂ and RH∞ theory, whereas the last two chapters are devoted to presenting more recent results on robust control theory in a general and self-contained setting. The authors gratefully acknowledge the financial support of the Centro di Teoria dei Sistemi of the Italian National Research Council - CNR, the Brazilian National Research Council - CNPq (under grant 301373/80) and the Research Council of the State of Sao Paulo, Brazil - FAPESP (under grant 90/3607 - 0).
This book is the result of a joint, fruitful and equal scientific cooperation. For this reason, the authors' names appear on the front page in alphabetical order.

Patrizio Colaneri, Milan, Italy
Jose C. Geromel, Campinas, Brazil
Arturo Locatelli, Milan, Italy

Chapter 1 Introduction

Frequency domain techniques have long proved to be particularly fruitful and simple in the design of (linear time-invariant) single-input single-output (SISO) control systems. For many years, the attempts at generalizing such nice techniques to the multi-input multi-output (MIMO) context appeared less appealing. This partially motivated the great deal of interest which was devoted to time-domain design methodologies starting in the early 1960s. Indeed, this stream of research originated a huge number of results of both remarkable conceptual relevance and practical impact, the most celebrated of which is probably linear quadratic gaussian (LQG) design. The merits of such an approach are widely acknowledged: among them, the relatively small computational burden involved in the actual definition of the controller, and the possibility of affecting the dynamical behavior of the control system through a guided sequence of experiments aimed at the proper choice of the parameters of both the performance index (weighting matrices) and the uncertainty description (noise intensities). Equally well known are the limits of the LQG design methodology, the most significant of which is the possible performance decay caused by operating conditions even slightly different from the (nominal) ones referred to in the design stage. Specifically, the lack of robustness of classical LQG design originates from the fact that it does not account for uncertain knowledge or unexpected perturbations of the plant, actuator and sensor parameters.
The need to simultaneously comply with design requirements naturally specified in the frequency domain and to guarantee robustness of the control system in the face of uncertainties and/or parameter deviations focused much of the research activity on the attempt to overcome the traditional and myopic dichotomy between time-domain and frequency-domain approaches. At this stage, after about two decades of intense efforts along these lines, the control system designer can rely on a set of well-established results which give proper answers to the significant questions of performance and stability robustness. The value of the results achieved so far stems partly from the construction of a unique formal theoretical picture which naturally includes both classical LQG design (RH₂ design), revisited in the light of a transfer-function approach, and the new challenging developments of so-called robust design (RH∞ design), which encompasses most of the above-mentioned robustness questions. The design methodologies presented in the book are based on the minimization of a performance index, simply consisting of the norm of a suitable transfer function. A distinctive feature of these techniques is that they do not come up with a unique solution to the design problem; rather, they provide a whole set of (admissible) solutions which satisfy a constraint on the maximum deterioration of the performance index. The attitude of focusing on the class of admissible controllers instead of determining just one of them can be traced back to a fundamental result which concerns the parametrization of the class of controllers stabilizing a given plant. Chapter 3 is dedicated to such a result and deals also with other questions on feedback system stability.
In the subsequent Chapters 4 and 5 the main results of RH₂ and RH∞ design are presented, respectively. In addition, a few distinguishing aspects of the underlying theory are emphasized, together with particular, yet significant, cases of the general problem. Chapter 5 also contains a preliminary discussion of the robustness requirements which motivate the formulation of the so-called standard RH∞ control problem. Chapters 6 and 7 go beyond the previous ones in the sense that the design problems dealt with are set in a more general framework. One of the most interesting examples of this situation is the so-called mixed RH₂/RH∞ problem, which is expressed in terms of both the RH₂ and RH∞ norms of two transfer functions competing with each other to get the best tradeoff between performance and robustness. Other problems that fall into this framework are those related to regional pole placement, time-domain specifications and structural constraints. All of them share basically the same numerical difficulty: they cannot be solved by the methodology given in the previous chapters, but only by means of mathematical programming methods. More specifically, all of them can (after a proper change of variables) be converted into convex problems. This feature is important from both practical and theoretical points of view, since numerical efficiency allows the treatment of real-world problems of generally large dimension while global optimality is always assured. Chapter 7 is devoted to controller design for systems subject to structured convex bounded uncertainties, which model in an adequate and precise way many classes of parametric uncertainties of practical appeal. The associated optimal control problems are formulated and solved jointly with respect to the controller transfer function and the feasible uncertainty, in order to guarantee minimum loss in the performance index.
One such situation of great importance in its own right is the design problem involving actuator failures. Robust stability and performance are addressed for two classes of nonlinear perturbations, leading to what are called the Persidskii and Lur'e designs. In general terms, the same technique involving the reduction of the related optimal control design problems to convex programming problems is again used. The main point to be remarked is that the two classes of nonlinear perturbations considered impose additional linear, and hence convex, constraints on the matrix variables to be determined. Treating these arguments requires a fairly deep understanding of some facts from mathematics not so frequently included in the curricula of students in engineering. Covering the relevant mathematical background is the scope of Chapter 2, where the functional (Hardy) spaces which permeate the whole book are characterized. Some miscellaneous facts on matrix algebra, system and control theory and convex optimization are collected in Appendices A through I.

Chapter 2 Preliminaries

2.1 Introduction

The scope of this chapter is twofold: on the one hand, it is aimed at presenting the extension of the concepts of poles and zeros, well known for single-input single-output (SISO) systems, to the multivariable case; on the other, it is devoted to the introduction of the basic notions relative to some functional spaces whose elements are matrices of rational functions (the spaces RL₂, RL∞, RH₂, RH∞). The reason for this choice stems from the need to present a number of results concerning significant control problems for linear, continuous-time, finite-dimensional and time-invariant systems. The derivation of the related results takes substantial advantage of the nature of the analysis and design methodology adopted; such a methodology was actually developed so as to take into account state-space and frequency-based techniques at the same time.
For this reason, the need to carefully extend to multi-input multi-output (MIMO) systems the notions of zeros and poles, which proved so fruitful in the context of SISO systems, should not be surprising. In Section 2.5, where this extension is made, a few fascinating and in some sense unexpected relations between poles, zeros, eigenvalues, time responses and ranks of polynomial matrices will be put into sharp relief. Analogously, the opportunity of going in depth into the characterization of transfer matrices (transfer functions for MIMO systems) in their natural embedding, namely the complex plane, should be apparent. The systems considered hereafter obviously have rational transfer functions. This leads to the need of providing, in Section 2.8, the basic ideas on suitable functional spaces and linear operators, so as to throw some light on the connections between facts which naturally lie in the time domain and others more suited to the frequency-domain setting. Although the presentation of these two issues is intentionally limited to a few basic aspects, it nevertheless requires some knowledge of matrices of polynomials, matrices of rational functions, singular values and linear operators. Sections 2.3-2.7 are dedicated to the acquisition of such notions.

2.2 Notation and terminology

The continuous-time linear time-invariant dynamic systems which are the object of the present text are described, depending on circumstances, by a state-space representation

ẋ = Ax + Bu
y = Cx + Du

or by their transfer function

G(s) = C(sI − A)⁻¹B + D

The signals which refer to a system are indifferently intended to be in the time domain or in the frequency domain whenever the context does not lead to possible misunderstandings. Sometimes, it is necessary to explicitly stress that the derivation is in the frequency domain.
In this case, the subscript "L" indicates the Laplace transform of the considered signal, whereas the subscript "L0" denotes the Laplace transform when the system state at the initial time is zero (typically, this situation occurs when one thinks in terms of transfer functions). For instance, with reference to the above system, one may write

y_L0 = G(s)u_L
y_L = y_L0 + C(sI − A)⁻¹x(0)

Occasionally, the transfer function G(s) of a system Σ is explicitly related to one of its realizations by writing

G(s) = Σ(A, B, C, D)

or

G(s) = [ A | B ]
       [ C | D ]

The former notation basically has a compactness value, whereas the latter is mainly useful when one wants to display possible partitions in the input and/or output matrices. For example, the system

ẋ = Ax + B₁w + B₂u
z = C₁x + D₁₂u
y = C₂x + D₂₁w

is related to its transfer function G(s) by writing

G(s) = [ A  | B₁   B₂  ]
       [ C₁ | 0    D₁₂ ]
       [ C₂ | D₂₁  0   ]

When a purely algebraic (i.e. nondynamic) system is considered, these notations become G(s) = Σ(0, 0, 0, D) and, respectively, G(s) = D.

Referring to the class of systems considered here, the transfer functions are in fact rational matrices of a complex variable, namely matrices whose generic element is a rational function, i.e. a ratio of polynomials with real coefficients. The transfer function is said to be proper when each element is a proper rational function, i.e. a ratio of polynomials with the degree of the numerator not greater than the degree of the denominator. When this inequality holds in a strict sense for each element of the matrix, the transfer function is said to be strictly proper. Briefly, G(s) is proper if

lim_{s→∞} G(s) = K ,  K < ∞

where the notation K < ∞ means that each element of the matrix K is finite. Analogously, G(s) is strictly proper if

lim_{s→∞} G(s) = 0

A rational matrix G(s) is said to be analytic in Re(s) ≥ 0 (resp. ≤ 0) if all the elements of the matrix are bounded functions in the closed right (resp. left) half plane.
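These computations are easy to reproduce symbolically. The sketch below, for a hypothetical two-state realization chosen only for illustration (the matrices are not from the text), forms G(s) = C(sI − A)⁻¹B + D with sympy and checks properness through the limit for s → ∞, which for a proper transfer function equals the direct term D:

```python
import sympy as sp

s = sp.symbols('s')

# A hypothetical realization, chosen only for illustration
A = sp.Matrix([[0, 1], [-2, -3]])
B = sp.Matrix([[0], [1]])
C = sp.Matrix([[1, 0]])
D = sp.Matrix([[3]])

# G(s) = C (sI - A)^{-1} B + D
G = sp.simplify((C * (s * sp.eye(2) - A).inv() * B + D)[0, 0])

# Proper: the limit for s -> oo is finite; it equals the direct term D
K = sp.limit(G, s, sp.oo)
```

With D = 0 the same limit returns 0, matching the strictly proper case.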
In connection with a system characterized by the transfer function

G(s) = [ A | B ]        (2.1)
       [ C | D ]

the so-called adjoint system has transfer function

G~(s) := G'(−s) = [ −A' | −C' ]
                  [  B' |  D' ]

whereas the transfer function of the so-called transpose system is

G'(s) = [ A' | C' ]
        [ B' | D' ]

System (2.1) is said to be input-output stable if its transfer function G(s) is analytic in Re(s) ≥ 0 (G(s) is stable, for short). It is said to be internally stable if the matrix A is stable, i.e. if all its eigenvalues have negative real parts. Now observe that a system is input-output stable if and only if all elements of G(s), whenever expressed as ratios of polynomials without common roots, have their poles in the open left half plane only. If the realization of system (2.1) is minimal, the system is input-output stable if and only if it is internally stable. Finally, the conjugate transpose of a generic (complex) matrix A is denoted by A*, and, if A is square, λᵢ(A) is its i-th eigenvalue, while

r_s(A) := maxᵢ |λᵢ(A)|

denotes its spectral radius.

2.3 Polynomial matrices

A polynomial matrix is a matrix whose elements are polynomials in a unique unknown. Throughout the book, such an unknown is denoted by the letter s. All the polynomial coefficients are real. Hence, the element n_ij(s) in position (i, j) of the polynomial matrix N(s) takes the form

n_ij(s) = a_ν s^ν + a_{ν−1} s^{ν−1} + ··· + a₁s + a₀ ,  a_k ∈ R ∀k

The degree of a polynomial p(s) is denoted by deg[p(s)]. If the leading coefficient a_ν is equal to one, the polynomial is said to be monic. The rank of a polynomial matrix N(s), denoted by rank[N(s)], is defined by analogy with the rank of a numeric matrix, i.e. it is the dimension of the largest square submatrix which can be extracted from N(s) with determinant not identically zero. A square polynomial matrix is said to be unimodular if it has full rank (it is invertible) and its determinant is constant.
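The unimodularity test amounts to checking that the determinant is a nonzero constant, and this is easy to run symbolically. A minimal sketch with sympy, on an illustrative matrix (not one from the text), which also checks the inverse:

```python
import sympy as sp

s = sp.symbols('s')

# An illustrative polynomial matrix; unimodular iff det is a nonzero constant
N = sp.Matrix([[1, s + 1], [0, 3]])

d = sp.expand(N.det())                     # here det[N(s)] = 3
unimodular = d.is_constant(s) and d != 0

# The inverse of a unimodular matrix is again a polynomial matrix
N_inv = N.inv()
inverse_is_polynomial = all(sp.cancel(e).is_polynomial(s) for e in N_inv)
```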
Example 2.1 The polynomial matrices

N₁(s) = [ 1  s+1 ]      N₂(s) = [ s+1  s−2 ]
        [ 0   3  ]              [ s+2  s−1 ]

are unimodular, since det[N₁(s)] = det[N₂(s)] = 3.

A very peculiar property of a unimodular matrix is that its inverse is still a polynomial (and obviously unimodular) matrix. Not differently from what is usually done for polynomials, polynomial matrices can be given the concepts of divisor and greatest common divisor as well.

Definition 2.1 (Right divisor) Let N(s) be a polynomial matrix. A square polynomial matrix R(s) is said to be a right divisor of N(s) if

N(s) = N̄(s)R(s)

with N̄(s) a suitable polynomial matrix.

An analogous definition can be formulated for the left divisor.

Definition 2.2 (Greatest common right divisor) Let N(s) and D(s) be polynomial matrices with the same number of columns. A square polynomial matrix R(s) is said to be a greatest common right divisor (GCRD) of (N(s), D(s)) if:

i) R(s) is a right divisor of both D(s) and N(s), i.e.

N(s) = N̄(s)R(s) ,  D(s) = D̄(s)R(s)

with N̄(s) and D̄(s) suitable polynomial matrices;

ii) for each polynomial matrix R̂(s) such that

N(s) = N̂(s)R̂(s) ,  D(s) = D̂(s)R̂(s)

with N̂(s) and D̂(s) polynomial matrices, it turns out that R(s) = W(s)R̂(s), where W(s) is again a suitable polynomial matrix.

A similar definition can be formulated for the greatest common left divisor (GCLD). It is easy to see, by exploiting the properties of unimodular matrices, that, given two polynomial matrices N(s) and D(s), there exist infinitely many GCRDs (and obviously GCLDs). A way to compute a GCRD (resp. GCLD) of two assigned polynomial matrices N(s) and D(s) relies on their manipulation through a unimodular matrix which represents a sequence of suitable elementary operations on their rows (resp. columns). The elementary operations on the rows (resp. columns) of a polynomial matrix N(s) are:

1) interchange of the i-th row (resp. column) with the j-th row (resp. column);

2) multiplication of the i-th row (resp. column) by a nonzero scalar;

3) addition of a polynomial multiple of the i-th row (resp. column) to the j-th row (resp. column).

It is readily seen that each elementary operation can be performed by premultiplying (resp. postmultiplying) N(s) by a suitable polynomial and unimodular matrix T(s). Moreover, the matrix T(s)N(s) (resp. N(s)T(s)) turns out to have the same rank as N(s).

Remark 2.1 Notice that, given two polynomials r₀(s) and r₁(s) with deg[r₀(s)] ≥ deg[r₁(s)], it is always possible to define two sequences of polynomials {rᵢ(s), i = 2, 3, …, p+2} and {qᵢ(s), i = 1, 2, …, p+1}, with 0 ≤ p ≤ deg[r₁(s)], such that

rᵢ(s) = q_{i+1}(s) r_{i+1}(s) + r_{i+2}(s) ,  i = 0, 1, …, p
deg[r_{i+2}(s)] < deg[r_{i+1}(s)] ,  i = 0, 1, …, p
r_{p+2}(s) = 0

Letting

Πᵢ(s) := [ r_{i−1}(s) ]
         [ rᵢ(s)      ]

Tᵢ(s) := [ 1  −qᵢ(s) ]  for i odd ,   Tᵢ(s) := [   1     0 ]  for i even
         [ 0    1    ]                         [ −qᵢ(s)  1 ]

T(s) := T_{p+1}(s) T_p(s) ··· T₁(s)

and noticing that T(s) is unimodular (being a product of unimodular matrices), it turns out that

T(s)Π₁(s) = [ r_{p+1}(s) ]  for p odd ,   T(s)Π₁(s) = [     0      ]  for p even
            [     0      ]                            [ r_{p+1}(s) ]

For instance, take r₀(s) = s³ + 2s² − s + 2 and r₁(s) = s² + s − 2. It follows that q₁(s) = s + 1, r₂(s) = 4, q₂(s) = (s² + s − 2)/4 and r₃(s) = 0, so that p = 1.

By repeatedly exploiting the facts shown in Remark 2.1, it is easy to verify that, given a polynomial matrix N(s) with the number of rows not smaller than the number of columns, there exists a suitable polynomial and unimodular matrix T(s) such that

T(s)N(s) = [ R(s) ]
           [  0   ]

where R(s) is a square polynomial matrix.

Algorithm 2.1 (GCRD of two polynomial matrices) Let N(s) and D(s) be two polynomial matrices with the same number, say m, of columns and with n_n and n_d rows, respectively.

1) Assume that m ≤ n_d + n_n; otherwise go to step 4).
Let P(s) := [D'(s) N'(s)]' and determine a polynomial and unimodular matrix T(s) such that

T(s)P(s) = [ R(s) ]
           [  0   ]

where R(s) has m rows. Notice that T(s) can be partitioned as

T(s) := [ T_d1(s)  T_n1(s) ]
        [ T_d2(s)  T_n2(s) ]

where T_d1(s) and T_d2(s) have n_d columns.

2) Letting S(s) := T⁻¹(s) and writing

S(s) := [ S_d1(s)  S_d2(s) ]
        [ S_n1(s)  S_n2(s) ]

where the blocks S_d1(s) and S_n1(s) have m columns, it turns out that

D(s) = S_d1(s)R(s) ,  N(s) = S_n1(s)R(s)

so that R(s) is a right divisor of both D(s) and N(s).

3) It also holds that

R(s) = T_d1(s)D(s) + T_n1(s)N(s)        (2.2)

Hence, suppose that R̂(s) is any other right divisor of both D(s) and N(s), so that, for some polynomial matrices D̂(s) and N̂(s), D(s) = D̂(s)R̂(s) and N(s) = N̂(s)R̂(s). The substitution of these two expressions into eq. (2.2) leads to

R(s) = [T_d1(s)D̂(s) + T_n1(s)N̂(s)]R̂(s)

so that R(s) is a GCRD of (N(s), D(s)).

4) If m > n_d + n_n, take the two matrices

D̄(s) := [ I  0  0 ] ,  N̄(s) := [ 0  I  0 ]

both with m columns and with n_d and n_n rows, respectively, and let

R(s) := [ D(s) ]
        [ N(s) ]        (2.3)
        [  0   ]

where the zero block has m − n_d − n_n rows. Thus D(s) = D̄(s)R(s) and N(s) = N̄(s)R(s), hence R(s) is a right divisor of both D(s) and N(s). Assume now that R̂(s) is any other right divisor, i.e. there exist two polynomial matrices D̂(s) and N̂(s) such that D(s) = D̂(s)R̂(s) and N(s) = N̂(s)R̂(s). By substituting these two last expressions into eq. (2.3) one obtains

R(s) = [ D̂(s) ]
       [ N̂(s) ] R̂(s)
       [  0   ]

leading to the conclusion that R(s) is a GCRD of (N(s), D(s)).
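The scalar division chain of Remark 2.1, which underlies the row reduction used by Algorithm 2.1, can be reproduced directly with sympy's polynomial division. A minimal sketch on the polynomial pair used in the remark:

```python
import sympy as sp

s = sp.symbols('s')

# The pair from Remark 2.1; the chain terminates with a zero remainder
r0 = s**3 + 2*s**2 - s + 2
r1 = s**2 + s - 2

q1, r2 = sp.div(r0, r1, s)   # r0 = q1*r1 + r2, with q1 = s + 1, r2 = 4
q2, r3 = sp.div(r1, r2, s)   # r1 = q2*r2 + r3, with r3 = 0

# Here p = 1: r2 is a nonzero constant, so gcd(r0, r1) is a constant
```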
Example 2.2 Consider the matrices

D(s) = [ s² − s         s²           ]
       [ 2s² + 9s + 5   2s² + 5s + 5 ]

N(s) = [ s² + 1   s² + 2s + 1 ]

A sequence of eight elementary row operations, represented by the unimodular matrices T₁(s), …, T₈(s), reduces P(s) := [D'(s) N'(s)]'. Letting T(s) := T₈(s)T₇(s) ··· T₁(s), one obtains

T(s) = [ (3s − 2)/2                 s/6                      (6 − 11s)/6            ]
       [ −(24s + 93)/103            (3s − 14)/103            (18s + 70)/103         ]
       [ (112s² + 252s + 196)/103   (−14s² + 28s + 14)/103   −(84s² + 70s + 70)/103 ]

so that

T(s)P(s) = [ 1   (−17s² + 6s + 6)/6 ]
           [ 0           s          ]
           [ 0           0          ]

and

R(s) = [ 1   (−17s² + 6s + 6)/6 ]
       [ 0           s          ]

Finally, notice that

S(s) := T⁻¹(s) = [ S_d1(s)  S_d2(s) ]
                 [ S_n1(s)  S_n2(s) ]
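As a check on Example 2.2, the computed R(s) can be verified to be a right divisor of both D(s) and N(s): the quotient P(s)R⁻¹(s) must be a polynomial matrix. A minimal sketch with sympy, using the example's matrices:

```python
import sympy as sp

s = sp.symbols('s')

# D(s), N(s) and the GCRD R(s) from Example 2.2
D = sp.Matrix([[s**2 - s, s**2],
               [2*s**2 + 9*s + 5, 2*s**2 + 5*s + 5]])
N = sp.Matrix([[s**2 + 1, s**2 + 2*s + 1]])
R = sp.Matrix([[1, (-17*s**2 + 6*s + 6)/6],
               [0, s]])

# Stack P = [D; N] and form the quotient P R^{-1} = [D_bar; N_bar]
P = D.col_join(N)
Q = (P * R.inv()).applyfunc(sp.cancel)

# R is a right divisor of D and N iff every quotient entry is a polynomial
divides = all(q.is_polynomial(s) for q in Q)
```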
