Feedback Capacity over Networks

Bo Li and Guodong Shi∗†

∗The two authors contributed equally to this work.
†B. Li is with the Key Lab of Mathematics Mechanization, Chinese Academy of Sciences, Beijing 100190, China. G. Shi is with the Research School of Engineering, The Australian National University, ACT 0200, Canberra, Australia. Email: [email protected], [email protected]

Abstract

In this paper, we investigate the fundamental limitations of the feedback mechanism in dealing with uncertainties for network systems. The study of the maximum capability of feedback control was pioneered in Xie and Guo (2000) for scalar systems with nonparametric nonlinear uncertainty. In a network setting, nodes with unknown and nonlinear dynamics are interconnected through a directed interaction graph. Nodes can design feedback controls based on all available information, where the objective is to stabilize the network state. Using information structure and decision pattern as criteria, we specify three categories of network feedback laws, namely the global-knowledge/global-decision, network-flow/local-decision, and local-flow/local-decision feedback. We establish a series of network capacity characterizations for these three fundamental types of network control laws. First of all, we prove that for global-knowledge/global-decision and network-flow/local-decision control, where nodes know the information flow across the entire network, there exists a critical number (3/2+√2)/‖A_G‖_∞, where 3/2+√2 is known as the Xie-Guo constant and A_G is the network adjacency matrix, defining exactly how much uncertainty in the node dynamics can be overcome by feedback. Interestingly enough, the same feedback capacity can be achieved under max-consensus enhanced local flows, where nodes only observe information flows from neighbors as well as the extreme (max and min) states in the network. Next, for local-flow/local-decision control, we prove that there exists a structure-determined value serving as a lower bound of the network feedback capacity. These results reveal the important connection between network structure and the fundamental capabilities of in-network feedback control.

Keywords: adaptive control, nonlinear systems, feedback mechanism, network systems

1 Introduction

1.1 Background

Lying at the heart of practicing and understanding control systems has been the feedback mechanism. Today it is recognized that the first systematic study of feedback control was made by J. C. Maxwell in 1868 on pendulum governors [1]. Invented by J. Watt to control his steam engine, the so-called fly-ball governor senses engine speed via the spinning angle of two weighted balls, and in the meantime adjusts the steam valve through levers connected to the balls [2]. The basic idea of feedback has since been clear from this historical example: the behaviour of a dynamical system can be regulated by feeding the outputs of the system back to its inputs; in particular, via feedback, unknown disturbances can be rejected to a desired level at the output end. How to design and optimize feedback controllers that can maximally reduce the effects of internal or external uncertainty has become a central theme in the field of automatic control [3].

The influence of external uncertainty such as disturbances and sensor noise can be well and conveniently understood by classical frequency-based methods [2].
Treatments of internal and structural uncertainties, which are ubiquitous in real-world plants, are however far more challenging. There are two parallel but related major research paths along which celebrated results have been developed for discrete-time or continuous-time, linear or nonlinear, and autonomous or time-varying systems. Robust control synthesis [4, 5, 6, 7, 8] characterizes uncertainty within a prescribed (often compact) set around the true plant, and controllers are often designed to optimize certain performance metrics induced by the uncertainty neighborhood, e.g., maximizing performance for worst-case scenarios. Adaptive control methodology [9, 10, 11, 12, 13, 14] utilizes online estimation techniques based on the input-output signals, where controllers are adjusted in real time according to the estimation outcomes.

The study of feedback control has been pushed forward to a new network era in the past decade, inspired by the emergence of a variety of dynamical systems over complex networks. The need to carry out control and sensing over communication channels has led to the introduction of information theory to the study of control systems. Fundamental results have been established for the necessary data rate between the sensor and actuator for stabilizing a plant [15, 16, 17, 18, 19], and for the performance of control and estimation over lossy or noisy channels [20, 21]. Moreover, the notion of distributed control [22] sparked a tremendous amount of work aiming at robust and scalable solutions for a large number of interconnected nodes to achieve collective goals ranging from consensus and formation to optimization and computation [23, 24, 25, 26, 27]. Multi-agent control has evolved into a discipline in its own right [28], and has even been generalized to the control of quantum networks [29]. Of particular interest is also the study of network controllability [30, 31, 32], focusing on how interaction structures influence controllability when measurement and control take place at a few selected nodes.

1.2 Motivation

Beyond the tremendous success of in-network control design [28], it is equally important to understand the limitations of the feedback mechanism over network dynamics facing uncertainty. More specifically, a clear characterization of the capacity of the feedback mechanism over a network in dealing with uncertainty, for centralized and distributed controllers respectively, will help us understand the boundaries of controlling complex networks from a theoretical perspective.

In the seminal work [33], Xie and Guo established fundamental results on the capability of the feedback mechanism under nonparametric nonlinear uncertainty for the following discrete-time model

y(t+1) = f(y(t)) + u(t) + w(t),  t = 0,1,...

where y(t), u(t), and w(t) are real numbers representing output, control, and disturbance, respectively. It was shown in [33] that with a completely unknown plant model f(·) : R → R and a bounded but unknown disturbance signal w(t), a necessary and sufficient condition for the existence of a stabilizing feedback control of the above system is that a type of Lipschitz norm of f(·) be strictly smaller than 3/2+√2. This number, now referred to as the Xie-Guo constant in the literature, points to fundamental limitations of all feedback laws. Generalizations have been made for a few types of parametric models for which the corresponding feedback capabilities can be characterized [34, 35, 36, 37].
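To fix ideas, the sketch below simulates the scalar model above with a simple nearest-neighbour, certainty-equivalence style controller. It is only an illustration of the feedback mechanism at work: the linear choice of f, the noise level, and the control rule are our own assumptions, not the controller constructed in [33].

```python
import numpy as np

# Illustrative simulation of y(t+1) = f(y(t)) + u(t) + w(t) with a simple
# nearest-neighbour certainty-equivalence controller (a sketch only; not the
# feedback law constructed in Xie and Guo (2000)).

rng = np.random.default_rng(0)
L = 1.5                       # assumed growth rate of the unknown map
f = lambda x: L * x           # the "unknown" nonlinearity, hidden from the controller
T, w_star = 200, 0.1

y, u = np.zeros(T + 1), np.zeros(T)
y[0] = 1.0
for t in range(T):
    if t > 0:
        # estimate f(y(t)) by the recorded response at the closest past state
        s = int(np.argmin(np.abs(y[:t] - y[t])))
        f_hat = y[s + 1] - u[s]          # equals f(y(s)) + w(s): a noisy estimate
        u[t] = -f_hat                    # certainty-equivalence cancellation
    w = rng.uniform(-w_star, w_star)
    y[t + 1] = f(y[t]) + u[t] + w

print("largest |y(t)| along the run:", np.max(np.abs(y)))
```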
Naturally, we wonder: (i) Would such a feedback capacity critical value exist for a network system? (ii) If it indeed exists, how would it depend on the network structure? (iii) How would feedback capacity differ between centralized and distributed controllers? Answers to these questions will add to our fundamental understanding of the control of networked systems, and of the feedback mechanism itself.

1.3 Main Results

We consider a network setting of the nonparametric uncertainty model in [33], where nodes with unknown nonlinear self-dynamics are interconnected through a directed interaction graph. For ease of presentation the dynamics of the nodes are assumed to be identical, corresponding to homogeneous networks. The interaction graph defines neighbor relations among the nodes, along which measurement and control take place. Nodes can design any feedback controller using the information they have, and the objective is to stabilize the entire network, i.e., every node state in the network.

Three basic categories of feedback laws over such networks are carefully specified. In global-knowledge/global-decision feedback, every node knows the network structure (interaction graph) and the network information flow, and nodes can coordinate to make control decisions; in network-flow/local-decision feedback, each node only knows the network information flow and makes decisions individually; in local-flow/local-decision feedback, nodes only know the information flows of their neighbors and then make their own control decisions. Note that various existing distributed controllers and algorithms can naturally be put into one of the three categories.

A series of network feedback capacity results has been established:

(i) For global-knowledge/global-decision and network-flow/local-decision control, the generic network feedback capacity is fully captured by a critical value

(3/2+√2)/‖A_G‖_∞

where A_G is the network adjacency matrix.

(ii) For local-flow/local-decision control, there exists a structure-determined value serving as a lower bound of the network feedback capacity.

(iii) The network flow can be replaced by max-consensus enhanced local flows, where nodes only observe information flows from their neighbors as well as the network extreme (max and min) states via max-consensus, and then the same feedback capacity can be reached.

Additionally, for strongly connected graphs, we manage to establish a universal impossibility theorem on the existence of stabilizing feedback laws.

1.4 Paper Organization

The remainder of the paper is organized as follows. Section 2 introduces the network model and defines the problem of interest. Section 3 presents the main results, followed by Section 4 presenting the network stabilizing controllers. Section 5 provides the proofs of all the statements. Finally, Section 6 concludes the paper with a few remarks pointing out interesting future directions.

Notation: The set of real numbers is denoted by R, and the set of integers is denoted by Z. A sequence a_0, a_1, ... is abbreviated as ⟨a_t⟩_{t≥0}. For any real number a, (a)^+ is defined as (a)^+ = max{a, 0}. For convenience we use dist(X,Y) to denote the distance between two sets X and Y in R, given by dist(X,Y) = inf_{x∈X, y∈Y} |x−y|, and simply dist(a,Y) := inf_{y∈Y} |a−y| and dist(a,b) = |a−b| for real numbers a and b.

2 The Model

2.1 Network Dynamics with Uncertainty

Consider a network with n nodes indexed in the set V = {1,...,n}. The network interconnection structure is represented by a directed graph G = (V, E), where E is the arc set.
Each arc (i,j) in the set E is an ordered pair of two nodes i,j ∈ V, and a link (i,i) is allowed at each node i, defining a self-arc. The neighbors of node i, i.e., the nodes by which node i can be influenced, are defined as the set N_i := {j : (j,i) ∈ E}. Let a_ij ∈ R be a real number representing the weight of the directed arc (j,i) for i,j ∈ V. The arc weights a_ij comply with the network structure in the sense that a_ij ≠ 0 if and only if (j,i) ∈ E. Let A_G be the adjacency matrix of the graph G with [A_G]_ij = a_ij.

Time is slotted at t = 0,1,2,.... Each node i holds a state x_i(t) ∈ R at time t. The dynamics of the x_i(t) are described by

x_i(t+1) = Σ_{j∈N_i} a_ij f(x_j(t)) + u_i(t) + w_i(t),  i = 1,...,n,   (1)

where f is a function mapping from R to R, u_i(t) ∈ R is the control input, and w_i(t) ∈ R is the disturbance. We impose the following standing assumptions.

Figure 1: A graphical diagram of the considered network model: (i) the interaction structure forms a directed graph where nodes are influenced by their in-neighbors and influence their out-neighbors; (ii) node interaction rules are governed by completely unknown nonlinear dynamics, and link width indicates the weight (strength) of interactions; (iii) control inputs are applied to individual nodes subject to unknown disturbances.

Assumption 1. (Dynamics Uncertainty) The function f is unknown, and the arc weight a_ij is known to node i.

Assumption 2. (Disturbance Boundedness) The process disturbance w_i(t) is unknown but bounded, i.e., there exists w∗ > 0 such that |w_i(t)| ≤ w∗ for all t and for all i ∈ V; further, the bound w∗ is unknown.

Assumption 3. (Feasible Estimates) At time t+1, each node i has access to z_i(t) ∈ R satisfying

|f(x_i(t)) − z_i(t)| ≤ D_0   (2)

for all i ∈ V and for all t = 0,1,..., where D_0 is an unknown constant.

Assumptions 1–2 are quite natural and general. Assumption 3 is, on the other hand, a technical assumption designed to ease our presentation while maintaining the essence of the problem. Practically, z_i(t) allows node i to observe, at time t+1, an estimate of f(x_i(t)), i.e., of the excitation of its plant at the previous state x_i(t), which is certainly not a hard requirement. We will show that with global information, such required estimates of f(x_i(t)) can in many cases actually be obtained from the node states; with local information, such a requirement relies on local node observations and is essential to form a feasible problem scope. Assumptions 1–3 are adopted throughout the remainder of the paper without further specific mention. An illustration of this dynamical network model can be seen in Fig. 1.
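For concreteness, the following open-loop sketch simulates the dynamics (1) on a hypothetical three-node weighted digraph, with the controls left at zero. The choice of f, the arc weights, and the noise level are illustrative assumptions only.

```python
import numpy as np

# Minimal open-loop sketch of the network dynamics (1); the map f, the arc
# weights, and the disturbance level below are purely illustrative.

rng = np.random.default_rng(1)
A_G = np.array([[0.0, 0.5, 0.0],      # [A_G]_ij = a_ij, nonzero iff (j, i) is an arc
                [0.3, 0.0, 0.4],
                [0.0, 0.6, 0.2]])
f = lambda x: np.sin(x) + 0.8 * x     # stands in for the unknown node dynamics
n, T, w_star = 3, 50, 0.05

X = np.zeros((T + 1, n))
X[0] = rng.normal(size=n)
U = np.zeros((T, n))                  # controls kept at zero in this sketch
for t in range(T):
    W = rng.uniform(-w_star, w_star, size=n)
    # row i of A_G @ f(X[t]) equals the sum of a_ij * f(x_j(t)) over j in N_i
    X[t + 1] = A_G @ f(X[t]) + U[t] + W
```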
2.2 Feedback Laws over Networks

We now classify all possible network feedback control laws into categories determined by information patterns and decision structures. Such a classification is not straightforward at all, bearing the following questions in mind:

(i) (Knowledge) How much would nodes know about the network itself, e.g., the number of nodes n, the network connectivity, or even the network topology G?

(ii) (Flows) How much would nodes know about the network information flows, e.g., the availability of x_i(t), z_i(t), and u_i(t) to a neighbor, or a neighbor's neighbor, of node i?

(iii) (Decisions) To what extent could nodes cooperate in determining the control actions, e.g., can a node i tell a neighbor j to stand by with u_j(t) = 0 at time t while node i implements its own control input u_i(t)?

Different answers to these questions lead to drastically different scopes of network control rules. In this paper, we focus on a few fundamental forms of network feedback laws that, from a theoretical perspective, represent a variety of network control and computation results in the literature.

Denote X(t) = (x_1(t) ... x_n(t))^⊤, Z(t) = (z_1(t) ... z_n(t))^⊤, and U(t) = (u_1(t) ... u_n(t))^⊤. The following definition specifies network and local flows.

Definition 1 The network flow vector up to time t is defined as

Θ(t) := (X(0),...,X(t); Z(0),...,Z(t−1); U(0),...,U(t−1))^⊤.

The local network flow vector for node i up to time t is defined as

Θ_i(t) := (x_j(0),...,x_j(t); z_j(0),...,z_j(t−1); u_j(0),...,u_j(t−1) : j ∈ N_i ∪ {i})^⊤.

Note that here we have assumed that x_i(t), z_i(t), and u_i(t) are known to node i even if it does not hold a self-arc (i,i) ∈ E (and therefore i ∉ N_i). This is indeed quite natural and general, and it simplifies the presentation considerably.

2.2.1 Global-Knowledge/Global-Decision Feedback

Recall that A_G is the adjacency matrix of the graph G. Network controllers that have omniscient narration and omnipotent actuators at all nodes are certainly of primary interest.

Definition 2 A network control rule in the form of

U(t) = h_t(Θ(t); A_G),  t = 0,1,...   (3)

where h_t is an arbitrary function mapping from R^{n(3t+1)} to R^n, with A_G being common knowledge, is termed a Global-Knowledge/Global-Decision Feedback Law for the network system (1).

To implement a global-knowledge/global-decision network control, one requires a network operator who knows the structure of the network (topology and arc weights), collects states and signals across the entire network, and then enforces control decisions on each individual node.

2.2.2 Network-Flow/Local-Decision Feedback

Knowing the network flow, nodes can still carry out individual control decisions even without knowledge of the entire network structure G. This incurs restrictions on feasible control rules, leading to the following definition.

Definition 3 A network control rule in the form of

u_i(t) = h_t^i(Θ(t); [A_G]_ij, j ∈ N_i)   (4)

where, independently of other nodes, h_t^i is an arbitrary function mapping from R^{n(3t+1)} to R for any t = 0,1,..., is termed a Network-Flow/Local-Decision Feedback Law for the network system (1).

The h_t^i being independent means that a node m can determine its control rule h_t^m without knowing or influencing the exact control decision values at any other node and at any given time. The following example helps clarify the ambiguity in the notion of independent decisions.

Example 1. Consider two nodes 1 and 2. The following control rule, with q_t being a function of proper dimension for its argument,

u_1(t) = q_t(Θ(t)),
u_2(t) = 1 − q_t(Θ(t)),   (5)

implicitly enforces the identity

u_1(t) + u_2(t) = 1

and therefore can only be implemented if the two nodes coordinate their respective inputs. In this sense (5) is a global-knowledge/global-decision feedback rather than a network-flow/local-decision feedback law. □
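As an informal illustration of the information patterns behind Definitions 1–3, and of the independence requirement highlighted by Example 1, the sketch below assembles the flow vectors as plain containers and lets each node apply its own rule to the shared network flow. The data layout and the toy rule are our own assumptions, not constructions from the paper.

```python
from typing import Dict, List

# Hypothetical sketch of the objects in Definitions 1-3. Theta(t) bundles the
# histories X(0..t), Z(0..t-1), U(0..t-1); Theta_i(t) is its restriction to
# N_i and node i. A network-flow/local-decision rule may read all of Theta(t)
# and node i's own weights, but its value is not tied to other nodes' decisions.

Flow = Dict[str, List[List[float]]]

def network_flow(X_hist: List[List[float]], Z_hist: List[List[float]],
                 U_hist: List[List[float]]) -> Flow:
    """Theta(t): the full histories, shared by every node."""
    return {"X": X_hist, "Z": Z_hist, "U": U_hist}

def local_flow(theta: Flow, neighbors: List[int], i: int) -> Flow:
    """Theta_i(t): the same histories restricted to N_i and node i itself."""
    keep = sorted(set(neighbors) | {i})
    return {k: [[row[j] for j in keep] for row in rows] for k, rows in theta.items()}

def h_i(theta: Flow, a_row: List[float]) -> float:
    """A toy network-flow/local-decision rule for node i: it uses Theta(t) and
    node i's own weights a_ij only, independently of the other nodes' rules."""
    z_last = theta["Z"][-1] if theta["Z"] else [0.0] * len(a_row)
    return -sum(a * z for a, z in zip(a_row, z_last))  # cancel the estimated drift
```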
2.2.3 Local-Flow/Local-Decision Feedback

The notion of distributed control consists of three basic elements [28]: nodes only have local knowledge of the network structure; nodes only receive information from and send information to a few neighbors; control decisions are computed by each node independently. Inspired by these criteria we impose the following definition.

Definition 4 Any feedback control rule in the form of

u_i(t) = h_t^i(Θ_i(t); [A_G]_ij, j ∈ N_i)   (6)

with h_t^i : R^{|N_i∪{i}|(3t+1)} → R being an arbitrary function independent of other nodes, is termed a Local-Flow/Local-Decision Feedback Law for the network system (1).

The three classes of network feedback laws are certainly not disjoint. In fact, the set of global-knowledge/global-decision feedback laws contains the set of network-flow/local-decision feedback laws, which in turn contains the set of local-flow/local-decision feedback laws.

2.3 Network Stabilizability

We are interested in the existence of feedback control laws that stabilize the network dynamics (1) in closed loop, as formalized in the following definition.

Definition 5 A feedback law stabilizes the network dynamics (1) if there holds

sup_{t≥0} (|x_i(t)| + |u_i(t)|) < ∞,  ∀i ∈ V   (7)

for the closed-loop system.

It is easy to see that the stabilizing condition sup_{t≥0}(|x_i(t)| + |u_i(t)|) < ∞ is simply equivalent to sup_{t≥0}|x_i(t)| < ∞.

2.4 Discussions

In this subsection, we provide some additional discussions on a suitable function space for the node dynamics model f and on the possible mechanisms by which nodes obtain the estimates z_i(t).

2.4.1 Function Space

We need a metric quantifying the uncertainty in the node dynamics model f. Let F denote the space containing all R → R functions, where the f ∈ F are equipped with a quasi-norm defined by¹

‖f‖_q := lim_{α→∞} sup_{x,y∈R} |f(x) − f(y)|/(|x − y| + α).

Define

F_L := {f ∈ F : ‖f‖_q ≤ L}

as the subspace of F consisting of functions bounded by L > 0 under the quasi-norm ‖·‖_q. Functions in F_L can certainly be discontinuous, but they are closely related to Lipschitz continuous functions. The following lemma holds, whose proof can be found in [33].

¹We refer to [33] for a thorough explanation of this quasi-norm and the resulting function space F.

Lemma 1 Let ‖f‖_q ≤ L. Then for any η > 0, there exists c ≥ 0 such that

|f(x) − f(y)| ≤ (L+η)|x−y| + c,  ∀x,y ∈ R.   (8)

Based on Lemma 1, the set Γ_L(f) := {(η,c) : Eq. (8) holds} is nonempty for any f ∈ F_L. We further define, for any f ∈ F_L and r > L,

W_f(r) := inf{c : L+η < r, (η,c) ∈ Γ_L(f)}.   (9)

2.4.2 Estimation z_i(t)

In vector form, the system (1) can be written as

(x_1(t+1), ..., x_n(t+1))^⊤ = A_G (f(x_1(t)), ..., f(x_n(t)))^⊤ + (u_1(t), ..., u_n(t))^⊤ + (w_1(t), ..., w_n(t))^⊤.   (10)

Then, if A_G is nonsingular, at time t+1,

Z(t) = A_G^{-1}(X(t+1) − U(t)) = (f(x_1(t)), ..., f(x_n(t)))^⊤ + A_G^{-1}(w_1(t), ..., w_n(t))^⊤   (11)

provides an estimate of (f(x_1(t)), ..., f(x_n(t)))^⊤ that satisfies Assumption 3. In other words, with the knowledge of a nonsingular adjacency matrix, the required estimates z_i(t) can be calculated at each node i entirely from the history of the network states and inputs.

Remark 1 We would like to point out that with A_G being singular, at time t+1, (11) can be replaced by the least-squares estimate of (f(x_1(t)), ..., f(x_n(t)))^⊤ in (10), where again only the information of X(t+1) and U(t) is used. The resulting estimate might not satisfy Assumption 3, but potentially it can be equally useful in designing feedback controllers.
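Numerically, the estimate (11) and the least-squares alternative of Remark 1 amount to a single linear solve. The following is a minimal sketch (assuming numpy; the function name and the rank test are our own choices):

```python
import numpy as np

# Sketch of the estimate (11): with A_G known, recover the vector of values
# f(x_i(t)) from X(t+1) and U(t), up to the disturbance term. For a singular
# A_G, fall back to the least-squares estimate mentioned in Remark 1.

def estimate_Z(A_G: np.ndarray, X_next: np.ndarray, U: np.ndarray) -> np.ndarray:
    rhs = X_next - U
    if np.linalg.matrix_rank(A_G) == A_G.shape[0]:
        return np.linalg.solve(A_G, rhs)              # nonsingular case, Eq. (11)
    return np.linalg.lstsq(A_G, rhs, rcond=None)[0]   # least-squares fallback
```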
On the other hand, with a local information structure, it is certainly hard or even impossible for the nodes to calculate the required z_i(t) purely from the state and input histories of their neighbors. Therefore, z_i(t) can only be a direct observation of f(x_i(t)) at the plant of node i at time t+1, which is reasonable because this observation happens after the excitation of the plant at time t and noisy observations are allowed. This is independent of whether there is a self-link at node i, since it is f(x_i(t)) that influences the nodes holding i as a neighbor. The estimates z_i(t) at the individual nodes can then of course be shared within each neighborhood, leading to the local information flow vector Θ_i(t+1) at each node i at time t+1.

3 Network Stabilizability Theorems

In this section, we present a series of possibility and/or impossibility results for the stabilizability of the network dynamics (1) under the three categories of feedback laws.

3.1 Global-Knowledge/Global-Decision Feedback

With global-knowledge/global-decision feedback, it turns out that the infinity norm ‖A_G‖_∞ of the adjacency matrix A_G, i.e.,

‖A_G‖_∞ = max_{i∈V} Σ_{j∈N_i} |[A_G]_ij|,

plays a critical role.

Recall that D_0 is the estimation bound for z_i(t) introduced in (2) and W_f(·) is the function defined in (9). The following theorem characterizes a generic fundamental limit for the capacity of global-knowledge/global-decision feedback laws.

Theorem 1 (Generic Fundamental Limit) Consider F_L in the function space F. Then there exists a generic Global-Knowledge/Global-Decision feedback law that stabilizes the network dynamics (1) if and only if

L < (3/2+√2)/‖A_G‖_∞.

To be precise, the following statements hold.

(i) If L < (3/2+√2)/‖A_G‖_∞, then there exists a global-knowledge/global-decision feedback control law that stabilizes the system (1) for all f ∈ F_L and for all interaction graphs G. In fact, with L < (3/2+√2)/‖A_G‖_∞ we can find a global-knowledge/global-decision feedback control law that ensures

limsup_{t→∞} |x_i(t)| ≤ (W_f((3/2+√2)/‖A_G‖_∞) + D_0)‖A_G‖_∞ + w∗,  ∀i ∈ V.

(ii) If L ≥ (3/2+√2)/‖A_G‖_∞, then for any global-knowledge/global-decision feedback law (3) and any initial value X(0), there exist an interaction graph G and a function f ∈ F_L under which there holds

limsup_{t→∞} max_{i∈V} |x_i(t)| = ∞.

Note that the critical value (3/2+√2)/‖A_G‖_∞ established in Theorem 1 is for general interaction graphs. In fact, as will be shown in its proof, the graph constructed for the necessity part is a very special one containing exactly one self-arc. For a given graph G, e.g., a complete graph or a directed cycle, it is certainly possible that the corresponding network dynamics are stabilizable even with L ≥ (3/2+√2)/‖A_G‖_∞. Finding such feedback capacity values for any given interaction graph seems to be rather challenging, as illustrated in the following example.
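As a quick numerical aside, the critical value in Theorem 1 is straightforward to evaluate for any given weighted adjacency matrix, since ‖A_G‖_∞ is the maximum absolute row sum; the matrix below is a hypothetical example.

```python
import numpy as np

# Evaluate the critical value (3/2 + sqrt(2)) / ||A_G||_inf of Theorem 1 for a
# hypothetical weighted adjacency matrix; ||A_G||_inf is the max absolute row sum.

A_G = np.array([[0.0, 0.5, 0.0],
                [0.3, 0.0, 0.4],
                [0.0, 0.6, 0.2]])
inf_norm = np.max(np.abs(A_G).sum(axis=1))     # ||A_G||_inf
L_crit = (1.5 + np.sqrt(2.0)) / inf_norm       # critical bound on ||f||_q
print(f"||A_G||_inf = {inf_norm:.3f}, critical L = {L_crit:.3f}")
```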