OPTIMAL LOW-RANK DYNAMIC MODE DECOMPOSITION

Patrick Héas and Cédric Herzet
INRIA Centre Rennes-Bretagne Atlantique, Campus universitaire de Beaulieu, 35000 Rennes, France

ABSTRACT

Dynamic Mode Decomposition (DMD) has emerged as a powerful tool for analyzing the dynamics of non-linear systems from experimental datasets. Recently, several attempts have extended DMD to the context of low-rank approximations. This extension is of particular interest for reduced-order modeling in various applicative domains, e.g., for climate prediction, to study molecular dynamics or micro-electromechanical devices. This low-rank extension takes the form of a non-convex optimization problem. To the best of our knowledge, only sub-optimal algorithms have been proposed in the literature to compute the solution of this problem. In this paper, we prove that there exists a closed-form optimal solution to this problem and design an effective algorithm to compute it based on Singular Value Decomposition (SVD). A toy example illustrates the gain in performance of the proposed algorithm compared to state-of-the-art techniques.

Index Terms — Low-Rank Approximations, Reduced-Order Models, Dynamic Mode Decomposition, SVD

1. INTRODUCTION

In many fields of science, one is interested in studying the spatio-temporal evolution of a state variable characterized by a partial differential equation. Numerical discretization in space and time leads to a high-dimensional system of equations of the form

    x_t = f_t(x_{t-1}),    x_1 = θ,    (1)

where each element of the sequence of state variables {x_t}_t belongs to R^n, f_t : R^n → R^n, and θ ∈ R^n is the initial condition. Because (1) may correspond to a very high-dimensional system in some applications, computing a trajectory {x_t}_t given an initial condition θ may entail a heavy computational load, which may prohibit the direct use of the original high-dimensional system.

The context of uncertainty quantification provides an appealing example. Assume we are interested in characterizing the distribution of random trajectories generated by (1) with respect to the distribution of the initial condition. A straightforward approach would be to sample the initial condition and run the high-dimensional system. However, in many applicative contexts, it is impossible to generate enough trajectories to make accurate approximations with Monte-Carlo techniques.

As a response to this computational bottleneck, reduced-order models aim to approximate the trajectories of the system for a range of regimes determined by a set of initial conditions [1]. A common approach is to assume that the trajectories of interest are well approximated in a subspace of R^n. In this spirit, many tractable low-rank approximations of high-dimensional systems have been proposed in the literature, the most familiar being proper orthogonal decomposition (POD) [2], balanced truncation [3], Taylor expansions [4] or reduced-basis techniques [5]. Other popular subspace methods, such as linear inverse modeling (LIM) [6], principal oscillating patterns (POP) [7] or, more recently, dynamic mode decomposition (DMD) [8, 9, 10, 11, 12], are known as Koopman operator approximations.

In this paper, we consider the setting where system (1) is a black box. In other words, we assume that we do not know the exact form of f_t in (1) and only have access to a set of representative trajectories {x_t^i}, i = 1, ..., N, t = 1, ..., T, the so-called snapshots, obtained by running the high-dimensional system for N different initial conditions. Moreover, we focus on the low-rank DMD approximation problem studied in [9, 10]. In a nutshell, these studies provide a procedure for determining a matrix Â_k ∈ R^{n×n} of rank k ≪ n, which substitutes for function f_t in (1) as

    x̃_t = Â_k x̃_{t-1},    x̃_1 = θ,    (2)

and generates the approximations x̃_t ∈ R^n with a low computational effort. Alternatively, given Â_k, its k non-zero eigenvalues λ_i ∈ C, i = 1, ..., k, and the associated eigenvectors φ_i ∈ C^n, i = 1, ..., k, trajectories of (2) can be computed using the reduced-order model

    x̃_t = Σ_{i=1}^k ν_{i,t} φ_i,    ν_{i,t} = λ_i^{t-1} φ_i^* θ,    (3)

as long as matrix Â_k is symmetric; for simplicity, we will assume this is always the case. In what follows, we will refer to the parameters φ_i and ν_{i,t} ∈ C as the i-th low-rank DMD mode and amplitude at time t.

Matrix Â_k targets the solution of the following non-convex optimization problem, which we will refer to as the low-rank DMD approximation problem:

    A*_k ∈ argmin_{A : rank(A) ≤ k} Σ_{t,i} ||x_t^i − A x_{t-1}^i||_2^2,    (4)

where ||·||_2 refers to the ℓ_2 norm. In order to compute a solution A*_k, the authors in [9, 10] propose to rely on the assumption that recent snapshots depend linearly on previous ones. This assumption may not be reasonable, especially in the case of non-linear systems.

Beyond the reduced modeling context discussed above, there has been a resurgence of interest in low-rank solutions of linear matrix equations [13]. This class of problems is very large and includes in particular problem (4). Problems in this class are generally non-convex and do not admit explicit solutions. However, important results have arisen at the theoretical and algorithmic level, enabling the characterization of the solution for this class of problems by convex relaxation [14]. Applications concern scenarios such as low-rank matrix completion, image compression or minimum-order linear system realization; see [13]. Nevertheless, there exist certain instances with a very special structure which admit closed-form solutions [15, 16]. This occurs typically when the solution can be deduced from the well-known Eckart-Young theorem [17].

The contribution of this paper is to show that the special structure of problem (4) enables the characterization of an exact closed-form solution and an easily implementable solver based on singular value decomposition (SVD). In the case k ≥ N(T−1), the proposed algorithm computes the solution of [12]. More interestingly, for k < N(T−1), i.e., in the constrained case, our approach solves the low-rank DMD approximation problem exactly without 1) any assumption of linear dependence, or 2) the use of an iterative solver, in contrast to the approaches proposed in [9, 10, 14].

The paper is organized as follows. In Section 2, we provide a brief review of state-of-the-art techniques to compute the low-rank DMD of experimental data. Section 3 details our analytical solution and the algorithm solving (4). Given this optimal solution, it then presents the reduced-order model solving (2). Finally, a numerical evaluation of the method is presented in Section 4 and concluding remarks are given in a last section.
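Before proceeding, the eigen-expansion (3) can be made concrete with a few lines of NumPy. The sketch below is a toy illustration only: the small symmetric rank-k matrix A_k is a stand-in for the low-rank operator Â_k (it is not the paper's estimator), and all names and dimensions are ours. It checks that, for a symmetric A_k, the mode/amplitude superposition (3) reproduces the recursion (2) for t ≥ 2 (at t = 1 it returns the projection of θ onto the span of the modes).

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 8, 3

# Hypothetical symmetric rank-k surrogate for the operator A_k of (2).
U = np.linalg.qr(rng.standard_normal((n, k)))[0]   # orthonormal basis
A_k = U @ np.diag([0.9, 0.5, -0.3]) @ U.T

theta = rng.standard_normal(n)                     # initial condition

# Eigen-decomposition; keep the k non-zero eigenpairs (lam_i, phi_i).
lam, Phi = np.linalg.eigh(A_k)                     # symmetric -> real eigenpairs
idx = np.argsort(-np.abs(lam))[:k]
lam, Phi = lam[idx], Phi[:, idx]

def x_tilde(t):
    """Reduced-order model (3): x_t = sum_i nu_{i,t} phi_i."""
    nu = lam ** (t - 1) * (Phi.T @ theta)          # amplitudes nu_{i,t}
    return Phi @ nu                                # mode superposition
```

For t ≥ 2 the result coincides with iterating x̃_t = A_k x̃_{t-1}, since the zero eigenvalues of A_k contribute nothing after one step.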
2. STATE-OF-THE-ART OVERVIEW

In what follows, we assume that we have at our disposal N trajectories of T snapshots. We will need some matrix notations. The symbol ||·||_F and the superscript ·^* will respectively refer to the Frobenius norm and the transpose operator. I_k will denote the k-dimensional identity matrix. Let consecutive elements of the i-th snapshot trajectory between times t_1 and t_2 be gathered in a matrix X^i_{t1:t2} = (x^i_{t1}, ..., x^i_{t2}), and let two large matrices X, Y ∈ R^{n×m} with m = (T−1)N be defined as

    X = (X^1_{1:T−1}, ..., X^N_{1:T−1}),    Y = (X^1_{2:T}, ..., X^N_{2:T}).    (5)

Without loss of generality, this work will assume that m ≤ n and that rank(X) = rank(Y) = m. We introduce the SVD of a matrix M ∈ R^{p×q} with p ≥ q: M = W_M Σ_M V_M^*, with W_M ∈ R^{p×q}, V_M ∈ R^{q×q} and Σ_M ∈ R^{q×q}, so that W_M^* W_M = V_M^* V_M = I_q and Σ_M is diagonal. The Moore-Penrose pseudo-inverse of a matrix M will be defined as M† = V_M Σ_M^{−1} W_M^*.

With these notations, problem (4) can be rewritten as

    A*_k ∈ argmin_{A : rank(A) ≤ k} ||Y − AX||_F^2.    (6)

We begin by presenting two state-of-the-art methods which compute an approximation of the solution of problem (6).

2.1. Projected DMD and Low-Rank Formulation

As detailed hereafter, the original DMD approach first proposed in [8], called projected DMD in [12], assumes that the columns of AX lie in the span of X. The assumption is written by the authors in [8, 10] as the existence of A^c ∈ R^{m×m}, the so-called companion matrix of A parametrized by m coefficients, such that

    AX = XA^c.    (7)

We remark that this assumption is in particular valid when the i-th snapshot x_T^i can be expressed as a linear combination of the columns of X^i_{1:T−1}, and when f_t is linear. Using the SVD X = W_X Σ_X V_X^* and noticing that X is full rank, we obtain from (7) a projected representation of A in the basis spanned by the columns of W_X,

    W_X^* A W_X = Ã^c,    (8)

where Ã^c = Σ_X V_X^* A^c V_X Σ_X^{−1} ∈ R^{m×m}. The low-rank formulation in [10] therefore proposes to approach the solution of (6) by determining the m coefficients of matrix A^c which minimize the Frobenius norm of the residual Y − AX. After some algebraic manipulations, this amounts to solving the problem

    argmin_{Ã^c : rank(Ã^c Σ_X) ≤ k} ||W_X^* Y V_X − Ã^c Σ_X||_F^2.    (9)

The Eckart-Young theorem [17] then provides the optimal solution to this problem based on a rank-k SVD approximation of matrix B = W_X^* Y V_X, given by W_B Λ_B V_B^*, where Λ_B is a diagonal matrix containing only the k largest singular values of Σ_B and zero entries otherwise. Exploiting the low-dimensional representation (8), a reduced-order model for trajectories can then be obtained by inserting into (2) the low-rank approximation

    Â_k = W_X W_B Λ_B V_B^* Σ_X^{−1} W_X^*.    (10)

As an alternative, the authors propose a reduced-order model for trajectories relying on the so-called DMD modes and their amplitudes. These modes are related to the eigenvectors of the solution of (9). The amplitudes are given by solving a convex optimization problem with an iterative gradient-based method; see details in [10].

2.2. Non-projected DMD

If we remove the low-rank constraint, (6) becomes a least-squares problem whose solution is

    Â_m = Y X† = Y V_X Σ_X^{−1} W_X^*.    (11)

Based on the approximation Â_m, DMD modes and amplitudes serve to design a model to reconstruct trajectories of (2). We note that the DMD modes are simply given by the eigendecomposition of Â_m, which can be efficiently computed using SVD, as proposed in [12]. The associated DMD amplitudes can then easily be derived.

It is important to remark that truncating the solution of the above unconstrained minimization problem to rank k will not necessarily yield the solution of (6). This approach will generally be sub-optimal. Surprisingly, however, the solution to problem (6) remains, to our knowledge, overlooked in the literature, and no algorithms enabling non-projected low-rank DMD approximations have yet been proposed.
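The two baselines above condense into a few lines of NumPy. The sketch below runs on random toy matrices (X and Y are not the paper's snapshots, and the variable names A_m, A_trunc, A_proj are ours): A_m is the unconstrained least-squares solution (11), A_trunc its naive rank-k SVD truncation, and A_proj the projected DMD estimate (10).

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 30, 10, 4
X = rng.standard_normal((n, m))   # toy stand-ins for the data matrices of (5)
Y = rng.standard_normal((n, m))

# Thin SVD of X: X = W_X diag(s_X) V_X^T (X has full column rank here).
W_X, s_X, VT_X = np.linalg.svd(X, full_matrices=False)

# (11): unconstrained least-squares solution A_m = Y X^+.
A_m = Y @ VT_X.T @ np.diag(1.0 / s_X) @ W_X.T

# Naive rank-k truncation of A_m (generally sub-optimal for problem (6)).
W, s, VT = np.linalg.svd(A_m, full_matrices=False)
A_trunc = W[:, :k] @ np.diag(s[:k]) @ VT[:k, :]

# (10): projected DMD. Keep the k leading singular triplets of
# B = W_X^T Y V_X, then map back through the basis W_X.
B = W_X.T @ Y @ VT_X.T
W_B, s_B, VT_B = np.linalg.svd(B)
A_proj = (W_X @ W_B[:, :k] @ np.diag(s_B[:k]) @ VT_B[:k, :]
          @ np.diag(1.0 / s_X) @ W_X.T)

err = lambda A: np.linalg.norm(Y - A @ X)   # Frobenius residual of (6)
```

Since A_m is the unconstrained minimizer, its residual lower-bounds both rank-k estimates; neither baseline is guaranteed to attain the constrained optimum of (6).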
3. THE PROPOSED APPROACH

3.1. Closed-form Solution to (6)

Let the columns of matrix P ∈ R^{n×k} be the real orthonormal eigenvectors associated with the k largest eigenvalues of matrix YY^*.

Theorem 1. A solution of (6) is A*_k = P P^* Y X†.

This theorem states that (6) can be solved simply by computing the orthogonal projection of the unconstrained solution (11) onto the subspace spanned by the k first eigenvectors of YY^*. A detailed proof is provided in the technical report associated with this paper [18].

3.2. Efficient Solver

The matrix YY^* is of size n×n. Since n is typically very large, this prohibits the direct computation of an eigenvalue decomposition. The following well-known remark is useful to overcome this difficulty.

Remark 1. The eigenvectors associated with the m ≤ n non-zero eigenvalues of matrix YY^* ∈ R^{n×n}, with Y ∈ R^{n×m}, can be obtained from the eigenvectors V_Y = (v_1, ..., v_m) ∈ R^{m×m} and eigenvalues of the smaller matrix Y^*Y ∈ R^{m×m}. Indeed, the SVD of a matrix Y of rank m is Y = W_Y Σ_Y V_Y^*, where the columns of matrix W_Y ∈ R^{n×m} are the eigenvectors of YY^*. Since V_Y is unitary, the sought vectors are the first k columns of W_Y, i.e., of Y V_Y Σ_Y^{−1}.

In the light of this remark, it is straightforward to design Algorithm 1, which efficiently computes the solution of (6) based on SVDs.

Algorithm 1 — Solver for (6)
  input: N-sample {X^i_{1:T}}_{i=1}^N
  1) Form matrices X and Y as defined in (5).
  2) Compute the SVD of X.
  3) Compute the columns of P using Remark 1.
  output: matrix A*_k = P P^* Y V_X Σ_X^{−1} W_X^*

3.3. Reduced-Order Models

We now discuss the resolution of the reduced-order model (2) given the solution A*_k of (6). Trajectories of (2) are fully determined by a k-dimensional recursion involving the projected variable z_t = P^* x̃_t:

    z_t = P^* Y X† P z_{t−1},    z_2 = P^* Y X† θ.    (12)

Then, multiplying both sides by matrix P, we obtain the sought low-rank approximation x̃_t = P z_t.

Alternatively, we can employ the reduced-order model (3). The parameters of this model, i.e., the low-rank DMD modes and amplitudes, are efficiently computed without any minimization procedure, in contrast to what is proposed by the authors in [10]. Indeed, we rely on the following remark, stating that the DMD modes and amplitudes can be obtained by means of SVDs using Algorithm 2. The remark is proved in the technical report [18].

Algorithm 2 — Low-rank DMD modes and amplitudes
  input: matrices (P, Q, θ), with Q = (Y X†)^* P
  1) Compute the SVD of matrix Q.
  2) Solve, for i = 1, ..., k, the eigen-equation Ã_k w_i = λ_i w_i, where w_i ∈ C^k and λ_i ∈ C denote the eigenvectors and eigenvalues of Ã_k = W_Q^* P V_Q Σ_Q ∈ R^{k×k}.
  output: DMD modes φ_i = W_Q w_i and amplitudes ν_{i,t} = λ_i^{t−1} φ_i^* θ
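In NumPy, Theorem 1 and Algorithm 1 reduce to two thin SVDs. The sketch below uses random toy matrices (illustrative only; variable names are ours) and verifies the optimality property: since rank(X) = m, the residual of A*_k equals the Eckart-Young bound given by the trailing singular values of Y.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, k = 30, 10, 4
X = rng.standard_normal((n, m))   # toy stand-ins for the data matrices of (5)
Y = rng.standard_normal((n, m))

# Step 2 of Algorithm 1: SVD of X, giving the pseudo-inverse X^+.
W_X, s_X, VT_X = np.linalg.svd(X, full_matrices=False)
X_pinv = VT_X.T @ np.diag(1.0 / s_X) @ W_X.T

# Step 3 / Remark 1: P holds the k leading eigenvectors of Y Y^*, i.e. the
# k leading left singular vectors of Y (obtainable from the small m x m
# matrix Y^* Y; here we take them directly from the thin SVD of Y).
W_Y, s_Y, _ = np.linalg.svd(Y, full_matrices=False)
P = W_Y[:, :k]

# Theorem 1: closed-form solution of (6).
A_star = P @ (P.T @ Y @ X_pinv)

residual = np.linalg.norm(Y - A_star @ X)
# With rank(X) = m we have X^+ X = I, so A_star X = P P^T Y is the best
# rank-k approximation of Y and the residual matches the Eckart-Young bound.
eckart_young = np.sqrt(np.sum(s_Y[k:] ** 2))
```

Note that only the thin SVDs of the n×m matrices X and Y are needed; the n×n matrix YY^* is never formed, which is the whole point of Remark 1.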
Remark 2. Each pair (φ_i, λ_i) generated by Algorithm 2 is one of the k eigenvector/eigenvalue pairs of A*_k.

4. NUMERICAL EVALUATION

In what follows, we evaluate on a toy model the different approaches for solving the low-rank DMD approximation problem. We consider a high-dimensional space of n = 50 dimensions, a low-dimensional subspace of r = 30 dimensions and m = 40 snapshots. Let G be a matrix of rank r generated randomly according to G = Σ_{i=1}^r ξ_i ξ_i^*, where the entries of the ξ_i's are n independent samples of the standard normal distribution. Let the initial condition θ be randomly chosen according to the same distribution. The snapshots, gathered in matrices X and Y, are generated using (1) for three configurations of f_t:

  i)   f_t(x_{t−1}) = G x_{t−1}, s.t. ∃ A^c satisfying GX = XA^c,
  ii)  f_t(x_{t−1}) = G x_{t−1},
  iii) f_t(x_{t−1}) = G x_{t−1} + G diag(x_{t−1}) diag(x_{t−1}) x_{t−1}.

Setting i) corresponds to a linear system satisfying assumption (7), as made in the projected DMD approaches [8, 10]. Settings ii) and iii) do not make this assumption and simulate, respectively, linear and non-linear dynamical systems. We assess three different methods for computing Â_k:

  a) the optimal rank-k approximation given by Algorithm 1;
  b) the k-th order SVD approximation of (11), i.e., the k-th order approximation of the rank-m non-projected DMD solution [12];
  c) the rank-k approximation (10), corresponding to the projected DMD approach [10] (or [8] for k ≥ m).

The performance is measured in terms of the error norm ||Y − Â_k X||_F with respect to the rank k. Results for the three settings are displayed in Figure 1.

[Figure 1: three panels of error-norm curves versus rank, one per setting.]
Fig. 1. Evaluation of the error norms ||Y − Â_k X||_F as a function of the rank k. Settings i) (top) and ii) (middle) both imply a linear model, but only the former satisfies the snapshot linear-dependence assumption; setting iii) (bottom) implements a non-linear model. We evaluate three algorithms: method a) is the proposed optimal algorithm, method b) provides the rank-k SVD approximation of the unconstrained solution given in [12], and method c) is the low-rank projected DMD method proposed in [10]. See details in Section 4.

As a first remark, we notice that the solution provided by Algorithm 1 (method a) yields the best results, in agreement with Theorem 1.

Second, in setting i), the experiments confirm that when the linearity assumption is valid, the low-rank projected DMD (method c) achieves the same performance as the optimal solution (method a). Moreover, truncating the rank-m DMD solution (method b) induces, as expected, an increase of the error norm. This deterioration is however moderate in our experiments.

Then, in settings ii) and iii), we remark that the behaviors of the error norms are analogous (up to an order of magnitude). The performance of the projected approach (method c) differs notably from the optimal solution: a significant deterioration is visible for k > 10. This is the consequence of the non-validity of the assumption made in method c. Nevertheless, we notice that method c achieves a slight gain in performance compared to method b up to a moderate rank (k < 5). Besides, we also notice that the error norm of method b is not optimal for k < 30. Finally, as expected, all methods succeed in properly characterizing the low-dimensional subspace as soon as k ≥ r.
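The flavor of this experiment can be reproduced in outline. The sketch below is a simplified variant of the toy model, an assumption on our part: we draw N = 40 one-step trajectories (i.e. T = 2) instead of the paper's exact snapshot protocol, which keeps the data matrices well conditioned while preserving the rank-r structure. As expected, the error of the optimal estimator decreases with k and vanishes once k reaches the subspace dimension r.

```python
import numpy as np

rng = np.random.default_rng(3)
n, r, N = 50, 30, 40                 # state dim, subspace rank, snapshot pairs

# Rank-r random matrix G = sum_i xi_i xi_i^T, as in the paper's toy model.
Xi = rng.standard_normal((n, r))
G = Xi @ Xi.T

# Simplified protocol (our assumption): N one-step trajectories, giving
# m = N snapshot pairs; setting ii)-style linear dynamics, so rank(Y) = r.
X = rng.standard_normal((n, N))      # random initial conditions
Y = G @ X                            # one step of x_t = G x_{t-1}

def optimal_low_rank_dmd(X, Y, k):
    """Theorem 1 / Algorithm 1: A_k = P P^T Y X^+, with P the k leading
    left singular vectors of Y."""
    P = np.linalg.svd(Y, full_matrices=False)[0][:, :k]
    return P @ (P.T @ Y @ np.linalg.pinv(X))

errs = {k: np.linalg.norm(Y - optimal_low_rank_dmd(X, Y, k) @ X)
        for k in (5, 15, r)}
# errs decreases with k and drops to round-off once k >= r.
```

Reproducing the full Figure 1 comparison additionally requires the two baselines of Section 2 and the multi-step snapshot generation described above.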
5. CONCLUSION

Following recent attempts to characterize an optimal low-rank approximation based on DMD, this paper provides a closed-form solution to this non-convex optimization problem. To the best of our knowledge, state-of-the-art methods are all sub-optimal. The paper further proposes effective algorithms based on SVD to solve this problem and to run reduced-order models. Our numerical experiments attest that the proposed algorithm is more accurate than state-of-the-art methods. In particular, we illustrate the fact that simply truncating the full-rank DMD solution, or exploiting too restrictive assumptions on the approximation subspace, is insufficient.

REFERENCES

[1] A. Cohen and R. DeVore, "Approximation of high-dimensional parametric PDEs," ArXiv e-prints, Feb. 2015.
[2] P. Holmes, J. L. Lumley, and G. Berkooz, Turbulence, Coherent Structures, Dynamical Systems and Symmetry, Cambridge University Press, 1996.
[3] A. C. Antoulas, "An overview of approximation methods for large-scale dynamical systems," Annual Reviews in Control, vol. 29, no. 2, pp. 181–190, Jan. 2005.
[4] J. P. Fink and W. C. Rheinboldt, "On the error behavior of the reduced basis technique for nonlinear finite element approximations," ZAMM - Journal of Applied Mathematics and Mechanics, vol. 63, no. 1, pp. 21–28, 1983.
[5] A. Quarteroni, G. Rozza, and A. Manzoni, "Certified reduced basis approximation for parametrized partial differential equations and applications," Journal of Mathematics in Industry, vol. 1, no. 1, pp. 1–49, Dec. 2011.
[6] C. Penland and T. Magorian, "Prediction of Niño 3 sea surface temperatures using linear inverse modeling," Journal of Climate, vol. 6, no. 6, pp. 1067–1076, 1993.
[7] K. Hasselmann, "PIPs and POPs: The reduction of complex dynamical systems using principal interaction and oscillation patterns," Journal of Geophysical Research: Atmospheres, vol. 93, no. D9, pp. 11015–11021, 1988.
[8] P. J. Schmid, "Dynamic mode decomposition of numerical and experimental data," Journal of Fluid Mechanics, vol. 656, pp. 5–28, 2010.
[9] K. K. Chen, J. H. Tu, and C. W. Rowley, "Variants of dynamic mode decomposition: boundary condition, Koopman, and Fourier analyses," Journal of Nonlinear Science, vol. 22, no. 6, pp. 887–915, 2012.
[10] M. R. Jovanović, P. J. Schmid, and J. W. Nichols, "Low-rank and sparse dynamic mode decomposition," Center for Turbulence Research Annual Research Briefs, pp. 139–152, 2012.
[11] M. O. Williams, I. G. Kevrekidis, and C. W. Rowley, "A data-driven approximation of the Koopman operator: extending dynamic mode decomposition," Journal of Nonlinear Science, vol. 25, no. 6, pp. 1307–1346, 2015.
[12] J. H. Tu, C. W. Rowley, D. M. Luchtenburg, S. L. Brunton, and J. N. Kutz, "On dynamic mode decomposition: theory and applications," Journal of Computational Dynamics, vol. 1, no. 2, pp. 391–421, 2014.
[13] B. Recht, M. Fazel, and P. A. Parrilo, "Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization," SIAM Review, vol. 52, no. 3, pp. 471–501, 2010.
[14] M. Fazel, Matrix Rank Minimization with Applications, Ph.D. thesis, Stanford University, 2002.
[15] P. A. Parrilo and S. Khatri, "On cone-invariant linear matrix inequalities," IEEE Transactions on Automatic Control, vol. 45, no. 8, pp. 1558–1563, 2000.
[16] M. Mesbahi and G. P. Papavassilopoulos, "On the rank minimization problem over a positive semidefinite linear matrix inequality," IEEE Transactions on Automatic Control, vol. 42, no. 2, pp. 239–243, 1997.
[17] C. Eckart and G. Young, "The approximation of one matrix by another of lower rank," Psychometrika, vol. 1, no. 3, pp. 211–218, 1936.
[18] P. Héas and C. Herzet, "Low-rank Approximation and Dynamic Mode Decomposition," ArXiv e-prints, Oct. 2016.
