
%A L. Ingber
%T Simulated annealing: Practice versus theory
%J Mathl. Comput. Modelling
%V 18
%N 11
%D 1993
%P 29-57

Simulated annealing: Practice versus theory

Lester Ingber
Lester Ingber Research, P.O.B. 857, McLean, VA 22101
[email protected]

Simulated annealing (SA) presents an optimization technique with several striking positive and negative features. Perhaps its most salient feature, statistically promising to deliver an optimal solution, in current practice is often spurned in favor of modified, faster algorithms, "simulated quenching" (SQ). Using the author's Adaptive Simulated Annealing (ASA) code, some examples are given which demonstrate how SQ can be much faster than SA without sacrificing accuracy.

Keywords: Simulated annealing, random algorithm, optimization technique

1. Introduction

1.1. Shades of simulated annealing

Simulated annealing presents an optimization technique that can: (a) process cost functions possessing quite arbitrary degrees of nonlinearities, discontinuities, and stochasticity; (b) process quite arbitrary boundary conditions and constraints imposed on these cost functions; (c) be implemented quite easily, with the degree of coding quite minimal relative to other nonlinear optimization algorithms; (d) statistically guarantee finding an optimal solution.

Section 2 gives a short introduction to SA, emphasizing its property of (weak) ergodicity. Note that for very large systems, ergodicity is not an entirely rigorous concept when faced with the real task of its computation [1]. Moreover, in this paper "ergodic" is used in a very weak sense, as it is not proposed, theoretically or practically, that all states of the system are actually to be visited.

Even "standard" SA is not without its critics. Some negative features of SA are that it can: (A) be quite time-consuming to find an optimal fit, especially when using the "standard" Boltzmann technique; (B) be difficult to fine-tune to specific problems, relative to some other fitting techniques; (C) suffer from "over-hype" and faddish misuse, leading to misinterpretation of results; (D) lose the ergodic property (d) by misuse, e.g., by transforming SA into a method of "simulated quenching" (SQ) for which there is no statistical guarantee of finding an optimal solution. Section 3 presents some examples to demonstrate how SQ can give misleading results. There also is a large and growing domain of SA-like techniques which do not theoretically predict general statistical optimality, but which are extremely powerful for certain classes of problems. Section 3 includes some of these algorithms.

Section 4 gives a short description of a sampling of the many complex problems which have benefited greatly from the use of SA and SQ. Specific examples are given from papers addressing robust problems across many disciplines. There are many reviews of simulated annealing, comparisons among simulated annealing algorithms, and comparisons between simulated annealing and other algorithms [2-5]. This paper is not as exhaustive as these other reviews were in their time. The sampling presented here is not meant to be a review of SA, but rather a documented statement of the widespread use of SA and SQ. The emphasis is on comparing the basic theoretic constraints of true simulated annealing (SA) with actual practice on a range of problems spanning many disciplines.
On one hand, this may help to address what may yet be expected in terms of better necessary conditions on SA to make it a more efficient algorithm, as many believe that the present sufficiency conditions are overly restrictive. On the other hand, perhaps some of the results not adhering to the present sufficiency conditions that are being reported in the literature are quite biased, perhaps being too positive or too negative. An attempt has been made to limit technical discussion to only that necessary to highlight particular approaches.

There are several approaches being researched to develop better SA algorithms and auxiliary algorithms to predict the efficiency of SA on particular problems. These give some insight into how SA might be developed into a faster but still optimal algorithm for many kinds of systems. Section 5 describes some of these approaches.

In Section 6 the author's publicly available code, Adaptive Simulated Annealing (ASA) [6], illustrates how SQ can indeed sometimes perform much faster than SA, without sacrificing accuracy. This paper appreciates the utility of SQ as a trade-off to benefit from (a), (b) and (c) at the expense of (D).

The conclusion, Section 7, reiterates the theme of this introduction: the questionable push to neglect some of the theoretical strengths of SA in favor of expediency, and some new developments that may make some of these compromises less necessary.

1.2. Critics of SA

At the outset it must be stated that SA is not without its critics. The primary criticism is that it is too slow; this is partially addressed here by summarizing much work in appropriately adapting SQ to many problems. Another criticism is that it is "overkill" for many of the problems on which it is used; this is partially addressed here by summarizing much work demonstrating that it is not insignificant that many researchers are using SA/SQ because of the ease with which constraints and complex cost functions can be approached and coded.

There is another class of criticisms, that the algorithm is too broadly based on physical intuition and is too short on mathematical rigor [7]. In that particular bitter and scathing critique the authors take offense at the lack of reference to other prior work [8], at the use of "metaphysical non-mathematical ideas of melting, cooling, and freezing" in reference to the physical process of annealing as used to popularize SA [9], and they give their own calculations to demonstrate that SA can be a very poor algorithm for searching out global optima in some instances. That there are undoubtedly other references that should be more regularly cited is an objective issue that has much merit, with respect to SA as well as to other research projects. The other criticisms may be considered by some to be more subjective, but they are likely no more extreme than the use of SQ to solve for global optima under the protective umbrella of SA.

2. "Standard" simulated annealing (SA)

The Metropolis Monte Carlo integration algorithm [10] was generalized by the Kirkpatrick algorithm to include a temperature schedule for efficient searching [9]. A sufficiency proof was then shown to put a lower bound on that schedule as 1/log(t), where t is an artificial time measure of the annealing schedule [11]. However, independent credit usually goes to several other authors for independently developing the algorithm that is now recognized as simulated annealing [8,12].
2.1. Boltzmann annealing (BA)

Credit for the first simulated annealing is generally given to a Monte Carlo importance-sampling technique for doing large-dimensional path integrals arising in statistical physics problems [10]. This method was generalized to fitting non-convex cost functions arising in a variety of problems, e.g., finding the optimal wiring for a densely wired computer chip [9]. The choices of probability distributions described in this section are generally specified as Boltzmann annealing (BA) [13].

The method of simulated annealing consists of three functional relationships.

1. g(x): Probability density of state-space of D parameters x = {x_i; i = 1, D}.

2. h(ΔE): Probability for acceptance of a new cost function given the just-previous value.

3. T(k): Schedule of "annealing" the "temperature" T in annealing-time steps k, i.e., of changing the volatility or fluctuations of one or both of the two previous probability densities.

The acceptance probability is based on the chances of obtaining a new state with "energy" E_{k+1} relative to a previous state with "energy" E_k,

\[
h(\Delta E) = \frac{\exp(-E_{k+1}/T)}{\exp(-E_{k+1}/T) + \exp(-E_k/T)}
            = \frac{1}{1 + \exp(\Delta E/T)}
            \approx \exp(-\Delta E/T) ,
\tag{1}
\]

where ΔE represents the "energy" difference between the present and previous values of the energies (considered here as cost functions) appropriate to the physical problem, i.e., ΔE = E_{k+1} − E_k. This essentially is the Boltzmann distribution contributing to the statistical mechanical partition function of the system [14].

This can be described by considering: a set of states labeled by x, each with energy e(x); a set of probability distributions p(x); and the energy distribution per state d(e(x)), giving an aggregate energy E,

\[
\sum_x p(x)\, d\big(e(x)\big) = E .
\tag{2}
\]

The principle of maximizing the entropy, S,

\[
S = -\sum_x p(x) \ln\big[p(x)/p(\bar{x})\big] ,
\tag{3}
\]

where \bar{x} represents a reference state, using Lagrange multipliers [15] to constrain the energy to average value T, leads to the most likely Gibbs distribution G(x),

\[
G(x) = \frac{1}{Z} \exp\big(-H(x)/T\big) ,
\tag{4}
\]

in terms of the normalizing partition function Z, and the Hamiltonian H operator as the "energy" function,

\[
Z = \sum_x \exp\big(-H(x)/T\big) .
\tag{5}
\]

For such distributions of states and acceptance probabilities defined by functions such as h(ΔE), the equilibrium principle of detailed balance holds. I.e., the distributions of states before, G(x_k), and after, G(x_{k+1}), applying the acceptance criterion h(ΔE) = h(E_{k+1} − E_k) are the same:

\[
G(x_k)\, h\big(\Delta E(x)\big) = G(x_{k+1}) .
\tag{6}
\]

This is sufficient to establish that all states of the system can be sampled, in theory. However, the annealing schedule interrupts equilibrium every time the temperature is changed, and so, at best, this must be done carefully and gradually.
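As a concrete illustration, a minimal Python sketch of the acceptance rule in Eq. (1) might read as follows; the code is not from the original paper, and the function names and the overflow guard are implementation conveniences rather than part of the formulation.

```python
import math
import random

def h(delta_e: float, temperature: float) -> float:
    """Acceptance probability of Eq. (1): h = 1/(1 + exp(dE/T)).
    For dE >> T this reduces to the Metropolis factor exp(-dE/T)."""
    z = delta_e / temperature
    if z > 700.0:          # guard: exp() would overflow; probability is ~0
        return 0.0
    return 1.0 / (1.0 + math.exp(z))

def accept(delta_e: float, temperature: float) -> bool:
    """Accept a proposed move with probability h(dE, T)."""
    return random.random() < h(delta_e, temperature)
```

Note that in this logistic (Glauber-type) form even cost-lowering moves are accepted only with probability greater than 1/2; the more common Metropolis variant accepts all downhill moves and applies exp(−ΔE/T) only uphill.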
An important aspect of the SA algorithm is to pick the ranges of the parameters to be searched. In practice, computation of continuous systems requires some discretization, so without loss of much generality for the applications described here, the space will be assumed to be discretized. There are additional constraints that are required when dealing with generating and cost functions with integral values. Many practitioners use novel techniques to narrow the range as the search progresses. For example, based on functional forms derived for many physical systems belonging to the class of Gaussian-Markovian systems, one could choose an algorithm for g,

\[
g(\Delta x) = (2\pi T)^{-D/2} \exp\big[-\Delta x^2/(2T)\big] ,
\tag{7}
\]

where Δx = x − x_0 is the deviation of x from x_0 (usually taken to be the just-previously chosen point), proportional to a "momentum" variable, and where T is a measure of the fluctuations of the Boltzmann distribution g in the D-dimensional x-space. Given g(Δx), it has been proven [11] that it suffices to obtain a global minimum of E(x) if T is selected to be not faster than

\[
T(k) = \frac{T_0}{\ln k} ,
\tag{8}
\]

with T_0 "large enough."

For the purposes of this paper, a heuristic demonstration follows, to show that Eq. (8) will suffice to give a global minimum of E(x) [13]. In order to statistically assure, i.e., requiring many trials, that any point in x-space can be sampled infinitely often in annealing-time (IOT), it suffices to prove that the products of probabilities of not generating a state x IOT for all annealing-times successive to k_0 yield zero,

\[
\prod_{k=k_0}^{\infty} (1 - g_k) = 0 .
\tag{9}
\]

This is equivalent to

\[
\sum_{k=k_0}^{\infty} g_k = \infty .
\tag{10}
\]

The problem then reduces to finding T(k) to satisfy Eq. (10). For BA, if T(k) is selected to be Eq. (8), then Eq. (7) gives

\[
\sum_{k=k_0}^{\infty} g_k \ge \sum_{k=k_0}^{\infty} \exp(-\ln k) = \sum_{k=k_0}^{\infty} 1/k = \infty .
\tag{11}
\]

Although there are sound physical principles underlying the choices of Eqs. (7) and (1) [10], it was noted that this method of finding the global minimum in x-space is not limited to physics examples requiring bona fide "temperatures" and "energies." Rather, this methodology can be readily extended to any problem for which a reasonable probability density h(Δx) can be formulated [9].

3. Simulated quenching (SQ)

Many researchers have found it very attractive to take advantage of the ease of coding and implementing SA, utilizing its ability to handle quite complex cost functions and constraints. However, the long execution time of standard Boltzmann-type SA has many times driven these projects to utilize a temperature schedule too fast to satisfy the sufficiency conditions required to establish a true (weak) ergodic search. A logarithmic temperature schedule is consistent with the Boltzmann algorithm, e.g., the temperature schedule is taken to be

\[
T_k = T_0 \frac{\ln k_0}{\ln k} ,
\tag{12}
\]

where T is the "temperature," k is the "time" index of annealing, and k_0 is some starting index. This can be written for large k as

\[
\Delta T = -T_0 \frac{\ln k_0 \, \Delta k}{k (\ln k)^2} , \quad k \gg 1 ,
\qquad
T_{k+1} = T_k - T_0 \frac{\ln k_0}{k (\ln k)^2} .
\tag{13}
\]

However, some researchers using the Boltzmann algorithm use an exponential schedule, e.g.,

\[
T_{k+1} = c\, T_k , \quad 0 < c < 1 ,
\qquad
\frac{\Delta T}{T_k} = (c - 1)\, \Delta k , \quad k \gg 1 ,
\qquad
T_k = T_0 \exp\big((c - 1)k\big) ,
\tag{14}
\]

with expediency the only reason given. While perhaps someday some less stringent necessary conditions may be developed for the Boltzmann algorithm, this is not now the state of affairs. The question arises, what is the value of this clear misuse of the claim to use SA to help solve these problems/systems? Below, a variant of SA, adaptive simulated annealing (ASA) [6,16], in fact does justify an exponential annealing schedule, but only if a particular distribution is used for the generating function.
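To make the contrast between Eqs. (12) and (14) concrete, a small Python sketch comparing the two schedules might read as follows; the code is not from the paper, and the constants T_0 = 1, k_0 = 2, and c = 0.99 are illustrative choices.

```python
import math

def boltzmann_schedule(k: int, t0: float = 1.0, k0: int = 2) -> float:
    """Logarithmic schedule of Eq. (12): T_k = T0 * ln(k0) / ln(k).
    Slow enough to preserve the sufficiency condition of Eq. (8)."""
    return t0 * math.log(k0) / math.log(k)

def quenching_schedule(k: int, t0: float = 1.0, c: float = 0.99) -> float:
    """Exponential 'quenching' schedule of Eq. (14): T_k = T0 * c**k.
    Far faster, but it voids the statistical guarantee."""
    return t0 * c**k

for k in (10, 1_000, 1_000_000):
    print(f"k={k:>9}: log-schedule T={boltzmann_schedule(k):.3e}, "
          f"quench T={quenching_schedule(k):.3e}")
```

By k = 10^6 the logarithmic schedule has only fallen to roughly T_0/20, while the c = 0.99 quench has long since dropped below machine precision; this is the whole speed advantage of SQ, and exactly where the ergodic guarantee is surrendered.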
In many cases it is clear that the researchers already know quite a bit about their system, and the convenience of the SA algorithm, together with the need for some global search over local optima, makes a strong practical case for the use of SQ. In some of these cases, the researchers have been more diligent with regard to their numerical SQ work, and have compared the efficiency of SQ to some other methods they have tried. Of course, the point must be made that while SA's true strength lies in its ability to statistically deliver a true global optimum, there are no theoretical reasons for assuming it will be more efficient than any other algorithm that also can find this global optimum.

3.1. Genetic algorithms (GA)

As an example of other algorithms competitive with SQ, there is a very popular class of algorithms, genetic algorithms (GA), that has spawned its own culture across many disciplines. While the origins of its development were not to seek optimization per se [17,18], there are reasons to consider GA a valid approach to numerical optimization [19,20]. This has led to some comparisons between GA and SA techniques [21], which currently must be viewed in the context of "judging" these algorithms only specific to the problems/systems being tested. I.e., it should be expected that there are systems for which one of GA or SA will be better suited than the other. While GA does not possess any claim to ergodicity, albeit there is some progress in establishing convergence to some fixed optima [22], features typically addressed by SQ, such as premature global convergence, rapid local convergence, and the handling of constraints, all can be reasonably treated in the framework of GA [19]. GA also is not without its critics with respect to its approach, and examples have been developed to illustrate how simple random mutation may be superior to GA [23].

3.1.1. GA-SA hybrids

Below, a hybrid parallelized SA-GA technique, parallel recombinative simulated annealing (PRSA), is reported to be useful to speed up SA under some circumstances [24]. While the actual test cases reported in the PRSA paper used SQ exponential temperature schedules on Boltzmann algorithms, the PRSA method is an alternative method of taking advantage of flexibility in searching the parameter space, e.g., as does ASA. Given the use of true SA temperature schedules in PRSA, the advantages in optimal searching of the parameter space afforded by ASA could reasonably be overshadowed by some advantages offered by GA, e.g., added degrees of parallelism and perhaps less sensitivity to initial conditions. It would be interesting to explore the application of ASA techniques to the processes of crossover and mutation in the GA stages of PRSA.

There have been other successful attempts to create hybrid GA-SA algorithms. In one approach, the authors have given a proof that an equilibrium distribution can be achieved by using a Metropolis-type acceptance rule [25].

3.2. Some problems with SQ

To make the point of how quenching can lead to some problems, consider some graphs from a previous study [21]. Fig. 1 uses f_0, an objective function which contains a very large number of local minima [26], and is very difficult to optimize. Trajectories were developed in an SA study [21] using very fast simulated reannealing (VFSR) [16,27], discussed below as ASA [6], and a standard genetic algorithm generator [28]. The number of local minima is given by 10^{5n} − 1; when n = 4 it contains 10^{20} local minima. (Visiting each minimum for a millisecond would take about the present age of the universe to visit all minima.)

\[
f_0(x_1, \ldots, x_n) = \sum_{i=1}^{n}
\begin{cases}
(t_i \,\mathrm{sgn}(z_i) + z_i)^2 \, c \, d_i , & |x_i - z_i| < |t_i| , \\
d_i x_i^2 , & \text{otherwise} ,
\end{cases}
\]
\[
z_i = \big\lfloor |x_i / s_i| + 0.49999 \big\rfloor \,\mathrm{sgn}(x_i)\, s_i ,
\]
\[
s_i = 0.2 , \quad t_i = 0.05 , \quad i = 1, n ,
\]
\[
d_i = \{1.0,\ 1000.0,\ 10.0,\ 100.0,\ \ldots\} ,
\]
\[
c = 0.15 , \quad -1000.0 \le x_i \le 1000.0 , \quad i = 1, n ,
\tag{15}
\]
where s_i, t_i, d_i (repeated in cycles of 4), and c are coefficients defined such that f_0 defines a paraboloid with axes parallel to the coordinates, and a set of holes that increase in depth near the origin.

[Fig. 1 plot: "GA vs. VFSR, f_0"; log-log axes, abscissa "generated," ordinate "cost."]

Fig. 1. Comparison between GA and VFSR is given for function f_0, where the dimension of the space is 4. Solid and short-dashed lines each represent one VFSR run, and dashed and long-dashed lines each represent one GA run. The runs are log-log plotted to show relative convergence rates of each algorithm. The abscissa indicates the number of function calls, while the ordinate shows the best function evaluation found so far. For purposes of these log-log plots, VFSR was cut off arbitrarily at f_0 < 10^{-12}, even when it actually attained 0 to machine precision.

Fig. 2 shows two trajectories when the dimension of f_0 is increased from 4 to 10, presenting a problem with 10^{50} local minima (most of which are beyond a typical workstation's precision and recognition). Clearly, a quenching algorithm might well not have obtained an optimal solution within any practical time. In fact, some standard SA techniques, such as BA and fast annealing (FA, discussed below), can miss global optima as well when optimizing functions with extremely large numbers of minima [29].

[Fig. 2 plot: "VFSR, f_0: n = 10"; log-log axes, abscissa "generated," ordinate "cost."]

Fig. 2. Trajectories for VFSR are given for function f_0, where the dimension of the space is 10. See Fig. 1 for legend.

Fig. 3 uses f_3, the plateau function, generated as the sum of integer threshold values. The five-dimensional space has one minimum and is discontinuous.

\[
f_3(x_1, \ldots, x_5) = 30.0 + \sum_{j=1}^{5} \lfloor x_j \rfloor ,
\quad -5.12 \le x_i \le 5.12 , \quad i = 1, 5 .
\tag{16}
\]

[Fig. 3 plot: "GA vs. VFSR, f_3"; log-log axes, abscissa "generated," ordinate "cost."]

Fig. 3. Comparison between GA and VFSR is given for function f_3. See Fig. 1 for legend.

In Fig. 1, quenching would seem to work quite well if one were using the optimization procedure illustrated by the medium-dashed and long-dashed trajectories, since no clear dramatic benefit seems to be derived by continuing with more detailed searching. However, with respect to the algorithm illustrated by the solid and short-dashed trajectories, especially given no advance knowledge of a given function/data, when should one decide to curtail the search? In this second case, if one does not venture out long enough, the true global minimum will very likely be completely missed!

This point is emphasized again in Fig. 3. If one does not venture out far enough, the global minimum will likely not be reached. Furthermore, here efficiency is irrelevant, since once a favorable approach is determined, the calculation suddenly dives down into the global minimum.
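For readers who wish to experiment, a straightforward Python transcription of the test functions in Eqs. (15) and (16) might read as follows. It is offered as a sketch, assuming the floor-and-sign reconstruction of z_i given above is faithful to the original typesetting; it is not the code used in the cited study.

```python
import math

S, T_HOLE, C = 0.2, 0.05, 0.15          # s_i, t_i, c of Eq. (15)
D_CYCLE = (1.0, 1000.0, 10.0, 100.0)    # d_i, repeated in cycles of 4

def f0(x):
    """Eq. (15): a paraboloid pocked with holes that deepen toward the
    origin; global minimum f0 = 0 at x = 0."""
    total = 0.0
    for i, xi in enumerate(x):
        d = D_CYCLE[i % 4]
        sgn_x = (xi > 0) - (xi < 0)
        z = math.floor(abs(xi / S) + 0.49999) * sgn_x * S
        if abs(xi - z) < T_HOLE:          # inside a hole
            sgn_z = (z > 0) - (z < 0)
            total += C * d * (T_HOLE * sgn_z + z) ** 2
        else:                             # on the paraboloid
            total += d * xi * xi
    return total

def f3(x):
    """Eq. (16): the discontinuous five-dimensional plateau function."""
    return 30.0 + sum(math.floor(xj) for xj in x)

print(f0([0.0, 0.0, 0.0, 0.0]))   # 0.0 at the global minimum
print(f3([-5.12] * 5))            # 0.0 at the global minimum
```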
4. Sampling of SA/SQ applications

Because of the very widespread use of simulated annealing over many disciplines, it is convenient to describe a sampling with respect to specific disciplines. A main purpose here is to demonstrate the nontrivial power of SA/SQ to handle quite complex problems/systems and constraints.

4.1. Traveling salesman problem (TSP)

The first popular paper on simulated annealing that drew the attention of many researchers was focused on optimizing the circuitry of computer chips and on the traveling salesman problem (TSP) [9]. The literature is quite dense with other applications to the TSP, a simple example of an NP-complete problem. The TSP should be included in any list of test problems, if for no other reason than its popularity, but also because it can be considered a prototypical physical model of many quasi-linear systems [30].

In at least one early study, the TSP was used as a test case to try to determine an "efficient" exponential temperature schedule of type Eq. (14), leading to a variant of SQ [31]. In that particular study, advantage was taken of the nature of the TSP and of Boltzmann annealing to test some analytic derivations of expected properties of the algorithm, e.g., of numerical convergence to expected "thermodynamic" properties.

4.2. Circuit design

Applications to more complex circuit design problems, including several layers of logic hierarchy, were approached using SQ [32]. This required placements and routing for tens to hundreds of groups of units, potentially a higher-dimensional task than placing individual connections among units. While SQ has been effective in determining circuitries, an inverse problem also can be approached. A "Boltzmann machine" SQ algorithm, a variant of mean-field annealing discussed below, was hard-wired onto a VLSI chip to perform SQ at very high speeds [33].

4.3. Mathematics/combinatorics

The design of efficient classification and decision trees, an NP-complete problem, greatly benefited from applying SQ, with an exponential temperature schedule

\[
T_{i+1} = \alpha T_i , \quad 0.7 \le \alpha \le 0.99 ,
\tag{17}
\]

more so than from trying the information-theoretic Huffman algorithm [34].

SQ techniques similarly have been useful in approaching graph problems. In one study, searching for the maximum number of edges in graphs of order v ≤ 200 and girth ≥ 5, the authors found that their own variant of "hillclimbing" was superior [35]. Another study using the SQ mean-field annealing algorithm (MFA), described below, found SQ and SA superior over other optimization techniques in determining maximal sets of vertices with all pairs connected by an edge [36]. SQ was used to determine subsquare-free Latin squares [37]. The authors demonstrated that the ability to recognize an optimal solution made it feasible to use SQ instead of SA. Mean-field annealing (MFA), discussed below, was used to apply neural networks to the minimum-cut graph bisection problem, and its speed of solution was found superior to other techniques [38].

Many difficult optimization problems arise concerning matrices. Standard SA was useful in finding optimal block and row-column designs [39]. Another optimization problem used SQ, using low acceptance ratios as the criterion to exit, to optimize row-column permutations designed to diagonalize matrices representing coauthor citation frequencies [40].

4.4. Data analysis

Standard SA was found optimal in some cases, prohibitively slow in others, when applied to exploratory data analysis, i.e., mapping problems of matching distances among patterns in high-dimensional spaces, and clustering problems of labeling patterns into natural subsets [41]. When looking at controlled rounding procedures in Census data, to preserve the anonymity of respondents, SQ, using an exponential temperature schedule

\[
T_j = F\, T_{j-1} , \quad F = (T_{\min}/T_{\max})^{1/N_{\mathrm{cycles}}} ,
\tag{18}
\]

was found superior, both in speed and in finding optimal solutions, to all other techniques tried [42].
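In contrast to the ad hoc choice of α in Eq. (17), the schedule of Eq. (18) fixes the cooling factor from the desired temperature endpoints. A one-function Python sketch (the parameter values are illustrative, not those of the cited study) might read:

```python
def cooling_factor(t_max: float, t_min: float, n_cycles: int) -> float:
    """F of Eq. (18): chosen so that n_cycles geometric steps
    T_j = F * T_{j-1} take the temperature from t_max down to t_min."""
    return (t_min / t_max) ** (1.0 / n_cycles)

F = cooling_factor(t_max=100.0, t_min=1e-4, n_cycles=1000)
print(F)   # ~0.9863: t_max * F**1000 == t_min (up to rounding)
```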
4.5. Imaging

Image reconstruction and filtering require recognition and extraction of patterns from sets of data. Often, an algebraic model is used to develop a filter to aid in this process. Then, parameters of the model must be fit to data, and here SQ techniques have been quite successful [43,44]. The models often are not very nonlinear, but they are high-dimensional. A very difficult problem, in determining both spatial and temporal aspects of estimation of visual motion over sequences of images, was approached by developing a model invoking continuity of the
