Hindawi Publishing Corporation
Computational Intelligence and Neuroscience
Volume 2016, Article ID 8932896, 13 pages
http://dx.doi.org/10.1155/2016/8932896

Research Article

Annealing Ant Colony Optimization with Mutation Operator for Solving TSP

Abdulqader M. Mohsen

Department of Computer Science, Faculty of Computing and Information Technology, University of Science and Technology, Sana'a, Yemen

Correspondence should be addressed to Abdulqader M. Mohsen; [email protected]

Received 22 June 2016; Revised 15 October 2016; Accepted 19 October 2016

Academic Editor: Elio Masciari

Copyright © 2016 Abdulqader M. Mohsen. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Ant Colony Optimization (ACO) has been successfully applied to solve a wide range of combinatorial optimization problems such as minimum spanning tree, traveling salesman problem, and quadratic assignment problem. Basic ACO has the drawbacks of trapping into local minima and a low convergence rate. Simulated annealing (SA) and the mutation operator have the jumping ability and global convergence, and local search has the ability to speed up convergence. Therefore, this paper proposed a hybrid ACO algorithm integrating the advantages of ACO, SA, the mutation operator, and a local search procedure to solve the traveling salesman problem. The core of the algorithm is based on ACO. SA and the mutation operator were used to increase the ants' population diversity from time to time, and the local search was used to exploit the current search area efficiently. The comparative experiments, using 24 TSP instances from TSPLIB, show that the proposed algorithm outperformed some well-known algorithms in the literature in terms of solution quality.
1. Introduction

One of the most popular combinatorial optimization problems is the traveling salesman problem (TSP) [1]. Given a set of cities, a salesman attempts to find the shortest, or at least a near-shortest, tour by visiting each city only once and returning to the starting city. TSP is representative of a variety of combinatorial problems and has been studied for the last 40 years. It has many real-world applications, such as the movement of people, postal delivery, school bus routing, garbage collection, the design of hardware devices and radio electronic systems, machine scheduling, integrated circuits, and computer networks [2-4].

Metaheuristic algorithms are formally defined as algorithms inspired by nature and biological behaviors. They produce high-quality solutions by applying a robust iterative generation process that explores and exploits the search space efficiently and effectively. Recently, metaheuristic algorithms have become a promising and active research area [5].

Metaheuristic algorithms such as genetic algorithms (GAs) [7], particle swarm optimization (PSO) [8], tabu search (TS) [7], simulated annealing (SA) [9], and ant colony optimization (ACO) [10] are widely used for solving the TSP. Ant colony optimization, proposed by Dorigo et al. in 1996 [10], simulates the intelligent behavior of real ants seeking food in nature. It has been successfully applied to many optimization problems such as TSP [10], quadratic assignment [11], job-shop scheduling [12], and load balancing in telecommunications networks [13].

When standalone metaheuristic algorithms are applied, there is a possibility of losing population diversity through premature convergence, so the algorithm gets stuck in local optima. Therefore, maintaining diversity and making a tradeoff between diversification and intensification by combining two or more algorithms, in order to produce high-quality solutions and speed up execution, is indispensable [14].
Metaheuristics can be applied to find near-optimal solutions in a reasonable time for different combinatorial optimization problems [6].

For hybrid ACO, the earliest study was conducted by McKendall and Shang [15], who presented a hybrid ant system algorithm to solve the dynamic facility layout problem. Another study was a hybrid ant system algorithm for solving TSP in which ant colony, genetic algorithm, and simulated annealing are hybridized [16]. For the hybrid ant colony system (ACS), many studies were conducted, including the work by Huang and Liao [17], Yoshikawa and Otani [18], Xing et al. [19], Liao et al. [20], Lin et al. [21], Hajipour et al. [22], and Min et al. [16]. The research by Katagiri et al. [23] is an example of a hybrid MAX-MIN Ant System. To solve TSP problems, several hybrid ACO variants with other metaheuristic algorithms such as SA, PSO, ABC, and ANN were proposed. Bontoux and Feillet [24] proposed a hybrid ACO algorithm with local search procedures to solve TSP. Tsai et al. [25] presented a hybrid ACO called the ACOMAC algorithm for solving TSP. The Beam-ACO algorithm is a hybrid of ACO with beam search for solving TSP [26]. Chen and Chien presented a hybrid algorithm, called the genetic simulated annealing ant colony system with particle swarm optimization techniques, for solving TSP [27]. Junqiang and Aijia proposed a hybrid ant colony algorithm (HACO), which combined ACO with delete-cross to overcome the slow convergence speed of ACO [28]. Dong et al. [29] proposed the cooperative genetic ant system (CGAS) for solving TSP, which hybridized GA and ACO to improve the performance of ACO. Recently, Gündüz et al. [30] presented a hybrid of ACO with ABC for solving TSP. In addition, Mahi et al. [31] proposed a new algorithm in which ACO was hybridized with PSO and 3-Opt for solving small TSP instances; PSO was used to determine the optimum values of the two main ACO parameters that affect algorithm performance, and 3-Opt was used to escape from the local optima found by the ACO algorithm. Furthermore, Yousefikhoshbakht et al. [32] proposed REACSGA for solving small TSP instances, which employed a modified ACS to generate initial diversified solutions and GA as an intensification mechanism.

As noted above, previous studies show that ACO still has drawbacks, and the performance of these hybrids decreased dramatically when dealing with large-scale instances. To the best of my knowledge, no research has been done to hybridize the elitist ant system with SA, mutation, and local search. Therefore, this research introduces a new hybrid elitist ant system with SA, a mutation operator, and a local search procedure for solving TSP. Introducing SA helps ACO escape from local optima. On the other hand, determining a good initial solution for SA is difficult, so ACO is useful for generating the SA initial solution. Introducing the mutation operation into the ACO algorithm enhances performance, expands the diversity of the population, and inhibits premature convergence. Whether SA or mutation is applied depends on the diversity level of the population. After applying SA or mutation, the elitist ant system goes through a local search procedure to speed up convergence.

The rest of the paper is structured as follows. Section 2 presents the TSP formulation. Section 3 describes the hybrid algorithm. The experimental results are presented in Section 4. Conclusions and future work are given in Section 5.

2. Traveling Salesman Problem

TSP is an active field of research in computer science. It demonstrates all the aspects of combinatorial optimization and belongs to the set of NP-hard problems, which cannot be solved optimally in polynomial time [33]. Solving TSPs is an important part of many practical applications in daily life [2-4].

TSP is represented as a connected graph G, consisting of a set of vertices V, a set of edges E, and a set of distances d associated with each edge in E and stored in a distance matrix D. The value d_{ij} is the cost of traversing from vertex i ∈ V to vertex j ∈ V, and the diagonal elements d_{ii} are zeros. Given this information, a tour in TSP is formulated as a cyclic permutation π of {1, 2, ..., n}, called a Hamiltonian cycle of G, visiting each vertex in the graph exactly once, where π(i) is the city that follows city i on the tour. The aim in solving TSP is to find a permutation π that minimizes the length of the tour, as shown in

  minimize \sum_{i=1}^{n} d_{i\,\pi(i)}.  (1)

It is worth mentioning that the total number of possible distinct feasible routes covering all n cities is (n − 1)!/2, which makes the problem very hard to solve (NP-hard).

3. Algorithm Design

An overview of ACO, SA, the mutation operator, and the proposed algorithm is presented in the following subsections.

3.1. Ant Colony Optimization. ACO is a population-based metaheuristic algorithm inspired by the foraging behavior of real ants searching for the shortest path between a food source and their nest. Analogously, the artificial ants search for good solutions iteratively over several generations. In each generation, every ant constructs its feasible solution path step by step, guided by a transition rule that is a function of the artificial pheromone and the distance between two cities (heuristic information) [34], as shown in (2). After that, the ant lays down an amount of pheromone trail on the edges of its completed tour. In the next generation, ants are attracted by the pheromone trail, which guides the search towards good-quality solutions.

TSP maps naturally onto the foraging behavior of real ants, so applying ant colony optimization to solve TSP is straightforward. Equation (2) is used to calculate the probability of selecting city j to be visited by ant k after city i:

  P_{ij}^{k} = \frac{\tau_{ij}^{\alpha}\,\eta_{ij}^{\beta}}{\sum_{l \in N_{i}^{k}} \tau_{il}^{\alpha}\,\eta_{il}^{\beta}} \quad \text{if } j \in N_{i}^{k},  (2)

where τ_{ij} denotes the amount of pheromone between city i and city j, η_{ij} is the a priori available heuristic information derived from the distance between city i and city j, α and β are parameters that determine the relative importance of the pheromone trail (dynamic evaluation) versus the heuristic value (static evaluation), and N_{i}^{k} is the set of cities which ant k has not yet visited. The selection probability is therefore proportional to the product of the static and dynamic evaluations.
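The transition rule in (2) amounts to a roulette-wheel selection over the unvisited cities. The following is a minimal Python sketch (not the paper's Java implementation); the list-of-lists matrices and the function name are illustrative assumptions.

```python
import random

def next_city(current, unvisited, tau, eta, alpha, beta, rng):
    """Transition rule (2): choose city j with probability proportional
    to tau[current][j]**alpha * eta[current][j]**beta."""
    weights = [tau[current][j] ** alpha * eta[current][j] ** beta
               for j in unvisited]
    total = sum(weights)
    r = rng.random() * total  # roulette-wheel draw in [0, total)
    acc = 0.0
    for j, w in zip(unvisited, weights):
        acc += w
        if r < acc:
            return j
    return unvisited[-1]  # guard against floating-point round-off
```

With β much larger than α the rule degenerates towards greedy nearest-neighbour choices; with α much larger than β it follows the pheromone trail almost exclusively.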
In the dynamic evaluation, two pheromone update rules are used to calculate the amount of pheromone on each edge between cities. The first rule is the local update rule, shown in

  \tau_{ij}(t+1) = (1 - \rho)\,\tau_{ij}(t) + \sum_{k=1}^{m} \Delta\tau_{ij}^{k}(t),  (3)

where 0 < ρ ≤ 1 is the pheromone trail evaporation rate in the local update rule and m is the number of ants. Thus, the local update rule decreases the pheromone trails by a constant factor (pheromone evaporation). The second rule is the global update rule, which adds an extra amount of pheromone trail to the best route in the population. It is worth mentioning that the best route is the shortest route, as in the elitist strategy [10], the extended version of the original ant system algorithm. Equation (4) defines the global update rule in the elitist ant system:

  \Delta\tau_{ij}^{gb}(t) = \frac{e}{L^{gb}(t)} \text{ if edge } (i,j) \in T^{gb}, \text{ and } 0 \text{ otherwise},  (4)

where T^{gb} is the best route, L^{gb}(t) is the length of the best route, and e is a positive integer. This means that the edges belonging to the global-best tour get an additional amount of pheromone each time the pheromone is updated.

The pseudocode of the basic ACO is illustrated in Algorithm 1.
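The two update rules (3) and (4) can be sketched together as follows. The per-ant deposit Δτ^k = 1/L_k is the usual ant-system choice and is an assumption here, since the text does not spell it out; the elitist bonus e/L^{gb} follows (4).

```python
def update_pheromone(tau, tours, costs, best_tour, best_cost, rho, e):
    """Local rule (3): evaporate all trails by (1 - rho), then let every
    ant deposit on the edges of its closed tour. Elitist rule (4): edges
    of the global-best tour receive an extra e / L_gb."""
    n = len(tau)
    for i in range(n):
        for j in range(n):
            tau[i][j] *= (1.0 - rho)  # evaporation
    for tour, cost in zip(tours, costs):
        for a, b in zip(tour, tour[1:] + tour[:1]):  # edges incl. wrap-around
            tau[a][b] += 1.0 / cost  # assumed ant-system deposit 1/L_k
            tau[b][a] += 1.0 / cost  # symmetric TSP
    for a, b in zip(best_tour, best_tour[1:] + best_tour[:1]):
        tau[a][b] += e / best_cost  # elitist bonus, rule (4)
        tau[b][a] += e / best_cost
    return tau
```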
3.2. Simulated Annealing. SA is a trajectory-based optimization technique. It is basically an iterative improvement strategy with a criterion that sometimes accepts higher-cost configurations. The first attempts to apply SA to combinatorial optimization problems were made in the 1980s [35, 36]. An overview of simulated annealing, its theoretical development, and its application domains is given in [9]. Simulated annealing was inspired by the physical annealing process of solids, in which a solid is first heated and then cooled down slowly to reach a lower energy state. The Metropolis acceptance criterion [37], which models how a thermodynamic system moves from one state to another, is used to determine whether the current solution is accepted or rejected.

In the original Metropolis acceptance criterion, the initial state of a thermodynamic system is chosen at energy G and temperature T. Holding T constant, the initial configuration of the system is perturbed to produce a new configuration, and the change in energy ΔG is calculated. The new configuration is accepted unconditionally if ΔG is negative, whereas if ΔG is positive it is accepted with a probability given by the Boltzmann factor shown in (5), to avoid getting trapped in local optima:

  \exp(-\Delta\mathrm{Cost}/\mathrm{Temperature}).  (5)

This process is repeated until good sampling statistics are reached for the current temperature; then the temperature is decreased and the process is repeated until a frozen state (free energy state) is reached at T = 0. The analogy between the states of a physical system and an optimization problem is as follows: (i) the current configuration of the thermodynamic system corresponds to the current solution of the optimization problem; (ii) the energy of the thermodynamic system corresponds to the objective function of the optimization problem; and (iii) the ground state of the thermodynamic system corresponds to the global minimum of the optimization problem. Algorithm 2 shows the general structure of SA.

3.3. Mutation Operator. The mutation operator is inspired by evolutionary algorithms, in which each ant in the population is given the chance to be altered based on a predefined probability. This operator may help the ant colony algorithm explore new areas of the search space. It can be applied by randomly exchanging the positions of two cities in the tour, which generates a new solution that is not very far from the original one. Algorithm 3 shows the main steps of the mutation operation.

3.4. The Proposed Algorithm. This section introduces a new algorithm, called the annealing elitist ant system with mutation operator and local search, for solving the traveling salesman problem. Given n cities in a TSP instance, first, the elitist ant system generates the initial population of m ants, and each ant randomly chooses a city as its starting city. The pheromone intensity level between any two cities is initialized with a small positive constant s0. The iteration counter, which records the number of iterations the proposed algorithm has executed so far, is initially set to zero. At every interval i during the execution of the algorithm, where i is a predefined number, the elitist ant system calls either simulated annealing or the mutation operation, based on the diversity of the elitist ant system, to enhance the algorithm's performance. If the diversity is greater than 0.5, the algorithm needs intensification, which is achieved by applying simulated annealing to a ratio of the solutions pool. On the contrary, if the diversity is less than 0.5, the algorithm is losing diversity and may get stuck in a local minimum; therefore, the ant algorithm needs to increase the diversity by applying the mutation operator with a predefined probability. The diversity between the fitness values of the ants in the population is measured by calculating the Euclidean Distance (ED), as shown in

  \mathrm{ED} = \frac{\bar{d} - d_{\min}}{d_{\max} - d_{\min}},  (6)

where \bar{d} is the average distance between the fitness of the best ant and the fitness of the remaining ants in the population, and d_min and d_max are the distances of the second-best and the worst ant's fitness, respectively, from the fitness of the best ant [38]. ED has a range of values between 0 and 1. If ED is low, most ants in the population are concentrated around the best ant, so convergence has been achieved. If ED is high, most of the ants are not biased towards the current best ant. Therefore, ED describes the population variation and the lack of similarity between ants.

Algorithm 1: Pseudocode for the ant system.
(1)  begin
(2)    P_best ← CreateHeuristicSolution(ProblemSize);
(3)    P_best_cost ← Cost(P_best);
(4)    Pheromone ← InitializePheromone(P_best_cost);
(5)    while ¬StopCondition() do
(6)      Candidates ← ∅;
(7)      for i = 1 to m do
(8)        S_i ← ProbabilisticStepwiseConstruction(Pheromone, ProblemSize, α, β);
(9)        S_i_cost ← Cost(S_i);
(10)       if S_i_cost ≤ P_best_cost then
(11)         P_best_cost ← S_i_cost;
(12)         P_best ← S_i;
(13)       end
(14)       Candidates ← Candidates ∪ {S_i};
(15)     end
(16)     DecayPheromone(Pheromone, ρ);
(17)     for S_i ∈ Candidates do
(18)       UpdatePheromone(Pheromone, S_i, S_i_cost);
(19)     end
(20)   end
(21)   return P_best
(22)  end

Algorithm 2: Simulated annealing.
(1)  SimulatedAnnealing()
(2)  begin
(3)    Solution = InitialSolution;
(4)    Cost = Evaluate(Solution);
(5)    Temperature = InitialTemperature;
(6)    while Temperature > FinalTemperature do
(7)      NewSolution = Mutate(Solution);
(8)      NewCost = Evaluate(NewSolution);
(9)      ΔCost = NewCost − Cost;
(10)     if (ΔCost ≤ 0) or (e^{−ΔCost/Temperature} > Rand) then
(11)       Cost = NewCost;
(12)       Solution = NewSolution;
(13)     end
(14)     Temperature = CoolingRate × Temperature
(15)   end
(16)   return the best solution;
(17)  end
The following are the steps of the proposed algorithm.

Step 1. After the initial stage mentioned above, each ant in the population builds its tour by applying the transition rule followed by the local pheromone update rule:

(1) Transition Rule. The ith ant decides the next city j to be visited according to (2).

(2) Local Pheromone Update Rule. After all ants complete their tours, the local update rule of the pheromone trails is applied for each route according to (3).

Step 2. Calculate the route cost of each ant. After that, apply the global pheromone update rule, in which an amount of pheromone is added to the best route, that is, the route with the lowest cost. This rule is defined in (4).

Step 3. Calculate the diversity of the population. If the diversity is high, the algorithm needs intensification, achieved by applying SA. Otherwise, the algorithm needs to regain diversity by applying the mutation operator.

Simulated Annealing. If the diversity value is greater than 0.5, the elitist ant system uses the SA technique to enhance the selected ant by feeding the tour reached by the ant system into SA. Here, the tour of the selected ant is encoded as the initial state of SA, and the system temperature is initialized to T0. In SA, two positions in ant A are randomly selected for exchange to generate a new mutated ant A′, and its energy (cost) is evaluated accordingly. If the energy of ant A′ is better than that of ant A, or a random number generated between 0 and 1 is less than the Boltzmann factor defined by (5), then the new ant A′ is accepted; otherwise, ant A remains unchanged. The temperature is then reduced by the annealing schedule, using a factor 0.0 < α < 1.0. This process is repeated iteratively until T0 reaches a predefined low temperature. If the tour of ant A′ is better than that of A, the enhanced ant A′ is included in the ant population; otherwise, ant A remains unchanged and is placed back into the population.

Mutation Operator. To maintain the diversity of the proposed algorithm, the mutation operator is introduced for further exploration of new areas of the search space. If the diversity value is less than 0.5, the elitist ant system uses the mutation operation to enhance the selected ant. In this process, an ant is selected randomly with a predefined probability: if a random number between 0 and 1 is less than the predefined mutation rate, the algorithm selects the ant to mutate. Two positions in ant A are selected randomly for exchange to generate a new mutated ant A′, and its tour cost is calculated.

Algorithm 3: The pseudocode for applying the mutation operator for the TSP problem.
(1)  begin
(2)    Step 1: For ant A, generate a mutation position k1 at random;
(3)    Step 2:
(4)    if (k1 = n) then
(5)      k2 = 1
(6)    else
(7)      k2 = k1 + 1
(8)    end
(9)    Step 3: Exchange A_{k1} and A_{k2};
(10)   Step 4: Get the new ant A′;
(11)  end

Step 4. Apply the local search procedure for further enhancement.

Step 5. Again, apply the global pheromone update rule according to (4).

[Figure 1: The flowchart of the proposed algorithm — generate the initial population of N ants and the initial pheromone on each edge; every ant completes its traveling sequence of cities using the transition rule and the local pheromone update; evaluate the route length of each ant and perform the global pheromone update; if the diversity is high, apply the simulated annealing algorithm, otherwise apply the mutation operation; apply the local search procedure, re-evaluate the route lengths, and perform the global pheromone update; reduce the temperature by the annealing schedule; repeat until the maximum number of generations is reached.]
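The exchange step of Algorithm 3 — swap the city at a random position k1 with its successor, wrapping around at the end of the tour — can be sketched as:

```python
import random

def mutate(tour, rng):
    """Adjacent two-city exchange from Algorithm 3: pick k1 at random and
    swap positions k1 and k2, where k2 wraps to the first position when
    k1 is the last one. Returns the mutated copy A' and leaves A intact."""
    n = len(tour)
    k1 = rng.randrange(n)
    k2 = 0 if k1 == n - 1 else k1 + 1
    child = tour[:]
    child[k1], child[k2] = child[k2], child[k1]
    return child
```

Because only two neighbouring positions are exchanged, the mutated tour stays close to the original, as the text requires.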
Step 6. If the termination condition is satisfied, then return the best route with its length. Otherwise, go to Step 1. Figure 1 shows the flowchart of the proposed algorithm.

4. Experimental Results and Discussion

This section presents the experimental results used to examine the validity and performance of the proposed algorithm. The experiments were conducted using 24 standard TSP benchmark problems of different lengths from TSPLIB [39, 40]. The proposed algorithm was implemented in Java on an Intel Core i7 PC; the object-oriented paradigm (OOP) and different data structures were used to optimize its code. All experiments were conducted on symmetric TSP instances.

As in other metaheuristic algorithms, the quality of the solutions created by the proposed algorithm is affected largely by the values of its parameters. Thus, a number of different alternative values were examined to tune the parameters of the proposed algorithm. Table 1 lists the parameters whose values were varied, while default values were taken for the other parameters. Finally, the selected parameter values are those that achieved the best computational results with respect to solution quality. All parameter values were determined by experiments on the eil51, lin318, and fl1400 TSP instances, which represent small, medium, and large instances, respectively. In these experiments, the proposed algorithm was stopped on reaching the optimal solution or after 1000 iterations. The optimum combinations of the parameters are shown in Table 1. Afterwards, the algorithm was initialized with a population of 25 ants using α = 1 and β = 5, which control the influence of the pheromone trail and the heuristic information (edge cost) in the selection of the next city by the transition rule. The parameter q0, which specifies the intensification/diversification rate, was set to 0.05. The initial value of the pheromone trail was set to τ0 = 0.5, and the maximum number of iterations was 1000.

Table 1: Parameter setting of the proposed algorithm.

Parameter      | Tested values                 | Optimum value
α              | 1, 2, 3, 4, 5                 | 1 or 2
β              | 1, 3, 5, 7, 9                 | 5, 7, or 9
ρ              | 0.01, 0.05, 0.1, 0.5, 0.7     | 0.1 or less
q0             | 0.01, 0.05, 0.1, 0.5, 0.7     | 0.05
Number of ants | 25, 50, 75, 100               | 25
Q              | 50, 100, 500                  | 100
SA Temp.       | 100, 1000, 5000, 10000        | 1000
SA Alpha       | 0.5, 0.7, 0.9, 0.99           | 0.99
Mutation rate  | 0.001, 0.01, 0.1, 0.5         | 0.1

All of the instances included in TSPLIB have already been examined in the literature, and their optimality results can be used to compare algorithms according to best and average values. Different instances with different sizes were selected. These instances can be classified into three groups based on their lengths. The first is the small group, which includes 8 instances varying in length between 51 and 100 cities. The second is the medium group, which includes 10 instances varying in length between 101 and 318 cities. The third is the large-scale group, which includes 5 instances with lengths between 575 and 1655 cities. The results were collected after conducting the experiments 10 times for each instance, recording the best results, the average results, and the standard deviation for comparison with previous work. Two different experiments were conducted for the evaluation, both configured according to the parameter settings shown in Table 1.

In the first experiment, the proposed algorithm was compared with the basic elitist ant system algorithm (EAS). Six evaluation measures were used to evaluate both algorithms: best solution, worst solution, average solution, standard deviation, number of iterations, and running time. Table 2 shows the superiority of the proposed algorithm over EAS in computational results for all evaluation measures. As can be seen from the tabulated values, the quality of the solutions obtained by the proposed algorithm was significantly better than the solutions obtained by EAS. This superiority may be attributed to the introduction of SA, which exploited the detected promising solutions to speed up the learning capability of the algorithm. Meanwhile, the addition of the mutation operator enhanced the global search capability of the algorithm and prevented it from being trapped in local optima, thus improving its performance. Moreover, the introduction of the local search strategy increased the speed of algorithm convergence.

Table 2: Results obtained by the EAS and the proposed algorithm for the test problems according to the best solution, worst solution, average solution, standard deviation, number of iterations, and running time of the algorithms.

Instance | Opt.   | Method   | Best   | Worst  | Average  | Std. dev. | Iteration | Time (s)
eil51    | 426    | EAS      | 430    | 474    | 442.3    | 14.38     | 999       | 0.01
eil51    | 426    | Proposed | 426    | 449    | 426      | 0.00      | 2         | 0.00
eil76    | 538    | EAS      | 547    | 653    | 563.9    | 13.25     | 999       | 0.02
eil76    | 538    | Proposed | 538    | 566    | 538      | 0.00      | 2         | 0.01
eil101   | 629    | EAS      | 661    | 778    | 677.1    | 9.27      | 999       | 0.04
eil101   | 629    | Proposed | 629    | 667    | 629      | 0.00      | 3         | 0.02
berlin52 | 7542   | EAS      | 7633   | 10674  | 7816.9   | 141.35    | 999       | 0.01
berlin52 | 7542   | Proposed | 7542   | 10804  | 7542     | 0.00      | 3         | 0.06
bier127  | 118282 | EAS      | 121978 | 151016 | 125064.1 | 1866.12   | 999       | 0.07
bier127  | 118282 | Proposed | 118282 | 153876 | 118282   | 0.00      | 4         | 0.05
ch130    | 6110   | EAS      | 6386   | 7757   | 6515.3   | 83.80     | 999       | 0.07
ch130    | 6110   | Proposed | 6110   | 6519   | 6110     | 0.00      | 4         | 0.05
ch150    | 6528   | EAS      | 6734   | 7771   | 6865.5   | 101.49    | 999       | 0.07
ch150    | 6528   | Proposed | 6528   | 6937   | 6528     | 0.00      | 3         | 0.03
rd100    | 7910   | EAS      | 8240   | 10161  | 8422.2   | 151.46    | 999       | 0.04
rd100    | 7910   | Proposed | 7910   | 8540   | 7910     | 0.00      | 2         | 0.01
lin105   | 14379  | EAS      | 14756  | 18010  | 15102.3  | 278.80    | 999       | 0.02
lin105   | 14379  | Proposed | 14379  | 15315  | 14379    | 0.00      | 2         | 0.01
lin318   | 42029  | EAS      | 44981  | 58151  | 46293.6  | 917.35    | 999       | 1.42
lin318   | 42029  | Proposed | 42029  | 43715  | 42042.4  | 42.37     | 12        | 0.54
kroA100  | 21282  | EAS      | 22085  | 29227  | 22603.8  | 450.09    | 999       | 0.04
kroA100  | 21282  | Proposed | 21282  | 23228  | 21282    | 0.00      | 2         | 0.01
kroA150  | 26524  | EAS      | 27560  | 31282  | 28370    | 504.96    | 999       | 0.12
kroA150  | 26524  | Proposed | 26524  | 27793  | 26524    | 0.00      | 4         | 0.05
kroA200  | 29368  | EAS      | 31499  | 41604  | 31886.1  | 249.14    | 999       | 0.11
kroA200  | 29368  | Proposed | 29368  | 30829  | 29368    | 0.00      | 9         | 0.20
kroB100  | 22141  | EAS      | 22652  | 27908  | 23134.8  | 281.92    | 999       | 0.04
kroB100  | 22141  | Proposed | 22141  | 28127  | 22141    | 0.00      | 2         | 0.01
kroB150  | 26130  | EAS      | 27248  | 34550  | 28099.5  | 512.53    | 999       | 0.14
kroB150  | 26130  | Proposed | 26130  | 27556  | 26130    | 0.00      | 2         | 0.02
kroB200  | 29437  | EAS      | 31054  | 38684  | 32019    | 559.76    | 999       | 0.23
kroB200  | 29437  | Proposed | 29437  | 31024  | 29437    | 0.00      | 11        | 0.24
kroC100  | 20749  | EAS      | 21194  | 22485  | 21777.8  | 348.02    | 999       | 0.03
kroC100  | 20749  | Proposed | 20749  | 26834  | 20749    | 0.00      | 2         | 0.01
kroD100  | 21294  | EAS      | 22205  | 29101  | 22872.5  | 457.97    | 999       | 0.03
kroD100  | 21294  | Proposed | 21294  | 22627  | 21294    | 0.00      | 3         | 0.02
kroE100  | 22068  | EAS      | 22699  | 31216  | 23460.4  | 486.84    | 999       | 0.07
kroE100  | 22068  | Proposed | 22068  | 23078  | 22068    | 0.00      | 2         | 0.01
rat575   | 6773   | EAS      | 7365   | 8398   | 7436.6   | 72.57     | 999       | 9.11
rat575   | 6773   | Proposed | 6777   | 6986   | 6787.1   | 10.13     | 999       | 46.23
rat783   | 8806   | EAS      | 9706   | 12596  | 9860.8   | 128.84    | 999       | 26.79
rat783   | 8806   | Proposed | 8811   | 9086   | 8829.7   | 15.02     | 999       | 66.85
rl1323   | 270199 | EAS      | 297599 | 392473 | 304626.9 | 5122.59   | 999       | 110.29
rl1323   | 270199 | Proposed | 270309 | 287323 | 270841.7 | 403.43    | 999       | 194.62
fl1400   | 20127  | EAS      | 22432  | 43829  | 23669.5  | 820.80    | 999       | 93.12
fl1400   | 20127  | Proposed | 20194  | 29176  | 20233.4  | 26.01     | 999       | 34.24
d1655    | 62128  | EAS      | 68182  | 92988  | 71629.3  | 1905.70   | 999       | 33.58
d1655    | 62128  | Proposed | 62291  | 68922  | 62457.5  | 90.39     | 999       | 199.58

Figure 2 shows the behavior of the proposed algorithm on three TSP instances. They were chosen to represent short, medium, and large instances, and as such, the findings can be generalized to the other instances.

Figure 2(a) shows the results from a single run for the kroB200 instance. On this instance, the proposed algorithm reached the optimal solution, 29437, in all ten runs. The figure depicts the best, average, and worst solutions obtained during the run, from which the strong optimization capability of the proposed algorithm can be inferred. Diversified solutions, high convergence speed, and stagnation avoidance can be observed in the convergence behavior of the algorithm. Specifically, the figure shows how quickly the optimal solution, 29437, was found: after only 26 iterations. The trend drops quickly towards the optimum in the early iterations until it approaches the optimal solution, and it is clear that no stagnation happened during the search process.

Figure 2(b) illustrates the results from a single run of the proposed algorithm for the lin318 instance. In nine out of ten runs, the proposed algorithm reached the optimal solution, 42029. The graph shows the convergence behavior of the algorithm: it is clearly noticeable that the proposed algorithm has high convergence speed with diversified solutions. The algorithm succeeded in avoiding potential stagnation and premature convergence, and it converged to the best solution after at most 106 iterations. The search space for this instance is of medium size, yet the algorithm had no problem quickly finding the optimum solution.

Figure 2(c) demonstrates a single run for the rl1323 instance, containing 1323 cities. This graph plots the convergence behavior of the proposed algorithm over 1000 iterations. The algorithm reached a solution with cost near the optimal solution at iteration 293 and never changed afterwards. As can be seen from the figure, the diversity was high in the initial stage due to the variation of the population. However, as the fitness function decreased, the diversity also decreased until the suboptimal solution, 270388, was attained. This smooth convergence is due to the good balance between diversification and intensification that the proposed algorithm provides. Although the search space for this instance is large, no stagnation happened during the search process.

[Figure 2: The performance of the proposed algorithm with three TSP instances representing short, medium, and large instances — tour length versus iterations (best, worst, and average) for (a) kroB200, (b) lin318, and (c) rl1323.]

In the second experiment, the proposed algorithm was compared with four state-of-the-art metaheuristic algorithms: Chen and Chien [27], Wang et al. [41], Yousefikhoshbakht et al. [32], and Mahi et al. [31]. The authors of these algorithms showed that their algorithms outperformed the other algorithms in the literature. The best found solution, the average over all runs, the standard deviation, the percentage deviation of the best results (PD_Best), and the percentage deviation of the average results (PD_Avg) were used as evaluation measures for the comparison. The results are presented in Tables 3 and 4 and Figure 3. PD_Best was calculated by (7) and PD_Avg by (8):

  \mathrm{PD\_Best} = \frac{\text{best solution} - \text{best known solution}}{\text{best known solution}} \times 100,  (7)

  \mathrm{PD\_Avg} = \frac{\text{average solution} - \text{best known solution}}{\text{best known solution}} \times 100.  (8)

In Tables 3 and 4, column 1 shows the TSP instance, column 2 the best known solution, column 3 the algorithm, column 4 the best solution over all runs, column 5 the average solution over all runs, column 6 the standard deviation, column 7 the percentage deviation of the best result (PD_Best) from the best known solution, and column 8 the percentage deviation of the average result (PD_Avg) from the best known solution.

In Table 3, the proposed algorithm is compared with Chen and Chien [27] and Wang et al. [41] on 24 benchmark instances with 51 to 1655 cities. As can be seen in Table 3, over the 24 TSP instances the proposed algorithm was much better than both algorithms on all medium and large instances, such as lin318, rat575, rat783, rl1323, fl1400, and d1655, with respect to the five evaluation measures mentioned above. There was no significant difference between the proposed algorithm, Chen and Chien [27], and Wang et al. [41] on the small instances with at most 100 cities, with respect to the best found solution and PD_Best.
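The deviation measures (7) and (8) are straightforward to compute; as a quick consistency check, the lin318 and rat575 entries of Table 3 can be reproduced from the raw values:

```python
def pd(value, best_known):
    """Percentage deviation of a solution value from the best known
    solution, as in (7) for best values and (8) for averages."""
    return (value - best_known) / best_known * 100.0
```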
with respect to best found solution and PD Best, where

    PD Best = ((best solution − best known solution) / best known solution) × 100.  (7)

Table 3: A comparison of the proposed algorithm with Chen and Chien [27] and Wang et al. [41] according to the best solution, average solution, standard deviation, the percentage deviation of the average solution (PD Avg), and the percentage deviation of the best solution (PD Best) found by the algorithms.

Instance | Opt. | Method | Best | Average | Std. dev. | PD Best | PD Avg
eil51 | 426 | Proposed algorithm | 426 | 426 | 0.000 | 0.000 | 0.000
eil51 | 426 | Chen and Chien [27] | 427 | 427.27 | 0.450 | 0.235 | 0.298
eil51 | 426 | Wang et al. [41] | 426 | 426 | N/A | 0.000 | 0.000
eil76 | 538 | Proposed algorithm | 538 | 538 | 0.000 | 0.000 | 0.000
eil76 | 538 | Chen and Chien [27] | 538 | 540.2 | 2.940 | 0.000 | 0.409
eil76 | 538 | Wang et al. [41] | 538 | 538 | N/A | 0.000 | 0.000
eil101 | 629 | Proposed algorithm | 629 | 629 | 0.000 | 0.000 | 0.000
eil101 | 629 | Chen and Chien [27] | 630 | 635.23 | 3.590 | 0.159 | 0.990
eil101 | 629 | Wang et al. [41] | 629 | 629 | N/A | 0.000 | 0.000
berlin52 | 7542 | Proposed algorithm | 7542 | 7542 | 0.000 | 0.000 | 0.000
berlin52 | 7542 | Chen and Chien [27] | 7542 | 7542 | 0.000 | 0.000 | 0.000
berlin52 | 7542 | Wang et al. [41] | 7542 | 7542 | N/A | 0.000 | 0.000
bier127 | 118282 | Proposed algorithm | 118282 | 118282 | 0.000 | 0.000 | 0.000
bier127 | 118282 | Chen and Chien [27] | 118282 | 119421.8 | 580.830 | 0.000 | 0.964
bier127 | 118282 | Wang et al. [41] | 118282 | 118282 | N/A | 0.000 | 0.000
ch130 | 6110 | Proposed algorithm | 6110 | 6110 | 0.000 | 0.000 | 0.000
ch130 | 6110 | Chen and Chien [27] | 6141 | 6205.63 | 43.700 | 0.507 | 1.565
ch130 | 6110 | Wang et al. [41] | 6110 | 6112.4 | N/A | 0.000 | 0.039
ch150 | 6528 | Proposed algorithm | 6528 | 6528 | 0.000 | 0.000 | 0.000
ch150 | 6528 | Chen and Chien [27] | 6528 | 6563.7 | 22.450 | 0.000 | 0.547
ch150 | 6528 | Wang et al. [41] | 6528 | 6531.84 | N/A | 0.000 | 0.059
rd100 | 7910 | Proposed algorithm | 7910 | 7910 | 0.000 | 0.000 | 0.000
rd100 | 7910 | Chen and Chien [27] | 7910 | 7987.57 | 62.060 | 0.000 | 0.981
rd100 | 7910 | Wang et al. [41] | 7910 | 7910 | N/A | 0.000 | 0.000
lin105 | 14379 | Proposed algorithm | 14379 | 14379 | 0.000 | 0.000 | 0.000
lin105 | 14379 | Chen and Chien [27] | 14379 | 14406.37 | 37.280 | 0.000 | 0.190
lin105 | 14379 | Wang et al. [41] | 14379 | 14379 | N/A | 0.000 | 0.000
lin318 | 42029 | Proposed algorithm | 42029 | 42042.4 | 42.375 | 0.000 | 0.032
lin318 | 42029 | Chen and Chien [27] | 42487 | 43002.9 | 307.510 | 1.090 | 2.317
lin318 | 42029 | Wang et al. [41] | 42081 | 42204.16 | N/A | 0.124 | 0.417
kroA100 | 21282 | Proposed algorithm | 21282 | 21282 | 0.000 | 0.000 | 0.000
kroA100 | 21282 | Chen and Chien [27] | 21282 | 21370.47 | 123.360 | 0.000 | 0.416
kroA100 | 21282 | Wang et al. [41] | 21282 | 21284.24 | N/A | 0.000 | 0.011
kroA150 | 26524 | Proposed algorithm | 26524 | 26524 | 0.000 | 0.000 | 0.000
kroA150 | 26524 | Chen and Chien [27] | 26524 | 26899.2 | 133.020 | 0.000 | 1.415
kroA150 | 26524 | Wang et al. [41] | 26524 | 26528.12 | N/A | 0.000 | 0.016
kroA200 | 29368 | Proposed algorithm | 29368 | 29368 | 0.000 | 0.000 | 0.000
kroA200 | 29368 | Chen and Chien [27] | 29383 | 29738.73 | 356.070 | 0.051 | 1.262
kroA200 | 29368 | Wang et al. [41] | 29368 | 29374.84 | N/A | 0.000 | 0.023
kroB100 | 22141 | Proposed algorithm | 22141 | 22141 | 0.000 | 0.000 | 0.000
kroB100 | 22141 | Chen and Chien [27] | 22141 | 22282.87 | 183.990 | 0.000 | 0.641
kroB100 | 22141 | Wang et al. [41] | 22141 | 22186.28 | N/A | 0.000 | 0.205
kroB150 | 26130 | Proposed algorithm | 26130 | 26130 | 0.000 | 0.000 | 0.000
kroB150 | 26130 | Chen and Chien [27] | 26130 | 26448.33 | 266.760 | 0.000 | 1.218
kroB150 | 26130 | Wang et al. [41] | 26130 | 26133.2 | N/A | 0.000 | 0.012
kroB200 | 29437 | Proposed algorithm | 29437 | 29437 | 0.000 | 0.000 | 0.000
kroB200 | 29437 | Chen and Chien [27] | 29541 | 30035.23 | 357.480 | 0.353 | 2.032
kroB200 | 29437 | Wang et al. [41] | 29437 | 29439.64 | N/A | 0.000 | 0.009
kroC100 | 20749 | Proposed algorithm | 20749 | 20749 | 0.000 | 0.000 | 0.000
kroC100 | 20749 | Chen and Chien [27] | 20749 | 20878.97 | 158.640 | 0.000 | 0.626
kroC100 | 20749 | Wang et al. [41] | 20749 | 20749 | N/A | 0.000 | 0.000
kroD100 | 21294 | Proposed algorithm | 21294 | 21294 | 0.000 | 0.000 | 0.000
kroD100 | 21294 | Chen and Chien [27] | 21309 | 21620.47 | 226.600 | 0.070 | 1.533
kroD100 | 21294 | Wang et al. [41] | 21294 | 21297.2 | N/A | 0.000 | 0.015
kroE100 | 22068 | Proposed algorithm | 22068 | 22068 | 0.000 | 0.000 | 0.000
kroE100 | 22068 | Chen and Chien [27] | 22068 | 22183.47 | 103.320 | 0.000 | 0.523
kroE100 | 22068 | Wang et al. [41] | 22068 | 22075.52 | N/A | 0.000 | 0.034
rat575 | 6773 | Proposed algorithm | 6777 | 6787.1 | 10.126 | 0.059 | 0.208
rat575 | 6773 | Chen and Chien [27] | 6891 | 6933.87 | 27.620 | 1.742 | 2.375
rat575 | 6773 | Wang et al. [41] | 6807 | 6830.88 | N/A | 0.502 | 0.855
rat783 | 8806 | Proposed algorithm | 8811 | 8829.7 | 15.019 | 0.057 | 0.269
rat783 | 8806 | Chen and Chien [27] | 8988 | 9079.23 | 52.690 | 2.067 | 3.103
rat783 | 8806 | Wang et al. [41] | 8859 | 8877.92 | N/A | 0.602 | 0.817
rl1323 | 270199 | Proposed algorithm | 270309 | 270841.7 | 403.434 | 0.041 | 0.238
rl1323 | 270199 | Chen and Chien [27] | 277642 | 280181.5 | 1761.660 | 2.755 | 3.694
rl1323 | 270199 | Wang et al. [41] | 270919 | 271481.6 | N/A | 0.266 | 0.475
fl1400 | 20127 | Proposed algorithm | 20194 | 20233.4 | 26.014 | 0.333 | 0.529
fl1400 | 20127 | Chen and Chien [27] | 20593 | 21349.63 | 527.880 | 2.315 | 6.075
fl1400 | 20127 | Wang et al. [41] | 20314 | 20428.48 | N/A | 0.929 | 1.498
d1655 | 62128 | Proposed algorithm | 62291 | 62457.5 | 90.388 | 0.262 | 0.530
d1655 | 62128 | Chen and Chien [27] | 64151 | 65621.13 | 1031.940 | 3.256 | 5.622
d1655 | 62128 | Wang et al. [41] | 62463 | 62670.52 | N/A | 0.539 | 0.873

In Table 4, the proposed algorithm was compared with Yousefikhoshbakht et al. [32] and Mahi et al. [31] on 15 and 8 benchmark instances, respectively, with cities from 51 to 200 as reported in their studies. As can be seen in Table 4, the values of the columns Best and PD Best show that there was no significant difference between the proposed algorithm and Yousefikhoshbakht et al. [32] on the small instances with 100 cities or fewer. For the larger instances, the proposed algorithm obtained much better results than Yousefikhoshbakht et al. [32]. Compared with Mahi et al. [31], the proposed algorithm achieved better results on all 8 instances with respect to the best found solution, average solution, PD Best, and PD Avg.
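As a concrete check, the percentage deviation of equation (7) can be computed directly; the same formula applied to the average solution gives PD Avg. The short sketch below (the function name is ours, not from the paper) reproduces entries of Table 3:

```python
def percentage_deviation(found, best_known):
    """Percentage deviation of a tour length from the best known solution,
    as in Eq. (7): PD = (found - best_known) / best_known * 100."""
    return (found - best_known) / best_known * 100.0

# Reproduce entries of Table 3 (rounded to three decimals):
print(round(percentage_deviation(427, 426), 3))      # Chen and Chien on eil51: PD Best = 0.235
print(round(percentage_deviation(6777, 6773), 3))    # proposed algorithm on rat575: PD Best = 0.059
print(round(percentage_deviation(6787.1, 6773), 3))  # proposed algorithm on rat575: PD Avg = 0.208
```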
Figure 3 shows a comparison of the proposed algorithm to Chen and Chien [27] and Wang et al. [41] based on the percentage deviations of the average solution from the best known solution. It is clear that the proposed algorithm achieved significantly smaller percentage deviations than Chen and Chien [27] and Wang et al. [41] on the large-scale TSP instances, that is, lin318, rat575, rat783, rl1323, fl1400, and d1655.

Figure 3: Percentage deviations of the average solution to the best known solution of the large-scale TSP instances for the proposed algorithm and the other algorithms. (Bar chart, vertical axis 0 to 7, comparing the proposed algorithm, Chen and Chien [27], and Wang et al. [41] on lin318, rat575, rat783, rl1323, fl1400, and d1655.)

In summary, the numerical results show that the proposed algorithm was effective. It was able to solve small and large instances better than the existing algorithms. This is because of the proposed algorithm's capability of searching for the optimal solution until the last iterations without stagnation or premature convergence, especially for the medium and large TSP instances, compared to the other algorithms. In general, the results indicate that the structure of the proposed algorithm, which depends on the concepts of embedding simulated annealing, a mutation operator, and a local search procedure, achieved the balance between diversification and intensification.
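To make the diversification mechanism credited above concrete, the sketch below illustrates only that step: SA-style acceptance of a swap-mutated tour, where a worse tour is accepted with Boltzmann probability exp(−Δ/T). It is a minimal illustration on a made-up 5-city instance; the function names, cooling schedule, and all parameter values are our assumptions, not the paper's exact procedure.

```python
import math
import random

def tour_length(tour, dist):
    """Total length of a closed tour over a symmetric distance matrix."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def swap_mutation(tour, rng):
    """Exchange two randomly chosen cities (a simple mutation operator)."""
    a, b = rng.sample(range(len(tour)), 2)
    mutated = tour[:]
    mutated[a], mutated[b] = mutated[b], mutated[a]
    return mutated

def anneal_step(tour, dist, temperature, rng):
    """Propose a mutated tour; accept it if shorter, or with probability
    exp(-delta / T) if longer (the SA 'jumping ability' used to escape
    local minima)."""
    candidate = swap_mutation(tour, rng)
    delta = tour_length(candidate, dist) - tour_length(tour, dist)
    if delta < 0 or rng.random() < math.exp(-delta / temperature):
        return candidate
    return tour

rng = random.Random(0)
# Toy symmetric 5-city instance (made-up distances, not a TSPLIB instance).
dist = [[0, 2, 9, 10, 7],
        [2, 0, 6, 4, 3],
        [9, 6, 0, 8, 5],
        [10, 4, 8, 0, 6],
        [7, 3, 5, 6, 0]]
tour = [0, 1, 2, 3, 4]
T = 10.0
for _ in range(200):
    tour = anneal_step(tour, dist, T, rng)
    T *= 0.95  # geometric cooling schedule (assumed)
print(tour, tour_length(tour, dist))
```

In the full algorithm this acceptance test would be applied to tours produced by the ant colony, with 2-opt-style local search exploiting the current search area between annealing steps.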
