Please use this identifier to cite or link to this item: http://localhost:8081/xmlui/handle/123456789/6426
Full metadata record
DC FieldValueLanguage
dc.contributor.authorBharati-
dc.date.accessioned2014-10-13T14:00:37Z-
dc.date.available2014-10-13T14:00:37Z-
dc.date.issued1993-
dc.identifierPh.Den_US
dc.identifier.urihttp://hdl.handle.net/123456789/6426-
dc.guideMohan, C.-
dc.description.abstractThe present thesis is interdisciplinary in nature. In the thesis an attempt has been made to develop some efficient and reliable computational algorithms for obtaining global optimal solutions of realistic nonlinear optimization problems arising in various disciplines. Mathematical techniques for finding the optimal value (the greatest possible value or the least possible value) of a function are called 'Optimization Techniques'. The function to be optimized may be linear or nonlinear. Even its exact mathematical form may or may not be known. It can also be subject to a set of specified constraints. Optimization problems with multiple objectives are also possible in real life situations. Optimization problems arise in various disciplines such as engineering design, manufacturing systems, economics, business administration, physical sciences etc. In view of their practical utility, there is a need for efficient and robust computational algorithms which can numerically solve on computers mathematical models of medium as well as large size optimization problems arising in different fields. Simplex based methods and Karmarkar type algorithms can efficiently solve large size linear programming problems. However, as yet no computational algorithm is available which can solve with comparable efficiency even medium size nonlinear optimization problems. A nonlinear optimization problem may have one or more than one local optimal solution. Whereas every local optimal solution of a linear programming problem is also its global optimal solution, this is not so in the case of a nonlinear problem.
The problem of determining the global optimal solution of a nonlinear problem is even more difficult than determining one of its local optimal solutions. However, in certain practical situations it is necessary to search for the global rather than a local optimal solution. Whereas extensive literature is currently available on optimization techniques which only guarantee a local optimal solution, literature on techniques which assure global optimal solutions is comparatively scanty. This is partly due to the fact that theoretically it is not possible to find the point of global minima (maxima) without having to search in the neighbourhood of every feasible point. As a consequence, no computational algorithm can guarantee to locate the global solution in a finite number of steps. In traditional mathematical treatment of global optimization problems, rather restrictive assumptions must be made. Therefore such an approach is not very relevant to global optimization problems in general. In order to be able to treat such problems, some heuristics generally have to be introduced. Of course, one would expect a global optimization algorithm to be neither purely heuristic nor purely mathematical because of the complexity of the problem to be solved, but to be a proper combination of both. Most of the algorithms which have appeared in the literature are heuristic in nature, so rigorous mathematical theorems regarding their convergence properties cannot be established, even though they have been known to work well in many realistic situations. However, as aptly remarked, this should not be a reason not to study such algorithms further. Rather, the failure of the present mathematical tools in proving or disproving their convergence properties should encourage mathematicians to develop tools suited for analysing such algorithms. The present study is primarily concerned with the computational aspects of global optimization.
In the thesis, an attempt has been made to develop some efficient computational algorithms based on the controlled random search algorithm of Price (142) for solving realistic global optimization problems arising in various disciplines. The algorithm of Price is partly heuristic in nature. It works iteratively in two phases, called the global phase and the local phase. Whereas in the global phase a suitably large sample of random feasible points is chosen, in the local phase these sampled points are manipulated by local searches to yield possible candidates for global minima. In the present work we have tried to develop some superior versions of this algorithm. The algorithms developed in the thesis have also been used to solve mathematical models of certain realistic problems. A chapter-wise summary of the work presented in the thesis is as follows. In Chapter I the basic concepts of optimization are introduced and the relevant literature available on the subject is briefly reviewed. Keeping in view the subject matter of the thesis, literature on global optimization techniques is reviewed in greater detail. In particular, we dwell at some length on the class of random search techniques of global optimization to which our present study pertains. The chapter closes with a summary of the work presented in the subsequent chapters. In Chapter II the possibility of improving the performance of the controlled random search algorithm RST1, developed by Mohan and Shanker for solving constrained nonlinear global optimization problems, has been considered. The RST1 algorithm of Mohan and Shanker (120b) is based on the controlled random search algorithm of Price. For improving its efficiency and reliability, three modified versions of this algorithm (named RST1(a), RST1(b) and RST1(c) respectively) have been developed.
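The two-phase structure described above can be sketched in outline. The following is only an illustrative reconstruction of a Price-type controlled random search under assumptions of our own (array size 10(n+1), and a trial point formed by reflecting one sampled point through the centroid of the others); it is not the thesis's RST1 code, and the test objective is hypothetical.

```python
import random

def crs_minimize(f, bounds, iters=2000, seed=0):
    """Illustrative Price-type controlled random search (not the thesis code).

    Global phase: draw an array A of 10(n+1) random feasible points.
    Local phase: pick n+1 points of A at random, reflect one of them
    through the centroid of the rest, and accept the trial point if it
    is feasible and improves on the current worst member of A.
    """
    rng = random.Random(seed)
    n = len(bounds)
    # Global phase: the 10(n+1) array size follows the recommendation
    # quoted later in this abstract.
    A = [[rng.uniform(lo, hi) for lo, hi in bounds]
         for _ in range(10 * (n + 1))]
    for _ in range(iters):
        simplex = rng.sample(A, n + 1)
        centroid = [sum(p[i] for p in simplex[:-1]) / n for i in range(n)]
        trial = [2 * centroid[i] - simplex[-1][i] for i in range(n)]
        if all(lo <= x <= hi for x, (lo, hi) in zip(trial, bounds)):
            worst = max(A, key=f)
            if f(trial) < f(worst):
                A[A.index(worst)] = trial  # the array gradually contracts
    return min(A, key=f)

# Hypothetical test objective: a convex quadratic with minimum at (1, 2).
best = crs_minimize(lambda x: (x[0] - 1) ** 2 + (x[1] - 2) ** 2,
                    [(-5, 5), (-5, 5)])
```

The acceptance rule, replacing the current worst point of A only when the trial is feasible and improves on it, is what concentrates the array around promising regions over the iterations.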
Like the original algorithm, the three modified versions developed in this chapter also work iteratively in two phases, the global phase and the local phase. With a view to comparing their relative performance, the three modified versions have been tested on thirty-five test problems chosen from different sources in the literature. The problems include linear as well as nonlinear, inequality as well as equality constrained problems. The test problems are listed in Appendix I at the end of the thesis. In order to compare the relative performance of various algorithms, a performance index has also been proposed in this chapter. This index takes into account the success rate, the computational time, and the number of function evaluations made in achieving the correct solution. Based upon their performance on the set of thirty-five test problems, the value of this performance index has been computed for the three modified versions RST1(a), RST1(b), RST1(c) as well as the original algorithm RST1. The results indicate that the performance of the RST1(a) version is superior to the original RST1 algorithm as well as the other two modified versions developed by us in this chapter. Situations in which the algorithms fail to obtain correct global optimal solutions have also been further analysed to look for the possible causes of failure.
Mohan and Shanker also proposed another controlled random search algorithm (named the RST2 algorithm) for solving constrained global optimization problems. The RST2 version differs from the RST1 version primarily in the local phase. Whereas in the RST1 version a simplex of (n+1) points chosen randomly from the array A (generated in the global phase) is used to generate the next possible candidate for global minima, in the RST2 algorithm this objective is achieved by determining the point of minima of a quadratic curve passing through three points chosen randomly from the initial array A (called the quadratic approximation approach). According to the authors, the performance of the RST2 version is superior to the RST1 version, particularly in the case of highly nonlinear problems. In Chapter III, four modified versions of the RST2 algorithm (we call these the RST2(a), RST2(b), RST2(c) and RST2(d) versions) have been developed with a view to improving its performance further. The performance of the original RST2 algorithm as well as the proposed four modified versions has been tested on the same set of 35 test problems which were earlier used in Chapter II for testing the performance of the RST1 based algorithms. The value of the performance index has also been computed for each of these versions in the manner explained in Chapter II. Our computational experience shows that the performance of the RST2(a) and RST2(b) versions is comparable, and superior to the RST2, RST2(c) and RST2(d) versions. The cases in which the algorithms fail to achieve global optimal solutions have been analysed further to search for the possible causes of failure. Certain hybrid versions of the RST1 and RST2 based algorithms have been considered in Chapter IV.
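The quadratic approximation step at the heart of RST2-type algorithms admits a closed form: fit a parabola through three sampled points and take the abscissa of its vertex. A minimal one-dimensional sketch (the function name and example are our own illustration, not the thesis code):

```python
def quadratic_minimum(x1, f1, x2, f2, x3, f3):
    """Vertex abscissa of the parabola through (x1,f1), (x2,f2), (x3,f3).

    This is the one-dimensional quadratic approximation step: fit
    f(x) ~ a*x**2 + b*x + c through the three points and return
    -b / (2*a) in closed form.
    """
    num = (x2**2 - x3**2) * f1 + (x3**2 - x1**2) * f2 + (x1**2 - x2**2) * f3
    den = (x2 - x3) * f1 + (x3 - x1) * f2 + (x1 - x2) * f3
    if den == 0:
        raise ValueError("collinear points: no unique quadratic minimum")
    return 0.5 * num / den

# For a true quadratic the interpolated minimum is exact:
f = lambda x: (x - 3) ** 2
xmin = quadratic_minimum(0.0, f(0.0), 1.0, f(1.0), 5.0, f(5.0))  # 3.0
```

In an actual CRS local phase the three abscissae would be points drawn from the array A, and the interpolated point would be accepted only if it is feasible and improves on the current worst member of A.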
In these hybrid versions, instead of either exclusively using the simplex approach (as used in the case of the RST1 algorithm and its variants) or exclusively using the quadratic approximation approach (as used in the case of the RST2 algorithm and its variants), a mix of the two approaches is tried. In these hybrid algorithms the simplex approach is first tried as usual in the local phase. However, in situations where, in an iteration of the local phase, this simplex approach fails to supply a superior point for inclusion in the array A, the search is repeated with the local phase of the RST2 based algorithms. A trial of the local phase is discarded as a failure only when both the approaches fail to yield a better candidate for inclusion in the array A. Such hybrid versions might succeed in problems where the original versions failed. In other situations these hybrid versions may speed up the rate of convergence of the algorithm to the global optimal solution. Keeping this objective in view, five hybrid versions of the controlled random search algorithm have been developed in this chapter and tried on the same set of 35 test problems given in Appendix I. The Hybrid-1 version is a modification of the RST1(a) version in which the quadratic approximation approach is used whenever the simplex approach of the local phase fails to provide a possible candidate for inclusion in the array A. The Hybrid-2 and Hybrid-3 versions are parallel to the Hybrid-1 version but are respectively based on the RST1(b) and RST1(c) versions in place of the RST1(a) version. Like the Hybrid-2 version, the Hybrid-4 version is also based on the RST1(b) version. However, in this version the quadratic approximation approach is not invoked as soon as the simplex approach fails; rather, the simplex approach is tried a number of times before the quadratic approximation approach is invoked. The Hybrid-5 version is similar to the Hybrid-4 version except that it is based on the RST1(c) version in place of the RST1(b) version.
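The fallback rule of these hybrid versions can be sketched abstractly. In the sketch below, `simplex_trial` and `quadratic_trial` are hypothetical stand-ins for the two local-phase moves; only the try-then-fall-back logic mirrors the description above.

```python
def hybrid_local_phase(f, A, simplex_trial, quadratic_trial):
    """One local-phase iteration of a hybrid scheme (illustrative only).

    Try the simplex move first; if it yields no improving point, fall
    back to the quadratic approximation move.  The iteration counts as
    a failure only when both moves fail, mirroring the fallback rule
    described in the abstract.
    """
    worst = max(A, key=f)
    for move in (simplex_trial, quadratic_trial):
        trial = move(A)
        if trial is not None and f(trial) < f(worst):
            A[A.index(worst)] = trial  # accept the improving point
            return True
    return False

# Toy illustration: the simplex move fails, the quadratic move improves.
A = [[0.0], [1.0], [2.0]]
improved = hybrid_local_phase(lambda x: x[0] ** 2, A,
                              lambda arr: None,   # simplex move fails
                              lambda arr: [0.5])  # quadratic move improves
```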
Computer programs for all five hybrid versions have been developed and tested on the same set of 35 problems given in Appendix I. In order to evaluate their relative performance, the value of the performance index defined in Chapter II has also been computed for each algorithm. Our computational experience shows that the performance of the Hybrid-1 version is superior to the other four hybrid versions developed by us. The situations in which these hybrid algorithms failed to reach global optimal solutions have been further analysed to search for the possible causes of failure. The computational algorithms developed thus far are primarily meant for solving real valued problems in which an unknown variable can assume any real value within the specified feasible domain. These algorithms cannot be directly used to solve integer or mixed integer programming problems where integer solutions for some or all unknown variables are desired. In Chapter V two computational algorithms (namely IRST1 and IRST2) based on the controlled random search approach have been developed for solving integer and mixed integer programming problems. These algorithms, which are primarily designed for obtaining the global optimal solutions of integer and mixed integer programming problems, are essentially suitably modified versions of the RST1(a) and RST2(a) algorithms. Computer programs for both these versions have been developed and their effectiveness tested on a set of fifteen test problems taken from the literature. These problems are listed in Appendix II at the end of the thesis. The problems include single and multiobjective linear and nonlinear integer, mixed integer, and 0-1 programming problems. Based on their performance on these test problems, the value of the performance index has been computed for both these algorithms. Our results show that the performance of the IRST1 algorithm is superior to the IRST2 algorithm, particularly in the case of linear integer problems.
Our computational experience indicates that even though the rate of success of the controlled random search algorithms considered in the earlier chapters is generally quite high, hundred percent success in the search for the global optimal solution cannot be guaranteed with any of these algorithms. Keeping this fact in view, the possibility of the repetitive use of the controlled random search algorithms, in a tunnelling type manner, in their search for the global optimal solutions has been considered in Chapter VI. This approach has been primarily motivated by the tunnelling algorithms of Levy et al. (106) as used by Shanker (158). In this approach a controlled random search algorithm is first used in the normal way to search for the global optimal solution. Once an optimal solution has been obtained, the algorithm is made to search again for the optimal solution of the problem in a domain which excludes the earlier achieved point of optima. In case the new optimal solution achieved is superior to the earlier solution, the earlier solution is discarded in favour of the new one; else the earlier solution is retained. This process is repeated a specified number of times. The best solution obtained in this manner is taken to be the global optimal solution. Suitably updated versions of the RST1(a) and RST2(a) algorithms, incorporating this repetitive tunnelling type concept, have been developed and tested on those test problems of Appendix I where the original versions of the RST1(a) and RST2(a) algorithms failed to achieve hundred percent success. With this tunnelling type use of the RST2(a) algorithm, hundred percent success is achieved in all the problems (except for a problem in which the solution is unbounded) when the tunnelling phase is employed eight times. The performance of the algorithms developed in the thesis has been tested thus far only on standard test problems taken from the literature.
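The repetitive tunnelling-type use described above can be sketched generically. The sketch below is our own illustration under stated assumptions: the earlier optimum is excluded by an infinite penalty inside a fixed-radius ball, and `random_minimize` is a hypothetical stand-in for a single-run optimizer, not RST1(a) or RST2(a).

```python
import random

def random_minimize(f, bounds, n=4000, seed=1):
    """Hypothetical single-run optimizer: best of n random feasible points."""
    rng = random.Random(seed)
    pts = ([rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n))
    return min(pts, key=f)

def repeated_search(f, bounds, minimize, repeats=8, radius=0.5):
    """Tunnelling-style repetition (illustrative sketch, not the thesis code).

    After each run, the neighbourhood of every point already found is
    excluded (here by an infinite penalty), the search is repeated, and
    the best point over all repeats is reported.
    """
    found, best = [], None

    def penalised(x):
        for p in found:
            if sum((a - b) ** 2 for a, b in zip(x, p)) < radius ** 2:
                return float("inf")  # exclude the earlier optimum
        return f(x)

    for _ in range(repeats):
        x = minimize(penalised, bounds)
        if best is None or f(x) < f(best):
            best = x  # keep the better of old and new solutions
        found.append(x)
    return best

# Two basins: the global minimum at x = 2, a local one (value 0.5) at x = -2.
g = lambda x: min((x[0] + 2) ** 2 + 0.5, (x[0] - 2) ** 2)
best = repeated_search(g, [(-5, 5)], random_minimize, repeats=3)
```

Each repeat can only add information: the incumbent solution is discarded only when a strictly better point is found outside the excluded neighbourhoods.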
In these problems the number of unknown variables is up to ten and the number of constraints (excluding bounds on variables) up to seven. In real life situations one may have to deal with optimization problems in which the number of unknown variables and the number of constraints is much larger. In Chapter VII we consider the possibility of using the controlled random search algorithms for solving moderately large size problems. In the case of a controlled random search algorithm, an initial array A of 10(n+1) randomly generated feasible points (where n is the number of unknown variables in the problem) is normally recommended. However, this number becomes unduly large in the case of large size problems. Our computational experience has shown that the size of the initial array A to a large extent determines the requirements of memory space and computational time. In case the controlled random search algorithms developed in the earlier chapters are used in their present forms for solving large size problems in which the number of unknown variables and the number of constraints is large, the computational time and memory space requirements are expected to increase manifold. As a result, it may not be possible to use these algorithms to solve real life problems in practice except when facilities of large computing systems are available. In order to be able to solve such problems on commonly available computing systems such as PCs, there is a need to suitably modify these algorithms to conserve memory space as well as computational time. In order to check whether the size of this initial array A can be reduced in the case of large size problems (without sacrificing in any way the rate of success), a set of 12 moderately large size test problems has been solved in Chapter VII with the RST2(a), RST2(b) and Hybrid-1 algorithms using different sizes of the initial array.
For this purpose initial arrays of 10(n+1), 5(n+1) and 2(n+1) points have been used. The set of test problems includes linear, nonlinear, constrained as well as unconstrained problems. The number of unknown variables in these problems is up to one hundred and the number of constraints (excluding bounds on variables) up to forty-five. Using each of these three algorithms, each problem was solved five times using initial arrays of 10(n+1), 5(n+1) and 2(n+1) random feasible points. Three moderately large size integer/mixed integer programming problems have also been solved with the IRST1 and IRST2 algorithms using initial arrays of 5(n+1) and 2(n+1) random feasible points. Our computational experience shows that for large size problems an initial array of reduced size 5(n+1) (or even 2(n+1)) random points can be used (particularly in the case of problems in which the number of local optimal solutions is not too large) without affecting the rate of success. In the case of the problems considered by us there is, on average, a fifty percent saving in computational time when an initial array of 5(n+1) random feasible points is used in place of 10(n+1) random feasible points. Savings of up to seventy percent in computational time are achieved when initial arrays of 2(n+1) points are used. In the case of problems in which hundred percent success is not achieved with the smaller initial arrays of 2(n+1) points, hundred percent success could be achieved when these problems were solved using these algorithms in a tunnelling type manner. The possibility of solving large size problems by controlled random search algorithms on parallel computing systems has also been briefly discussed in this chapter.
Our main objective in developing the computational algorithms reported in the earlier chapters of this thesis has been to provide the user with some reliable and efficient computational algorithms for solving mathematical models of real life optimization problems which arise in different fields of human activity. In Chapter VIII we present our experience of using the computational algorithms developed in the earlier chapters in solving mathematical models of some real life problems taken from different fields. The use of the RST2(b) algorithm in solving these test problems has been demonstrated in particular. The selected problems include both linear as well as nonlinear problems for which results have been obtained earlier in the literature using some other optimization techniques. Some of the practical problems of engineering design which have been solved with the RST2(b) algorithm are: Gas Transmission Compression Design Problem; Optimal Capacities of Gas Production Facilities Problem; Optimal Design of a Welded Beam; Optimal Design of an Industrial Refrigeration System; Optimization of Riser Design; Solution of an Alkylation Process; Boiler/Turbo Generator System Optimization Problem; Finding the Global Minima of the Transformer Design Function; Optimal Thermohydraulic Performance of Artificially Roughened Solar Air Heaters; and Screening Model of a Water Resource Development System. Limitations and scope of the work presented in the thesis are finally discussed in Chapter IX. Certain conclusions and recommendations based on the present study are also made in this concluding chapter. Our experience has shown that the controlled random search methods can be profitably used to solve the mathematical models of a large variety of real life complex problems of engineering design.
The methods are recommended for solving optimization problems arising in various fields of human activity, particularly those in which the mathematical nature of the functions appearing in the problem is not well understood and more than one local optimal solution is possible. A paper entitled 'Relative performance of certain computational algorithms based on random search for determining the global minima of nonlinear optimization problems' was presented at the 55th Annual Conference of the Indian Mathematical Society, held at Delhi in December 1989. A paper entitled 'Computational algorithms based on random search procedures for solving nonlinear optimization problems' was presented at the Annual Convention of the Operational Research Society of India, held at Delhi in December 1990, and appeared in its proceedings (pp. 734-751). A paper entitled 'Solution of some medium size test problems using controlled random search techniques of global optimization' was presented at the NSC-91 conference held at the University of Roorkee in March 1992 and has been published in its proceedings (pp. 184-188). A paper entitled 'A controlled random search technique for solving integer programming problems' has been published in the proceedings of the OPTEC-TS-92 conference (pp. 13-18), held at Madurai in July 1992.en_US
dc.language.isoenen_US
dc.subjectMATHEMATICSen_US
dc.subjectNON-LINEAR OPTIMIZATIONen_US
dc.subjectCONTROLLED RANDOMen_US
dc.subjectOPTIMIZATION TECHNIQUESen_US
dc.titleCONTROLLED RANDOM SEARCH OPTIMIZATION TECHNIQUES AND THEIR APPLICATIONSen_US
dc.typeDoctoral Thesisen_US
dc.accession.number247237en_US
Appears in Collections:DOCTORAL THESES (Maths)

Files in This Item:
File Description SizeFormat 
247237MATH.pdf
  Restricted Access
10.59 MBAdobe PDFView/Open Request a copy


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.