Optimization: Algorithms and Applications presents a variety of solution techniques for optimization problems, emphasizing concepts rather than rigorous mathematical details and proofs. The book covers both gradient and stochastic methods as solution techniques for unconstrained and constrained optimization problems. It discusses the conjugate gradient method, the Broyden–Fletcher–Goldfarb–Shanno algorithm, the Powell method, the penalty function method, the augmented Lagrange multiplier method, sequential quadratic programming, the method of feasible directions, genetic algorithms, particle swarm optimization (PSO), simulated annealing, ant colony optimization, and tabu search methods. The author shows how to solve non-convex multi-objective optimization problems using simple modifications of the basic PSO code. The book also introduces multidisciplinary design optimization (MDO) architectures (it is one of the first optimization books to do so) and develops software codes for the simplex method and the affine-scaling interior point method for solving linear programming problems. In addition, it examines Gomory's cutting plane method, the branch-and-bound method, and Balas' algorithm for integer programming problems.

The author follows a step-by-step approach to developing MATLAB® codes from the algorithms. He then applies the codes to solve both standard functions taken from the literature and real-world applications, including a complex trajectory design problem of a robot, a portfolio optimization problem, and a multi-objective shape optimization problem of a reentry body. This hands-on approach improves your understanding and confidence in handling different solution methods.

Features
• Explains how to solve complex optimization problems using gradient-based and stochastic methods
• Describes different architectures to handle MDO problems
• Solves many practical problems using MATLAB
• Links the software code to the corresponding algorithms in a user-friendly way
• Provides the MATLAB codes for download on the book's CRC Press web page (www.crcpress.com)

OPTIMIZATION: Algorithms and Applications

Rajesh Kumar Arora
Senior Engineer, Vikram Sarabhai Space Centre
Indian Space Research Organization
Trivandrum, India

MATLAB® is a trademark of The MathWorks, Inc. and is used with permission. The MathWorks does not warrant the accuracy of the text or exercises in this book. This book's use or discussion of MATLAB® software or related products does not constitute endorsement or sponsorship by The MathWorks of a particular pedagogical approach or particular use of the MATLAB® software.

CRC Press, Taylor & Francis Group, 6000 Broken Sound Parkway NW, Suite 300, Boca Raton, FL 33487-2742
© 2015 by Taylor & Francis Group, LLC. CRC Press is an imprint of Taylor & Francis Group, an Informa business.
No claim to original U.S. Government works. Version Date: 20150205.
International Standard Book Number-13: 978-1-4987-2115-8 (eBook - PDF)

This book contains information obtained from authentic and highly regarded sources. Reasonable efforts have been made to publish reliable data and information, but the author and publisher cannot assume responsibility for the validity of all materials or the consequences of their use.
The authors and publishers have attempted to trace the copyright holders of all material reproduced in this publication and apologize to copyright holders if permission to publish in this form has not been obtained. If any copyright material has not been acknowledged, please write and let us know so we may rectify it in any future reprint. Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmitted, or utilized in any form by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying, microfilming, and recording, or in any information storage or retrieval system, without written permission from the publishers.

For permission to photocopy or use material electronically from this work, please access www.copyright.com (http://www.copyright.com/) or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400. CCC is a not-for-profit organization that provides licenses and registration for a variety of users. For organizations that have been granted a photocopy license by the CCC, a separate system of payment has been arranged.

Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe.

Visit the Taylor & Francis Web site at http://www.taylorandfrancis.com and the CRC Press Web site at http://www.crcpress.com

Dedicated to my mother

Contents

Preface
Author
1. Introduction
   1.1 Historical Review
   1.2 Optimization Problem
   1.3 Modeling of the Optimization Problem
   1.4 Solution with the Graphical Method
   1.5 Convexity
   1.6 Gradient Vector, Directional Derivative, and Hessian Matrix
   1.7 Linear and Quadratic Approximations
   1.8 Organization of the Book
   Chapter Highlights; Formulae Chart; Problems
2. 1-D Optimization Algorithms
   2.1 Introduction
   2.2 Test Problem
   2.3 Solution Techniques
       2.3.1 Bisection Method
       2.3.2 Newton–Raphson Method
       2.3.3 Secant Method
       2.3.4 Cubic Polynomial Fit
       2.3.5 Golden Section Method
       2.3.6 Other Methods
   2.4 Comparison of Solution Methods
   Chapter Highlights; Formulae Chart; Problems
3. Unconstrained Optimization
   3.1 Introduction
   3.2 Unidirectional Search
   3.3 Test Problem
   3.4 Solution Techniques
       3.4.1 Steepest Descent Method
       3.4.2 Newton's Method
       3.4.3 Modified Newton's Method
       3.4.4 Levenberg–Marquardt Method
       3.4.5 Fletcher–Reeves Conjugate Gradient Method
       3.4.6 DFP Method
       3.4.7 BFGS Method
       3.4.8 Powell Method
       3.4.9 Nelder–Mead Algorithm
   3.5 Additional Test Functions
       3.5.1 Rosenbrock Function
       3.5.2 Quadratic Function
       3.5.3 Nonlinear Function
       3.5.4 Wood's Function
   3.6 Application to Robotics
   Chapter Highlights; Formulae Chart; Problems
4. Linear Programming
   4.1 Introduction
   4.2 Solution with the Graphical Method
   4.3 Standard Form of an LPP
   4.4 Basic Solution
   4.5 Simplex Method
       4.5.1 Multiple Solutions
       4.5.2 Degeneracy
       4.5.3 Two-Phase Method
       4.5.4 Dual Simplex Method
   4.6 Interior-Point Method
   4.7 Portfolio Optimization
   Chapter Highlights; Formulae Chart; Problems
5. Guided Random Search Methods
   5.1 Introduction
   5.2 Genetic Algorithms
       5.2.1 Initialize Population
       5.2.2 Fitness Evaluation
       5.2.3 Reproduction
       5.2.4 Crossover and Mutation
       5.2.5 Multimodal Test Functions
   5.3 Simulated Annealing
   5.4 Particle Swarm Optimization
   5.5 Other Methods
       5.5.1 Ant Colony Optimization
       5.5.2 Tabu Search
   Chapter Highlights; Formulae Chart; Problems
6. Constrained Optimization
   6.1 Introduction
   6.2 Optimality Conditions
   6.3 Solution Techniques
       6.3.1 Penalty Function Method
   6.4 Augmented Lagrange Multiplier Method
   6.5 Sequential Quadratic Programming
   6.6 Method of Feasible Directions
       6.6.1 Zoutendijk's Method
       6.6.2 Rosen's Gradient Projection Method
   6.7 Application to Structural Design
   Chapter Highlights; Formulae Chart; Problems
7. Multiobjective Optimization
   7.1 Introduction
   7.2 Weighted Sum Approach
   7.3 ε-Constraints Method
   7.4 Goal Programming
   7.5 Utility Function Method
   7.6 Application
   Chapter Highlights; Formulae Chart; Problems
8. Geometric Programming
   8.1 Introduction
   8.2 Unconstrained Problem
   8.3 Dual Problem
   8.4 Constrained Optimization
   8.5 Application
   Chapter Highlights; Formulae Chart; Problems
9. Multidisciplinary Design Optimization
   9.1 Introduction
   9.2 MDO Architecture
       9.2.1 Multidisciplinary Design Feasible
       9.2.2 Individual Discipline Feasible
       9.2.3 Simultaneous Analysis and Design
       9.2.4 Collaborative Optimization
       9.2.5 Concurrent Subspace Optimization
       9.2.6 Bilevel Integrated System Synthesis
   9.3 MDO Framework
   9.4 Response Surface Methodology
   Chapter Highlights; Formulae Chart; Problems
10. Integer Programming
   10.1 Introduction
   10.2 Integer Linear Programming
       10.2.1 Gomory's Cutting Plane Method
       10.2.2 Zero-One Problems
   10.3 Integer Nonlinear Programming
       10.3.1 Branch-and-Bound Method
       10.3.2 Evolutionary Method
   Chapter Highlights; Formulae Chart; Problems
11. Dynamic Programming
   11.1 Introduction
   11.2 Deterministic Dynamic Programming
   11.3 Probabilistic Dynamic Programming
   Chapter Highlights; Formula Chart; Problems
Bibliography
Appendix A: Introduction to MATLAB®
Appendix B: MATLAB® Code
Appendix C: Solutions to Chapter Problems
Index

Preface

There are numerous books on the subject of optimization, attributable to a number of reasons. First, the subject itself is mathematically rigorous, and there are a number of solution methods that need to be examined and understood. No single solution method can be applied to all types of optimization problems; thus a clear understanding of the problem, as well as of the solution techniques, is required to obtain a proper and meaningful solution to the optimization problem. With the progression of time, optimization problems have also become complex. It is necessary not only to obtain the global optimum solution, but to find local optima as well.
Today's problems are also of the multiobjective type, where conflicting objective functions are to be handled. There is also a need to simultaneously handle objective functions and constraints of different disciplines, resulting in multidisciplinary design optimization (MDO) problems that are handled using different architectures. Gradient-based methods were popular until the 1990s. At present, a large number of complex problems are solved using guided random search methods such as genetic algorithms, simulated annealing, and particle swarm optimization (PSO). Hybrid algorithms, which use a combination of gradient-based and stochastic methods, are also very popular. Different authors have addressed these issues separately, resulting in a number of books in this area. So how does this book differ from the others?

The solution techniques are detailed in such a way that more emphasis is given to the concepts; rigorous mathematical details and proofs are avoided. A method can be understood better if different parameters in the algorithm are plotted or printed over different iterations while solving a problem. This can be accomplished by writing a software code for the method or algorithm. It is often difficult for a newcomer to write a software code when only the algorithm, such as Broyden–Fletcher–Goldfarb–Shanno (BFGS) or PSO, is given to him or her. In this book, a step-by-step approach is followed in developing the software code from the algorithm. The codes are then applied to solve some standard functions taken from the literature. This creates understanding and confidence in handling different solution methods. The software codes are then suitably modified to solve some real-world problems. A few books on optimization have also followed this approach; however, the software code in those books is hard to correlate with the corresponding algorithms, and readers are forced to use them as black-box optimization tools. The codes presented in this book are user friendly in the sense that they can be easily understood. A number of practical problems are solved using these codes.

The codes are written in the MATLAB® environment, and the use of ready-made optimization routines available in MATLAB is avoided. The algorithms are developed right from computing the gradient or Hessian of a function up to complex algorithms such as those for solving a constrained optimization problem. MATLAB is a software package for technical computing that performs both computing and visualization with ease. It has a number of built-in functions that can be used in an individual's application. The main advantage of MATLAB is the ease with which readers can translate their ideas into an application.

The book covers both gradient and stochastic methods as solution techniques for unconstrained and constrained optimization problems. A separate chapter (Chapter 5) is devoted to stochastic methods, where genetic algorithms, PSO, simulated annealing, ant colony optimization, and tabu search methods are discussed. With simple modifications of the basic PSO code, one can also solve nonconvex multiobjective optimization problems. This is probably the first optimization book in which MDO architectures are introduced (Chapter 9). Software codes are also developed for the simplex method and the affine-scaling interior point method for solving linear programming problems.
Gomory's cutting plane method, the branch-and-bound method, and Balas' algorithm are also discussed in the chapter on integer programming (Chapter 10). A number of real-world problems are solved using the MATLAB codes given in the book. Some applications included in this book are a complex trajectory design problem of a robot (Chapter 3), a multiobjective shape optimization problem of a reentry body (Chapter 7), a portfolio optimization problem (Chapter 4), and so forth.

I thank my organization, Vikram Sarabhai Space Centre (a lead center of the Indian Space Research Organisation [ISRO]), for giving permission to publish this book. The book has been reviewed internally by Dr. Mohankumar D., Head, Computer Division; I thank him for his suggestions and corrections. I thank Mr. Pandian S., Deputy Director, VSSC, for supporting the idea to write this book. I am ever grateful to Prof. M. Seetharama Bhat from IISc, Bangalore, and Dr. Adimurthy V. for their support during the last ten years. I thank my colleagues Dr. Jayakumar K., Mr. Priyankar B., Mr. Sajan Daniel, and Mr. Amit Sachdeva for many hours of discussions on book-related aspects.

I am grateful to Taylor & Francis Group for agreeing to publish this book and agreeing to most of my suggestions. Much credit should be given to Ms. Aastha Sharma, Editor, for her prompt actions and follow-up with the reviewers. Thanks are also due to three anonymous reviewers for their critical remarks, corrections, and suggestions. I thank Mr. Sarfraz Khan, assistant to Ms. Aastha Sharma, for providing online support in signing the contract. I also thank Mr. David Fausel for coordinating and reviewing the style and art files of the book. My sincere thanks to Mr. Ed Curtis and Ms. Amor Nanas for language corrections, copyediting, and other production-related work. The cover page was designed by Mr. Kevin Craig.

I thank the MATLAB book program for supporting the idea of this book on optimization with MATLAB codes. They have also agreed to give wide publicity to this book on their website, for which I am grateful.

I thank my wife, Manju, and children, Abhinav and Aditi, for their patience during the last two years. In fact, my whole family (father, brothers, sister, and in-laws) is eagerly waiting for the launch of this book.

This is the first edition of this book, so some errors and omissions are expected. The MATLAB codes are validated with a number of test problems taken from the literature. It is still possible that some pathways in the codes were not exercised during this validation; as a result, no claim is made that these codes are bug-free. I request readers to report corrections and suggestions on this book at [email protected] or [email protected].

The MATLAB codes mentioned in this book can be downloaded from http://www.crcpress.com/product/isbn/9781498721127.

Rajesh Kumar Arora, PhD, MBA, FIE

MATLAB® is a registered trademark of The MathWorks, Inc. For product information, please contact: The MathWorks, Inc., 3 Apple Hill Drive, Natick, MA 01760-2098 USA. Tel: 508-647-7000; Fax: 508-647-7001; E-mail: [email protected]; Web: www.mathworks.com

Author

Rajesh Kumar Arora is a senior engineer with the Indian Space Research Organization (ISRO), where he has been working for more than two decades. He earned his PhD in aerospace engineering from the Indian Institute of Science, Bangalore. He has published a book titled Fundamentals of Aerospace Engineering.
His areas of research include mission design, simulation of launch vehicle systems, and trajectory optimization. He also has a master's degree in business administration and is a Fellow of the Institution of Engineers (India).

1 Introduction

1.1 Historical Review

Optimization means finding the best solution among the many feasible solutions that are available to us. Feasible solutions are those that satisfy all the constraints in the optimization problem. The best solution could be minimizing the cost of a process or maximizing the efficiency of a system. Some simple optimization problems that come to mind are machine allocation and diet problems. In the machine allocation problem, one has to find how jobs are to be allocated to different machines of different capacities and with different operating costs so as to meet the production target at minimum cost. In the diet problem, different food types are available with different nutritional contents at different costs. The aim is to estimate the quantities of the different foods so that the nutritional requirements of an individual are met at minimum cost.

Though rigorous mathematical analysis of optimization problems was carried out during the 20th century, the roots can be traced back to about 300 B.C., when the Greek mathematician Euclid evaluated the minimum distance between a point and a line. Another Greek mathematician, Zenodorus, showed in 200 B.C. that a figure bounded by a line that has a maximum area for a given perimeter is a semicircle. In the 17th century, Pierre de Fermat, a French mathematician, laid the foundation of calculus. He showed that the gradient of a function vanishes at a maximum or minimum point. Moving further along the timeline, Newton and Leibniz laid out the mathematical details of the calculus of variations, which deals with maxima or minima of functionals. The foundation of the calculus of variations is credited to Euler and Lagrange (in the 18th century), as they provided rigorous mathematical details on this topic. Subsequently, Gauss and Legendre developed the least squares method, which is extensively used even today. Cauchy used the steepest descent method to solve unconstrained optimization problems.

The first textbook on optimization was authored by Harris Hancock and published in 1917. In 1939, Leonid Kantorovich presented the linear programming (LP) model and an algorithm for solving it. A few years later, in 1947, George Dantzig presented the simplex method for solving LP problems. Kantorovich and Dantzig are regarded as pioneers who provided breakthroughs in the development of optimization techniques. The conditions for constrained optimization were brought together by Harold Kuhn and Albert Tucker in 1951, and also earlier by William Karush in 1939. Richard Bellman laid out the principles of dynamic programming, in which a complex problem is broken down into smaller subproblems. Ralph Gomory's contribution to the development of integer programming is worth noting; in this type of optimization problem, design variables can take integer values such as 0 and 1. With the advent of computers in the 1980s, many large-scale problems were subsequently solved. Present-day problems in the optimization area are of the multidisciplinary and multiobjective type.
The solution techniques employed today to solve complex optimization problems are not just gradient-based algorithms; they also include nontraditional methods such as genetic algorithms, ant colony optimization, and particle swarm optimization, which mimic natural processes.

Today, optimization methods are required to solve problems from all disciplines, whether economics, science, or engineering. As a result of stiff competition in virtually all disciplines, the role of optimization has become still more substantial as one aims to minimize the cost of a product or to allocate resources judiciously. A simple example from aerospace engineering proves this point. The cost of putting 1 kilogram of payload into low Earth orbit is typically about US$15,000. The fuel and structural weight of the different stages of the rocket strongly influence the payload mass, as does the trajectory of the rocket. Of course, one can reduce the structural weight of a stage only to the extent that it does not fail under aerodynamic and other loads. The optimization problem that aims at maximizing the payload mass is highly complex and requires algorithms that run on high-speed computers. Even if the optimization technique yields only a few extra kilograms of payload, it represents large revenue for the space agency.

In the next section, we introduce the optimization problem: design variables, constraints, and applications of optimization in different domains. Further in the chapter, the modeling aspects of a physical problem are explained; modeling converts the verbal problem into a mathematical form. The solution of simple optimization problems with up to two design variables is explained by the graphical method. The importance of convex functions in optimization is then explained. The chapter concludes with an introduction to the mathematical preliminaries of the gradient vector, Hessian matrix, directional derivative, and linear and quadratic approximations of a function. The road map of this chapter is given in Figure 1.1.

1.2 Optimization Problem

In an optimization problem, a function is to be maximized or minimized. The function being optimized is referred to as the objective function or the performance index. The function is a quantity such as cost, profit, efficiency, size, shape, weight, output, and so on. It goes without saying that cost minimization or profit maximization is a prime consideration for most organizations. Certain types of equipment, such as air conditioners or refrigerators, are designed with different optimization criteria to have higher efficiency, in terms of reducing the energy consumption of the user. However, this higher efficiency evidently comes at a higher cost to the user. Weight minimization is a prime consideration for aerospace applications.

The variables in the objective function are called the design variables or decision variables. For a structural optimization problem, these could typically be the dimensions of the structure or its material attributes. From practical considerations, design variables can take values only between a lower and an upper limit. For instance, the maximum capacity of a machine is limited to a certain value. The design variables can be of real, discrete, binary, or integer type. Though a majority of the design variables in optimization problems are real, some variables can also be discrete.
For example, pipe sizes come in standard numbers such as 1, 2, or 5 inches. If pipe size is used as a design variable in an optimization problem, it has to be treated as discrete only. There is no point in selecting it as a real number and getting a solution such as, say, 3.25 inches, a pipe dimension that does not actually exist. See Table 1.1 for some typical objective functions and design variables for optimization problems from different disciplines.

FIGURE 1.1 Road map of Chapter 1 (optimization problem; modeling of the problem; solution with the graphical method; convexity; gradient vector, Hessian matrix, and directional derivative; linear and quadratic approximation; book layout).

The optimization problem can be expressed mathematically as follows.

Minimize
    f(x)                                        (1.1)
subject to
    g_i(x) <= 0,  i = 1, 2, ..., m < n          (1.2)
    h_j(x) = 0,   j = 1, 2, ..., r < n          (1.3)
    x_l <= x <= x_u

where x is a vector of n design variables given by

    x = (x_1, x_2, ..., x_n)^T

The functions f, g_i, and h_j are all differentiable. The design variables are bounded by x_l and x_u. The constraints g_i are called inequality constraints, and the h_j are called equality constraints.

TABLE 1.1 Typical Optimization Problems

Discipline     Design Variables                          Objective Function
Manufacturing  Productivity from different machines      Minimize cost
Corporate      Different capitals from projects          Maximize the net present value
Airline        Different aircraft, different routes      Maximize the profit
Aerospace      Propellant fraction in different stages   Maximize the payload
Agriculture    Different crops                           Maximize the yield
Biology        Gene interaction                          Network stability
Electronics    Size of the devices                       Minimize the power consumption
Portfolio      Investment in stocks/bonds                Maximize the return
Thermal        Dimensions and material properties        Minimize the heat load

An example in the aerospace industry is to restrict the dimensions of a spacecraft so that it can be accommodated inside the payload fairing of a rocket. Such restrictions are the constraints of the optimization problem. The constraints are functions of the design variables. In addition, the number of equality or inequality constraints is lower than the number of design variables (n). If the design variables satisfy all the constraints, they represent a feasible set, and any element of the set is called a feasible point. The design variables at which the minimum of f(x) is reached are denoted by x*. If the optimization problem does not have any constraints, it is referred to as an unconstrained optimization problem. If the objective function and constraints are linear functions in x, the optimization problem is termed a linear programming problem (LPP).

1.3 Modeling of the Optimization Problem

Modeling refers to expressing observations about a problem in mathematical form using the basic building blocks of mathematics: addition, subtraction, multiplication, division, functions, and numbers with proper units. Observations refer to data obtained for the problem at hand by varying certain parameters of the problem through experiments. Further, mathematical models provide predictions of the behavior of the problem for different inputs. If the model does not yield the expected results, it has to be refined by conducting further experiments. The mathematical model is not unique across problems, as observed data can be discrete (defined at select intervals) or continuous and can vary in different fashions (say, linear or quadratic) with change in the input parameters.
Some simple mathematical models of different physical phenomena are presented next. The pressure (P), volume (V), and temperature (T) relationship of a gas is given by Boyle's law as

    PV = kT    (1.4)

where k is a constant. Using this mathematical model, the behavior of a gas (say, its pressure) can be predicted for different input parameters (say, temperature), keeping the volume of the gas constant.

An example from economics could be constructing a mathematical model for the demand–supply problem. The price of a product is to be calculated so as to maximize the profit. It is well known that if the price of the merchandise is kept high, profit per unit will increase, but demand for the product may be low. Likewise, if the price of the product is kept low, profit per unit will decrease, but demand for the product may be higher. Typically, demand (D) varies with price (P) as

\[ D = c_1 + \frac{c_2}{P^2} \tag{1.5} \]

where c_1 and c_2 are constants.

Some problems can be written mathematically in differential equation form. A differential equation contains an unknown function and its derivatives. As the derivative represents the rate of change of a function, the differential equation represents a continuously varying quantity and its rate of change. For example, the temperature change (with respect to time) of an object is proportional to the difference between the temperature (T) of the object and that of its surroundings (T_s), and can be represented in differential equation form as

\[ \frac{dT}{dt} = -k\,(T - T_s) \tag{1.6} \]

Equation 1.6 is also referred to as Newton's law of cooling. The solution of a differential equation is a function that satisfies the differential equation for all values of the independent variable in the domain. As the name suggests, independent variables are changed during an experiment, and the dependent variable responds to this, depending on the type of experiment being conducted. A differential equation can have many solutions (referred to as the general solution); a particular solution is one such solution. Often, a differential equation has a closed-form solution. For example, the solution of the differential equation representing Newton's law of cooling is

\[ T(t) = T_s + (T_o - T_s)\,e^{-kt} \tag{1.7} \]

Not all problems have closed-form solutions, and such problems have to be simulated numerically to arrive at the solutions.

Therefore, using modeling, one can construct the objective function as well as the constraint functions for the optimization problem. One can then use different optimization techniques for solving such problems. The following examples illustrate how to formulate an optimization problem by constructing the objective and constraint functions.

Example 1.1
In a diet problem, an individual has to meet his daily nutritional requirements from a menu of available foods at a minimum cost. The available food items are milk, juice, fish, fries, and chicken. The nutrient requirements to be met are for proteins, vitamins, calcium, calories, and carbohydrates. Table 1.2 shows the cost in dollars of the food items per serving; the nutrient values per serving are shown in rows against their names (such as proteins, vitamins, etc.); and the last column indicates the minimum daily requirements of the nutrients. Construct the objective function and the constraints for this optimization problem.

The first step is to select the design variables for the problem. It appears obvious to select the quantities of the food items, such as fish, fries, and so on, as the design variables.
Let us represent the design variables by x_1, x_2, x_3, x_4, and x_5 for the quantities of milk, juice, fish, fries, and chicken, respectively. As discussed earlier, the objective function and constraints are functions of these design variables. In this particular problem, the objective function is to minimize the cost of the food items purchased. If x_3 is the quantity of fish ordered and $2 is its unit price, then the total cost of the fish item is 2x_3. In a similar way, we can evaluate the cost of the other items such as milk, juice, and so on. Hence the total cost of the food items is

    1.1x_1 + 1.2x_2 + 2x_3 + 1.3x_4 + 3x_5

Note that the cost function, or objective function, is linear; that is, x_1 is not dependent on x_2 or any other variable. Having defined the objective function, let us define the constraints for the problem. The problem clearly states that the nutritional needs of the individual have to be met. For example, a minimum protein requirement of 60 units is to be met. Similarly, minimum requirements of the other nutrients, such as vitamins, calcium, and so forth, are also to be met. Now we can write the first constraint as

    8x_1 + 2x_2 + 15x_3 + 4x_4 + 30x_5 >= 60    (1.8)

Note that this constraint is an inequality. In a similar fashion, we can write the other constraints. We are now ready to write the objective function and constraints for the diet problem.

TABLE 1.2 Data for the Diet Problem

               Milk   Juice   Fish   Fries   Chicken   Required
Cost            1.1    1.2     2.0    1.3     3.0
Proteins        8      2       15     4       30          60
Vitamins        9      3        3     1        9         100
Calcium        35      3       17     1       16         120
Calories      100     90      350   200      410        2100
Carbohydrates  10     20       40    25       40         400

Minimize
    1.1x_1 + 1.2x_2 + 2x_3 + 1.3x_4 + 3x_5          (1.9)
subject to
    8x_1 + 2x_2 + 15x_3 + 4x_4 + 30x_5 >= 60        (1.10)
    9x_1 + 3x_2 + 3x_3 + x_4 + 9x_5 >= 100          (1.11)
    35x_1 + 3x_2 + 17x_3 + x_4 + 16x_5 >= 120       (1.12)
    100x_1 + 90x_2 + 350x_3 + 200x_4 + 410x_5 >= 2100   (1.13)
    10x_1 + 20x_2 + 40x_3 + 25x_4 + 40x_5 >= 400    (1.14)

Once the optimization problem is defined, one has to use standard optimization techniques to evaluate the design variables x_1, x_2, x_3, x_4, and x_5. These methods are described in later chapters; in this chapter, we focus on the formulation of the optimization problem.

Example 1.2
A soft drink manufacturer needs to produce a cylindrical can that can hold 330 mL of a soft drink. He wants to choose the dimensions of the container so that the amount of material used in its construction is minimized. Formulate the optimization problem by writing down the objective function and the constraint.

The design variables for the optimization problem are the radius and the height of the can. Let these variables be denoted by x_1 and x_2, with units in millimeters (Figure 1.2, a cylindrical can of radius x_1 and height x_2). The can consists of a curved portion and two circular ends. The area of the curved portion is 2πx_1x_2, and the area of the two circular lids is 2πx_1^2. Hence, the total area that needs to be minimized is 2πx_1x_2 + 2πx_1^2. The volume of the can is πx_1^2 x_2, and this volume is to be limited to 330 mL, or 330,000 mm^3. Now we are ready to formulate the optimization problem.

Minimize
    2πx_1x_2 + 2πx_1^2      (1.15)
subject to
    πx_1^2 x_2 = 330,000    (1.16)

Note that in this optimization problem, the constraint is an equality.
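Although solution methods are the subject of later chapters, this formulation can be checked numerically right away. The following sketch is not one of the book's codes: it uses the equality constraint to eliminate x_2 and then minimizes the resulting one-variable area with MATLAB's built-in fminbnd (the book itself deliberately avoids ready-made routines); the search bracket of 1 to 100 mm for the radius is an assumption.

    % Example 1.2 check: the equality constraint gives x2 = V/(pi*x1^2),
    % so the area becomes a function of the radius x1 alone.
    V    = 330000;                                      % required volume (mm^3)
    area = @(x1) 2*pi*x1.*(V./(pi*x1.^2)) + 2*pi*x1.^2; % curved part + two lids
    [x1s, As] = fminbnd(area, 1, 100);                  % assumed bracket (mm)
    x2s = V/(pi*x1s^2);                                 % recover the height
    fprintf('x1 = %.2f mm, x2 = %.2f mm, area = %.0f mm^2\n', x1s, x2s, As);

At the minimizer, x_2 = 2x_1 (the height equals the diameter); Section 1.4 arrives at the same point graphically.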
Example 1.3
The shape of a reentry body is a spherical nose, a conical body, and a flared bottom (see Figure 1.3, a reentry capsule with a spherical nose of radius R_n, a conical body of length l_1 and angle θ_1, and a flare of length l_2 and angle θ_2). The design variables through which the configuration of the reentry body can be altered are the nose radius (R_n), cone length (l_1), cone angle (θ_1), flare length (l_2), and flare angle (θ_2). By varying the design variables, the area (A) of the reentry capsule is to be minimized. As the reentry capsule has to house electronic packages and other instruments, it must have a certain minimum volume (V), which is specified as 1 m^3. The design variables are bounded between minimum and maximum values: R_n can take a value between 0.4 and 0.6 m, l_1 and l_2 can take values between 0.4 and 0.8 m, θ_1 can take a value between 22 and 27 degrees, and θ_2 can take a value between (θ_1 + 5) and (θ_1 + 10) degrees. Formulate the shape optimization problem of the reentry capsule.

The total surface area and volume of the capsule are computed using the equations

\[ A = 2\pi R_n^2 (1 - \sin\theta_1) + \pi (R_1 + R_2)\sqrt{l_1^2 + (R_2 - R_1)^2} + \pi (R_2 + R_B)\sqrt{l_2^2 + (R_B - R_2)^2} + \pi R_B^2 \tag{1.17} \]

\[ V = \frac{\pi}{6} R_n (1 - \sin\theta_1)\left[ 3R_1^2 + R_n^2 (1 - \sin\theta_1)^2 \right] + \frac{\pi l_1}{3}\left( R_1^2 + R_1 R_2 + R_2^2 \right) + \frac{\pi l_2}{3}\left( R_2^2 + R_2 R_B + R_B^2 \right) \tag{1.18} \]

where

    R_1 = R_n cos θ_1                 (1.19)
    R_2 = R_n cos θ_1 + l_1 tan θ_1   (1.20)
    R_B = R_2 + l_2 tan θ_2           (1.21)

The optimization problem can now be written as

Minimize
\[ 2\pi R_n^2 (1 - \sin\theta_1) + \pi (R_1 + R_2)\sqrt{l_1^2 + (R_2 - R_1)^2} + \pi (R_2 + R_B)\sqrt{l_2^2 + (R_B - R_2)^2} + \pi R_B^2 \tag{1.22} \]
subject to
\[ \frac{\pi}{6} R_n (1 - \sin\theta_1)\left[ 3R_1^2 + R_n^2 (1 - \sin\theta_1)^2 \right] + \frac{\pi l_1}{3}\left( R_1^2 + R_1 R_2 + R_2^2 \right) + \frac{\pi l_2}{3}\left( R_2^2 + R_2 R_B + R_B^2 \right) \ge 1 \tag{1.23} \]

    0.4 < R_n < 0.6            (1.24)
    22 < θ_1 < 27              (1.25)
    θ_1 + 5 < θ_2 < θ_1 + 10   (1.26)
    0.4 < l_1 < 0.8            (1.27)
    0.4 < l_2 < 0.8            (1.28)

Example 1.4
It is required to find the optimum diameter (d) of a solid steel shaft whose mass (M) is to be minimized while its first cantilever frequency has to be greater than 20 Hz. Formulate this as an optimization problem by writing down the objective function and the constraint.

If L is the length of the rod (Figure 1.4, a cantilever rod of length L and diameter d) and ρ is its density, then the mass of the rod is given by

\[ M = \rho\,\frac{\pi d^2}{4}\,L \tag{1.29} \]

For this problem, L = 1 m and ρ = 7800 kg/m^3. The first cantilever frequency is given by

\[ f_1 = \frac{3.5156}{2\pi L^2}\sqrt{\frac{EI}{k}} \tag{1.30} \]

where E is the Young's modulus of steel, with a value of 2 × 10^11 N/m^2, and k is the mass per unit length. The moment of inertia I of the rod is given by

\[ I = \frac{\pi d^4}{64} \tag{1.31} \]

The optimization problem can be written as follows.

Minimize
\[ \rho\,\frac{\pi d^2}{4}\,L \tag{1.32} \]
subject to
\[ \frac{3.5156}{2\pi L^2}\sqrt{\frac{EI}{k}} \ge 20 \tag{1.33} \]

1.4 Solution with the Graphical Method

Having formulated the optimization problems in the previous section, readers will be tempted to obtain solutions to these problems. The graphical method is a simple technique for locating the optimal solution of problems with up to two or three design variables; beyond three variables, representing the optimization problem through graphs becomes complex. The optimization problem in Example 1.2 requires two design variables, x_1 and x_2, to be evaluated such that the total surface area is minimized while the equality constraint is satisfied. A MATLAB code, graph_examp12.m, given at the end of this book, is used for drawing the graph for this problem. For a quick introduction to MATLAB, see Appendix A.
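The listing of graph_examp12.m appears in Appendix B. Before walking through its output, here is a minimal sketch of the idea (the grid spacing and plotting details here are assumptions made for illustration, not the book's exact listing): evaluate the objective and constraint on a grid and overlay their contours.

    % Sketch of the graphical method for Example 1.2.
    [x1, x2] = meshgrid(1:100, 1:200);        % radius and height grids (mm)
    A = 2*pi*x1.*x2 + 2*pi*x1.^2;             % objective: surface area (mm^2)
    g = pi*x1.^2.*x2 - 330000;                % equality constraint g = 0
    contour(x1, x2, A, [15000 26436 50000 70000 90000]); hold on
    contour(x1, x2, g, [0 0], 'k', 'LineWidth', 2);   % constraint curve
    xlabel('x_1 (mm)'); ylabel('x_2 (mm)');
    % The optimum lies where the lowest-valued objective contour just
    % touches the constraint curve.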
The variable x_1 is varied from 1 to 100 mm and the variable x_2 from 1 to 200 mm in the code. The surface area is calculated from the values of x_1 and x_2, and contours of the objective function are plotted (Figure 1.5) for different values of x_1 and x_2. The constraint function is then plotted (marked with *). Because this is an equality-constrained optimization problem, the minimum of the objective function lies on the contour curve that touches the constraint curve and has the lowest value. The minimum value of the objective function is 26,436 mm^2, corresponding to the design variables x_1 = 37.45 mm and x_2 = 74.9 mm. Note that the height of the can is two times its radius at the minimum point; this can be proved analytically using elementary calculus.

FIGURE 1.5 Function contours for the optimization problem in Example 1.2 (objective contours at 15,000, 26,436, 50,000, 70,000, and 90,000 mm^2, with the constraint curve touching the 26,436 contour).

Similarly, the optimization problem in Example 1.4 has only one design variable, the diameter d of the rod. Again, we can use the graphical method to solve this problem. A MATLAB code, graph_examp14.m, is written for this problem. On executing the code, the output is a graph (Figure 1.6). The value of the objective function (along the y-axis) decreases with a reduction in the value of the design variable (along the x-axis). However, the constraint value (also plotted along the y-axis) also decreases with a reduction in the value of the design variable. In the optimization problem, it is given that the constraint should have a value equal to or greater than 20 Hz. Hence the optimum solution corresponds to d = 0.0283 m, where the value of the objective function (the mass) is 4.94 kg and the constraint value is 20 Hz.

FIGURE 1.6 Objective function and constraint plot for the problem in Example 1.4 (mass in kg and first cantilever frequency in Hz versus d in m; optimum at (0.0283, 4.94)).

1.5 Convexity

Consider two design points, x_1 and x_2, that belong to a set S. If the line joining these two points also lies within the set S, then S is a convex set; if it does not, then S is a nonconvex set. See Figure 1.7 for convex and nonconvex sets. In optimization, we often have to check a function for convexity. Consider a single-variable function f(x), as shown in Figure 1.8, and two points x_1 and x_2 at which the values of the function are f(x_1) and f(x_2), respectively. Consider any point x̃ on the line joining x_1 and x_2. If f(x̃) is less than or equal to the value at the corresponding point on the chord joining f(x_1) and f(x_2), then f(x) is a convex function; that is, for convexity,

    f(x̃) <= f(x̄)    (1.34)

where f(x̄) denotes the value on the chord corresponding to x̃. Examples of convex functions are x^2, e^x, and so on. If f(x) is a convex function, then e^{f(x)} is also a convex function; hence e^{x^2} and e^{e^x} are also convex functions. Let us plot these functions (Figure 1.9) in MATLAB (convexity.m) to show that they are indeed convex.

The concept of convexity is important in declaring that a function has one minimum only: a convex function has a global minimum. If a function is nonconvex, the optimum reached might be a local one (see Figure 1.10).
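The chord test in Equation 1.34 can also be checked numerically. The sketch below is an assumed illustration (it is not the book's convexity.m, which plots the functions): it samples points on the segment joining two arbitrary x values and verifies that the function lies on or below the chord.

    % Numerical chord test for convexity (Equation 1.34) for f(x) = x^2.
    f  = @(x) x.^2;                        % function under test
    xa = -1.5;  xb = 2.0;                  % assumed pair of test points
    lam   = linspace(0, 1, 101);           % sampling of the segment
    xmid  = lam*xa + (1 - lam)*xb;         % points between xa and xb
    chord = lam*f(xa) + (1 - lam)*f(xb);   % corresponding chord values
    all(f(xmid) <= chord + 1e-12)          % returns 1 (true) for a convex f

Passing the test for many random pairs (xa, xb) is numerical evidence of convexity, not a proof; a failure for any single pair, however, does prove nonconvexity.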
Nonconvex functions with more than one minimum or maximum are referred to as multimodal functions. Traditional gradient-based algorithms have difficulty in locating a global optimum solution. In addition, a designer often has to look for an alternative to the global optimum because of the presence of constraints. For example, at the global optimum the design variables may be such that the product is difficult to manufacture, or the particular material may be very costly. The task of the designer is thus twofold: not only to find the global optimum solution, but also to locate local optimum solutions.

FIGURE 1.7 Convex and nonconvex sets.
FIGURE 1.8 Convex function (the chord joining f(x_1) and f(x_2) lies above the function).

If f(x) is a convex function, then −f(x) is a concave function. Similarly, if f(x) is a concave function, then −f(x) is a convex function. Figure 1.11 shows both convex and concave functions for y = e^x. Typically, optimization algorithms are developed to minimize the objective function; as discussed earlier, convexity plays an important role for functions whose minima are to be located. However, there are optimization problems in which one needs to maximize the objective function f(x). A maximization problem can be converted to the minimization type by changing the sign of the objective function to −f(x). Mathematically,

    Maximize f(x) is the same as Minimize −f(x)

FIGURE 1.9 Examples of convex functions (y = x^2, y = e^x, y = e^{x^2}, and y = e^{e^x}).
FIGURE 1.10 Local and global optima for a nonconvex function.

1.6 Gradient Vector, Directional Derivative, and Hessian Matrix

The derivative or gradient of a function f(x) at a point x, generally denoted by f′(x), is the slope of the tangent at that point (see Figure 1.12). That is,

    f′(x) = tan θ    (1.35)

where θ is the angle the tangent makes with the horizontal. Along the gradient direction there is maximum change in the value of the function; thus, gradient information provides the necessary search direction for locating the maximum or minimum of the function.

FIGURE 1.11 Concave and convex functions.

In most optimization problems, which are generally nonlinear, f′(x) has to be evaluated numerically. We can use the forward difference, backward difference, and central difference methods to find the derivative of a function at a point. If the value of a function f(x) is known at a point x, then the value of the function at a neighboring point x + Δx can be computed using Taylor's series as

\[ f(x + \Delta x) = f(x) + \Delta x\, f'(x) + \frac{(\Delta x)^2}{2!} f''(x) + \frac{(\Delta x)^3}{3!} f'''(x) + \cdots \tag{1.36} \]

Rearranging Equation 1.36 gives

\[ \frac{f(x + \Delta x) - f(x)}{\Delta x} = f'(x) + \frac{\Delta x}{2!} f''(x) + \frac{(\Delta x)^2}{3!} f'''(x) + \cdots \tag{1.37} \]

The forward difference formula for evaluating the derivative of a function can therefore be written as

\[ f'(x) = \frac{f(x + \Delta x) - f(x)}{\Delta x} + O(\Delta x) \tag{1.38} \]

The term O(Δx) indicates that this formula is first-order accurate. In a similar fashion, the backward difference formula can be written as

\[ f'(x) = \frac{f(x) - f(x - \Delta x)}{\Delta x} + O(\Delta x) \tag{1.39} \]

FIGURE 1.12 Concept of derivative (the tangent to f(x) at a point).
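As a quick numerical check of the first-order accuracy claimed in Equations 1.38 and 1.39, the following sketch (an assumed example, separate from the book's derivative.m) estimates f′(1) for f(x) = x^3, where the exact value is 3; reducing Δx by a factor of ten reduces the error by about a factor of ten.

    % Forward and backward difference estimates of f'(x) for f(x) = x^3 at x = 1.
    f = @(x) x.^3;                        % test function, f'(1) = 3 exactly
    x = 1;
    for dx = [1e-1 1e-2 1e-3]
        fwd = (f(x + dx) - f(x))/dx;      % Equation 1.38, error O(dx)
        bwd = (f(x) - f(x - dx))/dx;      % Equation 1.39, error O(dx)
        fprintf('dx = %g: forward error = %g, backward error = %g\n', ...
                dx, fwd - 3, bwd - 3);
    end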
Using the forward and backward difference formulas, one can derive the central difference formula

\[ f'(x) = \frac{f(x + \Delta x) - f(x - \Delta x)}{2\,\Delta x} + O\big((\Delta x)^2\big) \tag{1.40} \]

Because the central difference formula for computing the derivative is second-order accurate, it is more accurate than the forward or backward difference method. Similarly, the second derivative can be evaluated using the equation

\[ f''(x) = \frac{f(x + \Delta x) - 2f(x) + f(x - \Delta x)}{(\Delta x)^2} \tag{1.41} \]

Let us take the function

\[ f(x) = 2\sin 5x + 3x^3 - 2x^2 + 3x - 5 \tag{1.42} \]

and compute its first and second derivatives using the central difference formulas for x ranging from 0.1 to 1.0 with Δx = 0.01. A MATLAB code, derivative.m, is written, and the output is plotted in Figure 1.13. The top plot in Figure 1.13 shows f(x) varying with x. Note that the function has one maximum and one minimum; these points are marked with *. The derivative of the function is shown in the second plot. Note that f′(x) = 0 at both the maximum and the minimum of the function. From the third plot, observe that f″(x) ≥ 0 at the minimum and f″(x) ≤ 0 at the maximum of the function. The second derivative thus provides curvature information about the function.

FIGURE 1.13 Plot of a function with its first and second derivatives.

For certain functions, such as f(x) = x^3, both f′(x) = 0 and f″(x) = 0 at x* = 0. In such instances, one has to look at higher-order derivatives. Here f‴(x) = 6, which is nonzero. If the order of the first nonzero higher-order derivative is denoted by n, then x* is an inflection point (or saddle point) if n is odd, and x* is a local optimum if n is even. Therefore, x* is an inflection point for the function f(x) = x^3, as the first nonzero higher-order derivative is of odd order (the third derivative). Similarly, it can be shown that the function f(x) = x^4 has a local minimum at x* = 0. These two functions are plotted in Figure 1.14.

FIGURE 1.14 Saddle point (y = x^3) and local minimum (y = x^4) functions.

So far we have considered the derivative of a function of one variable only. The gradient is a vector containing the partial derivatives of the function with respect to the design variables (x_1, x_2, ..., x_n) and is written mathematically as

\[ \nabla f = \begin{pmatrix} \dfrac{\partial f}{\partial x_1} \\[2mm] \dfrac{\partial f}{\partial x_2} \\ \vdots \\ \dfrac{\partial f}{\partial x_n} \end{pmatrix} \tag{1.43} \]

Let us plot a tangent and gradient at a given point (x_1, x_2) on the function contours for Example 1.2. For a single-variable function, we observed that the tangent at any point and the gradient are the same (Figure 1.12). For a two-variable case, however, the tangent to each function contour is different, and the value of the function remains the same along a tangent; that is, along a tangent,

\[ \Delta f = \frac{\partial f}{\partial x_1}\,\Delta x_1 + \frac{\partial f}{\partial x_2}\,\Delta x_2 = 0 \tag{1.44} \]

The gradient is normal to the tangent. A MATLAB code, grad.m, is written; on execution it gives the output shown in Figure 1.15. On the function contour with a value of 15,000, a point (25, 70.493) is located at which we wish to plot the tangent and the gradient. Using Equation 1.44, we can write

\[ \Delta x_2 = -\frac{\partial f/\partial x_1}{\partial f/\partial x_2}\,\Delta x_1 \tag{1.45} \]

Using the incremental Equation 1.45, a tangent can be drawn at the point (25, 70.493).
If the slope of the tangent is given by m_t, then the slope of the gradient m_g can be computed from the relation

\[ m_g\, m_t = -1 \tag{1.46} \]

FIGURE 1.15 Tangent and gradient for the objective function given in Example 1.2.

Consider three functions, f_1(x_1, x_2, x_3), f_2(x_1, x_2, x_3), and f_3(x_1, x_2, x_3), of the three variables x_1, x_2, and x_3. The gradients of these functions can be put in a single matrix, referred to as the Jacobian J, expressed in mathematical form as

\[ [J] = \begin{bmatrix} \dfrac{\partial f_1}{\partial x_1} & \dfrac{\partial f_1}{\partial x_2} & \dfrac{\partial f_1}{\partial x_3} \\[2mm] \dfrac{\partial f_2}{\partial x_1} & \dfrac{\partial f_2}{\partial x_2} & \dfrac{\partial f_2}{\partial x_3} \\[2mm] \dfrac{\partial f_3}{\partial x_1} & \dfrac{\partial f_3}{\partial x_2} & \dfrac{\partial f_3}{\partial x_3} \end{bmatrix} \tag{1.47} \]

For constrained optimization problems, it is possible that moving in the gradient direction leads into the infeasible region. In such an instance, one wishes to move in some other search direction and would like to know the rate of change of the function in that direction. The directional derivative provides the instantaneous rate of change of a function in a particular direction: if u is a unit vector, the directional derivative of a function f(x) in the direction of u is given by

    ∇f(x)^T u

The Hessian matrix H represents the second derivative of a function of more than one variable. For a function f(x_1, x_2, x_3) of three variables, the Hessian matrix is written as

\[ [H] = \begin{bmatrix} \dfrac{\partial^2 f}{\partial x_1^2} & \dfrac{\partial^2 f}{\partial x_1 \partial x_2} & \dfrac{\partial^2 f}{\partial x_1 \partial x_3} \\[2mm] \dfrac{\partial^2 f}{\partial x_2 \partial x_1} & \dfrac{\partial^2 f}{\partial x_2^2} & \dfrac{\partial^2 f}{\partial x_2 \partial x_3} \\[2mm] \dfrac{\partial^2 f}{\partial x_3 \partial x_1} & \dfrac{\partial^2 f}{\partial x_3 \partial x_2} & \dfrac{\partial^2 f}{\partial x_3^2} \end{bmatrix} \tag{1.48} \]

The Hessian matrix should be positive definite at the minimum of the function. A matrix is positive definite if its eigenvalues are positive. For a square matrix, there exists a nonzero vector such that when multiplied with the square matrix it results in a scalar multiple of that vector; the scalar is called an eigenvalue of the matrix, and the vector an eigenvector.
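The pieces of this section combine naturally into a small numerical routine. The sketch below is an assumed illustration (not the book's Appendix B code): it builds the gradient from the central difference formula (Equation 1.40) applied to each variable, the diagonal of the Hessian from Equation 1.41, the mixed second derivatives from a standard central-difference stencil, and then checks positive definiteness through the eigenvalues. The quadratic test function is also an assumption.

    % Central-difference gradient and Hessian, with an eigenvalue check.
    f  = @(x) x(1)^2 + 3*x(2)^2 + 2*x(1)*x(2);   % assumed test function
    x0 = [1; 1];  h = 1e-4;  n = numel(x0);
    g  = zeros(n, 1);  H = zeros(n);
    for i = 1:n
        e = zeros(n, 1);  e(i) = h;
        g(i)   = (f(x0 + e) - f(x0 - e))/(2*h);           % Equation 1.40
        H(i,i) = (f(x0 + e) - 2*f(x0) + f(x0 - e))/h^2;   % Equation 1.41
    end
    for i = 1:n                                   % mixed second derivatives
        for j = i+1:n
            ei = zeros(n,1); ei(i) = h;  ej = zeros(n,1); ej(j) = h;
            H(i,j) = (f(x0+ei+ej) - f(x0+ei-ej) ...
                    - f(x0-ei+ej) + f(x0-ei-ej))/(4*h^2);
            H(j,i) = H(i,j);                      % the Hessian is symmetric
        end
    end
    disp(g.')              % gradient at x0: [4 8]
    disp(all(eig(H) > 0))  % 1: H = [2 2; 2 6] is positive definite here

For this quadratic, the finite-difference Hessian matches the exact [2 2; 2 6] to rounding; the positive eigenvalues confirm a convex function with a single minimum, connecting back to Section 1.5.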