Sudan Academy of Sciences Engineering Research and Industrial Technologies Council
TITLE:
DESIGN OPTIMIZATION OF DISTRIBUTION TRANSFORMER USING MATLAB
(Application, Comparative Study)
A Thesis Submitted as Partial Fulfillment of the Requirements for the Degree of M.Sc. in Mechanical Engineering Design
Prepared By:
AHMED HASSAN AHMED HASSAN
Supervised By:
Dr. EL-KHAWAD ALI EL-FAKI AHMED
MAY, 2013
بسم الله الرحمن الرحيم
Sudan Academy for Science
Engineering Research and Industrial Technologies Council
TITLE:
DESIGN OPTIMIZATION OF DISTRIBUTION TRANSFORMER USING MATLAB (Application, Comparative Study)
Prepared By:
Ahmed Hassan Ahmed Hassan
Supervisor: Dr. EL-KHAWAD ALI EL-FAKI AHMED
MAY, 2013
Wisdom
“Any intelligent fool can make things bigger and more complex... It takes a touch of genius - and a lot of courage - to move in the opposite direction.”
Albert Einstein
Dedication
TO MY PARENTS who waited long for this
AND
TO MY NATION who is waiting for more of this
Acknowledgements
I would like to express my thanks and gratitude to Allah, the Most Beneficent, the Most Merciful, who granted me the ability and will to start and complete this thesis. Acknowledgement is due to the Sudan Academy for Science for supporting this research. I would like to express my deepest sense of gratitude to my supervisor, Dr. El-Khawad Ali El-Faki Ahmed, for his patient guidance, encouragement and excellent advice throughout this study. I gratefully acknowledge the support of SUDATRAF (Sudanese Egyptian Electrical Industry) for the helpful data provided. My special gratitude goes to Mr. Nezar Salih, who taught me a lot about transformer design.
Table of Contents
Wisdom ... I
Dedication ... II
Acknowledgements ... III
Table of Contents ... IV
List of Tables ... VI
List of Figures ... VII
Thesis Abstract ... VIII
1. Chapter 1: Introduction ... 1
  1-1 Problem Description ... 1
  1-2 Problem Importance ... 1
  1-3 Scope of the Study ... 2
  1-4 Objectives ... 2
  1-5 Research Layout ... 3
2. Chapter 2: Theoretical Background and Previous Studies ... 4
  2-1 Transformer Basics ... 4
  2-2 Distribution Transformer Construction ... 6
  2-3 Transformer Design Formulae ... 12
  2-4 Allowed Losses and Impedances (Design Constraints) ... 18
  2-5 Transformer Design Procedure – As Currently Applied ... 18
  2-6 Literature Survey on Transformer Design Optimization ... 19
3. Chapter 3: Optimization Overview ... 23
  3-1 Optimization Definition ... 23
  3-2 Statement of an Optimization Problem ... 23
  3-3 Classification of Optimization Problems ... 25
  3-4 Techniques of Optimization ... 27
  3-5 Brute Force Search ... 33
  3-6 Optimization Tools in MATLAB ... 36
4. Chapter 4: Design Formulae ... 55
  4-1 Overview ... 55
  4-2 Problem A ... 55
  4-3 Problems B, C and D ... 57
  4-4 Notes ... 59
5. Chapter 5: Methodology ... 60
  5-1 Methodology Overview ... 60
  5-2 Featuring Models of Problems to be Solved ... 60
  5-3 Selection of Optimization Algorithms ... 63
  5-4 Applicability of Methods on Research Problems ... 65
  5-5 Brute Force Search Algorithm ... 65
  5-6 Genetic Algorithm ... 71
  5-7 Pattern Search Algorithm ... 72
  5-8 Simulated Annealing ... 72
  5-9 Combined Methods ... 72
  5-10 TOMLAB Optimization Toolbox ... 73
6. Chapter 6: Methods Implementation; Results and Discussion ... 74
  6-1 Problem A ... 74
  6-2 Problem D ... 90
  6-3 General Results Summary ... 97
  6-4 Manual Search Approach Design ... 98
  6-5 Additional Improvements of the Created Application ... 100
7. Chapter 7: Conclusion and Future Work ... 109
  7-1 Conclusion ... 109
  7-2 Future Work ... 109
References ... 110
Appendix A: M-files ... 111
  A-1 Problem A M-files
  A-2 Problem D M-files
List of tables
Table 2-1 Clearances between transformer parts ... 14
Table 2-2 Allowed losses and impedances ... 18
Table 3-1 Integer programming methods ... 32
Table 3-2 Optimization decision table ... 37
Table 3-3 Decision table of the Global Optimization Toolbox ... 39
Table 5-1 Applicability of algorithms ... 65
Table 6-1 Results of problem A using the brute force search method ... 75
Table 6-2 Problem A, model 1 results using GA ... 78
Table 6-3 Problem A, model 1 results using PS ... 79
Table 6-4 Problem A, model 1 results using SA ... 79
Table 6-5 Problem A, model 1 results using GA-PS ... 80
Table 6-6 Problem A, model 1 results using SA-PS ... 81
Table 6-7 Problem A, model 1 results using PS-PS ... 81
Table 6-8 Problem A, model 1 results using TOMLAB glcFast ... 82
Table 6-9 Problem A, model 1 results using TOMLAB OQNLP ... 82
Table 6-10 Problem A, model 1 results using TOMLAB GENO ... 82
Table 6-11 Comparison of methods for problem A, model 1 ... 84
Table 6-12 Problem A, model 2 results using GA ... 84
Table 6-13 Problem A, model 2 results using PS ... 85
Table 6-14 Problem A, model 2 results using SA ... 85
Table 6-15 Problem A, model 2 results using combined GA-PS ... 86
Table 6-16 Problem A, model 2 results using combined SA-PS ... 86
Table 6-17 Problem A, model 2 results using combined PS-PS ... 86
Table 6-18 Problem A, model 2 results using combined GS-SA-PS ... 87
Table 6-19 Problem A, model 2 results using TOMLAB glcFast ... 87
Table 6-20 Problem A, model 2 results using TOMLAB OQNLP ... 87
Table 6-21 Problem A, model 2 results using TOMLAB GENO ... 88
Table 6-22 Comparison of methods for problem A, model 2 ... 88
Table 6-23 Solvers of problem A ... 89
Table 6-24 Obtained transformer characteristics ... 91
Table 6-25 Main results for problem B using the brute force method ... 93
Table 6-26 Results summary of numeric methods for problem D ... 95
List of Figures
Fig. 2-1 Single-phase transformer ... 5
Fig. 2-2 Three-phase transformer ... 5
Fig. 2-3 The distribution transformer construction ... 6
Fig. 2-4 Geometric parameters for finding the optimum stacking step configuration ... 9
Fig. 2-5 3D drawing of an assembled iron core ... 10
Fig. 2-6 Main dimensions of an assembled iron core ... 10
Fig. 2-7 Iron core cross section ... 11
Fig. 2-8 Copper windings during production ... 11
Fig. 2-9 Iron core losses chart ... 13
Fig. 2-10 Half cross-section of one winding and core leg ... 14
Fig. 2-11 Half cross-section showing dimensions controlling reactance ... 17
Fig. 2-12 Flow chart of the algorithm currently used by SudaTraf ... 19
Fig. 3-1 Minimum of F(X) is the same as maximum of -F(X) ... 24
Fig. 3-2 Flow chart of the general brute force search algorithm ... 35
Fig. 4-1 Core cross section ... 56
Fig. 4-2 Core area calculations ... 56
Fig. 5-1 Flow chart of the improved design algorithm ... 67
Fig. 5-2 Flowchart of the algorithm giving combinations with accepted B (stages 1, 2) ... 68
Fig. 5-3 Flowchart of the algorithm giving all combinations of N and N per layer of the secondary winding (stage 3) ... 69
Fig. 5-4 Flowchart of the algorithm giving secondary winding conductor sizes that do not exceed the current density limit (stage 3) ... 69
Fig. 5-5 Flowchart of the algorithm giving primary winding conductor sizes that do not exceed the current density limit (stage 3) ... 70
Fig. 5-6 Flowchart of the algorithm giving all possible acceptable designs (stage 4) ... 71
Fig. 6-1 Core diameter vs time ... 78
Fig. 6-2 GUI for problem A ... 100
Fig. 6-3 GUI for problem A with results ... 101
Fig. 6-4 GUI for problems B, C and D ... 102
Fig. 6-5 GUI for problems B, C and D with filled data ... 103
Thesis Abstract
Due to the huge number of distribution transformers consumed and installed in utility networks every year, it is always required to build transformers at the most economical cost, while achieving the guaranteed transformer characteristics is an important factor that must be considered. It is well known that the transformer design task is time consuming. Several optimization techniques for obtaining the optimum design of distribution transformers are proposed and examined. The first technique is a Brute Force Search algorithm written in MATLAB; the second uses the Genetic Algorithm, Pattern Search, Simulated Annealing and hybrid optimization algorithms provided in the MATLAB optimization toolboxes; the third uses a third-party toolbox that runs in the MATLAB environment, namely the TOMLAB optimization toolbox. The mathematical formulation of transformer design is explained in a systematic way for the conventional type of distribution transformers. The optimization methodologies and implementation results are presented. The results show the effectiveness of using optimization methods in the transformer design problem and the reduction of total cost when compared to conventional designs. In particular, the Brute Force Search method proved to be the most effective in accuracy and speed; furthermore, improved Brute Force Search programs have been developed using a GUI (Graphical User Interface).
Keywords: Distribution transformer design, Optimization, Brute Force Search algorithm.
Thesis Abstract (Arabic)
Because of the large numbers of distribution transformers used and installed every year in the electrical network, it is required to manufacture and build transformers at the lowest cost. Meeting the required transformer specifications is also an important factor that must be taken into account, and, as is known, transformer design is a task that may take a long time. Several methods for optimum design are proposed and examined here. In this thesis a successful attempt at designing the distribution transformer is presented, using several methods. The first method uses the comprehensive Brute Force Search algorithm programmed in MATLAB; the second uses the optimization toolboxes included in MATLAB; and the third uses stand-alone toolboxes that run in the MATLAB environment, specifically the TOMLAB toolbox. The mathematical formulation currently used in designing distribution transformers is reviewed, and the various methods are applied. The results demonstrate the effectiveness and feasibility of some of the proposed methods and show the resulting cost reduction. In particular, the Brute Force Search method proved its effectiveness in terms of accuracy and speed of solution, and a program for this method has been developed using a graphical user interface.
Keywords: distribution transformer design, optimum design, Brute Force Search algorithm.
1. Chapter 1: Introduction

1-1 Problem Description
Transformers are passive devices for transforming voltage and current; they are usually categorized as power and distribution transformers. Distribution transformers are the most numerous and varied types used in the electricity supply network. In Sudan, with the increasing electrical power generation, the electricity networks are spreading daily through the developing country. This spread requires a huge number of distribution transformers of different ratings. Hundreds of distribution transformers are produced in Khartoum, Sudan every month, and additional hundreds are imported monthly. The cost of a distribution transformer varies from vendor to vendor, even for the same rating and even though all vendors supply transformers to the same specifications. The cost of a transformer includes the material cost, the manufacturing cost and the cost of losses; the main determiner of the cost is the design of the transformer, which fixes the material and losses costs. The transformer manufacturers in Sudan are:
1- SUDATRAF (Sudanese Egyptian Electrical Industry).
2- TRANSUDAN (Sudanese Company for Transformers).
So, the main goals are:
a- Obtaining an optimum transformer design using a suitable optimization tool.
b- Localizing the transformer design practice in Sudan.
c- Evaluating imported designs by comparing them with the optimum design obtained in this work.
1-2 Problem Importance
The designs of transformers produced in Sudan are either imported from companies in Egypt and Italy or rely on an earlier design that was used to manufacture a transformer with similar main characteristics. The transformer design practice in Sudan is to modify imported designs by a trial and error process, varying some of the design variables until the required specification is achieved. These designs successfully meet the acceptable standards, but they do not guarantee a cost-effective product, so there is always uncertainty about how effective the design really is.
Implementing the proposed improvement of the design procedure could lead to a reduction in transformer cost; even a slight reduction in cost per transformer unit would mean a large reduction in the transformer purchasing budget, because of the large quantities involved. Also, by applying the improved algorithm, the uncertainty about whether the minimum cost has been reached is removed. Finally, by localizing the transformer design practice in Sudan, hard currency could be saved.
1-3 Scope of the Study
This study concerns only the improvement of the distribution transformer design procedure, not the design formulae. The trial and error procedure currently used by the SudaTraf company is replaced with several methods. The proposed procedure algorithms are limited to:
- Distribution transformers rated between 50 kVA and 2000 kVA.
- Three-phase, oil-immersed, three-legged, core type.
- Stacked circular core section.
- Design of the active parts only (copper windings and iron core), which control the cost of the transformer.
- Primary line-line voltage of 11 kV or 33 kV.
- Secondary line-line voltage of 433 V.
- Copper round-wire primary layered winding construction.
- Copper rectangular-wire secondary layered winding construction.
1-4 Objectives
In this thesis the following objectives have been formulated:
1- To develop a computer-based algorithm that gives the optimum design of the iron core section for given available sheet sizes and a given number of steps. This is a very useful program that solves a main problem in transformer core design.
2- To develop computer-based design program algorithms that give the optimum design for distribution transformers by selecting the design parameters, i.e. copper size, core size, insulation thickness, cost, etc.
3- To compare the total cost of a transformer obtained using the proposed algorithms with the total cost obtained using the currently used design method. This objective is needed to evaluate the gain from implementing the proposed algorithms in transformer design.
1-5 Research Layout
This thesis is constructed of seven chapters; the flow of presentation is as follows. Chapter 2 presents the theoretical background of distribution transformers and transformer design, in addition to the previous studies. Chapter 3 presents an overview of the optimization literature and of optimization in the MATLAB programming language. Chapter 4 presents the design formulae of the transformer design problems to be solved. The methodologies of the proposed transformer design procedures are described in Chapter 5. Chapter 6 presents the implementation of the several methods with results and discussion. Chapter 7 includes the conclusion and recommendations for future work.
2. Chapter 2: Theoretical Background and Previous Studies

2-1 Transformer Basics
Definition: Transformers are passive devices for transforming voltage and current. They are among the most efficient machines [1]. A power transformer is a static device that, by electromagnetic induction, transmits electrical power from one alternating voltage level to another without changing the frequency. Transformers are necessary components in electrical systems as diverse as distribution terminals for multi-megawatt power generating stations and hand-held radio transceivers operating on a fraction of a watt [1]. Transformers are usually categorized as power and distribution transformers [2]. Distribution transformers are normally considered to be those transformers which provide the transformation from 36 kV and lower voltages down to the level of the final distribution network. In Sudan, the rated primary voltages are 33 kV and 11 kV, while the rated secondary voltage is 433 V [3]. Distribution transformers are by far the most numerous and varied types of transformers used in the electricity supply network. There are thousands of distribution transformers installed in the Sudan electricity supply system. They range in size from 50 kVA to 2000 kVA [1]. Transformer cost consists mainly of two parts: the capital cost of the transformer and the cost of the system losses [2]. A transformer normally consists of a pair of windings, primary and secondary, linked by a magnetic circuit or core. When an alternating voltage is applied to one of these windings, generally by definition the primary, a current will flow which sets up an alternating magnetomotive force and hence an alternating flux in the core. This alternating flux, linking both windings, induces an electromotive force in each of them. A transformer has two or more windings of wire wrapped around a ferromagnetic core. These windings are not electrically connected, but they are magnetically coupled, i.e., the only connection between the windings is the magnetic flux present within the core. The electrical energy received by the primary winding is first converted into magnetic energy, which is reconverted back into useful electrical energy in the secondary winding [2]. Let Ep = applied primary voltage, Es = induced secondary voltage, Np = number of primary turns, and Ns = number of secondary turns.
Since the induced voltage in the primary coil equals the applied voltage, and since the induced volts per turn are the same for both primary and secondary:

Ep/Np = Es/Ns    Eq. 2-1

Ep/Es = Np/Ns    Eq. 2-2

The ratio Np/Ns is called the transformer turns ratio. The current ratio is the inverse of the turns ratio:

Is/Ip = Np/Ns    Eq. 2-3
fig. 2-1 (single-phase transformer)
fig. 2-2 (three-phase transformer)
2-2 Distribution Transformer Construction
The basic arrangement is an iron core in the shape of three legs and two upper and lower yokes, forming a rectangular frame with an inner middle leg of almost circular cross-section. The iron core is covered by the windings (coils): first by the low-voltage coil, which consists of layered copper foil or flat wires, with the copper turns fully insulated with insulating paper; the low-voltage coil is in turn covered by the high-voltage coil, which also consists of layered insulated copper wires. The next figure shows the main construction of the distribution transformer.
fig. 2-3 (The distribution transformer construction)
In distribution transformers, the layered winding construction is mostly adopted. Most utilities prefer or accept copper windings only. This is due to copper's conductivity in addition to its excellent mechanical properties. Its value in transformers is particularly significant because of the benefits which result from the saving of space and the minimization of load losses [4]. Foil windings are frequently used as low-voltage windings in distribution transformers. In this form of construction, the winding turn, of copper foil, occupies the full width of the layer. This is wound around a mandrel, with intermediate layers of paper insulation, to form the required total number of turns for the winding. Strips of the conductor material are welded or brazed along the edge of the foil at the start and finish to form the winding leads. This arrangement represents a very cost-effective method of manufacturing low-voltage windings and also enables a transformer to be built with good mechanical short-circuit strength. In addition to the foil windings described above,
distribution transformers frequently use other types of winding construction not found in larger transformers. Because of the low kVA ratings, the volts per turn are usually very low, so that for a higher-voltage winding a considerable number of turns is required. The current is also low and the turn cross-sectional area, as a result, is small too. The winding wires are frequently circular in section and enamel covered. The iron core is made of an iron alloy called silicon steel (also called electrical steel, lamination steel or transformer steel), which contains from 1% up to 6.5% Si; increasing the amount of silicon inhibits eddy currents and narrows the hysteresis loop of the material, which reduces the core losses [1]. The most used silicon steel material is Cold Rolled Grain Oriented (CRGO) silicon steel, which is silicon steel processed in such a way that the optimum properties are developed in the rolling direction, due to the tight control of the crystal orientation relative to the sheet. Due to this special orientation, the magnetic flux density is increased by 30% in the coil rolling direction, but the magnetic saturation is decreased by 5%. Coating of silicon steel sheets: silicon steel is coated to increase the resistance between laminations, to provide resistance to corrosion and rust, and to act as a lubricant during die cutting. The usually used coating is an oxide coating (commercially known as Carlite) [5]. There are two factors that determine the core effective area, weight and losses:
a- Core stacking factor (SF): The stacking factor is a correction number that represents the space lost between laminations. Both surfaces of a lamination (electrical sheet) are provided with an insulation of oxide coating (commercially known as Carlite). The stacking factor improves by using thicker laminations, but this increases the eddy current loss in proportion to the square of the lamination thickness. Therefore, to reduce the eddy current losses, thinner laminations are preferable even though the stacking factor goes down. In conventional silicon steel sheets, the stacking factor is in the range of 0.95 – 0.98, depending on the thickness of the laminations [4]. In this thesis, the considered stacking factor is 0.96 for the 0.27 mm thick laminates (sheets) and 0.97 for the 0.30 mm thick laminates. These values are usually provided by the supplier of the silicon steel.
b- Building factor (BF): This is a number that should be multiplied by the ideal core loss. It accounts for the different factors that increase the losses in the core, such as the gaps between different laminations at the corners, which the induction must overpass. Another factor that increases the loss is
burrs produced by the cutting and slitting of sheets. The building factor of stacked cores is generally in the range of 1.1 – 1.3 and varies from one manufacturer to another [4].
The total weight of the core results from multiplying the ideal area of the core by SF, then by the total length of the core and by the density of silicon steel. The silicon steel density is usually provided by the steel manufacturer and is taken as 7.65 g/cm³. Therefore, the core weight is [4]:

Core weight (kg) = Ideal area (cm²) × SF × Core total length (mm) × 7.65 × 10⁻⁴    Eq. 2-4

Where:

Core total length = 3 × Core window height (mm) + 2 × Core yoke (mm)    Eq. 2-5

A transformer designer always tries to maximize the core area by finding the optimum stacking arrangement for a specific number of steps. From a practical point of view, using arbitrary sheet widths is not possible, since it would require numerous ranges of lamination widths; it is normal practice for a transformer manufacturer to keep standard widths of sheets, or to do slitting based on a certain sheet-width increment such as 5 or 10 mm. In this thesis, the considered steel width increment is 10 mm. Usually, a core is made of a certain number of steps; for the smaller cores of distribution transformers this could be as few as five or less, while for larger transformers it might be 11 steps or more. The geometric parameters which can be used to get the optimum stacking arrangement, namely the x and y coordinates of the stack corners which touch the circle of radius r, were given by Vecchio [6]; the normalized x coordinates which maximize the core area for a given number of steps are listed below. Notice that they give step sizes that are not accepted as standard sizes; in this thesis a better way to select the steps is developed.
Number of steps (n)   Fraction of circle area occupied, A/(πr²)   Normalized x coordinates, xi/r
1                     0.6366    0.7071
2                     0.7869    0.5257, 0.8506
3                     0.8510    0.4240, 0.7070, 0.9056
4                     0.8860    0.3591, 0.6064, 0.7951, 0.9332
5                     0.9079    0.3138, 0.5336, 0.7071, 0.8457, 0.9494
6                     0.9228    0.2802, 0.4785, 0.6379, 0.7700, 0.8780, 0.9599
7                     0.9337    0.2543, 0.4353, 0.5826, 0.7071, 0.8127, 0.9002, 0.9671
8                     0.9419    0.2335, 0.4005, 0.5375, 0.6546, 0.7560, 0.8432, 0.9163, 0.9732
9                     0.9483    0.2164, 0.3718, 0.4998, 0.6103, 0.7071, 0.7921, 0.8661, 0.9283, 0.9763
10                    0.9534    0.2021, 0.3476, 0.4680, 0.5724, 0.6648, 0.7469, 0.8199, 0.8836, 0.9376, 0.9793
fig 2-4 (Geometric parameters for finding the optimum stacking step configuration) [6]
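To make the use of these coordinates concrete, the following is a minimal MATLAB sketch (illustrative only, not the thesis program) that computes the gross cross-sectional area of an n-step core from a set of normalized corner coordinates xi/r:

```matlab
function [A_cm2, frac] = stepped_core_area(x_norm, r_mm)
% Gross (ideal) cross-sectional area of an n-step stacked core.
%   x_norm : normalized corner coordinates xi/r, ascending, 0 < xi/r <= 1
%   r_mm   : core circle radius in mm
x = sort(x_norm(:)).';                 % ensure ascending order
y = sqrt(1 - x.^2);                    % matching normalized y coordinates
% The central stack (width 2*x(end)) fills |y| <= y(end); each narrower
% stack fills the band between successive y levels above and below it.
fracOfRsq = 4*( x(end)*y(end) + sum( x(1:end-1).*(y(1:end-1) - y(2:end)) ) );
frac  = fracOfRsq/pi;                  % fraction of the circumscribing circle
A_cm2 = fracOfRsq * r_mm^2 / 100;      % area in cm^2 (mm^2 -> cm^2)
end
```

For example, stepped_core_area([0.4240 0.7070 0.9056], 100) returns a circle-fill fraction of about 0.851, matching the 3-step row of the table above; multiplying the returned area (cm²) by SF, the core total length (mm) and 7.65 × 10⁻⁴ then gives the core weight of Eq. 2-4.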
fig 2-5 (3D drawing of an assembled iron core)
fig 2-6 (main dimensions of an assembled iron core)
fig 2-7 (iron core cross section)
The copper winding consists of copper turns arranged in layers. The copper conductor can have a foil, flat or round cross-section. Copper foil is used when a high current is to flow in the coil, e.g. the low-voltage (high-current) winding of a large-rating transformer, because it has a large cross-sectional area. Flat copper conductors have a smaller cross-sectional area, so they carry less current than foils; they can be used in the high-voltage windings and also in the low-voltage windings. Round copper conductors have the smallest cross-sectional areas, so they can carry only small currents and are used only in the high-voltage (low-current) windings.
fig 2-8 (copper windings during production)
2-3 Transformer Design Formulae
Transformer design can be considered one of the most complex electrical equipment design problems. In transformer design, there are many characteristics and factors to be achieved, such as the rated power, the voltage ratings of the windings, a pre-specified impedance value, etc. In the end, a variety of designs can be found which guarantee the required characteristics of a transformer [4]. Transformer cost consists mainly of two parts: the capital cost of the transformer and the cost of the system losses [2]. These different end products differ in their design parameters, such as core flux density and radius, the radial and axial dimensions of the windings, the primary or secondary number of turns, the current densities of the primary and secondary conductors, etc. Even though the user requirements are achieved, the above parameters play a major role in determining the transformer cost. The main elements in transformer design are the design of the iron core and of the copper coils. The main formulae are:
a- Flux density: The main equation in transformer design is:
Bm = (E/N) / (4.44 × f × Ac)    Eq. 2-6

Where:
Bm = maximum operating flux density (Tesla); its upper limit is specified by the manufacturer.
E = voltage of the primary or secondary coil (Volt)
N = number of turns in the primary or secondary coil
E/N = volts per turn (E and N must be for the same coil, either primary or secondary); it has the same value for the primary and secondary coils (Volt)
f = frequency of the alternating current, which is 50 Hz as standard in Sudan
Ac = effective area of the core leg (limb) (m²)
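As a small illustration of how Eq. 2-6 is used as a design check, the MATLAB fragment below evaluates Bm for a candidate design; all numbers are assumed example values, not SUDATRAF data:

```matlab
% Flux-density check per Eq. 2-6 (all values assumed for illustration)
E_lv  = 250;       % LV phase voltage (V), roughly 433/sqrt(3)
N_lv  = 43;        % candidate number of LV turns
f     = 50;        % supply frequency (Hz)
Ac    = 0.0165;    % effective core leg area (m^2)
Bm    = (E_lv/N_lv) / (4.44 * f * Ac);   % operating flux density (T)
Bmax  = 1.7;       % manufacturer's flux-density limit (T), assumed
ok    = Bm <= Bmax;                      % candidate accepted only if true
```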
b- Core losses: The total core loss is as follows:

Core loss (W) = Core weight (kg) × BF × Loss/kg    Eq. 2-7
The loss/kg can be obtained from the core loss curves based on the operating flux density (Tesla), the material type, the sheet thickness and the frequency. The curves are provided by the supplier; the core iron loss properties vary from vendor to vendor, but typically look like the figure below:
fig 2-9 (iron core losses chart) [7]
c- Insulation paper between layers: A quite fair and practical equation for determining the thickness of insulating paper between winding layers is the following [4]:

Thickness (mm) >= (Volts/turn × turns/layer) / 4500    Eq. 2-8

d- Cooling duct calculations: The maximum number of conductor layers between cooling ducts is given by:

Layers_per_duct = 100 / (conductor thickness × (current density)²)    Eq. 2-9

No. of cooling ducts >= No. of layers / Layers_per_duct    Eq. 2-10
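A hedged MATLAB fragment applying Eq. 2-8 to Eq. 2-10 with illustrative numbers (not taken from the thesis data):

```matlab
% Inter-layer paper thickness, Eq. 2-8
volts_per_turn  = 5.8;   turns_per_layer = 120;        % assumed values
paper_min_mm    = volts_per_turn * turns_per_layer / 4500;
% Cooling ducts, Eq. 2-9 and Eq. 2-10
cond_thick_mm   = 2.0;   J = 3.0;                      % conductor thickness (mm), current density (A/mm^2)
layers_per_duct = 100 / (cond_thick_mm * J^2);
n_layers        = 14;
n_ducts         = ceil(n_layers / layers_per_duct);    % smallest integer satisfying Eq. 2-10
```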
e- Clearances between components: The following practical minimum clearances between two adjacent HV coils, between the HV and LV windings, and between the LV winding and the core can be employed safely:

Table 2-1 Clearances between transformer parts (mm) [4]

Clearance                  11 kV   33 kV
Gap HV – HV                10      20
Gap LV – HV                8       12
Coil – yoke (axial)        5       11
LV end fill (axial)        14      16
HV end fill (axial)        16      20
Cooling duct size used     3.1     6.1

Gap core – LV (radial) = 6 mm when the line voltage of the low-tension coil is below 2000 V, or 8 mm when it is above 2000 V.
fig 2-10 (half cross-section of one winding and core leg)
f- Transformer winding impedance: One of the main transformer characteristics that end users or system designers require to be guaranteed is the transformer impedance. Transformer designers always seek to meet the specified minimum and maximum impedance limits. In the case of distribution transformers, utilities specify a standard impedance value for each transformer rating. The normal way to express the transformer impedance is as a percentage voltage drop in the transformer at full load current; this reflects the way it is seen by the system designer. The percentage resistance and reactance of the windings are the components that determine the transformer percentage impedance. The general formula for the percentage impedance is as follows:

%Z = (I × √(R² + X²) / E) × 100    Eq. 2-11

Where:
I = transformer primary or secondary full load current
E = transformer primary or secondary open circuit voltage
R = coil resistance per phase
X = coil reactance per phase
g- Transformer winding resistance: Since the cross-sectional area of distribution transformer wires is not large, only the DC resistance of the conductor is used to determine the winding resistance. The DC resistance of a transformer winding can be calculated as follows:

R = (ρ × L × N) / A    Eq. 2-12

Where:
ρ = copper conductor resistivity at the temperature of interest
L = mean length of one turn of the conductor
N = number of turns
A = cross-sectional area of the conductor

The resistance calculated by this equation must be multiplied by three to obtain the total winding resistance.
The copper resistivity at 20°C, 1.724 × 10⁻⁸ Ω·m, can be recalculated at the temperature of interest (75°C or 85°C as per the IEC 60076 and ANSI C57 international standards, respectively) as follows:

ρ = 1.724 × 10⁻⁸ × (Tref + 234.5) / (234.5 + 20)    Eq. 2-13
The accuracy of the winding resistance depends on how accurately the mean length of turn can be calculated. Obviously, the winding dimensions play the main role in computing the mean length of turn, which can be calculated as follows:

MLT_LV = π (ID_LV + RD_LV)    Eq. 2-14

where:
MLT_LV = mean length of LV winding turn (mm)
ID_LV = LV winding inside diameter (mm)
RD_LV = LV winding radial depth (mm)

MLT_HV = π (ID_HV + RD_HV)    Eq. 2-15

where:
MLT_HV = mean length of HV winding turn (mm)
ID_HV = HV winding inside diameter (mm)
RD_HV = HV winding radial depth (mm)
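Combining Eq. 2-12 to Eq. 2-14, a minimal MATLAB sketch for the LV winding DC resistance looks as follows (the dimensions and turns are assumed example values):

```matlab
rho20  = 1.724e-8;                              % copper resistivity at 20 C (ohm*m)
Tref   = 75;                                    % IEC 60076 reference temperature (C)
rho    = rho20 * (Tref + 234.5)/(234.5 + 20);   % Eq. 2-13
ID_lv  = 230;  RD_lv = 30;                      % LV inside diameter and radial depth (mm), assumed
MLT_lv = pi * (ID_lv + RD_lv) / 1000;           % mean length of turn (m), Eq. 2-14
N_lv   = 43;                                    % LV turns per phase, assumed
A_cond = 60e-6;                                 % conductor cross-section (m^2), assumed
R_lv   = rho * MLT_lv * N_lv / A_cond;          % ohms per phase, Eq. 2-12
R_total = 3 * R_lv;                             % total three-phase winding resistance
```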
h- Transformer winding reactance: There are different techniques for evaluating the leakage reactance of transformers. The most common technique uses the leakage flux in the different elements, estimating the flux in the different parts of the transformer in terms of the winding dimensions. The transformer leakage reactance can be calculated using the equation below [8]:
fig 2-11 (half cross section-showing dimensions controlling reactance)
X = [(2π)² × u × f × V × I / ((V/N)² × h)] × {R1 × d1 / 3 + R2 × d2 / 3 + Rm × g}    Eq. 2-16

where:
h = (h1 + h2)/2
V = primary or secondary phase voltage
I = primary or secondary phase current
N = primary or secondary number of turns
u = magnetic space constant = 4π × 10⁻⁷
R1, d1, R2, d2, Rm and g = the winding and gap radial dimensions shown in fig 2-11
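With R from Eq. 2-12 and X from Eq. 2-16 available per phase, the percentage impedance of Eq. 2-11 follows directly; a short MATLAB fragment with assumed example values:

```matlab
E = 250;   I = 266.6;            % phase voltage (V) and full-load phase current (A), assumed
R = 0.0123;                      % per-phase winding resistance (ohm), assumed
X = 0.040;                       % per-phase leakage reactance (ohm), assumed
Zpct = I * sqrt(R^2 + X^2) / E * 100;   % Eq. 2-11, percentage impedance
```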
i- Transformer winding load losses: By definition, the load loss of a transformer is the amount of loss produced by the presence of the load current. The load losses of distribution transformers consist of losses due to the resistance of the windings plus stray losses. The major source of load loss in the transformer coil conductors is I²R. Since the cross-sectional area of distribution transformer wire is not large, we can use only the DC resistance of the conductor, i.e. I²Rdc. The rest of the load losses are mainly due to stray losses. Stray losses are made up of a number of components, such as stray loss in the transformer tank, stray loss in the clamping structure, and stray loss in the windings. The dominant portion of the stray losses in distribution transformers takes place in the windings, where it consists mainly of eddy current loss. The eddy loss in the transformer windings, which is due to the presence of leakage flux in the windings, must be clearly distinguished from the eddy current loss in the transformer core, which is part of the no-load loss. Therefore [4]:
Load losses = (I²R_HV + I²R_LV + Eddy loss_HV + Eddy loss_LV + Connection losses_HV + Connection losses_LV) × 1.04    Eq. 2-17

Eddy losses (for each of the LV and HV coils) = (conductor thickness)⁴ × (No. of turns)² × (I²R) / 100000    Eq. 2-18

Connection losses (for each of the LV and HV coils) = I_phase × I²R / 20000    Eq. 2-19

Stray losses: assumed to be 4% of the total load losses, so the total load loss is multiplied by 1.04.
2-4 Allowed losses and impedances (Design Constraints) According to the specification for distribution transformers, No. SEDCSP- 2 – 12, Sudanese electricity distribution company ltd., published in October 2011, the allowed losses and impedances are as follows:
Table 2-2 Allowed losses and impedances [3]

Transformer rating        No-load loss (W)   Full-load loss (W)   Impedance Z%
100 kVA, 11/0.433 kV      270                1700                 4
200 kVA, 11/0.433 kV      480                2680                 4
300 kVA, 11/0.433 kV      650                3550                 4.75
500 kVA, 11/0.433 kV      765                5500                 4.75
750 kVA, 11/0.433 kV      1000               7800                 6
1000 kVA, 11/0.433 kV     1650               10500                6
1500 kVA, 11/0.433 kV     1400               17000                6
2000 kVA, 11/0.433 kV     1700               22100                6
100 kVA, 33/0.433 kV      340                1550                 4
200 kVA, 33/0.433 kV      460                2450                 4
500 kVA, 33/0.433 kV      765                5500                 4.75
750 kVA, 33/0.433 kV      1000               7800                 6
1000 kVA, 33/0.433 kV     1650               10500                6
1500 kVA, 33/0.433 kV     1400               17000                6
2000 kVA, 33/0.433 kV     1700               22100                6
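In the design programs of the later chapters these guaranteed values act as design constraints. A hedged MATLAB sketch of such a feasibility check, with the limits of one rating copied from Table 2-2 and the impedance tolerance treated as an assumption:

```matlab
% Constraint check for a candidate 500 kVA, 11/0.433 kV design (limits from Table 2-2)
limit.NLL = 765;    limit.LL = 5500;    limit.Z = 4.75;   % W, W, percent
cand.NLL  = 720;    cand.LL  = 5320;    cand.Z  = 4.60;   % candidate design results (assumed)
tolZ = 0.10;                       % +/-10% impedance tolerance, an assumed IEC-style value
feasible = (cand.NLL <= limit.NLL) && (cand.LL <= limit.LL) && ...
           (abs(cand.Z - limit.Z)/limit.Z <= tolZ);
```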
2-5 Transformer Design Procedure – As Currently Applied
The procedure currently used in SudaTraf is a trial and error method; it depends on trying values for some of the variables until an acceptable solution is obtained. Such a solution cannot be proven to be optimum. Additionally, to end a long run of failed trials,
some constraints could be violated. The figure below shows the flowchart of the design procedure currently used.
fig 2-12 (flow chart of SudaTraf currently used algorithm)
2-6 Literature Survey on Transformer Design Optimization
Many studies have been carried out on optimizing transformer design; many of them are in the field of material property improvement, which is not practicable or useful for our situation. Other studies used optimization techniques for selecting the optimum values of the design variables using the available material properties, such as the work of Eleftherios I. Amoiralis and Pavlos S. Georgilakis, (Methodology for the
Optimum Design of Power Transformers Using Minimum Number of Input Parameters) [9]. They used the decision tree optimization approach, which is easy to work with but difficult to program, since it requires building a knowledge base structured on more than 2500 previous designs. Another piece of research, by Li Hui, Han Li and He Bie, (Application research based on improved genetic algorithm for optimum design of power transformers) [10], presents an Improved Genetic Algorithm (IGA) optimization method applied to the power transformer design problem. The target of this method was to overcome the common problems of the Simple Genetic Algorithm (SGA). The design was limited to the use of rectangular copper strip in the primary and secondary windings; moreover, distribution transformers frequently use a type of winding construction not found in large transformers. A useful work is (Cost effective design of conventional transformer using optimization techniques) [4] by Mohammed Yousef Abu-Sada, who used two optimization techniques to improve the distribution transformer design: 1- nonlinear optimization and 2- genetic optimization. The two techniques used are continuous optimization techniques, which are powerful, but they return the optimum solution at impracticable values of the variables, because many transformer design variables are integers and many have standard values. He applied the optimization techniques, rounded off the obtained variable values, and accepted them as the values of the optimum design. However, the author of the book (Engineering Optimization, Theory and Practice) [11] writes that: "it is POSSIBLE to use any of the continuous optimization techniques and round off the optimum values; however, in many cases it is difficult to round off the solution without violating any of the constraints. Frequently the rounding of certain variables requires substantial changes in the values of some other variables to satisfy all the constraints; further, the rounded-off solution may give a value of the objective function that is very far from the possible optimum value. All these difficulties can be avoided if the optimization problem is posed and solved as an integer programming problem." All those previous studies are based on finding the single optimum solution using an optimization tool; the tools used are optimization tools for continuous variables, which are not suitable for the transformer design variables. [2]
3. Chapter 3: Optimization Overview

3-1 Optimization Definition
Optimization is the act of obtaining the best result under given circumstances; it can also be defined as a body of mathematical results and numerical methods for finding and identifying the best candidate from a collection of alternatives. There is a huge literature on optimization techniques, varying in application and complexity. There is no single method available for solving all optimization problems efficiently; hence, a number of optimization methods have been developed for solving different types of optimization problems.
3-2 Statement of an Optimization Problem [11]
a- General statement
An optimization or mathematical programming problem can be stated as follows:
Find X = {x1, x2, x3, ...}, which minimizes F(X)
subject to the constraints:
gj(X) ≤ 0, j = 1, 2, ..., m
lj(X) = 0, j = 1, 2, ..., p
where X is an n-dimensional vector called the design vector, F(X) is termed the objective function, and gj(X) and lj(X) are known as inequality and equality constraints, respectively. The number of variables n and the number of constraints m and/or p need not be related in any way. The problem stated above is called a constrained optimization problem. Some optimization problems do not involve constraints; such problems are termed unconstrained optimization problems. The next figure shows that if a point X* corresponds to the minimum value of the function F(X), the same point also corresponds to the maximum value of the negative of the function, -F(X).
fig 3-1 (minimum of F(X) is same as maximum of –F(X))
Thus, without loss of generality, optimization can be taken to mean minimization since the maximum of a function can be found by seeking the minimum of the negative of the same function.
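In the document's own environment, this standard form maps directly onto MATLAB's constrained solvers. The following is a generic sketch (a toy objective and constraint, not a transformer model) showing how F(X) and gj(X) are passed to fmincon:

```matlab
% Minimize F(X) = (x1-3)^2 + (x2-2)^2  subject to  g(X) = x1 + x2 - 4 <= 0
F  = @(x) (x(1)-3)^2 + (x(2)-2)^2;        % objective function F(X)
g  = @(x) deal(x(1) + x(2) - 4, []);      % inequality constraint(s), no equalities
x0 = [0 0];                               % starting design vector
opts = optimoptions('fmincon','Display','off');
[xOpt, Fmin] = fmincon(F, x0, [], [], [], [], [], [], g, opts);
```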
b- Design Vector (X) Any engineering system or component is defined by a set of quantities some of which are viewed as variables during the design process. In general, certain quantities are usually fixed at the outset and these are called preassigned parameters. All the other quantities are treated as variables in the design process and are called design or decision variables (xi); i= {1, 2,... n}. The design variables are collectively represented as a design vector (X) ={x1, x2, x3, … xn}.
c- Design Constraints In many practical problems, the design variables cannot be chosen arbitrarily; rather, they have to satisfy certain specified functional and other requirements. The restrictions that must be satisfied to produce an acceptable design are collectively called design constraints.
Constraints that represent limitations on the behavior or performance of the system are termed behavior or functional constraints. Constraints that represent physical limitations on design variables, such as availability, fabricability, and transportability, are known as geometric or side constraints.
d- Objective Function The conventional design procedures aim at finding an acceptable or adequate design which merely satisfies the functional and other requirements of the problem. In general, there will be more than one acceptable design, and the purpose of optimization is to choose the best one of the many acceptable designs available. Thus a criterion has to be chosen for comparing the different alternative acceptable designs and for selecting the best one. The criterion with respect to which the design is optimized, when expressed as a function of the design variables, is known as the criterion or merit or objective function. The choice of objective function is governed by the nature of the problem. The objective function for minimization is generally taken as weight in aircraft and aerospace structural design problems. In civil engineering structural designs, the objective is usually taken as the minimization of cost. The maximization of mechanical efficiency is the obvious choice of an objective in mechanical engineering systems design. Thus the choice of the objective function appears to be straightforward in most design problems.
3-3 Classification of optimization problems [11] Optimization problems can be classified in several ways, as described below:
a- Classification Based on the Existence of Constraints As indicated earlier, any optimization problem can be classified as constrained or unconstrained, depending on whether or not constraints exist in the problem.
b- Classification Based on the Nature of the Design Variables Based on the nature of the design variables encountered, optimization problems can be classified into two broad categories. In the first category, the problem is to find values for a set of design parameters that make some prescribed function of these parameters
minimum subject to certain constraints. Such problems are called parameter or static optimization problems. In the second category of problems, the objective is to find a set of design parameters, which are all continuous functions of some other parameter, which minimizes an objective function subject to a set of constraints. Here the design variables are functions of the parameters. This type of problem, where each design variable is a function of one or more parameters, is known as a trajectory or dynamic optimization problem.
c- Classification Based on the Nature of the Equations Involved Another important classification of optimization problems is based on the nature of expressions for the objective function and the constraints. According to this classification, optimization problems can be classified as linear, nonlinear, geometric, and quadratic programming problems. This classification is extremely useful from the computational point of view since there are many special methods available for the efficient solution of a particular class of problems. Thus the first task of a designer would be to investigate the class of problem encountered. This will, in many cases, dictate the types of solution procedures to be adopted in solving the problem.
Nonlinear Programming Problem: If any of the functions among the objective and constraint functions is nonlinear, the problem is called a nonlinear programming (NLP) problem. This is the most general programming problem and all other problems can be considered as special cases of the NLP problem.
Geometric Programming Problem: A geometric programming (GMP) problem is one in which the objective function and constraints are expressed as posynomials in X. A function h(X) is called a posynomial if h can be expressed as the sum of power terms, each of the form: ci · x1^(ai1) · x2^(ai2) · ... · xn^(ain), where ci > 0, xj > 0, and the aij are constants.
Quadratic Programming Problem: A quadratic programming problem is a Nonlinear programming problem with a quadratic objective function and linear constraints.
Linear Programming Problem: If the objective function and all the constraints are linear functions of the design variables, the mathematical programming problem is called a linear programming (LP) problem.
d- Classification Based on the Permissible Values of the Design Variables Depending on the values permitted for the design variables, optimization problems can be classified as integer and real-valued programming problems. If all of the design variables of an optimization problem are restricted to take on only integer values, the problem is called an integer programming problem; in the case of discrete values, the problem is called a combinatorial optimization problem. On the other hand, if all the design variables are permitted to take any real value, the optimization problem is called a real-valued programming problem. If only some of the variables are restricted to be integer, and the other variables are not restricted in that way, it is called a mixed-integer optimization problem.
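Since many transformer design variables are integers, the mixed-integer case is the one of most interest here. A minimal MATLAB sketch of a mixed-integer linear program using intlinprog follows (a generic toy problem; intlinprog requires a relatively recent Optimization Toolbox release):

```matlab
% Minimize f'*x with x(1) integer and x(2) continuous, subject to A*x <= b
f      = [-3; -2];            % objective coefficients
intcon = 1;                   % index of the integer-restricted variable
A = [1 1];   b = 4;           % linear inequality A*x <= b
lb = [0; 0]; ub = [10; 10];
x = intlinprog(f, intcon, A, b, [], [], lb, ub);
```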
e- Classification Based on the Deterministic Nature of the Variables Based on the deterministic nature of the variables involved, optimization problems can be classified as deterministic and stochastic programming problems. A stochastic programming problem is an optimization problem in which some or all of the parameters (design variables and/or pre-assigned parameters) are probabilistic (nondeterministic or stochastic).
3-4 Techniques of Optimization [11]
a- Classical optimization techniques
The classical methods of optimization are useful in finding the optimum solution of continuous and differentiable functions. These methods are analytical and make use of the techniques of differential calculus in locating the optimum points. These methods assume that the function is twice differentiable with respect to the design variables and that the derivatives are continuous. For problems with equality constraints, the Lagrange multiplier method can be used. If the problem has inequality constraints, the Kuhn-Tucker conditions can be used to identify the optimum point. However, these methods lead to a set of nonlinear simultaneous equations that may be difficult to solve. Since some practical problems involve objective functions that are not continuous and/or differentiable, the classical optimization techniques have limited scope in practical applications. Nevertheless, a study of the calculus methods of optimization forms a basis for developing most of the numerical techniques of optimization. Some of these classical methods are:
- Direct substitution
- Constrained variation
- Lagrange multipliers
b- Linear programming methods
The simplex method continues to be the most efficient and popular method for solving general LP problems. Among other methods, Karmarkar's method has been shown to be up to 50 times as fast as the simplex algorithm of Dantzig. If an LP problem involving several variables and constraints is to be solved by using the simplex method, it requires a large amount of computer storage and time. Some techniques, which require less computational time and storage space compared to the original simplex method, have been developed. Among these techniques, the revised simplex method is very popular. The principal difference between the original simplex method and the revised one is that in the former we transform all the elements of the simplex tableau, while in the latter we need to transform only the elements of an inverse matrix. Associated with every LP problem, another LP problem, called the dual, can be formulated. The solution of a given LP problem, in many cases, can be obtained by solving its dual in a much simpler manner. As stated above, one of the difficulties in certain practical LP problems is that the number of variables and/or the number of constraints is so large that it exceeds the storage capacity of the available computer. If the LP problem has a special structure, a principle known as the decomposition principle can be used to solve the problem more efficiently. In many practical problems, one will be interested not only in finding the optimum solution to an LP problem, but also in finding how the optimum solution changes when some parameters of the problem, such as cost coefficients, change. Hence sensitivity or post-optimality analysis becomes very important. An important special class of LP problems, known as transportation problems, occurs often in practice. These problems can be solved by algorithms that are more efficient (for this class of problems) than the simplex method. Karmarkar's method is an interior method and has been shown to be superior to the simplex method of Dantzig for large problems. [11]
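For reference, a small LP stated and solved in MATLAB with linprog (a generic textbook-style example, unrelated to the transformer problem):

```matlab
% Minimize f'*x subject to A*x <= b and x >= 0
f  = [-5; -4];                  % negated profits, so maximization becomes minimization
A  = [6 4; 1 2];   b = [24; 6]; % resource constraints
lb = [0; 0];
[x, fval] = linprog(f, A, b, [], [], lb, []);
```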
c- Non-linear programming methods
Several methods are available for solving a nonlinear minimization problem. These methods can be classified into two broad categories: direct search methods and descent (indirect) methods. The direct search methods require only the objective function values, not the partial derivatives of the function, in finding the minimum, and hence are often called non-gradient methods. The direct search methods are also known as zeroth-order methods since they use only zeroth-order information about the function. These methods are most suitable for simple problems involving a relatively small number of variables, and they are, in general, less efficient than the descent methods. The descent techniques require, in addition to the function values, the first and in some cases the second derivatives of the objective function. Since more information about the function being minimized is used (through the use of derivatives), descent methods are generally more efficient than direct search techniques. The descent methods are known as gradient methods. Among the gradient methods, those requiring only first derivatives of the function are called first-order methods, while those requiring both first and second derivatives are termed second-order methods. All the nonlinear minimization methods are iterative in nature: they start from an initial trial solution and proceed toward the minimum point in a sequential manner. It is important to note that all the unconstrained minimization methods require an initial point X0 to start the iterative procedure, and differ from one another only in the method of generating the new point Xi+1 (from Xi) and in testing the point Xi+1 for optimality. In the direct methods, the constraints are handled in an explicit manner, whereas in most of the indirect methods, the constrained problem is solved as a sequence of unconstrained minimization problems. [11]
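To make the distinction concrete, the following small sketch (test function and starting point assumed for illustration) contrasts a zeroth-order direct search call with a gradient-based descent call in MATLAB; both start from the same initial trial solution X0.

    % Assumed smooth test function and initial point
    fun = @(x) (x(1) - 1)^2 + 100*(x(2) - x(1)^2)^2;
    x0  = [-1; 2];

    % Direct (zeroth-order) search: uses only objective function values
    [xd, fd] = fminsearch(fun, x0);

    % Descent (gradient-based) method: derivatives are estimated internally here
    [xg, fg] = fminunc(fun, x0);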
d- Geometric programming
It is a relatively new method of solving a class of nonlinear programming problems. It is used to minimize functions that are in the form of posynomials subject to constraints of the same type. It differs from other optimization techniques in the emphasis it places on the relative magnitudes of the terms of the objective function rather than on the variables. Instead of finding optimum values of the design variables first, geometric programming first finds the optimum value of the objective function. This feature is especially advantageous in situations where the optimum value of the objective function may be all that is of interest; in such cases, calculation of the optimum design vectors can be omitted. Another advantage of geometric programming is that it often reduces a complicated optimization problem to one involving a set of simultaneous linear algebraic equations. The major disadvantage of the method is that it requires the objective function and the constraints to be in the form of posynomials. [11]
e- Dynamic programming
In most practical problems, decisions have to be made sequentially at different points in time, at different points in space, and at different levels, say, for a component, for a subsystem, and/or for a system. Problems in which the decisions are to be made sequentially are called sequential decision problems. Since these decisions are made at a number of stages, they are also referred to as multistage decision problems. Dynamic programming is a mathematical technique well suited to the optimization of multistage decision problems. The dynamic programming technique, when applicable, decomposes a multistage decision problem into a sequence of single-stage decision problems. Thus an N-variable problem is represented as a sequence of N single-variable problems that are solved successively. In most cases, these N sub-problems are easier to solve than the original problem. The decomposition into N sub-problems is done in such a manner that the optimum solution of the original N-variable problem can be obtained from the optimum solutions of the N one-dimensional problems. It is important to note that the particular optimization technique used for the N single-variable problems is irrelevant; it may range from a simple enumeration process to differential calculus or a nonlinear programming technique. [11]
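The stage-wise decomposition can be sketched on a small 0/1 allocation example (the weights, values and capacity are assumed for illustration): each stage treats one variable, and the stage results are combined recursively.

    % Hypothetical example: choose items 1..3 (0/1 each) within a capacity of 5 units
    w = [2 3 4];  v = [3 4 5];  W = 5;       % assumed weights, values, capacity
    best = zeros(1, W + 1);                  % best(c+1) = best value achievable with capacity c
    for k = 1:numel(w)                       % one single-variable sub-problem per stage
        for c = W:-1:w(k)                    % traverse capacities downwards (0/1 decision)
            best(c+1) = max(best(c+1), best(c+1-w(k)) + v(k));
        end
    end
    best(W+1)                                % optimum value of the original multistage problem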
f- Stochastic programming
Stochastic or probabilistic programming deals with situations where some or all of the parameters of the optimization problem are described by stochastic (random or probabilistic) variables rather than by deterministic quantities. The sources of random variables may be several, depending on the nature and type of the problem. For instance, in the design of concrete structures, the strength of concrete is a random variable since the compressive strength of concrete varies considerably from sample to sample. In the design of mechanical systems, the actual dimension of any machined part is a random variable since the dimension may lie anywhere within a specified (permissible) tolerance band. Similarly, in the design of aircraft and rockets the actual loads acting on the vehicle depend on the atmospheric conditions prevailing at the time of flight, which cannot be predicted precisely in advance. Hence the loads are to be treated as random variables in the design of such flight vehicles. Depending on the nature of the equations involved (in terms of random variables), a stochastic optimization problem is called a stochastic linear, geometric, dynamic or nonlinear programming problem. The basic idea
used in stochastic programming is to convert the stochastic problem into an equivalent deterministic problem. The resulting deterministic problem is then solved by using familiar techniques such as linear, geometric, dynamic and nonlinear programming. There are stochastic linear, nonlinear, geometric, and dynamic programming techniques. [11].
g- Integer programming
In all the optimization techniques considered so far, the design variables are assumed to be continuous and able to take any real value. In many situations it is entirely appropriate and possible to have fractional solutions. However, in many engineering systems certain design variables can only take discrete values, and there are practical problems in which fractional values of the design variables are neither practical nor physically meaningful. For example, it is not possible to use 1.6 boilers in a thermal power station, 1.9 workers in a project, or 2.76 lathes in a machine shop. If an integer solution is desired, it is possible to use any of the optimization techniques and round off the optimum values of the design variables to the nearest integers. However, in many cases it is very difficult to round off the solution without violating some of the constraints. Frequently, the rounding of certain variables requires substantial changes in the values of some other variables to satisfy all the constraints. Further, the rounded-off solution may give a value of the objective function that is very far from the original optimum value. All these difficulties can be avoided if the optimization problem is posed and solved as an integer programming problem. When all the variables are constrained to take only integer values, the problem is called an (all-)integer programming problem. When the variables are restricted to take only discrete values, the problem is called a discrete programming problem. When only some of the variables are restricted to take integer (discrete) values, the problem is called a mixed-integer (discrete) programming problem. When all the design variables are allowed to take on values of either zero or one, the problem is called a zero-one programming problem. Among the several techniques available for solving all-integer and mixed-integer linear programming problems, the cutting plane algorithm and the branch-and-bound algorithm have been quite popular. Although zero-one linear programming problems can be solved by the general cutting plane or branch-and-bound algorithms, an efficient enumerative algorithm for solving these problems has been developed. Very little work has been done in the field of integer nonlinear programming.
The generalized penalty function method and the sequential linear integer (discrete) programming method can be used to solve all-integer and mixed-integer nonlinear programming problems. Integer programming problems may have linear and/or nonlinear objective functions and constraints, and the selection of a suitable solution method depends on the linearity of the problem. The next table shows the classification of integer problems along with the suitable solving techniques. [11]
Table 3-1 (Integer Programming Methods) [11]
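The rounding difficulty described above can be illustrated with a small binary sketch (the knapsack data are assumed): the relaxed LP optimum is fractional, rounding it up violates the constraint, and rounding it down gives a poor objective value, whereas the binary solver bintprog returns a feasible 0-1 optimum directly.

    % Hypothetical 0-1 problem: maximize 12*x1 + 9*x2 + 7*x3 subject to 6*x1 + 5*x2 + 4*x3 <= 10
    f = [-12; -9; -7];                      % both solvers minimize, so the values are negated
    A = [6 5 4];  b = 10;

    % LP relaxation: the optimum here is fractional (x2 is not 0 or 1)
    xr = linprog(f, A, b, [], [], zeros(3,1), ones(3,1));

    % Binary integer solution: feasible and optimal over the 0-1 choices
    xb = bintprog(f, A, b);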
h- Combinatorial optimization
In applied mathematics and theoretical computer science, combinatorial optimization is a topic that consists of finding an optimum object from a finite set of objects. It operates on the domain of those optimization problems in which the set of feasible solutions is discrete, or can be reduced to a discrete one, and in which the goal is to find the best solution. [12]. Combinatorial optimization problems are concerned with the efficient allocation of limited resources to meet a desired objective when some of the resources in question can only be divided into discrete parts. In such cases, the divisibility constraints on these resources, which may be people, machines, or other discrete inputs, may restrict the possible alternatives to a finite set. Nevertheless, there are usually too many alternatives to make complete enumeration a viable option for instances of realistic size.
The most primitive of the combinatorial optimization methods is brute-force search, also called the enumerative algorithm: the simplest approach to solving a combinatorial optimization problem is to enumerate all of its finitely many possibilities (as long as the problem is bounded). However, due to the "combinatorial explosion" resulting from the fact that the size of the solution set S is generally exponential in the number of variables, only the smallest instances can be solved by such an approach. A more efficient approach is to enumerate the possibilities only implicitly, by eliminating large classes of solutions using domination or feasibility arguments. In combinatorial problems the solution (if it exists) is an element of a set of combinatorial objects: permutations, combinations, or subsets. The brute-force approach consists in generating the combinatorial objects and testing each object to see whether it satisfies the specified constraints. [13]. The most suitable technique for transformer design is combinatorial optimization, since the design optimization is actually the selection of a set of variables from given standard or stored choices. [14]
3-5 Brute Force Search
a- Definition
Brute force, also known as exhaustive search, is a paradigm in computer science where all possible cases for deriving a problem's solutions are explored. [8]. It is a straightforward approach, usually based directly on the problem's statement and the definitions of the concepts involved. It is a method of computation wherein all permutations of a problem are tried until one is found that provides a solution, in contrast to the implementation of a more intelligent algorithm. [15]. Brute-force search describes a primitive programming style, one in which the programmer relies on the computer's processing power instead of using his or her own intelligence to simplify the problem. [15]. Whether brute-force programming should actually be considered stupid or not depends on the context; if the problem is not terribly big, the extra CPU time spent on a brute-force solution may cost less than the programmer time it would take to develop a more 'intelligent' algorithm. [15]. Brute force is a very fundamental approach to problem solving, where every perceivable answer is tested for correctness. [13]. The most general search algorithms are brute-force searches, since they do not require any domain-specific knowledge; brute-force search is therefore also called uninformed search or blind search. Search methods are suitable for finding all possible solutions, and selecting one of them afterwards according to any desired criterion is good optimization practice.
b- Method: [12]
- Generate a list of all potential solutions to the problem in a systematic manner.
- Evaluate potential solutions one by one, disqualifying infeasible ones and, for an optimization problem, keeping track of the best one found so far.
- When the search ends, announce the solution(s) found.
Exhaustive search offers an easily designed but long-running approach: exhaustive-search algorithms run in a realistic amount of time only on very small instances. Nevertheless, in many cases exhaustive search, or a variation of it, is the only known way to obtain an exact solution.
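A minimal sketch of the three steps above, for a hypothetical discrete design problem with three variables, each restricted to a short list of standard values (the objective and constraint are assumptions for illustration):

    % Hypothetical standard choices for three discrete design variables
    v1 = [10 20 30];  v2 = [1.5 2.0 2.5];  v3 = [100 200];

    best = inf;  xbest = [];
    for a = v1                              % generate every combination systematically
        for b = v2
            for c = v3
                if a*b + c <= 260           % assumed feasibility (constraint) check
                    cost = 0.5*a*b + 0.01*c^1.2;    % assumed objective (cost) function
                    if cost < best          % keep track of the best feasible solution so far
                        best = cost;  xbest = [a b c];
                    end
                end
            end
        end
    end
    xbest, best                             % announce the solution found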
c- Strengths: [12]
- Wide applicability,
- Simplicity, and
- Yields reasonable algorithms for some important problems.
d- Weaknesses: [12]
- Rarely yields efficient algorithms,
- Some brute-force algorithms are unacceptably slow, and
- Not as constructive as some other design techniques.
e- Efficiency and Time: [12]
Brute force is a guaranteed way to find the correct solution to a problem because it tests every possible candidate answer. Given the speed of modern computers, this paradigm may be the optimum method for very small data sets; for larger problems, however, the running time can grow to an unacceptable order of growth. In real-world applications, brute force is in many situations not a valid approach for problems with huge data, as it would literally take many years to test all the combinations of values. The problem with all brute-force search algorithms is that their time complexities grow exponentially with problem size; for example, a design with 10 discrete variables, each restricted to 20 standard values, already has 20^10 ≈ 10^13 candidate combinations. This phenomenon is called combinatorial explosion, and it limits the size of problems that can be solved with brute-force search techniques.
f- Simplicity vs. Efficiency: [12]
The brute force paradigm is fundamental to algorithmic analysis. Although it is simple and very easy to follow, its simplicity comes at the price of efficiency. It is notorious for its inefficiency and is considered the lower limit when comparing other solution methods or sorting procedures; therefore, the brute force method is considered the "baseline" of all algorithmic paradigms. Brute force tactics generally do not apply any complex heuristics (decision-making) to alter the search scheme or search space; instead, the paradigm relies solely on pure computational power to examine all possible solutions. We should keep in mind that the brute force method will always yield a correct answer, provided that the process is allowed to execute in its entirety. [13]. To illustrate the concept of brute-force search, a tree of possibilities is built, as in the following example in fig 3-2, which shows how all the combinations are produced and then tested.
fig 3-2 (flow chart of general algorithm of Brute Force Search)
3-6 Optimization tools in MATLAB
There are two toolboxes provided with the MATLAB software that deal with optimization problems: [15]
- Optimization Toolbox.
- Global Optimization Toolbox.
- TOMLAB: a separate commercial optimization toolbox that works in the MATLAB environment.
1- Optimization toolbox
There are four general categories of Optimization Toolbox solvers:
a- Minimizers: This group of solvers attempts to find a local minimum of the objective function near a starting point x0. They address problems of unconstrained optimization, linear programming, quadratic programming, and general nonlinear programming.
b- Multi-objective minimizers: This group of solvers attempts to either minimize the maximum value of a set of functions (fminimax), or to find a location where a collection of functions is below some pre-specified values (fgoalattain).
c- Equation solvers: This group of solvers attempts to find a solution to a scalar- or vector-valued nonlinear equation f(x) = 0 near a starting point x0. Equation solving can be considered a form of optimization because it is equivalent to finding the minimum norm of f(x) near x0.
d- Least-squares (curve-fitting) solvers: This group of solvers attempts to minimize a sum of squares. This type of problem frequently arises in fitting a model to data. The solvers address problems of finding nonnegative solutions, bounded or linearly constrained solutions, and fitting parameterized nonlinear models to data.
Minimizers
The minimizers in the MATLAB Optimization Toolbox are:
a- fmincon: Find minimum of constrained nonlinear multivariable function.
b- linprog: Solve linear programming problems.
c- bintprog: Solve binary integer linear programming problems.
d- quadprog: Quadratic programming.
e- fminsearch: Find minimum of unconstrained multivariable function using derivative-free method.
f- fminunc: Find minimum of unconstrained multivariable function.
g- fseminf: Find minimum of semi-infinitely constrained multivariable nonlinear function.
h- fminbnd: Find minimum of single-variable function on fixed interval.
The following table is designed to help in choosing a solver for minimizer problems. Use the table as follows:
- Identify your objective function as one of the following types:
  - Linear
  - Quadratic
  - Smooth nonlinear
  - Non-smooth
- Identify your constraints as one of five types:
  - None (unconstrained)
  - Bound
  - Linear (including bound)
  - General smooth
  - Discrete (integer)
Use the table to identify a relevant solver.
Table 3-2 (Optimization Decision Table) [16]

Constraint type  | Linear objective               | Quadratic objective | Smooth nonlinear objective | Nonsmooth objective
None             | n/a (f = const, or min = −∞)   | quadprog            | fminsearch, fminunc        | fminsearch, *
Bound            | linprog                        | quadprog            | fminbnd, fmincon, fseminf  | fminbnd, *
Linear           | linprog                        | quadprog            | fmincon, fseminf           | *
General smooth   | fmincon                        | fmincon             | fmincon, fseminf           | *
Discrete         | bintprog, *                    | *                   | *                          | *
* means relevant solvers are found in Global Optimization Toolbox functions (licensed separately from Optimization Toolbox solvers).
- fmincon applies to most smooth objective functions with smooth constraints. It is not listed as a preferred solver for least-squares or linear or quadratic programming because the listed solvers are usually more efficient.
- The table has suggested functions, but it is not meant to unduly restrict your choices.
- The Global Optimization Toolbox ga function can address mixed-integer programming problems.
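For instance, reading the table for a smooth nonlinear objective with bound and general smooth constraints leads to fmincon; a minimal sketch, with an assumed objective and constraint, is:

    % Assumed smooth objective and a single smooth inequality constraint c(x) <= 0
    obj    = @(x) x(1)^2 + x(2)^2;
    nonlin = @(x) deal(1 - x(1)*x(2), []);   % returns [c, ceq]; no equality constraints
    x0 = [2; 2];  lb = [0; 0];  ub = [10; 10];
    [x, fval] = fmincon(obj, x0, [], [], [], [], lb, ub, nonlin);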
2- Global Optimization toolbox
Global Optimization Toolbox provides methods that search for global solutions to problems that contain multiple maxima or minima. It includes global search, multi-start, pattern search, genetic algorithm, and simulated annealing solvers. You can use these solvers to solve optimization problems where the objective or constraint function is continuous, discontinuous, stochastic, does not possess derivatives, or includes simulations or black-box functions with undefined values for some parameter settings. There are five Global Optimization Toolbox solvers:
a- GlobalSearch and MultiStart
b- ga (Genetic Algorithm)
c- patternsearch, also called direct search
d- simulannealbnd (Simulated Annealing)
Choose an optimizer based on problem characteristics and on the type of solution you want. The Solver Characteristics section contains more information that can help you decide which solver is likely to be most suitable. From the next table, the following are observed:
- The only solver in the MATLAB optimization toolboxes that solves discrete programming problems with a nonlinear objective function and/or constraints is ga (the genetic algorithm).
- ga can solve problems in which certain variables are integer-valued.
- With suitable constraints, pattern search can also be used in discrete optimization.
Table 3-3 (decision table of Global Optimization Toolbox) [16]

Desired Solution                                   | Smooth Objective and Constraints                             | Nonsmooth Objective or Constraints
Single local solution                              | Optimization Toolbox functions                               | fminbnd, patternsearch, fminsearch, simulannealbnd
Multiple local solutions                           | GlobalSearch, MultiStart                                     | -
Single global solution                             | GlobalSearch, MultiStart, patternsearch, ga, simulannealbnd  | patternsearch, ga, simulannealbnd
Single local solution using parallel processing    | MultiStart, Optimization Toolbox functions                   | patternsearch, ga
Multiple local solutions using parallel processing | MultiStart                                                   | -
Single global solution using parallel processing   | MultiStart, ga                                               | patternsearch, ga
A- GlobalSearch and MultiStart [14]
The GlobalSearch and MultiStart solvers apply to problems with smooth objective and constraint functions. The solvers search for a global minimum, or for a set of local minima. GlobalSearch and MultiStart work by starting a local solver, such as fmincon, from a variety of start points.
Multiple runs of a local solver: GlobalSearch and MultiStart have similar approaches to finding global or multiple minima. Both algorithms start a local solver (such as fmincon) from multiple start points, and use the multiple start points to sample multiple basins of attraction. The main differences between GlobalSearch and MultiStart are: [16]
- GlobalSearch uses a scatter-search mechanism for generating start points; MultiStart uses uniformly distributed start points within bounds, or user-supplied start points.
- GlobalSearch analyzes start points and rejects those that are unlikely to improve the best local minimum found so far; MultiStart runs all start points (or, optionally, all start points that are feasible with respect to bounds or inequality constraints).
- MultiStart gives a choice of local solver: fmincon, fminunc, lsqcurvefit, or lsqnonlin; the GlobalSearch algorithm uses fmincon.
- MultiStart can run in parallel, distributing start points to multiple processors for local solution.
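A minimal sketch of the typical calling pattern (the multimodal objective, start point and bounds are assumed for illustration):

    % Assumed multimodal objective with several local minima
    fun = @(x) x(1)^2 + x(2)^2 + 3*sin(5*x(1)) + 2*cos(4*x(2));
    problem = createOptimProblem('fmincon', 'objective', fun, ...
                                 'x0', [1; 1], 'lb', [-5; -5], 'ub', [5; 5]);

    gs = GlobalSearch;                 % start points generated by scatter search
    [xg, fg] = run(gs, problem);

    ms = MultiStart;                   % uniformly distributed start points within the bounds
    [xm, fm] = run(ms, problem, 20);   % run the local solver from 20 start points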
B- Genetic algorithm [11]
Many practical optimum design problems are characterized by mixed continuous-discrete variables and by discontinuous and non-convex design spaces. If standard nonlinear programming techniques are used for this type of problem, they will be inefficient and computationally expensive, and in most cases they will find a relative optimum that is closest to the starting point. Genetic algorithms (GAs) are well suited for solving such problems, and in most cases they can find the global optimum solution with a high probability. Philosophically, GAs are based on Darwin's theory of survival of the fittest. Genetic algorithms are based on the principles of natural genetics and natural selection. The basic elements of natural genetics (reproduction, crossover, and mutation) are used in the genetic search procedure. GAs differ from the traditional methods of optimization in the following respects:
a- A population of points (trial design vectors) is used for starting the procedure instead of a single design point. If the number of design variables is n, the size of the population is usually taken as 2n to 4n. Since several points are used as candidate solutions, GAs are less likely to get trapped at a local optimum.
b- GAs use only the values of the objective function. Derivatives are not used in the search procedure.
c- In GAs the design variables are represented as strings of binary variables that correspond to the chromosomes in natural genetics. Thus the search method is naturally applicable to solving discrete and integer programming problems. For continuous design variables, the string length can be varied to achieve any desired resolution.
d- The objective function value corresponding to a design vector plays the role of fitness in natural genetics.
e- In every new generation, a new set of strings is produced by using randomized parent selection and crossover from the old generation (old set of strings). Although randomized, GAs are not simple random search techniques; they efficiently explore new combinations with the available knowledge to find a new generation with better fitness or objective function value.
1- Genetic Algorithm terminology in MATLAB [16]
a- Fitness Functions:
The fitness function is the function you want to optimize. For standard optimization algorithms, this is known as the objective function. The toolbox software tries to find the minimum of the fitness function. Write the fitness function as a file or anonymous function, and pass it as a function handle input argument to the main genetic algorithm function. b- Individuals: An individual is any point to which you can apply the fitness function. The value of the fitness function for an individual is its score. An individual is sometimes referred to as a genome and the vector entries of an individual as genes. c- Populations and Generations: A population is an array of individuals. For example, if the size of the population is 100 and the number of variables in the fitness function is 3, you represent the population by a 100-by-3 matrix. The same individual can appear more than once in the population. At each iteration, the genetic algorithm performs a series of computations on the current population to produce a new population. Each successive population is called a new generation. d- Diversity: Diversity refers to the average distance between individuals in a population. A population has high diversity if the average distance is large; otherwise it has low diversity. Diversity is essential to the genetic algorithm because it enables the algorithm to search a larger region of the space. e- Fitness Values and Best Fitness Values: The fitness value of an individual is the value of the fitness function for that individual. Because the toolbox software finds the minimum of the fitness function, the best fitness value for a population is the smallest fitness value for any individual in the population. f- Parents and Children: To create the next generation, the genetic algorithm selects certain individuals in the current population, called parents, and uses them to create individuals in the next generation, called children. Typically, the algorithm is more likely to select parents that have better fitness values.
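As a small illustration of items (a) and (c) above (the function itself is assumed), a fitness function can be written as an anonymous function and passed as a handle, and a population of 100 individuals with 3 variables is simply a 100-by-3 matrix:

    fitnessfcn = @(x) (x(1) - 2)^2 + (x(2) + 1)^2 + x(3)^2;   % assumed fitness function
    score      = fitnessfcn([2 -1 0]);                        % score of one individual (here 0)
    population = rand(100, 3);                                % a 100-by-3 population matrix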
2- Outline of the GA Algorithm in MATLAB [16]
The following outline summarizes how the genetic algorithm works:
- The algorithm begins by creating a random initial population.
- The algorithm then creates a sequence of new populations. At each step, the algorithm uses the individuals in the current generation to create the next population. To create the new population, the algorithm performs the following steps:
  a- Scores each member of the current population by computing its fitness value.
  b- Scales the raw fitness scores to convert them into a more usable range of values.
  c- Selects members, called parents, based on their fitness.
  d- Some of the individuals in the current population that have lower fitness are chosen as elite. These elite individuals are passed to the next population.
  e- Produces children from the parents. Children are produced either by making random changes to a single parent (mutation) or by combining the vector entries of a pair of parents (crossover).
  f- Replaces the current population with the children to form the next generation.
- The algorithm stops when one of the stopping criteria is met (see Stopping Conditions for the Algorithm).
3- Characteristics of the Integer GA Solver in MATLAB [16]
There are some restrictions on the types of problems that ga can solve when you include integer constraints:
a- No linear equality constraints. You must have Aeq = [] and beq = []. For a possible workaround, see No Equality Constraints.
b- No nonlinear equality constraints. Any nonlinear constraint function must return [] for the nonlinear equality constraint. For a possible workaround, see Example: Integer Programming with a Nonlinear Equality Constraint.
c- Only double vector population type.
d- No custom creation function (CreationFcn option), crossover function (CrossoverFcn option), mutation function (MutationFcn option), or initial scores (InitialScores option). If you supply any of these, ga overrides their settings.
e- ga uses only the binary tournament selection function (SelectionFcn option), and overrides any other setting.
f- No hybrid function. ga overrides any setting of the HybridFcn option.
g- ga ignores the ParetoFraction, DistanceMeasureFcn, InitialPenalty, and PenaltyFactor options.
The listed restrictions are mainly natural, not arbitrary. For example:
- There are no hybrid functions that support integer constraints, so ga does not use hybrid functions when there are integer constraints.
- To obtain integer variables, ga uses special creation, crossover, and mutation functions.
Calling GA in MATLAB [16]
[x,fval] = ga(fitnessfcn,nvars,A,b,Aeq,beq,LB,UB,nonlcon, ...
              IntCon,options);
where:
- x: the design variables that give the optimum found.
- fval: the optimum value of the objective function at the returned x.
- fitnessfcn: handle to the fitness function.
- nvars: positive integer representing the number of variables in the problem.
- A: matrix for linear inequality constraints of the form A x ≤ b.
- b: vector for linear inequality constraints of the form A x ≤ b.
- Aeq: matrix for linear equality constraints of the form Aeq x = beq.
- beq: vector for linear equality constraints of the form Aeq x = beq.
- LB: vector of lower bounds.
- UB: vector of upper bounds.
- nonlcon: function handle that returns two outputs: [c,ceq] = nonlcon(x).
- options: structure containing optimization options.
- IntCon: vector of positive integers taking values from 1 to nvars. Each value in IntCon represents an x component that is integer-valued.
- If IntCon is used, then Aeq, beq and ceq must be [].
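A minimal usage sketch of the call above, for an assumed two-variable problem in which the first variable is integer-valued; note that Aeq, beq and the nonlinear constraint argument are left empty, as required when IntCon is used.

    fitnessfcn = @(x) (x(1) - 3.2)^2 + (x(2) - 1.5)^2;   % assumed fitness function
    nvars = 2;
    A = [1 1];  b = 6;                                   % linear inequality A*x <= b
    LB = [0 0];  UB = [5 5];
    IntCon = 1;                                          % x(1) must take integer values
    options = gaoptimset('PopulationSize', 40, 'Generations', 100);
    [x, fval] = ga(fitnessfcn, nvars, A, b, [], [], LB, UB, [], IntCon, options);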
C- Pattern search (direct search) algorithm [11]
Direct search is a method for solving optimization problems that does not require any information about the gradient of the objective function. Unlike more traditional optimization methods that use information about the gradient or higher derivatives to search for an optimum point, a direct search algorithm searches a set of points around the
current point, looking for one where the value of the objective function is lower than the value at the current point. You can use direct search to solve problems for which the objective function is not differentiable, or is not even continuous. Pattern search, although much less well known, is an attractive alternative to the genetic algorithm as it is often computationally less expensive and can minimize the same types of functions. Pattern search operates by searching a set of points called a pattern, which expands or shrinks depending on whether any point within the pattern has a lower objective function value than the current point. The search stops after a minimum pattern size is reached. Like the genetic algorithm, the pattern search algorithm does not use derivatives to determine descent, and so works well on non-differentiable, stochastic, and discontinuous objective functions. And similar to the genetic algorithm, pattern search is often very effective at finding a global minimum because of the nature of its search. Global Optimization Toolbox functions include three direct search algorithms called the generalized pattern search (GPS) algorithm, the generating set search (GSS) algorithm, and the mesh adaptive search (MADS) algorithm. All are pattern search algorithms that compute a sequence of points that approach an optimum point. At each step, the algorithm searches a set of points, called a mesh, around the current point (the point computed at the previous step of the algorithm). The mesh is formed by adding the current point to a scalar multiple of a set of vectors called a pattern. If the pattern search algorithm finds a point in the mesh that improves the objective function at the current point, the new point becomes the current point at the next step of the algorithm. The GPS algorithm uses fixed direction vectors. The GSS algorithm is identical to the GPS algorithm, except when there are linear constraints and when the current point is near a linear constraint boundary. The MADS algorithm uses a random selection of vectors to define the mesh. The main disadvantage of the pattern search method is its need to be provided with a feasible initial point, which is sometimes not available. Another disadvantage is that, when applied to discontinuous, non-smooth problems, it tends to find the nearest local minimum.
1- Pattern Search terminology in MATLAB [16]
a- Patterns: A pattern is a set of vectors {vi} that the pattern search algorithm uses to determine which points to search at each iteration. The set {vi} is defined by the number of independent variables in the objective function, N, and the positive basis set. Two
commonly used positive basis sets in pattern search algorithms are the maximal basis, with 2N vectors, and the minimal basis, with N+1 vectors.
b- Meshes: At each step, pattern search searches a set of points, called a mesh, for a point that improves the objective function. Pattern search forms the mesh by:
- Generating a set of vectors {di} by multiplying each pattern vector vi by a scalar Δm (Δm is called the mesh size).
- Adding the {di} to the current point, i.e. the point with the best objective function value found at the previous step.
c- Polling: At each step, the algorithm polls the points in the current mesh by computing their objective function values. When the Complete poll option has the (default) setting Off, the algorithm stops polling the mesh points as soon as it finds a point whose objective function value is less than that of the current point. If this occurs, the poll is called successful and the point it finds becomes the current point at the next iteration. The algorithm only computes the mesh points and their objective function values up to the point at which it stops the poll. If the algorithm fails to find a point that improves the objective function, the poll is called unsuccessful and the current point stays the same at the next iteration. When the Complete poll option has the setting On, the algorithm computes the objective function values at all mesh points. The algorithm then compares the mesh point with the smallest objective function value to the current point. If that mesh point has a smaller value than the current point, the poll is successful. d- Expanding and Contracting: After polling, the algorithm changes the value of the mesh size Δm. The default is to multiply Δm by 2 after a successful poll, and by 0.5 after an unsuccessful poll.
e- Calling Pattern Search in MATLAB [16]
[x,fval] = patternsearch(fun,x0,A,b,Aeq,beq,LB,UB,nonlcon, ...
                         options)
where:
- x: the design variables that give the optimum found.
- fval: the optimum value of the objective function at the returned x.
- fun: handle to the objective function.
- x0: initial point of the search.
- A: matrix for linear inequality constraints of the form A x ≤ b.
- b: vector for linear inequality constraints of the form A x ≤ b.
- Aeq: matrix for linear equality constraints of the form Aeq x = beq.
- beq: vector for linear equality constraints of the form Aeq x = beq.
- LB: vector of lower bounds.
- UB: vector of upper bounds.
- nonlcon: function handle that returns two outputs: [c,ceq] = nonlcon(x).
- options: structure containing optimization options.
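A minimal usage sketch with an assumed non-smooth objective, a feasible initial point and bounds; the complete polling and the mesh expansion/contraction factors discussed above are set through psoptimset:

    fun = @(x) abs(x(1) - 1) + abs(x(2) + 2);      % assumed non-smooth objective
    x0  = [0; 0];                                  % feasible initial point (required)
    LB  = [-5; -5];  UB = [5; 5];
    options = psoptimset('CompletePoll', 'on', ...
                         'MeshExpansion', 2, 'MeshContraction', 0.5);
    [x, fval] = patternsearch(fun, x0, [], [], [], [], LB, UB, [], options);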
D- Simulated annealing [11]
Simulated annealing is a method for solving unconstrained and bound-constrained optimization problems. The method models the physical process of heating a material and then slowly lowering the temperature to decrease defects, thus minimizing the system energy. At each iteration of the simulated annealing algorithm, a new point is randomly generated. The distance of the new point from the current point, or the extent of the search, is based on a probability distribution with a scale proportional to the temperature. The algorithm accepts all new points that lower the objective, but also, with a certain probability, points that raise the objective. By accepting points that raise the objective, the algorithm avoids being trapped in local minima, and is able to explore globally for more possible solutions. An annealing schedule is selected to systematically decrease the temperature as the algorithm proceeds. As the temperature decreases, the algorithm reduces the extent of its search to converge to a minimum.
1- Simulated annealing terminology in MATLAB [16]
a- Objective Function: The objective function is the function you want to optimize. Global Optimization Toolbox algorithms attempt to find the minimum of the objective function. Write the objective function as a file or anonymous function, and pass it to the solver as a function handle.
b- Temperature: The temperature is the control parameter in simulated annealing that is decreased gradually as the algorithm proceeds. It determines the probability of accepting a worse solution at any step and is used to limit the extent of the search in a given dimension. You can specify the initial temperature as an integer in the InitialTemperature option, and the annealing schedule as a function in the TemperatureFcn option.
c- Annealing Schedule: The annealing schedule is the rate by which the temperature is decreased as the algorithm proceeds. The slower the rate of decrease, the better the chances are of finding an optimum solution, but the longer the run time. You can specify the temperature schedule as a function handle with the TemperatureFcn option.
d- Re-annealing: Annealing is the technique of closely controlling the temperature when cooling a material to ensure that it is brought to an optimum state. Re-annealing raises the temperature after a certain number of new points have been accepted, and starts the search again at the higher temperature. Re-annealing avoids getting caught at local minima. You specify the re-annealing schedule with the ReannealInterval option.
2- Outline of the Simulated annealing Algorithm in MATLAB [16]
The following is an outline of the steps performed by the simulated annealing algorithm:
- The algorithm begins by randomly generating a new point. The distance of the new point from the current point, or the extent of the search, is determined by a probability distribution with a scale proportional to the current temperature.
- The algorithm determines whether the new point is better or worse than the current point. If the new point is better than the current point, it becomes the next point. If the new point is worse than the current point, the algorithm may still make it the next point. The algorithm accepts a worse point based on an acceptance probability.
- The algorithm systematically lowers the temperature, storing the best point found so far.
- Re-annealing is performed after a certain number of points (ReannealInterval) are accepted by the solver. Re-annealing raises the temperature in each dimension, depending on sensitivity information. The search is resumed with the new temperature values.
- The algorithm stops when the average change in the objective function is very small, or when any other stopping criterion is met.
3- Calling Simulated annealing in MATLAB [16]
[x, fval] = simulannealbnd(fun,x0,lb,ub,options)
where:
- x: the design variables that give the optimum found.
- fval: the optimum value of the objective function at the returned x.
- fun: objective function.
- x0: initial point of the search.
- lb: lower bound on x.
- ub: upper bound on x.
- options: structure containing optimization options.
Note that simulated annealing can deal only with problems that have no constraints other than upper and lower bounds.
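A minimal usage sketch with an assumed objective and bounds; the initial temperature and the re-annealing interval discussed earlier are set through saoptimset:

    fun = @(x) x(1)^2 + x(2)^2 + 10*sin(3*x(1));   % assumed objective with local minima
    x0  = [2; 2];  lb = [-5; -5];  ub = [5; 5];
    options = saoptimset('InitialTemperature', 100, 'ReannealInterval', 50);
    [x, fval] = simulannealbnd(fun, x0, lb, ub, options);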
f- TOMLAB optimization toolbox
The TOMLAB Optimization Environment is a powerful optimization and modeling package for solving applied optimization problems in MATLAB. [17]. It is a commercial toolbox that runs in the MATLAB environment and is not included with the MATLAB package; the solvers that can be used depend on what the user is licensed for. A free trial version is used in this study. The programming commands are shared with the MATLAB environment. A quick user guide is provided with the trial version, which is just enough to use the toolbox. TOMLAB has very specialized algorithms for each problem kind, covering a wide range of problems. TOMLAB handles a wide range of problem types, among them:
- Linear programming
- Quadratic programming
- Nonlinear programming
- Mixed-integer programming
- Mixed-integer quadratic programming with or without convex quadratic constraints
- Mixed-integer nonlinear programming
- Linear and nonlinear least squares with L1, L2 and infinity norm
- Exponential data fitting
- Global optimization
- Semi-definite programming problems with bilinear matrix inequalities
- Constrained goal attainment
- Geometric programming
- Genetic programming
- Costly or expensive black-box global optimization
- Nonlinear complementarity problems
TOMLAB has many solvers in its Base Module, and many more as standalone solvers. The TOMLAB Base Module includes 32 solvers for a wide variety of problem types, plus routines for problem setup and analysis; sparse and dense solvers for global optimization, linear/nonlinear least squares and nonlinear programming are also included. The optimization solvers in the Base Module include general global optimization solvers, general local optimization solvers, and specialized optimization solvers (binary, mixed-integer, quadratic, etc.). Here we are concerned with general global optimization techniques, preferring solvers with mixed-integer capability. The global optimization techniques in the Base Module are:
a- glbSolve: the DIRECT global optimization algorithm by Don Jones et al. It solves bounded and UNCONSTRAINED global optimization problems.
b- glbFast: a modified glbSolve DIRECT algorithm in a faster Fortran version.
c- glcSolve: solves CONSTRAINED mixed-integer global optimization problems.
d- glcFast: a modified glcSolve constrained DIRECT algorithm in a faster Fortran version.
e- glcCluster: a solver suite, a combination of solvers; a hybrid of glcFast, a clustering algorithm, and a local solver.
In addition, many standalone optimization algorithms come with TOMLAB; also called MEX solvers, they are outside the Base Module and act as alternatives to the Base Module solvers. These standalone solvers cover a wide range of problem types; among them are some global optimization algorithms:
a- glcDirect: similar to glcSolve, programmed to be faster than the usual glcSolve.
b- glbDirect: similar to glbSolve, programmed to be faster than the usual glbSolve.
c- OQNLP:
It solves constrained nonlinear mixed-integer problems. It is a multistart heuristic algorithm designed to find global optima of smooth constrained nonlinear programs and mixed-integer nonlinear programs.
d- GENO: the TOMLAB GENO multiobjective genetic solver. GENO solves general constrained mixed-integer single- and multi-objective optimization problems using a genetic algorithm.
2- Calling TOMLAB solvers in MATLAB
TOMLAB solvers are called with:
Result = tomRunFast(Solver, Prob);
where:
- tomRun assumes the first argument is a string.
- tomRun assumes the second argument is a structure.
- Solver: the name of the solver that should be used to optimize the problem.
- Prob: problem structure.
- Result: structure with optimization results.
3- Assigning a problem in TOMLAB
There are many utilities for assigning a problem; choosing one of them depends on the problem itself. The following list contains those utilities:
- probAssign: Set up a Prob structure for a problem of a certain problem type
- lpAssign: Define a Linear Programming problem
- lpconAssign: Define a nonlinearly constrained LP problem
- qpAssign: Define a Quadratic Programming problem
- qpconAssign: Define a nonlinearly constrained QP problem
- conAssign: Define a NonLinear Programming problem (constrained or not)
- mipAssign: Define a mixed-integer programming problem
- miqpAssign: Define a mixed-integer quadratic programming problem
- miqqAssign: Define a mixed-integer quadratic programming problem with quadratic constraints
- clsAssign: Define a nonlinear least squares problem (constrained or not)
- llsAssign: Define a linear least squares problem (constrained or not)
- glcAssign: Define a global optimization problem (constrained or not)
- sdpAssign: Define a semidefinite program
- bmiAssign: Define a bilinear semidefinite program
- minlpAssign: Define a mixed-integer nonlinear (MINLP) program
- simAssign: Both the function and the constraints are computed
- expAssign: Define an exponential fitting problem
For our problems, which include integer variables, it is convenient to use glcAssign, which is a direct way of setting up a global mixed-integer programming problem. The information is put into the TOMLAB input problem structure Prob, Prob = glcAssign(...), and it is then possible to solve the glc problem using a TOMLAB glc solver.
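Based only on the calling forms quoted in this section, a usage sketch might look as follows; the objective file name, bounds, integer variables and the choice of the glcDirect solver are assumptions for illustration, and the Result field names follow the usual TOMLAB convention.

    % 'objfun' is an assumed user-supplied file objfun.m returning f(x) for the Prob structure
    x_L = [1; 1];  x_U = [10; 10];                 % finite bounds are required
    IntVars = [1 2];                               % both variables are integer-valued
    Prob = glcAssign('objfun', x_L, x_U, 'Sketch', ...
                     [], [], [], [], [], [], [], IntVars);
    Result = tomRunFast('glcDirect', Prob);        % or tomRun, depending on the license
    x_opt = Result.x_k;  f_opt = Result.f_k;       % assumed standard TOMLAB result fields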
Syntax of glcAssign:
function Prob = glcAssign(f, x_L, x_U, Name, A, b_L, b_U, c, c_L, c_U, ...
                          x_0, IntVars, VarWeight, fIP, xIP, fLowBnd, ...
                          x_min, x_max, f_opt, x_opt);
- f: Name of the objective function f(x)
- x_L: Lower bounds on x; finite bounds must be given.
- x_U: Upper bounds on x; finite bounds must be given.
- Name: The name of the problem (string)
The rest of the input parameters are optional:
- A: The linear constraint matrix
- b_L: The lower bounds for the linear constraints, b_L