An Original Geometric Programming Problem Algorithm to Solve Two Coefficients Sensitivity Analysis

Problem statement: Dinkel and Kochenberger developed a sensitivity procedure for Posynomial Geometric Programming Problems based on making a small change in one coefficient. Approach: This study presented an original algorithm for finding the ranging analysis while studying the effect of perturbations in the original coefficients without re-solving the problem; the proposed procedure was applied to two coefficients simultaneously. We also developed one of the incremental strategies in order to make suitable comparisons. Results: Comparisons were made between the results gained from the sensitivity analysis approach and the incremental analysis approach. Conclusion: For the standard Geometric Programming Problem we obtained, for the first time, an original algorithm that changes two coefficients in the objective function simultaneously.


INTRODUCTION
This study deals with sensitivity analysis in the case of less-than type inequalities. Techniques designed to study the effects of small changes in the input parameters on the optimal solution of an optimization problem, without having to solve the entire problem again and again, are known in the literature as sensitivity analysis techniques [1]. Dinkel and Kochenberger studied the effect of changing coefficients separately on the optimal solution [2,4].

MATERIALS AND METHODS
The mathematical formulation of the sensitivity analysis for posynomials (polynomials with positive coefficients) is discussed in the research of Dinkel et al. [3] as follows:

Theorem 1: Suppose that the primal geometric program has d > 0 and rank(aij) = m. If the solution to the dual geometric program has δ* > 0 and if the Jacobian matrix J(δ) with components is:

The major restriction of this result, from an applications point of view, is that there are no inactive primal constraints at the optimal solution (δi* > 0 for all i). We therefore assume that the problem has been reformulated, if necessary, to meet this restriction.
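For reference, and in notation of our own choosing rather than the paper's numbered equations, the standard posynomial geometric program and its dual take the form

    \min_{x>0}\; g_0(x)=\sum_{i=1}^{n_0} c_i \prod_{j=1}^{m} x_j^{a_{ij}}
    \quad\text{s.t.}\quad g_k(x)=\sum_{i\in[k]} c_i \prod_{j=1}^{m} x_j^{a_{ij}} \le 1,\;\; k=1,\dots,p,

    \max_{\delta\ge 0}\; v(\delta)=\prod_{i=1}^{n}\Big(\tfrac{c_i}{\delta_i}\Big)^{\delta_i}\prod_{k=1}^{p}\lambda_k(\delta)^{\lambda_k(\delta)}
    \quad\text{s.t.}\quad \sum_{i=1}^{n_0}\delta_i=1,\qquad \sum_{i=1}^{n} a_{ij}\,\delta_i=0,\;\; j=1,\dots,m,

with \lambda_k(\delta)=\sum_{i\in[k]}\delta_i and degree of difficulty d = n - m - 1 (n terms, m variables). This is the setting in which Theorem 1 and the example below (d = 9-6-1 = 2) are stated.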
For differential changes dci that maintain the positivity conditions on all dual variables, the new dual solution is estimated as: where dδi is given by (3). Once the dual solution is known, the estimate of the new primal solution is computed as: where no is the number of terms in the primal objective function.
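As a sketch in standard GP-duality notation (not a verbatim reproduction of the paper's equation (3) or of its primal-recovery formula), the estimates referred to above typically read

    \delta_i^{\text{new}} \;\approx\; \delta_i^{*} + d\delta_i, \qquad i=1,\dots,n,

    c_i \prod_{j=1}^{m} x_j^{a_{ij}} \;=\; \delta_i\, v(\delta), \quad i=1,\dots,n_0,
    \qquad
    c_i \prod_{j=1}^{m} x_j^{a_{ij}} \;=\; \frac{\delta_i}{\lambda_k(\delta)}, \quad i\in[k],

evaluated at the updated dual solution, while the change in the optimal value itself follows from the standard relation d(\ln v) = \sum_i \delta_i\, d(\ln c_i).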
Theorem 2: Suppose the primal GPP has d > 0 and let b(j), j = 1,…, m, be linearly independent. If the submatrix with components bi(j), i = 1,…, no and j = 1,…, d, has rank d, then J(δ), given by (1), is nonsingular for each δ > 0.

Since we are interested in changes other than differential ones, we will interpret dν/ν and dci/ci as rates of change [3]. That is:

An original GPP algorithm: Before making some observations on the new procedure, let us consider the outline of this algorithm:

Step 1: Put δi + dδi = 0 as equations of the two variables ∆1 and ∆2, where i = 1, 2,…, n and n is the number of dual variables.

Step 2: Calculate the cofactors of ∆1 and ∆2 in the equations obtained in Step 1; we note that the sign of the cofactor of ∆1 is opposite to the sign of the cofactor of ∆2 for each i = 1, 2,…, n.
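In Step 1, each dual positivity condition δi + dδi ≥ 0 has as its boundary a straight line in the (∆1, ∆2) plane. Writing the cofactor of ∆1 in the i-th condition as αi and that of ∆2 as βi (our notation for the per-term quantities computed in Step 2), the equations of Step 1 read

    \delta_i^{*} + d\delta_i \;=\; \delta_i^{*} + \alpha_i\,\Delta_1 + \beta_i\,\Delta_2 \;=\; 0, \qquad i=1,\dots,n,

with sign(αi) = -sign(βi) for every i, which is the sign observation made in Step 2.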
Step 3: Categorize the equations into two groups:
• The first group contains the positive cofactors of ∆1 and the negative cofactors of ∆2
• The second group contains the negative cofactors of ∆1 and the positive cofactors of ∆2

Step 4: From the first group, calculate the lower bound of ∆1 and the upper bound of ∆2, while the upper bound of ∆1 and the lower bound of ∆2 are calculated from the second group.
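A minimal computational sketch of Steps 1-4 is given below, assuming the optimal dual values δi* and the cofactors αi, βi introduced above are already available as arrays. The function name and the grouping logic are ours, following the description above, and are not taken from the paper's Matlab programs.

    import numpy as np

    def group_and_bound(delta_star, alpha, beta):
        """Steps 1-4: build the lines delta_i* + alpha_i*D1 + beta_i*D2 = 0,
        split them into the two sign groups of Step 3 and return the bounds
        of Step 4 (each bound computed with the other increment held at zero)."""
        delta_star, alpha, beta = map(np.asarray, (delta_star, alpha, beta))

        # Step 3: first group  -> positive cofactor of D1, negative cofactor of D2
        #         second group -> negative cofactor of D1, positive cofactor of D2
        g1 = (alpha > 0) & (beta < 0)
        g2 = (alpha < 0) & (beta > 0)

        # Step 4: from group 1, delta_i* + alpha_i*D1 >= 0 gives D1 >= -delta_i*/alpha_i
        # (a lower bound) and delta_i* + beta_i*D2 >= 0 gives D2 <= -delta_i*/beta_i
        # (an upper bound); group 2 gives the opposite bounds.
        d1_lower = np.max(-delta_star[g1] / alpha[g1]) if g1.any() else -np.inf
        d2_upper = np.min(-delta_star[g1] / beta[g1])  if g1.any() else  np.inf
        d1_upper = np.min(-delta_star[g2] / alpha[g2]) if g2.any() else  np.inf
        d2_lower = np.max(-delta_star[g2] / beta[g2])  if g2.any() else -np.inf
        return (d1_lower, d1_upper), (d2_lower, d2_upper)

Note 2(a) below would additionally cap the lower bounds at -c1 and -c2 so that the perturbed coefficients remain positive.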
Step 5: Since our search concerns the range of any two coefficients in the objective function when they are changed simultaneously, any small change in the lower bound of ∆1 will affect the upper bound of ∆2; similarly, the upper bound of ∆1 and the lower bound of ∆2 affect each other. This connection gives us the ability to construct the cross-shape in Fig. 1.
Step 7: Determine the pieces of those lines between the intersection points and study all points on those pieces in order to answer the following key question: at which point on the pieces of the first and second groups do we find max ∆1 with min ∆2 simultaneously?
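One way to explore the question of Step 7 numerically is to sweep ∆2 over its admissible negative range and record, at each value, the largest ∆1 that still keeps every perturbed dual variable non-negative; the trade-off curve traced this way is the coupling described in Step 5. The sketch below reuses the arrays of the previous sketch; it is our reading of the procedure, not code taken from the paper.

    import numpy as np

    def max_d1_given_d2(delta_star, alpha, beta, d2):
        """Largest D1 keeping delta_i* + alpha_i*D1 + beta_i*D2 >= 0 for all i,
        for a fixed D2.  Only the second-group terms (alpha_i < 0) can cap D1
        from above; the first-group terms only push its lower bound down."""
        delta_star, alpha, beta = map(np.asarray, (delta_star, alpha, beta))
        slack = delta_star + beta * d2          # each condition evaluated at D1 = 0
        if np.any(slack < -1e-12):              # this D2 is already inadmissible
            return None
        capping = alpha < 0
        if not capping.any():
            return np.inf
        return float(np.min(slack[capping] / (-alpha[capping])))

    def sweep_tradeoff(delta_star, alpha, beta, d2_grid):
        """Trace the (D2, max admissible D1) trade-off over a grid of D2 values."""
        return [(d2, max_d1_given_d2(delta_star, alpha, beta, d2)) for d2 in d2_grid]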
Step 8: After finding those points, apply the following rule:
• The upper bound on ∆1 is then the minimum of the ∆1 > 0 obtained for those i with (14) < 0 for which (13) is satisfied. If ∆1 < 0, evaluating (13) for those i for which (14) > 0, the lower bound on ∆1 is the maximum such ∆1 [1], taking into account observations (a), (b) and (c) in Note 2.

Step 9: End.

Some theoretical observations:

Note 1:
• If we attempt to change the upper bounds of ∆1 and ∆2 simultaneously, or the lower bounds of ∆1 and ∆2, this will shift the cross-shape right or left respectively. The important point is that, because we consider the change in two coefficients, we obtain a two-dimensional space in which ∆1 is the horizontal axis and ∆2 is the vertical axis; the equations δi + dδi = 0 are straight lines in the (∆1, ∆2) plane.

Note 2:
(a) We suggest that the lower bound on ∆1 should not go below -c1, so that c1 + ∆1 remains positive and the posynomial nature is maintained; the same applies to ∆2 and c2.
(b) We apply the same steps to the bounds of ∆2, replacing (A) by (B).
(c) When changing c1 and c2 simultaneously, the changes must be taken in opposite directions (the cases ∆1 > 0 with ∆2 > 0 and ∆1 < 0 with ∆2 < 0 are outside our ranges, since they contradict the conditions of the problem).

Note 3:
• The above algorithm is originally designed by us; as numerical evidence we give the results in Tables 1-4, which were verified using our programs written in Matlab.
• Changing three coefficients simultaneously would require studying a three-dimensional space; this is beyond the scope of the present research but is a promising field for future study.

Here the degree of difficulty is d = 9-6-1 = 2. The dual objective function is: We have: Let: Evaluating (A) and (B) for i = 1, 2,…, 9, we obtain Table 1.
Substitute these values into (13) and solve the following nine optimization problems: The solutions of these problems are tabulated in Table 2.
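As a usage illustration only, the two sketches given earlier could be combined as follows; the numbers are hypothetical and are not the entries of Tables 1-4.

    import numpy as np

    # Hypothetical data (NOT the paper's Table 1): optimal dual values and the
    # cofactors of D1 and D2 for each dual positivity condition.
    delta_star = [0.40, 0.25, 0.35, 0.60, 0.40]
    alpha      = [ 0.8, -0.5,  0.3, -0.4,  0.6]   # cofactors of D1
    beta       = [-0.6,  0.7, -0.2,  0.5, -0.9]   # cofactors of D2

    (d1_lo, d1_up), (d2_lo, d2_up) = group_and_bound(delta_star, alpha, beta)
    print("one-at-a-time ranges:", (d1_lo, d1_up), (d2_lo, d2_up))

    # Joint ranging: how the admissible D1 shrinks as D2 moves through its range.
    for d2, d1_max in sweep_tradeoff(delta_star, alpha, beta,
                                     np.linspace(0.0, d2_lo, 5)):
        print(f"D2 = {d2:+.3f}  ->  max admissible D1 = {d1_max}")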