Interior-point method for NLP (2015-06-07)
<hr />
<div>Author names: Cindy Chen<br/><br />
Steward: Dajun Yue and Fengqi You <br/><br />
<br />
=Introduction=<br />
The interior point (IP) method for nonlinear programming was pioneered by Anthony V. Fiacco and Garth P. McCormick in the early 1960s. The IP method folds the constraints into the objective function ([http://en.wikipedia.org/wiki/Duality_%28optimization%29 duality]) by creating a barrier function, which confines the iterates to the interior of the feasible region and yields a much more efficient algorithm in terms of time complexity. <br />
<br />
=Algorithm=<br />
[[File:IP_NLP_Ellipse.png|thumb|250px|right|A trajectory of local unconstrained minimizers of the logarithmic barrier function (red).]]<br />
To ensure the iterates remain within the feasible region, a perturbation factor, <math>\mu</math>, is added to "penalize" close approaches to the boundaries. This approach is analogous to an invisible fence that keeps a dog in an unfenced yard: the closer the dog moves to the boundary, the stronger the shock it feels. In the IP method, the strength of the "shock" is determined by <math>\mu</math>. A large value of <math>\mu</math> gives the analytic center of the feasible region. As <math>\mu</math> decreases toward 0, the optimal value is approached by tracing out a central path. With small incremental decreases in <math>\mu</math> at each iteration, a smooth central path is generated. This approach is accurate but time-consuming and computationally intensive. Instead, Newton's method is often used to approximate the central path for nonlinear programming. Using one Newton step per decrease in <math>\mu</math> at each iteration achieves polynomial time complexity, producing a slightly zig-zagging central path that converges to the optimal solution.<br><br><br />
<br />
The logarithmic barrier function is based on the logarithmic interior function:<br />
<br />
<math>B(x, \mu) = f(x) - \mu\sum_{i=1}^m \ln(x_i)</math><br><br />
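To make the barrier idea concrete, here is a minimal sketch in Python on a hypothetical one-variable toy problem (not from the sources above): minimize <math>f(x) = x</math> subject to <math>x \ge 1</math>, using the barrier <math>B(x, \mu) = x - \mu \ln(x - 1)</math> and one Newton step on <math>B</math> per reduction of <math>\mu</math>, exactly as described in the paragraph above.

```python
def barrier_newton(mu0=1.0, x0=2.0, shrink=0.8, iters=80):
    """Log-barrier path following for the toy problem
        minimize f(x) = x   subject to   x >= 1,
    whose constrained optimum is x* = 1.  The barrier function is
        B(x, mu) = x - mu * ln(x - 1),
    and one Newton step is taken on B per reduction of mu."""
    x, mu = x0, mu0
    for _ in range(iters):
        g = 1.0 - mu / (x - 1.0)        # B'(x)
        h = mu / (x - 1.0) ** 2         # B''(x), always positive
        dx = -g / h                     # Newton direction
        # fraction-to-boundary safeguard: keep the iterate strictly
        # inside the feasible region (x > 1)
        if x + dx <= 1.0:
            dx = -0.95 * (x - 1.0)
        x += dx
        mu *= shrink                    # shrink the barrier parameter
    return x

print(barrier_newton())   # close to the constrained optimum x* = 1
```

Each iterate stays strictly interior and tracks the central path <math>x(\mu) = 1 + \mu</math>, so as <math>\mu \to 0</math> the iterates converge to the constrained optimum.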
<br />
=Application=<br />
The IP method for NLP has been commonly used to solve Optimal Power Flow (OPF) problems, in which a set of nonlinear equations is solved to find the optimal operating point of a power network with respect to speed and reliability. To solve these problems, the perturbation factor is used in addition to the typical Karush-Kuhn-Tucker (KKT) conditions. <br />
<br />
<br>Starting with a general optimization problem:<br />
<br />
<math><br />
\begin{align}<br />
\text{min} & ~~ f(x)\\<br />
\text{s.t.} & ~~ h(x) = 0\\<br />
& ~~ g(x) \le 0 \\ <br />
\end{align}<br />
</math><br />
<br />
<br>Modify the KKT conditions by introducing slack variables and the perturbation factor:<br />
<br />
<math>\nabla_x L (x, \lambda_h, \lambda_g)=0</math><br />
<math>h(x) = 0</math><br><br />
<math>g(x) + s = 0</math><br> <br />
<math>[\lambda_g] s - \mu e=0</math><br><br />
<math>(s, \lambda_g, \mu) \ge 0</math><br><br />
<br />
<br>Solve the nonlinear equations iteratively by Newton's method. First, determine <math>\Delta x</math> and <math>\Delta \lambda_h</math> from the reduced linear equations.<br><br />
Next, calculate slack variables and corresponding multipliers with: <br><br />
<math>\Delta s = -g(x) - s - \nabla g(x) \Delta x</math> <br><br />
<math>\Delta \lambda_g = -\lambda_g + [s^{-1}] \left( \mu e - [\lambda_g] \Delta s \right)</math><br><br><br />
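The two update formulas can be checked componentwise in a few lines of Python. The numbers below are illustrative only (not tied to a specific OPF case), and <code>g_dx[i]</code> stands for the i-th component of <math>\nabla g(x) \Delta x</math>:

```python
def slack_multiplier_step(g, g_dx, s, lam, mu):
    """Recover the slack and multiplier updates from the reduced step:
        ds_i   = -g_i - s_i - (grad g(x) dx)_i
        dlam_i = -lam_i + (mu - lam_i * ds_i) / s_i
    All arguments are per-inequality-constraint lists of floats."""
    n = len(g)
    ds = [-g[i] - s[i] - g_dx[i] for i in range(n)]
    dlam = [-lam[i] + (mu - lam[i] * ds[i]) / s[i] for i in range(n)]
    return ds, dlam

# two inequality constraints; s was chosen so that g(x) + s = 0 holds
ds, dlam = slack_multiplier_step(
    g=[-0.5, -1.0], g_dx=[0.1, -0.2],
    s=[0.5, 1.0], lam=[1.0, 2.0], mu=0.1)
# ds is approximately [-0.1, 0.2]; dlam approximately [-0.6, -2.3]
```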
<br />
To calculate the perturbation factor, <math>\mu</math>, use primal-dual distances: <br><br />
<math>\mu = \sigma \cdot \text{pdad} = \sigma \cdot \dfrac{\lambda_g^T s}{n_{iq}}</math><br><br />
where <math>\sigma</math> sets the trajectory toward the optimal solution, <math>\text{pdad}</math> is the primal-dual average distance, and <math>n_{iq}</math> is the number of inequality constraints.<br />
<br />
<br><math>\sigma</math> ranges between 0 and 1. For the extreme conditions:<br><br />
<math>\sigma = 0</math> corresponds to the ''affine-scaling direction'', in which the optimal point is sought through the non-perturbed solution of the KKT conditions;<br><br />
<math>\sigma = 1</math> corresponds to the ''centralization direction'', in which a non-optimal solution is found at a primal-dual distance equal to the initial value of <math>\mu</math>.<br><br />
<br />
<br> In a conventional primal-dual IP method, a constant value of <math>\sigma</math> (usually close to 0.1) is used throughout the iterations. This yields a search direction that is 90% directed toward the optimal point, with the remaining 10% allocated to the centralization trajectory.<br />
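As a small numeric illustration (hypothetical multiplier and slack values), the barrier parameter for the next iteration follows directly from the formula above:

```python
def perturbation_factor(lam_g, s, sigma=0.1):
    """mu = sigma * pdad, where pdad = (lam_g^T s) / n_iq is the
    primal-dual average distance and n_iq the number of inequality
    constraints.  sigma close to 0.1 is the conventional choice."""
    n_iq = len(s)
    pdad = sum(l * si for l, si in zip(lam_g, s)) / n_iq
    return sigma * pdad

mu = perturbation_factor(lam_g=[1.0, 2.0], s=[0.5, 1.0])
# lam_g^T s = 2.5, pdad = 1.25, so mu = 0.125
```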
<br />
=Illustrative Example=<br />
Perform one iteration of the IP method to solve the following NLP:<br><br />
<math><br />
\begin{align}<br />
\text{min} & ~~ f = 0.25x_1^2 + x_2^2\\<br />
\text{s.t.} & ~~ 1 \le x_1 - x_2 \le 7\\<br />
\end{align}<br />
</math><br />
<br />
<br>To solve, first form the Lagrange function:<br />
<br><math>L(x,\lambda)=f(x) + \lambda F(x) = 0.25x_1^2 + x_2^2 + \lambda(x_1 - x_2)</math><br />
<br><math>U(x,\lambda) = f_x^T (x) + \lambda F_x^T (x) = 0</math><br />
<br><math>F(x) = x_1 - x_2</math> (the constraint function)<br />
<br><br />
<math>f_x = \begin{bmatrix}<br />
0.5x_1 & 2x_2 \\<br />
\end{bmatrix}</math> (the gradient of the objective)<br />
<math>S = U_x (x,\lambda) = f_{xx} + \sum_{i=1}^3 \lambda_i F_{xx,i} (x) = \begin{bmatrix}<br />
\lambda^T (1-x_2)+1 & \lambda^T (x_1-1)+1 \\<br />
\lambda^T (1-x_2)-1 & \lambda^T (x_1-1)-1 \\<br />
\end{bmatrix}</math><br><br><br />
<br />
Using the initial solution <math>x=[1,0], \lambda = -2, k = 3</math>:<br><br />
<br />
<math>S = \begin{bmatrix}<br />
2x_2-1 & 3-2x_1 \\<br />
2x_2-3 & 1-2x_1 \\<br />
\end{bmatrix} = \begin{bmatrix}<br />
-1 & 1 \\<br />
-3 & -1 \\<br />
\end{bmatrix}</math> <br />
<br><br><br />
<br />
Solving with Newton's Method:<br />
<br><math><br />
\begin{bmatrix} x_1 \\ x_2 \\ \end{bmatrix}^{new} = \begin{bmatrix} x_1 \\ x_2 \\ \end{bmatrix}^{old} + \begin{bmatrix} \delta x_1 \\ \delta x_2 \\ \end{bmatrix} =<br />
\begin{bmatrix} 1 \\ 0 \\ \end{bmatrix} + \begin{bmatrix} 5.25 \\ -0.25 \\ \end{bmatrix} = \begin{bmatrix} 6.25 \\ -0.25 \\ \end{bmatrix}</math><br><br><br />
<br />
So after 1 iteration: <br><br />
<math>f(x_1, x_2) = f(6.25,-0.25) = 0.25(6.25)^2 + (-0.25)^2 = 9.828</math><br><br><br />
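A quick arithmetic check of this iteration, using the numbers from the example (plain Python):

```python
# Newton update from the example: x_old = [1, 0], step = [5.25, -0.25]
x1, x2 = 1.0 + 5.25, 0.0 + (-0.25)
f = 0.25 * x1**2 + x2**2      # objective at the new point
assert (x1, x2) == (6.25, -0.25)
assert abs(f - 9.828125) < 1e-12   # reported above rounded to 9.828
```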
<br />
After multiple iterations, the perturbation factor is driven toward zero and the solution approaches the true optimal value.<br />
<br />
=Conclusion=<br />
The IP method was later adapted for linear programming by Karmarkar in 1984. As a polynomial-time linear programming method, it solved complex linear problems 50 times faster than the simplex method. Multiple solvers use the IP method for nonlinear programming, such as IPOPT and KNITRO, both of which were developed by IEMS professors at Northwestern University. Although successful, the IP method is no longer as dominant since the advent of competitive methods such as [http://en.wikipedia.org/wiki/Sequential_quadratic_programming sequential quadratic programming].<br />
<br />
=References=<br />
1. Forsgren, Anders; Gill, Philip E.; Wright, Margaret H. "Interior Methods for Nonlinear Optimization." ''SIAM Review'' 44.4 (2002): 525-597. [https://people.kth.se/~andersf/doc/sirev41494.pdf Link].<br><br />
2. Shanno, David. "Who Invented the Interior Point Method?" Documenta Mathematica Extra Volume ISMP (2012): 55-64. [http://www.math.uiuc.edu/documenta/vol-ismp/20_shanno-david.pdf Link]<br><br />
3. Castronuovo, Edgardo D.; Campagnolo, Jorge M.; Salgado, Roberto. "New Versions of Interior Point Methods Applied to the Optimal Power Flow Problem." Optimization Online (2011). [http://www.optimization-online.org/DB_FILE/2001/11/405.pdf Link] <br><br />
4. Momoh, James A. ''Electric Power System Applications of Optimization.'' Second Edition. CRC Press, 2008. 233-256. [https://books.google.com/books?id=3ifNBQAAQBAJ&lpg=PR8&ots=aq8f36GTc-&dq=example%20of%20nlp%20ip%20method&pg=PA257#v=onepage&q=example%20of%20nlp%20ip%20method&f=false Link]</div>
<hr />
<div>Author names: Cindy Chen<br/><br />
Steward: Dajun Yue and Fengqi You <br/><br />
<br />
=Introduction=<br />
The interior point (IP) method for nonlinear programming was pioneered by Anthony V. Fiacco and Garth P. McCormick in the early 1960s. The basis of IP method restricts the constraints into the objective function ([http://en.wikipedia.org/wiki/Duality_%28optimization%29 duality]) by creating a barrier function. This limits potential solutions to iterate in only the feasible region, resulting in a much more efficient algorithm with regards to time complexity. <br />
<br />
=Algorithm=<br />
[[File:IP_NLP_Ellipse.png|thumb|250px|right|A trajectory of local unconstrained minimizers of the logarithmic barrier function (red).]]<br />
To ensure the program remains within the feasible region, a perturbation factor, <math>\mu</math>, is added to "penalize" close approaches to the boundaries. This approach is analogous to the use of an invisible fence to keep dogs in an unfenced yard. As the dog moves closer to the boundaries, the more shock he will feel. In the case of the IP method, the amount of shock is determined by <math>\mu</math>. A large value of <math>\mu</math> gives the analytic center of the feasible region. As <math>\mu</math> decreases and approaches 0, the optimal value is calculated by tracing out a central path. With small incremental decreases in <math>\mu</math> during each iteration, a smooth curve is generated for the central path. This method is accurate, but time consuming and computationally intense. Instead, Newton's method is often used to approximate the central path for non-linear programming. Using one Newton step to estimate each decrease in <math>\mu</math> for each iteration, a polynomial ordered time complexity is achieved, resulting in a small zig-zag central path and convergence to the optimal solution.<br><br><br />
<br />
The logarithmic barrier function is based on the logarithmic interior function:<br />
<br />
<math>B(x, \mu) = f(x) - \mu\log(x) = f(x) - \mu\sum_{i=1}^m ln(x_i)</math><br><br />
<br />
=Application=<br />
The IP method for NLP have been commonly used to solve Optimal Power Flow (OPF) problems, where a set of nonlinear equations are used to find the optimal solution of a power network in terms of speed and reliability. To solve these problems, the perturbation factor is used in addition to the typical Karush-Kuhn-Tucker (KKT) methods. <br />
<br />
<br>Starting with a general optimization problem:<br />
<br />
<math><br />
\begin{align}<br />
\text{min} & ~~ f(x)\\<br />
\text{s.t.} & ~~ h(x) = 0\\<br />
& ~~ g(x) \le 0 \\ <br />
\end{align}<br />
</math><br />
<br />
<br>Modify the KKT conditions by adding convergence properties with slack variables and the perturbation factor:<br />
<br />
<math>\nabla_x L (x, \lambda_h, \lambda_g)=0</math><br />
<math>h(x) = 0</math><br><br />
<math>g(s) + s =0</math><br> <br />
<math>[\lambda_g] s - \mu e=0</math><br><br />
<math>(s, \lambda_g, \mu) \ge = 0</math><br><br />
<br />
<br>Solve the nonlinear equations iteratively by Newton's methods. First determine <math>\Delta x</math> and <math>\Delta \lambda_h</math> with reduced linear equations.<br><br />
Next, calculate slack variables and corresponding multipliers with: <br><br />
<math>\Delta s = -g(x) - s - \nabla g(x) \Delta x</math> <br><br />
<math>\Delta \lambda_g = -\lambda_g + [s^{-1}] * {\mu e - [\lambda_g] \Delta s}</math><br><br><br />
<br />
To calculate the perturbation factor, <math>\mu</math>, use primal-dual distances: <br><br />
<math>\mu = \sigma * pdad = \sigma * \dfrac{\lambda_g^t s}{niq}</math><br><br />
where <math>\sigma</math> defines the trajectory of the optimal solution, pdad is the primal-dual average distance, and niq accounts for the inequality constraints.<br />
<br />
<br><math>\sigma</math> ranges between 0 and 1. For the extreme conditions:<br><br />
<math>\sigma = 0</math> corresponds to ''affine-scaling direction'' where the optimal point is obtained through non-perturbed solution of KKT<br><br />
<math>\sigma = 1</math> corresponds to ''centralization direction'' where the non-optimal solution is found with a primal-dual distance equal to the initial value of <math>\mu</math><br><br />
<br />
<br> In a conventional primal-dual IP method, a constant value is assigned to <math>\sigma</math> (usually close to 0.1) for the iterations. This results in a search direction where 90% is defined towards the optimal point and 10% is allocated to trajectory of centralization.<br />
<br />
=Illustrative Example=<br />
<br />
=Conclusion=<br />
The IP method was later adapted for linear programming by Karmarkar in 1984. As a polynomial-time linear programming method, it solved complex linear problems 50 times faster than the simplex method. Multiple solvers utilize the IP method for non-linear programming, such as IPOPT and KNITRO, both of which were developed by IEMS professors at Northwestern University. Although successful, the IP method is no longer as popular since the creation of more competitive methods, such as [http://en.wikipedia.org/wiki/Sequential_quadratic_programming sequential quadratic programming].<br />
<br />
=References=<br />
1. Forsgren, Anders; Gill, Philip E.; Wright, Margaret H. "Interior Methods for Nonlinear Optimization." Society for Industrial Applied Mathematics Review. 44.4: 525-597. [https://people.kth.se/~andersf/doc/sirev41494.pdf Link].<br><br />
2. Shanno, David. "Who Invented the Interior Point Method?" Documenta Mathematica Extra Volume ISMP (2012): 55-64. [http://www.math.uiuc.edu/documenta/vol-ismp/20_shanno-david.pdf Link]<br />
3. http://www.optimization-online.org/DB_FILE/2001/11/405.pdf<br />
4. https://books.google.com/books?id=3ifNBQAAQBAJ&pg=PR8&lpg=PR8&dq=example+of+nlp+ip+method&source=bl&ots=aq8f36GTc-&sig=QYpK82WLq1dzNMTpKly9HBsSmEk&hl=en&sa=X&ei=xd1zVbeNE8n5yQTb_4HgBg&ved=0CCAQ6AEwAA#v=onepage&q=example%20of%20nlp%20ip%20method&f=false</div>Cindyxchenhttps://optimization.mccormick.northwestern.edu/index.php/Interior-point_method_for_NLPInterior-point method for NLP2015-06-07T06:23:49Z<p>Cindyxchen: /* Application */</p>
<hr />
<div>Author names: Cindy Chen<br/><br />
Steward: Dajun Yue and Fengqi You <br/><br />
<br />
=Introduction=<br />
The interior point (IP) method for nonlinear programming was pioneered by Anthony V. Fiacco and Garth P. McCormick in the early 1960s. The basis of IP method restricts the constraints into the objective function ([http://en.wikipedia.org/wiki/Duality_%28optimization%29 duality]) by creating a barrier function. This limits potential solutions to iterate in only the feasible region, resulting in a much more efficient algorithm with regards to time complexity. <br />
<br />
=Algorithm=<br />
[[File:IP_NLP_Ellipse.png|thumb|250px|right|A trajectory of local unconstrained minimizers of the logarithmic barrier function (red).]]<br />
To ensure the program remains within the feasible region, a perturbation factor, <math>\mu</math>, is added to "penalize" close approaches to the boundaries. This approach is analogous to the use of an invisible fence to keep dogs in an unfenced yard. As the dog moves closer to the boundaries, the more shock he will feel. In the case of the IP method, the amount of shock is determined by <math>\mu</math>. A large value of <math>\mu</math> gives the analytic center of the feasible region. As <math>\mu</math> decreases and approaches 0, the optimal value is calculated by tracing out a central path. With small incremental decreases in <math>\mu</math> during each iteration, a smooth curve is generated for the central path. This method is accurate, but time consuming and computationally intense. Instead, Newton's method is often used to approximate the central path for non-linear programming. Using one Newton step to estimate each decrease in <math>\mu</math> for each iteration, a polynomial ordered time complexity is achieved, resulting in a small zig-zag central path and convergence to the optimal solution.<br><br><br />
<br />
The logarithmic barrier function is based on the logarithmic interior function:<br />
<br />
<math>B(x, \mu) = f(x) - \mu\log(x) = f(x) - \mu\sum_{i=1}^m ln(x_i)</math><br><br />
<br />
=Application=<br />
The IP method for NLP have been commonly used to solve Optimal Power Flow (OPF) problems, where a set of nonlinear equations are used to find the optimal solution of a power network in terms of speed and reliability. To solve these problems, the perturbation factor is used in addition to the typical Karush-Kuhn-Tucker (KKT) methods. <br />
<br />
<br>Starting with a general optimization problem:<br />
<br />
<math><br />
\begin{align}<br />
\text{min} & ~~ f(x)\\<br />
\text{s.t.} & ~~ h(x) = 0\\<br />
& ~~ g(x) \le 0 \\ <br />
\end{align}<br />
</math><br />
<br />
<br>Modify the KKT conditions by adding convergence properties with slack variables and the perturbation factor:<br />
<br />
<math>\nabla_x L (x, \lambda_h, \lambda_g)=0</math><br />
<math>h(x) = 0</math><br><br />
<math>g(s) + s =0</math><br> <br />
<math>[\lambda_g] s - \mu e=0</math><br><br />
<math>(s, \lambda_g, \mu) \ge = 0</math><br><br />
<br />
<br>Solve the nonlinear equations iteratively by Newton's methods. First determine <math>\Delta x</math> and <math>\Delta \lambda_h</math> with reduced linear equations.<br><br />
Next, calculate slack variables and corresponding multipliers with: <br><br />
<math>\Delta s = -g(x) - s - \nabla g(x) \Delta x</math> <br><br />
<math>\Delta \lambda_g = -\lambda_g + [s^{-1}] * {\mu e - [\lambda_g] \Delta s}</math><br><br />
<br />
To calculate the perturbation factor, <math>\mu</math>, use primal-dual distances: <br><br />
<math>\mu = \sigma * pdad = \sigma * \dfrac{\lambda_g^t s}{niq}</math><br><br />
where <math>\sigma</math> defines the trajectory of the optimal solution, pdad is the primal-dual average distance, and niq accounts for the inequality constraints.<br />
<br />
<math>\sigma</math> ranges between 0 and 1. For the extreme conditions:<br><br />
<math>& \sigma = 0</math> corresponds to ''affine-scaling direction'' where the optimal point is obtained through non-perturbed solution of KKT<br><br />
<math>& \sigma = 1</math> corresponds to "centralization direction" where the non-optimal solution is found with a primal-dual distance equal to the initial value of <math>\mu</math><br><br />
<br />
<br />
<br />
<br />
<br> In a conventional primal-dual IP method,<br />
<br />
=Illustrative Example=<br />
<br />
=Conclusion=<br />
The IP method was later adapted for linear programming by Karmarkar in 1984. As a polynomial-time linear programming method, it solved complex linear problems 50 times faster than the simplex method. Multiple solvers utilize the IP method for non-linear programming, such as IPOPT and KNITRO, both of which were developed by IEMS professors at Northwestern University. Although successful, the IP method is no longer as popular since the creation of more competitive methods, such as [http://en.wikipedia.org/wiki/Sequential_quadratic_programming sequential quadratic programming].<br />
<br />
=References=<br />
1. Forsgren, Anders; Gill, Philip E.; Wright, Margaret H. "Interior Methods for Nonlinear Optimization." Society for Industrial Applied Mathematics Review. 44.4: 525-597. [https://people.kth.se/~andersf/doc/sirev41494.pdf Link].<br><br />
2. Shanno, David. "Who Invented the Interior Point Method?" Documenta Mathematica Extra Volume ISMP (2012): 55-64. [http://www.math.uiuc.edu/documenta/vol-ismp/20_shanno-david.pdf Link]<br />
3. http://www.optimization-online.org/DB_FILE/2001/11/405.pdf<br />
4. https://books.google.com/books?id=3ifNBQAAQBAJ&pg=PR8&lpg=PR8&dq=example+of+nlp+ip+method&source=bl&ots=aq8f36GTc-&sig=QYpK82WLq1dzNMTpKly9HBsSmEk&hl=en&sa=X&ei=xd1zVbeNE8n5yQTb_4HgBg&ved=0CCAQ6AEwAA#v=onepage&q=example%20of%20nlp%20ip%20method&f=false</div>Cindyxchenhttps://optimization.mccormick.northwestern.edu/index.php/Interior-point_method_for_NLPInterior-point method for NLP2015-06-07T06:21:52Z<p>Cindyxchen: </p>
<hr />
<div>Author names: Cindy Chen<br/><br />
Steward: Dajun Yue and Fengqi You <br/><br />
<br />
=Introduction=<br />
The interior point (IP) method for nonlinear programming was pioneered by Anthony V. Fiacco and Garth P. McCormick in the early 1960s. The basis of IP method restricts the constraints into the objective function ([http://en.wikipedia.org/wiki/Duality_%28optimization%29 duality]) by creating a barrier function. This limits potential solutions to iterate in only the feasible region, resulting in a much more efficient algorithm with regards to time complexity. <br />
<br />
=Algorithm=<br />
[[File:IP_NLP_Ellipse.png|thumb|250px|right|A trajectory of local unconstrained minimizers of the logarithmic barrier function (red).]]<br />
To ensure the program remains within the feasible region, a perturbation factor, <math>\mu</math>, is added to "penalize" close approaches to the boundaries. This approach is analogous to the use of an invisible fence to keep dogs in an unfenced yard. As the dog moves closer to the boundaries, the more shock he will feel. In the case of the IP method, the amount of shock is determined by <math>\mu</math>. A large value of <math>\mu</math> gives the analytic center of the feasible region. As <math>\mu</math> decreases and approaches 0, the optimal value is calculated by tracing out a central path. With small incremental decreases in <math>\mu</math> during each iteration, a smooth curve is generated for the central path. This method is accurate, but time consuming and computationally intense. Instead, Newton's method is often used to approximate the central path for non-linear programming. Using one Newton step to estimate each decrease in <math>\mu</math> for each iteration, a polynomial ordered time complexity is achieved, resulting in a small zig-zag central path and convergence to the optimal solution.<br><br><br />
<br />
The logarithmic barrier function is based on the logarithmic interior function:<br />
<br />
<math>B(x, \mu) = f(x) - \mu\log(x) = f(x) - \mu\sum_{i=1}^m ln(x_i)</math><br><br />
<br />
=Application=<br />
The IP method for NLP have been commonly used to solve Optimal Power Flow (OPF) problems, where a set of nonlinear equations are used to find the optimal solution of a power network in terms of speed and reliability. To solve these problems, the perturbation factor is used in addition to the typical Karush-Kuhn-Tucker (KKT) methods. <br />
<br />
<br>Starting with a general optimization problem:<br />
<br />
<math><br />
\begin{align}<br />
\text{min} & ~~ f(x)\\<br />
\text{s.t.} & ~~ h(x) = 0\\<br />
& ~~ g(x) \le 0 \\ <br />
\end{align}<br />
</math><br />
<br />
<br>Modify the KKT conditions by adding convergence properties with slack variables and the perturbation factor:<br />
<br />
<math>\nabla_x L (x, \lambda_h, \lambda_g)=0</math><br />
<math>h(x) = 0</math><br><br />
<math>g(s) + s =0</math><br> <br />
<math>[\lambda_g] s - \mu e=0</math><br><br />
<math>(s, \lambda_g, \mu) \ge = 0</math><br><br />
<br />
<br>Solve the nonlinear equations iteratively by Newton's methods. First determine <math>\Delta x</math> and <math>\Delta \lambda_h</math> with reduced linear equations.<br><br />
Next, calculate slack variables and corresponding multipliers with: <br><br />
<math>\Delta s = -g(x) - s - \nabla g(x) \Delta x</math> <br><br />
<math>\Delta \lambda_g = -\lambda_g + [s^{-1}] * {\mu e - [\lambda_g] \Delta s}</math><br><br />
<br />
To calculate the perturbation factor, <math>\mu</math>, use primal-dual distances: <br><br />
<math>\mu = \sigma * pdad = \sigma * \dfrac{\lambda_g^t s}{niq}</math><br><br />
where <math>\sigma</math> defines the trajectory of the optimal solution, pdad is the primal-dual average distance, and niq accounts for the inequality constraints.<br />
<br />
<math>\sigma</math> ranges between 0 and 1. For the extreme conditions:<br><br />
<math><br />
\begin{align}<br />
\& ~~ \sigma = 0 corresponds to ''affine-scaling direction'' where the optimal point is obtained through non-perturbed solution of KKT\\<br />
\& ~~ \sigma = 1 corresponds to "centralization direction" where the non-optimal solution is found with a primal-dual distance equal to the initial value of \mu\\<br />
\end{align}<br />
</math><br />
<br />
<br />
<br />
<br> In a conventional primal-dual IP method, <br />
=Illustrative Example=<br />
<br />
=Conclusion=<br />
The IP method was later adapted for linear programming by Karmarkar in 1984. As a polynomial-time linear programming method, it solved complex linear problems 50 times faster than the simplex method. Multiple solvers utilize the IP method for non-linear programming, such as IPOPT and KNITRO, both of which were developed by IEMS professors at Northwestern University. Although successful, the IP method is no longer as popular since the creation of more competitive methods, such as [http://en.wikipedia.org/wiki/Sequential_quadratic_programming sequential quadratic programming].<br />
<br />
=References=<br />
1. Forsgren, Anders; Gill, Philip E.; Wright, Margaret H. "Interior Methods for Nonlinear Optimization." Society for Industrial Applied Mathematics Review. 44.4: 525-597. [https://people.kth.se/~andersf/doc/sirev41494.pdf Link].<br><br />
2. Shanno, David. "Who Invented the Interior Point Method?" Documenta Mathematica Extra Volume ISMP (2012): 55-64. [http://www.math.uiuc.edu/documenta/vol-ismp/20_shanno-david.pdf Link]<br />
3. http://www.optimization-online.org/DB_FILE/2001/11/405.pdf<br />
4. https://books.google.com/books?id=3ifNBQAAQBAJ&pg=PR8&lpg=PR8&dq=example+of+nlp+ip+method&source=bl&ots=aq8f36GTc-&sig=QYpK82WLq1dzNMTpKly9HBsSmEk&hl=en&sa=X&ei=xd1zVbeNE8n5yQTb_4HgBg&ved=0CCAQ6AEwAA#v=onepage&q=example%20of%20nlp%20ip%20method&f=false</div>Cindyxchenhttps://optimization.mccormick.northwestern.edu/index.php/Interior-point_method_for_NLPInterior-point method for NLP2015-06-07T06:06:13Z<p>Cindyxchen: /* Application */</p>
<hr />
<div>Author names: Cindy Chen<br/><br />
Steward: Dajun Yue and Fengqi You <br/><br />
<br />
=Introduction=<br />
The interior point (IP) method for nonlinear programming was pioneered by Anthony V. Fiacco and Garth P. McCormick in the early 1960s. The basis of IP method restricts the constraints into the objective function ([http://en.wikipedia.org/wiki/Duality_%28optimization%29 duality]) by creating a barrier function. This limits potential solutions to iterate in only the feasible region, resulting in a much more efficient algorithm with regards to time complexity. <br />
<br />
=Algorithm=<br />
[[File:IP_NLP_Ellipse.png|thumb|250px|right|A trajectory of local unconstrained minimizers of the logarithmic barrier function (red).]]<br />
To ensure the program remains within the feasible region, a perturbation factor, <math>\mu</math>, is added to "penalize" close approaches to the boundaries. This approach is analogous to the use of an invisible fence to keep dogs in an unfenced yard. As the dog moves closer to the boundaries, the more shock he will feel. In the case of the IP method, the amount of shock is determined by <math>\mu</math>. A large value of <math>\mu</math> gives the analytic center of the feasible region. As <math>\mu</math> decreases and approaches 0, the optimal value is calculated by tracing out a central path. With small incremental decreases in <math>\mu</math> during each iteration, a smooth curve is generated for the central path. This method is accurate, but time consuming and computationally intense. Instead, Newton's method is often used to approximate the central path for non-linear programming. Using one Newton step to estimate each decrease in <math>\mu</math> for each iteration, a polynomial ordered time complexity is achieved, resulting in a small zig-zag central path and convergence to the optimal solution.<br><br><br />
<br />
The logarithmic barrier function is based on the logarithmic interior function:<br />
<br />
<math>B(x, \mu) = f(x) - \mu\log(x) = f(x) - \mu\sum_{i=1}^m ln(x_i)</math><br><br />
<br />
=Application=<br />
The IP method for NLP have been commonly used to solve Optimal Power Flow (OPF) problems, where a set of nonlinear equations are used to find the optimal solution of a power network in terms of speed and reliability. To solve these problems, the perturbation factor is used in addition to the typical Karush-Kuhn-Tucker (KKT) methods. <br />
<br />
Starting with a general optimization problem:<br />
<br />
<math><br />
\begin{align}<br />
\text{min} & ~~ f(x)\\<br />
\text{s.t.} & ~~ h(x) = 0\\<br />
& ~~ g(x) \le 0 \\ <br />
\end{align}<br />
</math><br />
<br />
Modify the KKT conditions by adding convergence properties with slack variables and the perturbation factor:<br />
<br />
<math>\nabla_x L (x, \lambda_h, \lambda_g)=0</math><br />
<math>h(x) = 0</math><br><br />
<math>g(s) + s =0</math><br> <br />
<math>[\lambda_g] s - \mu e=0</math><br><br />
<math>(s, \lambda_g, \mu) \ge = 0</math><br><br />
<br />
Solve the nonlinear equations iteratively by Newton's methods. First determine <math>\Delta x</math> and <math>\Delta \lambda_h</math> with reduced linear equations.<br><br />
Next, calculate slack variables and corresponding multipliers with: <br><br />
<math>\Delta s = -g(x) - s - \nabla g(x) \Delta x</math> <br><br />
<math>\Delta \lambda_g = -\lambda_g + [s^{-1}] * {\mu e - [\lambda_g] \Delta s}</math><br><br />
<br />
<br />
<br />
<br />
To solve optimization problems through the IPM for<br />
Nonlinear Programming (NLP) a perturbation parameter is<br />
introduced in the complementarily Karush-Kuhn-Tucker<br />
(KKT) condition [2]. In the present work, it is shown that<br />
several versions of the IPMs can be used to the calculation of<br />
this parameter. Depending on the approach adopted for this<br />
calculation, it is possible to decrease the number of iterations<br />
and to improve the convergence characteristics in both<br />
simples and complex problems. In this work, the<br />
performance of five algorithms of nonlinear OPF is analyzed,<br />
with basis on results obtained for test-systems and real size<br />
networks<br />
<br />
=Conclusion=<br />
The IP method was later adapted for linear programming by Karmarkar in 1984. As a polynomial-time linear programming method, it solved complex linear problems 50 times faster than the simplex method. Multiple solvers utilize the IP method for non-linear programming, such as IPOPT and KNITRO, both of which were developed by IEMS professors at Northwestern University. Although successful, the IP method is no longer as popular since the creation of more competitive methods, such as [http://en.wikipedia.org/wiki/Sequential_quadratic_programming sequential quadratic programming].<br />
<br />
=References=<br />
1. Forsgren, Anders; Gill, Philip E.; Wright, Margaret H. "Interior Methods for Nonlinear Optimization." Society for Industrial Applied Mathematics Review. 44.4: 525-597. [https://people.kth.se/~andersf/doc/sirev41494.pdf Link].<br><br />
2. Shanno, David. "Who Invented the Interior Point Method?" Documenta Mathematica Extra Volume ISMP (2012): 55-64. [http://www.math.uiuc.edu/documenta/vol-ismp/20_shanno-david.pdf Link]</div>Cindyxchenhttps://optimization.mccormick.northwestern.edu/index.php/Interior-point_method_for_NLPInterior-point method for NLP2015-06-07T05:57:11Z<p>Cindyxchen: </p>