3 – L4 03 Optimization With Constraints V3

Earlier, we discussed a basic optimization problem. Optimization problems can come with an additional wrinkle: constraints can be placed on the problem to limit the region where you look for a solution. In other words, you try to find the smallest value of the function that also satisfies the constraints. Let's discuss how to solve an optimization problem with constraints.

First, let's clarify some terminology. The function we are trying to optimize is called the objective or cost function. The variable we are trying to optimize, which in portfolio optimization is the vector of weights on the various assets in the portfolio, is called the optimization variable. Then we may have various constraints, which can take the form of inequality or equality conditions. If there are no constraints, we say the problem is unconstrained.

As an example, take the objective function f(x) = (x − 1)² + 1, whose unconstrained minimum is 1, at x = 1. If we apply the constraint that x must be less than or equal to zero, the new minimum value is 2, attained at x = 0, because this is the smallest value of the function that also satisfies the constraint.

A vector x is called optimal, or a solution of the problem, if it has the smallest objective value among all vectors that satisfy the constraints. The set of points for which the objective and all the constraint functions are defined is called the domain of the problem. A point in the domain is said to be feasible if it satisfies all the constraints. The problem is feasible if there exists at least one feasible point, and infeasible otherwise. If the objective can be made arbitrarily negative over the feasible points, we say the problem is unbounded below. This is the case for the function f(x) = −ln(x) with no constraints, since f(x) goes to negative infinity as x grows.

A general optimization problem can be very difficult to solve. You can have functions of many variables, and the objective function and constraints can be very complicated. So, instead of thinking about how to solve all types of optimization problems, let's just think about how to solve certain types. It turns out that there are well-developed methods for solving some subtypes of problems, and they will be of great help to us as we try to optimize our portfolios.

A very important type of optimization problem is one for which the objective function and inequality constraints are convex. What does that mean? Well, it's actually very intuitive. A convex function curves upward everywhere: any straight line you draw between two points on the graph has to lie on or above the graph. A wiggly function that dips down and rises back up would not be convex.

An important property of a convex optimization problem is that any local minimum is also the global minimum. So, if you find a local minimum, you know it is the globally optimal value over the whole feasible region. This is actually hugely helpful. When the objective function and inequality constraints are convex and the equality constraint functions are affine, that is, of the form h(x) = Ax + b, the optimization problem is called a convex optimization problem. If you find a local optimum for this problem, you know it is a global optimum. There are well-developed techniques for solving these problems, and the set of problems we deal with during portfolio optimization will generally be of this type.
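To make the worked example above concrete, here is a minimal sketch that solves it with cvxpy, a convex-optimization library the transcript does not mention, so its use here is an assumption: we minimize f(x) = (x − 1)² + 1 subject to x ≤ 0 and recover the constrained minimum of 2 at x = 0.

```python
import cvxpy as cp

# Minimize f(x) = (x - 1)^2 + 1 subject to x <= 0.
x = cp.Variable()                          # the optimization variable
objective = cp.Minimize((x - 1) ** 2 + 1)  # the objective (cost) function
constraints = [x <= 0]                     # one inequality constraint

problem = cp.Problem(objective, constraints)
problem.solve()

print(problem.value)  # ~2.0: the constrained minimum
print(x.value)        # ~0.0: where that minimum is attained
```

Dropping the constraint from the list and re-solving returns the unconstrained minimum of 1 at x = 1, which shows how the constraint shrinks the feasible region and raises the optimal value.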
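The chord test for convexity can also be checked numerically. The sketch below samples points along the segment between two inputs a and b (both chosen arbitrarily for illustration) and verifies that the chord lies on or above the graph of the same function.

```python
import numpy as np

# Convexity says: f(t*a + (1-t)*b) <= t*f(a) + (1-t)*f(b) for t in [0, 1],
# i.e. the chord between two points lies on or above the graph.
f = lambda x: (x - 1) ** 2 + 1

a, b = -2.0, 3.0
for t in np.linspace(0.0, 1.0, 11):
    point_on_graph = f(t * a + (1 - t) * b)
    point_on_chord = t * f(a) + (1 - t) * f(b)
    assert point_on_graph <= point_on_chord + 1e-12

print("chord lies on or above the graph at every sampled point")
```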
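Finally, here is a sketch of the kind of convex problem the transcript says we will meet in portfolio optimization: minimize portfolio variance, a convex quadratic objective, subject to a convex long-only inequality constraint and the affine equality constraint that the weights sum to one. The three-asset covariance matrix below is made up purely for illustration.

```python
import numpy as np
import cvxpy as cp

# Hypothetical covariance matrix of three assets (illustrative values only).
Sigma = np.array([[0.10, 0.02, 0.04],
                  [0.02, 0.08, 0.01],
                  [0.04, 0.01, 0.12]])

w = cp.Variable(3)                               # optimization variable: portfolio weights
objective = cp.Minimize(cp.quad_form(w, Sigma))  # convex objective: portfolio variance
constraints = [cp.sum(w) == 1,                   # affine equality: fully invested
               w >= 0]                           # convex inequality: long-only

cp.Problem(objective, constraints).solve()
print(w.value)  # minimum-variance long-only weights
```

Because the objective is convex, the inequality constraint is convex, and the equality constraint is affine, this is a convex optimization problem, so whatever local minimum the solver finds is guaranteed to be the global minimum over the feasible region.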
