Optimization Methods

Optimization is the selection of a configuration that minimizes or maximizes an objective function. Assume we have an objective function f(x) and a goal function g ∈ {min, max}. We then search for a configuration xᵢ that leads to the minimal or maximal value of f(x). The simplest way to find such a configuration is to test different values of x and compare the resulting f(xᵢ). For example, suppose we have three pairs of shoes (sandals, boots and stilettos) and a personal prettiness function fₚ(s) that maps a shoe s to a numeric value representing how pretty it is. We evaluate fₚ for all three pairs and obtain 3 for sandals, 9 for boots and 0 for stilettos. Assuming our goal function is max, the optimal configuration is s = boots.

But what if the set of possible configurations is too large to test them all, or even infinite? In that case one option is to give up on guaranteed optimality and search for the best configuration within a smaller subset that can actually be tested. Such a subset can be generated by picking configurations at random or, if the configurations have an order, by defining a raster (grid) of configurations.

There is also a variety of optimization methods that handle large numbers of possible configurations more elegantly than randomly selecting a subset. When we work with problems based on differentiable objective functions, we can determine whether a particular configuration is not optimal and in which direction to search for the optimum next. The sketches below illustrate exhaustive search, random subset sampling and gradient-based search.
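A minimal Python sketch of the exhaustive search from the shoe example. The prettiness values are the ones given above; the function name f_p and the data structure are only illustrative.

```python
# Exhaustive search: evaluate the objective for every configuration
# and keep the one with the best value.

def f_p(shoe):
    """Hypothetical prettiness function: maps a shoe to a numeric value."""
    return {"sandals": 3, "boots": 9, "stilettos": 0}[shoe]

configurations = ["sandals", "boots", "stilettos"]

# Goal function g = max: pick the configuration with the highest f_p.
best = max(configurations, key=f_p)
print(best, f_p(best))  # boots 9
```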
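When the configuration space is too large to enumerate, we can test a random subset instead. The following sketch uses a toy objective chosen purely for illustration; the sample size and search interval are arbitrary.

```python
import random

# Random subset search: draw configurations at random, evaluate the
# objective only on this subset and keep the best candidate found.

def f(x):
    return -(x - 3.2) ** 2  # toy objective with its maximum at x = 3.2

random.seed(0)
samples = [random.uniform(-10.0, 10.0) for _ in range(1000)]

best = max(samples, key=f)
print(best, f(best))  # near the optimum, but optimality is not guaranteed
```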
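For a differentiable objective, the gradient tells us whether a configuration is locally optimal (the gradient vanishes there) and in which direction to move next. A minimal gradient ascent sketch, again with an illustrative toy objective and arbitrarily chosen step size and iteration count:

```python
# Gradient ascent: repeatedly move the configuration in the direction
# in which the objective increases.

def f(x):
    return -(x - 3.2) ** 2  # toy objective with its maximum at x = 3.2

def df(x):
    return -2.0 * (x - 3.2)  # derivative of f

x = 0.0      # arbitrary starting configuration
step = 0.1   # step size (learning rate)
for _ in range(100):
    x += step * df(x)

print(x, f(x))  # x converges towards 3.2, where df(x) = 0
```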

Author: Artem Leichter
Last modified: 2018-10-18