Most business problems faced by a modern enterprise can be framed as optimization problems. Companies usually seek to maximize desirable outcomes like profit and customer satisfaction, or to minimize unwanted ones like material wastage and churn. Achieving these objectives is non-trivial, especially under the real-life constraints imposed on a company's operations. The ability to build complex, optimization-based solutions for critical business problems is essential to competing in today's data-driven era.


The Goal of Solving Optimization Problems

The goal of solving an optimization problem is to find the optimal solution out of all possible solutions. These problems can be framed as either minimizing or maximizing an objective function. An objective function is a mathematical expression that measures the quality of a candidate solution in the context of a specific problem, taking the problem's variables and constraints into account.


[Image. Source: http://www.scielo.br/img/revistas/prod/2015nahead//0103-6513-prod-0103-6513170514-gf1.jpg]

For example, a logistics company may want to minimize the distance its vehicles travel while maximizing on-time deliveries, so a natural objective function is the total travel time, which is to be minimized. Constraints might include the number of vehicles available, the delivery schedule, the storage capacity of each vehicle, and so on. Many possible routes can be generated, and each satisfies the objective function to a different degree. Each possible route constitutes a solution, and the collection of all routes is the solution space. Optimization algorithms navigate this solution space in an iterative process to arrive at the best possible solution.
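To make this concrete, here is a minimal Python sketch of a toy delivery-routing problem. The stops, the travel times, and the choice of total travel time as the objective are all hypothetical values for illustration; brute-force enumeration of the solution space is only feasible here because the example is tiny.

    import itertools

    # Hypothetical travel times (in hours) between a depot "D" and three stops.
    travel_time = {
        ("D", "A"): 2, ("D", "B"): 4, ("D", "C"): 5,
        ("A", "B"): 1, ("A", "C"): 2, ("B", "C"): 3,
    }

    def time_between(a, b):
        # Travel times are symmetric in this toy example.
        return travel_time[(a, b)] if (a, b) in travel_time else travel_time[(b, a)]

    def objective(route):
        # Objective function: total travel time for depot -> stops -> depot.
        stops = ("D",) + route + ("D",)
        return sum(time_between(a, b) for a, b in zip(stops, stops[1:]))

    # The solution space: every possible ordering of the three stops.
    solution_space = list(itertools.permutations(["A", "B", "C"]))

    # Brute-force search over the whole space, picking the route with the
    # smallest total travel time.
    best_route = min(solution_space, key=objective)
    print(best_route, objective(best_route))

Real routing problems have far too many possible routes for this kind of exhaustive search, which is where the methods discussed below come in.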


Traditional Optimization Methods

Traditional methods of optimization use differential calculus or brute-force search to guide exploration of the solution space – the space of all possible solutions. They proceed deterministically, based on how the objective function's value changes from one iteration to the next. Methods of this type tend to be less general in their design and often get stuck at a local optimum.

To understand the concept of a local optimum, imagine that you are climbing a mountain. The goal of the optimization is to reach the summit. Since scaling a mountain is rarely a nonstop uphill climb, you will likely have to go up and down smaller peaks while attempting to reach the summit.


[Image: local vs. global maxima. Source: https://sosailaway.files.wordpress.com/2013/01/maxima_local_vs_global1.png]

Solving optimization problems is very similar to this example; however, traditional algorithms find it difficult to differentiate between a plateau midway up the mountain (a local optimum) and the summit (the global optimum). This is easy to visualize in the picture above. If a climber reaches the first plateau of the mountain (a local maximum) while in the clouds, they may incorrectly believe they have scaled the entire mountain (the global maximum). Because they are in the clouds, they cannot see that the actual summit is much higher up. Similarly, if the climber represents a traditional optimization algorithm, it may get stuck at this point and stop.

The fundamental problem with traditional optimization methods is that they can mislead you into believing that you have optimized a problem when, in reality, you have only reached a local maximum. They typically only improve on the current solution's immediate neighborhood and lack the ability to evaluate the problem globally. For example, imagine that you are running a national retailer based in Minnesota. If you optimize inventory purely based on local sales (the local maximum), then all you will carry is heavy flannel clothing – and that neglects growing your business beyond Minnesota (the global maximum).
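To illustrate this failure mode, here is a short Python sketch of a simple hill-climbing search; the two-peak objective function, starting point, and step size are made-up values for demonstration. Starting near the smaller peak, the search stops there because no single step in either direction improves the objective, even though a much higher peak exists further away.

    import math

    def f(x):
        # Made-up objective: a small peak near x = 1 and a higher (global) peak near x = 4.
        return math.exp(-(x - 1) ** 2) + 2 * math.exp(-(x - 4) ** 2)

    def hill_climb(x, step=0.1, iterations=1000):
        # Deterministic hill climbing: only ever accept a move that improves f.
        for _ in range(iterations):
            best = max([x - step, x, x + step], key=f)
            if best == x:       # no neighboring move improves the objective
                break
            x = best
        return x

    x = hill_climb(0.0)
    print(f"stuck at x = {x:.2f}, f(x) = {f(x):.3f}")    # near the local peak at x = 1
    print(f"higher peak it never reaches: f(4) = {f(4):.3f}")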


Advanced Optimization Methods

Advanced methods, like Genetic Algorithms and Simulated Annealing, leverage several interesting ‘tricks’ that allow them to handle complex objective functions that are difficult to solve using traditional methods. These methods are stochastic (inherently random) in nature and employ heuristics to approximate the global optimum for large solution spaces. To further explain, we will look at Simulated Annealing as a representative example.

Simulated Annealing is inspired by 'annealing', the controlled heating and slow cooling of metals. In metallurgy, the annealing process heats a metal to a high temperature and then cools it down slowly, which relieves internal stresses and improves the metal's ductility and workability. In Simulated Annealing, a temperature variable 'T' dictates how the algorithm behaves as the iterations of the optimization progress. This 'T' value starts out high and decreases with each iteration. The algorithm begins with a random initial solution and, in each subsequent iteration, perturbs the previous solution to arrive at a new candidate solution.

This, too, can be visualized in terms of climbing the mountain. In the iterative process of Simulated Annealing, the new solution may be better or worse than the previous one. An interesting feature of the algorithm is that it accepts a worse solution probabilistically, with an acceptance probability that rises with the temperature 'T'. One might ask, "why would we want to accept a worse solution at all?" The answer is that this is how the algorithm avoids getting stuck at a local optimum. Again, the mountain-climbing analogy helps. If you reach a plateau during your climb, you may have to descend for a while before ascending again – this is the difference between the green path and the blue path in the image above. For the green path to reach the highest possible point, it has to head towards the red peak and first move back down; moving back down means temporarily accepting a less optimal solution. By allowing solutions to get worse before they get better, Simulated Annealing escapes the trap of local optima. The algorithm converges on its final solution when the value of 'T' is sufficiently low and further perturbations cease to produce better solutions.
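As a simplified sketch of this loop, the Python code below maximizes a one-dimensional, two-peak objective. The objective function, cooling schedule, step size, and search bounds are all illustrative assumptions rather than the only possible settings, and tracking the best solution seen so far is a common practical safeguard rather than a required part of the algorithm.

    import math
    import random

    def f(x):
        # Made-up objective to maximize: a small peak near x = 1, a higher (global) peak near x = 4.
        return math.exp(-(x - 1) ** 2) + 2 * math.exp(-(x - 4) ** 2)

    def simulated_annealing(lo=-2.0, hi=6.0, t_start=5.0, t_min=1e-3, cooling=0.995, step=0.5):
        x = random.uniform(lo, hi)        # random initial solution
        best_x = x                        # best solution seen so far
        t = t_start
        while t > t_min:
            # Perturb the previous solution to get a new candidate (kept within bounds).
            candidate = min(hi, max(lo, x + random.uniform(-step, step)))
            delta = f(candidate) - f(x)
            # Always accept improvements; accept worse candidates with a probability
            # that shrinks as the temperature T is lowered.
            if delta > 0 or random.random() < math.exp(delta / t):
                x = candidate
            if f(x) > f(best_x):
                best_x = x
            t *= cooling                  # cooling schedule: T decreases each iteration
        return best_x

    random.seed(0)
    best = simulated_annealing()
    print(f"best x = {best:.2f}, f(best) = {f(best):.3f}")   # typically close to the global peak near x = 4

A slower cooling rate lets the search explore more of the solution space before settling, at the cost of more iterations.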


[Image: Simulated Annealing search paths. Source: http://www.frankfurt-consulting.de/img/SimAnn.jpg]

Simulated Annealing has proven extremely useful for fast optimization, especially when the number of possible solutions is enormous and traditional techniques either take prohibitively long or get stuck in a local optimum. Recently, there has been a steady increase in the application of such probabilistic optimization techniques to real-world business problems.


Optimize Your Business

Companies looking to gain a competitive advantage by embracing these types of advanced techniques should reach out to Soothsayer Analytics. Our team includes several people with deep backgrounds in optimization, such as physicists and materials scientists. We have diverse experience ranging from optimizing inventory to optimizing vehicle routes, and from optimizing supply chains to optimizing intercontinental ballistic missiles. We synthesize our skills in Artificial Intelligence, Advanced Math, and Data Science to solve a wide variety of business problems – many of them likely relevant to yours.

For more details, contact Christopher Dole at Soothsayer Analytics:

Chris@SoothsayerAnalytics.com

614-902-0294

If you would like to explore how Soothsayer can help your company become more data driven, visit us at www.SoothsayerAnalytics.com, call us at 1-844-44-SOOTH, or e-mail us at info@soothsayeranalytics.com
