So as I mentioned in my last post, I am currently taking a discrete optimization class from Coursera. It is a pretty challenging class, especially because I have never taken an algorithms class before (I majored in Conservation and Resource Studies in college). I have been fairly successful in implementing some tried-and-true algorithms to solve some of the problem sets, but I have been having a lot of difficulty understanding the performance details of the various algorithms. My typical routine has been to write up a few solutions to each problem, and use the run-time, solution score, and my (limited) intuition to try to improve the implementations.
In yesterday’s post I discussed a little plotting function that I wrote to help visualize my implementations to the Traveling Salesman Problem (TSP). Today, I am going to talk about using a line profiler to get detailed feedback on the performance of your code. I found this blog post to be very helpful in navigating the various options for performance analysis.
For the TSP, I wrote up a Tabu Search algorithm, a type of Local Search algorithm that uses 'short-term memory' to ensure that old solutions are not revisited. While I have successfully implemented a version of the algorithm in Python, I am fairly certain that I am glossing over a few key implementation details, as my algorithm is not performing nearly as well as it should. A good way to get detailed feedback on the algorithm's performance is to use the line_profiler package. It is extremely easy to use and provides valuable feedback on your code's performance. You simply add the
@profile decorator to any function in your program that you want feedback on, then run your script through line_profiler's kernprof command at the command line (see the source blog post for details). I threw the @profile decorator on my main Tabu function and got the following output:
Line #      Hits         Time  Per Hit   % Time  Line Contents
==============================================================
    85                                           @profile
    86                                           def search(cities, candidate_list_size, max_iter):
    87           1        13160  13160.0      0.1      current_vector = random_permutation()
    88           1          839    839.0      0.0      current_cost = calc_cost(current_vector)
    89           1            3      3.0      0.0      tabu_list = []
    90           1            5      5.0      0.0      path = [current_vector[:]]
    91       10001        30512      3.1      0.3      for i in range(0, max_iter):
    92       10000      9332657    933.3     99.1          new_perm, new_cost, tabu_list = generate_candidates(current_vector, tabu_list)
    93       10000        37371      3.7      0.4          if new_cost < current_cost:
    94                                                         #path.append(new_perm[:])
    95          11           53      4.8      0.0              current_vector = new_perm[:]
    96          11           34      3.1      0.0              current_cost = new_cost
    97           1            4      4.0      0.0      return current_vector, current_cost, path
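As a quick aside on how this output is produced: kernprof injects a profile function into builtins when it runs your script, so the @profile decorator only exists under the profiler. A no-op fallback keeps the script runnable on its own. The script name and the toy calc_cost below are hypothetical, just to illustrate the setup:

```python
# Sketch of setting up a script for line_profiler (file name tsp.py is hypothetical).
# kernprof defines `profile` for us when the script is run under the profiler;
# the fallback below makes the script runnable without it as well.
try:
    profile  # defined by kernprof when run as `kernprof -l -v tsp.py`
except NameError:
    def profile(func):  # no-op stand-in when running without the profiler
        return func

@profile
def calc_cost(vector):
    # toy placeholder cost: sum of absolute differences between neighbors
    return sum(abs(a - b) for a, b in zip(vector, vector[1:]))

if __name__ == "__main__":
    print(calc_cost([3, 1, 4, 1, 5]))  # prints 12
```

Running `kernprof -l -v tsp.py` executes the script and prints a line-by-line timing table like the one above for every decorated function.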
The profiler's output displays Hits (the number of times a line of code was run), Time Per Hit, and % Time. Looking at the output, the line of code that uses the most resources is the generate_candidates call in the main for loop (which makes sense, as the search function is basically a for loop over the generate_candidates function). To take this analysis further, I added a profile decorator to the generate_candidates function and displayed the results below.
Line #      Hits         Time  Per Hit   % Time  Line Contents
==============================================================
    74                                           @profile
    75                                           def generate_candidates(best_perm, tabu_list):
    76       10000        37295      3.7      0.4      perm = best_perm[:]
    77       10326        29927      2.9      0.3      while True:
    78       10326       426132     41.3      4.7          perm = two_opt(best_perm)
    79       10326        73413      7.1      0.8          if perm not in tabu_list:
    80       10000        95461      9.5      1.1              tabu_list = tabu(perm, tabu_list)
    81       10000        28268      2.8      0.3              break
    82       10000      8269102    826.9     92.0      perm_cost = calc_cost(perm)
    83       10000        29902      3.0      0.3      return perm, perm_cost, tabu_list
As you can see from the profiler's output on the generate_candidates function, the bottleneck is the calc_cost function. This was not surprising: cost functions, which measure the quality of a solution, can be expensive to compute, but they are the feedback that steers the algorithm toward an optimal solution. In this case, the cost function is a simple for loop that calculates the total distance traveled in a given solution, so if I want to improve upon it, I am going to need to find a way to estimate or approximate the cost without doing the full calculation.
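One standard trick, which I have not implemented yet (the sketch below is an illustration, not my actual code): a 2-opt move only changes two edges of the tour, so the new cost can be computed in O(1) from the old cost instead of O(n) from scratch. Assuming Euclidean distances and a closed tour, the change is d(a,c) + d(b,d) - d(a,b) - d(c,d):

```python
import math
import random

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def calc_cost(cities, tour):
    # full O(n) cost: sum of edge lengths around the closed tour
    return sum(dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt_delta(cities, tour, i, j):
    # Cost change from reversing tour[i+1 : j+1]; only two edges change.
    # Removed edges: (tour[i], tour[i+1]) and (tour[j], tour[j+1])
    # Added edges:   (tour[i], tour[j])   and (tour[i+1], tour[j+1])
    n = len(tour)
    a, b = cities[tour[i]], cities[tour[i + 1]]
    c, d = cities[tour[j]], cities[tour[(j + 1) % n]]
    return dist(a, c) + dist(b, d) - dist(a, b) - dist(c, d)

# sanity check: the O(1) delta matches the O(n) recomputation
random.seed(0)
cities = [(random.random(), random.random()) for _ in range(20)]
tour = list(range(20))
i, j = 3, 11
new_tour = tour[:i + 1] + list(reversed(tour[i + 1:j + 1])) + tour[j + 1:]
delta = two_opt_delta(cities, tour, i, j)
assert abs((calc_cost(cities, tour) + delta) - calc_cost(cities, new_tour)) < 1e-9
```

Because the reversed segment's internal edges keep their lengths (distance is symmetric), only the two boundary edges need to be re-measured, which would remove calc_cost from the inner loop entirely.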
So now that I know where my bottleneck is, I am going to get to work on trying to find a way to improve the algorithm so that I can get some good solutions for the problem sets. I will post an update, with some comparative performance metrics, if/when I find a way to reduce the bottleneck.
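For anyone who wants to experiment, here is a stripped-down, self-contained sketch of the search loop described above. It is not my actual implementation: the helper names mirror the profiler output, but the tabu list here is just a fixed-length queue of recent tours, which is one simple reading of 'short-term memory':

```python
import math
import random

def calc_cost(cities, tour):
    # total length of the closed tour
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt(tour):
    # reverse a random segment of the tour
    i, j = sorted(random.sample(range(len(tour)), 2))
    return tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]

def search(cities, max_iter=1000, tabu_size=50):
    current = list(range(len(cities)))
    random.shuffle(current)               # stands in for random_permutation()
    current_cost = calc_cost(cities, current)
    tabu_list = []                        # short-term memory of recent tours
    for _ in range(max_iter):
        perm = two_opt(current)
        while perm in tabu_list:          # skip recently visited tours
            perm = two_opt(current)
        tabu_list.append(perm)
        if len(tabu_list) > tabu_size:
            tabu_list.pop(0)              # forget the oldest entry
        cost = calc_cost(cities, perm)
        if cost < current_cost:
            current, current_cost = perm, cost
    return current, current_cost

random.seed(1)
pts = [(random.random(), random.random()) for _ in range(15)]
tour, cost = search(pts, max_iter=500)
assert sorted(tour) == list(range(15))    # still a valid permutation
```

A full tabu search usually stores moves or edges rather than whole tours, and accepts the best non-tabu neighbor even when it is worse; this version keeps the simpler accept-if-better rule from the profiled code above.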