Description

We continue our journey with the Traveling Salesman Problem (TSP), where we imagine a salesperson who has to travel to every single city in an area, visiting each city only once. Additionally, they need to end up in the same city where they started their journey, and do this in the most efficient manner. However, in this episode, we are going to speed our salesperson up by using a bottom-up approach!

Show Notes

This episode of the Basecs podcast is based on Vaidehi's blog post, The Trials And Tribulations Of The Traveling Salesman from her basecs blog series.

Transcript

[00:00:02] SY: Welcome to the Base.cs Podcast where we explore the basics of computer science concepts. I’m your host Saron, Founder of CodeNewbie.

[00:00:09] VJ: And I’m Vaidehi Joshi, Author and Developer.

[00:00:12] SY: And she is the brilliant mind behind the Base.cs Blog Series. Today, we’re talking about…

[00:00:16] VJ: Speeding up the Traveling Salesman Problem, which we decided last time is the Traveling Salesperson Problem.

[00:00:23] SY: That’s right, Traveling Salesperson Problem. This season of the Base.cs Podcast is brought to you by DigitalOcean and Heroku. DigitalOcean’s cloud infrastructure is optimized to make it super intuitive and easy to build faster and scale easier. The pricing is predictable. They have flexible configurations and excellent customer support. Get started on DigitalOcean for free with the free $100 credit at DO.co/basecs. That’s DO.co/basecs.

[00:00:55] Did you know you can build, run, and operate applications entirely from the cloud? With Heroku’s intuitive platform, you can streamline development using the most popular open source languages, so you don’t have to worry about infrastructure logistics. Also, you’re not locked in to the service. So why not start building your apps today with Heroku?

[00:01:20] Okay. So let’s do a quick recap. What is this problem about?

[00:01:24] VJ: In its essence, the traveling salesperson problem is really just a graph problem, and the issue we’re trying to solve is that we have a salesperson and four cities and they are trying to figure out the most efficient, the least costly way of going and visiting every single one of these cities, but ending back at the same place they started from. And so we can think about that as nodes in a graph and so the graph problem aspect of this is basically visiting every single vertex in the graph only once and then ending back at the vertex that we started at.

[00:02:01] SY: So we had this one salesperson, four cities, and we were able to find the shortest path by just brute forcing it, right? We used recursion, and honestly it seemed pretty straightforward, but this episode is about speeding up the problem, which I guess means that the way we did it was not very efficient.

[00:02:21] VJ: This is accurate. It was a good way to begin, but it’s worth noting that we had four cities that we were trying to visit, but in order to like list out all of the possible paths, which is the approach we took, we basically ended up looking at six paths for four cities total.

[00:02:42] SY: That doesn’t seem too bad.

[00:02:44] VJ: Yes. But imagine if instead of four cities, we had five cities. The problem with this is we won’t do this in this episode, thankfully, but if we were to add another city, if we are going to list out all the possible permutations of all the paths we could take, that would be 24 different paths, which is like all of a sudden, things are way, way, way worse. And actually this like pattern of how many permutations, how many possible paths we have to take as our number of cities, as our input grows, it’s actually factorial time or O of N exclamation point and it is not good. It is very, very bad. It’s worse than exponential time.
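For reference, here is a minimal Python sketch (ours, not from the episode) of the brute-force enumeration being described: fix the starting city and list every permutation of the remaining cities, which gives (n - 1)! possible tours, so 6 for four cities and 24 if a fifth were added.

    from itertools import permutations

    # The four cities from the example graph; W is the fixed start and end.
    start = "W"
    others = ["X", "Y", "Z"]

    # Every tour starts and ends at W, so we only permute the other cities.
    tours = [(start,) + p + (start,) for p in permutations(others)]
    print(len(tours))  # 6 tours for 4 cities; adding a fifth city gives 24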

[00:03:28] SY: That sounds pretty bad.

[00:03:29] VJ: Yeah. So we picked like a simple enough example in the last episode, thankfully, and we still didn’t even really list all of the paths. We sort of focused on one or two just because it’s so hard to do when you’re recording it and you don’t have a visual, listing it all can be confusing, but like imagine if we had five cities and then we were doing 24 paths. It’s not great.

[00:03:49] SY: Okay. So the way that we did that problem, the way we brute forced it is we started with our W and then we said, “Cool, we’ve got three other nodes we could go to,” X, Y, and Z and then we visited each node, and then we said, “Okay, where can we go to next?” And we basically started at the top and then we kind of worked our way down. So what process are we going to use for the not brute-force way? Are we going to follow the same pattern? Are we doing something different?

[00:04:17] VJ: We’re going to take a different approach to solving this problem and I think it’s worth mentioning before we get into it that we actually looked at these two different approaches in a previous episode about dynamic programming. I can’t remember what season, I feel like it was the last season, Season 8, but we were learning about different approaches to algorithm design and the different design paradigms and we learned about dynamic programming and the different approaches you can take. And it turns out the approach we took last time, in the previous episode, where we started with one complex problem and then broke it down into smaller sub-problems, was the top-down approach, and that’s what we used in our brute-force version of this.

[00:05:03] SY: Right.

[00:05:03] VJ: And that’s fine, but as we saw, there are some drawbacks to it. So we know of another approach from our knowledge about dynamic programming and that one is the opposite. It’s the bottom-up approach. So what we can do today is flip it on its head, and instead of doing top-down, we’ll do bottom-up.

[00:05:22] SY: Okay. If I remember that tree correctly, we started with W, we went down to X, Y, Z, then we went down to all those different paths. But ultimately, all the paths ended back at W. So if we do bottom-up, are we kind of still starting at W?

[00:05:39] VJ: Well, if you’re doing bottom-up in this scenario, I think what we need to think about is the smallest possible problem that we’re trying to solve, not so much a specific node, because what we were trying to do last time is we took the node that we were starting with and then we enumerated all the paths and we had the hard problem of how do I get from W back to W and go through all the nodes in the process. In this situation, we sort of need to shift our paradigm to go from focusing on the nodes and enumerating the paths to instead focusing on the function calls, which is a little weird to think about, but that’s why it’s better to talk about top-down first instead of bottom-up because this is like a very different way of thinking about the problem.

[00:06:26] SY: Okay. So you said we’re going to focus on the smallest problem and I guess from that perspective the node W isn’t really the problem. That’s just the node. So I guess the most bottom problem is getting to that W and there are three ways that we could get to that W. We could get to the W from X, from Y, and from Z. So is that where we’d begin?

[00:06:49] VJ: Yeah. And those are actually those simplest function calls. So those are the most basic versions, the simplest versions of those function calls that we can have in order to get to our ending node. That’s why we’ll start with the simple function calls and keep track of their distance to the ending node W, because that is where we eventually want to go. But really, we don’t care about W, because we know the distance to get there is zero. We care about the way that we’re getting to W, or the function call that could get us to W, if that makes sense.

[00:07:22] SY: Yeah. I think that makes sense. So if we say that we can go from X to W, we can go from Y to W, and then finally we can go from Z to W, there is a cost to actually getting to W. So we would want to keep track of that. So it sounds like if I want to see how much it cost me to get from my X to my W, I can use that adjacency matrix again, right?

[00:07:45] VJ: We can definitely use it. We already know the lookup time of that is pretty good, and we used it last time, so we can refer to that again to figure out the edges and the weights associated with each one as we worked through all these function calls.

[00:07:58] SY: So I use my adjacency matrix and I see that X, W costs us six. Y, W costs one and Z, W costs us three. Okay. Now what do I do?
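A quick sketch (ours, not from the episode) of what that adjacency matrix lookup might look like in Python; only the three edges into W have been read out at this point, so the other entries are left as placeholders.

    # Hypothetical adjacency matrix for the four cities, indexed W, X, Y, Z.
    # None marks weights that haven't been read out of the matrix yet.
    W, X, Y, Z = 0, 1, 2, 3
    matrix = [
        [0,    6,    1,    3],     # W row: W-X = 6, W-Y = 1, W-Z = 3
        [6,    0,    None, None],  # X row
        [1,    None, 0,    None],  # Y row
        [3,    None, None, 0],     # Z row
    ]
    print(matrix[X][W])  # 6 (a constant-time lookup)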

[00:08:12] VJ: So what these three function calls represent are the smallest or the simplest function calls that can terminate at our starting node W, and the weight that you just read out for each of them is the cost that it takes to get to W. But these are the smallest possible sub-problems. The next thing we need to do is figure out how we get to any of these three nodes, right? Because we’re trying to figure out which function calls call these nodes, or not nodes, functions, I guess. It’s very hard to not call them nodes, but it’s important to think of them as functions.

[00:08:46] SY: Yes.

[00:08:47] VJ: So basically what we have to do is we’re going to have to keep track of the distance between all of these different nodes as we build up our bottom-up approach. So we’re basically going to build a tree, but instead of one root node W and then enumerating all the paths from it, instead of that, we’re going to start with our three function calls that can get us to W. So there’s three possible paths to W. So we’ll start with those three, and they’re sort of like three root nodes, which is a little weird to think about because that’s not how trees work. But if we think about how those three possible function calls can get us to W, the next question we want to ask is, “Okay, well, which function calls can get us to X? Which ones can get us to Y? Which ones can get us to Z?” Because we’re trying to figure out how to get there, just like we figured out the way to get to W is through X, Y, Z. We need to, for each of those problems, figure out what are the ways to get to X and then Y and then Z.

[00:09:51] SY: Okay. So I remember when we did the traveling salesperson problem last episode, we were keeping track of, I think it was the nodes that we had left to visit. Are we keeping track of anything? I know we’re not really thinking about it in terms of nodes. We’re thinking about it in terms of function calls, but are we keeping track of anything like that for this bottom-up approach?

[00:10:12] VJ: We are. And I’m glad you said that it is function calls, not nodes, because really what we’re keeping track of is the function call that we would call next, if that makes sense.

[00:10:26] SY: Okay.

[00:10:27] VJ: So in the case of X, Y, and Z, our three weird root nodes, if you imagine this as a tree, those are the simplest function calls that can get us to W. So there’s nothing really that you would call next because that’s sort of like our base problem, the simple sub-problem. So because those three sub-problems don’t really have anywhere else to go, the next place is W, that’s sort of it, each of those sub-problems would have a list, but it would be empty because there’s nowhere to go. That’s like the simplest sub-problem.

[00:11:02] SY: No additional function calls.

[00:11:04] VJ: Right. Yeah. Like those are the last function calls, but it’s weird because we think of it as last, but really, if you’re flipping it on its head, they’re the three root nodes of this tree, so they’re sort of the first.

[00:11:16] SY: Yeah.

[00:11:17] VJ: Just remember everything’s backwards. That’s not helpful. Sorry.

[00:11:19] SY: Yeah. They would be executed last, but we’re figuring it out first.

[00:11:25] VJ: Yes. Oh, I like that. That’s a great way of describing it. Yes. So for the Node X, if X was executed last, there is no other function call to call next, so its list is going to be empty. But as we just learned, we can figure out the ways to get to X, and it turns out there’s only two ways to get to X: from Y or from Z. But you might be able to guess what would be in their list.

[00:11:50] SY: Okay. So I’m at X, so we’re going to talk about the first of my pseudo root nodes, and I want to get to X from Y, which I have already figured out cost me four. Well then that means that when I’m at Y, my next function call would be X because I’m about to go visit X.

[00:12:13] VJ: Exactly. Yeah. And similarly, if you had the same thing, like there’s two ways to get to X, right? One of them is Y, but the other one is Z. In that situation, if you’re working backwards where you end at X and you came from Z, you’d need to take into account the cost of going from Z to X.

[00:12:34] SY: And that’s my number three.

[00:12:35] VJ: Yeah, exactly.

[00:12:36] SY: And then I know that if I’m at Node Z, I’m going to go to X. So in my list, I would have the function call X.

[00:12:44] VJ: That’s right.

[00:12:44] SY: Okay. So both Y and Z have the same list.

[00:12:47] VJ: Exactly, yeah, because we’re sort of like saying, “What are the ways for me to have ended up here?” And so whether we go from Y to X or Z to X, both of these are representing the ways that you could have called that function and the function that we’re calling is X or the way to get to X. So that’s why both Y and Z have a list that just has X in it, even though they have different costs because both of them are going to call the function X, if that makes sense.

[00:13:11] SY: Okay. Cool. Okay. So I’m assuming that the next step is figuring out how to get to Y and how to get to Z.

[00:13:19] VJ: Exactly. Yes.

[00:13:20] SY: Okay. So let’s see. I am at Y. I know that I’m about to go to X. That’s my next function call. So that means that in order to get to Y, well, I must have come from Z, right? Because that’s kind of all that’s left.

[00:13:33] VJ: That’s right.

[00:13:34] SY: Okay.

[00:13:34] VJ: Conveniently there’s only one option.

[00:13:36] SY: Yeah. I like this simple routing game that we’ve created. Okay. So I’m coming from Z to get to Y. I’m going to look at my adjacency matrix to see what that costs me. I see that it costs me two and so I know to get from Z to Y, it costs me two. So now I need to figure out what’s going to be on my list at Node Z. So I know that we just talked about going to Y. So in my list, I know I’m going to have my function call Y. Is that it or is there anything else on my list?

[00:14:14] VJ: Well, you’ll remember from last week’s episode that when we did the top-down approach, the list was basically the remaining nodes to visit. In this situation, it is the nodes we will visit sort of.

[00:14:28] SY: Okay.

[00:14:28] VJ: Which basically it’s not actually nodes. I had to keep correcting myself because it’s very hard to not say that, but really what it is…

[00:14:34] SY: I think you like nodes. Function calls.

[00:14:35] VJ: Yes, especially when you think about it in a tree, it’s like it’s a node, but really what we’re doing is we’re creating a tree structure to help us visualize the possible function calls. So in this situation, for each function call, that list represents the functions we will call next. So when we were at, for example, nodes X, Y, and Z initially, the function we’d call next is nothing because you basically pay whatever cost it takes to get to W and then you’re done because we’re doing this in reverse. But in order to get to X, let’s say we took Y, we know that there’s two options actually. You could go by Z or by Y, but let’s say we take Y in order to get to X. Inside of that list, we just have the next function call, which is X. But then if we go one step before that in time, basically we’re at Node Z because there’s only three nodes really, and we’ve already figured out Y and X. So the only option is Z. And that’s what you said earlier. So when you get to Z, basically what you’re saying is, “Okay, what is the function call I’m going to call next?” It’s Y. And then after I call Y, I still have to call X also. So inside of Z’s list, we have Y and X both because you’re representing all the function calls you sort of still have to do.

[00:15:56] SY: So now that we’ve established that at the Node Z, we have two function calls that we’re going to make. We’re going to make Y, we’re going to make X. We know the price we have to pay to get from Z to Y. We know the price we have to pay to get from Y to X. I feel like we’ve kind of figured out that whole function call system, and I assume we have to do that for all the other remaining nodes, right?
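To make the “list of next function calls” idea concrete, here is one hypothetical way to write down the sub-problem just walked through, using the edge weights mentioned in the episode; the variable names and layout are ours, not from the show.

    # Undirected edge weights mentioned in the episode's example.
    EDGES = {("X", "W"): 6, ("Y", "W"): 1, ("Z", "W"): 3,
             ("Y", "X"): 4, ("Z", "X"): 3, ("Z", "Y"): 2}

    def edge(a, b):
        # The graph is undirected, so look the weight up in either direction.
        return EDGES.get((a, b), EDGES.get((b, a)))

    # Simplest sub-problems: at X, Y, or Z with an empty list of calls left,
    # just pay the cost of the hop back to W.
    back_to_w = {city: edge(city, "W") for city in ("X", "Y", "Z")}

    # The larger sub-problem from the conversation: at Z, the list says to
    # call Y, then X, and then X's own sub-problem hops back to W.
    z_y_x_w = edge("Z", "Y") + edge("Y", "X") + back_to_w["X"]
    print(z_y_x_w)  # 2 + 4 + 6 = 12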

[00:16:19] VJ: Yes. So what we’ll end up having to do is basically take our strange function tree with three nodes, three root nodes if we expanded it out and basically said, “For each of these possible paths, what are the functions I have to call to get here?” You would basically get another tree-like structure, but with those three root nodes, and you would see that you actually have some repetition because if you enumerate it out, it’ll still look like six paths, right? But three of those paths are going to be exactly the same as three other paths.

[00:16:54] SY: So really we only have three unique paths.

[00:16:57] VJ: Correct. Yeah. So we have overlapping subproblems, and that’s sort of a sign that like, “Oh, we can use dynamic programming because, look, we already know that it’s not six things we need to do. It’s really three possible routes that we can take.” And so we can condense our tree, and if you could imagine like in your mind, if the six branches of the tree, if the ones that were the same sort of slid together, and suddenly you just had three root nodes that each connected to two children, and then those two children both connected to one sort of base node, those could represent those repeated problems. So instead of having six branches at the end, you sort of have just three that sort of come together.
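A tiny sketch (ours) of the overlap being described here: if you enumerate all six brute-force tours, the very last step of each one lands in one of only three distinct sub-problems, which is exactly the repetition a bottom-up approach gets to reuse.

    from itertools import permutations

    # Each brute-force tour visits X, Y, Z in some order and then returns to W.
    # The sub-problem right before that final hop is just "which city am I in,
    # with nothing left to visit?" and only three distinct answers exist.
    tours = list(permutations(("X", "Y", "Z")))
    final_subproblems = {(path[-1], frozenset()) for path in tours}
    print(len(tours))              # 6 enumerated tours
    print(len(final_subproblems))  # 3 distinct sub-problems they all share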

[00:17:50] SY: Right.

[00:17:51] VJ: You could imagine that like if you knew that there were six paths that were duplicated and you just had three, we could just sort of like make life a little easier and just delete the repeated nodes and just have them all point to the same place.

[00:18:04] SY: Okay. That sounds way more efficient. And the great thing is when we have repetition, we don’t like repetition, right? Because we’re developers. We don’t like to repeat ourselves. We know that we can simplify things and make them a little bit cleaner.

[00:18:15] VJ: Yeah. And when we see our sub-problems, we basically can just cut out our repeated work and we don’t need to walk through the whole example because it’s a lot of trees and nodes and it’s hard to explain sometimes.

[00:18:28] SY: But it’s pretty though. So you should definitely check out the blog post. It’s a very pretty graph that Vaidehi created for us. Yeah.

[00:18:36] VJ: Yeah. And if you use the bottom-up approach, you end up finding out that the most efficient path is exactly the same as with the top-down approach, but you can do it a little nicer. But the truth is I do want to say one thing, which is that you said that it’s a lot more efficient. It’s not that much more efficient in reality. This is just like a hard problem.

[00:18:57] SY: Yeah. Okay.

[00:18:59] VJ: This gives us the same results as our brute-force method, but we don’t need to make six recursive calls and we don’t need to generate a giant tree. And we could even optimize this a little bit further by using memoization for some of the paths, because if you know that the distance from Z to Y is two, you don’t need to look it up all the time, right? You can memoize it. So we could still be a little bit more efficient, but really at the end of the day, this dynamic programming approach, even though it speeds up traveling salesperson, it still runs in exponential time.

[00:19:34] SY: But it’s better, right, than our factorial?

[00:19:36] VJ: It is definitely better than factorial, and in fact, this approach that we walked through today is known as the Bellman-Held-Karp Algorithm, and it speeds up traveling salesperson from O of N exclamation point, which is factorial, to O of two to the power of N times N to the power of two, which is exponential. That’s a little nicer. We still have to like build up a tree and we still need to do some calculations. As you can imagine, if we added another city, like yes, it will be a little bit more efficient, but it’s not linear, it’s not logarithmic. It’s definitely exponential, but it’s better than factorial.
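For readers who want to see the whole thing end to end, here is a compact bottom-up sketch of the Bellman-Held-Karp idea in Python, using the example's edge weights. It tabulates the best cost of "being at a city with a given set of cities still to visit," which is where the O(2^N * N^2) bound comes from; the helper names are ours and this is a sketch, not the episode's own code.

    from itertools import combinations

    # Undirected edge weights from the four-city example.
    EDGES = {("X", "W"): 6, ("Y", "W"): 1, ("Z", "W"): 3,
             ("Y", "X"): 4, ("Z", "X"): 3, ("Z", "Y"): 2}

    def edge(a, b):
        return EDGES.get((a, b), EDGES.get((b, a)))

    CITIES = ("X", "Y", "Z")  # every city except the fixed start/end, W

    # best[(city, remaining)] = cheapest way to visit every city in `remaining`
    # and then return to W, starting from `city`.
    best = {}

    # Base cases: nothing left to visit, so just hop back to W.
    for c in CITIES:
        best[(c, frozenset())] = edge(c, "W")

    # Build up from smaller sets of remaining cities to larger ones.
    for size in range(1, len(CITIES)):
        for remaining in map(frozenset, combinations(CITIES, size)):
            for c in CITIES:
                if c in remaining:
                    continue
                best[(c, remaining)] = min(
                    edge(c, nxt) + best[(nxt, remaining - {nxt})]
                    for nxt in remaining)

    # Finally, start at W with all three cities still to visit.
    tour = min(edge("W", nxt) + best[(nxt, frozenset(CITIES) - {nxt})]
               for nxt in CITIES)
    print(tour)  # 11 with these example weights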

[00:20:17] SY: There we go. So all we can ask for is to improve, do things a little bit faster. I’ll take it.

[00:20:23] VJ: I like your attitude.

[00:20:26] SY: And that’s the end of today’s show. If you liked what you heard, please leave us a review and make sure to check out Vaidehi’s blog post. The link to that is in your show notes. I also want to give a huge shout-out to DigitalOcean and Heroku. DigitalOcean is a simple, developer-friendly cloud platform, which makes managing and scaling apps easy with an intuitive API, multiple storage options, integrated firewalls, load balancers, and more. There’s also a robust community that provides over 2,000 tutorials to help you stay up to date with the latest open source software, languages, and frameworks. Get started on DigitalOcean for free with a free $100 credit at DO.co/basecs. That’s DO.co/basecs. There’s a reason over nine million apps have been created and run on Heroku’s cloud service. They not only manage over two million data stores, but they make over 175 add-on services available to you. So whether you’re creating a personal app, a free app, or an enterprise app, start building today with Heroku. This episode was edited and mixed by Levi Sharpe.

[00:21:35] VJ: Bye everyone.

[00:21:36] SY: Thanks for listening. See you next week.

Thank you for supporting the show!
