In our final look at depth-first search (DFS), we explore how to implement this lovely algorithm in coding terms. We also dig into Big O notation, breaking down how to determine the time and space complexity of DFS.
[00:00:00] SY: (Music) Welcome to the Basecs podcast, where we explore the basics of computer science concepts. I'm your host, Saron, founder of CodeNewbie.
[00:00:09] VJ: And I'm Vaidehi Joshi, author and developer.
[00:00:12] SY: And she is the brilliant mind behind the basecs blog series. Today we're continuing our conversation about...
[00:00:22] VJ: Depth-first search.
[00:00:22] SY: This season of the Basecs podcast is brought to you by our wonderful friends at Twilio. If you haven't checked out their API yet, you totally should. With Twilio, your app can send text messages and make phone calls with just five lines of code. And for a lot of developers, Twilio is the first API they've ever used. And that first time you type a few lines of code, run your program and see your phone light up with the text message? It's kind of magical. And as a thank you for being a podcast listener, you get a promo code for twenty dollars in free Twilio credit. Just text your name to 480-485-4321. You'll also get a link to a quick start guide for your favorite programming language. So text 480-485-4321. Link to that is in your show notes. Ok. Let's get started.
[00:01:12] SY: It's kind of ended up being a little mini series accidentally.
[00:01:15] VJ: Oh my goodness.
[00:01:16] SY: Depth first search is just so deep that...
[00:01:19] VJ: That we're drowning.
[00:01:20] SY: I know. (Laughing)
[00:01:22] VJ: Time moves slowly when you're off on a leaf wondering how deep the tree really goes. Please stay tuned for my upcoming audiobook, a self-help book where I just talk like this the whole time.
[00:01:39] SY: I would buy that. (Laughing) I would buy that. I'm just saying I would buy that. Ok, so let's do a quick recap of what depth-first search—I'm getting better at saying it, by the way. Did you notice?
[00:01:49] VJ: I have, yeah.
[00:01:49] SY: Depth-first search. Hey.
[00:01:51] VJ: It is an algorithm for traversing through—well, technically for traversing through graphs and trees, but we're gonna just talk about it, as we have been, in the context of trees. As the name would suggest, you search through a tree data structure by going deep rather than going broad. So basically the way to think about it is that once you start down a path in the tree, you don't stop searching through that path, through that branch basically, until you get to the end of it, which is usually going to be a leaf node. So effectively you're traversing down a branch of a tree data structure and then getting to the leaf and then working your way back up the trunk of the tree. So you're going deep instead of wide or broad.
[00:02:36] SY: Ok. So what does this look like in code? Because when we were walking through the strategies and seeing how it worked, we were I mean literally like looking at, you know, looking at the tree and saying, "this—you know, we're gonna go here and then jump over here." And I assume that's not how it works when you actually have to code it. So what does it look like in, in code?
[00:02:57] VJ: The first part of the answer to your question is understanding what you would—like how you would represent a node in code. Oh, I like that. Node in code.
[00:03:09] SY: Node in code. Ah.
[00:03:12] VJ: There's, there is some other merch...
[00:03:14] SY: This is such a great episode. This is such a great episode.
[00:03:14] VJ: There's some other merchandising opportunity there, but I don't know what it is.
[00:03:18] SY: That's like our sixth shirt. You're welcome, world (Laughing) Ok. Node in code.
[00:03:25] VJ: Yeah, so to basically understand what a node looks like when you would write it in code, regardless of what language you use—it might seem kind of daunting, but actually it's pretty simple. And we'll think back to the three things that you do with a node. The three things you can ever do with a node are: you can read the data that it has, you can look at its left child if it has one, and you can look at its right child if it has one. So we can often represent nodes as simply as maybe like a hash or a dictionary or some sort of structure that has three pieces of data. One piece of data is going to be whatever actual data it contains. So like for example, if you had a node that had the number one, its attribute called data would be one. And then it may or may not have children, and the two other things that are going to be in that node structure are a reference to a left node if it exists and a reference to a right node if it exists. And that's it. So it's actually not too scary.
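What that "hash or dictionary" style of node might look like, as a minimal sketch in Python (the transcript doesn't commit to a language, and the key names here are just an illustrative choice):

```python
# One node holds exactly three pieces of data: the value it contains,
# plus references to a left child and a right child.
# None means "this child doesn't exist."
node_two = {"data": 2, "left": None, "right": None}
node_three = {"data": 3, "left": None, "right": None}

# The root node, with node_two and node_three as its children.
root = {"data": 1, "left": node_two, "right": node_three}
```

That's the whole structure: data, left, right, and nothing else.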
[00:04:22] SY: That's not too bad.
[00:04:23] VJ: Yeah. You can probably visualize it, too, right? In code...
[00:04:26] SY: Yeah.
[00:04:26] VJ: ...like represent it as?
[00:04:28] SY: Yeah, 'cause I was thinking when we were walking through it, I felt like me explaining it in human English language was harder than, than what it sounds like it would look like in computer languages.
[00:04:40] VJ: Yeah, yeah. So that's the first part. So that's what a node looks like. And trees usually have many nodes, but we wanna search through all of them.
[00:04:47] SY: Yeah.
[00:04:48] VJ: And if we search through a tree structure using one strategy, we're gonna use one strategy for the whole thing. So once we pick a strategy, we're pretty much going to continue to repeat that process. So imagine that we were running a pre-order search on a tree. We would first want to read the data of every single node that we are looking at. So for example, we’re gonna start with the root node. We'll read the data of that node. Then we'll need to do a search through the left reference. And then we'll eventually want to do a search through the right reference. So because we know that a node can only have three things inside of it, that function would actually be not too complicated. And we do know that at some point we're going to run a search with the left reference, run a search with the right reference and also read the data. The difference between those three strategies is the order in which you do those things.
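Sticking with the dictionary-style node described a moment ago (with "data", "left", and "right" keys, which is just one possible representation), the three strategies really are the same three steps in a different order. A rough sketch:

```python
def preorder(node):
    """Data, then Left, then Right (DLR)."""
    if node is None:
        return []
    return [node["data"]] + preorder(node["left"]) + preorder(node["right"])

def inorder(node):
    """Left, then Data, then Right (LDR)."""
    if node is None:
        return []
    return inorder(node["left"]) + [node["data"]] + inorder(node["right"])

def postorder(node):
    """Left, then Right, then Data (LRD)."""
    if node is None:
        return []
    return postorder(node["left"]) + postorder(node["right"]) + [node["data"]]
```

The bodies are identical line for line; only the order of those lines changes.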
[00:05:47] SY: That's not too bad.
[00:05:48] VJ: Yeah. And I think that one of the cool things is you actually really only need to write one function no matter which strategy you use because you're going to do the same three steps not just for the root node, but for every other node in the tree, right? So basically you can write a recursive function where you define a function, and then inside of it you call itself again.
[00:06:16] SY: Ok, so where, where would that happen? How do I get from that first node, which is—yeah, which is always the root node, right? And like at, at what point would I be recursing—recursing? Is that a word?
[00:06:32] VJ: Sure.
[00:06:32] SY: At what point would I be calling self? (Laughing) And, and inceptioning myself? At what point would I do that?
[00:06:41] VJ: Where does the dream end? Yeah.
[00:06:42] SY: Yes. (Laughing)
[00:06:44] VJ: So actually let's, let's keep going with that pre-order search example. So...
[00:06:47] SY: Ok, ok.
[00:06:47] VJ: ...let's say that we have a function that's called pre-order search. We know that the first thing we'll do is pass that function—the root node, right? 'Cause we have nothing else to go off of. That's, that's the only piece of data.
[00:06:58] SY: It's all we got.
[00:06:58] VJ: So the first thing we'll do probably is—the most important step that's very easy to forget is you wanna check that the node actually exists.
[00:07:05] SY: Oh yeah. That's a good point.
[00:07:07] VJ: Because if the node doesn't exist, that means the tree is empty. But also remember...
[00:07:11] SY: Sounds a little sad.
[00:07:13] VJ: I know. It could be...
[00:07:14] SY: An empty tree.
[00:07:14] VJ: Empty tree is...
[00:07:15] SY: Dead on the inside. (Laughing)
[00:07:23] VJ: Sure, we can take it there.
[00:07:24] SY: Like my soul.
[00:07:27] VJ: Well it's, it's—part of it is 'cause the tree could be empty, but the other...
[00:07:31] SY: Ok.
[00:07:31] VJ: ...thing is remember that this function that we write because we're gonna be, you know, clever and do some recursion, we need it to work for every single node that we pass into it. So initially we'll pass in the root node, but at some point, we could be way down in the leaves and get to the end. And like if we don't check that there is a node or not, we're not giving a—like our recursive function a base case. And base cases are so important in recursion because that's when you tell a function, "hey stop calling yourself. Please stop. (Laughing) We're done." Otherwise...
[00:08:05] SY: You look like an idiot. We're finished here. (Laughing)
[00:08:08] VJ: Yeah. You have to tell the recursive function like what is the scenario in which you must stop doing what you're doing and exit out.
[00:08:15] SY: Ok.
[00:08:16] VJ: And that is like a little reference to our, to a previous episode we had in like—I don't know, which, which season? Was it one? Two? I don't know. I've gotten so old.
[00:08:24] SY: There's just so many.
[00:08:27] VJ: But there was, there—we had an episode about stack overflows. And like I remember we were talking about like bacon and stuff. Good episode. You should go back and listen to it...
[00:08:34] SY: So good.
[00:08:34] VJ: ...if you haven't. Basically, if you don't tell, if you don't tell a function when to stop, it'll just keep calling itself if it's recursive. So checking that a node exists or not is how you make sure we don't end up in a stack overflow situation.
[00:08:47] SY: Ok.
[00:08:47] VJ: So we'll check that a node exists. Then because we're doing pre-order search, the first thing we wanna do is read our own data. So depending on what language you're using, you could be like printing it out or putting it on a line. Whatever. Just logging it. It doesn't really matter. But you're gonna read the data of the node that you pass in first, and then you wanna look at the left node and look at the right node, but then you wanna do the same steps with that, too, right? For the left node you're gonna wanna read the data, look at its left, look at its right. And for the right node, you eventually are gonna wanna do that also. So what you can actually do is call the same function that you've just defined inside of that function, and you can pass it the left node and pass it the right node.
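Those steps, put together, might look something like this sketch (Python here, though any language works; the names `Node`, `preorder`, and `visit` are illustrative, not anything the episode prescribes):

```python
class Node:
    """A node: its data, plus optional left and right children."""
    def __init__(self, data, left=None, right=None):
        self.data, self.left, self.right = data, left, right

def preorder(node, visit):
    # Base case: no node here (an empty tree, or a missing child).
    # Without this check, the recursion would never stop.
    if node is None:
        return
    visit(node.data)             # 1. read our own data first
    preorder(node.left, visit)   # 2. run the same steps on the left child
    preorder(node.right, visit)  # 3. run the same steps on the right child

out = []
preorder(Node(1, Node(2), Node(3)), out.append)
# out is now [1, 2, 3]: the root, then the left child, then the right
```

The function calls itself on the left and right references, and the `None` check is what eventually stops the whole thing.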
[00:09:28] SY: Oh. Ok, so when—this whole time when we've been saying we wanna look at the left, look at the right. In code terms, what we're really saying is we want to call the function that we are already inside of on the left and then call the function that we are already inside of on the right node.
[00:09:48] VJ: Totally. Yeah. And then...
[00:09:48] SY: Whoa.
[00:09:49] VJ: Yeah. Yeah, it's kind of wild 'cause it's kind of like a, an on—it's like a Russian doll situation or like an envelope...
[00:09:55] SY: Yeah.
[00:09:55] VJ: ...within an envelope where for each node, you know you're gonna have a bunch of function calls inside of it which will have more function calls inside of it, but it's all the same function.
[00:10:04] SY: Yeah.
[00:10:04] VJ: And the reason it can work like that...
[00:10:06] SY: Oh, man.
[00:10:06] VJ: ...is 'cause it's the same steps you're taking each time.
[00:10:13] SY: Ok, so that makes sense 'cause when we were walking through the, the node hopping that we were doing, when we were just, just walking through it out loud, we kept saying, "ok, now we're on the left. But in order to do the left, we have to check the left's left. And then now that we're on that node then we have to check the—and we just kind of kept going without resolving anything.
[00:10:32] VJ: Yeah.
[00:10:32] SY: So that's, that's what we're talking about now in terms of the code. Like the, the not resolving it part happens because in investigating that left node, we have to now call its own function on the left node, and that calls it on its left node. Then that calls it on its left. And then it just kind of goes all the way down till we hit our leaf.
[00:10:54] VJ: And then what do you—like I bet you can imagine what happens. Like what would you guess happens when you get the leaf? Like what part of the function fires and does the resolving?
[00:11:05] SY: Ok, so if I am all the way at the leaf, and I am kicking off my function all over again, then... Ok, so I just got my leaf. So I'm gonna say, "is node there? Yes, node, node is there." So I'm not exiting yet. And then I would say, "ok, pre-order search node.left?" And it'd be like, "nope." 'Cause that's, that's—oh, oh, oh, oh. I lied. I lied. (Laughing) It would call my pre-order search node.left, and then we would kick off that function again this time for, for the, the left node of the leaf, which doesn't exist because it's a leaf. So at the start of that new function I just called, it will say, "if node is null, return." And there is no node, so if node is null returns true, which means it will return, which means the pre-order search node left check that we just kicked off finally resolves itself and returns back to my leaf's pre-order search function call.
[00:12:07] VJ: Yep. And the same...
[00:12:07] SY: Yeah.
[00:12:07] VJ: ...thing would happen to the right, too. And so you've just...
[00:12:09] SY: Yeah.
[00:12:09] VJ: That's, that's the important little bit of checking that the node exists 'cause not only does it prevent a stack overflow, but that's like, that's your base case. That's how you resolve that whole tree...
[00:12:19] SY: Yeah.
[00:12:19] VJ: ...situation that we were talking about last week. And that's so important because that's how you work your way back up to the trunk, right?
[00:12:28] SY: Yeah, so I would do the same thing on the right. So then on the right I would go, "ok, I need to pass my right node into my pre-order search function." I would do that, and then it would say, "node, are you there?" Node is not there. Returns, which means I'm, I'm now out of my function. Now I'm back at my leaf. And now that I'm back inside my leaf, that's the end of the leaf's function call. So then that resolves. And now it gets kicked up to the leaf's parent node? Right?
[00:12:57] VJ: Yep.
[00:13:02] SY: Yep, parent node. And then if at that point we were in my—yeah, at that point, we'd probably be in like the left child, the left node, right?
[00:13:08] VJ: Probably, yeah.
[00:13:09] SY: So if you are the left one, then—or yeah, if that was the left one, then you'd have to go to the right one next. Ok. And then you—yeah, you just keep calling functions if, until you get to a leaf. And then once you're done, you, you resolve yourself, and you go back up.
[00:13:26] VJ: Yep. And it's kind of like you can imagine that there are like little boxes within boxes or the Russian doll image that I like.
[00:13:33] SY: Yeah, yeah, yeah.
[00:13:34] VJ: It's like once you get to the littlest one, you're like all right well nothing left to open here. I'm gonna put it back inside the, the box it came in. Close that, and put that back inside of its parent box and close that. And eventually you like kind of zip your way back up to the top.
[00:13:53] SY: Yeah. Huh. Very, very cool. Ok. So all of that sounds like a lot of work. (Laughing) It was, it was, you know, it's why it's taken us three episodes to cover depth-first search. There's a lot going on. It, it's pretty hard. How hard is it?
[00:14:08] VJ: When we talk about how hard an algorithm is, what people will categorize as something being difficult or taking a lot of work has to do with Big O notation, which we've talked about in previous episodes. What we really want to know is how hard is it in terms of how much time it takes and how hard is it in terms of how much like memory and space it's going to take to do this algorithm. And you can kind of evaluate algorithms based on time and space. And that's why Big O notation is also called the space time complexity of an algorithm. But I think you can also evaluate it just in terms of time or just in terms of space.
[00:14:48] SY: Ok. So in today's day and age of modern computing, do I care more about the time? Do I care more about the space? Am I optimizing for doing well on both?
[00:15:01] VJ: I think what you really wanna do is try to, try to find a decent, reasonable solution to both because you do care...
[00:15:07] SY: Ok.
[00:15:07] VJ: ...about both time and space. But sometimes what a lot of people will do is when they're writing algorithms or they're fusing, you know, two different algorithms together and trying to leverage both parts of them, which does happen, what you want to do is like try to find the best algorithm that works in terms of not just the data structure you're using but the problem you're trying to solve. So sometimes you might have more space that you're ready to allocate and so space is like, you know, it's okay if it's not the best. And sometimes you might not care about it being super fast. And so at that point, you might pick something that has really good space complexity and maybe is a little slower. So it's, it's always...
[00:15:45] SY: Yeah.
[00:15:45] VJ: ...a tradeoff. But generally, you do wanna care about both because an algorithm that doesn't take up very much space but then just takes forever isn't great either.
[00:15:54] SY: Yeah, that's true. That doesn't sound like fun. Ok so going back to depth-first search. I think one of the things that's really weird about that algorithm is it feels super risky, right? Because if my, my number, the number that I'm looking for is in that very, in, in the furthest left-most, you know, part of the, the tree. And I'm picking my—let's say I'm picking my, one of my three strategies, my pre-order strategy, for example. Even if it's a leaf, but it's in that first, you know, path, right? Then I can get there pretty fast, right? So depth-first search sounds awesome. But if it's the leaf on my furthest right one, and it's, you know, one of the last ones that maybe I, I go through, then that kinda sucks. (Laughing) And depending on, you know, like how many, how many kids I have, and, you know, all this. Like it just, it just feels like it could be super fast or take for, forever.
[00:16:52] VJ: Yeah.
[00:16:53] SY: So when we're trying to quantify it in terms of Big O notation, how would we do that?
[00:17:01] VJ: So yeah, it is risky exactly as you mentioned. And so when we talk about Big O notation, we're usually talking about it in the context of the worst case.
[00:17:07] SY: Ok.
[00:17:08] VJ: So you evaluate any algorithm's Big O notation imagining hey say I do want to look for the rightmost leaf all the way deep down, what is the worst situation for me? And that's kind of like how you evaluate because that's, that's kind of the worst-case scenario for any algorithm. Obviously, best case is like you're looking for the root node. Ta-da, you found it.
[00:17:29] SY: Oh yeah.
[00:17:31] VJ: But...
[00:17:31] SY: Congratulations.
[00:17:31] VJ: But you always want to like think of it in the context of very worst-case what's gonna happen because...
[00:17:38] SY: Yep.
[00:17:38] VJ: I don't know what that says about computer science, but planning for the worst. But that is how you evaluate Big O notation. So in terms of talking about how much time something takes or how much space, the answer is always in terms of how much time if we're looking at the worst-case scenario? And how much space if we're looking at worst-case scenario?
[00:18:00] SY: So in terms of time, the question is yeah, how, how long would it—if we're looking for the, the last, the last thing, then how long would it take to get to that very last thing, right?
[00:18:11] VJ: Yeah. So if you had a tree with like ten nodes and you're looking at the worst-case scenario, the worst-case scenario is probably the right-most leaf. So like say you had the numbers one through ten, and like you're looking for ten. And you had to search—say you did like an in-order search where you are looking at the left and then the node's data and then the right, which is LDR. In that situation, if you're looking for the largest thing, the rightmost leaf all the way at the bottom, you are gonna have to search through everything that comes before it. So if you're looking through numbers one through ten, you have to search through numbers one through nine before you get to ten. And that's the worst case. The worst case is I'm looking for the number ten, and I had to search through everything before that. And so the answer in terms of the time complexity is well, in the worst case, you could potentially look through every node in the tree.
[00:18:59] SY: Yeah.
[00:19:00] VJ: The best case is hey, you're looking for the first one, yeah. But worst case is you're looking through every node. Ten is...
[00:19:06] SY: Ok.
[00:19:06] VJ: ...the number of nodes. And so the way that we would talk about that is the, the time complexity of depth-first search is O of N, where as the number of nodes grows, the amount of time it'll take you...
[00:19:18] SY: How long it takes. Will grow.
[00:19:19] VJ: ...will increase. Yeah.
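That worst case can be made concrete with a small sketch (hypothetical names, building on the node shape from earlier): searching a seven-node tree for the value this strategy happens to reach last means reading every one of the seven nodes, which is why the time complexity is O(n).

```python
class Node:
    def __init__(self, data, left=None, right=None):
        self.data, self.left, self.right = data, left, right

def find(node, target, visited):
    """Depth-first (pre-order) search for target, recording every node read."""
    if node is None:
        return False
    visited.append(node.data)  # we had to read this node to check it
    if node.data == target:
        return True
    return (find(node.left, target, visited)
            or find(node.right, target, visited))

tree = Node(1, Node(2, Node(4), Node(5)), Node(3, Node(6), Node(7)))
visited = []
found = find(tree, 7, visited)  # 7 is the last node pre-order reaches here
# visited is [1, 2, 4, 5, 3, 6, 7]: all n nodes, the O(n) worst case
```

Best case, the target is the root and we read one node; worst case, we read all of them.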
[00:19:22] SY: Ok. That makes a lot of sense. Ok, so what about for space complexity? What do we look at for that?
[00:19:33] VJ: This is a good one. In terms of the space complexity, recursion ends up being a big part of that because as you'll recall, every time we recurse in a function, we have to add something to the stack...
[00:19:44] SY: Yes.
[00:19:44] VJ: ...because every function allocates space, and every time you call a function, you allocate a stack frame. So if you have a recursive function and you're going very, very deep inside of the tree, then every node you check is adding another function call to the stack. So you can imagine that if you are looking through a tree, and it's very, very deep. And you go all the way down to the leaf, but say the leaf is like 50 nodes deep. Then the worst-case scenario, you're looking for a node that is a leaf, and it just takes you a long time to get there. Because as you mentioned, it's a lot of work, right? So you could be going very, very deep and then finally find the node you're looking for, but you had to go all the way down this branch to get there. And along the way you had to allocate all this space for every single recursive function call. And now your call stack is—actually, maybe you can tell me this. What do you think the call stack correlates to?
[00:20:43] SY: So if I'm starting at my root node, then I have my very first function call, which is my pre-order search, and then I'm passing in my root—no, let's just call it node one. And then within that, I know that part of the function is looking at my left node and calling the pre-order search function on that left node, which in this case we'll call node two. So now I have two stack frames. I have my, my one from node one and then my new function call from node two. And then within my node two function call, it's going to say, "ok, I need you to look at my, my left node." So let's assume like there was a right node from the root, but we aren't on that one yet. And if that one was three, then the left node of my node two would be node four. So then now, I'm calling the pre-order search function on node four. So I have added a third stack frame onto my call stack. And as we know, as we can probably guess, in my pre-order search function call from node four, I'm gonna say, "ok, I need to look to my, you know, left node." And that's gonna call a new pre-order search function call on its left node. And that's my fourth and final stack frame added to the call stack. And if that is my, my leaf, then that's it. That's the last one. So I have four stack frames that I need to resolve, I guess, that I need to, to go through to be able to get to my leaf.
[00:22:22] VJ: Yeah, exactly. The important bit about that is that that represents an amount of memory, right? That's how much memory your...
[00:22:26] SY: Right.
[00:22:27] VJ: ...your machine needs, your—it's how much memory is on the stack for that one function. So when you had to go four functions deep, which basically means like four nodes deep, I guess, you had to allocate four function calls' worth of memory in order to even get to that point where you could check it. So imagine how much memory that would have required if instead of four you had forty.
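That walkthrough can be instrumented directly. In this sketch (hypothetical names; the unnamed leaf from the example is labeled 5 here), each call passes down how many frames deep it is, and the chain root, then node two, then node four, then the leaf peaks at four frames:

```python
class Node:
    def __init__(self, data, left=None, right=None):
        self.data, self.left, self.right = data, left, right

deepest = 0  # the most stack frames we ever had at once

def preorder(node, depth=1):
    """Pre-order search that tracks its own recursion depth."""
    global deepest
    if node is None:     # base case: the null check doesn't visit a node
        return
    deepest = max(deepest, depth)
    preorder(node.left, depth + 1)
    preorder(node.right, depth + 1)

# The tree from the walkthrough: 1 -> 2 -> 4 -> a leaf (labeled 5),
# with 3 as the root's right child.
tree = Node(1, Node(2, Node(4, Node(5))), Node(3))
preorder(tree)
# deepest is now 4: four function calls' worth of memory at the peak
```

Make the chain forty nodes long instead of four, and the peak is forty frames.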
[00:22:48] SY: Whoa.
[00:22:49] VJ: Right?
[00:22:49] SY: Got it. Yeah.
[00:22:50] VJ: So the space complexity of depth-first search is really closely tied to that.
[00:22:59] SY: Ok, but there are other leafs, right?
[00:23:01] VJ: Yeah.
[00:23:01] SY: Like there are other, other leafs, Jesus Christ. There are other leaves, right? (Laughing)
[00:23:07] VJ: Yeah.
[00:23:08] SY: There are other leaves, so even though for the, the very—to get to that first leaf I had to make four function calls, which means four stack frames. If that leaf is not actually the node that I'm looking for, the data I'm looking for, I will still have to call more functions and therefore have more stack frames. But I guess the, the key is that to get to that leaf and to kind of like get back up and call like a, a new one, I will have resolved it. So I'll basically have—I'll be like removing and kicking off these stack frames.
[00:23:47] VJ: Exactly. You'll, you'll be...
[00:23:48] SY: Is that it? Ok.
[00:23:48] VJ: ...popping those basically...
[00:23:51] SY: Yeah, as they get resolved.
[00:23:51] VJ: Yeah. You'll be popping those off the stack. So it's never gonna be—like your stack frames are never going to be the number of nodes you have because the way that we kind of crafted this function is that you're only going to recurse through the children, and you're only going to recurse 'till you get to a leaf. So pretty much...
[00:24:08] SY: Yeah.
[00:24:09] VJ: ...the worst situation in terms of memory for you is you have a very, very deep tree. And in that case, depending on how deep you go, you could have lots of stack frames. But deep is actually not even the right word for it. The real technical term is the height of a tree, which is the distance between the root node and its furthest away leaf.
[00:24:34] SY: Ok. That makes sense. So in the process of, you know, trying to find this one node, I could call the function, you know, 20 times, 30 times. I could call it a million times. But, but the most amount of stack frames that I would have at one time is the height of the tree, and I would never have more than that that I would need to have on my call stack at once.
[00:24:53] VJ: Totally. Yeah. The, the height of the tree will tell you, "oh, this is like the deepest recursive function call that I could have," which means this is the most amount of memory that I could need for this algorithm at any given time. So if you have a tree that's like five levels deep, ok like five recursive calls is not terrible. But if you have a tree that's like 5 million levels deep, that, that could, that could be a lot of memory that you don't even realize you require if you end up searching...
[00:25:22] SY: Yeah.
[00:25:22] VJ: ...through a deep part of the, the tree, maybe the...
[00:25:26] SY: Yeah.
[00:25:26] VJ: ...furthest away leaf. You could be looking for that node and then find yourself in a bad situation (Laughing) out on a branch, so to speak.
[00:25:35] SY: Ok. So, so basically if I have a couple of different leaves but the first leaf I hit I, I only need to go through, you know, three levels to get there, then in that situation my call stack is only three stack frames. But if I have my one, my one trouble leaf, who is ten levels in and then that would mean that the worst-case scenario is that my call stack would be ten stack frames...
[00:26:04] VJ: High?
[00:26:04] SY: High? I guess. (Laughing) So that goes back to the whole point of Big O notation trying to capture the worst-case scenario. So best-case scenario, I only, you know, need one stack frame or maybe three stack frames, but because of that height, because of that one pathway that has ten different levels, I need to be ready in terms of memory. I need to be ready to have ten stack frames in my call stack.
[00:26:29] VJ: And the way that you would talk about that is you, you could say that in terms of space complexity, the Big O notation of this depth-first search algorithm is O of H. And H usually is used to represent the height of a tree.
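That h can be computed with the same kind of recursion (a sketch assuming the node shape from earlier, counting height in levels so a lone root is 1 and an empty tree is 0; some texts count edges instead, which shifts everything by one):

```python
class Node:
    def __init__(self, data, left=None, right=None):
        self.data, self.left, self.right = data, left, right

def height(node):
    """Height of the tree in levels: 0 for an empty tree."""
    if node is None:
        return 0
    return 1 + max(height(node.left), height(node.right))

# A lopsided tree: the left branch goes three levels deep, the right just two.
tree = Node(1, Node(2, Node(4)), Node(3))
# height(tree) == 3, so a depth-first search of this tree never needs
# more than about three stack frames at once: that's the O(h) in space
```

No matter how many total nodes the tree has, the call stack never grows past the deepest root-to-leaf path.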
[00:26:47] SY: Ok. And with that, we have finally completed depth-first search. And I think we got to the bottom of things.
[00:26:54] VJ: Ha, ha, ha, ha. (Laughing)
[00:26:58] SY: Which means that next episode, we're going to talk about a different type of search. We're gonna get into breadth-first search. Question: are they at all related? Can we transfer some concepts and ideas from depth-first search into the breadth-first search (Music) world?
[00:27:12] VJ: Definitely. In fact, I think we're more...
[00:27:14] SY: Oh, thank God.
[00:27:14] VJ: ...well suited to learn about breadth-first search because we've already covered so many of the basic things about...
[00:27:18] SY: Awesome.
[00:27:18] VJ: ...nodes and stuff.
[00:27:20] SY: Oh, that's wonderful. Well if you liked what you heard, please leave us a review and make sure to check out Vaidehi's blog post. Link is in the show notes. Also wanna give a huge shout out to Twilio for sponsoring the season. Make sure to check out their awesome API. To get your twenty dollars in free Twilio credit, text your name to 480-485-4321. Phone number and link are in your show notes. Vaidehi, you wanna say good-bye?
[00:27:45] VJ: Bye, everyone.
[00:27:45] SY: Thanks for listening. See you next week.
[00:27:49] SY: But the most—but the highest, the highest? The highest. (Laughing) The biggest.
[00:27:54] VJ: Tallest.
[00:27:55] SY: What? (Laughing) The tallest.
Thank you for supporting the show!