[00:00:05] SY: Welcome to the CodeNewbie Podcast where we talk to people on their coding journey in hopes of helping you on yours. I’m your host, Saron, and today we’re talking about A/B testing with Leemay Nassery.

 [00:00:16] LN: If you are not A/B testing, you can. If you’re starting from zero and trying to go to one, two, three, start small. Evaluate what you already have at your company. You don’t have to have a sophisticated, overwhelming infrastructure platform just to get started with this experimentation methodology.

 [00:00:33] SY: Leemay talks about her career thus far and all things A/B testing after this.

 [MUSIC BREAK]

 [00:00:45] SY: Thank you so much for being here.

 [00:00:46] LN: Thank you. Happy to be here.

 [00:00:48] SY: So what first got you interested in code?

 [00:00:50] LN: Well, I studied it in college. I majored in computer science. So that was probably the first intro to it. I wasn’t one of those humans that programmed in high school or before that. I started in college. I was originally a physics major for the first semester and I wanted to be in engineering because the university I went to was predominantly engineering, and I looked at all of the engineering disciplines, like mechanical, chemical, all the ones that you typically hear of, and I thought computer science would be the easiest.

 [00:01:22] SY: Interesting. So it was very strategic. I respect that. I respect that. Very strategic. I like that. What was your experience like doing undergrad in CS?

 [00:01:32] LN: It was hard. I’d say, again, because I didn’t program prior to college, or really until my freshman year of college, and I feel like a lot of the folks that were my classmates did. So I felt a little bit behind, but I survived. I made it.

 [00:01:46] SY: Nice. And once you graduated, did you know what you wanted to focus on or did you have anything particular in mind after college?

 [00:01:53] LN: Oh, that’s a great question. No, I had no idea. I just pretty much did a bunch of interviews, got a few jobs, picked the best one. I think I probably picked the one that had the highest salary.

 [00:02:03] SY: Nice. Again, very strategic. I love the strategic side of you. This is great.

 [00:02:08] LN: Strategy, no strategy. No. I think I started working at Comcast. I remember going in with the mindset that I’m going to do it for a year, and if I don’t like it, because I didn’t really enjoy university that much, then I’m just going to pivot. But I found writing code or developing software in the real world was way more interesting than school.

 [00:02:28] SY: What made it more interesting?

 [00:02:29] LN: You’re solving real problems. You’re not building a binary search tree from scratch, or you’re not building a compiler that’s due on like Halloween night. The deadlines are very different. I was lucky with the job I worked at straight out of college. I stayed there for nine years, so I really enjoyed it until I didn’t, which was like the ninth year. But I worked on really cool projects and they were all user facing and they were related to content discovery, video, TV, so it just made the job so much more compelling because you’d make a change and then you’d essentially see that change on your TV for millions of people to engage with.

 [00:03:06] SY: That’s cool.

 [00:03:07] LN: So like the problems you solve I think made it a career that I really wanted to stick with.

 [00:03:12] SY: So staying at any job in tech for eight, nine years is quite a long time. That’s like a lot for our industry. We’re pretty famous for job hopping and kind of sticking around for maybe a couple years and then moving on to something else. What was it about either Comcast or your experience there that made you stick around for so long?

 [00:03:29] LN: Yeah. I think it’s a combination of luck and my own curiosity to move around in the company. I’d say I was on a different team every two years. I got to work on, I felt like, almost every part of the video experience. So from building a front-end web app that was used internally to create the editorial experiences that you’d see on the TV, to building the back-end web services that served content for users to engage with, to working on a personalization platform, to building an A/B testing platform. It just felt like I was having new experiences very often, and I also had very good managers that supported me in that. So there was no reason to leave. Like I felt like every two years it was kind of like a new job, which is hard.

 [00:04:15] SY: Yeah.

 [00:04:15] LN: Like you have to seek those out. Like you can’t just wait for them to come. You know?

 [00:04:18] SY: Absolutely. And you went from being a software engineer to a senior software engineering manager to director of software engineering. So you definitely had plenty of different roles. What’s one thing that sticks out to you that you think played a large role in helping you advance in your career and move up and get those promotions and get those new opportunities even within the same company?

 [00:04:38] LN: There are probably many variables that contributed to that, but two come to mind. One, taking risks. I was unlike a lot of my peers in that I was okay jumping to another team. It’s really easy to stay and be comfortable, especially because there’s a lot of uncertainty. Your new manager may not be that great, or the project may not be as interesting as what you originally worked on. So I think taking risks and being okay with the uncertainty of each new team or each job was a variable. And the other one was I feel like I got really lucky. I started working at the company when X1, this is the video product I worked on, had only 900 users, and I was an Engineer 1 when it was 900 users. And as I grew my career, from ENG 1 to 2, to 3, to manager, to director, the platform grew too. So it went from 900 users to 15 million users. So I just happened to be on those good projects that were relevant to the business.

 [00:05:39] SY: Right. Right. Very cool. So after Comcast, you went to Spotify, then you went to Dropbox, Etsy, and then back to Spotify. What was it about Spotify that brought you back to the company?

 [00:05:50] LN: I love Spotify. Everything about the company brought me back. I mean, it’s a great company to work for. The product is lovely. You can’t not smile when you think of the Spotify product and the podcasts and the music that you listen to there and that you have an affinity for.

 [00:06:04] SY: It really is a phenomenal product.

 [00:06:05] LN: Yeah, I love the product. So that’s a big part of it. You can’t not be happy working on a product like this. But also the culture, the people. I missed the people. There’s something special about the folks here. And I’m saying this because I came back. It’s not like I was there and then I was like, “Oh yeah, I found this at the other companies too.” The other companies were fine. But I don’t know, it’s hard to explain. I really enjoy working here and the work that we do and the people that do it. You can work on a great, interesting technical project, but if the people you work with aren’t that great, it’s not that cool anymore.

 [00:06:42] SY: What’s really awesome about your experience is that you’ve worked at some pretty big companies, and I think that when you work at these big companies, it’s easy to kind of get lost doing things that maybe you never see the light of day or maybe feel so small that they don’t really feel impactful, but it feels like you’ve been able to do some really cool projects. What is one of your proudest moments? What’s one of the coolest projects you’ve done across your career?

 [00:07:03] LN: Yeah. I’d say probably the most proud was building a 4U homepage from scratch at Comcast with a very small team. I was a manager at this time, so I couldn’t say that I contributed to the code. I did write a little bit of code for it, but I managed a six-person engineering team, six wonderful individuals, and we launched a 4U homepage on a product that didn’t have it. The video product was primarily editorially driven, so the promotions that you would see were picked by humans, and we were shifting it to be chosen by algorithms. Not the promotions, but the recommendations. And that was just a fascinating ride, launching something from scratch. It’s not something that you get at most companies. Usually, it’s incremental updates.

 [00:07:46] SY: Right. Exactly. So in your career, you’ve been both a manager and you’ve also worked as an engineer. What do you personally like better? Do you like the engineering part or the managing and overseeing others?

 [00:07:57] LN: Definitely the management aspect. I think your reach to do good is far wider. I mean, you really can have an impact on your team and your organization’s culture that you can’t have as much as an engineer. I did enjoy being an engineer. I did enjoy solving problems and launching features and pushing code into production and seeing that reflected on the TV. That was awesome. But I think management’s a lot more fulfilling. But you don’t see it until you look back a year later. You’re like, “Oh, wow! We were in a very different place.” Whereas when I was an engineer, I felt wins a lot more often, if that makes sense.

 [00:08:37] SY: Yeah, that does make sense. So let’s get into A/B testing. How would you describe A/B testing to someone who’s never heard of it, never heard that term before, has no clue what it means? How would you describe it?

 [00:08:48] LN: All right. The easiest way to describe it, I’d say you have Version A. Version A can be like a new feature that you’re adding to your website, your product, your mobile app, and then you have the existing version. So it’s what’s already in the hands of your users and you want to compare the new version to the existing. You want to see the impact of that change on a subset of users.

 [00:09:11] SY: And what does that process actually look like? When you have these two versions, how do you go from having them on your computer, I guess it would be, to actually putting them in the hands of users and seeing the results and analyzing those results?

 [00:09:26] LN: There’s a lot to it, but let’s say you have the most basic A/B testing platform that’s running one experiment at a time. The way you’d go about it is you have your sample of users, your user population, so it’s all the users that use the technology or the software that you’ve built. And then you randomly sample users from the base population. You do that twice. You do one for the control, and the control gets the existing experience, and then you do it again for the treatment, or the variant, the different thing that you want to test. And then once you have the two groups of users, you make your platform show those different experiences to those groups, and then you let the test run for some amount of time. And obviously there are some statistics behind it for how long you should run it, how many users should be exposed to the control and test experiences. Then you end it and then you do some data analysis. That is the most basic way of describing it. There’s a lot more to it, but that’s the layman’s version.
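
To make that flow concrete, here is a minimal sketch of the sampling step in Python, assuming the user base is just a list of IDs (the function and names are hypothetical, not from any particular platform):

```python
import random

def sample_groups(user_ids, group_size, seed=2024):
    """Randomly draw a control group and a treatment group from the user base.

    Hypothetical sketch: `user_ids` is the full population; `group_size`
    is how many users each group gets. Seeding makes the draw reproducible.
    """
    rng = random.Random(seed)
    drawn = rng.sample(user_ids, group_size * 2)  # no user lands in both groups
    control = set(drawn[:group_size])        # keeps the existing experience
    treatment = set(drawn[group_size:])      # sees the new version
    return control, treatment

control, treatment = sample_groups(list(range(100_000)), group_size=5_000)
```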

 [00:10:20] SY: So tell me about what tools go into it when it comes to actually deploying the two versions. What’s the technical implementation of that, of that design that you just described?

 [00:10:31] LN: Yeah. Like at a really high level, if you had an A/B testing platform that you were building from scratch, I’d say you need some software that identifies the users to be in the experiment. So you could call that like your user segmentation service or your variant allocator. So that’s one piece, the software that picks the users. The other software in your A/B testing platform would be the data pipelines that compute the metrics. And so those metrics are computed for each of those two user groups. And then you’d have the software component that essentially determines which users get which feature. So like that could be somewhere in your front-end code that says, “Okay, this button, let’s say, that should go to this user group, and then this other button should go to this other user group,” kind of like traffic directing.
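
One common way to build that user-picking piece, offered here as an illustration rather than the specific design she describes, is deterministic hashing: hash the user ID together with the experiment name, so the same user always gets the same variant without the platform storing any state. A rough sketch with hypothetical names:

```python
import hashlib

def allocate_variant(user_id: str, experiment: str, treatment_share: float = 0.5) -> str:
    """Map a user to 'control' or 'treatment' deterministically."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # roughly uniform value in [0, 1]
    return "treatment" if bucket < treatment_share else "control"

# The "traffic directing" piece in the serving path: which button to render?
variant = allocate_variant("user-123", "new-signup-button")
button = "new_button" if variant == "treatment" else "old_button"
```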

 [00:11:20] SY: And is that something that I would build on my app, like a custom thing I would build on my app? Or is that something that I would hire a third party or outsource it to some type of API to do it for me?

 [00:11:31] LN: You could build that in-house, or… I know for certain there are definitely third-party vendors that support A/B testing integration with a product. There are a lot of third-party vendors out there, like Google Analytics or Split.io, that make these things really easy. It just depends on the integration part. The thing that’s hard to answer with that is I just never know how long integration will take. So from a bird’s eye view, going with a third-party vendor feels like the solution of all solutions. It does everything. It does your metrics computation. It selects your users. It allocates your users. It ends your test. It starts your test.

 [00:12:06] SY: Right.

 [00:12:07] LN: But you just never know how long it takes to like bring that system into your system.

 [00:12:13] SY: So in what context is A/B testing useful? If I’m a developer, when would I reach for that in my tool belt?

 [00:12:22] LN: If you’re an engineer, I think one good reason to leverage A/B testing is to de-risk your changes, because you’re evaluating the change that you’ve made to a product on just a subset of users. Think of it kind of like a really slow rollout, an incremental rollout. So instead of launching a new feature to a hundred percent of your user base, you’re exposing it to just a subset of that. And then you get to understand not just the user engagement impact of that, but also your engineering system metrics, your business metrics. It’s a way to de-risk changes. And then similarly, I’d say another reason to leverage A/B testing is it helps you make better decisions. If you build something and, let’s say, one of your metrics is an engineering system metric like latency, and then you realize your latency metrics are quite high during the A/B test, maybe you could reevaluate how you’ve built that before you launch it to all your users.
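
As a sketch of what that kind of latency guardrail might look like in practice (the threshold, percentile choice, and helper names are assumptions for illustration):

```python
def p95(samples_ms):
    """95th-percentile latency from a list of request timings (milliseconds)."""
    ordered = sorted(samples_ms)
    return ordered[int(0.95 * (len(ordered) - 1))]

def latency_guardrail_ok(control_ms, treatment_ms, tolerance=1.10):
    """True if treatment p95 latency is within 10% of control's."""
    return p95(treatment_ms) <= tolerance * p95(control_ms)

# During the test, compare timings gathered from each group.
if not latency_guardrail_ok(control_ms=[90, 95, 110, 120],
                            treatment_ms=[92, 99, 130, 180]):
    print("Latency regressed in treatment; reevaluate before a full launch.")
```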

 [MUSIC BREAK]

 [00:13:31] SY: Tell me about the types of things that you’ve A/B tested. When you think about your experience with A/B testing and your history with it, what kinds of experiments have you run and what have you learned from your tests?

 [00:13:43] LN: Let’s see. I can look back at Comcast. Well, there was the 4U launch. We definitely A/B tested that. So that was introducing a 4U homepage onto the video product. And then before we launched it, there were changes that we evaluated to our machine learning algorithm. So whether one implementation of our recommendations algorithm outperformed or was worse than the previous version. I know that when we first started getting experience with A/B testing at Comcast, the first test that we typically ran was whether a recommendations algorithm outperformed popularity, and popularity was somewhat of our baseline to always beat. What other things have we A/B tested? A lot of it is oriented around machine learning and the user experience.

 [00:14:25] SY: Those all make a lot of sense to me. And when I think about A/B testing, I think because it feels so scientific, it’s a science experiment, right? I’m A/B testing.

 [00:14:33] LN: Yeah.

 [00:14:33] SY: It feels like something that is made for big companies or at least products with a lot of users. It feels like something that only makes sense if you have millions of users. You have a lot of people that you can do these experiments with and notice a change because there has to be some type of statistical analysis tied to it, right? Is that actually true? How small can you be and still run A/B tests in a scientifically sound way?

 [00:15:00] LN: That’s a great question. There’s another way to answer this. There’s a cost to A/B testing, and that cost is setting up the A/B test. It’s spending time, like waiting for your test to run for the duration that you’ve set so that you can get statistically significant results. So that could be two weeks, that could be a month. It really depends on the statistical analysis that you did beforehand to ensure that the effect you’re measuring from the change you’ve made is not by chance. And all of this is time and energy. So I’d say if you’re building a product that is in the hands of like 20 users, it just depends. Do you want to spend that time making the decision using A/B testing with just 20 users? It’s energy, and I’d say it’s more about the energy and cost of it. It really depends on what matters to you. If you do want to make a better decision and you have three versions of something and you want to test that against like a hundred users, if you’re okay putting in the time for that, then that’s fine. Go for it.
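
The statistics behind “how long” and “how many users” usually come down to a power calculation done before the test. A rough sketch of the standard two-proportion sample-size formula, with illustrative numbers that are not from the episode:

```python
from statistics import NormalDist

def sample_size_per_group(p_baseline, p_expected, alpha=0.05, power=0.8):
    """Approximate users needed per group to detect a given lift.

    Standard two-proportion formula; assumes equal-sized groups and a
    two-sided test at significance level `alpha`.
    """
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # ~1.96 for alpha=0.05
    z_beta = NormalDist().inv_cdf(power)            # ~0.84 for 80% power
    variance = p_baseline * (1 - p_baseline) + p_expected * (1 - p_expected)
    effect = p_expected - p_baseline
    return int((z_alpha + z_beta) ** 2 * variance / effect ** 2) + 1

# Detecting a 5% -> 6% conversion lift needs roughly 8,000+ users per group,
# which is why tiny user bases make A/B tests slow and expensive.
print(sample_size_per_group(0.05, 0.06))
```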

 [00:15:59] SY: Okay. So even if you have a small group, A/B testing is still a tool that you might want to leverage?

 [00:16:04] LN: Yeah. Again, it’s just the cost. It’s like how long are you going to have to run it, how much energy are you going to have to put into it, is it worth it, are you going to really make a decision based on it, or were you going to launch it anyway? You know?

 [00:16:13] SY: Right. Right. Yeah, exactly. If you’re going to launch it anyway, you might as well not even test. I think figuring out the purpose of your A/B test and what decision you’re trying to make seems like a good place to start.

 [00:16:24] LN: Exactly.

 [00:16:25] SY: Another thing that I’m hearing from your description of this process is it feels like there’s kind of two segments to this. There’s setting up the experiment itself, figuring out what you should A/B test, how big of a change, like I remember hearing stories of Google A/B testing different shades of blue, for example. I guess you can get that granular. I mean, you know, that’s detailed.

 [00:16:46] LN: Yeah, for sure.

 [00:16:47] SY: But there is the question of, “What should you be testing? Who should you test it with? How do you divide up the group?” Also, “How many users do you have?” There’s kind of all these setting-up-the-experiment questions. And then once you set it up, there’s the actual implementation of it. How do you decide the setting-up steps? Is there a tool for that, or is that something where you would hire an expert, someone like you, to come in and evaluate? How do you make those kinds of decisions? Because it feels like those are pretty crucial.

 [00:17:15] LN: Yeah, I’d say for a lot of it, like what type of metrics you should use to measure your experiment, how long you should run it, how many users should be allocated to it, I think you should rely on data scientists. There definitely are tools out there. There are sample size calculators. But when in doubt, lean on data scientists to ensure that your test is valid, to ensure that you’re not getting false positives or false negatives.
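
For a sense of what that validity check involves, here is a rough two-proportion z-test, the kind of analysis a data scientist would run or sanity-check at the end of a test; treat it as an illustration, not a substitute for their judgment:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conversions_a, n_a, conversions_b, n_b):
    """Two-sided p-value for whether two conversion rates truly differ."""
    p_a, p_b = conversions_a / n_a, conversions_b / n_b
    pooled = (conversions_a + conversions_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# 5.0% vs 5.85% conversion over 10,000 users per group:
p_value = two_proportion_z_test(500, 10_000, 585, 10_000)
print(f"p-value: {p_value:.3f}")  # below 0.05 suggests the lift is not by chance
```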

 [00:17:39] SY: Got you. So given that, is this really a topic more for data engineers and data scientists? Or is there a place for software engineers, kind of web developers, app developers to get in on A/B testing too?

 [00:17:52] LN: Oh yeah. All of the above. For one, there’s the platform itself, like who builds the A/B testing platform. It involves front-end developers creating the user interface to set up the test. It involves back-end engineers to create the web services that are invoked when the product needs to understand whether a user should be in Treatment A versus Treatment B. It involves data engineers to compute the metrics and the datasets that are needed to determine if a user should be allocated to a test, for example. And then obviously data scientists help ensure that your test is valid. So I think when you build a platform, it involves an array of individuals across different domains. And then when you use the platform, it’s the same. If you’re a web engineer or a front-end engineer building a feature, you want to use this because you want to evaluate the effect of your changes on the product. When you’re a back-end engineer making back-end engineering changes, you want to do the same. When you’re a machine learning engineer or a researcher, you want to know the effect or the impact of your algorithm. And so it kind of spans all of the domains.

 [00:18:57] SY: Yeah, that makes sense. So in what situation does A/B testing come in handy? And when can you maybe skip it and just release the feature? How do you decide when you should do it, when it makes the most sense to, versus just kind of letting the feature go and hoping for the best?

 [00:19:13] LN: Well, I’d say if you want to understand the impact, the effect of a change, you should use A/B testing. If you know without a doubt that regardless of the impact, regardless of the metrics, this change is being shipped to production, it is going, it is part of the product strategy, it’s part of the engineering strategy, it may not be worth the engineering cost, because again, it takes energy and time to configure the test. However, I’d caveat that with: even if you know for certain that the change is going to go to production for all your users to engage with, it does not hurt to run an A/B test if you’re okay with the cost of it, because it’s better to know the impact than to launch it and not know the impact or the effect, if that makes sense. But to answer the latter part of your question, when not to run an A/B test: if there is an incident or an outage and you need to get a fix into production, push it to prod. Don’t be like, “Hey, guys, let’s evaluate this first.” If there is any change that’s really degrading the user experience that you need to do a quick rollout for, it’s not worth it. Maybe do a canary deploy, like deploy it to one instance and measure and look at the impact of the change for a few hours. But don’t waste time evaluating if you really have to get the change out.

 [00:20:42] SY: Coming up next, Leemay talks about what to do if you would like to try A/B testing, but might not have the resources for it. And we learn more about her book, Practical A/B Testing, after this.

 [MUSIC BREAK]

 [00:21:01] SY: What do you recommend for teams or maybe even individual developers who are curious about A/B testing, who want to give it a try, but don’t necessarily have the resources, don’t have access to a data engineer or a stats person? What do you recommend for them? Is it worth even trying, or is it something that they can make some progress with, maybe learn something from, even as a smaller, slimmer team?

 [00:21:26] LN: Yeah. So I think you can start with very little. You don’t have to have a sophisticated A/B testing platform to get started, if that makes sense. Over time, you’ll want that. As you realize the impact that A/B testing can have on a company’s culture, product vision, engineering vision, you’ll want to build something more scalable, something that doesn’t require much handholding. But if you’re just trying to get started, there are ways. You could leverage what you have. For example, if you have a feature flag system, you could technically hook your A/B testing platform into your feature flags.

 [00:22:02] SY: And what are feature flags?

 [00:22:03] LN: The easiest way to explain a feature flag is turning a feature on and off with a configuration, like dynamic config. So this feature is only on for 10% of the users while we’re rolling it out, and we incrementally update it. What I’m trying to say is evaluate your current engineering stack and see if there are components that do parts of what you want. So if you already have a way to identify users and then tag them so that they are allocated to the test and control treatments, try to extend that for your first basic A/B testing platform. Again, one way to do that is a feature flag system. You could also start simple. When you associate a user to a test or control treatment, you can either do that in real time as they are exposed to the product, or you could do batch allocation, meaning you could do it offline. You could identify the users that go into your test and control treatments before the A/B test has even started. That’s a way to get you past the, “Let’s just get an A/B test out the door,” hurdle. And if you don’t have the metrics of your dreams, let’s say you don’t have data engineers, but you have access to data, you could start simple. You could use business metrics. There’s somebody at your company that likely has metrics oriented around the effect of the product on key business metrics like revenue or retention. See if you could extend that for your A/B testing use case. So I think the theme is, if you’re starting from zero and trying to go to one, two, three, start small and evaluate what you already have at your company. Because it’s definitely possible. In the example at Comcast where we launched a 4U homepage, we built the A/B testing platform from scratch and we built it with three engineers. But that was doable because we leveraged a lot of what we already had to work for A/B testing. And it wasn’t sophisticated. It worked for a handful of A/B tests. It was difficult to use. It required a lot of handholding to set up the tests. We were monitoring jobs. We had to rerun data pipelines to compute the metrics. Sometimes they didn’t run for some reason. There was a lot of backfilling. There was a high cost to it, but that’s kind of a hump you have to get through to progress. You know?
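
To illustrate the feature-flag route she describes, here is a minimal sketch of a flag config doubling as a test/control allocator, including the offline batch-allocation variant; the flag name, percentage, and bucketing rule are made up for the example:

```python
# Hypothetical flag config your app might already read.
FLAGS = {
    "new_homepage": {"enabled": True, "rollout_percent": 10},
}

def flag_is_on(flag_name: str, user_id: int) -> bool:
    """Stable bucketing: the same user always lands in the same group."""
    flag = FLAGS.get(flag_name)
    if not flag or not flag["enabled"]:
        return False
    return user_id % 100 < flag["rollout_percent"]

# Real-time allocation: decide as each user hits the product.
show_new_homepage = flag_is_on("new_homepage", user_id=4217)

# Batch allocation: precompute both groups offline before the test starts.
users = range(1_000)
treatment = [u for u in users if flag_is_on("new_homepage", u)]
control = [u for u in users if not flag_is_on("new_homepage", u)]
```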

 [00:24:20] SY: Yeah, that makes sense. So you wrote a book called Practical A/B Testing. Can you tell us a little bit about it?

 [00:24:26] LN: Yeah. It is a practical way to think about A/B testing. Why I wrote the book: we already know a lot of these bigger companies like Netflix or Airbnb or Spotify run A/B tests. They’re well known for A/B testing. All these companies have tech blogs that talk about it. What should everyone else do? That’s the goal of this book: a practical way to get started, written in a lighthearted, somewhat fun way that’s not overwhelming. For example, with the chapters in the book, I start out with, “Why should you A/B test?” And then I talk about the anatomy of an experiment: the hypothesis, the metrics, the user segmentation. And then after that, I talk about different types of A/B tests and what a simple platform may look like if you don’t have one. So really it’s a practical introduction to A/B testing.

 [00:25:17] SY: Very cool. And is this targeting software engineers? Is it more of a business book? Who is the audience for this?

 [00:25:23] LN: It’s really anyone with an interest in A/B testing who wants to get started. So definitely engineers would be interested in it if they don’t have an A/B testing platform at their own company, especially if they want to advocate for it. There’s a chapter about the culture of experimentation, how to try to gain adoption for this methodology in their company. But also product, definitely product and design. Really anyone who is interested in learning. I think it can be geared towards them.

 [00:25:51] SY: And when is the book available?

 [00:25:52] LN: I think as of March 1st, so very soon.

 [00:25:55] SY: March 1st. Nice! Very cool!

 [00:25:56] LN: Thanks!

 [00:25:57] SY: So if you could leave us with one fact that we should all take away from this podcast about A/B testing, what would it be?

 [00:26:04] LN: Okay. So one fact that everyone should take away about A/B testing is if you are not A/B testing, you can and you can by starting simple, by having simple components to get you started. You don’t have to have a sophisticated, overwhelming infrastructure platform just to get started with this experimentation methodology.

 [00:26:24] SY: Very cool. Now at the end of every episode, we ask our guests to fill in the blanks of some very important questions. Leemay, are you ready to fill in the blanks?

 [00:26:37] LN: I’m totally ready.

 [00:26:39] SY: Number one, worst advice I’ve ever received is?

 [00:26:42] LN: Years ago I had a manager that wasn’t great, and a lot of folks were telling me to stay. Do not change teams. Just grind through it. Push through it.

 [00:26:53] SY: Oh, no.

 [00:26:54] LN: And I think better advice would be if a manager doesn’t like you that much and you find like the working relationship isn’t that great, you should probably pivot.

 [00:27:02] SY: Yeah. Yeah. That seems like better advice. Number two, best advice I’ve ever received is?

 [00:27:07] LN: When I was an Engineer 1 straight out of college, the engineer that I sat next to every day for about two years asked me what I was working on one day, and I was like, “That’s such a strange question. You should know what I’m working on. We sit next to each other.” It was real casual, and then I explained to him what I was working on, and then he followed that with, “Does it matter?”

 [00:27:29] SY: Oh!

 [00:27:30] LN: And I was like, “What? Like obviously it matters. My manager assigned me to this task. Why wouldn’t it matter?” And then I thought about it a lot and I was like, “You know what? This project really doesn’t matter.” And so what I realized as an Engineer 1 is the projects you work on should matter. If you want to progress in your career, you should be working on things that are relevant to the product, the company, the team. It was just very mind-blowing because I did not understand the first question, and then towards the end I was like, “Whoa!”

 [00:27:59] SY: That is such great life advice, I feel. You know, like when you’re stressing out about something, when you’re working on something, how are you spending your time, just taking a moment to pause and think, “Does this actually matter?” And kind of readjusting either your emotions or your priorities accordingly.

 [00:28:16] LN: Definitely. It can definitely extend to life.

 [00:28:18] SY: Yeah. Yeah. I’m going to write that on a Post-It and put it on my computer. Thank you for that.

 [00:28:23] LN: Nice! Yeah.

 [00:28:24] SY: Number three, my first coding project was about?

 [00:28:27] LN: I built a back-end web service whose response body was XML. So think about that. Nobody uses XML anymore. And it returned the structure of the content discovery experience on TV at Comcast.

 [00:28:41] SY: Oh, cool! Wow! That’s a pretty cool first coding project. Neat. What did you use that for? What did you do with it?

 [00:28:46] LN: Well, again, this is when the product was only in the hands of 900 users, and I’m fairly certain like 800 of those users were employees. But yeah, it returned… like if you go on any streaming app these days, you’ll see options like TV, movies, sports. It returned that structure.

 [00:29:06] SY: Okay, very cool. Number four, one thing I wish I knew when I first started to code is?

 [00:29:12] LN: I wish I knew that you don’t have to build a lot from scratch. For example, in university, we had to build a binary search tree from scratch. I mean, we had to build a compiler from scratch. So many of our projects were building things that you could just have a library for. And eventually, when I graduated and started working at my internships or at the real job, I realized, “Wait, this has already been written. I can just extend this library and then use this data structure, and I don’t have to make sure I’m allocating the right objects to this list that I’ve created from scratch.” I didn’t know that until later.

 [00:29:49] SY: Very cool. Yep. That is very true. I remember in my early days I had kind of a similar realization where I was thinking, “Oh man, I’ve got to write this thing from scratch.” Then I go, “Wait, no, there’s plugins, there’s libraries, there’s APIs.” There are so many other things that you can just put in there, and it makes it a lot easier. Frankly, it made me less impressed with other people’s projects because I would go, “Oh, I know you didn’t build that. I know you just used an API.”

 [00:30:12] LN: Right. Right. Right. Right.

 [00:30:13] SY: Wonderful. Well, thanks so much for joining us, Leemay.

 [00:30:16] LN: Thank you. This was fun. This is cool.

 [00:30:20] SY: You can reach out to us on Twitter at CodeNewbies or send me an email, hello@codenewbie.org. For more info on the podcast, check out www.codenewbie.org/podcast. Thanks for listening. See you next week.

Copyright © Dev Community Inc.