Nashlie Sephus

Applied Science Manager Amazon AI

Dr. Nashlie Sephus is currently an Applied Science Manager at Amazon AI in Atlanta, where she focuses on computer vision, machine learning, and fairness and bias in AI. She was formerly the CTO of the startup Partpic (acquired by Amazon in 2016) and is the founder of the Mississippi-based non-profit The Bean Path (https://thebeanpath.org). She received her B.S. in computer engineering from Mississippi State University (2007) and her Master's and Ph.D. from Georgia Tech (2014).

Description

At the time of this recording, the New York Times had released a report titled "As Cameras Track Detroit's Residents, a Debate Ensues Over Racial Bias," which discussed issues in machine learning such as algorithmic bias and facial recognition software giving more false matches for Black people than for white people. We chat with Nashlie Sephus, CTO of Partpic, which was acquired by Amazon in 2016, and now an Applied Science Manager at Amazon Web Services, about her journey into machine learning, developing Partpic, and tackling some of the ethical issues in machine learning in her new role at Amazon.

Show Notes

Transcript

[00:00:00] SY: (Music) Welcome to the CodeNewbie Podcast where we talk to people on their coding journey in hopes of helping you on yours. I’m your host, Saron, and today, we’re talking about machine learning with Nashlie Sephus, the CTO of Partpic, which was acquired by Amazon in 2016, and now an Applied Science Manager at Amazon Web Services. 

[00:00:24] NS: The key to training a machine learning model is: the better the quality of data you have, the better the quality of the model, almost like garbage in, garbage out.

[00:00:34] SY: Nashlie talks about what drew her to pursue machine learning, the creation of Partpic, and the work she’s doing at Amazon, reviewing the ethics of things like facial recognition and other AI after this. 

[00:00:53] SY: Thank you so much for being here.

[00:00:54] NS: Thank you for having me. 

[00:00:56] SY: So tell us how you got into code. 

[00:00:58] NS: I started learning about what coding was in the eighth grade because, of course, I knew nothing about coding or engineering or any of that until my eighth grade science teacher pulled me to the side and said, "Hey, you should probably check out this summer engineering camp at my undergrad, Mississippi State University." So I went there that summer and they pretty much introduced all the disciplines of engineering. I remember just being blown away, never having thought that this was even a thing, because pretty much when you said engineer during that time, people thought about people who work on railroad trains and train tracks and things like that.

No one in my family, you know, was an engineer or a computer scientist or anything like that. So this was somewhat new. So I think once I got in, there was a day where we studied computer engineering and they taught us that, hey, computer engineering is kind of like the best of both worlds. They told us about being able to control the hardware with the software, being able to type in things to control the hardware like robots. We did some of the Lego robots. I just remember thinking, "Man, this is so cool." I think I was kind of controlling anyway at the time, and so it's like, you could do this? Yes, yes, so it fit my personality perfectly. From that point on, I knew that that's what I wanted to do.

[00:02:21] SY: It sounds like that camp was amazing, an amazing way to explore so many different topics and get exposed to this whole world you didn’t know about. When you left that camp, what was the biggest takeaway for you? 

[00:02:33] NS: Because it was a girls' engineering camp, I think that was key for me, because I had been to summer camps before for math and science and "technology," and no one had ever really just pointed out engineering per se. Also, we had been told that it pretty much was something that a lot of women didn't really do, so being that it was a women's engineering camp and I was surrounded by other girls like me who were intrigued by this, it kind of made it possible and it made it something that was real, that I knew I could pursue. Actually, the mentors at the camp were female engineering students and it was almost like looking up to them. I actually kept in touch with a few of them all the way through high school.

[00:03:17] SY: Wow! 

[00:03:17] NS: And they really inspired me and led me to continue to go into that field. So I think it was just seeing that. You know how they say, "How can you be what you can't see?" I really think that's what stuck with me, that, hey, women are out there doing this and they're having fun, and they're teaching other women about it. And I really love that. That's one of my MOs today. I try to be what people can see and be accessible to people who otherwise probably wouldn't have access to something like that.

[00:03:44] SY: Clearly, you kept on coding. Tell me about what happened after that camp. How did you pursue coding afterwards? 

[00:03:51] NS: When I graduated from high school... I play the piano. I've been playing the piano since I was nine: classical piano, jazz piano, concert band, percussion. When it came time to decide on a major in high school, I had to decide, "Okay. Do I want to major in music or do I want to go into engineering?" I thank God I chose engineering, because you can always do music on the side, is what someone told me. Although I have an uncle who still to this day thinks I should have done music. He's a musician himself and he still holds me to that, but I'm like, "I did pretty well for myself though." But I remember just trying to decide, and I always say that I kind of naively went into computer engineering. Even though I had been exposed to it, I had no idea how difficult it would be and all the challenges that I would face.

For example, in my very first coding class as an undergrad, as a freshman, I remember it seemed like everyone else knew how to code except for me. You know, I joke like, "Did you all meet up over the summer or something? How does everybody know?" Because I had no idea what I was doing, but the key to that was finding really good mentors. The dean of the college at the time happened to be a female, a wonderful woman, Dr. Donna Reese. She was someone that I kind of leaned on and who really motivated me. But other than those challenges, oftentimes I was the only female in my class. Oftentimes, I was the only African-American. Even sometimes I was the only American.

[00:05:21] SY: How interesting. 

[00:05:22] NS: So it depended on the course. It was pretty interesting.

[00:05:26] SY: Tell me a little bit more about that freshman class. Do you remember a moment in particular where you felt lesser than or maybe behind everyone else? 

[00:05:35] NS: Absolutely. I mean, you would notice the difference when, you know, “Okay, hey, everybody, here’s your assignment. Pick your groups of three or four,” and I would often be the person that nobody picked and it seemed that everybody else was moving pretty rapidly through the course. Everybody else seemed more advanced at least exposed to the topic. You get in the class where you feel like you’re afraid to ask questions almost because you’re embarrassed because you don’t want people to think you’re stupid. So I ended up going to office hours like a lot. I think all my professors pretty much knew me by first name. Granted I was a top student in my class. I was in the top three. I was also in the Hall of Fame by the time I graduated from undergrad, but I had to work extremely hard to catch up. 

It doesn’t matter where you start. If you put in the work, you put in the time, you have the right support system, you can get to where you need to be despite what anybody tells you. There’s so much free information online. A lot of what you learn is self-taught even in college. A lot of what I learned as far as like working on the side, making extra money here and there, doing websites and things like that, app development, I taught it to myself. So it was outside of the curriculum, but I do appreciate that training in college that got me the degrees. It taught me how to learn. It taught me how to go and find answers to problems and how to allocate resources.

I’m a big advocate of the bootcamps in the self-taught learning, but I’m also an advocate of people going the traditional route as well because the big companies, that’s what they look for when they’re hiring and if we’re not getting these advanced degrees, especially underrepresented groups, then we’re often being left out of the product design. That’s a big problem, too. 

[00:07:25] SY: Do you remember there being a turning point in your education where you started to think to yourself, “You know what? Maybe I know what I’m doing. Maybe I’m going to be okay”? 

[00:07:35] NS: Yes, yes, definitely. Interning, internships, side projects, side contracts. When you get to the point where you're getting paid for what you do and you start getting accolades and recognition for your work, it really makes a difference. I also did something called Cooperative Education. You work for a semester, then you go to school for a semester. You alternate for about two years. Being around the people at the job, it's almost like a self-esteem booster, because you see how much of an impact you have and how well your work plays into the bigger picture. It kind of makes you want to keep going. It's like, "Wow! I am somebody." At school, I'd been feeling like, you know, I'm at the bottom and I have to struggle all the time and everything's a challenge. But at work, everything was so much easier and I'm getting paid for it.

I definitely recommend to people out there, if you can, if you're in school, definitely do internships, and if you're not in school, definitely find side contracts or do an apprenticeship where you can work alongside someone. You may not be getting paid top rate, of course, but getting paid something and seeing how your work satisfies your customer really helps you. It keeps you motivated.

[00:08:51] SY: So what really got you interested when you were learning all that stuff? Because I imagine you were exposed to so many different things when you were in school. Was there anything that got you really excited? 

[00:09:01] NS: Yes. As I mentioned before, I am a musician at heart. I really liked sound and audio. So I got into the field they called Digital Signal Processing, which pretty much led into what is called machine learning and AI today. I wanted to be able to analyze audio, whether it be speech or music, being able to automatically determine people's voices, automatically pick out what the sounds are in the sound files that you're listening to. So I got into the field of machine learning, which is using lots and lots of data to predict patterns and understand and recognize things. It so happens that the same technology is used for video: tracking people in video, facial recognition, automatic identification of whether a person is smiling or frowning. It can automatically recognize objects in pictures. To get deeper into those topics, because they are more involved, they recommended you go to graduate school, at least at that time.

I knew I wanted to get my PhD anyway, and this just made it that much more motivating. That's when I decided, you know what, Georgia Tech is very close to my family in the Southeast. I had traveled and interned at different places from the Midwest to the West Coast, and it was a great place to go. It's the top engineering school and they had a specialty in that area. So that's what led me to enroll in Georgia Tech graduate school in 2008 for machine learning.

[00:10:39] SY: There’s a common thread with a lot of our interviews where people take a passion they have outside of coding and then they combine it with coding itself and make something really interesting and it kind of solidifies their love of coding. Did you ever work on anything with both music and machine learning?

[00:10:55] NS: Yes, actually, my dissertation. 

[00:10:57] SY: Tell me about it. 

[00:10:58] NS: There's something called the Cocktail Party Problem. What that is, is you're at a party, a cocktail party, you're talking to someone. The room is filled with people and you hear a lot of voices back and forth, but somehow your brain is able to focus on the person talking directly to you. And so in machine learning and in digital signal processing, we often try to imitate that. For example, I'm speaking into a microphone right now and it's canceling out the noises that aren't my voice. That's exactly what I worked on. If you have ever used apps like Shazam, where you let it listen to a song and it guesses what the song is, that is pretty much what I worked on at Georgia Tech. So, being able to isolate sounds and detect what those sounds are and recognize items in music.

That field is called music information retrieval and, of course, it involves a ton of coding and a ton of research, because there are several different algorithms you can use and some work better than others depending on the task that you're trying to do. That was an example of coding mixed with music, an application that I think is still very relevant today.
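
To make that concrete, here is a minimal sketch of a typical first step in music information retrieval: computing a spectrogram and keeping its strongest peaks, the raw material that Shazam-style fingerprinting is built on. The file name and parameters are illustrative, not from the episode.

```python
# Minimal sketch (illustrative): compute a spectrogram and keep the loudest
# time-frequency peaks, the starting point for audio fingerprinting.
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

fs, audio = wavfile.read("song.wav")       # hypothetical input file
if audio.ndim > 1:                         # mix stereo down to mono
    audio = audio.mean(axis=1)

freqs, times, power = spectrogram(audio, fs=fs, nperseg=4096)

# Keep the loudest frequency bin in each time slice as a crude landmark.
peak_bins = power.argmax(axis=0)
landmarks = [(round(t, 2), round(freqs[b], 1)) for t, b in zip(times, peak_bins)]
print(landmarks[:5])                       # (time in seconds, dominant frequency in Hz)
```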

[00:12:13] SY: So for a thesis like that, besides Shazam, it makes a lot of sense, it’s a great example, what was the application that you were hoping for? 

[00:12:21] NS: The application I was hoping for and if someone’s listening to this, if you take my idea then just give me some credit for it, but the application was, I used to play Guitar Hero and Rock Band. We have parties and things, and bring people over and play those video games on the Nintendo Wii when it first came out. And so I thought it would be really cool to be able to pick any song that you wanted and put it through the code that you wrote and automatically output the different tracks of the music. For example, the guitar track, the drum track, you know the bass track and maybe a trumpet or something and be able to automatically play that on Guitar Hero. So that was my idea. I found out that I was a little naive going into grad school. 

Graduate school is not really about creating a product. It's more so about the research, more in-depth, more low level, which I was still very interested in and very motivated by. And I realized that creating a product or a startup, which actually might be a good segue, required something totally different.

[00:13:28] SY: (Music) You also worked on a startup, a very powerful machine learning project called Partpic, which was later bought and integrated into Amazon. What is Partpic?

[00:13:52] NS: Partpic is a startup. It was founded in Atlanta, Georgia in 2013. I was the chief technology officer. The CEO of the company was Jewel Burks and what it is, is visual search for replacement parts. So we allowed you to take a picture of a part, it could be like a screw, nut, bolt, washer. We would not only identify the part but also measure the part to get you the exact part that you need to finish putting together the baby crib because the dog ate one of the screws or something like that. 

[00:14:23] SY: Oh, no. 

[00:14:24] NS: So yeah, that's what we did, and computer vision is the field of machine learning that we used, and that's basically algorithms that recognize objects in images.

[00:14:36] SY: That sounds so hard. It sounds so hard, the idea that I can take a photo of a part, some random part, and that it not only knows what it is, but it knows the measurements of it. That sounds like such a challenge. I have no idea where to begin with that. Can you walk us through how Partpic works in just layman's terms, just high level, how does it work?

[00:14:58] NS: We actually just take a picture of the part with a penny next to it, or a coin, because we would use that as a size reference. And of course, if you take a picture of something without some size reference, if it's closer to the camera, it can look really big when it's really small, or vice versa. That was the key, and that's really what set us apart from a lot of the other image recognition and deep learning companies at the time. Of course, the application was really great. I mean, we run into people all the time, DIY-ers, people who like to fix things or work on cars, and they just need a part and they don't know where to get it. So they go to Lowe's or Home Depot, Ace Hardware, any of those stores, and they'll go talk to Bob, and Bob can look at the screw. He has it. He knows exactly where it is because he's been working there forever. So we said, "What if we make this an easier task?"

Our CEO Jewel Burks was actually working at a parts company as a customer service representative, where people would call in and she would try to help locate the part. It's very difficult to find a part for someone when they're saying, "Hey, I need this thingamabob. It's curvy on one side and it's black on the other side. Can you help me find it?" Of course, that's a challenge, and parts companies spend billions of dollars a year on returns when they don't get people the right parts because of miscommunications in describing the part. Plus, how do you search for something when you don't even know what the name of it is? You can't even type it into the search bar. And so we allow you to take a picture of it with the penny next to it.

The penny is actually 0.75 inches in diameter, and so in our code, you can look at the width of the part in pixels and also look at the width of the penny in pixels, and you can convert the pixels to inches. And so you can calculate the length of the part, the width, the threads per inch, the head depth, all kinds of things to get you the exact measurement that you need for the part. We actually had to train hundreds and hundreds of different types of parts in our machine learning model, which is, of course, another aspect of coding. We would have several examples of parts in different lighting conditions, different placements, different angles, because what you're trying to do is anticipate what the user is going to take a picture of so that you can recognize it. So you need lots of examples.
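
As a rough illustration of that conversion step, here is a minimal sketch, assuming the vision code has already found the part's and the penny's widths in pixels. The function name and example numbers are made up; this is not Partpic's actual code.

```python
# Minimal sketch (illustrative, not Partpic's actual code): convert a part's
# size from pixels to inches using a US penny in the frame as the reference.
PENNY_DIAMETER_IN = 0.75  # a US penny is 0.75 inches across

def pixels_to_inches(part_width_px: float, penny_width_px: float) -> float:
    """Scale a pixel measurement by the known size of the reference coin."""
    inches_per_pixel = PENNY_DIAMETER_IN / penny_width_px
    return part_width_px * inches_per_pixel

# e.g. a bolt measured at 412 px next to a penny measured at 103 px
print(round(pixels_to_inches(412, 103), 2))  # -> 3.0 inches
```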

We had already trained these models on several images of these hundreds and hundreds of parts for each category. And so once we identify the part and measure the part, we now have everything we need to search for it in a catalog, like Amazon's catalog or McMaster-Carr's catalog or anybody's catalog.

[00:17:44] SY: How many of these images do you need? Because I imagine if there are, I don't know, thousands, hundreds of thousands of parts in the database, and then you have to be able to recognize them in different colors, different angles, different lighting conditions, that must be a lot of photos. How many photos are we talking?

[00:18:03] NS: Models nowadays can be trained on millions and millions of images. 

[00:18:06] SY: Wow! 

[00:18:07] NS: The architectures are so advanced and complex now that you can have models that big and you can recognize all kinds of categories. Now, at the time, we were only doing a few hundred parts, and there really aren't that many parts when we're talking about the particular space we were in, which is hardware, but it can grow depending on your application. Over the years, whereas it used to take weeks to train a model, now it could take as little as a few minutes, even with so many images, and that's a testament to how far the technology, the architectures, and the infrastructure have come, with different GPUs and coding tools such as TensorFlow and MXNet. We used to use one back in 2014 called Caffe, which was state-of-the-art at the time, but now there are so many more that let you do things a lot quicker.
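
As a rough illustration of how little code a modern framework needs for this kind of image classifier, here is a sketch that fine-tunes a pretrained model with TensorFlow. The folder name, image size, and training settings are made up; this is not the Partpic pipeline.

```python
# Minimal sketch (illustrative settings): fine-tune a pretrained image model on
# a small labeled set of part photos with TensorFlow/Keras.
import tensorflow as tf

train_ds = tf.keras.utils.image_dataset_from_directory(
    "parts/train", image_size=(224, 224), batch_size=32)  # hypothetical folder
num_classes = len(train_ds.class_names)

base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet", pooling="avg")
base.trainable = False  # reuse pretrained features, train only the new head

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # scale pixels to [-1, 1]
    base,
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=5)  # minutes on a GPU, as described above
```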

[00:19:06] SY: Okay, so when you’re talking about GPU and stuff, you’re talking about how powerful the machine is that I’m doing the training on. Is that what you mean? 

[00:19:14] NS: GPUs, for the gamers out there, you're probably already familiar. NVIDIA makes some of the most powerful GPUs. GPU stands for Graphics Processing Unit, similar to a CPU, which is like the brain of the computer that processes everything. GPUs came about when people wanted to do more image processing, because images have so many pixels and so many computations being done on them that it takes a lot of time to process. So for example, if you're playing a game on your computer, you notice the gaming laptops are usually a lot thicker, maybe not as sexy as some of the normal laptops. Well, that's because the GPUs...

[00:19:52] SY: More robust. 

[00:19:53] NS: Yeah, there you go. The GPUs in them are pretty big, and they need to be able to process a lot more computation with the gaming and the images and video than you would with just normal text data.

[00:20:06] SY: So if you are dealing with millions of photos, where would you get those photos? Where do they come from? Are you taking millions of photos of all these parts? 

[00:20:18] NS: It depends on what you're trying to get. For us at Partpic, we had to create our own database of parts. So, yes, we had to create our own images and take these pictures, but there are a few caveats to that. One, it depends on what you're training. If you're training on cars, for example, there are so many pictures of cars and different car brands online, on Google for example. That's an application where you can probably get the majority of your training images from websites just by downloading the pictures. However, for a lot of the parts that we were training, there weren't a lot of images, and not enough good quality images, which made it slightly more difficult for our application to get that data just by pulling it off a website.

We wanted to make sure we had several different angles, several different lighting conditions, and a really good database of parts so that we could correctly identify them. The key to training a machine learning model is: the better the quality of data you have, the better the quality of the model, almost like garbage in, garbage out.

[00:21:25] SY: So how do you know that you have a good training model? Like how many photos do you usually need, what quality do they need to be, that kind of thing?

[00:21:35] NS: It really depends on the application. So for example, if you’re trying to recognize, again, I use the car as an example. If you’re just trying to recognize cars versus trucks versus SUVs, your quality doesn’t have to be as great for those because the general shapes of those are significantly different for the most part. But let’s say you were trying to recognize the actual logo on the car. So you would probably need a lot more images because one, there are more categories of logos from Honda to Toyota to Chevrolet. And then to be able to recognize those on cars, there are so many different cars. There are probably a lot of different placements of the logo. Depending on your lighting, it might be a sunny day, it might be a cloudy day. The logo may have different reflections on it. 

So you would want to get a lot more images and a lot better quality images for training something of that type, which was the case for the parts that we were training. A lot of the parts looked very similar. For example, I remember something called a carriage bolt. It looks very similar to a hex bolt from a certain angle. And so we wanted to make sure we had just enough detail to be able to distinguish them from each other.
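
One common way to cover those lighting, placement, and angle variations without photographing every case by hand is data augmentation. Here is a minimal sketch with illustrative parameters; it is not necessarily what Partpic did.

```python
# Minimal sketch (illustrative): randomly vary lighting, rotation, zoom, and
# orientation of a training photo so the model sees more of the variation a
# real user's picture might have.
import tensorflow as tf

augment = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal"),
    tf.keras.layers.RandomRotation(0.1),     # up to about 36 degrees either way
    tf.keras.layers.RandomBrightness(0.3),   # simulate sunny vs. shaded shots
    tf.keras.layers.RandomZoom(0.2),         # simulate distance from the camera
])

image = tf.random.uniform((1, 224, 224, 3))  # stand-in for a real part photo
augmented = augment(image, training=True)    # produces a new variant every call
```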

[00:22:52] SY: It sounds like there are a lot of things, a lot of powerful things that go into Partpic. I’m wondering, what did the prototype look like? 

[00:22:59] NS: The very first prototype is the one that I wrote in my room. I think I spent a couple of weeks and it used an older technique called the Bag of Words technique. This was actually back in, I think, 2013, 2014 when we were first getting started. We were able to have the app and then we would recognize the part and it was a much smaller database of parts. So not like thousands or hundreds of images. We were only taking maybe 10 to 20 images per part. It was probably less than a hundred parts. So to my surprise, though, Jewel Burks, our CEO, was able to take that and raise $1.5 million for a seed round. Again, I was blown away. I just continue to be blown away. That was really promising to me to know that something that I actually really enjoy doing, really enjoy learning about was able to help someone build an entire company. 
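
For readers curious what that older technique looks like, here is a minimal sketch of the classic bag-of-visual-words idea under made-up assumptions: the file names and cluster count are illustrative, and this is not the actual Partpic prototype.

```python
# Minimal sketch of the bag-of-visual-words idea (illustrative, not the actual
# Partpic prototype): describe each image as a histogram over "visual words".
import cv2                              # OpenCV, for local feature descriptors
import numpy as np
from sklearn.cluster import KMeans

def orb_descriptors(path):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, desc = cv2.ORB_create().detectAndCompute(img, None)
    return desc                         # one descriptor per detected keypoint

# 1. Build a "vocabulary" by clustering descriptors from training images.
train_paths = ["screw1.jpg", "screw2.jpg", "bolt1.jpg"]   # hypothetical files
all_desc = np.vstack([orb_descriptors(p) for p in train_paths]).astype(np.float32)
vocab = KMeans(n_clusters=50, n_init=10).fit(all_desc)

# 2. Represent any image as a histogram over those 50 visual words; similar
#    parts give similar histograms, which a simple classifier can then match.
def bag_of_words(path):
    words = vocab.predict(orb_descriptors(path).astype(np.float32))
    return np.bincount(words, minlength=50)
```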

[00:24:00] SY: (Music) Coming up next, Nashlie talks about some of the dangers of machine learning like algorithmic bias and what we can do to mitigate some of these issues after this.

[00:24:17] (Music) So paint me a picture of what it was like to actually work on this project, especially those first few weeks when you were just working on the prototype and it wasn’t quite a job for you yet. What was it like to have to come up with this solution? 

[00:24:39] NS: I mean, it was pretty innovative. I think we were able to put our heads together. Everybody has a different background. I really liked the diversity of the team. You had male, female, tech people, non-tech people, people of different ages, different backgrounds who were talking about it, and we knew you pretty much can't use a part if it's not the right size. That was pretty much a deal breaker. That was a non-negotiable, so we had to figure out, how do we get the size of this thing? And what is something that every user will probably have? So we went back and forth. Do we use credit cards? But then we'd run into privacy issues. So we figured, what about a dollar bill? But a lot of those are kind of wrinkly and in different sizes. It might be slightly torn. We thought, "Okay. What about the penny?"

Most people can probably find a penny somewhere, and especially if you're a DIY-er working in a shop, there's probably one laying around on the table or even outside on the ground, which is ironic because now people hardly ever carry cash and coins. But I think at the time, that was the best idea we came up with. We ran with that idea and it ended up being very successful for us.

[00:25:51] SY: So how does that translate outside of the US? Because not everyone has pennies, right? It's not even the same currency outside the US. So how do you deal with that?

[00:26:02] NS: In the code, we had several coins that could be used, and so as long as you can identify that there is a coin in the image, you can train a model on different types of coins. So it would be just a matter of… for example, we were training a model on all US coins, you know, pennies, nickels, dimes, quarters, dollar coins. We were open to training models on other coins too, European coins, but in the code it's actually pretty simple. All you need is a conversion table in your code, almost like a lookup table. And so if you know what the size of the coin is, you would just add that to your lookup table. And then it really doesn't change anything else when you're trying to do the measurements. You just pull from the lookup table the ratio of pixels to inches or pixels to millimeters, depending on whether the part was a metric part or not, and everything else flows the exact same way in the algorithm.
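
A rough sketch of that lookup-table idea follows; the coin diameters are real published values, but the function and example numbers are made up, and this is not Partpic's actual code.

```python
# Minimal sketch (illustrative): a lookup table of reference-coin diameters lets
# the same measurement code work with whichever coin is detected in the photo.
COIN_DIAMETERS_MM = {
    "us_penny": 19.05,
    "us_nickel": 21.21,
    "us_quarter": 24.26,
    "one_euro": 23.25,
}

def part_size_mm(part_px: float, coin_px: float, coin: str) -> float:
    """Convert the part's pixel width to millimeters using the detected coin."""
    mm_per_pixel = COIN_DIAMETERS_MM[coin] / coin_px
    return part_px * mm_per_pixel

print(round(part_size_mm(412, 103, "us_penny"), 1))  # -> 76.2 mm
```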

[00:26:59] SY: Do you remember the moment when things worked in that prototype? 

[00:27:04] NS: Yes, I think I was sitting on my couch and I had made a peanut butter and jelly sandwich because, again, I was a poor grad student. I think the TV was on or something. I don't know. I wasn't really watching it, but I just had it on and so I was just coding, trying to finish up things. I had to keep rerunning things because there were little bugs here and there, things would crash. And then when it finally ran all the way through and the algorithm actually recognized a picture of a screw, I'd never been so happy. I'd never been so happy that this thing worked, because that was really the beginning. People who code know the feeling when you reach the part where you've laid all the groundwork and all the hard work is done, and it's really just a matter of making it pretty, making it more user-accessible, and making it easy to hand off to somebody else. That's a really great feeling.

[00:27:56] SY: So Partpic was acquired by Amazon in 2016. What was that like when you got acquired? 

[00:28:02] NS: That was crazy. That really wasn't the end goal, at least not for me. We really didn't seek this opportunity out. I was actually presenting at a conference called Rework. It was a conference for computer vision startups in Boston, Massachusetts in May of 2016. When I finished my presentation and came down off the stage, a guy from Amazon approached me and he said, "Hey, we're really interested in what you're doing. Can we talk more?" I gave him my card and that's how it all happened.

[00:28:33] SY: Wow! 

[00:28:34] NS: We were just at the right place at the right time, I guess. Of course, then we went through months of due diligence. They came out to visit the team. We had a whole interview process because not only did they acquire the company, they hired the entire team pretty much to come on.

[00:28:52] SY: That’s great. 

[00:28:53] NS: And build it. 

[00:28:54] SY: So what happened after the acquisition? What kind of work did you end up doing at Amazon? 

[00:28:59] NS: Partpic became Part Finder at Amazon, which we ended up launching. So we had to port the code over to Amazon’s framework, which required pretty much a total rebuild of the app. We were able to keep our data and our images, but we had to store it differently and we ended up launching Part Finder in June of last year. 

[00:29:23] SY: So what are you working on now? 

[00:29:25] NS: As of a few months ago, I've joined the Amazon Web Services AI Ethics Team. In particular, we're working on a new initiative called Fairness and Faces, so basically evaluating fairness and estimating biases in facial recognition and AI in general.

[00:29:48] SY: Interesting. Okay, that makes me feel very happy, by the way, because that’s one of the things that I think about a lot, especially with facial recognition. How do we make sure things are fair? Right, fairness in AI and how do we make sure that a lot of our systemic issues whether it’s racism, sexism, all that isn’t kind of just built into the technology such that it’s even harder to eradicate and harder to get rid of? So, I’m really excited about your new role.

[00:30:14] NS: Thank you. 

[00:30:14] SY: What kind of work do you get to do? 

[00:30:16] NS: It is a hodgepodge of a lot of things. This is a brand new initiative. It involves a little bit of working with PR. It involves working with public policy, because these are regulations that Congress is very interested in, especially using Amazon as an example. It involves my core, which is machine learning. Actually, it takes machine learning to evaluate other machine learning. So we have algorithms that are coming along to help you understand, okay, why did the model recognize this person as this person, or why did it tag this female as a male instead of a female, for example? Are we using diverse data sets? Is the data set biased? Are there biases in the algorithms and the attributes that the algorithms use?

Technically, there’s so much we can do, not just Amazon but a lot of companies are already shifting towards how do we improve and how do we make sure that certain groups aren’t left out? How do we make sure that government and public policy is there when it’s needed? And so, again, my role requires a lot. 
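
As a tiny illustration of what using machine learning to evaluate other machine learning can look like, here is a sketch that compares a face-matching model's false match rate across demographic groups. The records and threshold are entirely made up; this is not an Amazon tool.

```python
# Minimal sketch (made-up data): audit a face-matching model by comparing its
# false match rate across demographic groups.
from collections import defaultdict

# Each record: (group label, model similarity score, whether it is a true match)
results = [
    ("group_a", 0.91, True), ("group_a", 0.62, False), ("group_a", 0.55, False),
    ("group_b", 0.88, True), ("group_b", 0.74, False), ("group_b", 0.71, False),
]
THRESHOLD = 0.7  # scores at or above this count as a "match"

false_matches = defaultdict(int)
non_matches = defaultdict(int)
for group, score, is_match in results:
    if not is_match:
        non_matches[group] += 1
        if score >= THRESHOLD:
            false_matches[group] += 1

for group in sorted(non_matches):
    rate = false_matches[group] / non_matches[group]
    print(f"{group}: false match rate = {rate:.0%}")
# A large gap between groups is a signal to revisit the data, the labels,
# and the decision threshold.
```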

[00:31:36] SY: So at the time of this recording, there is a New York Times article that just came out called "As Cameras Track Detroit's Residents, a Debate Ensues Over Racial Bias." One of the examples it talks about in the article is that there are more false matches for African-Americans than for white people, and things like algorithmic bias essentially not doing us any favors. So can you talk a little bit about some of the issues that are brought up in facial recognition?

[00:32:03] NS: As you mentioned, that's one article of probably a hundred now that target this technology as well as other AI technology. Often, when I give talks, I show the example of my face in a video chat. If there's a bright background behind me, like if I'm sitting in front of a window and the sun is out, it usually just creates a silhouette of my face in the video camera. And so I often show the screenshot and the difference between when it's me versus when it's someone who's more light skinned than me, because it usually works pretty well for my colleagues who are light skinned.

I show that example because, as engineers, we develop technology and it's really important to make sure that you have a very diverse group that goes into the design, and to make sure we're also testing on unbiased data, so that things like that don't happen. That's why this is very important. That's why it's important to increase the diversity of your tech teams. That's why it's important to get input from a very wide group of people with different backgrounds. I think that's also why it's important to make sure that you test extensively before releasing products. And if you don't, then you may get things like the articles that are coming up. Now granted, there's a lot that's not being said about the technology itself in some of the articles, and there's a really great talk given by my colleague at Amazon, his name is Pietro Perona, at the re:MARS event. He talks about mitigating bias in AI, and there's room for bias in almost every aspect of the product.

There's room for bias in the data. There's room for bias in the algorithm, and there's room for bias in the testing. There's even room for bias in who's labeling the data, like who's actually saying that this is this type of person, this is short hair, this is long hair. A lot of that is relative. So you have to be really careful about the constraints and the guidelines that you give those annotators as well. So yes, there are lots of problems, and this makes me feel both sad and happy. One, because how did we get this far without having these types of initiatives in place, like my current role? But two, it makes me really happy because, as an engineer, there's so much room for improvement. I see so much room for impact. I definitely plan on doing all I can to help improve and mitigate the biases and understand how fair our algorithms are. It goes beyond Amazon. Even if I weren't at Amazon, this is something that I'm interested in and very, very passionate about.

[00:35:00] SY: Do you think this is going to be solved by tech companies or by the government or a different institution? Because I imagine that tech companies probably don’t have a huge incentive to frankly regulate themselves. Hopefully, they do. Hopefully, they want to because it’s ethical. It’s the right thing to do, but financially, I imagine it’s not really a huge driver but the government has a much bigger incentive to solve this problem and to make sure that they’re doing the right thing. So, where do you think the initiative is going to come from? 

[00:35:30] NS: So I think all of the above. As of now, there have already been Congressional hearings held on the topic. It's actually pretty bipartisan. And so I don't think government alone is going to be the main driver; I think it's going to take everybody coming together to work on it. I think there's a lot of push coming from people who are on the public policy side as well as the technologists, the engineers, members of government, and the regulators, all of the above. Everyone has to be at the table to come up with realistic solutions that are enforceable.

[00:36:06] SY: (Music) Now at the end of every episode, we ask our guests to fill in the blanks of three very important questions. Dr. Sephus, are you ready to fill in the blanks? 

[00:36:19] NS: Yes. 

[00:36:20] SY: Number one, worst advice I’ve ever received is? 

[00:36:23] NS: People would always tell our startup to move to the West Coast, but we wanted to stay in Atlanta. The cost of living is a lot cheaper. It's a whole different culture. I feel like it's much more diverse in the Southeast, and we have smart people here, too. So why not?

[00:36:42] SY: Absolutely. Number two, my first coding project was about? 

[00:36:46] NS: Outside of the standard "Hello, World," I think my first big project was doing a website for my church.

[00:36:53] SY: What did that website look like? 

[00:36:54] NS: It was probably the worst website I’ve ever built, but they loved it. 

[00:37:01] SY: What did it do? What did it look like? 

[00:37:03] NS: It looked like one of those 1990s websites even though it was in the 2000s. It was just a standard page with the address; it had a Google Calendar embedded in it and there were some YouTube videos embedded in there.

[00:37:17] SY: Well, it sounds very functional. 

[00:37:19] NS: Yeah. 

[00:37:19] SY: Number three, one thing I wish I knew when I first started to code is? 

[00:37:24] NS: I have the same opportunities available to me as others. I mean going in there with that mindset instead of feeling like I was always a step behind. 

[00:37:34] SY: Absolutely. Well, thank you so much, Doctor Nashlie Sephus, for being on the show.

[00:37:38] NS: Thank you.

[00:37:39] SY: (Music) This episode was edited and mixed by Levi Sharpe. You can reach out to us on Twitter at CodeNewbies or send me an email, hello@codenewbie.org. Join us for our weekly Twitter chats. We’ve got our Wednesday chats at 9 P.M. Eastern Time and our weekly coding check-in every Sunday at 2 P.M. Eastern Time. For more info on the podcast, check out www.codenewbie.org/podcast. Thanks for listening. See you next week.

Thank you to these sponsors for supporting the show!
