Ayodele Odubela is Founder and CEO of FullyConnected, a platform for reducing the barrier to entry for Black professionals in ML/AI. She earned her Master’s degree in Data Science after transitioning to tech from digital marketing. She’s created algorithms that predict consumer segment movement, goals in hockey, and the location of firearms using radio frequency sensors. Ayodele is passionate about using tech to improve the lives of marginalized people.
Ayodele was interviewed for GeoHipster by Mike Dolbow.
Q: Our readers are mostly in the “geo” industry, but many of us consider data scientists like yourself to be kindred spirits. Can you tell us your story about how you got started in tech?
A: It was definitely a shaky kind of start. Like a lot of college students, I wasn’t really sure what I wanted to do. I had been a computer science major, a film major, and even studied athletic training. When I ended up in computer science, it felt close, but not exact. Around 2010 or so, I was coding in C++, but didn’t feel like I was learning. I had a lot of digital media experience from my film studies, so I ended up with a digital communications undergrad degree. That allowed me to work in marketing for a few years. I did some social media marketing, and landed at an app company doing social analytics with A/B testing, in-app messages, and that sort of work. By the time that startup ran out of funds, data science was starting to become more popular, like “the sexiest job of the 21st century”! Going back to school for a Master’s in data science felt like a good next move. Since then I’ve worked for all kinds of companies doing a wide variety of work, like sensor recognition for firearms.
Q: Ethics in tech is having quite a “moment” – or maybe you might say a decade. You’ve been quite vocal on Twitter about Google’s recent firing of Timnit Gebru, as have many others. If you’re a technologist inside an organization that is making questionable decisions, what is your first move? Where do you draw the line between trying to change an organization from within, versus speaking out against it – and probably leaving?
A: I think it comes from having really hard conversations. Hopefully you’re in a workplace where a respectful challenge is seen as a good thing. I’m thankful that I’ve been in workplaces where I’ve felt enough freedom to bring up these types of problems, and bring up difficult conversations. They don’t always lead to change, but at least I’ve surfaced specific issues.
I think for a lot of technologists, the first move is to start talking to management about existing policies. A lot of times people break policies without realizing it. Take policies around things like proper data use and cyber security: we get trained, but we’re human and still make mistakes. We’re not always on top of it.
By first going to management, or a trusted manager, you can start to discover the incentives and the reasons why certain decisions are being made. I think you’ll often find it’s profit or revenue based, and in some instances I’ve been able to persuade teams to change their course of action by generating different processes and systems that don’t have such significant issues.
For example, in a past role they wanted us to create a tool that predicted someone’s gender based on their name. When this was first brought to me, I thought, “this is something we shouldn’t do”. I went digging and applied the “5 whys” technique to the problem.
It turned out, the marketing team wanted more data for push messaging and in-app notifications, because they noticed stark differences between how women and men interacted with the product. So, they didn’t have a nefarious motive, they just wanted more information – but they were still going about it in the wrong way. Instead of using that gender classifier, I created a user classification model to help them with this segmentation.
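The behavior-based alternative described above can be sketched in a few lines. This is purely illustrative: the feature names, thresholds, and sample users are hypothetical, not from the interview; the point is that segments are derived from observed engagement rather than an inferred sensitive attribute like gender.

```python
# Hypothetical sketch: segment users by observed in-app behavior
# instead of predicting a sensitive attribute like gender from names.
# Feature names and thresholds are made up for illustration.

def segment_user(sessions_per_week: float, push_open_rate: float) -> str:
    """Assign a behavioral segment from engagement metrics alone."""
    if sessions_per_week >= 5 and push_open_rate >= 0.5:
        return "power"
    if sessions_per_week >= 2:
        return "regular"
    return "casual"

# Illustrative users, described only by their behavior.
users = [
    {"id": 1, "sessions_per_week": 6, "push_open_rate": 0.7},
    {"id": 2, "sessions_per_week": 3, "push_open_rate": 0.2},
    {"id": 3, "sessions_per_week": 1, "push_open_rate": 0.1},
]
segments = {
    u["id"]: segment_user(u["sessions_per_week"], u["push_open_rate"])
    for u in users
}
print(segments)  # {1: 'power', 2: 'regular', 3: 'casual'}
```

A production model would likely learn these segments from data (for example, via clustering), but the design choice is the same: target messaging on what users actually do, not on who the system guesses they are.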
These decisions are going to be different for everyone. I personally have a much lower threshold for what I’d tolerate before leaving, because I have an intimate knowledge of how badly this kind of thing can hurt people. Once you understand the incentives behind organizational decisions, it’s easier to set your boundaries. Like with Timnit’s firing: if you’re in a situation like that and you realize the organization isn’t committed to being ethical or transparent, it can make it easier to leave.
For me, seeing that situation, where part of an organization that was labeled as “Ethics in AI” went and fired one of their leaders for speaking out, that was kind of the last straw for me as a user. But that can be very scary, especially the closer you get to it. Technologists have historically seen our role as “neutral”, so it’s not fun to think about law enforcement coming to your house because of the job you’re doing, when you’re just trying to tell the truth.
Q: You recently published “Getting Started in Data Science”, which looks like a great way for someone new to launch into your field. Can you tell us more about the book? What compelled you to write it? What will readers get if they buy it?
A: I was compelled to write this book because I had a hard time getting started in Data Science myself. I didn’t have a very technical background, and I was struggling to learn things like statistics and coding in what felt like a vacuum. Once I got to grad school, there was a snowball effect of learning; I began building on prior experiences, and getting help through real-life conversations.
Then when I got into industry, I was shocked by how even the learning from grad school didn’t match what my employer wanted me to do. In this book, I share a lot of industry knowledge that I’ve gained, like managing project deliverables, juggling stakeholders, that kind of thing. Readers get an introductory book that contains a lot of hints and tips that I didn’t have when I started.
Readers will also get a clear path into Data Science, depending on where they’re starting from. They can go the academia route, use boot camps, or some other journey, and I’m giving them details on their path from there, in particular how to leverage the domain knowledge they may already have. People come into this field from so many different backgrounds; it’s nice to transition into it when you already have some understanding of the domain’s important metrics or KPIs. I think the book is especially good for career transitioners, so they can leverage some of that prior knowledge in the next chapter of their career.
(To our readers: Ayodele is generously offering 25% off her book to our readers with the code GEOHIPSTER. –Ed.)
Q: I think a lot of our readers in the geospatial industry will recognize that advantage. There are a lot of us who are well-versed in one or two verticals, and also bring enough of the geospatial knowledge to bear in order to solve problems in those industries.
A: That makes sense; I think if you have any kind of specific knowledge, there are a lot of companies looking for that, so leverage it!
Q: Your experience has spanned from working for travel agencies to drone companies. There are obvious connections with mapping here – ever get sucked into a cartography rabbit hole? If not, is there anything about the mapping space that is attractive to you – or is it just an afterthought?
A: Not so much cartography, but I am very interested in sensor and geospatial data! I am kind of a geo-nerd. I took geology and geography courses in college and loved them. I actually considered switching majors to geology, but then saw how almost all the jobs were in oil/gas industries, and I knew that wasn’t for me.
But I have always enjoyed maps, and have a special relationship with them. As an only child on road trips, I would look at maps all the time as we traveled. When I was at AstralAR, I was playing with drone radio sensor data, and then was exposed to multi-dimensional spatial data for the first time. When I started to work on ML projects that would predict locations of items, that’s when I started to get a deeper understanding of this 3D world that we live in!
A lot of my hands-on work has been on sensor identification and understanding, like knowing there’s a very small range of amplitudes for different firearms. Telling apart a .45 from a .32 caliber weapon is a small change in amplitude, but we can easily differentiate them from other noises, like hand claps or stuff like that. There’s a natural connection between maps and sensor work, so geography is definitely more than an afterthought for me.
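The caliber-differentiation idea above can be sketched as a simple band lookup. This is a toy illustration under stated assumptions: the amplitude bands and label names are invented, since the interview only says each firearm occupies a narrow amplitude range while other impulsive noises fall outside those ranges.

```python
# Hypothetical sketch of amplitude-band classification, as described in
# the interview: each firearm type occupies a narrow amplitude range,
# and other noises (hand claps, etc.) fall outside every band.
# All numeric values here are made up for illustration.

AMPLITUDE_BANDS = {
    ".45": (0.80, 0.85),  # illustrative band for a .45 caliber signature
    ".32": (0.70, 0.75),  # illustrative band for a .32 caliber signature
}

def classify(amplitude: float) -> str:
    """Return the label whose band contains the amplitude, else 'other noise'."""
    for label, (lo, hi) in AMPLITUDE_BANDS.items():
        if lo <= amplitude <= hi:
            return label
    return "other noise"

print(classify(0.82))  # -> ".45"
print(classify(0.72))  # -> ".32"
print(classify(0.40))  # -> "other noise"
```

A real pipeline would classify a full radio-frequency or acoustic signature rather than a single scalar, but the narrow-band intuition is the same.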
Q: Now for something a little lighter – any hobbies you want to share with our audience? What do you do for fun?
A: I’m a really huge hockey fan, and one of my grad capstone projects was predicting hockey goals. I’d love to see the NHL take on embedded sensors for player body positions, and take an exploratory look at the various positions players are in when they score really cool goals. I think there’s a lot of interesting location data out there that we have increased access to as IoT has grown.
Beyond hockey, I have a few personal interests, but it’s tough to pursue a lot of hobbies during a pandemic! I know I’d be kayaking a lot more if we weren’t dealing with COVID-19 right now!
Q: Had you ever heard of GeoHipster before I contacted you? We’re … kind of a niche publication. 🙂
A: No, actually, but I checked out your website and I like your stuff! I noticed that it didn’t feel like it was all boring GIS colors, and I was really drawn to that aesthetic.
Q: As I write this, you’re currently looking for work – I hope that doesn’t last too long! But describe the lucky company that’s going to get you on their payroll. What do they do? What don’t they do? Where do they operate?
A: My ideal employer is anywhere that truly takes accountability and transparency in AI to heart. I’m not picky about specifics; there are so many interesting kinds of data I could work with. I just don’t want to be hampered by having to bring up ethical issues all the time. I hope, with everything that has happened lately, there are more organizations that are truly open to being accountable and transparent, even if it comes at the cost of some profit.
Q: Any advice for our readers, or aspiring data scientists?
A: If you’re an aspiring data scientist, when you’re dealing with data about real people, make sure to frame your work the way you would if the data was about your friends and family. We need to sometimes step away from thinking technically and preserving neutrality, and fix problems that well-intentioned tech has created or made worse. It’s not enough to just be ethical or work on responsible AI; we want to get closer to creating an equity utopia: designing for a world we want to live in, and understanding that historical data almost never reflects that. Every time we use historical data, we’re relying on imperfect humans from the past and their decisions. And that’s difficult when we’re trying to predict the future about a changing society. The earlier you do this, the easier it will be to be transparent and think about fairness in your work.