“I started looking at responsible AI as something to move into and to focus on about four or five years ago, and in part because I was trying to think about how do I make sure that my sons are prepared for the workforce of tomorrow? Are we doing the right things to ensure that they’re learning and getting exposed to the right information, experiences, and skills to be ready to compete?” — Diya Wynn
Show Summary
In this episode of “Stories from the Field,” we dive deep into the realm of Artificial Intelligence, exploring perspectives not often discussed by the media. Our guest, Diya Wynn, a Senior Practice Manager in Responsible AI, sheds light on the lesser-discussed aspects of AI’s impact on our world. From uncovering hidden challenges in designing and using AI responsibly to examining its potential to address pressing societal issues like racism and corporate inequities, Oana, Jason & Felicia interview Diya to gain more insight into what this new world with AI looks like. Join us as we navigate the complexities of AI with a focus on inclusivity and empowerment, and discover how individuals and industries can champion a responsible AI revolution.
Learning Highlights from this Episode:
- Her experience as a technologist and responsible AI leader at AWS.
- How we as new users can champion an approach to responsible AI products.
- What it looks like to empower individuals with their use of responsible and ethical AI.
Hear the Full Episode On:
About Diya Wynn
Diya has more than 25 years of experience as a technologist and has been at AWS for six years. She is the Senior Practice Manager in Responsible AI heading the worldwide practice dedicated to customer engagement. Diya has honed her skills in scaling products for acquisition, spearheading initiatives for inclusion, diversity & equity, leading operational transformation across various industries, and leveraging the historical and social contexts that influence her approach to responsible and inclusive AI/ML products. She also meets with legislators and policymakers globally to provide a perspective and influence imminent regulation and policy on AI. Diya is an international best-selling author and speaker at industry and DEI events across 15 countries, including Ukraine, Belgium, Australia, and the United Nations General Assembly. Her model of leadership and advocacy has been recognized with the Sungard AS Pathfinder Award (2016), AWS Inclusion Ambassador Award (2020), Makers Influencers & Innovators in STEM (2021), ID&E Technologist of the Year (2022), 100 Brilliant Women in AI Ethics (2023), and she was a finalist for the Women in IT Advocate of the Year (2020) and VentureBeat Women in AI – Responsible AI (2022). When she isn’t working hard on the future of AI and emerging technology, she’s working hard to influence the future. For Diya, this starts at home with her two sons, whom she encourages to color outside the lines, defy the odds, and break boundaries. She also likes to travel, learn about other cultures, and engage with people.
Resources & Links
LinkedIn: https://www.linkedin.com/in/diyakwynn/
Full Transcript
Felicia Scott: Hi, Diya. Welcome to Stories from the Field, a Firefly podcast. And so just going to hand it right over to you, because we want our audience to get to know you. So tell us a little bit about your background, your passion about your field, and then we’re going to jump right in and have a fantastic conversation today.
Diya Wynn: Well, thank you so much for having me, Felicia, and all of you at Firefly. I am excited about the opportunity to be able to share with you and your audience.
I am Diya Wynn and I have had the pleasure of working in technology for over 25 years or so. I pause because I think about that, right? It’s been a good little while, but I’ve enjoyed the opportunity to be in technology and it’s brought me to the place where I am now, where I get to work on and think about constantly how we influence technology in ways that actually are beneficial to people and to society, and in particular to those that are often marginalized and underrepresented. And I love the work that I get to do, but I also recognize it’s very important work, because there are not enough folks actually thinking about this: what does technology do, and the ways in which technology is reshaping how we engage and interact in the world. Someone needs to be paying attention to that. And then it means something for what we do in the future.
So I enjoy the privilege that I get to lead a body of work that is helping other customers and enterprises and startups think about the way in which they’re building with artificial intelligence and machine learning. And I’ve been with my company for over six years now, doing this, and prior to that I was working at various startups and early-stage companies, helping them to build their products in preparation for acquisition. So usually on the side of things around product engineering, as well as professional services and sort of customer implementation. So a good set of background and different skills and expertise that I bring to the conversation. And I think that that matters as we start to look at this.
Jason Rebello: Diya, again, it’s an honor to be able to talk to you and get your insights on all the topics that we’re going to cover today. And specifically around AI, we hear so much about AI in the news these days. It’s unavoidable, especially in relation to doom-and-gloom stories around what happens if it’s left unregulated. The end of the world, who knows, right? And even as early as today, right, there are seven AI companies that agreed to, quote unquote, “safeguards,” whatever that means and however we define that, under some pressure from the White House. But I guess what I want to do is create an opportunity for our listeners, people in our space and even just the general public: what’s not being talked about that we really need to be aware of as it relates to this technology and this technology being released into the wild, as I like to say?
Diya Wynn: Yeah. So you’re jumping right into the deep end with this question, right? But let me first say that I don’t buy into this rhetoric or belief that we are going to see the end of our world, right? This sort of dystopian, destructive perception that is being communicated. And that’s often the case. I remember telling people, and I use this as an introduction to some of my conversations: when you think of AI, what’s the first thing that you think of? And somebody will talk about movies like Terminator or I, Robot, right, or, most recently, what was it, M3GAN, right? These movies and what we’ve done in the media sort of create this relationship where people’s expectation is that AI means that we are going to inevitably have systems that will think on their own and destroy the world, right, or kill us, right? And that’s usually the narrative that happens in some fashion, right? Robots go rogue and we’re all going to die.
And now, you have some prominent individuals that actually are speaking like that as well. And I think it’s unfortunate because the reality is that we’ve been using AI for a long time. We interface and interact with it every day, most of us do in some fashion. Unless you are still holding onto your phone that’s attached to the wall, you are probably interfacing or interacting with AI. You call customer service and there is an automated voice response system that is detecting what you say and then responding to that and routing you to the right conversation. If you’re using your phone and any of the voice assistants that have become common, it is directing us or responding to our requests. If we are traveling somewhere and using GPS on our phones, it is navigating us through traffic and certain things.
All of that is AI at work, right? If you write a text message or an email and get automatically suggested words that you should use, that’s AI at work. And so I think one of the things that’s not being talked about is just the many ways in which it’s being used and how those are safe use cases we probably aren’t actually thinking about, ones that have been helpful in making us more productive in some ways, right? Helping facilitate certain activities, giving us improvements and things… giving us access to capabilities that we didn’t have before. And unfortunately, in media, in our movies, our news, et cetera, we talk a lot about and overemphasize the bad, overemphasize the negative, and not some of the things that we are seeing in the way of good benefit. And so I don’t believe that we’re going to be extinct. We’re not on the verge of extinction, but I do think that there are some very real things that we have to pay attention to.
And this question about regulation and the move that you referenced happening today at the White House is important because we know absolute power corrupts. And when we have technology with such great power and such great capability, there is a way for there to be corruption. There’s always a duality in play where something can be used for good and something can be used for evil. I use this example, I can take a hammer to build a house and you would say that’s a good thing. But if I take a hammer and hit you across the head, right, and injure you, that’s a bad thing. It’s a tool. And so we actually have to have the constraints, the regulation, the guide or guardrails in place so that people keep using it as a tool that is beneficial and not one to harm.
Jason Rebello: I love that. I always use the analogy of even just nuclear energy, right, and how, on one side, it can be used to create weapons of destruction or on the other hand, it can be used to create regenerative energy, right?
Diya Wynn: Absolutely, yeah.
Jason Rebello: Right.
Diya Wynn: And there are risks with that, right? Some people could say a slippery slope. There’s a fine line, right, that we have to cross. And absence of those guardrails, how do you know someone’s not going to build a bomb and use that to annihilate a segment of population, right?
And so regulation is important, and I think that the steps, right, and the guardrails, the standards that they’re talking about having our big tech companies, including mine, start to follow, I think are important ones. And there should be some agreement across the board because, look, we’re talking about organizations that are businesses looking to one-up one another in a space. And that’s true even when we think about this in the context of countries and regions who are playing in this technology space. We want to be better than the other. And so having a place where we have some common baseline, some common standards that everyone is agreeing to hopefully will create a more level playing field, where we don’t have one who is just willing to run at all costs, not considering the cost, right, or the true cost or implications of that.
Oana Amaria: Yeah. It’s so interesting to us to have this conversation because, you know, in addition to being a subject matter expert in this space, you’ve also come at it from the DEI lens and you’ve come at it through the global lens, which is very similar to us at Firefly; a lot of the work that we do is through the global lens.
If I’m counting correctly, you have at least eight awards that we know about, that you told us about, right? And I’m saying this, one, because I’m just going to be your cheerleader because you’re amazing, but also because I want our listeners to know like what it means when I’m going to ask you this next question, right? So when we think about how much you have done in this space to champion responsible and ethical AI… And ironically, I don’t know if you saw this, Diya, our very first podcast was around AI also. So I feel like this is very much… Our birthday at Firefly is at the end of the month and I’m like full circle, we’re coming back, you know?
But help us and help our listeners really think about what it means to have an approach to responsible and inclusive AI and machine learning. I think people also don’t know the difference between the two, which we’re not going to deep dive into, but what does it look like? So my assumption is it’s similar to equity and design questions, right? Like what is that approach like for products, for companies who are going to try to one-up themselves? We just saw in Brazil, right, they bought a whole facial recognition biometric system for São Paulo, and we know all the flaws, we teach all the flaws in our programming, right, and the issue with bias and racism. So what does it look like? Help us understand the positive side or what could be done.
Diya Wynn: Right. And thank you for making the connection. I come at this work first as a technologist, right? That is my background, in computer science and math and have been working in the field for a long time. But it’s also hard to ignore, right, the opportunity or ignore, let me say, my own background and what I bring, and recognizing just the ways in which technology can have an impact on our experience.
I’ll answer the question, but let me just share this. I started looking at responsible AI as something to move into and to focus on about four or five years ago, and in part because I was trying to think about how do I make sure that my sons are prepared for the workforce of tomorrow? Are we doing the right things to ensure that they’re learning and getting exposed to the right information, experiences, and skills to be ready to compete?
And I think that’s important, especially as a person of color recognizing that I have two Black sons and I want them to be able to hopefully have a different lifestyle and experience and upbringing than the one that I experienced. So I’m proactively thinking, intentionally thinking about what we need to expose them to. And one of the things that I found was that there were three sort of trends or ways that are shaping and adjusting the way in which we work, and what work will look like in the future. One was the sort of notion of data, right, and data driving business and culture. Two was around artificial intelligence, and of course robotics gets related to that. And three is this sort of extended reality and virtual worlds, and how these three areas of technology or focus are reshaping what is the workforce of tomorrow. And there was an absence of us, people that look like me or that look like my sons. And then to see the negative impact of that absence and what that results in.
So like you were describing, some of the challenges with biometric systems, or just the reality that someone may have less access to resources, information, data, et cetera because of bias or because of some unintended consequence of the system. And so that was my leap into this, saying, “Well, then we’ve got to do something about this,” right? And now, being at a company like the one where I work, I have an opportunity to put forward an idea and an approach to be able to address this.
And so fundamentally, one of the things that I focus on is that when we talk about building inclusively and responsibly, we’ve got to keep a people-centered focus. And I’m sure that that is core to what you talk about as well. We have to recognize that people are at the center, or have to be at the center, of what we’re building.
It’s ultimately what makes us successful as businesses, both those that work in the business and those that receive the benefit of the products and services that our businesses build. We have to keep them at the center, and that means that we have to consider the experiences they will have, right, as a result of our products and our services and the benefit that they might receive.
And in order to fulfill that, we have to consider: do we have the right people, perspectives and voices at the table? So in addition to being people-centered, we have to recognize this as a strategic sort of initiative, right? We’re talking about transformation, digital transformation essentially, but in the context of the cloud. And so any transformation from an organization perspective has to be driven with the right kind of leadership support, right, that is driving that from the top down. And then it gets infused into the organizational strategy or technology strategy, as well as the ways in which they are establishing goals and building their products.
Now, there are some very practical things that companies or businesses, teams, can start to look at as well, right? We talk about inclusion. Well, if we have sort of top-level leadership support, then we want to make sure that we also have value alignment, right? Are we thinking about the ways in which we’re going to employ the system or use the product or deliver this service in ways that are consistent with what we say our core values are, our mission and our vision? And there can certainly be misalignment because we’re not thinking about, well, how could this impact the people that we want to serve if we use it in this way? And so really unpacking that and having that discussion.
Then the inclusion piece of this, and I think this is an interesting place: it’s not just, are we thinking about all the consumers that are going to be affected, or our stakeholders and their products? It’s also, are we bringing the right people to the table? And there are certainly user-centered, human-centered design techniques that help us, one, consider the right personas and their characteristics, and anti-personas, right, the ones that we don’t want to serve, and make sure that we’re accounting for those as well, as well as bringing in customer voice, right? How do we do that? What does that look like?
Some of this is tried and true. We have practices and principles that already allow us to do this work; we just need to be applying them in the right ways and thinking about that, because the inclination has been to build for the majority. I use this example often when I talk about seat belts. Seat belts, when they were first created, were designed for, and based on, male bodies, because at the time, in the 1950s, 80% of the drivers were men. They didn’t think about the 20% that were women. They didn’t think about the other stakeholders, the children that were in the car.
And so as a result, they found that in some cases, they were seeing more severe injuries and even deaths to women and children, while they were potentially able to decrease the number of deaths for men. Why? Because they designed and built for the majority. And so where we often find some of the areas of bias and disadvantage and those sorts of things is on the outliers. And so we can’t just design for the 80%, we can’t just build for those; we have to be thinking about all those that would be impacted. And that speaks to both, right: what is intended for good, because seat belts are certainly a good thing, but also what had an unintended consequence that we now have to start to look into and inspect so that we understand it.
And that’s one of the challenges. And if I go back to your first question earlier, about what’s not being spoken about: there is an unintended consequence, some of which we haven’t even begun to understand or may not be able to unpack at this particular time, but there’s always some unintended consequence of the choices that we make or the systems that we use, the technology we employ. It’s just like taking medicine, right? Remember that list of all the side effects that they give you? It’s like, you could have dry mouth and you’re going to dream bad dreams, et cetera, et cetera, but it’s going to help you with your stomach upset. Well, I mean, those are unintended consequences. Or an even better example would be chemotherapy, radiation, right: it kills the bad cells, but it kills some of the good ones too, right? There’s an unintended consequence.
And while we better understand some of those implications now, we didn’t always at the beginning. And I think that that is probably, if I go back to that question of what’s something that we’re not talking about: there is going to be a cost, right, to the ways in which we’re using technology.
And we’ve seen some of this over time, like with industrialization. We thought it was great that we started having food that was being mass-produced, and it made it easier so that the women that were workers were no longer in the homes and they could still ensure that their families got food, but now they’re doing that and they’re not home-building. Now, well, what did that do for us? Well, now we have increased rates of obesity. We are pumping chemicals into our systems all the time. Those are the unintended consequences of something that happened in industrialization that was a benefit to us.
But there are things that we can do now, right, in the ways and some of the stuff that I was talking about: thinking about the outliers and how we build and design for them, including the right voices, so that we can start to get our arms around where we know there’s risk and start to anticipate some unintended impact, so that we can hopefully better contain the blast radius of negative results while we have the greater benefit from the technology.
Oana Amaria: Yeah. Do you mind if I just double click into something that you said that I think is really important? So we actually share the seatbelt example all the time in our session and like the height difference, and like we talk about like what’s the average height and all that, right?
And I think just recently, when we talk about equity and design, right? Something that you shared about making sure that we have the right people in the room. I, for example, use ChatGPT all the time to play the divergent thinkers. So I’ll say “Pretend you are…” and then I add all the people that would be the naysayers, and then “Give me a counterargument to…” Right?
And what I think is really tough is we are actually in rooms with a lot of executives, with a lot of business process leaders, right? And the pushback is always like, get it done, get it fast, right? Fix it, band-aid it, sense of urgency, all this stuff, right, that’s never-ending. Or, as you know, there are different models in organizations where it’s like you’re either moving up or you’re moving out, right? And so there’s the stack ranking, and we’re not set up for what you’ve described or what we preach, right, because the enterprise incentive is that progress is more.
Diya Wynn: Yeah. Well, that’s why I talk about this having to be a strategic imperative. And so here’s one of the unfortunate things, right: we saw a lot of attention on inclusion and diversity over the course of the last couple of years, and now we’re seeing some decline. People are being impacted by the economic situation. We are losing ID&E resources that had been focused on that; they’re often the first to go in some of these cuts that we’ve been seeing. And where we saw increases in diversity numbers in companies, those numbers are decreasing this season. And so there’s a real challenge there, right? But when we talk about sort of this transformation that’s necessary, that’s one of the reasons why I position it as such.
So you will find that I will talk about this topic differently depending on the audience. My objective is the same, but I have to recognize my audience. So as you mentioned, a lot of times I walk into rooms where I’m the only person of color, I’m the only woman. 95% of the time, maybe even more, I am talking to white men. My conversation can’t be about race and bias immediately, right, when we come out talking about this.
I have to talk about this from the context of how do we scale their use of AI? How do we meet their business objectives? Well, that means that we have to build systems of trust. How do we build systems of trust? That means that we have to ensure that the products are not going to have a negative effect on your customers. How do we do that, right? And so we walk our way back.
So what do I mean by that? If we’re going to be effective, right, in having this sort of strategic, holistic approach, then we have to think about it as a business imperative that is part of the strategy as it relates to AI, that the company, the enterprise, is now rolling out and that the customers and the leaders are buying into.
This also requires on some level a cultural shift, right, because we’re saying that we want you to put things in place and have this system or a culture of responsibility, just like some of what we talk about with ESG, right? When we care and we’re designing this culture now that is responsive to the needs of our customers, that’s responsible about the impact that we’re going to have on society, then we do things differently. And our decisions and our decision processes are going to reflect that. So it has to be part of what becomes the values of the organization.
Now, I’m saying that that’s part of the conversation. I recognize that that’s not easy. And I also understand that cultural shifts, in particular in organizations and enterprises, are things that take a while. We were talking about that when we started this whole move onto the internet and into the cloud, right? We’re talking about cultural change. Because we are used to having a server room and a set of servers and systems and you’re like, “Put it in the cloud, what does that mean?” And we had to shift our processes and make adjustments. And so we’re saying the same thing is required for us to actually build in ways that aren’t going to do harm to our people.
Oana Amaria: Yeah.
Diya Wynn: Now, I think that’s what’s needed, right, in order to be able to address this. And again, it’s not always an easy conversation, but when we position it this way, this is going to be a necessity. And here’s the thing: what we mentioned earlier about the White House, and now this sort of need for tech companies to actually adhere to certain standards. Well, the standards are coming, the legislation is coming. So while that is a baseline, a starting point, in some ways we’re going to see that companies are going to have to make some adjustments to meet some of these needs. And I am hoping that the legislation that’s coming is going to really force us in that way and make this conversation a little bit more palatable.
Oana Amaria: I love that. I mean, I just can imagine being in that room and even asking a simple question like, how do you come up with the true need? And having a bunch of sameness in that room, trying to answer that question is very difficult. So thank you, thank you.
Diya Wynn: Well, for them, their need is about the bottom line. So it’s like how do you impact the bottom line? And we can show, right, that if you lose customer trust, that impacts your ability to sell into, right, and will impact your bottom line.
So the sort of angle here, if you will, using that word, is around trust. Trust in the product that you deliver to your customers, trust in the technology, because there’s a trickle-down effect, right? What did they say? A customer’s trust that’s lost is not easily regained. So when one consumer has had a negative experience with AI, right, and that becomes a news story that then gets the rest of the community and environment or public riled up, and they start questioning the technology, that has an effect. So we want to talk about how we continue to build trust, because you want to be able to use this technology, right, to benefit or to innovate. How do you build trust?
And so that is one of the approaches, right, that we use and talk about often. We want them to trust the technology, we want them to trust you as a business and a provider of service and product that is AI-driven. That’s going to open the door for greater innovation and greater opportunity and ultimately impact or affect the bottom line.
Felicia Scott: As you’re sharing, what comes to mind are two things that are in the news: the Hollywood writers’ strike, which started, what, maybe five weeks ago, with writers wanting more and to be able to access the revenue that comes from the streaming systems, et cetera. And then now we have the SAG strike, the actors’ strike. And in the actors’ strike, AI is even part of that conversation, because what was put forth is they want to be able to take an actor, pay them for half a day, which is the SAG rate, $200, and then be able to use their image, their likeness, for however long they want to, whenever they want to. And so you’ve got a person that could potentially appear in hundreds of hours of movies and they got paid $200 for it.
And so you talked about unintended consequences and I think that’s a real thing, but I think there are things that happen and people make decisions because they go, “That’s inconsequential.” So they realize there can be fallout, they realize people can be hurt, but they deem it’s inconsequential based on their revenue, based on the majority, et cetera.
And so what I think a very real part of what we do at Firefly, and I think it needs to be a real part of how we are having this DEI conversation going forward is, what can people do? So what can consumers do to demand more responsible AI? What can we do to not just trust, but check in on where we’re trusting? What are some ways that we can begin to equip ourselves so that we’re not at the whim of when people decide to do what’s right, but we are paying attention and we’re like, “Hey, I see what you’re doing and you’re going to do what’s right and here’s what we need.”
Diya Wynn: Yeah. So this one is interesting, because I say to folks that we all have a responsibility in responsible AI. And one of the first things that I believe is foundational is getting educated. There are a lot of us that have believed the hype, and we are fearful, right? And I think that we need to take opportunities just to be better informed, because that will help us be able to bolster or support an argument one way or another.
I think the other is that there are folks from a legislative perspective that are paying attention to this, trying to figure out, what do we do? How do we put in guardrails? And so we need to be advocating for legislators that are going to be mindful of this, right? So that means that we have to vote for people at the local and state level, right, that are going to be representing our ideals and thought processes around how we contain or control this.
And I think that matters, right? There are people now that are going to be in place looking at this and trying to help make decisions about how we contain, control and put the right guardrails in place. And it can go either way, right? Either that’s going to benefit the actors or be in the benefit of business, right? And those are people. So just as we advocate and vote, we need to be writing in to our legislators with our position or our thoughts about that. But we can’t do that if we’re not close to the issue and we don’t understand it. So I think education is really a part of that.
And I think the other part is about how we change things with our dollars, right? What businesses do we support? What products do we buy? And there’s the opportunity to hold those businesses accountable at some level, right? Like, I want to use something or work with someone or use a product that I know. It’s like the reason why the movement around Ben & Jerry’s was so impactful, right? Because people felt like, I know their position and they care, right? And their voice is going to be heard as well, right?
And I think that some of that is necessary, right, for us to be thinking about: who will we do business with? Now, I know in some cases this is hard, because some of our technology companies are in places where you’re like, “Well, what’s the alternative? What else do I use? What options? Like, I want to get off Twitter, but what else is there? Everybody else is here, and for our business, we have to be,” right? And we perhaps need to create some other options, right? And that’s where the entrepreneurs and the businesses out there come in, considering things like: maybe we build or provide an option or alternative to some of what we know isn’t working.
And then I think the other thing is that in our own ways, I’ve seen us drive, when I say us, people, drive inclusion and diversity initiatives from the bottom up. It’s not ideal, but we have, right? We’ve created affinity groups and we’ve had other sort of common interest groups in organizations because we cared about a specific thing. And I think that as we all become more educated, that we as regular individuals in our businesses can be telling our teams, like, “We should pay attention to this. Those of us that are managers can help influence how our team is thinking and building,” right? And that we start to build the groundswell of interest and responsibility in our organizations on the level that we’re in. And I think that’s another way because sometimes we do have some sphere of influence and control that we can take advantage of.
Jason Rebello: I love that. And pulling that string or that thread a little bit more, we’re constantly in conversations with the VCs and the conversation around what gets funded and what new companies are getting funded. And I often joke with them like, “Please don’t tell me you’re funding another food delivery app, right? The world does not need another… I already have five different ones on my phone. We don’t need investment dollars going into that.” And then pulling back to my analogy earlier, and your analogy of this is a tool, how are we going to use it? What are your thoughts on how we can actually utilize this powerful tool and aim it towards some of these bigger challenges like overcoming these dynamics around corporate greed, oppression, racism, health inequities? Actually utilizing this incredible tool to solve real problems, real challenges, and advocating for the next generation of founders to start building and utilizing this tech to solve real problems that create societal shift and change.
Diya Wynn: So let me give you two perspectives on this. And one is that this is challenging, right? You talk about like these problems and how do we tackle those. And some of what is necessary, right, to have systems that are going to remove the inequities and not be biased I think are very much like some of the conversation we have around inclusion and diversity. We recognize that these are matters of the head and the heart, right? And so while I could give you the business case for why it’s valuable for us to have diverse teams and that we know that diverse teams outperform those that don’t, and we can talk about having diversity on our boards and all that other kind of stuff, you still have companies that aren’t leaning in and doing the work, right? When we know that it increases your cognitive ability and we have better opportunities to innovate, you would think this is a no-brainer, we should be doing this, and why not?
When we have studies like what came out in the Federal Reserve, what, in 2007 or something like that, where they showed that they already know that credit risk rating was biased and that there were inequitable outcomes, but yet we still use it as a foundational model for assessing risk and determining whether or not individuals get access to resources. And you go, “Why?” Well, we know where the bias is.
And so I think that in some ways, like how do we unpack this and actually use it for good? I think we’re going to still grapple with some of the struggles that is in our humanness, right, where we have to be able to unpack and get like one, the sort of understanding that this is good business and valuable to the business, and then on the other side of things, be able to unpack the things that we’ve learned that make this a heart decision, the perspectives that we have that are deeply rooted in our beliefs about people and what inalienable rights and dignity that they have or deserve or the beliefs that have formed us from our upbringing, or our family, and our backgrounds, our culture and our education.
Like all of that becomes a part of this when we start looking at how do we unpack this and remove, right, like the oppression and the inequity. And so from that perspective, this is a bit of challenging work, right, that we, I think in partnership with organizations like yourselves and others that continue to talk about and advocate for inclusion will be part of helping us realize and see.
Now that said, there are some really great use cases even now where people are starting to do some great things to provide access, opportunity. There’s a company I worked with and am familiar with in Latin America, right, that’s trying to remove bias from the hiring decisions and recruiting, and they actually have used AI to mask, like if they use a video, mask the voice of the individual, so you wouldn’t distinguish or know that they were female or not, right, to remove like distinguishable characteristics from their resume so that you wouldn’t know that they were a female or from a certain group, right?
And you could say, well, those are things that could be meaningful so that people hopefully are on the same playing field maybe in reviews for recruiting. You have folks using AI in interesting ways to provide accessibility solutions. There is a company, and I forget the name of it, that with generative AI is trying to provide better sight opportunities for those that are blind, where someone could voice what is being seen to help an individual who is blind have visibility, if you will, right, with words and text. And so being able to improve captioning and speech recognition to help in accessibility situations is stuff that we’re seeing.
You have companies now looking at and exploring interesting ways in order to be able to provide better educational opportunities and personalized learning that help those that might be neurodiverse have better learning aids to support their learning and to overcome perhaps some of those challenges or differences that they might have in learning style, right?
And so there are things that are happening now that help to address some of the inequity or enable–there’s a company that is based out of Lagos that is using AI to provide access to financial resources to the traditionally underbanked. And so there are some real use cases and opportunity there. I think we still need people to be thinking about like not only in those cases, but when we start looking at things like health equity or other places where AI is being used, making sure that we, again, bring the right voices, perspectives, evaluation tools who are looking at fairness because we know at the source of some of the data are opportunities for us to perpetuate additional biases. And so we have to be mindful of that.
Oana Amaria: I feel like I’m processing everything you’ve shared. We often talk about, in our sessions, we go through all the examples of bias, beyond the talent life cycle, right, because that’s where DEI goes to die is like everybody just thinks about it in recruiting. And so we talk about the ecosystem, and there’s a lot. There’s a lot of really cool ways to try to leverage. I think oftentimes for us, in this space, it’s hard to maybe make them match with everything that we’ve talked about in the world and how AI is working and the business decisions being made in those rooms that only you are the only person of color or woman. How do we make a meaningful difference? What’s something that we don’t know to ask you to be able to make that difference, whether it’s these tools that you have shared or…
When you think about us, like us in a room with our clients, for example, right, and bringing this back to the workplace, we read all this stuff. We are part of the AI Justice League and like we’re trying really hard to stay on top of things, right? But like what don’t we know to connect to really make an impact with folks like you? Like if you were in our session and you’d hear our stuff and you’d say, “Oh, that’s cute. Yeah, uh-huh, seat belts,” right? Like can you upskill us a little bit in the sense of like what’s the bam? Like, hit the table. Like, yeah, now I’m listening to Jason. Now I’m listening to Felicia. Now I’m listening to Oana.
Diya Wynn: You know, I was thinking about this and I think one of the challenges is that folks don’t know that it matters to all of us, right? Like there isn’t just an impact, or AI is not just impacting people of color, right? It’s not just that these negative or exacerbated effects fall on those that are from different groups or have different sexual orientations or identify differently than male and female, right, or have differing abilities. AI is affecting us all. And in some ways, there are many of us that are unaware. Perhaps people think that it’s like that’s their problem because it doesn’t affect me. And so there’s ways in which I could perhaps ignore, right, some of the focus that we need to give to this because it doesn’t matter.
But one of the things that I describe is does it matter to you? Or one of the things I use often is, does it matter to you that you don’t curate or you don’t control what you consume, right? That’s not a Black, white thing, right? That’s not an ability thing, right? Because of how we have now designed systems, because of the way in which mass personalization is being driven, right, so much of what we consume is being curated by someone else. Your ability to control what you consume is dwindling. And we are sitting oftentimes in echo chambers. That’s an issue for us all, right?
And so I think one of the challenges, and oftentimes the thing that makes it hard for people to see is because we promote and talk about, “Oh, bias.” And I go, “Well, that’s not my issue,” right? But there are others.
And so that’s perhaps the one, if we wanted to go here, the aha: this isn’t just affecting them, it’s affecting you and what you see and what you consume, sometimes even what you believe, because you’re only getting one voice or one perspective oftentimes, or the curated voice or curated perspective, right, whether it’s what I’m looking at or what I want to look at next on Netflix or what I think I want to buy from whatever store or all the other ways in which it’s tied together to continue to promote and push things your way. And that could get worse, or that could have even more devastating effects. And we’ve already seen some remnants of that in our political systems. And it’s happening not just here, but in other places as well, right? Does it matter to you that democracy could be affected or impacted? Does it matter? Because again, that’s not an issue of Black, white, or even Republican, Democrat, right?
Oana Amaria: Yeah, yeah, yeah. I love that example because it’s similar to how we talk about racism. We do a lot of anti-racism. Like the unlearning that you talked about, right, like the systems we inherit, the reason why our social constructs are the way that they are, and we talk about how racism is poisoning us all. Just because you’re at a different level of the house, doesn’t mean it’s not going to get you. You know?
Diya Wynn: Yeah.
Oana Amaria: And so what I’m hearing you say is something similar. It’s like you may not notice yet, but it’s coming for you. And so I actually think that’s a really powerful point of reflection for all of us in this conversation. So thank you for that example.
Jason Rebello: Love that. I want to take a moment to not change gears, but can maybe shift the energy a little bit towards something you’re working on that you’re actually really excited about, right? I can’t wait to hear what is it that you’re working on that you’re really excited about in this space?
Diya Wynn: I don’t know if I can tell you. You know what, I will… And thank you, I appreciate this sort of shift. It felt a little heavy, right, maybe?
Jason Rebello: Yeah, right. Right. Right.
Diya Wynn: But I am excited about, I think I really get to do some pretty cool things in the sense that the conversations I have, the groups that I get to sit in front of, whether that’s… There’s a couple of legislative sessions and conferences that I’m going to be sitting with congressional folks and others or I’ve had opportunities to go overseas and talk with legislators as well. I mean that’s pretty exciting because these are folks that don’t necessarily have the perspective, aren’t as close to technology. We didn’t like bring in technologists necessarily for Congress, right? We weren’t thinking of that. And hopefully being able to help them unpack a lot of what’s happening so that they can be smart about the legislation that they’re proposing, I think is pretty exciting.
But one of the things I think that’s most exciting to me is, I’ll tell you, there’s a program that I’m working with right now. It is a data science camp, where we’re kicking off a two-week program that was started by Joyce Hunter, a former appointee of President Barack Obama. And what’s cool about that is that it’s creating an opportunity and exposure for high school students to see the world of data and data science. And we’re doing that with sports.
So a focus on sports data science is also in part a partnership with the Brian Westbrook Foundation. And super cool, right, one, to be able to talk about sports. I love football. So I get to share some of the things that we’ve done–AWS–working with the NFL, but that this is creating exposure for students that are 14 through 18 that may… Some people think that they’re going to play football and some may not have the opportunity or going to get into sports, but exposing them to what is the world of opportunity that data provides and then analytics and AI.
And I think it’s pretty cool, and we’re working towards a great program, and I’m always excited about exposure because it is what made a difference for me, right? That girl from the South Bronx, I told you, right, getting a computer in the third grade is what made me say that I wanted to be a computer engineer. And that was insane because I didn’t have anybody that was in computers around me. But it was that exposure, right? And so how do we create other… We can only rise to the level we’re exposed. And now, we’re creating an opportunity to expose them at another level, right? And so I’m super excited about things like that, where we get to open a world of opportunity and hopefully create some new… be a part of the foundation of building some new scientists and some new technologists and new engineers.
Jason Rebello: That is so exciting as, I mean, we all have young people in our life that we are either directly responsible for or intimately involved with helping shape. And hearing a project like that is just really exciting because we all know that that’s the truth. Like the exposure at the youngest levels and this next generation that are going to inherit whatever state we leave this world, and for them to-
Diya Wynn: The mess we leave it in, you mean?
Jason Rebello: … be the next version of them. Yeah.
Diya Wynn: Hopefully not the mess, right?
Jason Rebello: Right, right, right. Whatever state it is. No judgment either way. But I’m really hopeful, even more hopeful for what this next generation will be able to do if we empower them as early as possible in the ways you’re talking about. So thank you for that.
Felicia Scott: Yeah. And to even add to that, I have to tell you, it’s not just young people that are inspired by hearing what you’re doing and seeing you. I’m probably older than you and I am incredibly inspired. When you said you were talking to people in Congress, that lifted a weight because like as we’re seeing a lot of things unfold in the news, you wonder who they’re listening to. So I can tell you, I will rest a lot easier tonight knowing that you’re in some of those rooms. So thank you so much for just being who you are and doing what you do and doing your purpose. It really makes a difference and it matters.
And you spoke about it earlier, like we’re all connected. We can’t afford to think, “That’s not my problem, that’s not my business.” Because in some way, shape or form, because of the link of humanity, it’s all going to come back to you and it’s all connected. So we’ve had a lot of talk about what you do. What we’d like to ask you is, in terms of your work and in terms of the conversations you have and the rooms that you’re in, what would your parting request be for people like Jason, Oana and I — people who are in that DEI space, who are inclusion champions and practitioners? What would a parting request be for us to get the work done?
Diya Wynn: Well, the first thing I would say is keep doing what you’re doing, right? Because this conversation around inclusion and diversity is extremely important, right? And I mean it is, in some ways, the foundation for some of what we want to be able to see in our technology, right?
And I think if organizations have the opportunity to get a real context for the value and importance of inclusion and diversity and they start making that a core part of their sort of organizational strategy, then this conversation, when we talk about how we are using tech, becomes easier, right, because there’s been a foundation laid. So I would absolutely say keep doing what you’re doing.
I think the other thing, just like I said earlier about like how we educate ourselves, I think that… and I’ve advocated for this in some ways, that our inclusion and diversity practitioners can be impactful in this conversation as well, in particular because you all are already thinking about the lens of inclusion and wanting to find ways and have strategies and approaches to how we build cultures that support inclusion and bring and invite those people in, right?
And so I think that you all can be an incredible resource to technologists as well, to teams that are building even in their own companies. Because if we can’t get the diversity, then we could certainly leverage the expertise and the experience that you all have in looking through that lens to help influence what we build. And so I think that’s another avenue where you all could have an impact, but most of all is just being in this work, because like you said, we’re connected. And I think that this is so, so, so extremely important to just what, as you all described, what we leave for the next generation.
Jason Rebello: Can I ask a follow-up to that-
Diya Wynn: Yes.
Jason Rebello: … to drill down even deeper? And this is almost from a selfish standpoint, just from Firefly. You mentioned the fact that you work with early stage companies. It’s something that we are more and more looking at engaging with, right? I’m an alumnus of Techstars, so one of the things that we’re even thinking about is how do we start working with these earlier stage companies that are in the process of building all the tools that they’re building, utilizing AI.
And outside of helping their leaders think about DEI from a general sense, from a high-performing team sense, from a high-trust organization sense, so you can have diverse people — and maybe the way I’ll ask this is: what are the things that we need to upskill ourselves around, as far as knowledge in this area, so we can be even more intentional about… I’m looking at this next cohort of classes. 70% of you are all employing AI or building out your AI tools. What are some of the things that we need to be more knowledgeable around so that we can ask the more poignant questions, whether it’s like how are you cleaning up your data sets or whatever it is, so that we can just be like, “Hey, by the way, you’re not thinking about this, but you should be thinking about these 10 things specific to how you’re building out your AI tools.”
Diya Wynn: Yeah. So I think it’s great to be connected to some of those accelerators and programs, the incubators, right, so that you get to have some early engagement with companies. And I’ve been trying to push that a lot more and doing some sessions for even some of our startup sort of accelerators as well to get them thinking. Because I think the great thing is that a lot of them are in early stage, they’re building something now, and it’s like you want to help influence the design and the beginning.
So if we can start thinking about, like, inclusion by design, right, or born inclusive, that gets baked into, built into, the way in which we build. And so you being able to apply that thought process at the beginning, in terms of how we think more inclusively and build that into our product design, I think is important.
So what is it that you all need to know? I think some baseline understanding of where bias occurs, and how that affects the data and AI is important. And we didn’t really get into this, but one of the things, right, data. The bias occurs in the data and the algorithms and in the people responsible for building them.
And so when we start talking about AI, oftentimes we don’t focus on the latter, right? We put in tools and we say, “Okay, we need to measure for and detect for bias, and we need to address that.” And maybe we’ll do some things in terms of our data collection and start in trying to synthesize the data. But there’s a good part that we can also do in helping people understand, how do I interrupt the bias that I bring? And how does that bias then influence the way in which I look at a problem that I need to solve or even how I analyze the results, right? And so that, you understanding or having this knowledge and awareness of like the process from a… like what’s the overall product process when we’re building AI or machine learning and where does bias occur, I think may help you influence like those decision points along the way. And I think that can be impactful in terms of like elevating their thinking and preparing them for how they might build.
Oana Amaria: I love that. I think I’m going to misquote Paul here, but I’m scraping my brain to four years ago when he’s like, just think about how we define… He was talking about credit or credit worthiness or like even the parameters that are put in place as the weighted variables for the data, right, when you’re doing analysis, right? Which I think is really powerful. And how much of our own lens gets baked in because we can’t envision a different reality because we were born in a specific socioeconomic class or whatnot. And how does that apply to different markets? So you talked about like various startups coming in from Africa or from the underbanked, et cetera, right? So there’s just so much to it that I think is really interesting to think about, right?
Diya Wynn: One of the things that I’ve been using as a… trying to apply as a tool, I use this outside of my day job, but I’ve been trying to pull this in too, is like using the ladder of inference, right, to be a tool in which we help people unpack how they form the biases that they have, so that we could see where there are opportunities.
And because the ladder of inference gives you sort of steps in the process, that for like engineers and technical folks, right, there’s a very sort of methodical way of thinking that you can help unpack, right, to be able to help them interrupt even or question like, “Where did I get that perspective? Am I looking at this in the right way?” To be able to hopefully shift.
And then I think one of the things, and we don’t say this enough, right, but like leaning into and being brave enough, right, to sort of test the boundaries and explore and have conversations with other people, right? I mean, it’s easy for us to sit in a room and we all do it, right? We go into the cafeteria, we go into somewhere in the business and we sit with folks that look like us.
Oana Amaria: Yeah.
Diya Wynn: It’s just natural, right? But how do we push those boundaries a little bit? Because that even will challenge us in our thinking, right, and the way in which we expose, and the way in which we reflect or consider a problem. And it’s something that small, but sometimes it could be something that big.
Oana Amaria: I love that. We always tell people in this session, we’re not covering jerks. Like you think about like the difference between like people, unwillingly, like excluding… Waking up in the morning and excluding somebody is not happening with 98% of people except for that one person, right? And so, yeah.
Diya Wynn: You know what, and that’s true with technology as well, right? Like the vast majority of folks are not going into this and going like, “Oh, we’re going to build a system that will not work for you,” right? Like, that’s not the intent. Actually, it’s quite the opposite. Most of the time, we think that my product is going to be great and everybody in the world is going to want to use it, right? Not recognizing that we kind of built it in this way to fit this narrow or this particular demographic. And sometimes, we do like the right kind of product studies or whatever, but most of this is unintended, right, or just because we’re not taking the intentional action to ensure that we include.
Oana Amaria: Yeah.
Jason Rebello: Yeah, this conversation is so important and so timely, and something that I really feel the broader public needs more awareness around. So I think conversations like this with subject matter experts, with people like you that are experts in this space — not just from the technological and technologist side, but also bringing in your lens — is really important. So people can really start to wrap their heads around the power of this tool, right, and dispel and get the fear out of their minds around it, so that we can start asking the right questions and learning the things that we need to learn, just as the general public, to be able to have more informed conversations around this. So on behalf of Firefly, I just want to say thank you for taking the time to share your brilliance with us; we are filled with gratitude and appreciation.
Diya Wynn: Well, and let me say thank you to you all as well. It was a pleasure for me to be able to share with you and your audience, but I would also say thank you for leaning into this conversation that everyone is saying “We need to talk about this,” and I’m thinking about it. So I’m glad that you all are leaning in and willing to have the messy or challenging conversation. And there’s still a lot more in terms of sort of mess and challenge that we need to sort through. But I am hopeful, right, that having more conversations like this will open the door and create opportunity for us to do better.
Oana Amaria: I love it. Hand over heart. Thank you, Diya.
Diya Wynn: Thank you. I appreciate you all.