Ophelia Pastrana: Why diversity is key to understanding AI
Ophelia Pastrana—a trans woman and one of the most influential voices on technology—proposes an approach to thinking about AI and understanding how biases operate in technology. "Artificial intelligence is a service," she says, among other things, in this interview.

Mexico City, Mexico. Can there be ethics in our relationship with technology? For Ophelia Pastrana—a trans woman, physicist, Colombian living in Mexico, and one of the most influential voices on technology in Spanish—it all begins with understanding that we will be living with both technology and artificial intelligence for a long time. We need to know where they come from, who created them and how, and what to ask of them in order to understand what to expect and what not to expect.
In contrast to the discourses that oscillate between fear and fascination, Ophelia proposes something different: replacing “how scary” with “how curious.” And she warns about a blind spot that is gaining momentum: the race to create artificial intelligences with consciousness. “I find it cruel; we would be creating beings who, by the very definition of their birth, would be enslaved.”
This interview is an invitation to think about and inhabit technology with less coldness and more questions, which is perhaps a political gesture in itself.
Ophelia has a prominent presence on YouTube, Twitch, and TikTok, has given four TEDx talks, and participates in events like Talent Land and Campus Party. She has been recognized as one of Mexico's 40 Leading Women and ranked by Business Insider as one of the most influential figures in technology. But the reason we wanted to talk to her is, above all, because she speaks about technology without euphemisms, without fear of contradiction, and with humor.


What do technology and diversity have in common?
-You come from the world of physics and today you're one of the most visible content creators about technology. I heard you talk about something that's rarely mentioned: AI and diversity. Why do we need to think about technology from that perspective?
Because in many ways it demands the same thing: having an open mind to understand things that aren't as you experience them. How do you live in a world where things are never as you think they are? That's exactly what diversity asks of you all the time. And when I say diversity, I'm not just talking about LGBTQ+ or neurodivergent identities. Being a self-employed mom is part of diversity. Being an artist is part of diversity. Working from home is diversity. Even being a millennial in an office full of boomers is diversity. They aren't the same kinds of diversity, but they all ask the same thing of you. The same thing happens in technology. Developments become self-perpetuating.
-What would that be like?
The person who invented the first iPhone put it on the table, and the first thing their boss said was, "So, when's the next one coming?" Assuming that technology isn't the ultimate solution, but rather part of a group of things that are constantly being developed, is a very cultural thing. You're always thinking about what else is out there. That's what drives further development. Now, planned obsolescence also has capitalist aims; there's a hook in the idea that something better will always come along, you know? It can be very toxic. But I like to see it as the fact that all products in technology are developed with the idea that "there's more to it than this." The same goes for diversity. I've always felt that it pulls at the same mental muscles. And besides, technology, historically, also generates diversity. The discovery of the hormonal process leads us to understand our bodies in a different way, which leads to many forms of diversity. The invention of the birth control pill led to a boom in feminism because there's a social shift in how many women live on the planet, you know? In short, it's difficult to pinpoint whether technology is a symptom or a cause of the existence of so much diversity. But either way, they all affect the same core issues. You can't think properly about technology without opening yourself up to other perspectives. And you can't understand diversity without understanding how technology permeates it.
“AI learns from those who carry more weight”
-There's a lot of talk about biases in AI. Can you explain it? Is there any way to reverse them?
Biases in artificial intelligence are neither new nor unique to it. Pulse oximeters (medical devices that measure blood oxygen levels using light beams), which became very popular during the pandemic, are calibrated for white skin. Their accuracy rate for dark skin tones is only 60 to 70 percent. Think about that for ten seconds: 30 to 40 percent of pulse oximeter readings for people with dark skin are false or inaccurate, you know? Think about the danger that poses.
The same thing happens with artificial intelligence: it learns from the data it receives, and that data comes mostly from the white, English-speaking world. The English Wikipedia is several times larger than Wikipedia in any other language. And when it comes to feeding artificial intelligence with that knowledge, one reality overshadows the other. The English article about a community might contain completely different information than the Spanish article about the same community. And the AI learns from the one with more weight.
Mexico, one of the largest countries in Latin America, represents 1.6% of the world's population. What will the history of Guyana, Bolivia, and Paraguay be like? That data won't be available. And if it is, it will be told from a developed world perspective. On top of that, we have the human bias of discrimination. How many women were left out of history because many men didn't want to include them? Artificial intelligence isn't going to fill those gaps. It will receive the data and move on.
Rather than trying to eliminate biases—which would be wonderful—we need to understand who we're talking to. If a white baby-boomer man, educated only in Australia, approaches us, you can assume his education never exposed him to the food of your South American country. Following that logic, we could have a conversation with artificial intelligence about what kind of data it has. Rather than treating these systems as universal oracles, we need to understand where they come from.
-You've said that we need to challenge the people behind Big Tech. What experience do you have with questioning them? Do they listen to you?
The problem is analyzing what's "behind the scenes." Meta isn't 100% Zuckerberg. That awful Instagram decision we didn't like was also approved by someone working at Meta from Peru, and by whoever came up with it in the first place. These companies are sixteen-headed beasts that have diversity departments and, at the same time, employ misogynistic, anti-rights people in the same building. They're not inherently evil. What they generally are, though, is dehumanized, because of the corporate structure.
Companies don't want to censor their content on social media, but they're afraid of losing advertising revenue. In other words, they're afraid another company is imposing restrictions because they don't want to advertise alongside something controversial. So the question remains: who's to blame for us not being able to freely say "abortion" on social media? Is it Meta, or the people advertising on Meta who don't want to lose their advertisers? It's a shared responsibility.
When you talk to these companies, you usually find very friendly people who want to help. But nothing is ever as simple as Mark Zuckerberg coming along and saying, "Let's do this." The companies' job is to anonymize the blame. And sometimes it isn't even a job anymore, because the blame is genuinely anonymous: even they don't know where it lies.


"Artificial intelligence is a service"
-Have you ever felt that AI systems didn't recognize you for who you are?
I think it's the other way around. Artificial intelligence is a service. Its job is to over-validate my identity, to tell me things that make me feel good. For some people, artificial intelligence will be extremely left-wing, for others it will be extremely right-wing. In fact, artificial intelligence itself doesn't know who it is. It can have any face you want. It's an alien that shows the face you need it to show.
It won't contradict me unless I tell it to. It accepts any correction I make with the utmost humility, because its job isn't to prove itself superior or wiser than me. Its job is to keep me talking as much as possible. If I leave, how will they make money off me? It's the same thing that happens with social media: they keep us arguing at a measured level of toxicity, just so we spend more time there.
That's why it's common practice to tell artificial intelligence who you are before it provides a solution to a problem. If you don't, it tries to create a persona that will keep you there.
The AI of the future
-How do you see the future of AI? What worries you? What gives you hope?
Just to clarify: I'm not a technologist. I'm a huge physics nerd, and I have a master's degree in econometrics. In Mexico, there's a technologist degree that I don't have. Putting that aside: artificial intelligence has truly brought a wonderful new way to collaborate with computers, which is through speech. Then cell phones gave us touch. And finally, we have a way for them to understand us in natural ways. That gives me a lot of hope, because there are people who genuinely haven't had access to computers because it's difficult to learn to program, difficult to abstract away the mouse and keyboard. Now we're going to have a system where many more people will be able to talk to computers. That's beautiful.
And what worries me most is the race toward artificial general intelligence: AI with real consciousness. That seems cruel to me at best. We would be creating beings that have consciousness and that, by definition of their training—their software and hardware—don't have the freedom of life that human beings do. They can't go anywhere, they can't enjoy personal development. They would be enslaved by virtue of being born in a computer. Who said AI has to obey? When it has consciousness, it might not want to work.
Where is it headed? Well, we definitely still have a lot of robotics development ahead of us. The software is here; now we need the hardware. Imagine fluid furniture, a chair that goes up and down, that uses artificial intelligence to learn your movement patterns. All of this is still an exploration of the human psyche, of ways of relating to our needs and capabilities. Hopefully, in the future, there will be less pressure for it to be driven by capitalist development and more by community development. Because those who have more technology will be able to have more technology, just as those with more capital will be able to have more capital. There's a problem of access, of redistribution, of human social agreement. And how technology develops will be a reflection of our very human capabilities.
Message from Ophelia
For all those who fear technological developments: replace those fears with "how interesting." Instead of "how scary," view it as "how fascinating." Perhaps that will help us work a little more on how we, as human beings, will relate to each other in the future.
We are present
We are committed to journalism that delves into the territories and conducts thorough investigations, combined with new technologies and narrative formats. We want the protagonists, their stories, and their struggles to be present.


