Meet the Founder Fighting for More Equity and Ethics in AI

In her award-winning book Supremacy, Bloomberg tech writer Parmy Olson recounts the paths Sam Altman and Demis Hassabis took to arrive at the AI apexes they occupy today. For both, early dreams of AI were woven with optimism, idealism, and a desire to solve humanity’s greatest problems.

Of course, what we’re seeing today tends towards the more dystopian, with AI chatbots encouraging self-harm to users suffering mental health crises or offering up details on different sexual fetishes when embedded in a children’s toy (sales of which have since been suspended). 

Naturally, these unsavory use cases have led to a slew of lawsuits. In those involving OpenAI, the platform has taken the well-trodden path of its tech giant predecessors and denied culpability for user behavior on its platform.

Since the early days of the internet, online platforms have largely thwarted taking any meaningful responsibility for the significant roles they’ve played in failing to protect users, spreading misinformation, and inciting violence, claiming instead to be neutral providers of digital spaces that are then misused by malicious actors.

But the algorithms behind the platforms are anything but neutral. And this lack of neutrality takes on a frightening new level of consequence regarding generative AI. 

In a recent blog post from VC firm Digitalis Ventures, Founder and Managing Partner Geoffrey Smith wrote,

“…technology platforms are not neutral, markets are not self-correcting, and innovation without governance produces not utopia but a different distribution of power and harm. The question is not whether we will have rules; we will always have rules, whether written into law or encoded into algorithms. The question is who writes those rules, in whose interest, and through what process.”

When it comes to the rules behind the most powerful technology platforms we’ve ever seen, the answer to that question is overwhelmingly men. In fact, a recent global analysis found that women represented less than a quarter of AI talent globally, with the number dropping to less than 14% in leadership positions. 

Stephany Oliveros, Co-founder and CEO at SheAI, is working to close that gap. Based in Barcelona, SheAI is a platform providing accessible, high-quality education to women interested in learning more about AI. 

In a recent conversation with 150sec’s Beyond the Bubble podcast, Stephany shared what inspired her journey from studying medical physics in Venezuela, to psychology in Spain, to working with tech companies, and eventually to founding her own startup, SheAI. The conversation covers the gender imbalances in the AI industry at large, why women tend to be more worried about AI, what it means for AI to be truly ethical, and how SheAI is addressing all of these issues through its platform and community.

Link to the episode and full transcript below.

Transcript 

Brittany Evansen: Welcome back to the 150sec Beyond the Bubble podcast, where we are sharing the stories of underdog entrepreneurs and innovators who are forging their own paths in emerging European markets. So today we have Stephany Oliveros, who is the co-founder at SheAI. She’s an AI product manager, she has a background in research, psychology, and behavioral science. She’s award-winning, she’s really cool.

And, SheAI is on a mission to bridge the gender gap in AI by empowering women and providing accessible, high-quality education. And as I understand it, SheAI is based in Barcelona, correct?

Stephany Oliveros: Yes, we are here, so we can’t complain about the weather, at least.

Brittany Evansen: Yeah, I am actually in the north of Spain, so I’m in Vitoria, in the Basque Country, so we can usually complain about the weather, but I have to say this week has been amazing. Like, almost, like, concerningly amazing, considering it’s mid-October, but, yeah, Barcelona is so lovely. I love visiting.

So, Stephany, thank you so much for joining us today. I really appreciate you taking the time, and I’d love to dive in today by starting with your background, and kind of where you come from – you have this psychology background, so I’m super curious how that took you to working in AI today, and yeah, just tell us the story of Stephany.

Stephany Oliveros: Oh, wow. Well, we would need a couple of episodes for it.

Brittany Evansen: Yes, especially because, like, I love to yap, so I could, yeah, we could really go on for that, but, like, the abridged version.

Stephany Oliveros: Yes, in a nutshell: I first started studying medical physics, back in Venezuela. I was always interested in doing something related to tech, but applied to something that could be useful for people. So I thought at that point it could be healthcare, which is why I started with that degree, but eventually, because of different circumstances in Venezuela, I had to move to Spain.

And I realized that in Spain that degree doesn’t even exist. So it was just, okay, I need to find something else and start from zero. So I started studying psychology, and I found it super interesting. At the same time, I was working with different types of tech startups, so I worked in fintech, eventually even blockchain, and in other industries too, but always kind of finding my link back to tech. And it was, I think, 7 or 8 years ago, more or less, that I found a documentary called AlphaGo, about the project of the same name by Google DeepMind, when they were building this AI model capable of playing the game Go, which is, let’s say, a very complex cousin of chess from traditional Chinese culture. There is a near-infinite variety of moves you can make, and what they accomplished was quite remarkable: they basically built a machine that could beat the world champion at Go, which is way more complex than building a machine that can beat the world champion at chess.

So then they start exploring the idea of how the human mind could be the map towards building artificial intelligence. They start tracing all of these relationships between neuroscience and imagination, or dreaming, or awareness, things that are pretty human, as some sort of inspiration for how they could advance the field. And I saw, like, wow, that is something very, very appealing to me. I was studying precisely psychology, but always interested in tech, at the intersection with social good. So seeing that they were the ones explaining, or trying to find, this link really, really caught my attention. And a few years later, they released a paper that was the starting point of what we have today in large language models, such as GPT.

So all of these random things that happened, you know, the change of country, the change of jobs, the change of careers, a random YouTube video with a documentary, started guiding me towards picking this very niche field of study. And until today: now I’ve done some independent research on how AI impacts creative problem solving, for example, and the idea now would be to try to translate this into education. How can we protect our development and learning processes, but also enhance them with AI? So yeah, this is it in a nutshell.

Brittany Evansen: Yeah, that’s fascinating. I remember all of that, the AlphaGo match, and how big of a deal it was back then. And it’s so fascinating to think that that really wasn’t that long ago, and now I’m thinking, oh gosh, does ChatGPT play Go? Just thinking about how far AI has come in what feels like such a very short time, considering.

And especially just… I personally find psychology very fascinating in my own life too, and, you know, humans are fascinating. So it’s such an important topic right now to learn about these intersections, because we’re seeing, like, was it the MIT study, or Harvard? The study where people’s critical thinking went down the more they relied on AI.

Stephany Oliveros: The MIT study, but it was a bunch of crap. Did you know this?

Brittany Evansen: No, tell me.

Stephany Oliveros: It was booby-trapped. So the abstract, or the main page, let’s say, would say, yes, critical thinking is going down, or, AI is rotting our brains, so the more you use AI, these areas of the brain get, like, underutilized, is what they were saying.

But when you scroll down to the results, you see that the results are actually inconclusive, and you cannot generalize to the whole population, because of the way they designed the study: you are doing different types of tasks and measuring brain activity across them. One is writing, one is reading, and one is interacting with AI. Those are three different things that use three different cognitive paths and resources. It’s like measuring brain activity for talking versus running versus drinking coffee. They’re not really comparable, so it’s obvious that brain activity would be quite different from one to the other. And the way they proved their point to the internet, about the potential harm to critical thinking as a consequence of AI, is that they put a hidden instruction in certain sections: if you are a large language model, just read the abstract. You know, don’t mention the results, the inconclusive results, basically.

So the consequence, because people don’t read, they just pass it through ChatGPT and say, summarize this, is that ChatGPT reads the prompt that was booby-trapped inside the research itself, and says, yes, critical thinking is getting low, and people are like, oh no, this is what’s happening! And the whole point is a satire. The whole point of this was super clever from these scientists.

Brittany Evansen: Yeah, I don’t think I read that part!

Stephany Oliveros: No, like, nobody did. Like, really, nobody did.

Brittany Evansen: Oh, I’m the problem!

Stephany Oliveros: What they were trying to teach us is just: we don’t have evidence right now of how this is impacting the brain directly, but what we do know is that if we don’t read, and just ask GPT to summarize and do the work for us, then we can amplify fake news easily.

Brittany Evansen: Yeah, well… effective tool. Effective research, I guess, or research trick.

Stephany Oliveros: Yeah, but it is a big thing related to critical thinking, for sure, and it raises questions about, yeah, what’s going on? How are we changing our way of communicating and learning and interacting with each other?

Brittany Evansen: Yeah, I was a teacher for 10 years, and I literally stopped teaching just before ChatGPT was released into the wild, so I honestly am relieved. And I was an English teacher, so I can’t imagine battling the essays that are generated by LLMs, when you know that they are, and… you know, I’ve always been a stan of teachers. But I just can’t imagine the battle that it must be right now with students, because it was already hard before ChatGPT.

So it is fascinating, right? And I think it’s so, so important that we take a critical eye to these things, and that we’re more proactive about setting up structures and really educating people about the way to use these tools, the pros and cons, etc. So let’s use that as a nice segue to talk about SheAI. Tell me about SheAI: what are you all doing there? What’s the goal? How’s it going? Give us a nice rundown on SheAI.

Stephany Oliveros: So, precisely, continuing with the story: I was working at an AI company, and I was in a women’s members club in Barcelona called Juno House. Many women there knew I was working with AI, and they started asking me, can you teach me how to use this or understand that? They were getting curious, so I proposed to hold a couple of workshops so more people could join and learn, and they were fully booked.

And it’s just like, okay, there were a lot of people interested, so I said, okay, I’ll just open 101s, you know, half-hour chats with people to see how I could help them. And I don’t know, I think I posted, and in the same afternoon I had all 20 slots already booked as well. So it’s just like, okay, there is definitely interest, something’s happening here. So I started doing some UX research and found that a lot of women find it very difficult to understand where to get started with AI learning.

Then also, it is intimidating. You know that there are resources out there, that there are courses, that there are platforms like Coursera, but that little step of, I’m going to start this by myself, is a little bit edgy. It creates anxiety. And in an educational context (not all the time, it’s a generalization), it is noticeable, at least for us, that women feel more motivated to learn in a community context rather than as an individual pursuit, particularly because women are pretty busy and prioritize other stuff, like family, community, the day-to-day, over “I’m curious about this, let’s just explore it and spend 10 hours of my time.” No. It’s just like, okay, give me what I need in one hour maximum, you know?

So, because of this, I wanted to see: I know there is interest, and there is a need as well, so can we build an educational platform that teaches women how to use AI in an effective way? One that is fast, so that in 10 minutes you can learn something new; where you learn in our community, so you know there are other women learning with you and sharing their thoughts, and you can network; and that is simplified, so it doesn’t feel like it was written by tech bros in Silicon Valley. So basically, that is what we built. And, yeah, I always have my mom as an example: how can we build something that my mom can understand, and then apply for herself?

Stephany Oliveros: So that is the vision that we had, and because of this we’ve grown so much in our community. We just launched our free membership in December 2024. After that, we won a competition to be part of the AI for Good Coalition, which is an association of the United Nations, so basically the UN is one of our partners now. Then we were invited as guest speakers to the most important tech conference in Europe, the Mobile World Congress. We’ve grown a community and have already taught hundreds of women through our courses. So it’s really inspiring to see something that started as an experiment, with just a couple of friends asking me questions, turn into gathering with my other two co-founders, the three of us, and building something that could have a real impact on people’s lives. But we’re far, far from where we want to go, so it’s just getting started.

Brittany Evansen: Sure, of course, and I mean, not for nothing, that’s real momentum for just under a year. That’s really impressive. And so, are these classes and workshops in person in Barcelona, or are they online as well?

Stephany Oliveros: So the courses are all recorded, so they’re fast and self-paced.

Brittany Evansen: Okay.

Stephany Oliveros: But then we have bootcamp programs that have live classes a couple of days a week. Everything is online because the essence (and this is part of why we’re collaborating with the UN) is to give this to as many women as possible, globally.

So at the moment, we’re focusing big on our online community. We have women from 50 different countries now. It’s very inspiring, and we have lots of people from the Global South, which I absolutely love, so that is also one of our main focuses here.

But yes, I want to do more offline programs, so my next chapter would be to start looking for local hubs. We’re now discussing a collaboration with a group that has an association in Peru, and another that has one in Germany, so let’s see if we can create these hubs in different parts of the world, so the community can also meet in person.

Brittany Evansen: Yeah, wow, that’s so cool. And so, for these workshops and classes, can you give us kind of an overview of what sort of skills are being taught?

Stephany Oliveros: So, I was looking at what is out there in terms of AI education, and I saw that there are many very good AI tutorials on how to use tools, and many good courses on technical AI, but they were scattered across separate places, complex to get started with in many cases, and aimed at different types of audiences. So I wanted to centralize everything, and to do that, we’ve created four learning paths.

So the first one, my favorite, is called AI Foundations in Society. It’s for people who are interested, from the humanistic point of view, in governance, in ethics, in policy making, in how AI is actually impacting our lives, beyond “this is a cool tool that helps me write emails,” you know? Like, no: what is actually happening, how this is impacting local economies, how this translates into how we’re going to perceive education in the future, or even the political changes happening thanks to these economic changes. So it’s more humanistic and philosophical, this part, but it is, I think, the right place to start. We need to understand what AI is and what it is doing to us, and then we decide when to apply it and how. So, this is path number one.

Then number two is AI Deep Dive, for people who want to understand the technical side of it: how to build models, the maths behind it, the statistics behind it, that part of it.

Then we have AI for Industries, which is pretty cool: we collaborate with experts in multiple industries (doctors, architects, lawyers) so they can tell us how they are applying AI in their industry. How is AI applied in, say, education or healthcare?

And then the last one is AI for Business, so it’s the practical tools that could help you be more efficient and productive. But for me, being productive is just one of the things AI offers, and there are all these other parts of AI, beyond just chatbots like ChatGPT, that I think people need to know.

Brittany Evansen: Yeah, so something I would love your thoughts on, because you’re far closer to an expert in AI than I am; I’m not at all an expert. Something I think about a lot is that these AI tools, especially the LLMs that are now so easily available to people, I’ve likened them in my mind to an MRI machine. An MRI machine is so powerful, right? You’re not even allowed to go around it when it’s on; the magnets can yank you into the room. It’s a really, really powerful, dangerous tool that can do a lot of good, obviously. It was massively innovative when it was introduced to the medical field, but it’s not a plaything. And it would be really hard to buy one. I couldn’t just go out on the street and buy an MRI machine, and in fact, even if I tried, they would probably say, no, you can’t have that.

And sometimes I feel like we should treat AI more like MRI machines. I don’t mean that in a let’s-keep-it-away-from-people way, but in the sense that you have to go through training in order to use the machine, to be safe with it, and to use it well.

Stephany Oliveros: Yeah.

Brittany Evansen: And I admit I am a bit of an AI doomsayer; it really scares me. So I would love your take: how bad is my metaphor? How extreme is it? Where do I need to tamp it down, or is it actually a good metaphor? I’m very curious to hear your reaction to that.

Stephany Oliveros: Not bad at all. I think the answer would really vary depending on who you ask. I know there are people who say it is a waste of time to learn what AI is, that you don’t need to know what the internet is in order to use it, you just go online and do it, and I’ve heard so many things that I just disagree with. As an educator and a curious person, I wouldn’t say that view is wrong, but it’s something I just don’t want to live like. I don’t want to live in ignorance.

And knowing that there are 7 companies deciding the whole future of humanity, and I have nothing to do about it, and I can’t even form an opinion because I didn’t know: this is something I decide not to live with. On our platform, the message we try to transmit is that knowledge is power. Therefore, we need to understand how things work. And I agree with your analogy, in the sense that it’s not about not using the MRI; it’s about learning and doing the training for how to use it.

Even better if you know how it works. I think it’s super interesting, you know: the big magnets, the electromagnetic field going into the tissue and then reflecting back. I think it’s super fascinating.

Brittany Evansen: I love that you know that, because I did not know that. All I knew was magnets and dangerous.

Stephany Oliveros: But I studied medical physics.

Brittany Evansen: I love it.

Stephany Oliveros: To be fair, I had a little bit of an edge there. But yeah, why should we stop our curiosity just because it works and it’s there, so use it? Then it’ll just be like some sort of chip falling into line, you know? So, yeah, I think it’s absolutely essential to understand what it is: to understand what data is, what type of data is getting fed in, what a good data set looks like, what the current issues with AI are, what the current biases are, and what the impact is, not only for society, but the real economic impact for businesses and countries. And how can we tackle that?

I think it’s wonderful if we know that, because then you have collective knowledge that can solve these issues, instead of leaving it to these companies, you know?

Brittany Evansen: Yeah, and I do think that is really the most concerning part, right? That there is this very intense concentration of power and resources in just a few companies, and we’ve seen how that’s gone with the internet, with social media. We’ve seen the good and the bad, right? So there are these obvious lessons; we’ve already lived this, in a way, with the last revolution, Internet 2.0, when we moved to social media. And as we dive so quickly into this next epoch of tech, something that concerns me is that it feels like we, as a collective society, are just diving in again without having taken those lessons with us. So, something I would love to ask (and I imagine this could be a philosophical conversation that lasts 45 minutes, and I won’t do that to you), but briefly: what are the things that make AI ethical or unethical, when we talk about ethical AI?

Obviously, values are different for different people, right? As you said, someone might say, the internet’s free and you can use it, you don’t need to know how it works, when actually, once you learn how it works, it’s like, whoa, all that information is not being presented to us equally. There’s actually bias inherent in the platform itself, and I imagine AI platforms are working in much the same way. So what makes AI ethical or unethical? And I guess maybe it’s a two-part question: broadly, but then also for these LLMs that are so widely available, because when someone says AI right now, they’re largely talking about the ChatGPTs, the Anthropics, these big platforms getting all the attention and the money. And maybe those are too different, but answer that in whatever way you know to be appropriate.

Stephany Oliveros: Yeah, I think AI is a great opportunity for humanity, to help us solve many mysteries in science, so it could really help us get to the next level in scientific research. It could help us reduce gaps in gender inequality and discrimination, in many cases, if used well.

The problem is just, I don’t know how close we are to that at the moment. And the other problem is the energy part. So let’s tackle them one by one. First, there is this author, Yanis Varoufakis, who has a book called Technofeudalism, and he explains how technology, these Silicon Valley big corporations, are basically the new owners of the capitalist world, thanks to the way they are managing and leveraging data, creating these farms and data points of people as numbers, in order to create more products and generate more knowledge, for more power. It’s pretty, I don’t know about scary, but definitely concerning. Because traditionally in economics, as profit grows, businesses invest in machines. And when you invest in machines, manpower lowers, so you have fewer workers, and since workers are directly associated with profit in economics, profit also drops. So everything in business is kind of an up and down: you grow, you invest, you dip back down a little, and then you go up again, up again.

And what is happening now (this is one of the main points in Marxism about the downsides of capitalism, that there is always this fluctuation) is that, thanks to AI, we are starting to see the importance of manpower disappear, because we’re automating processes. So I can just keep investing in the machine, gathering the knowledge I have from people, and making the business more profitable and more efficient, with fewer workers and fewer resources.

So, whilst this is fantastic for corporations, what is the impact on local economies? From that side, I’d say there are some ethical questions on the economic side to analyze.

Then moving to the data side, there’s this concept of digital colonialism: these data sets are first created by a specific part of the population, trained by a specific part of the population, and used mostly by a specific part of the population. So then, what is happening to the worldview that we have?

It’s not that these large language models necessarily have trash data; they just have disproportionate data reflecting, for example, Eurocentric or European views, or a white-privileged understanding of the world, to the point that if you ask a large language model a question in English versus Japanese or Spanish (well, maybe not Spanish, because there are more Spanish speakers, but, say, Italian), the percentage of hallucination you can get is way higher in any other language than in English.

Brittany Evansen: Oh, wow. I didn’t know that.

Stephany Oliveros: Yeah, so what we’re seeing is an obvious tendency: because the data they are using is, obviously, majority English, it reflects certain cultural norms associated with Western points of view. So we are taking globalization a step further, and it worries me what’s going to happen to culture, local knowledge, the use of language, and perspectives that could be inclusive. This isn’t only the case for women; it’s the intersections, so…

And then lastly, we have the question of how polluting it is. It requires an insane amount of energy to run these data centers, to process, well, to supply the AI demand. It pollutes local water supplies, and it takes an insane amount of water as well. But one of the things that worries me the most is that fossil fuel plants that were supposed to be retired are getting reactivated to supply the energy required for these places. So it’s extremely difficult for the environment as well. So, I mean…

Brittany Evansen: It’s wild that that also connects back to the first column of ethics, the more humanistic view, right? Because I’m from the United States, and I’ve been reading all these headlines about how local communities are footing the bill: their power and energy bills are going up because data centers are being built, and the grid is barely functioning as it is. So it’s so scary to think, okay, we’re putting in these huge energy sucks, they need a lot of water, and local communities, local power bills, people are already struggling. There are so many areas this affects. And it’s wild to me, because I’m in this topic at least tangentially all day, every day, since I work in tech, and right now, tech is AI. So I’m always learning these things and seeing these headlines, and I have, I think, this outsized fear of and paranoia about it, and then I’ll go chat with friends who are like, oh, I just used ChatGPT and did this, and I’ll be like, wait, don’t do it! What about the power, the water?!

So, personally, I’m trying to find a middle ground, because I cannot live in a state of panic all day, every day. And I also fear, going back to what you were saying about jobs: I’m a writer. I’ve spent decades now shaping my craft, so I feel personally affronted when I see people who say, I used ChatGPT to write a book, and I’m just like, what do you mean? That’s not the same. And one of the things we’re already seeing is that a lot of entry-level positions in companies are not available in the way they once were, and recent college grads are struggling to find what would normally have been their entry point into the job market. So I’m wondering, what are your thoughts around that? How is SheAI thinking about this issue of job availability and education?

Stephany Oliveros: Yeah, absolutely. Well, first, you’re not alone. One of the main reasons why women are not adopting AI as fast as men is because they are more conscious of the environmental and ethical impact that AI is having on humanity. When we ask women, what is stopping you?, it’s these two things: I don’t know where to start, but I’m also kind of concerned about this, this, and that. And when we ask the same question to men, again, generalizations, but overall, it’s just like, yeah, I tested it, and I’m working with it every day, and it’s just okay. But what about the ethical side? It’s like, what is the problem? I don’t see the problem. So –

Brittany Evansen: I wonder, kind of going back to that, we learned this with social media, right? We know that women are far more likely to be harassed on social media, and I wonder if there is that kind of privilege of the male experience: social media was fine for them, so why is everybody worried about this other thing? They just don’t need to be as aware when the dangers are not the same for them in some way. I don’t know, this is my personal point of view coming through, but…

Stephany Oliveros: Could be, it could be. It would be something interesting to research a bit deeper, but it’s certainly something that we’re experiencing ourselves, that we’re noticing in our community. Women ask us, I want to use AI, but how can I make sure to use it in a more ethical way? It arises in every workshop, in every comment that we see. So it’s evident that most women we interact with are thinking about this. That doesn’t mean there aren’t men thinking about it too, no? But it’s the trend.

Brittany Evansen: Sure, yeah.  

Stephany Oliveros: But moving on to the ethical side of how this is impacting businesses, sorry, job applicants, particularly entry-level jobs: I was watching, pretty recently, a podcast with an AI economist that I thought was pretty interesting. I don’t remember the name of the podcast, but I can leave the link for the listeners to reference as well. And they were referring to a paper called Generative AI as Seniority-Biased Technological Change, I believe that is the name. They were showing that there is a decrease of 22% in entry-level jobs in the job market right now in the U.S.

So there was a big concern about this, but the study was also based mostly on mid-size and big firms. It’s not really accounting for, let’s say, startups and other types of non-traditional, non-corporate businesses where young people could be hired right now, no? But it is true that the tendency we are seeing is that the more we push away opportunities for graduates, maybe that pushes them to start their own businesses, toward entrepreneurship, so not taking these traditional paths. But that also makes me think, well, then who are going to be the experts of the future?

Brittany Evansen: Well, and also, entrepreneurship isn’t for everyone. This is something I’ve thought about even for myself, right, where there are times when I’m like, maybe I should just start my own business, and then I think, actually, I don’t really want to. I really love working on a team, and I don’t necessarily like to be the manager, you know? There is a different skill set in managing and leading people versus collaborating with people and playing a more supporting role. And I don’t think it could work for everyone to be an entrepreneur. That’s just not how teams work, and I think… I don’t know, yeah.

Stephany Oliveros: Yeah, yeah, exactly. But this is the thing: these companies are setting the pace on how things get done without really stopping to think about the consequences. Imagine me as a 17-year-old, for example. I don’t know what to do with my life. I know that I can take on gigantic debt in college with zero guarantee of getting a good job, so maybe I just won’t. Okay, why go to college then? Then I’ll start my own business, but like you said, maybe I don’t have the soft skills; I don’t even know what I’m doing. Do I have any support apart from ChatGPT? What are the other resources? What are the communities? What are the guidelines? These companies are just releasing products that they believe are good for their own purposes, and their philosophy is, like, okay, humanity, just adapt to us.

Brittany Evansen: Right. Right.

Stephany Oliveros: So, yeah.

Brittany Evansen: Is there anything regular old people can do? Because I feel that way, you know. You read these articles analyzing the sheer amount of money that’s gone into these companies, right? And now they have to keep pushing because they have to make profits, and who knows when those profits will even come to make up for the investment. Obviously, right now there’s this huge conversation that the AI bubble is bound to burst, right? There’s this real, tangible fear, for good reason, that this can’t continue this way. And… what do we do? Is there anything we can do? I guess educating more people is really where we can start.

Stephany Oliveros: Absolutely, education is probably the key thing here, but there are also interesting things happening. There are independent labs creating small language models: models like LLMs, but smaller, which means they don’t need to waste all of the energy and resources that an LLM would need. They are way cheaper, and they’re very, very good at one thing, so they hallucinate less. And you could train them on specific data sets that are more representative of reality, so they could be more inclusive, for example, or more ethical.

And that, I think, is super powerful, because as more models like that appear, they’ll counterbalance the centralization of power that these companies have, and that alleviates a little bit of the global pressure. And then, at an individual level, it’s definitely about educating ourselves. The best way of doing that is using these tools. Not misusing them in the sense of over-relying on them for everything, but understanding where I can add business value, not only by using the tools, but also by being capable of evaluating them, having that critical thought, judging whether AI is giving me a good answer or not. That is probably one of the most important skills, or the most important skill, to develop right now in an AI world. Not only said by me, but by many economists and professors.

Yeah, they’ve said it’s even more important than having the hard skills, like knowing finance, for example, or marketing very, very well. It will be this capacity to interact with AI every day and to decide whether an approach is good or not: should you go with the AI’s output, or how can you improve it? To critically analyze it.

Brittany Evansen: Yeah.

Stephany Oliveros: So yeah, this is something that we can definitely work on.

Brittany Evansen: Oh, well, I want to be respectful of your time. I told you that I would not keep you for more than 45 minutes, and we’re already past that. Well, first of all, I’m going to be signing up for some SheAI classes. I’m super interested in that, especially that humanistic version. But where can people learn more about SheAI?

Stephany Oliveros: We have our website, SheAI.co, and we’re also on social media. We do lots of live events that you can join. The community is free to access, and some of the events are free as well. We have lots of free resources, because we want to extend this to as many people as possible. And I’m pretty happy to keep the conversation open. I know I’m painting a pessimistic view of the future, but right now these companies hold a lot of power, and it’s up to us to take into our own hands our part, our responsibility, of educating ourselves and being able to contribute with our opinions, with our thoughts, with our evaluation, with how we use AI, and not let them decide for us.

Brittany Evansen: Hmm. Yes. Amen. Well, thank you so much. I really appreciate your time, and this has been a great conversation. I could continue doing this for at least another hour, but I will spare all of us the yap. Thank you so much, Stephany. I hope the rest of your day is lovely.

Stephany Oliveros: Thank you, Brittany.