Ep. 14 - Responsible AI

August 23, 2023

In this episode, Sydney talks with Nicole Janssen, Co-Founder and Co-CEO of AltaML, a leading developer of AI-powered solutions. Nicole stresses the importance of responsible AI development, discussing how to integrate ethical considerations into projects and the need for collaboration across jurisdictions. She envisions a future where AI is used for social good in healthcare, the environment, and beyond. The episode encourages listeners to embrace the potential of AI while being mindful of its responsible implementation.

Transcript

00:13 SYDNEY JOHNSON, HOST:

Welcome to this episode of Responsible Disruption. My name is Sydney Johnson, Design Lead at J5 Design and the Social Impact Lab, and your host for this episode. In today's episode, we will begin looking at current artificial intelligence technologies and getting some discussion going around the ethical use of these powerful new tools. There's a lot of fear and hope in the world right now for what these technologies can do. Our goal here today is to demystify AI by learning about what is happening right now and where it's going. But first, it is my pleasure to introduce our guest for today, Nicole Janssen. Nicole is a highly accomplished entrepreneur and leader in the field of artificial intelligence. As Co-Founder and Co-CEO of AltaML, a leading developer of AI-powered solutions, she has driven the company's success in diverse sectors such as agriculture, finance, health, and energy. Nicole is dedicated to responsible AI (RAI) and has been recognized by the Responsible AI Institute as a thought leader in its implementation. She has received numerous accolades, including being named one of North America's Top 25 Women of Influence. Nicole is also a Director at Edmonton Unlimited. Thank you so much for joining me, Nicole.

01:21 NICOLE JANSSEN, GUEST:

Thanks for having me.

01:22 SYDNEY: Awesome! So let's go back to basics for a minute. How did you find yourself working in AI, and what was the genesis of AltaML?

01:31 NICOLE: Well, it was a bit of an accident, to be honest.

01:34 SYDNEY: All the best stories are. [Laughs]

01:36 NICOLE: Yeah, my background is not in technology. I was originally in real estate and international business, and my husband sold his business in digital media, and we decided to come together and form a digital media company, just the two of us. As we were in that space, a few years in, we really started to hear more and more about AI and how it was going to disrupt digital media. And so we decided that we needed to get ahead of that, and thought we would just hire a few data scientists, put them to work, and say, solve this problem for us. At the time we called it algorithmic content generation. We envisioned having all of these digital media websites where all of the content would be generated by AI. Well, what we didn't realize is we were trying to build ChatGPT with a fraction of the budget that it would have taken and a team of four, and it turns out that didn't work. We were hundreds of millions of dollars short, and definitely a much smaller team than was needed. But through that process we really got to see the opportunity for AI, the emergence of that market, and listen to other entrepreneurs talk about their journey and their challenges with AI. And so that is how AltaML was born and how I suddenly became a Co-CEO of an AI company.

03:07 SYDNEY: Oh, very cool. Well, I mean, every journey is the right journey. But I love that you got that start. I think everyone's going to start having their own dreams about building the next ChatGPT or whatever before it actually becomes a thing, right? So that makes sense to me. What's something that you're really proud of about AltaML, something you'd want to share, or something you come back to when you talk about it?

03:35 NICOLE: I think what I'm most proud of is the team that we've built. When I look at the age of our company, we're five years old. We are in a relatively new industry, in a field that attracts primarily individuals who were originally from other countries. We've only ever hired two Canadian data scientists because they simply don't exist, so we have this incredibly diverse team in this emerging market, in a space where everybody's still trying to figure out exactly how to do this. There is no one right way yet. Everybody's gritty, everybody's agile, and we all have a lot of fun together. So that's what gives me the most pride: just this amazing team that we've built and that we've got here, all working to elevate human potential.

04:36 SYDNEY: Well, that's a beautiful answer. OK, getting into the meat a little bit. Ethical concerns around AI have become increasingly prominent. People are talking about it. I wonder if you can tell us a little bit about some of the main ethical issues regarding AI and why they're important?

04:57 NICOLE: I think the challenge with responsible AI and ethical use of AI is that most people don't understand just how big that topic really is. There's so much that relates to it. It's not just data bias, which is the thing that's in the news a lot, so a lot of people understand data bias, and that absolutely is a concern. But how does that relate to fairness? And then what's the transparency around what's being built? Is the model operating in a black box, or is there some explainability to how it's working? Who's responsible for ensuring that this is being used ethically and that bias is not incorporated in the model? Is that the software developer who helped? Is that the machine learning developer who helped? Or is that the company that built it, or the company you worked with to build it, who's now deploying it? Who's taking that responsibility? Who's accountable? And then privacy, obviously. There are large issues, particularly around personally identifiable information and the privacy around that. Then safety and security of the data and the models, and sustainability. Are we building something that helps people and the planet, or are we using AI for bad? And then how are we empowering people? Because if you expect one person in an organization to be responsible for the responsible AI aspect, the problem is that they don't bring a diverse background. What one person might recognize as data bias, another person wouldn't, because of their life experience. It's not because they themselves are biased; it's simply not something they've experienced or that would come to mind for them. And so how do you empower everyone, not only the team building it but the team using it, to come forward if there are any concerns? AI really is in its first inning of development, and we really have to get the ethics of it right or we will lose the incredible opportunity that exists to use AI for society. There's already built-in fear, with media and movies and all of that; there's a lot of concern around AI. And if we don't get the ethics right at the onset, AI won't have a solid future.

07:46 SYDNEY: Yeah, I think that's a really interesting way to put it. We're losing the opportunity if we don't do this, as opposed to it just not happening and things staying the way they were before. There are so many other futures that could be if we don't go down this path correctly. Just for some of our listeners who may be hearing these terms for the first time, can you break down a little bit what bias, transparency, and explainability are? How are those things different from each other?

08:16 NICOLE: You bet. One of the easiest, well-documented use cases of bias is, for example, when Google was looking for an AI model that would identify which employees or potential employees would be the best to hire. They put in all of the data from the past as to who these people were, all of their resume statistics, and then whether they were successful or not with Google. Well, number one is, what measures of success are you using here to say someone is successful? That's a tricky part. But the other part was they took gender out of the data set, and what they forgot is that there are a lot of things on a resume that indicate gender. If you're part of a women in technology group, you likely identify as a woman. So that ended up creating a model that was biased towards males, because there had been bias in the hiring of males. There were more successful males because there were more males, and the model could pick up whether someone was male or female based off of what was on their resume. What happens then is that the model just perpetuates that bias, because it indicates you should hire this individual, this man, and then that man is successful. Oh, perfect, men again. So successful. And the bias continues to compound as that model grows. That is what I'm referring to with bias. The problem with AI is that when you put bias into models, it just amplifies, whereas a human being has a certain level of bias but isn't amplifying it each time they do something. That is a risk specific to AI around the possibility of bias.
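
To make the proxy effect Nicole describes concrete, here is a minimal, purely synthetic sketch: a model that never sees a gender column can still learn to penalize women through a correlated feature. The data, features, and numbers are all invented for illustration; this is not any real hiring system.

```python
# Sketch: a protected attribute leaks back into a model via a proxy feature,
# even though the attribute itself was dropped from the training data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Protected attribute: 1 = woman, 0 = man. The model never sees this column.
is_woman = rng.integers(0, 2, n)

# Proxy feature: membership in a hypothetical "women in technology" group,
# strongly correlated with gender, as in the resume example above.
in_wit_group = ((is_woman == 1) & (rng.random(n) < 0.7)).astype(int)

# Years of experience: identical distribution for both groups.
experience = rng.normal(5, 2, n)

# Historical "hired" labels reflect past human bias in favor of men.
hired = (0.3 * experience - 1.0 * is_woman + rng.normal(0, 1, n)) > 1.0

# Train only on features that never mention gender.
X = np.column_stack([experience, in_wit_group])
model = LogisticRegression().fit(X, hired)

# The model still scores women lower, via the proxy feature.
scores = model.predict_proba(X)[:, 1]
print("mean hire score, men:  ", round(scores[is_woman == 0].mean(), 3))
print("mean hire score, women:", round(scores[is_woman == 1].mean(), 3))
print("proxy coefficient:", round(model.coef_[0][1], 3))  # negative: membership penalized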

10:34 SYDNEY: So you're saying each time it is learning, it's increasing... it doubles down on that bias even more.

10:42 NICOLE: Yes, yes, absolutely. Because machine learning... that's the whole difference between it and, say, traditional software: it's constantly learning from each new decision it makes or prediction it provides. It then sees, oh, that was a success. I was right. Check, check. I'm good at that. And if there's bias in there, it just gets worse and worse.
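
As a rough illustration of that feedback loop, here is a toy simulation (invented numbers, with a deliberately crude stand-in for retraining) in which a model's penalty against one group grows each time it is retrained on its own selections:

```python
# Sketch: bias amplification when a model is retrained on its own decisions.
import numpy as np

rng = np.random.default_rng(1)

def simulate(rounds=5, n=10000, initial_penalty=0.1):
    group = rng.integers(0, 2, n)   # 0 = group A, 1 = group B
    skill = rng.normal(0.0, 1.0, n) # true ability, identical across groups
    penalty = initial_penalty       # the model's learned bias against group B
    for r in range(rounds):
        score = skill - penalty * group
        selected = score > np.quantile(score, 0.8)   # "hire" the top 20%
        rate_a = selected[group == 0].mean()
        rate_b = selected[group == 1].mean()
        print(f"round {r}: selection rate A={rate_a:.3f}, B={rate_b:.3f}")
        # Crude stand-in for retraining on the model's own selections:
        # group B is underrepresented in the "successful" pool, so the
        # next model's penalty against group B grows instead of staying flat.
        penalty += rate_a - rate_b

simulate()
```

Each round the selection gap widens, which is exactly the "it just gets worse and worse" dynamic: a human's bias stays roughly constant, while a model retrained on its own outputs compounds it.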

11:11 SYDNEY: Yeah, that makes sense.

11:13 NICOLE: And then transparency. For example, we know that Facebook uses AI algorithms to determine the content that's coming forward to each individual. But most of us who use Facebook don't know what is being used to determine why things are being put in front of us. So there is a lack of transparency around how my data as a user is being used to generate my future content. Transparency is about: what is the data being used for, and how is it being used? And that relates to the explainability part: how do we explain how we got to this answer? One of the projects we've worked on is in investment management, where we work with investment managers to help them identify signals in the market to make investment decisions. Now, an investment manager often has decades of experience that they're relying on, and all of a sudden, if they see an AI model that says you should do this and it really goes against their gut, it's really hard to convince them to go ahead and do it, because they're leaning on all of that experience and feeling, I don't think this is right. But if the model says, we predict you should do this and here is why, then that is giving an explainability component to the AI model. And it really impacts not only the ethics but also the adoption rate, because people can get behind it: OK, I didn't notice that piece. That does make sense. I am happy to make that decision to go ahead and sell that investment or buy that investment.
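
For a sense of what a "here is why" output can look like, here is a minimal sketch using a linear model, where each feature's contribution to a prediction can be read off directly. The feature names and data are invented, and real systems typically use richer attribution methods (SHAP, for example), but the idea is the same:

```python
# Sketch: per-prediction explainability for a linear model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
feature_names = ["earnings_momentum", "volatility", "sector_flow"]  # invented names

X = rng.normal(size=(500, 3))
# Synthetic "good investment" label, driven mostly by the first feature.
y = (1.5 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(0, 1, 500)) > 0

model = LogisticRegression().fit(X, y)

def explain(x):
    """Print each feature's additive contribution to the model's logit."""
    contributions = model.coef_[0] * x
    for name, c in sorted(zip(feature_names, contributions), key=lambda t: -abs(t[1])):
        print(f"  {name:20s} {c:+.2f}")

x_new = X[0]
decision = model.predict(x_new.reshape(1, -1))[0]
print("recommendation:", "buy" if decision else "pass")
print("because:")
explain(x_new)
```

The printed breakdown is the "we predict you should do this, and here is why" that lets an experienced investment manager check the model's reasoning against their own.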

13:09 SYDNEY: Right. Yeah, that makes a lot of sense. Looking at the scope of all these potential missteps that we could make in terms of the ethics of AI, is there one that stands out to you as being a bit more worrisome than the rest? Or that you think needs more attention paid to it? Or are they all similar in your mind?

13:30 NICOLE: I don't know if I could pick one, honestly; they're all so tied to each other. Often you can't have one without many of the others. You can't have privacy without security of data. I think they're really all just part of one thing. It's just so important that everyone understands how big this topic is, that it's not just data bias.

14:06 SYDNEY: How do you ensure that those ethical considerations are integrated into your development processes at AltaML?

14:16 NICOLE: We start from day one, before we've even signed a contract with a client, talking about responsible AI. It's important that everyone from the executive down to the end user understands responsible AI and is aware of it, because responsible AI, like I said, is not deployed or implemented by one person. Everyone involved needs to have it front of mind, so it's just a part of everything we do, and we educate all the way along, at all levels. We also have many stages throughout the life cycle of a project where we stop to take a step back and really analyze the responsible AI aspects of the project, particularly the impacts on the end user and who the stakeholders are that we should be taking a look at. So it's continuous, and it doesn't ever end. What's also really important to understand is that when you build an AI model, you don't walk away from it, because it continues to learn, and potentially some of the data that gets added to it can be biased. Then all of a sudden, you've got a biased model. You have to continually work with these models and monitor them to ensure the practices of responsible AI. And because this is such a developing field, what is the perfect right way to do responsible AI today won't be the right way a year from now, and so all of those new practices constantly have to be implemented.

16:03 SYDNEY: That's very interesting. How do you stay on top of those new practices? Because I imagine if it's going to be different a year from now, it was also different a year ago.

16:11 NICOLE: Yes, we have a dedicated team within the organization who works on it, but we also work with external organizations whose sole focus is responsible AI. The Responsible AI Institute is one of them; we partner with them as members, and they're one of our key partners in understanding what the latest best practices are.

16:35 SYDNEY: Right, that makes sense. I imagine there are so many stakeholders and multiple projects, and it sounds to me like a lot of this work is actually education, making sure that everyone in that system understands that they have a part to play. Do you ever find yourself in a situation where that education piece is difficult, or people don't really want to get on board, or have a hard time getting on board with responsible AI?

17:06 NICOLE: Well, yes and no, because sometimes what I hear is, well, this just adds time and money to getting this deployed, and all I'm focused on is getting something deployed that's going to return on my investment. But what I try to explain is that there are business reasons, not just "because we should" reasons, that responsible AI needs to be implemented. One is you're not going to keep talent in your organization if anybody gets wind that you're developing AI in a way that's not ethical. The brand issues you will face if it comes out that you are not deploying AI in an ethical way will be massive and could take down a company. But also, if you do get the AI wrong and these models are providing predictions that are incorrect or biased, imagine the financial impact on the business. So if I get any pushback, I sort of turn it around and say, well, wait a minute, there are actually real, legitimate business reasons that this needs to be a top priority. And that usually squashes that conversation.

18:35 SYDNEY: Yeah, it's not something that you can really decouple from the work, honestly, because you're not doing the job of creating the model if it's not responsible. So looking ahead, what do you think should be the top priorities for, say, policymakers or industry leaders, or even society as a whole, when it comes to the responsible development of AI?

18:56 NICOLE: I think one of the key aspects is collaboration across jurisdictions. This can't be a Canadian thing, where we have a Canadian way of doing responsible AI and then there's an American way of doing responsible AI, because, number one, AI companies generally don't sell within one jurisdiction, and what we don't want is for the regulations or the ethical pieces that we're trying to get right to also stifle innovation. There's a healthy balance in there that we have to figure out, but in order to do that, you can't have different rules across every different country. Right now, if you look at the US, states are putting out their own regulations, state by state, that don't correlate. So you can't do business in certain states, and I'm not saying those products are unethical; in most cases the regulations are just a little bit severe. The potential for fines, et cetera, is so high that businesses won't do business there, even if they have an ethical product. So now you've got all of these different pieces going on. Canada is working on developing its own. The European Union is going in its own direction. And what I'm saying is, can we get everybody at the same table, knowing that we are not going to get it 100% right the first time? We need to get started, do the best we can today, and continue to evolve it over time, because by the time any regulation actually gets implemented, it's outdated. The technology is changing so quickly that we really need a framework that allows the technology to evolve within it, not one narrowing in on specific use cases of AI that will be outdated by the time it becomes reality.

21:17 SYDNEY: This is where it becomes clear that our systems aren't set up for this kind of disruption, I suppose. It's moving so fast that whatever policies are written almost don't matter, because they become irrelevant.

21:34 NICOLE: Well, AI, just like every technology, will have bad actors that, regardless of the regulation or the policies or anything else, will continue to be bad actors. But the best answer to that is to build AI that identifies the bad actors. If we are investing dollars in that technology and in research around that technology, I think that's a real solid move, rather than expecting that bad actors will be deterred by regulation. Where regulation can help is those edge cases where individuals maybe don't know or don't understand, and the regulation helps steer them in the right direction. But let's also put some dollars down on the research to find those bad actors who know exactly what they're doing.

22:30 SYDNEY: Oh, that's an interesting take. Getting away from the doom and gloom a little bit. How do you think AI can be used for social good now and in the future?

22:42 NICOLE: I believe that our current healthcare challenges, our environmental issues, our carbon emissions, our food availability are all going to be solved by AI. That technology, or certainly the capability to build it, exists today. This is not some technology that's decades in the future. It just needs to be built, and most of the challenges come from people's fear around AI, the unknown, this new technology they don't fully understand and are afraid to embrace. But once we can move past that fear, all of those things can be solved by AI. And this is a bigger concept and a bit existential, but I always think of AI in the future as being our co-pilot, not only in our work life but in our home life. I foresee a future where maybe the whole nine-to-five, five days a week isn't necessary, because we all have AI co-pilots throughout different aspects of our lives, and maybe we work 20 hours a week to achieve the same thing. And that can go across society. I know that's a bigger concept, but that is a potential for AI: it can solve this work-life balance problem that is every third headline in every magazine.

24:24 SYDNEY: Yeah, I think that's the point of innovation in general: to free us up to live our lives, as opposed to being in service to something else. I think that's a really interesting vision. What advice would you give people if they're looking to get into this or be involved with it, but also just as citizens, when it comes to AI and their consumption of it?

24:59 NICOLE: I think one piece of it is to lean on experts, follow experts, listen to experts. Don't just listen to the headlines, because the majority of them are very fear-based. They're sensational. Find some key groups that are experts in that area, whether it's responsible AI or particular use cases and opportunities for AI, follow those individuals, and start small. One of my favorite books, which we give to most of the people who join us who aren't technical, is Prediction Machines. It really just explains the potential for machine learning and AI in a simple way, so you're not expected to be a math whiz or a computer scientist to understand it.

26:01 SYDNEY: Amazing. Well, that's great! We'll make sure we put that in the show notes for sure. Do you think that AI can be ethical? Will we achieve a future in which that's true, or will it always be subject to biases, where, to your point earlier, we always just have to maintain it?

26:19 NICOLE: The majority of data doesn't have the opportunity for bias. Something either happened or it did not happen. For example, we do a lot of work with wildfires and predicting wildfires. A wildfire either happened or it did not happen. There is no bias in that equation. The data that we're using to build that model is simply: this was the weather, this was the soil condition, these were all the components, and then there was a fire or there was not a fire. So there's a lot of AI that doesn't even have the opportunity for bias, unless you're not using a robust enough data set; maybe you're looking at one fire season, and that's not robust enough to give you a really accurate model. Then, when you look at the cases where there is potential bias, like we talked about earlier, you do have to continually monitor them. And you can easily measure whether the model is more or less biased than a human being had been in the past, because you can go back and run the model on past data. So when we say, can it ever be ethical? Well, what is ethical? Is ethical being better than a human being? So if we can be 100% better than a human being, then I would say that would be ethical. In most cases, I think humans are generally ethical; it's just that there's bias in their background that's often not recognized. So in fact, I think AI, if monitored properly, will probably be more ethical than human beings.
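
The backtest Nicole mentions can be surprisingly simple in outline: run the model on past cases and compare its selection rates by group to the historical human decisions on the same cases. Here is a minimal sketch with an invented table; the column names, groups, and numbers are purely illustrative.

```python
# Sketch: measuring whether a model is more or less biased than the
# historical human baseline, using past decisions as the benchmark.
import pandas as pd

past = pd.DataFrame({
    "group":          ["A", "A", "A", "A", "B", "B", "B", "B"],
    "human_decision": [1,   1,   1,   0,   1,   0,   0,   0],
    "model_decision": [1,   1,   0,   0,   1,   1,   0,   0],
})

def selection_gap(df, col):
    """Difference in positive-decision rate between group A and group B."""
    rates = df.groupby("group")[col].mean()
    return rates["A"] - rates["B"]

print("human selection gap:", selection_gap(past, "human_decision"))  # 0.5
print("model selection gap:", selection_gap(past, "model_decision"))  # 0.0
# A smaller gap for the model than for the historical human decisions
# suggests the model is less biased than the human baseline on this data.
```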

28:22 SYDNEY: Wow, that's an inspiring future, and a very poignant way to say it. We're not starting from a place of no bias. We are inherently biased, and our systems and structures are, and so it would be a beautiful future if we could remove some of that through AI, instead of AI being something that introduces bias or makes it bigger. Is there something in the future... we've talked about AI potentially affecting work-life balance as well as solving some of these really big world problems. Is there something that you're particularly excited about that's coming up in the nearer future, as it relates to AI, that you're just chomping at the bit for, can't wait for it to happen?

29:07 NICOLE: One of the things I'm most excited about is the potential around health and the preventative medicine opportunities that AI presents, where, given the right data, we could predict if someone is likely to become diabetic or likely to have mental health issues. Now, there are a lot of data challenges behind that, and protecting privacy, et cetera, and if you do have that prediction, what do you do with it and how do you approach it? But I do think that we'll be able to work through that in a way that society is good with. We just have to walk ourselves down that path, because we are very much in a place where people are afraid to have their data shared. But if I think about what I know AI could do for me with my health data, then I would be totally open to sharing it if I knew that model was very secure and that the data was not going to be shared elsewhere. So I believe we'll get there; the data issues and privacy issues will be tackled. What I'm most excited about is just preventative medicine and the opportunities that we have there.

30:37 SYDNEY: Yeah, that's very exciting, and it comes back to some of those transparency and explainability pieces that just really need to be in place. Sounds like we all need a bit more focus on the possibilities versus the barriers. If there's one thing that you would want listeners to take away or remember about responsible AI, or to have learned from this conversation, is there anything you want to leave them with?

31:08 NICOLE: I think understanding that responsible AI is not one and done. It's a journey that will continually be evolving and needs to continually be considered. But don't let the robustness of that scare you away from the opportunities of AI. There are experts in responsible AI who can support companies, organizations, and individuals in pursuing AI. Lean on those people and those organizations to help with that. Don't let it scare you away from realizing the potential.

31:48 SYDNEY: Awesome! Nicole, I really want to thank you for your time today and for hopping on the podcast with me. I found that very inspiring; I'm going to start making some backup career plans right after I get off this call. It's really good to know there are people like yourself out there, and companies like AltaML, engaging in this conversation and being really thoughtful about responsible AI. And thanks to our listeners for choosing to spend time with us. Until next time.

[Outro music]

That's all for today's episode of Responsible Disruption. Thank you for tuning in and we hope you found the conversation valuable. If you did, don't forget to follow, rate, and share wherever you get your podcasts. To stay up to date on future episodes and show notes, visit our website at thesocialimpactlab.com or follow us on social media. And until next time, keep on designing a better world.