Paramita: Hello and welcome to PwC Luxembourg TechTalk. Starting this week, we'll have a series of mini episodes focusing on AI and data. Today I talk with Andreas Braun, Senior Manager, about AI and ethics.
Paramita: Hello Andreas.
Andreas: Hello Paramita.
Paramita: I'm so happy to have you here because you remember we were supposed to do this podcast before and somehow it didn't happen.
Andreas: But good things often take a while so I'm glad to be here now.
Paramita: Exactly. I'm so happy to have you here. And we will talk about the always interesting artificial intelligence. We did talk about it a little bit in a previous episode, but there we essentially spoke about how AI is being hyped as something that will take over our jobs. And we all know that there are so many aspects to AI that we could have show after show.
Andreas: We could talk for days.
Paramita: Yes, exactly. But today I'm going to... I wanted to... Because I read an article recently in the Guardian about research that New York University did on diversity in AI, and how the systems, they are not, how can I say, they're not diverse enough... Does what I'm saying even make sense?
Andreas: It makes total sense. Often when we talk about AI, we feed it with a lot of data, and of course we have to make sure that this data is as diverse as the world. Right now it is still a challenge to get these kinds of datasets and to use them appropriately, so we have to get better at that in the future, which is what a lot of researchers are working on, but also a lot of companies, governments, and us.
Paramita: Can you explain what you mean by that? In what sense does the system need to be diverse?
Andreas: For example, there was a case in business a few weeks ago. Amazon tried to automate, or augment with AI, their hiring process by automatically analysing CVs for their software engineering positions. They fed the CVs they had into those systems, and these were predominantly submitted by male applicants. So the AI was trained in a way that discriminated a little bit against women, because it was trained predominantly with male data. This led to them figuring out that this was a problem they had to work on. In the end, they are not using this system right now; it was an experiment to help their HR department. But they need to get a much better and more diverse dataset, with more CVs from female software engineers, and this will also help their AI make better decisions.
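[Editor's note: a minimal, hypothetical sketch of how skewed training data produces a biased screening model, along the lines Andreas describes. The toy "CV" dataset and hiring labels below are invented purely for illustration, this is not Amazon's system or data, and the sketch assumes scikit-learn 1.0+.]

```python
# Toy illustration: a classifier trained on skewed historical hiring data
# picks up gendered proxy terms, even though "gender" is never a feature.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical historical CVs: hired examples are mostly male-coded,
# mirroring a skewed applicant pool.
cvs = [
    "software engineer java backend men's soccer team",
    "software engineer python distributed systems",
    "senior developer c++ men's chess club",
    "developer javascript react men's rugby team",
    "software engineer python women's chess club captain",
    "developer java women's coding society",
]
hired = [1, 1, 1, 1, 0, 0]  # skewed outcomes mirror the skewed history

vec = CountVectorizer()
X = vec.fit_transform(cvs)
model = LogisticRegression().fit(X, hired)

# Inspect learned weights: terms correlated with gender get pushed
# negative or positive purely because of the biased training history.
for term, weight in sorted(zip(vec.get_feature_names_out(), model.coef_[0]),
                           key=lambda tw: tw[1]):
    print(f"{term:>12s}  {weight:+.2f}")
```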
Paramita: Yes. As we are talking about diversity, it automatically brings up, I think, the question of ethics. So when we say ethical AI, what are we implying by that?
Andreas: We are implying that any artificial intelligence application will also have to follow the ethical rules that we have set up as a society. And this is a pretty old concept, even in science fiction. You may have heard of the Three Laws of Robotics, formulated by the author Isaac Asimov. The first law is that a robot is not allowed to harm humans. The second is that a robot has to obey humans, unless obeying would mean harming other humans. And the third is that a robot has to preserve its own existence, unless that conflicts with the first two laws. So this is essentially a very first set of ethical guidelines, one that echoes almost biblical guidelines like 'thou shalt not kill'.
Paramita: It's like the three commandments of robotics.
Andreas: Exactly. And he formulated these, I think, sometime in the 1950s, when he was writing a lot of books about robotics. Of course these guidelines have developed and been refined since. In the last few years we've had a lot of commissions working on ethical guidelines for autonomous cars, for example. There was a famous case in Germany, where the so-called Dobrindt commission wrote many pages about the old, famous, let's say classic, ethical dilemmas. If you have to decide whether to swerve to the left to avoid an accident, but in doing so you would kill somebody else, how do you choose? These ethical dilemmas will happen in autonomous cars. They happen in traffic today, and machines will be confronted with these kinds of situations and will have to react accordingly. These commissions are trying to draw up the set of ethical guidelines and rules that machines, or AI, will have to follow in the future.
Paramita: This is really fascinating. And do you think that... I mean... I can't say I'm illiterate in all this, but almost, when you talk about algorithms and everything... What I understand is that algorithms work in a way where you have to feed in certain data. And then the artificial intelligence needs to go through the amounts of data that are fed in, and it has to look for patterns, and then you know...
Andreas: That is definitely not the position of an illiterate person. So you seem to know the topic very well.
Paramita: Well thanks to all my guests basically. But I don't understand how you can... How can you teach AI to be unbiased? Do you know what I mean?
Andreas: I know what you mean. In the end, we have to go beyond the plain mathematical algorithm that is there. We have to put this algorithm in the larger framework of how the system should act. We can, for example, tell the system, or any AI system, the boundaries within which it is allowed to learn. Is it allowed to learn to increase its level of bias? Is it allowed to learn to be more aggressive, in the case of the autonomous car: to be a very aggressive driver, to try to cut people off, eventually causing more accidents? So we have to set up the algorithms so that they act within the bigger framework of the overall system, and they have to get clear rules on what they are allowed to learn. And however we have trained our AI system, we have to check whether it is really unbiased, and we have methods to do that. We can check mathematically whether the algorithm shows a level of bias, and before putting those systems into production we have to make sure that the level of bias is as small as possible. It's really hard, because we often deal with human data, and no human is unbiased in a way.
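[Editor's note: one concrete example of the mathematical bias checks Andreas mentions. The sketch below computes the demographic parity difference and the disparate impact ratio on made-up decision data; the 0.8 threshold is a common rule of thumb (the "four-fifths rule"), not a universal standard.]

```python
# A minimal sketch of one mathematical bias check: demographic parity.
# Group labels and model decisions below are made-up illustration data.
def selection_rate(decisions):
    return sum(decisions) / len(decisions)

# 1 = positive decision (e.g. invited to interview), split by group.
group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # hypothetical group A outcomes
group_b = [1, 0, 0, 1, 0, 0, 1, 0]   # hypothetical group B outcomes

rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
parity_diff = abs(rate_a - rate_b)                 # 0 means parity
impact_ratio = min(rate_a, rate_b) / max(rate_a, rate_b)

print(f"selection rates: A={rate_a:.2f}, B={rate_b:.2f}")
print(f"demographic parity difference: {parity_diff:.2f}")
print(f"disparate impact ratio: {impact_ratio:.2f} (0.8 is a common rule of thumb)")
```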
Paramita: Exactly. That is exactly what I was going to say. To teach AI to be unbiased, we have to be unbiased in the first place, you know, the people that are handling the systems. And it brings up such a topic...
Andreas: It definitely brings up a lot of discussion and debate.
In research it's still unclear whether there will ever be a perfectly unbiased AI system for various applications, because in the end you deal with real-life data, and it's really hard to be unbiased in that case. So the idea is more that you try to do the best job you can: that we follow any legal guidelines that exist, that we follow the ethical rules society has set up, that we discriminate as little as possible, that we make sure our datasets are as diverse as possible, and that we constantly control and check the results of our AI system to make sure they stay in line with ethical and legal guidelines that might develop and change over time. So it's really a process that has to be followed. There has to be a governance framework to make sure that we use AI responsibly and that we can trust this kind of AI system, which is really the main purpose of why we are doing this. In the end, the systems should always support humans. They should help us be more efficient, be better, or even be less discriminatory. There are applications where we can foresee that AI might be less biased than humans typically are.
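[Editor's note: a minimal sketch of the "constantly control and check" idea as a recurring audit. The function names, threshold, and batch data are assumptions for illustration, not an established governance standard.]

```python
# Re-run a bias metric on each new batch of decisions and flag any batch
# that drifts past a policy threshold, so a human can review, retrain, or
# roll back the model. All values here are hypothetical.
DISPARATE_IMPACT_THRESHOLD = 0.8  # assumed policy value

def disparate_impact(decisions_a, decisions_b):
    rate_a = sum(decisions_a) / len(decisions_a)
    rate_b = sum(decisions_b) / len(decisions_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

def audit_batch(batch_id, decisions_a, decisions_b):
    ratio = disparate_impact(decisions_a, decisions_b)
    status = "OK" if ratio >= DISPARATE_IMPACT_THRESHOLD else "REVIEW"
    print(f"batch {batch_id}: impact ratio {ratio:.2f} -> {status}")
    return status

# Monthly decision batches (made-up data): governance means someone is
# alerted whenever a batch fails the check.
audit_batch("2019-04", [1, 1, 0, 1], [1, 0, 1, 1])
audit_batch("2019-05", [1, 1, 1, 1], [1, 0, 0, 0])
```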
Paramita: Really?
Andreas: Yes. For example, go back to the CV application datasets. The question is: is a human recruiter less biased than the AI system when looking at CVs? And from the evidence that we are looking at, it seems the AI can be the less biased one. Or look, for example, at border controls. It happens that random checks at the border, or second-line checks, are often based on the perceived ethnicity of the person, from the border guards' perspective, whereas an AI might be able to be more random in that regard, or more fair. So it's always a very fine line that one has to walk. If we use an AI application, it should certainly never reinforce our human biases or express them in an even worse way. Instead, we should use it to improve how we act ourselves, to even reduce our biases. This is an opportunity that is not often talked about in these discussions. We often talk about the dangers of AI if it's not ethical, but we should also talk a little more about the opportunities of AI if it is even more ethical than certain humans.
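[Editor's note: a minimal sketch of the "more random" border-check idea Andreas describes, selecting travellers for a second-line check uniformly at random so the choice cannot depend on perceived ethnicity. The traveller list and check rate are purely hypothetical.]

```python
# Uniformly random second-line selection: every traveller has the same
# probability of being checked, independent of any personal attribute.
import random

travellers = [f"traveller_{i}" for i in range(100)]  # hypothetical queue
check_rate = 0.05  # assumed share sent to second-line checks

selected = [t for t in travellers if random.random() < check_rate]
print(f"selected {len(selected)} of {len(travellers)} uniformly at random")
```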
Paramita: Absolutely. And what should businesses learn from these discussions from these debates? What is their role in making the correct decision when talking about ethical AI?
Andreas: So we have businesses applying AI in different ways. There are businesses that create AI applications, that build up their own datasets and train them. They have to be aware of how to build good, unbiased datasets and how to build the governance framework to make sure the trained AI is used well.
For other businesses that buy an AI application and apply it in their company, it has to be well introduced, with all the employees in mind and with the company goals in mind. Almost certainly, all larger companies have an ethical framework for how they should act, and of course any AI application has to follow that same ethical framework. It should be fair, it should be unbiased, and it should support humans in their productivity instead of causing any potential harm.
Paramita: For example, we have this code of conduct. So you're saying it's the same: just as we need to follow the code of conduct that we have as a company, any AI system needs to follow the same sort of guidelines.
Andreas: Exactly. So this is again about setting the boundaries for the AI system. We have to make sure that all legal guidelines are followed, and AI should never act unlawfully. And we also have to, or rather we have the opportunity to, give our AI the same code of conduct that we have for every other entity in the company.
Paramita: Right. And what's the role of governments, of the EU? Because I think the EU has recently come up with a set of guidelines for ethical AI?
Andreas: Exactly. So given how much impact AI as a topic has had in the last years, governments are of course trying to jump in to create these legal frameworks and to set up the guidelines that everybody should follow, to avoid the adverse effects that we have seen in some other countries, where certain AI applications were used in nefarious contexts.
So what the European Commission did was build up an expert group of AI researchers on the technical side and of people with a strong ethical background, often social scientists. They sat together, meeting several times, to set up a kind of framework for how we can create trustworthy AI systems in Europe that are human-centric, that always act, or should act, in the best interest of humans. They actually finished the final version of this report just a few weeks ago. It gives some guidelines on how to develop these kinds of ethical AI systems, but also on how they can be applied ethically, as we just talked about: follow the code of conduct in the specific company, follow the legal guidelines in the specific country. It's a fairly long document of about 40 pages. One doesn't have to read it all; if there is interest, there is of course an executive summary. It gives interesting thoughts, for example guidance on how AI should always respect human autonomy: humans have to be able to make their own decisions. The idea is that AI should only rarely act completely autonomously, and should more often support humans in making good decisions. This guideline is to be reflected in all the different AI strategies of the EU member states. And we see similar ethical AI initiatives in other countries. The U.S. government is right now working on these kinds of guidelines, research groups in the U.S. have been working on these topics, and we even see such developments in China, which is traditionally known for not being the most ethical about AI applications.
Paramita: That's really promising, to know that China is working on something human-centric. And when you call AI human-centric, it basically means that it needs to serve human purposes, right?
Andreas: Exactly.
Paramita: Andreas, I'm afraid we only have time for this much today, but I'm going to sit down with you once more for sure. Because AI is something... and I'm sure our listeners as well are extremely interested in knowing about AI and all of the aspects... at least some of the aspects related to...
Andreas: Again if you want to talk about everything we could sit here for a few weeks, months.
Paramita: Exactly. Thank you so much Andreas. It was really insightful, thank you.
Andreas: You're very welcome. Thank you for your very insightful and interesting questions.
Paramita: Oh thank you so much. I'll see you next time.
Andreas: See you next time. Goodbye.
Paramita: So that was Andreas on ethical AI. I hope you enjoyed the show. And please leave us your feedback on Twitter or on LinkedIn with the #PwCTechTalk. I'll see you next time.