TechTalk ep16: The one where Anand Rao talks to us about AI

Paramita: Hello and welcome to PwC Luxembourg TechTalk. Last week, I had the pleasure of talking to Dr. Anand Rao, PwC’s global AI leader. And what a pleasure it was! We spoke about topics ranging from his view on how AI has evolved over the past three decades to the recent concept of ethical AI. He also tackled questions from my PwC Luxembourg colleagues. Today’s episode is the first part of our conversation.

Paramita: Hi. I am super honoured today because we have Dr. Anand Rao with us in the studio. Thank you so much. I'll start right away with my questions. So basically my conscious connection with the concept of AI was, and I'm not ashamed to say it, Steven Spielberg's movie AI, where they showed a robot who developed love for his adoptive mother, a human being. That was my introduction to AI, and it was in the 2000s. You, on the other hand, did your PhD in artificial intelligence in the 1980s. So the first question that came to my mind was: what was AI at that time, first of all? And what drove you to study AI?

Anand Rao: Yeah, I did my undergrad in computer science in India, and AI has always been at the forefront. No matter the era, AI is always the leading edge, the boundary of computer science, philosophy, logic, everything. It's a very interdisciplinary area and it's always at the forefront. So when I finished my undergrad in computer science, I wanted to do something in the field, and the obvious area to me was AI. It was still very exciting at that time, and people looking at it today would think, oh, there was nothing much happening in AI in 1985. But at that time it was very exciting, and I'm sure the same could be said about the 1950s and 1960s as well. The term AI was coined in 1956 at the Dartmouth conference, so I'm sure it was exciting for people at that time to be at the forefront of AI. Similarly, in '85 it was still very much at the forefront, which it is today as well. One of the great things about AI is that everything that has been done is no longer called AI, and we see that happening today. People say, oh, machine learning is different from AI, but machine learning was very much at the heart of AI, at least for many of us who have been with AI for a long time.

Paramita: And what was the perception of AI in the imagination of the public?

Anand: Yeah. Even then, I think it was very much around how AI would transform what we do and how we do those things. In '85 I was doing my PhD; I finished it around the late '80s, early '90s. Japan had started its fifth generation computing programme, and that was the 5G of the time. I know now we talk about telecommunications 5G, but AI had already gone through its own 5G back then. And just as people now talk about China and China's AI strategy, people at that time were very much talking about the Japanese fifth generation programme and how everyone would be looking towards Japan for all the robotics and the AI coming in.

And that was a big scare in the late '80s, early '90s, that the US would be overrun by Japanese technology. I'm sure they're still very good in robotics, but I would say the US still maintains the edge in terms of AI, and the western world is still very much pushing forward in AI.

Paramita: And again talking about the public perception, since you have been in this field for so long now, how have you seen the perception evolve over the years? For example, did what the public, I mean not really the tech people, thought in the 1980s come true? How has the evolution been?

Anand: I think the public perception is interesting. It has always been going up and down, very much based on the promises that the AI people, the researchers, make. And the challenge with AI has always been that people have over-promised and under-delivered. This happened in the 1950s and '60s. It happened again in the '80s, and again we are seeing that now. Every time the over-promise is not delivered, or is under-delivered, we call it an AI winter: the funding is withdrawn, not many people are interested, AI becomes a dirty word. No one puts AI on their CV, and then everyone goes underground except for maybe a few researchers who have tenured positions in universities. They just continue their work, then there is another breakthrough, and then it all starts over again. So we have seen that movie play out at least three times. This time around, people say that it's very different. But every time they keep saying that as well.

In the ‘80s, it was all around expert systems, rule-based systems, common sense reasoning and how you can use logic to do that common sense reasoning, and there were also neural nets, the connectionist machines, connectionist programming. That was sort of the biggest thing. And of course things went back and forth between the logical approach and the connectionist approach. Then somehow the logical approach won over, at least during that period, and then everything went down. And then of course it resurged; the resurgence came again in 2007-8. But now it is very much around neural networks and deep learning. So just as the academic community goes up and down, the feeling among the general public has also gone up and down. But I must say that this time around there is probably far more awareness of AI amongst the general public than in those years, just because the technology is mature enough to appear in our smartphones and in all the things that we do, social media and so on. In that sense I think it is more pervasive this time than the previous times. So any kind of a downturn could probably be even more damaging to AI if it happens now, because the public are more aware of it than in the ‘80s.

Paramita: And the question of ethics, because I know that you're going to talk about it at ICT Spring tomorrow, about responsible AI. Was the question of ethics always there when we started talking about AI?

Anand: Yeah, it's interesting. The question of ethics, I don't think it was as pronounced as what we are seeing today. AI was still, I would say, in only some commercial applications; it was definitely being used commercially, but not in a very extensive manner. So we didn't see that much of AI ethics there. Again, some philosophers would talk about it, so the trolley problem, as we say, was very much a philosopher's discussion, a logicist's discussion. But it was not really front and centre. With autonomous vehicles, though, and all of the things that have happened now with respect to bias, fairness, the interpretability of models and the complexity of those models, I think it has definitely taken on much more prominence, literally within the past three to four years. Our research identifies more than 70 to 80 different organisations coming up with AI ethics principles, each with their own little slant. Of course there's the IEEE standard and the EU standards and so on, the various large bodies, but then every association is looking at AI and asking how AI impacts their business and what some of the ethical considerations are. We believe it is quite good that everyone is looking at that issue on the softer side as well.

Paramita: You mentioned bias. How do you define, how does one define bias or fairness in AI?

Anand: Yeah, so this is something that we have looked at very carefully. There's a lot of academic work, and it has actually increased over the past two to three years. There is a specialist conference on fairness, accountability and transparency, and sometimes they add ethics as well; “FAT ML” is what it's called, and there are a number of academic papers now coming out that formally define some of these notions. We have researched these documents and, based on other people's work, defined 32 of them. These are mathematical definitions of fairness, not just English sentences; there are precisely 32 of them. Now, when someone says an algorithm is biased, we need to be careful as to what they really mean. It's not necessarily that the algorithm is biased; it's just that no algorithm can be true to all 32 definitions. So you might say that the algorithm is biased based on your definition of fairness, and I might look at it and say, no, that's fair according to my definition. We as people don't agree on the definition of fairness, and that's not really the algorithm's fault. But there are instances where the algorithm can be biased because the data is biased in some way, or we haven't really used the right definition. It's a very complex issue.
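To make that concrete, here is a minimal sketch, not from the interview, that scores one toy decision rule against two commonly cited mathematical fairness definitions, demographic parity and equal opportunity. The data, groups and threshold are entirely made up for illustration.

```python
# Minimal illustration only: synthetic data, arbitrary threshold.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Two groups with different underlying score distributions (purely hypothetical).
group = rng.integers(0, 2, size=n)                       # 0 = group A, 1 = group B
score = rng.normal(np.where(group == 0, 0.6, 0.4), 0.15)
qualified = (score + rng.normal(0, 0.05, n)) > 0.5       # hypothetical ground truth

approved = score > 0.5                                   # one decision threshold for everyone

def rate(mask: np.ndarray) -> float:
    """Share of True values in a boolean array."""
    return float(mask.mean()) if mask.size else float("nan")

# Definition 1: demographic parity: approval rates should match across groups.
dp_gap = abs(rate(approved[group == 0]) - rate(approved[group == 1]))

# Definition 2: equal opportunity: approval rates among the qualified should match.
eo_gap = abs(rate(approved[(group == 0) & qualified]) -
             rate(approved[(group == 1) & qualified]))

print(f"demographic parity gap: {dp_gap:.2f}")
print(f"equal opportunity gap:  {eo_gap:.2f}")
# The same rule gets a different "unfairness" score under each definition, and
# in general both gaps cannot be driven to zero at once when the groups' base
# rates differ, which is why agreeing on the definition matters.
```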

I would say quite a bit of it is a human issue in addition to being a technology issue. I know some people just treat it as a technology issue and say we can fix the bias, we just need the right data, the right AI. But it's much more than that. It's essentially that we as a society need to come to grips with fairness.

Paramita: Exactly. You actually answered the question that I was going to ask, because my question was: who defines fairness? Because, like you said, it is in our heads. What is fair for me might not be fair for you. So you're saying that there are researchers who are working on algorithms and mathematical definitions. And so far there are 32, you said…

Anand: The answer to that is, I don't think there is a clear answer; there isn't one single definition of fairness. What we are doing, again as sort of keepers of trust if you like, as the PwC brand defines us, is essentially trying to make it easy for business people to make those decisions. So what we are saying is, if it is a specific decision, let's say you're giving out mortgages, you are a bank and you have an AI or machine learning algorithm which decides whether to give a mortgage or a loan to person X versus person Y. In that specific situation, what are all the considerations, which definitions of fairness should you be choosing, and what are the variables that you need to use so that you don't discriminate against people based on age, gender, ethnicity and so on? What are the various definitions, and across which definitions is the algorithm consistent? We are making all of those things visible to the decision makers, without getting too technical, so that they can make the decisions and document them for the regulator to come and see. Just being able to say that I looked at 32 definitions, I did all of this analysis across these X number of variables, and then we all sat down around the table and made this decision would, I think, be sufficient due process for the regulator to accept that you have taken into account all of the considerations.
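As a rough illustration of that workflow (purely a sketch; the class, attributes and numbers below are invented and are not PwC's actual tooling), one could imagine recording how a candidate mortgage model fares against several fairness definitions per protected attribute, so the final choice can be documented for a reviewer or regulator.

```python
# Hypothetical sketch of a "fairness checklist" record for one candidate model.
from dataclasses import dataclass

@dataclass
class FairnessCheck:
    attribute: str      # protected attribute examined, e.g. "gender"
    definition: str     # fairness definition applied, e.g. "demographic parity"
    gap: float          # measured disparity between groups under that definition
    tolerance: float    # threshold agreed by the decision makers

    @property
    def passed(self) -> bool:
        return self.gap <= self.tolerance

# Invented measurements, for illustration only.
checks = [
    FairnessCheck("gender",   "demographic parity", gap=0.12, tolerance=0.05),
    FairnessCheck("gender",   "equal opportunity",  gap=0.03, tolerance=0.05),
    FairnessCheck("age_band", "demographic parity", gap=0.04, tolerance=0.05),
]

for c in checks:
    status = "PASS" if c.passed else "REVIEW"
    print(f"[{status}] {c.attribute:<9} {c.definition:<19} "
          f"gap={c.gap:.2f} (tolerance {c.tolerance:.2f})")
# A log like this is the kind of artefact a team could keep to show which
# definitions were considered, which variables were examined, and how the
# final decision was reached.
```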

Still, it is left to the organisation to make that decision in terms of mortgages. There are other cases where governments need to make the decision. AI is being used for a number of decisions at the government level, and there are people now saying that wherever AI is being used in the governmental sense, whether in predictive policing, granting parole, or education and admission into universities, let's make all of these areas where AI is making a decision quite explicit and also open up the algorithm for academic review and social review. A lot of people are asking for that: just make it more transparent so everyone can see where and how these definitions are being worked out.

Paramita: And since these are, like you said, mathematical algorithms, can these one day be included in a sort of universal guideline for ethical AI for all countries?

Anand: So the universal ethical principles are in fact coming up. The IEEE has guidelines on ethics, the EU has its guidelines on trustworthy AI, and a number of other countries are coming up with AI ethical principles. At the highest level, I think we can probably agree on most if not all of those ethical principles. The challenge then becomes how you actually translate them for a very specific business. Take, for example, the principle that all AI should be beneficial to humanity and should not harm human beings; I think almost everyone would agree with that. But how does that translate when you are making a decision on loans again? You might say, I don't want to use any of the variables like age, gender, ethnicity and so on, but you might be using, for example, zip code, and it might so happen that the zip code is very highly correlated, in the US it happens with ethnicity. You might say, I'm just using the zip code. Now technically it's not discrimination, but we know it is highly correlated. And if that is the case, then ethically you should not be using it; legally you may or may not be able to, depending on which country it is. So that becomes an interesting question or issue.
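A minimal, entirely synthetic sketch of that proxy effect: drop the protected attribute, keep a correlated feature such as zip code, and the decisions still split along the protected attribute. All numbers below are invented for illustration.

```python
# Synthetic illustration of a proxy variable; no real data is used.
import numpy as np

rng = np.random.default_rng(1)
n = 20_000

ethnicity = rng.integers(0, 2, size=n)                 # protected attribute (synthetic)
# A zip-code grouping that is strongly, but not perfectly, aligned with ethnicity.
zip_group = np.where(rng.random(n) < 0.85, ethnicity, 1 - ethnicity)

print(f"correlation(ethnicity, zip_group) = {np.corrcoef(ethnicity, zip_group)[0, 1]:.2f}")

# A lender that "only" looks at zip_group and never at ethnicity.
approved = rng.random(n) < np.where(zip_group == 0, 0.8, 0.4)   # hypothetical policy

for g in (0, 1):
    print(f"approval rate for ethnicity {g}: {approved[ethnicity == g].mean():.2f}")
# Ethnicity was never used directly, yet the approval rates differ sharply,
# because the proxy carries most of the protected information.
```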

And then, to what extent is fairness always desirable? That is also a very interesting question. For example, we know statistically that women live longer than men. If that is the case, then when it comes to life insurance policies, should we demand that males and females be treated similarly, and therefore not differentiate the life insurance premium?

The insurance guys would say no.

We are accounting for the risk. Now take the same thing with respect to auto insurance or life insurance. Is it really the right thing to be fair? People may not like it, but fairness has a certain cost associated with it, so can we use that cost to make those decisions? We all know that in most parts of society women are not really at 50/50, even though in the overall population men and women are roughly 50/50. We know that in the higher echelons of management women are not at 50/50. Let's say an organisation has only 30% women overall. Now, within a year, should they be at 50/50? If they did that, they would be fair with respect to the entire group of women, but they might be unfair to an individual man. Being fair to a group may result in being unfair to an individual. And gender is just one aspect; it could be ethnicity, it could be age, it could be any of those. So there is fairness of a group versus fairness of an individual, fairness of outcome versus fairness of treatment. It's a very complex issue that we are dealing with.
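To put rough numbers on that group-versus-individual tension (the pool sizes and slot count below are made up purely for illustration): forcing a 50/50 outcome from an unbalanced pool means individuals in the two groups face very different selection odds.

```python
# Invented numbers: a toy promotion round with a 50/50 group-level target.
pool = {"women": 30, "men": 70}                    # hypothetical promotion pool
slots = 10
quota = {"women": slots // 2, "men": slots // 2}   # group-fair outcome: 5 and 5

for grp, size in pool.items():
    chance = quota[grp] / size
    print(f"{grp}: {quota[grp]} of {size} promoted -> individual chance {chance:.0%}")
# women: 5 of 30 -> ~17%; men: 5 of 70 -> ~7%.
# The outcome is balanced at the group level, yet two equally qualified
# individuals from different groups now face very different odds: fairness
# for the group versus fairness for the individual.
```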

Paramita: Philosophical basically…

Anand: Yeah very much.

Paramita: That was the first of two parts of my chat with Dr. Anand Rao. Make sure to tune in next week when he answers more questions.
