TechTalk ep15: How AI keeps an eye on bad guys

Paramita: Hello and welcome to a brand new episode of PwC Luxembourg TechTalk. We are continuing our series on Data and AI, and today I chat with Gregory Blachut, Director, about AI's role in fraud detection, or, as he prefers to call it, anomaly detection.

Paramita: Hello Greg, welcome to TechTalk. I'm very happy to welcome you here.

We are continuing with our series on Data and AI, and today's topic is quite interesting because we will talk about how AI is used to detect fraud and what best practices businesses can follow to detect it. But before we start, what do you think the status quo is? How big is the phenomenon of fraud in the world right now?

Greg: Hello, Paramita. I'm happy to be here with you for this podcast. To come back to your question of how big the phenomenon is today, I would first like to replace the word "fraud" with "anomaly".

I think "fraud" is already a term with kind of a judgment. 

Data and AI really aim to detect anomalies more broadly. Some of these anomalies may turn out to be fraud, but before that they are simply anomalies.

How big is the phenomenon… We can see today that every sector needs to detect anomalies. The first thing companies have in mind is the loss of money, and they take this point very seriously. It has to be addressed because there is a direct financial loss and also a big impact on reputation.

In Luxembourg we can see today that some companies are facing fraud and are looking for anomaly detection systems. All companies are concerned, from the financial sector to operational sectors, as well as public institutions and hospitals. Everyone is concerned by this topic.

Paramita: OK, if we are talking about anomalies, what kinds of anomalies are we talking about?

Greg: A company has a lot of assets to protect. The first one is money. The second one is data, because today information has value. They also need to protect people. By looking for anomalies, you can also handle situations that were not foreseen and protect people. For example, in the industrial sector you can look for anomalies in a machine and do predictive maintenance on it. By looking for anomalies, you will see clearly that the machine will need maintenance, and you can detect this quickly. That maintenance protects your people and also helps your company keep production sustainable.

Paramita: So we are really talking not just about fraud, but about all the abnormalities we can find.

Greg: Yes, indeed. Thanks to AI, the methodology behind detecting a fraud, an anomaly or a strange behaviour is the same, you know. And as I mentioned, we can really only call something fraud once it has happened. What we are doing by looking at anomalies is being a little more on the predictive side, trying to find out whether something is wrong.

Paramita: So how can AI help detect these cases? How can AI and machine learning... Because when I started doing a little research on this, what came out was that it's not just AI but the combination of AI and machine learning that helps detect these cases. But before that, can you tell us a little about the difference between AI and machine learning?

Greg: I don't think there is a big difference between AI and machine learning. Machine learning is simply the methodology that supports AI. We use machine learning, for example, for predictive maintenance: we take a vast amount of data from the past to predict the future. That is already artificial intelligence, something that can predict, and machine learning is the way to achieve that objective. So machine learning is more the technique behind the term AI. Under AI we can put a lot of subjects, but most of the time the technique underneath is machine learning.

Is that clearer for you?

Paramita: A little bit, yes. So what I get is that machine learning is just a part of AI. It's the first part.

Greg: No, machine learning is how we process the information and the data. We use algorithms called machine learning to work with this data for predictive analysis, to predict the future.

Paramita: Yes. So, as I was saying, it's probably the first step. Yes.

Greg: Yes, indeed. It is a first step, but also (and this is something a lot of people miss) before moving on to prediction there is a lot of work to be done. In companies, the first step is to find the right information, which is already a long journey.

Paramita: Yes, that's exactly what we discussed in the episode on process intelligence… about data quality, right?

Greg: Yeah. So this is the first step. The second one is to get the information into a good format. This is also a big challenge because, if I look at financial services for example, a lot of banks have legacy systems, and sometimes you need information from the past. So you need to mix very old information with the most recent information, and you have to put it all into the same format to be able to run some descriptive analytics, because you need to see which types of information you have. Only once all these steps are done can you start the machine learning process and try several models to find the best one for your objective. But 80% of the work is really around preparation.
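To make these preparation steps concrete, here is a minimal Python sketch (not from the episode) of the journey Greg describes: gathering a legacy export and a current-system extract, putting both into the same format, and running a quick descriptive pass before any model work starts. The file names, column names and renaming rules are purely illustrative assumptions.

    # A minimal sketch of the preparation Greg describes; paths and column
    # names are illustrative assumptions.
    import pandas as pd

    # 1. Gather the data: a legacy export and a current-system extract.
    legacy = pd.read_csv("legacy_transactions.csv", parse_dates=["date"])
    current = pd.read_csv("current_transactions.csv", parse_dates=["date"])

    # 2. Put both sources into the same format before anything else.
    legacy = legacy.rename(columns={"amt": "amount", "cust": "client_id"})
    transactions = pd.concat([legacy, current], ignore_index=True)
    transactions["amount"] = transactions["amount"].astype(float)

    # 3. Descriptive analytics: see which types of information you actually have.
    print(transactions.describe())
    print(transactions.isna().mean())  # share of missing values per column

    # Only after these steps does the machine-learning work begin: trying
    # several models and keeping the one that best serves the objective.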

Paramita: Coming back to the how-to: how does the combination of AI and machine learning help detect these cases, these frauds or anomalies?

Greg: First of all, to detect anomalies you have to look at the entire population. When I say population, I mean the dataset. We were using, and still use, rule-based systems to detect anomalies.

This is a system put in place by humans with specific rules. For example, every transaction above a certain amount should be checked by a dedicated service, or more than 10 transactions a day per client should be checked by the internal control services. It is quite a robust system and it is still used. But with this system you don't know what you don't know. You can write rules, but if you are faced with people showing new behaviour, you will not have rules for it. Thanks to AI and machine learning, you will be able to detect these anomalies first: the machine will tell you, "this is a strange behaviour that I'm not used to seeing."

If I take an example of such behaviour, it is really linked to financial services and cybersecurity. You can put all the rules you want in the system, but if in front of you there are hackers who basically know the rules of banks, they can prepare an attack that performs a transaction while respecting the existing rules. With AI, the system will see that something is wrong, because it is not the classical behaviour: using the past experience of the bank's systems, it will clearly detect that something abnormal is happening and will ask a human to intervene. And this is where there is a clear link between the human and the machine.
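As an illustration of the difference Greg describes, here is a minimal sketch (not from the episode) of a rule-based check sitting next to a simple learned anomaly detector. The thresholds, the simulated transaction features and the choice of scikit-learn's IsolationForest are assumptions made for the example, not the method used by any particular bank.

    # Rule-based checks versus a learned anomaly detector; all numbers and
    # feature names are illustrative assumptions.
    import numpy as np
    import pandas as pd
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)
    tx = pd.DataFrame({
        "amount": rng.lognormal(mean=4, sigma=1, size=1000),
        "tx_per_day": rng.poisson(lam=3, size=1000),
        "hour": rng.integers(0, 24, size=1000),
    })

    # Rule-based system: explicit thresholds written by humans.
    rule_flag = (tx["amount"] > 10_000) | (tx["tx_per_day"] > 10)

    # Learned system: flags behaviour it is "not used to seeing", even when
    # every individual rule is respected.
    model = IsolationForest(contamination=0.01, random_state=0).fit(tx)
    ml_flag = model.predict(tx) == -1  # -1 means the point looks anomalous

    # Cases the rules miss but the model finds unusual go to a human reviewer.
    to_review = tx[ml_flag & ~rule_flag]
    print(f"{rule_flag.sum()} rule alerts, {ml_flag.sum()} model alerts, "
          f"{len(to_review)} sent to human review")

The last step is exactly the human-machine link Greg mentions: the model does not decide on its own, it only routes unusual cases to a person.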

When the machine detects something, the machine can’t stay alone...

Paramita: And it can’t decide for itself...

Greg: There are already some AIs today able to take decisions. We can see it in the automotive sector...

Paramita: Yes, that's what I was going to say actually, because my next question really focuses on that part. A couple of episodes back we spoke with our colleague Andreas about ethical AI, and we saw how biases can get into systems and how AI can become biased towards a certain pattern or a certain type of behaviour.

So my question is: how can we trust these machines to really tell us that a fraud has been committed or that something is wrong here? Where does human intelligence come in? Where do we come in?

Greg: It's a good question. You mentioned trust, and I think that is the most important word in this discussion.

Let me ask you a question. Today, I give you a car with the capacity to drive in full autonomy. Will you get straight into the car and let it drive for you? Or do you want to keep control of the car and test it piece by piece until you are sufficiently comfortable to let it drive?

Paramita: No way will I let the car drive.

Greg: So indeed, this is the main thing. No one will let the machine run by itself without any control. That will probably come one day, because the machine can replace humans in some areas, but before that we need a lot of trust, and this trust will come from the fact that the machine is built and calibrated by humans. And not by just one human. If I take the example of fraud or anomaly detection in a bank, we cannot let only one or two people set the parameters of the machine, because by doing this they will also put their own biases, their own thinking, into the machine.

So for the parameters of the machine, it is very important to keep control and also to have a methodology that prevents the machine from having too much bias inside it.

Paramita: And what would you recommend to a company or to businesses as a methodology?

Greg: To come back to a financial institution: banks today need to react more quickly. There are new regulations coming and new requests from clients; you want your transaction performed in less than a few seconds. These transactions should be performed correctly, and the bank needs to ensure that each transaction is a legitimate one. Today, artificial intelligence can help analyse all these transactions, because there are a lot of them every day and a human cannot analyse them all. And a rule-based system will also systematically block transactions that should not be blocked.

So AI can analyse everything quickly for the human. And in such projects... I mention this because in such a project you cannot introduce AI in one or two months, or even in one year. In my view, this is a long-term project, because you first need to analyse the needs of your institution and also run some proofs of concept. You have to invest a little money because you will learn from these proofs of concept. A lot of companies are proposing solutions, and you have to test them. So for me the methodology is test and fail, several times, just to learn. It could take a year before you can make a good decision, because until you have seen all the technologies and what is on offer... A lot of companies say they have a very robust, very solid, plug-and-play AI, but in reality, when you test it, you can see that this is not the case yet.

One more important element is that you should have internal competencies to manage AI. You can put in place a system with AI, and my recommendation would be to keep the classic system as well, running the two in parallel to see if the AI is sufficiently robust. But you also need a data scientist internally to keep control, and it's a question of transparency: the machine can run, but you need to understand how it is running. Without these people it is difficult, because we are talking about coding and programming. Today we talk a lot about Python and similar tools. This is quite new, only the newer generation is used to working with such systems, and you need to grow this competency internally to keep an eye on AI.
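One possible way to run the classic system and the AI in parallel, as Greg recommends, is to track how often human reviewers confirm the alerts coming from each. The sketch below (not from the episode) assumes a small log of alerts with a "confirmed" outcome recorded by the internal control team; the data and column names are hypothetical.

    # Comparing the rule-based system and the model during a parallel run;
    # the alert log below is entirely made up for illustration.
    import pandas as pd

    alerts = pd.DataFrame({
        "source":    ["rules", "rules", "model", "model", "model", "rules"],
        "confirmed": [True,    False,   True,    True,    False,   True],
    })

    # Share of each system's alerts that human reviewers confirmed.
    precision_by_source = alerts.groupby("source")["confirmed"].mean()
    print(precision_by_source)

The idea is to keep both systems running until the model's confirmation rate is consistently at least as good as that of the rules it is meant to complement.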

Paramita: An eye on AI, that sounds nice. So just before we end our conversation, are there any new developments or trends coming up in AI, especially when it comes to detecting abnormal cases?

Greg: Yes, there are some new trends, and one we see today is around the sharing of information. To be clear: imagine I am a bank. If I take only my bank's own dataset, my past data, I can have my own AI. But it is also important to see what is happening outside my bank and what the others are doing. Knowing and sharing information will really help companies create a very solid AI, because the more information you give the AI, the more accurate it will be. It is quite important that this data sharing exists, and it can be done in an anonymous way: a bank should not provide the full dataset of its clients to other banks, but it can create a specific dataset to share among the community of banks. And if all the banks do the same, a big dataset can be used to test your own AI as well. So you will be able to predict anomalies using all types of information and all the anomalies of the others.

Paramita: Yeah from the entire community.
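As a rough illustration of the anonymous sharing Greg mentions, the sketch below (not from the episode) drops direct identifiers, replaces client IDs with salted hashes and keeps only behavioural features plus the anomaly label. The field names and the salt handling are illustrative assumptions, not a compliance recipe.

    # Preparing an anonymised extract that a bank could, in principle, share;
    # fields and salt handling are illustrative only.
    import hashlib
    import pandas as pd

    SALT = "per-bank-secret-salt"  # assumption: kept private by the sharing bank

    def pseudonymise(client_id: str) -> str:
        """Replace a real client ID with a salted, irreversible hash."""
        return hashlib.sha256((SALT + client_id).encode()).hexdigest()[:16]

    clients = pd.DataFrame({
        "client_id": ["LU001", "LU002"],
        "name": ["Alice", "Bob"],            # direct identifier: never shared
        "avg_amount": [420.0, 18500.0],
        "tx_per_day": [2.1, 14.8],
        "flagged_anomaly": [False, True],
    })

    shared = clients.drop(columns=["name"]).assign(
        client_id=clients["client_id"].map(pseudonymise)
    )
    print(shared)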

Greg: Yes. Also in terms of trends, we see clearly that some countries in Western Europe are quite advanced in this domain, and that also comes from the fact that they were very good at cybersecurity. Detecting a cyber attack is much the same as detecting a fraud in a bank, and it could be the same as detecting an anomaly in a factory: the machinery behind it is almost the same. Companies that focused on cybersecurity a few years ago have a head start in AI, I would say, and today these companies are also proposing solutions.

Big players have their own solutions as well. I think everyone is taking the direction of AI, and this is no longer a trend. I think this is really the present.

And I'm pretty sure it will continue, because we can see, for example in the health domain, that AI can bring a lot of very good opportunities.

When I say opportunity, I mean an opportunity for human beings: you can detect cancer more quickly. And this is still anomaly detection, not fraud; it's an anomaly for a human to have cancer.

Paramita: Very good point.

Greg: And for me this is the most important element: trust in AI will come first thanks to the health sector, and progressively it will move into your daily personal life and into the professional world as well.

My vision is really a positive one. AI will never replace a human; AI will help the human. And today it is already a reality. The big thing to address today is ethics, and not using AI for bad things. I would say that as human beings are intelligent, humans will use AI intelligently.

Paramita: Thank you so much Greg.

Greg: Thank you.

Paramita: So that was Greg. I hope you enjoyed the show. Please do not forget to leave your comments with the #PwCTechTalk on LinkedIn or on Twitter. I'll see you next time.

 
