PwC AI Lab On A Mission To Responsible AI

Article originally published on Silicon Luxembourg

It’s the story of a launch, or rather a relaunch. PwC AI Lab is (re)opening its doors and presenting its activities in a more formal way. Andreas Braun, Team Lead Artificial Intelligence & Data Science at PwC Luxembourg, shares with us what the lab is all about, how the 25-person data team is pursuing the quest for responsible artificial intelligence, and how the lab co-designs AI solutions with its clients.
Why does a consulting firm decide to embrace AI?

For much the same reasons everybody else is embracing AI. We expect huge growth for our economy in the future, along with big shifts in medicine and other scientific fields driven by AI. Of course, as a consulting company we have to be on top of that, because our clients expect it. They want to use AI, they are willing to do it, and they are investing more and more in this topic. We have to make sure we give them the best possible advice. Consulting itself is also benefiting from, and changing with, artificial intelligence. Consulting has a lot to do with research; it’s not all about PowerPoint presentations and talking to people. More sophisticated AI tools help us scan through social media in a few seconds and get a lot of articles presented in a very concise way. This concretely changes how our research teams work and helps them create the best presentations.

Tell us about the AI Lab, its inception, what it aims at.

The AI Lab is all about experiencing artificial intelligence. We don’t just want to show what AI is or what it could do for you. It’s about bringing clients into our space and putting some life into AI, humanizing it. It’s a concept that PwC has already pursued for a few years – in the US, where the lab was founded more than five years ago, and in Japan, where the lab has been operating for the last two years. Now Luxembourg has the first PwC AI Lab in Europe. The goal is to cross-fertilize ideas from the PwC network, gain experience and share use cases. Of course, we are much closer to the Luxembourg reality and our European clients, and able to adapt the solutions to the local market.

You are advocating for responsible artificial intelligence. What does it mean? How does it translate into real actions?

Responsible AI is a growing need. There is still fear, not only in society at large but also at company level, where people are worried about how AI may impact them.

To some, it looks like a big job destroyer that also comes with certain biases. There is the well-known example of a credit card that was giving women different credit limits than men. AI is undoubtedly becoming a challenge for all of us. The European Commission is, of course, thinking of new regulations to make sure that AI is only used in an ethical and responsible way.

At PwC we have already created a responsible AI framework in which we always take into account the current rules and follow the highest ethical standards. We have to make sure that we source the data to train clients’ systems in a reasonable way, and that we have good governance around it, so that we develop the best possible artificial intelligence applications.

We have the tools to run these tests and to make sure that, when our clients want to use AI, they won’t run into pitfalls that could damage their reputation or, even worse, have some legal issues.


There is responsible AI but, is there also irresponsible AI?

There are regulations around this. You should respect human rights in the first place. Some practices powered by artificial intelligence—particularly around social scoring and face recognition, as seen in places such as China, where the State uses biometrics to track minorities—are a clear example of AI that is not responsible.

This could be a challenge in Europe as well, but we are putting in place strong regulation to ensure that everyone can benefit from AI. Every citizen should be aware of the challenges that surround the responsible use of AI, and push their governments to create the best possible policies. AI can even help with this.

How do your customers and partners understand artificial intelligence? What are their fears, desires, etc.?

Regarding Luxembourg in particular, we run studies every year on these topics, asking CEOs how they see AI. A large majority confirms great interest and a strong will to use it. Indeed, more than three quarters of CEOs say they are eager to use AI or want to use it more in the near future.

The main challenge around working with AI is the lack of people. It is hard to find data scientists or AI specialists on the market who are able to implement AI tools.

The lack of knowledge is another one. Many companies that are just starting their data journey don’t quite understand what AI actually is, what they can do with it and how they can benefit from it. This is where our AI Lab comes into play, to let them experience it.

And the last challenge to point out is economic uncertainty. AI is always an investment; it is not something one can expect to benefit from economically very quickly. Therefore, there is some uncertainty. This is why we like to combine the knowledge of our business experts and data experts, to make sure that whenever AI is applied, our clients get economic benefits.

 
One of the specificities of the Lab is to co-design projects. What is it all about?

The co-design part is based on an approach that we call BXT, which stands for “business, experience and technology”. We have the technology people—data scientists like me—and other experts working in this field. Nevertheless, we may lack a full understanding of how a specific industry works or what the client’s current challenges are.

That’s why we always bring together business, technology and data science expertise in the BXT sessions. We gather the ideas together with the client, and conduct a series of workshops where we brainstorm, interact and exchange, and get a clearer idea of what applications would work for the client, or what additional steps we will need to take to come up with a concrete solution for them.

In the end, we are able to build a proof of concept quite quickly: it’s a four-to-eight-week challenge to get a first concrete answer. Is it reasonable to go ahead and implement it, or is it reasonable to give up? If you are going to fail, at least fail fast and move on!

How do you build an AI solution, concretely?

The proof of concept is usually the first step. We demonstrate whether AI can work and whether it fulfills the client’s main ideas and expectations.

Then comes the second part: understanding how the client’s processes are set up, to see whether their existing cloud is sophisticated enough in terms of data processing. This is the role of our data specialists.

When clients aren’t mature enough in terms of data management, we have teams composed of data managers and data engineers as well as data scientists who can propose a full-scale solution, making sure data management is appropriate for applying AI algorithms.

We can also help clients that are already in the cloud, at a very mature stage, by implementing specific cloud-native tools.

How can we place the human being at the center of this new intelligence?

When we look at how AI research has moved in the past few years, we see that some AI tools are very good in terms of performance. In some cases, they perform better than humans, for example when it comes to recognising objects.

Of course these tools become much more powerful if we keep the human in the loop. For instance, if we use AI as an expert support tool or a decision support tool, it can provide humans with a good set of preselected ideas so they can make a much more informed decision without having to go through all the data.

One can find these applications in the medical sphere, where the AI tool does a pre-analysis of a medical image and the expert can then agree or disagree with what the AI has shown. It also allows the professional to look through a lot more cases in the same amount of time.

Last words about the PwC AI Lab.

Our lab is open! So anybody who wants to come and visit us is welcome. Feel free to contact us! We can run workshops, hold informal sessions and present use cases.

