Traversing the responsible AI tightrope: Four perspectives from our AI soirée

By Alwin Magimay

I opened our latest AI event by saying it’s a great time to be alive (geopolitical disruption notwithstanding). I believe AI is an amazing gift. During the evening, we heard wonderful examples of how AI truly can help us build a positive human future. But we also heard about serious risks. With this gift comes great responsibility. That’s why we wanted to bring people together to explore ‘responsible AI’.

Sharing perspectives

At the event – which brought together leading practitioners, investors, technologists, in-house experts, and other interested parties – we heard from the worlds of academia, politics, and technology, and shared our own view as consultants tackling this issue for clients every day.

The overriding view was that responsible AI requires a delicate balancing act – at international, national, and organisational levels. Our speakers considered the challenges involved, covering:

  • The need to be proactive – not just reactive
  • How regulation can provide the scaffolding for effective governance
  • How the FTRA – fair, transparent, responsible, and accurate – framework can help provide structure
  • Why leaders need to plan for tomorrow’s AI, not today’s.

Being proactive as well as reactive

Brett M. Savoie, Coyle Professor and Director of the University of Notre Dame Scientific AI Initiative, underlined how responsible AI requires leaders to be both reactive and proactive.

“Reactive engagement means asking how AI threatens our mission. And then the proactive response is: how can AI help us better fulfil our mission? Both demand action”, he told the audience. This applies to governments and organisations alike.

Coyle Professor Savoie highlighted some challenges in academia, from attributing credit for research to evaluating students. He described AI’s potential as beyond exciting, adding: “It’s astonishing what I’m able to accomplish in a computational space right now with AI assistance… And we can be better educators than ever before.”

Technology expert and entrepreneur Toju Duke contrasted AI’s capacity to get things wrong with its potential to achieve positive outcomes for individuals and businesses. While outlining historical cases of bias, she also described a life-changing app that allows people with ‘non-standard’ speech to communicate.

Brett M. Savoie, Coyle Professor and Director, University of Notre Dame Scientific AI Initiative

Getting regulation right

So, is regulation the answer? The Rt. Hon. Chloe Smith, former MP and one of our strategic advisers on innovation, shared her thoughts on global governance and national regulation. She was running the Department for Science, Innovation and Technology (DSIT) when ChatGPT burst onto the scene.

“The challenges are really tough,” she told the audience. “Internationally, can nations achieve coherence, openness, and collaboration across our differences in a way that averts risk and embraces citizens’ choices? Nationally, can we in the UK continue to foster our tech ecosystem and get growth?

“And, in all jurisdictions, legislation will always be slower than this technology, so there’s a balancing act required between people’s natural concerns and businesses’ desire for clarity.” Right now, approaches differ. The EU AI Act 2024 imposes detailed rules according to four risk levels, while the UK’s approach is “to go lightly on legislation”.

Our lead on responsible AI, Elisabeth Mackay, provided reassurance that existing laws and regulations constitute scaffolding for effective governance. “In recent years I’ve heard a lot that AI is a ‘wild west’… People say to me ‘there are no regulations’ or ‘we don’t know where to start’. We can dispel that myth.”

Rt. Hon. Chloe Smith, Strategic adviser to PA Consulting

Taking a structured approach

Toju Duke, an adviser on responsible AI who was previously Responsible AI Programme Manager at Google, offered a valuable breakdown of risks, including social inequities, data and privacy violations, energy consumption, and misinformation and scams. She also talked about the problems posed by copyright infringement, labour market impacts, and transparency.

Elisabeth Mackay developed Toju’s arguments in advocating for a structured approach. She explained how we’re helping clients translate principles and policies into frameworks with concrete requirements and metrics for monitoring compliance. “If you don’t have principles… start there.” Toju’s list provides food for thought. And our own ‘FTRA’ framework helps organisations ensure systems are fair, transparent, responsible, and accurate.

Elisabeth described how we took one client’s principle of ‘accuracy’ (among others) from an abstract concept to a measurable control. “It manifested itself into documented and defined approaches to testing and evaluation of a system. It manifested into agreed time scales for retraining the system to ensure continued accuracy against things like data drift. And the documentation given to users explained what they should and shouldn’t be doing.”

Toju Duke, Founder, Bedrock AI and Diverse AI
Elisabeth Mackay, Responsible AI lead at PA

Plan for tomorrow’s AI

Coyle Professor Savoie identified perhaps the biggest challenge in implementing AI responsibly. “AI is the least capable it’s ever going to be… So, we shouldn’t be planning for today’s AI, we should be planning for tomorrow’s AI.”

And Toju suggested that the responsibility for getting the balance right falls on everyone adopting AI, not just ‘big tech’. Collaboration, therefore, is key. For Chloe Smith, that means recognising that governments and businesses each hold part of the answer, so working together is the best way forward.

What struck me most from our event was that, when walking the tightrope, it’s all too easy for leaders to be paralysed by fear. Yet there was overwhelming consensus that hesitation could be the most dangerous response of all. Instead, our panellists and audience spoke of how they can learn, co-operate, and continue to move forward.

Alwin Magimay, Global Head of AI at PA

It’s this approach that excites me the most: supporting our clients on the journey to become intelligent enterprises: organisations – powered by data and AI – that create radical new futures by combining technologies with human insight. And doing so in a way that is responsible and repeatable. I look forward to continuing the journey.

About the author

Alwin Magimay Global Head of AI
