15 minutes with: Laura Hawkins

By Laura Hawkins

Our experts are at the forefront of bringing ingenuity to life for our clients. They accelerate new growth ideas from concept through design and development to commercial success. And they revitalise organisations with the leadership, culture, systems, and processes to make innovation a reality.


In this series, you’ll meet some of the brilliant minds creating change every day.

Laura Hawkins
Laura, an expert in data, helps clients use AI responsibly on their journeys to the intelligent enterprise.

Tell us a little bit about you, your background, and how you came to PA?

I studied Ancient History and Egyptology as an undergrad at Swansea in the UK. I loved it, potentially influenced unduly by Rachel Weisz in The Mummy, but in the end I didn’t want to go into a career in archaeology. My brother told me about a Master’s degree in intelligence and security studies that he thought would be a good match for my interest in putting pieces together and creating a narrative. I did the Master’s degree, graduating top of a very alpha-male class. Then I had some time out, lived in Australia for a little while, and then started working as an intelligence and resilience analyst. This incorporated crisis management, incident response, and protecting people and assets within organisations.

In 2016, cyber security and data privacy started to come to the fore a little more. The General Data Protection Regulation (GDPR) came into effect in 2018. Cyber and data threats weren’t replacing physical threats, but there was certainly an added data dimension that adversaries could leverage to disrupt and harm. I pivoted my career to focus more on that aspect rather than the response to physical attacks. I moved to the Financial Conduct Authority (FCA) for a little while. I loved working there. They have a real objective of helping society which resonated with me. I then moved to PA. So, it was a strange way to get here, but the common thread is protecting people, assets, and data.

How would you describe your role and work to someone you’d never met before?

I figure out how a firm uses data – whether it’s personal data, intellectual property, proprietary data, sensitive data, financial data, health data – and how that data flows throughout the firm to identify any risk. I'm very much a data risk person.

I help organisations manage how they use data within the realm of digital trust.

This means looking at privacy, security, ethics, legal, compliance, and all of the different risk domains to see the potential impacts to a business.

I’ve done a lot of financial services work, but now that the regulatory landscape is evolving with GDPR, the European Union’s AI Act, and global laws around AI and data privacy, there’s increased focus on ethical and compliant data use across industries. Some of the most interesting data use cases are in consumer and manufacturing, healthcare, and life sciences. You can do some really cool stuff in energy and utilities as well, leveraging data to get better insights into how we use water, for example.

What makes PA’s approach to AI different?

All of the people in our data privacy and digital trust teams are pretty geeky about this space. We really enjoy this subject matter and this domain. We stay on top of trends and share news proactively with each other. We have practical experience of operationalising privacy, responsible AI, or resilience across digital trust.

We not only have the practical experience, but we’re also truly passionate about our work, which I think comes across in what we do.

How has your work changed in the past few years?

In terms of my role, there’s a lot more client engagement. I think one of the exciting things since moving to New York about two years ago is that we do a lot of collaborative events with clients and prospective clients. We’ve run AI roundtables, bringing people together, in person, to have great conversations. It’s not a sales thing – what we’re doing is creating that larger community of practitioners within our client base, and people that aren’t our clients but work in this field. We do a lot more on responsible AI, data governance, and the practical implementation of AI concepts and use cases within business.

How does ingenuity show up in your work with clients?

Ingenuity, to me, is making sure you bring your authentic self to work, which I always try to do because people respond to someone who’s passionate about what they’re doing. I bring a personalised, intelligent response to each and every client as opposed to something generic that they could get from anyone. Ingenuity is never cookie cutter or generic.

My response for clients will always be tailored to them. We will flex depending on their goals, their stakeholders, and what they need so they get something that’s actually helpful to them. We never leave clients with a bunch of documents they have never seen before – we take them on that journey with us.

What is exciting you most in the world of AI and digital trust right now?

It’s definitely responsible AI. I’m at a point in my career where I can actually effect change, which is really exciting.

Responsible AI use is something I’m very passionate about, and I love the fact that we create and implement AI use cases from a responsible AI and digital trust point of view.

Every project will always have that aspect. A lot of people are grappling with the new regulatory landscape, which is very nuanced. We have quite philosophical conversations with clients around ethical AI use and ethical data use, and the practical implications of legislation that isn’t quite where it needs to be. If we lay good foundations now, we’ll support organisations to be more competitive and successful. It’s great to see that firms are taking a risk-based approach and thinking about responsible AI and responsible data use.

As a leader in AI, how do you see AI having an impact and in what way?

AI is great for things that a computer is great at. If I have text A and text B, it can tell me the differences between them. The other day, I couldn’t remember the name of a 90s children’s TV show, so I asked ChatGPT. It got the answer wrong the first time, but then got it right. It basically acted like a scaled search engine that presents the likely answer, but interpreting the information still takes a critical eye.

If you’re not learning how to use generative AI in your day-to-day job, you should, because it’s not going away. People need to learn to use new tools such as Claude, Gemini, and ChatGPT. I’m not talking about coding or learning how to create an AI system itself – I’m talking about staying digitally literate in the same way the general public now know not to click on phishing emails. Do we know how to use appropriate prompts for AI? Do we understand what a hallucination looks like? Hallucinations are more common than you might think. For example, I’ve asked GenAI tools questions about a law that I’m familiar with and received the wrong answer. People need to think critically about the output. They also need to be more aware of AI scams, and more careful about what data they put into AI systems when they don’t know where that data is going.

When organisations use AI, they need to take a scaled approach, and understand how AI aligns with the business strategy. Is it additive, or just something shiny and new?

Currently, clients grapple with AI governance, how to use AI, and what’s acceptable. It’s really important to think about the customer journey, and build trust from the get-go.

Take chatbots, which are very common. Under the European Union’s new AI Act, organisations need to let customers know that they’re talking to a chatbot, not a human. AI offers a chance to rethink the customer journey. What data is being fed into the AI system, how is that data understood, and how will customers interact with the technology end-to-end?

Looking ahead, clients may struggle with third-party risk. When procuring an AI system, many firms buy rather than develop, because developing is expensive and requires expertise, of which there’s not a lot to go around. So, when procuring a third-party AI system, how do you ensure the system is in line with regulation and your own approaches, and that you understand what it does and how it uses data? It’s another complicated cog in an already complex supply chain. GDPR covers personal data use and third parties, but new regulations will require organisations to have AI systems that are explainable and transparent, including across third-party supply chains.

What project are you most proud of?

The project I’m most proud of was embedding a whole risk management framework for a fintech company. It was exciting because the firm used emerging tech and a new way of doing things within a traditional sphere of the financial markets infrastructure. We worked with them to set up their risk management framework, hitting all domains including data security, resilience, and third-party risk. That was definitely my favourite, because I got to create a risk management framework from scratch, with smart, engaged people, which is always a dream.

What advice would you give to somebody who wanted to follow in your footsteps?

I would advise anyone who wanted to get into the responsible AI area to get involved with non-profits that are doing a really great job in bringing together information and communities. Check out the International Association for Privacy Professionals website, and go to AI webinars (of which PA does many). But mainly, be curious. Join a community, get networking, and have conversations. Don’t be afraid that you don’t know as much as you think you should, because it’s a new space for everybody. Having a human interest in technology and how it impacts society is enough, because you’re a human in society so you have a valid opinion. You don’t need to know how an algorithm works or understand a model card or be able to code to have an opinion on how technology impacts society.

I would encourage people to think less about AI models specifically, and more about how data and technology can impact society and organisations.

AI is currently the word du jour. Cryptocurrency was the word a couple of years ago; before that, it was cyber security. The next big thing will probably be quantum. So, think about what you’re interested in, and if it’s the impact that technology and data have on society, get talking to people.

What are your future goals professionally, but also personally?

Professionally, it’s continuing to have a strong voice in the conversation around responsible data use, responsible AI, and responsible technology. I’m now in a place in my career where I can have an informed and experienced opinion and have really interesting conversations with people. I’d love to get more involved in not-for-profits and think tanks, and leverage some of the great learnings we have at PA to input them into how society thinks about technology.

I want to take those conversations further and help clients do really cool things. A lot of clients see regulation as an annoying checkbox. I want to keep supporting clients to understand how regulations around digital trust can help protect their data, make their business stronger, and add value. At the end of the day, a more resilient firm is a stronger and more profitable firm.

Personally, I’m looking forward to seeing more of the US, and seeing more of Central and South America. I’ve loved everywhere I’ve been so far, and I want to get to some national parks for sure. The natural beauty here is just incredible. I’m excited to experience that, travel more, and just live life in America.

What are you looking forward to right now?

I’m looking forward to the release of new regulations. The FCA just released an incident response consultation paper – that topic was very close to my heart when I was at the FCA. There are other exciting regulations, like the European Union’s AI Act and the Digital Operational Resilience Act (DORA), that are coming into force. The spirit of regulation is to keep society safe, and I love supporting clients to see that it doesn’t have to be a hindrance, and can really add value.

About the author

Laura Hawkins, PA financial services expert
