AI risk shouldn’t be seen as a distant concern. Tech companies can face serious exposure at any moment, and asking the right questions can be the difference between a costly misstep and sustained momentum.

AI is transforming brand tracking – turning a dated, rigid process into something that’s dynamic, real-time, and far more aligned with how brands actually move in market. It’s one example of a broader trend we’re seeing: AI is shifting from experimental tech to embedded infrastructure – and this is especially true of tech companies. And that’s where the urgency kicks in.

While AI adoption continues to accelerate, many tech organisations tell us they are still working to catch up on identifying and managing emerging risks. Few are thinking deeply enough about how to ensure that AI is fair, trustworthy, reliable, and accurate.

At the same time, consumers are advancing too. Our Brand Impact Index found half of consumers say AI is becoming an essential part of their lives and over half (60 percent) are comfortable using AI tools in brand interactions. But as adoption grows, so does scrutiny. The stakes rise as AI becomes more pervasive, while expectations begin to outpace how systems are designed and deployed.

For tech companies, AI risk isn’t a future-state problem. It’s a present-tense one.

The regulatory clock is ticking. Parts of the EU AI Act have already come into effect, with additional provisions planned for 2026 – the same year the Colorado AI Act comes into force. As of May 2025, the Colorado Act is the first and only general, broadly applicable state-level AI regulation in the US. Other states have passed sector-specific rules (New York on HR uses, Utah on narrow generative AI uses), and more have AI bills moving through their legislatures, but for now Colorado is the key US regulation. These acts require formal documentation, consumer disclosures, explainability, and ongoing risk assessments. Meanwhile, the Federal Trade Commission (FTC) and other US agencies are already applying existing consumer privacy and fairness standards to AI – particularly in areas like chatbot interactions and model-driven recommendations.

Managed well, responsible AI isn’t just a regulatory requirement. It can be a source of growth. This year’s Brand Impact Index revealed 40 percent of consumers are willing to pay a premium for AI-driven experiences. However, that willingness is conditional and largely built on trust. This makes AI risk management a strategic concern, not just a technical one.

The implications are real. Tech companies are building AI directly into the core of their products – autonomous systems, recommendation engines, and workflow automation tools. These aren’t just features – they’re decision-makers. And the risks they introduce are often subtle, fast-moving, and hard to unwind once they’re in-market. With firms increasingly requiring suppliers to demonstrate how AI risk has been assessed and managed, and with emerging AI regulation demanding transparency disclosures to consumers, failing to do the risk work upfront – when it is easiest – will increasingly block sales or force expensive remediation to put the necessary documentation and changes in place.

We’re building fast, but are we building safely?

Tech leaders should be actively exploring risk from multiple perspectives to make it easier to distribute ownership, assign controls, and embed risk thinking into existing workflows. Here are key considerations and questions that tech leaders should be asking their teams across the organisation:

Engineering and platforms teams (focus on infrastructure, reliability, and integration)

  • Security: Have we assessed the specific threats AI introduces, such as model inversion, data leakage, or adversarial inputs across our platform architecture?
  • Performance monitoring: Are we continuously monitoring and validating model performance in the real world? Are we detecting drift, decay, or unintended behaviours over time? (See the illustrative sketch after this list.)
  • Third-party / Supply chain risk: Are we evaluating the risks introduced by external AI vendors, open-source models, or third-party datasets?
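
To make the performance monitoring question more tangible, here is a minimal illustrative sketch of one common approach – comparing live feature distributions against a training-time reference with a two-sample Kolmogorov–Smirnov test. The feature names, threshold, and alerting behaviour are hypothetical placeholders, not a recommendation for any particular stack.

```python
# Illustrative sketch only: flag features whose live distribution has
# drifted from the training-time reference (two-sample KS test).
# Feature names, the threshold, and the alert behaviour are hypothetical.
import numpy as np
from scipy.stats import ks_2samp

DRIFT_P_VALUE = 0.01  # below this, treat the feature as drifted


def detect_drift(reference: dict[str, np.ndarray],
                 live: dict[str, np.ndarray]) -> dict[str, float]:
    """Return a p-value per feature; small values suggest drift."""
    return {name: ks_2samp(reference[name], live[name]).pvalue
            for name in reference}


def alert_if_drifted(p_values: dict[str, float]) -> None:
    for name, p in p_values.items():
        if p < DRIFT_P_VALUE:
            # In practice this would notify the owning team or open a ticket.
            print(f"Possible drift in '{name}' (p={p:.4f}) - review the model.")


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    reference = {"session_length": rng.normal(5.0, 1.0, 10_000)}
    live = {"session_length": rng.normal(6.5, 1.0, 10_000)}  # shifted data
    alert_if_drifted(detect_drift(reference, live))
```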

Data and machine learning teams (focus on data sourcing, model training, and optimisation)

  • Data source validation: Is our training data complete, representative, and contextually appropriate? Are we testing for hidden bias or stale signals?
  • Model reproducibility and auditability: Can we reproduce the outputs of our models – and explain how we got there? (See the illustrative sketch after this list.)
  • Understanding of limitations: Are we clear on what our models can’t do well? Are we defining appropriate use cases – and guarding against inappropriate ones?
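
As one way to ground the reproducibility and auditability question, the sketch below records an audit trail alongside each trained model – data fingerprint, random seed, version, and hyperparameters – so an output can later be traced back to exactly what produced it. The field names and storage location are hypothetical and shown only for illustration.

```python
# Illustrative sketch only: capture an audit record alongside each trained
# model so its outputs can be reproduced and explained later.
# Field names and the storage location are hypothetical.
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path


def file_sha256(path: Path) -> str:
    """Fingerprint the exact training data that was used."""
    return hashlib.sha256(path.read_bytes()).hexdigest()


def write_audit_record(model_name: str,
                       model_version: str,
                       training_data: Path,
                       random_seed: int,
                       hyperparameters: dict,
                       out_dir: Path = Path("audit_records")) -> Path:
    record = {
        "model_name": model_name,
        "model_version": model_version,
        "trained_at": datetime.now(timezone.utc).isoformat(),
        "training_data_sha256": file_sha256(training_data),
        "random_seed": random_seed,
        "hyperparameters": hyperparameters,
    }
    out_dir.mkdir(parents=True, exist_ok=True)
    out_path = out_dir / f"{model_name}-{model_version}.json"
    out_path.write_text(json.dumps(record, indent=2))
    return out_path


if __name__ == "__main__":
    # Stand-in training data so the example runs end to end.
    sample = Path("data/training.csv")
    sample.parent.mkdir(parents=True, exist_ok=True)
    sample.write_text("customer_id,churned\n1,0\n2,1\n")
    print(write_audit_record("churn-model", "1.4.0", sample,
                             random_seed=42,
                             hyperparameters={"max_depth": 6}))
```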

Product, design, and marketing teams (focus on user interaction, experience, and values alignment)

  • Ethical considerations: Do our AI outputs align with company values – and user expectations? Have we involved a diverse set of perspectives in evaluating ethical tradeoffs?
  • User interaction risk: How are humans interacting with this AI? Have we designed for safe and responsible usage – especially in edge cases or high-stakes contexts?
  • Transparency: Can we explain what our systems are doing – clearly enough for customers, regulators, and internal stakeholders?

Legal, risk, and compliance teams (focus on policies, oversight, external obligations)

  • Legal compliance: Are we keeping pace with evolving legal frameworks across privacy, IP, explainability, and discrimination? Do we know which laws apply – and where?
  • Cross-functional alignment: Are our legal and risk teams in regular conversation with engineering, product, and data teams? Do we have mechanisms in place to spot and address risk before models go live?
  • Governance and accountability: Have we assigned clear ownership for each model or capability? Do we have a plan for what happens when something goes wrong?

These are not just compliance checkboxes – they’re product and business questions. And they’re increasingly shaping how tech companies compete, scale, and sustain trust.

If your organisation is building, integrating, or scaling AI into core products and platforms, the issue isn’t whether risks exist – they do. The real question is whether those risks are being surfaced, understood, and managed intentionally so the full value of your AI development can be realised. If that’s not happening yet, you’re not behind. But the window for catching up is narrowing. We’re having more of these conversations with tech leaders who are ready to move beyond general AI principles and into practical risk strategy – aligned to how their businesses operate.

Let’s make sure the systems we build don’t just scale – they scale responsibly.

About the authors

Jorge Aguilar, PA growth strategy expert
Richard Watson-Bruhn, PA digital trust & cybersecurity expert
