
How to build essential skills to thrive with AI
Our Business Psychology Forum webinars tackle the topics at the top of every boardroom agenda. In our AI-focused webinar, we spoke with Pooja Patel, Senior Modern Work Enterprise Specialist at Microsoft, and leaders from the public and private sector to discuss the behaviours and skills needed to thrive with AI.
The World Economic Forum’s Future of Jobs report states that 86 percent of leaders see AI as transformative, and 85 percent expect to upskill their workforce to benefit from AI. While everyone wants to ride the AI wave, intention doesn’t always translate into successful implementation. But with the right attitude, skills, and frameworks, organisations can set teams up for success. In a recent online panel debate, we hosted senior executives to pin down the most effective ways to make AI strategies fly.
AI, welcome to the team
Much like a new team member, AI applications need time, training, and support. AI hallucinations and biases mean the technology is likely to get things wrong, catalysing a mass skills-shift – humans will have to combine prompt literacy, creative thinking, and ethical judgement to get the most out of the tech. For example, the UK Department for Transport uses AI to draft correspondence, which is then checked by humans. The AI needs to be trained in how to write, but the humans also need to know how to check the content effectively.
AI has the potential to touch almost every element of business operations and therefore drives a broad range of new training needs. One example is security awareness – employees should know not to input sensitive information into Gen AI interfaces.
Organisations can provide a range of options for different learning styles (such as videos, forums, and pilots). Training should also be able to accommodate an individual’s familiarity with AI.
At PA, our AI Academy, developed with AI, helps employees to become fluent in AI. Formal training alone isn’t enough, though: keeping up with the pace of change demands ongoing awareness. At the end of AI Academy courses, trainees are invited to subscribe to newsletters, bulletins, websites, and other sources that can help them to stay up-to-date in an ever-growing and changing environment. While the velocity of innovation might be disconcerting, a broader awareness of trends means that individuals can prepare for the next big shift.
Identifying AI champions
It’s generally agreed that when an innovation comes to the fore, 50 percent of people are moderately interested, 25 percent aren’t interested at all, and 25 percent are gung ho. The enthused 25 percent are an important resource, and can build confidence in a peer-to-peer way. Champions should reflect the diversity of the organisation and its wider audiences, including employees.
Communities of best practice help to distribute learnings widely, and share new tools and techniques. Champions can report back on progress while flagging major updates or advances. Champions and ‘super users’ can also be given the opportunity to take part in technology pilots, testing out applications in dedicated spaces.
The right environment for AI excellence
No matter the size or remit of an organisation, a combined top-down, bottom-up approach enhances the efficacy of any new change. Board-level executives have a role in communicating a clear vision on AI, with a crystal-clear purpose that looks beyond profitability. There’s a need to provide positive messages and encouragement so AI is viewed as augmenting (not opposing) human ability.
Psychological safety is fundamental to grassroots progress, encouraging employees to experiment, try, and fail without fear of repercussions. Sharing successes and failures – especially failures – opens up opportunities to fix things that don’t go as planned, removing the taboo of ‘getting it wrong’.
A sandbox, or a ‘living lab’, can allow individuals (champions or otherwise) to experiment freely without compromising core functions. Separate experimental environments also give employees the headspace to think deeply. Beyond the sandbox, employees can be incentivised to experiment with AI through gamified experiences, and rewards such as small vouchers and kudos.
Internal collaboration will become more important as applications move to low- and no-code. Our AI developers spend time with the company’s cyber risk team to align progress. Partnering with external organisations can also support success. The Alliance of Data Science Professionals, for example, is a network of data scientists who are adopting shared frameworks for AI accreditation.
Measuring success beyond standard KPIs
How do leaders know if AI strategies have been successful? Standard KPIs around productivity and efficiency are important, but so are softer elements such as happiness and trust. Any employee can say whether they are happy in their work, and whether using AI has made their job easier. Trust can be enhanced through transparency about the purpose and limitations of AI.
Another indicator of success is how AI-powered applications deliver real human impact. For example, in the NHS, Microsoft Copilot helps to free up staff capacity for in-person interactions. In the same vein, Patient Catalyst utilises machine learning models to coordinate patients’ unique pathways, improving efficiency and reducing time spent in hospitals and clinics.
Understanding the wider consequences
Much of the wider conversation on the consequences of AI focuses on data security, accountability, and biases. However, making sure that AI is fit for the future should also include corporate social responsibility (CSR), ethical awareness, and sustainability.
During the webinar, panellists considered how organisations can limit the environmental impact of AI. According to recent analysis, a Gen AI chatbot used in a call centre could generate around 2,000 tonnes of carbon dioxide each year, while water consumption from large-scale adoption of Gen AI could match the annual drinking-water intake of more than 300 million adults. Paired with energy-intensive emerging technologies such as quantum computing, this impact could grow further still.
One strategy for organisations is to assess whether AI is right for a specific task, reserving its use for the most appropriate functions. Dedicated reports can also ensure that CSR, ethics, and sustainability are embedded in development from the get-go.
If you only do one thing…
To conclude the discussion, panellists offered advice for successfully implementing effective, safe, and ethical AI in their organisations. This included not treating AI-derived insights as an infallible source of truth, and getting hands-on (with the right guardrails and assurances). Building firm-wide AI literacy emerged as the most important building block for success. By implementing holistic strategies to encourage AI awareness and ability, underpinned by psychological safety, organisations across the public and private sectors can thrive with AI.