The leadership leap: is AI finally going to compel it?
By Dr. Priscila Periera, Director of Research & Innovation
Why this question matters to us
As a research director working for an evidence-based consulting firm, I constantly ask myself how to remain relevant so we can support our clients with confidence. Over the past year, AI has increasingly become part of our conversations with clients – sometimes as uncertainty, sometimes as opportunity, and often as confusion mixed with acceptance that AI is going to transform work.
Almost always, leadership sits at the centre of these conversations.
Leaders are being asked to make decisions about AI before they feel ready. They are expected to manage risk, reassure people, unlock value, and “lead transformation”, often without clear reference points for what good leadership actually looks like in this context. This uncertainty is reflected in corporate research showing that many organisations report strategic alignment on AI while feeling far less prepared on workforce readiness and leadership capability [i][ii].
That gap is what pushed me to slow down and go back to the evidence.
Over the past few months, we have been reviewing academic research, institutional reports, and corporate papers to better understand two related questions:
- How is AI reshaping leadership?
- How does leadership shape the way AI actually plays out in organisations?
It is a relatively new and evolving field, but some early patterns are already emerging.
Leadership theory, practice, and an old tension
Earlier in my career, I taught leadership at university. One of the first lessons was always about the evolution of leadership thinking: from early “great man” theories, which treated leadership as an innate quality, through to charismatic and leader-centric models that emphasised individual authority, vision, and personal influence. Over time, these approaches were increasingly challenged by research pointing to the limits of heroic leadership, particularly in complex and interdependent organisational contexts.
More contemporary leadership perspectives, including authentic, inclusive, and relational leadership, shifted the focus away from the leader as hero and towards values, relationships, context, and the experience of followers. Leadership became less about individual dominance and more about enabling others, creating psychological safety and more balanced power, and shaping conditions for collective performance.
At the same time, I was working full time as an HR professional.
What struck me then, and still does now, is how unevenly this evolution in leadership thinking translated into everyday organisational practice. Many leaders continued to operate, and often succeeded, within very traditional leadership archetypes. Traits associated with power and authority remained highly visible. Assumptions linking leadership with (often male-coded) characteristics persisted. Models of leadership that academic research had already begun to question decades earlier continued to be rewarded in practice.
For a long time, these archetypes endured because organisational systems reinforced them. Hierarchies remained largely intact. Decision-making authority stayed centralised. Visibility and decisiveness continued to be read as competence, even in contexts where collaboration, learning, and adaptability were increasingly critical. In this sense, leadership theory evolved faster than the organisational conditions required to support new ways of leading.
This gap between how leadership has been described in theory and how it is enacted in practice is not new. But it matters deeply for how organisations respond to AI. When systems reward individual authority and certainty, leaders adapt accordingly even when the work itself is becoming more complex, more distributed, and less amenable to control by any single individual. AI forces leaders, perhaps for the first time at scale, to confront whether the archetypes they operate within are fit for work that is increasingly collective, data-driven, and system-mediated.
So, how is AI reshaping leadership?
As AI becomes embedded in work, leadership is no longer exercised primarily through individual authority or personal expertise. Decision-making is increasingly shaped by data, algorithms, and systems that operate across teams and functions. Leaders are no longer the sole source of judgement, yet they remain accountable for outcomes.
This raises an uncomfortable question: can traditional, leader-centric archetypes survive in environments where authority is distributed, and decisions are increasingly mediated by technology?
Academic and institutional research suggests that AI functions as a structural intervention, not simply a productivity tool. AI redistributes decision authority, decomposes and recomposes work at the task level, and reshapes how value, accountability, and meaning are experienced at work [iii][iv]. Performance gains frequently coexist with shifts in worker autonomy, skill use, and job quality, rather than arriving as clean, unambiguous improvements.
In this context, leadership is expressed less through abstract traits, and more through observable behaviours. These include setting boundaries around automation, intentionally redesigning roles, explaining and contextualising algorithmic decisions, and remaining accountable for outcomes shaped by AI systems [v][vi].
The literature points to a step-change in leadership requirements. Leaders move from managing people to managing socio-technical systems; from focusing solely on outcomes to being accountable for processes; and from accumulating expertise to actively protecting human judgement and learning in AI-mediated work [vii][viii][ix].
Reinforcement, not replacement
Importantly, the evidence does not suggest that AI automatically dismantles traditional leadership patterns.
There is a real risk that AI reinforces existing power dynamics, rather than disrupting them. Control over data, models, and systems can become a new source of authority. Technical opacity can concentrate decision-making rather than diffusing it.
Institutional and policy-oriented research raises concerns that, without deliberate leadership intervention, AI may amplify inequalities in access to opportunity, influence, and capability [x][xi].
How we are approaching this at Shape Talent
At Shape Talent, we do not see AI leadership as a future skills trend. We see it as a deeper shift in how leadership is enacted and experienced within socio-technical systems: systems in which people, data, algorithms, governance structures, and organisational culture interact in ways that are still poorly understood, and often poorly led.
The challenge we see organisations grappling with is not a lack of AI tools, but a lack of leadership clarity.
Leaders are being asked to act in environments where authority is distributed, outcomes are shaped by algorithms, and accountability cannot be neatly delegated.
In the absence of clear leadership behaviours, organisations risk drifting into either over-automation, where responsibility becomes opaque, or under-utilisation, where fear and uncertainty stall progress.
What the evidence increasingly points to is not a new list of leadership traits, but a different set of leadership behaviours: behaviours that can be observed, practised, and supported. These include:
- Governing the boundaries between human and AI judgement
- Intentionally redesigning work without stripping it of meaning or developmental opportunity
- Remaining accountable even when decisions are partially automated
- Intervening deliberately where AI risks widening inequities in access, opportunity, or voice
These are not incremental adjustments. They challenge assumptions embedded in many traditional leadership models, particularly assumptions about individual authority, control, and expertise. They require leaders to shift from being decision-makers to being system stewards, and from optimising performance alone to sustaining legitimacy, trust, and capability over time.
For us, this work is about more than helping organisations “get AI right.” It is about contributing to a moment in which leadership history has an opportunity to take a turn for the better, closing the long-standing gap between how leadership has been theorised and how it is actually practised.
AI does not force that shift on its own, but it makes the consequences of inaction harder to ignore. Our role is to help leaders make that shift deliberately, with clarity about how power is exercised, how judgement is shared, and how human capability is sustained alongside increasingly intelligent systems.
For more detail on this topic, look out for our research paper on AI leadership, launching in June 2026.
FAQs
Q. What does it mean to say AI is a “structural intervention”, not just a productivity tool?
A. AI doesn’t simply speed up tasks. It can redistribute decision authority, reshape how work is decomposed and reassembled, and change how accountability and job quality are experienced. For HR and leaders, that means AI adoption requires work redesign and governance, not just tool rollout and training.
Q. How is AI changing what effective leadership looks like in practice?
A. As AI becomes embedded in work, leadership shifts from “being the expert decision-maker” to stewarding socio-technical systems. Effective leadership becomes more behavioural and observable: setting boundaries around automation, redesigning roles intentionally, explaining algorithmic decisions, and staying accountable even when decisions are partially automated.
Q. What are the biggest leadership risks organisations face when adopting AI?
A. The biggest risk is often under-leadership, not over-automation. Without clear leadership behaviours and governance, organisations can drift into opaque accountability, inconsistent decision-making, and stalled adoption driven by uncertainty. This can undermine trust and create uneven outcomes across teams and groups.
Q. How can AI reinforce existing inequities, and what should HR/DEI leaders watch for?
A. AI can amplify inequities when it is built on historical data that reflects historical bias, or when control over models and data becomes a new source of power. HR and DEI leaders should watch for unequal access to opportunities, voice and visibility; unexplained algorithmic outcomes; and decisions becoming harder to challenge due to technical opacity.
Q. What practical steps can HR and Talent leaders take to build AI-ready leadership capability?
A. Start by moving from abstract “AI leadership traits” to clear leadership behaviours that can be practised and supported. Priorities include: defining boundaries between human and AI judgement, building governance for high-stakes decisions, redesigning roles without stripping meaning and development, and ensuring accountability remains clear even when decisions are AI-assisted.
References
[i] Deloitte (2026) The State of AI in the Enterprise: The untapped edge. Deloitte.
[ii] McKinsey & Company (2025) The state of AI in 2025. Available at: https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/the-state-of-ai
[iii] International Labour Organization (2023) Generative AI and jobs: A global analysis of potential effects on job quantity and quality. Geneva: ILO.
[iv] Stanford University, Human-Centered AI Institute (2025) AI Index Report 2025. Stanford: Stanford University.
[v] Mäkelä, E. and Stephany, F. (2024) ‘Complement or substitute? How AI increases the demand for human skills’, arXiv. Available at: https://arxiv.org/abs/2412.19754
[vi] Ledingham, D. et al. (2025) ‘Job redesign and large language models: Evidence from the UK Civil Service’, arXiv. Available at: https://arxiv.org/abs/2512.05659
[vii] Bevilacqua, S., Ferraris, A., Matzler, K. and Kuděj, M. (2026) ‘Strategic leadership at high altitude: Investigating how AI affects the required skills of top managers’, Journal of Business Research, 205, 115878. https://doi.org/10.1016/j.jbusres.2025.115878
[viii] Ghosh, A. and Sadeghian, S. (2024) ‘Artificial intelligence and meaningful work: Evidence from IT professionals’, arXiv. Available at: https://arxiv.org/abs/2406.14273
[ix] Nakavachara, V. (2025) ‘AI and worker well-being: Differential impacts across generational cohorts and genders’, arXiv. Available at: https://arxiv.org/abs/2511.11021
[x] International Labour Organization (2024) Global case studies on social dialogue, AI and algorithmic management. Geneva: ILO.
[xi] World Economic Forum (2025) The Future of Jobs Report 2025. Geneva: World Economic Forum.