AI has quickly become part of everyday life for software teams. Code suggestions, automated testing and AI-assisted reviews are now routine across many organisations. What’s far less routine is seeing consistent results.
A recent article from McKinsey & Co, based on a study of nearly 300 publicly listed companies across technology, financial services, healthcare, energy and retail, highlights a widening gap. While many organisations report modest improvements, a smaller group is pulling decisively ahead.
These high performers are seeing gains of 16%–30% in productivity, speed to market and customer experience. Most strikingly, they are achieving software quality improvements of 31%–45%, a level of progress that would have felt unrealistic just a few years ago.
What sets these organisations apart isn’t bigger budgets or access to better tools; it’s a willingness to rethink how software work actually happens, from team design and workflows to training and performance measurement. AI delivers its greatest impact when it’s treated as part of the system, not an add-on.
A Growing Gap Between AI Leaders and Followers
The gap between organisations getting real value from AI and those still struggling to see results is now clearly visible in day-to-day delivery. Across key performance measures, AI leaders are outperforming laggards by around 15 percentage points, and the difference shows up in how teams actually work.
High-performing teams tend to operate in smaller, more focused groups, making it easier to run shorter sprint cycles, maintain stronger code quality and release more reliably. Less time is spent fixing issues after launch, which frees teams up to focus on improvements rather than firefighting. The knock-on effect is felt by customers too, with higher satisfaction and more consistent experiences.
What’s particularly telling is how uneven the foundations still are. Nearly two-thirds of top performers have put at least three of the five most important success factors in place, such as clear ownership, consistent ways of working and outcome-focused metrics. Among lower-performing teams, that figure drops to just 10%. This suggests that the biggest gains are still coming from getting the basics right, rather than chasing the latest tools or features.
This gap exists even though AI adoption is already widespread. More than 90% of software teams surveyed use AI for core engineering tasks like refactoring, modernisation and testing, saving an average of 6 hours per developer each week. While that time saving is valuable, it does not automatically translate into better outcomes.
The organisations seeing the greatest gains are the ones that have stepped back and redesigned how work flows around AI. Instead of treating AI as an add-on, they integrate it across planning, development, testing and release, and rethink decision-making, quality checks and team coordination along the way. When AI strengthens good ways of working, rather than being asked to compensate for weak ones, its impact compounds and performance improves across the board.
Moving From Tools to Transformation
One of the clearest patterns in the research is that high performers don’t limit AI to isolated tasks. Instead, they use it across the full product lifecycle, from early design through to testing, deployment and adoption tracking.
Teams that scale four or more AI use cases are six to seven times more likely to be top performers. Nearly two-thirds of leading organisations have reached this level, compared with just 10% of lower-performing teams. When AI supports multiple stages of development, the benefits build on each other. Improvements in design feed into better code, which leads to smoother testing and more confident releases.
This joined-up approach helps teams move faster without cutting corners, and it gives leaders clearer visibility into how work is progressing from idea to delivery.
Rethinking How Development Work Happens
The most successful teams see AI as a partner in the work, rather than a way to rush through it. Developers combine AI agents with human judgement, which allows them to cover more ground while staying in control of quality.
In practice, this often starts before any code is written. Engineers use AI to plan changes, explore customer requests and think through different approaches directly from their editor. During development, background agents handle parallel tasks such as refactoring or testing, while developers step in to review, refine and decide what moves forward.
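The division of labour described above, where background agents work on routine tasks in parallel while a developer stays in control of what moves forward, can be sketched in a few lines. This is an illustrative pattern only; the agent and review functions are hypothetical stand-ins, not the API of any real coding tool.

```python
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass

@dataclass
class Proposal:
    task: str           # e.g. "refactor", "add tests"
    diff: str           # the change an agent suggests
    checks_passed: bool # did automated checks succeed?

def run_agent(task: str) -> Proposal:
    # Hypothetical stand-in for a background AI agent: in reality this
    # would invoke a coding agent and run the project's test suite.
    diff = f"--- suggested change for: {task}"
    return Proposal(task=task, diff=diff, checks_passed=True)

def human_review(p: Proposal) -> bool:
    # The developer stays in control: nothing merges on agent say-so alone.
    return p.checks_passed and p.diff.startswith("---")

# Agents handle routine tasks in parallel...
tasks = ["refactor duplicated parser", "add tests for edge cases"]
with ThreadPoolExecutor() as pool:
    proposals = list(pool.map(run_agent, tasks))

# ...while the developer reviews and decides what moves forward.
approved = [p for p in proposals if human_review(p)]
print(f"{len(approved)} of {len(proposals)} proposals approved")
```

The point of the sketch is the shape of the workflow, not the mechanics: automated work fans out, but a human gate sits between agent output and the codebase.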
“AI-assisted programming will increasingly let developers focus on intent, what the software should do, rather than just how it’s written. That change will reshape not only individual roles, but how teams are organised and led.”
Michael Truell, CEO and Co-founder, Cursor
Many teams now work conversationally with AI, asking questions about their own codebase, testing ideas through chat or voice, and visualising changes instantly as they work. This back-and-forth makes development feel more fluid and creative, helping teams move quickly without feeling under pressure.
New Roles Are Evolving
As AI quietly handles many of the routine engineering tasks, something more interesting is happening across product teams. Roles are opening up and people are being given more space to think, contribute and influence outcomes beyond their traditional job descriptions.
For developers, this means pairing technical depth with a clearer understanding of product intent, user experience and business trade-offs. Instead of spending time on repetitive refactoring or manual testing, they’re stepping back to focus on system design, quality decisions and how their work shows up for customers. The best teams are encouraging engineers to think like product owners, not just code writers.
Product managers are experiencing a similar shift. With AI helping accelerate delivery, they’re spending less time coordinating tickets and more time shaping the experience itself. That includes prototyping ideas, testing features early, stress-testing quality and thinking carefully about where and how AI should be used responsibly. The role becomes more strategic, creative and customer-facing.
At the same time, front-end, back-end and testing work is blending into broader full-stack responsibilities. Designers are prototyping directly in code rather than handing over static mock-ups. Product managers are testing features themselves instead of waiting for hand-offs. Data and business teams are exploring product insights on demand, without relying on long reporting cycles.
The most important point is that this shift broadens expertise across the whole team. Teams that lean into this change build more shared ownership, faster feedback loops and stronger alignment around outcomes. AI becomes the enabler, but the real advantage comes from people who are empowered to think more broadly, collaborate more closely and make better decisions together.
Why Training Makes the Difference
AI tools on their own don’t change how people work, but training certainly does. The organisations seeing real gains are the ones that invest in helping their teams build confidence, judgement and good habits around AI, not just access to the tools themselves.
More than 50% of top-performing organisations provide personalised, hands-on AI training, compared with around 20% of lower-performing teams. The difference isn’t budget; it’s intent. High performers treat learning as part of the job, not an optional extra.
The most effective training looks a lot like real work. Instead of generic courses, teams learn how to use AI during sprint planning, code reviews and testing. Developers get practical guidance on writing better prompts, evaluating outputs and deploying safely. Product managers focus on understanding model behaviour, data governance and responsible use, so they can make better decisions earlier in the process.
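One habit this kind of training instils is never accepting an AI-suggested snippet without evaluating its output against cases the team actually cares about. A minimal sketch of that gate, where the suggested function is a deliberately simple, hypothetical stand-in for code an assistant proposed:

```python
def ai_suggested_slugify(title: str) -> str:
    # Stand-in for assistant-proposed code; treat it as untrusted
    # until it has been evaluated.
    return title.strip().lower().replace(" ", "-")

# Evaluate the suggestion against known inputs and expected outputs
# before it is allowed into the codebase.
cases = {
    "Hello World": "hello-world",
    "  Trim me  ": "trim-me",
    "already-fine": "already-fine",
}

failures = {inp: ai_suggested_slugify(inp)
            for inp, want in cases.items()
            if ai_suggested_slugify(inp) != want}

print("accepted" if not failures else f"rejected: {failures}")
```

The same pattern scales up naturally: the evaluation cases become a permanent test suite, so the judgement applied to the first suggestion keeps protecting the code as it evolves.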
Because AI tools evolve so quickly, this type of learning loses its value if it is a one-off. It needs to be built into a continuous coaching programme that fits smoothly into a team’s rhythm of work. Internal forums, shared playbooks and informal communities give teams space to swap ideas, surface risks and learn from what’s working elsewhere in the business. Over time, that shared learning compounds and contributes positively to both individual growth and collective progress.
Measuring the Progress That Matters
High-performing organisations focus their measurement on outcomes such as software quality, delivery speed and customer experience. This keeps teams grounded in impact and helps leaders see quickly where things are improving and where they need to step in and adjust.
This approach also changes the conversation. Instead of asking whether teams are “using AI enough”, leaders ask better questions: is quality improving, are releases more predictable, and are customers seeing the benefits? That shift in mindset makes AI feel more purposeful and less performative.
“Producing more code doesn’t mean producing better software. What matters is whether teams are shipping reliable, secure products that customers trust.”
Tariq Shaukat, CEO, Sonar
To support this, leading organisations connect data across planning tools, code repositories and AI usage into a single view. That visibility helps teams understand where AI is genuinely helping and where it might be creating friction, turning measurement into a learning tool rather than a reporting exercise.
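Joining those sources into a single per-team view can be as simple as merging records on a shared team key. The field names and sources below are hypothetical, standing in for exports from a planning tool, a code repository and AI usage logs:

```python
# Illustrative data only: in practice these would come from a planning
# tool, a repository's delivery metrics and AI assistant usage logs.
planning = {"checkout-team": {"cycle_time_days": 4.2}}
repo     = {"checkout-team": {"change_failure_rate": 0.08}}
ai_usage = {"checkout-team": {"assisted_prs": 31, "total_prs": 40}}

def unified_view(team: str) -> dict:
    row = {"team": team}
    for source in (planning, repo, ai_usage):
        row.update(source.get(team, {}))
    # Derived signal: the share of output that is AI-assisted, read
    # alongside outcome metrics rather than in isolation.
    if row.get("total_prs"):
        row["ai_assist_share"] = row["assisted_prs"] / row["total_prs"]
    return row

view = unified_view("checkout-team")
print(view)
```

Putting adoption and outcomes side by side in one row is what turns measurement into a learning tool: a rising assist share with a flat or worsening failure rate is a prompt to investigate, not to celebrate.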
Aligning Incentives With Impact
Top-performing organisations reinforce these behaviours through their performance systems. AI-related goals are built into reviews for developers and product managers alike, encouraging thoughtful, consistent use rather than box-ticking.
Instead of rewarding usage alone, leaders focus on behaviours that drive results, such as improving quality, spotting automation opportunities and reducing friction for customers. This builds accountability without penalising individuals for factors beyond their control.
Over time, this approach turns AI adoption from a short-term initiative into a lasting organisational capability.
What Leaders Are Doing Next
AI tools for software development are advancing quickly, and they will keep getting better. The organisations moving fastest are thinking end to end. They set clear goals, redesign workflows, invest in people and stay focused on outcomes. AI becomes a catalyst for better ways of working, not a bolt-on or a box to tick.
Keeping pace with AI means more than adopting new tools. It means building the skills, structures and culture that allow those tools to deliver real value, now and in the years ahead.
