AI Magnifies Your Team’s Strengths and Weaknesses – Insights from Google’s 2025 DORA Report

AI as the Great Amplifier

Artificial intelligence isn’t a magical fix for a struggling software team – it’s more like a spotlight and a megaphone. That’s the main takeaway from Google Cloud’s 2025 State of AI-Assisted Software Development report, part of the DORA research program (DevOps Research and Assessment). The report reveals that AI doesn’t “fix” broken processes or weak team culture; instead, it amplifies whatever strengths or weaknesses already exist. In other words, a high-performing team can use AI to become even more efficient and innovative, while a poorly organized team might find that AI simply highlights and accelerates their existing problems. The biggest benefits of AI come not from the tools alone, but from improving the underlying system – things like your internal development platforms, workflow clarity, and team alignment. We’ll break down the key findings from Google’s 2025 DORA report and what separates the software teams that thrive with AI from those that struggle.

AI Adoption Is (Almost) Everywhere – And It’s Boosting Productivity

One thing is clear: AI has gone mainstream in software development. Around 90% of tech professionals surveyed are now using AI at work, according to the report. That’s a huge jump (a 14-percentage-point increase over the previous year’s survey) and essentially means AI-assisted coding, testing, or design tools are becoming a daily habit for most developers, product managers, testers, and other tech roles. In fact, the typical developer now spends a median of two hours per day working with AI tools – imagine that, a quarter of your workday co-working with an AI assistant!

Why are so many people embracing these tools? Simply put, they’re seeing real benefits. Over 80% of respondents said AI has made them more productive. Routine tasks like writing boilerplate code, generating test cases, or analyzing data can be sped up with a little AI help. More than half of developers even report that AI has improved their code quality (59% saw a positive influence on code quality), since AI can suggest best practices or catch mistakes. It’s like having a diligent pair programmer who never gets tired.

But interestingly, this enthusiasm comes with a side of caution. Many developers don’t fully trust the AI’s output – at least not yet. The study uncovered a bit of a “trust paradox”: about 30% of professionals say they trust AI-generated code only a little or not at all. Only a small minority (around 4%–20%) have a great deal of trust in what AI produces. In practice, this means developers treat AI as a helpful assistant, not an infallible authority. They double-check AI’s suggestions and use it to augment human creativity and effort, rather than replacing human judgment. It’s a healthy skepticism: the AI can draft a function or suggest an approach, but a human still needs to review it and ensure it actually works as intended. After all, even the smartest code generator might not fully understand your unique project, or might introduce subtle bugs. So, while AI is boosting individual productivity, teams are wisely keeping one hand on the steering wheel, using AI to accelerate but not autopilot their work.

The Paradox of Speed: Throughput Up, Stability Down

From a bird’s-eye view, the impact of AI on organizations is a tale of trade-offs. The DORA report found that as teams adopt AI, they often increase their software delivery throughput, meaning they’re able to ship more features and changes faster than before. This is a reversal from last year’s findings – it appears teams have learned to integrate AI better, turning it into genuine velocity. People have figured out “where, when, and how AI is most useful”, so now higher AI usage correlates with higher delivery speed and better product performance outcomes. That’s the good news.

However, this rapid acceleration can come at a cost: stability. The report observed that higher AI adoption still has a negative relationship with software stability (reliability of releases, fewer bugs in production, etc.). In other words, if you suddenly start shipping code at twice the pace (thanks to AI churning out changes quickly), you might also see more production issues unless you have strong safeguards in place. It’s the classic “move fast, break things” problem – AI lets you move faster, which is great, but any cracks in your process will become more evident as things start to break.

Why does stability suffer? The report gives a clear explanation: speed can expose weaknesses downstream. If a team doesn’t have robust control systems – think strong automated testing, good version control practices, and fast feedback loops for catching issues – then accelerating development with AI will simply push more faulty code through the pipeline. It’s like putting a faster motor in a car that has bad brakes and poor steering; you’ll go faster, but you’re also more likely to crash. For example, if your team isn’t writing enough tests or your continuous integration is flaky, an AI code generator might produce a lot of code quickly, and bugs will slip through, causing instability in your software.
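To make the safety-net point concrete, here’s a toy sketch (all names are hypothetical, not from the report): an AI-drafted helper with the kind of subtle bug that slips into generated code, and the small automated check that stops it before production.

```python
def paginate(items, page_size):
    """Hypothetical AI-suggested helper: split items into pages."""
    # Subtle bug an AI draft might introduce: integer division silently
    # drops the final partial page.
    pages = []
    for i in range(len(items) // page_size):
        pages.append(items[i * page_size:(i + 1) * page_size])
    return pages

def every_item_survives_pagination():
    # A cheap invariant check: no item may be lost by paginating.
    items = list(range(7))
    flattened = [x for page in paginate(items, page_size=3) for x in page]
    return flattened == items

# The check fails (item 6 was dropped), so the regression is caught
# before the AI-generated change ever ships.
caught = not every_item_survives_pagination()
print("regression caught:", caught)
```

A one-line invariant like this is exactly the kind of “brake” that lets a team accept AI-generated code at speed without paying for it in production incidents.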

On the flip side, teams that do have solid engineering fundamentals can harness AI’s speed without chaos. The research found that groups working in loosely coupled systems (modular architectures) and with quick feedback cycles saw significant gains from AI. These teams can absorb the extra volume of changes because their systems and processes can handle rapid iterations – their automated tests, monitoring, and deployment processes act like a safety net. Meanwhile, teams stuck with tightly coupled, monolithic systems and slow, bureaucratic processes saw little or no benefit from AI. If every change requires a long approval chain or can inadvertently break something in an entangled codebase, having AI generate more changes faster doesn’t help – it just creates a backlog or more breakage. This finding underscores a powerful point: AI will accelerate your car, but you’d better ensure your brakes, engine, and suspension (i.e., your technical ecosystem) can handle the speed.

Mirror and Multiplier: How AI Highlights Team Culture

Perhaps the most fascinating insight from the Google DORA study is how AI acts like a mirror and a multiplier for team culture. As one Google researcher put it, in cohesive, well-structured organizations, AI boosts efficiency – in fragmented, messy organizations, AI shines a light on the flaws. It’s a double-edged sword. If your team collaborates well, follows good practices, and maintains a healthy culture, AI will reflect those qualities and amplify your results (more productivity, more innovation). But if your team suffers from poor communication, unclear processes, or technical debt, AI isn’t going to magically smooth those over – it will make them more visible and possibly exacerbate them.

This dynamic became evident when the DORA researchers looked beyond raw performance metrics and actually studied different types of teams. They identified seven distinct team archetypes (profiles) ranging from top-tier “Harmonious High Achievers” to struggling teams trapped in a “Foundational Challenges” state or a “Legacy Bottleneck”. Each archetype is like a persona that captures not just performance numbers, but also factors like team well-being, burnout, and how it feels to work in that environment. And the differences are stark.

  • Struggling teams (“Foundational Challenges”): These teams are in constant survival mode, fighting fires. The report describes them as having “significant gaps in their processes and environment,” which leads to low performance outcomes, lots of internal friction, and high burnout – yet, ironically, high system stability. (That stability is likely because they move so cautiously and slowly that they aren’t breaking much – a sign of fear and inertia, not health.) Developers on these teams often feel frustrated and stuck. AI on such a team might just draw attention to the messy processes – for instance, an AI tool suggests a change, but the team can’t integrate it smoothly due to bureaucratic change controls or tangled code, causing more stress.
  • Elite teams (“Harmonious high achievers”): These teams, on the other hand, seem to “have it all.” According to the report, they excel across multiple areas: they deliver software quickly and reliably, their end products perform well, and they maintain positive team well-being with low burnout. Everything is clicking – communication, tooling, and culture are all in sync. When a team like this adopts AI, the AI becomes an accelerant for good practices. For example, they might use AI to automate menial tasks, freeing up human developers for creative work, which further increases job satisfaction and performance. AI basically supercharges an already healthy machine.

For leaders, these archetypes offer a way to diagnose your team’s health beyond just numbers. If your team’s metrics say one thing but morale or product outcomes say another, it might help to figure out which archetype you resemble and what underlying issues or strengths are at play. The key realization is that improving a team’s performance with AI isn’t just about adding more AI – it’s about addressing the human and process factors that AI is reflecting.

Seven Practices of High-Performing Teams in the AI Era

So, what exactly are those underlying practices and conditions that make the biggest difference? The 2025 DORA report doesn’t leave us guessing. It introduces something called the DORA AI Capabilities Model – essentially a blueprint of seven essential practices that their research shows will amplify AI’s positive impact on performance. These are the things that high-performing teams consistently do well, and that struggling teams often lack. In plain language, these seven capabilities are:

  1. A Clear, Communicated AI Policy (“AI Stance”) – High-performing teams establish and share a clear stance on how they use AI. This could mean having guidelines or policies about where AI should be applied, how to handle AI-generated code, and ethical considerations. It ensures everyone on the team is on the same page. Rather than Wild West experimentation, there’s a deliberate strategy: for example, “We use AI to assist with code reviews and test generation, but developers must always vet the output.” This clarity prevents confusion and misuse of AI, making it a tool that serves the team’s goals rather than a gimmick.
  2. Healthy Data Ecosystems – This is about the quality and availability of data in your organization. AI is only as good as the data and knowledge it can access. High performers invest in clean, well-organized data pipelines and documentation. Imagine trying to use an AI tool to analyze your system or make predictions, but half the logs are missing and databases are full of junk data – you won’t get far. In contrast, a “healthy data ecosystem” means your data is reliable, up-to-date, and accessible, so AI features (like analytics or smart suggestions) work properly. Teams that manage their data well can feed their AI tools with better context and get more useful insights in return.
  3. AI-Accessible Internal Data – In addition to having healthy data, teams need to make their internal knowledge accessible to AI systems. This might involve integrating internal codebases, documentation, or knowledge bases with AI tools. For example, a company might connect their private repository to an AI coding assistant, or provide the AI with access to their internal APIs and system architecture docs (in a secure way). The idea is to avoid treating AI as a black-box tool isolated from your project – instead, plug it into your context. High performers ensure that AI has the right context about their products and code, so its suggestions are relevant. Struggling teams might use AI in a disconnected way (like copy-pasting code out to ChatGPT without context), which yields less helpful results.
  4. Strong Version Control Practices – This is a classic software best practice that becomes even more crucial with AI. Teams that are good with version control (for instance, using Git diligently with code reviews, small commits, and clear history) can safely integrate AI-generated changes. If the AI suggests a code edit, strong version control means you can track that change, roll it back if needed, and collaborate on it. It also ties into having robust continuous integration – every code change (AI or human) is automatically tested and integrated. The report specifically calls out version control because it underpins traceability and reproducibility. High performers treat their codebase like a living thing with a reliable memory, so AI contributions don’t wreak havoc. In contrast, if a team’s version control is a mess (say, everyone coding in one giant branch with no process), adding AI code into the mix can quickly spiral out of control.
  5. Working in Small Batches – This refers to how work is broken down and delivered. High-performing teams tend to develop and release code in small, incremental batches rather than huge, monolithic updates. This approach, common in DevOps/Agile cultures, means you get faster feedback and can catch issues early. When using AI, small batches are even more important: you might get dozens of AI-suggested changes, but you should integrate them bit by bit, validate them, and learn as you go. The report finds that working in small batches correlates with better ability to leverage AI. It makes sense – if you deploy 10 tiny updates and one breaks something, you know exactly where the problem is. But if you deploy 1,000 AI-generated changes at once and something breaks, good luck figuring out what it was. So, successful teams keep the pace brisk but the batch size small.
  6. User-Centric Focus – Interestingly, one of the top factors isn’t technical at all, but cultural: keeping a strong focus on end-users and their needs. Why would this matter for AI? The report notes that AI is most useful when pointed at a clear problem, and being user-centric gives that direction. Teams that deeply understand their users’ pain points can aim AI tools at solving the right problems – for example, using AI to analyze user feedback at scale, or generate features that improve user experience. A user-centric team will ask, “How can AI help us deliver more value to our customers or improve the product for them?” rather than using AI for AI’s sake. This focus ensures AI efforts translate into positive outcomes (better product performance, happier users) and not just tech experimentation. In contrast, a team with no clear vision of user needs might implement fancy AI features that don’t actually solve real problems – a wasted effort.
  7. Quality Internal Platforms – Last but certainly not least, having a high-quality internal platform or tooling infrastructure is fundamental. The survey found that 90% of organizations have adopted at least one internal developer platform, and those with better internal platforms unlock more value from AI. An internal platform could be your in-house CI/CD pipeline, cloud environment, developer portals, or automation tools – basically the systems developers use every day to build, test, and deploy software. If these systems are reliable, self-service, and efficient, AI tools can plug in and further streamline workflows. For example, if your platform provides easy sandbox environments, an AI could automatically spin up test environments or deploy code for you. But if your internal platform is a patchy collection of scripts and manual processes, an AI assistant will keep hitting roadblocks. High-performing teams treat their internal platform as a product in itself – they invest in making it robust and developer-friendly, which in turn amplifies the gains from any AI assistance. Essentially, the better your internal engine, the more horsepower AI can add.
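The small-batches argument (practice 5 above) is easy to demonstrate with a toy sketch – the change names and the `deploy_ok` check below are hypothetical stand-ins for a real CI/CD pipeline, not anything from the report:

```python
# Four changes land; one of them ("break-search") is faulty.
changes = ["add-login", "fix-cache", "break-search", "tweak-css"]

def deploy_ok(change):
    # Stand-in for a real validation step (tests, canary, monitoring).
    return change != "break-search"

# Small batches: each change is validated as it lands, so the first
# failure immediately names the culprit.
culprit = None
for change in changes:
    if not deploy_ok(change):
        culprit = change
        break
print("culprit:", culprit)

# Big batch: you only learn that *something* in the batch failed,
# and the search for the cause begins.
batch_ok = all(deploy_ok(c) for c in changes)
print("batch of 4 ok?", batch_ok)
```

The diagnostic cost of a failure grows with batch size; shipping in small increments keeps that cost near zero, which is precisely why small batches correlate with being able to absorb AI-driven volume.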

These seven capabilities together create the conditions where AI can truly shine. It’s like preparing the stage for a performance: if you set up the lighting, sound system, and script well (policies, data, practices, etc.), the AI “actor” can give an amazing performance. If not, the show might flop. Not every team will have all these capabilities at a high level, but the DORA report suggests using this model as a roadmap for improvement. You can assess where your team is weakest (maybe you need to focus on data quality, or maybe your deployment pipeline needs work) and start there. Over time, building up these foundational practices will raise your team’s performance with or without AI – but especially with AI, since these practices significantly amplify its benefits.

Conclusion: Focus on Foundations, Not Just Tools

The overarching lesson from Google’s 2025 DORA report is that success with AI is more about people, process, and culture than about the AI tools themselves. Adopting the latest AI coding assistant won’t automatically make your team a top performer – you have to embed that tool into a well-oiled machine. In fact, the report suggests treating AI adoption as a form of organizational transformation. It’s an opportunity (and maybe a forcing function) to reimagine how your team works: to streamline workflows, improve collaboration, and shore up any weak spots in your development lifecycle. Leaders should shift the conversation from just adopting AI to using AI effectively as part of a larger strategy. This can include steps like investing in platform engineering (so your team has a solid foundation to build on), measuring team health and not just output (so you catch issues like burnout or process pain points), and making sure that quick wins from AI scale up to long-term advantages for the product and business.

Put simply, AI will make a good team better and a bad team worse. But the encouraging insight here is that teams have agency: by focusing on the seven key capabilities and foundational practices, any team can improve how it operates. The greatest ROI from AI comes when you improve the system it operates in – your team’s skills, your culture of experimentation and learning, and your technical ecosystem. So if you’re looking to leverage AI in your software org, take a step back and look in the mirror first. Strengthen the core practices (like your data health, automation, and user focus), because AI is going to reflect and magnify them. With the right groundwork, AI can be an incredible accelerator for innovation and productivity. Without that groundwork, you might just be accelerating towards a cliff. The difference is in the operating habits of your team. As the DORA report shows, AI’s true value is unlocked by the teams who are willing to evolve their culture and processes to support it – not by those who simply install a tool and hope for the best.

In short, strong teams + AI = even stronger outcomes; weak teams + AI = amplified chaos. The future belongs to those who invest in being the former. By focusing on fundamentals and treating AI as the powerful amplifier it is, organizations can ensure that they reap the rewards of this technology while avoiding the pitfalls. And if your team isn’t there yet, don’t worry – the path is now clearer than ever, thanks to insights like these. It’s time to get our houses in order, so our new AI “teammates” can help us build something truly great.