{
"deploymentFrequency": {
"elite": "On demand β multiple deploys per day (19% of teams in 2024)",
"high": "Between once per day and once per week (22% of teams in 2024)",
"medium": "Between once per week and once per month (35% of teams in 2024)",
"low": "Between once per month and once every six months (25% of teams in 2024)"
},
"leadTime": {
"elite": "Less than one day",
"high": "Between one day and one week",
"medium": "Between one week and one month",
"low": "Between one month and six months"
},
"changeFailureRate": {
"elite": "5%",
"high": "20%",
"medium": "10% (note: medium cluster can have low CFR but poor throughput)",
"low": "40%"
},
"mttr": {
"elite": "Less than one hour",
"high": "Less than one day",
"medium": "Less than one day",
"low": "Between one week and one month"
},
"patterns": [
{
"id": "high-freq-high-failure",
"signature": "high deployment frequency + high change failure rate (>20%)",
"interpretation": "The team deploys often but lacks sufficient quality gates. High velocity without stability is a sign of missing automated test coverage, insufficient code review rigour, or deploying too-large changesets. This is the most dangerous quadrant β frequent failures erode user trust and engineer morale quickly.",
"improvements": [
"Add automated test gates to the CI pipeline β block deployments that reduce coverage",
"Reduce changelist size using trunk-based development with short-lived feature branches",
"Introduce feature flags to decouple deployment from release, allowing safer deploys",
"Conduct blameless post-mortems on each failure to identify systemic causes"
]
},
{
"id": "low-freq-low-failure",
"signature": "low deployment frequency + low change failure rate",
"interpretation": "The team prioritises stability over speed. This is typical of the 'Slowing' cluster identified in DORA 2022 β large, infrequent releases that succeed but hold back throughput. The low failure rate is likely achieved through extensive manual testing and coordination overhead, not engineering excellence. DORA research consistently shows that high frequency and low failure rate are achievable together.",
"improvements": [
"Decompose large releases into smaller, independently deployable changes",
"Invest in automated testing to reduce reliance on manual pre-release gates",
"Move toward continuous delivery to enable on-demand deploys without increasing risk",
"Pilot trunk-based development on one service to validate the approach"
]
},
{
"id": "fast-lead-slow-deploy",
"signature": "fast lead time (< 1 week) + low deployment frequency (monthly or less)",
"interpretation": "Changes are ready quickly but accumulate before release, suggesting a batch release process or manual deployment gates. The bottleneck is in the release process, not the development process. This pattern is common in teams with compliance requirements or manual change advisory board (CAB) approvals.",
"improvements": [
"Map the value stream from code-complete to production to identify where changes wait",
"Automate deployment pipeline stages that currently require human intervention",
"Work with compliance/governance teams to define automated controls that satisfy audit requirements",
"Consider scheduled releases with automated deployment to reduce coordination overhead"
]
},
{
"id": "slow-lead-high-deploy",
"signature": "slow lead time (> 1 month) + high deployment frequency",
"interpretation": "The team deploys frequently but individual changes take a long time to be ready. This is unusual and may indicate separate fast-track deployment paths for urgent fixes vs. a slow main development track. It can also signal a large monolith where code review, build, and test times are long despite frequent deployments of accumulated changes.",
"improvements": [
"Instrument your CI pipeline to identify where time is spent β build, test, review, or approval",
"Parallelize test execution to reduce pipeline duration",
"Consider breaking the monolith into independently deployable services",
"Enforce small PR sizes β large PRs dramatically increase review and merge time"
]
},
{
"id": "high-mttr-high-cfr",
"signature": "high MTTR (> 1 day) + high change failure rate (>20%)",
"interpretation": "The team both creates failures frequently and takes too long to recover. This is the characteristic of the low/retiring cluster and presents significant organizational risk. DORA research shows that without reliability, software delivery performance does not translate to organisational performance. The combination doubles user impact: more incidents and longer outages.",
"improvements": [
"Establish on-call rotations with clear incident response runbooks",
"Invest in observability: structured logging, distributed tracing, and alerting on SLOs",
"Implement automated rollback triggered by error rate thresholds",
"Define Service Level Objectives (SLOs) and treat breaches as top-priority work"
]
},
{
"id": "monolith-high-deploy",
"signature": "monolithic architecture + deployment frequency of daily or higher",
"interpretation": "Deploying a monolith frequently is achievable but requires significant engineering discipline β comprehensive automated tests, fast build pipelines, and strong trunk-based development practices. Without these, high-frequency monolith deployment typically leads to elevated change failure rates. DORA research found loosely-coupled architecture is one of the top predictors of high performance.",
"improvements": [
"Ensure the monolith has comprehensive automated test coverage before increasing deploy cadence",
"Identify seams in the monolith where independent deployment could be introduced",
"Invest in build optimisation (caching, parallelism) to keep pipeline duration under 10 minutes",
"Use feature flags to gate risky changes without reducing deployment frequency"
]
},
{
"id": "microservices-high-cfr",
"signature": "microservices architecture + high change failure rate (>15%)",
"interpretation": "Microservices introduce distributed system complexity β network failures, schema drift, service versioning, and cascading failures. High CFR in a microservices context often indicates insufficient contract testing, missing consumer-driven contract tests, or inadequate integration test coverage. The DORA 2022 report found that loosely-coupled architecture drives performance only when paired with strong CD practices.",
"improvements": [
"Implement consumer-driven contract testing between services",
"Add integration tests for critical service boundaries in the CI pipeline",
"Use canary deployments or blue-green deployments to limit blast radius of each release",
"Ensure each service has independent rollback capability"
]
},
{
"id": "compliance-low-freq",
"signature": "compliance constraints + low deployment frequency (monthly or less)",
"interpretation": "Compliance requirements often manifest as deployment bottlenecks, but DORA research shows that high-performing regulated teams achieve both compliance and high delivery performance. The key is embedding compliance controls into the automated pipeline rather than using manual gates. Manual change advisory boards (CABs) are consistently shown to be poor predictors of stability and a significant throughput bottleneck.",
"improvements": [
"Map which compliance controls are currently manual and identify which can be automated",
"Implement automated audit trails and change evidence collection in the CI/CD pipeline",
"Engage compliance and audit teams early to define automated control equivalents",
"Pilot continuous delivery on a lower-risk service to build organisational confidence"
]
},
{
"id": "large-team-high-lead-time",
"signature": "large team (15+ engineers) + lead time > 2 weeks",
"interpretation": "Lead time tends to increase with team and codebase size due to coordination overhead, longer PR review queues, slower builds, and more complex merge conflicts. DORA research shows that team structure and architecture are deeply linked β Conway's Law means that organisational structure shapes system architecture and vice versa.",
"improvements": [
"Evaluate whether the team can be split into smaller, independently deployable product teams",
"Set PR size limits and response time expectations to reduce review queue buildup",
"Invest in inner loop tooling (local test runners, fast feedback) to reduce developer wait time",
"Consider whether the architecture enables team independence or creates coupling"
]
},
{
"id": "long-pipeline-low-deploy",
"signature": "pipeline duration > 30 minutes + low deployment frequency",
"interpretation": "Long pipelines are a primary driver of low deployment frequency. When each deployment requires 45-60+ minutes of CI time, engineers batch changes to reduce overhead, increasing changelist size and risk. DORA research identifies fast feedback as one of the foundational capabilities of high-performing teams.",
"improvements": [
"Profile your pipeline to identify the slowest stages β test parallelisation is usually the biggest win",
"Separate fast-feedback tests (unit, contract) from slow tests (integration, E2E) using pipeline stages",
"Cache dependencies and build artifacts aggressively",
"Target a pipeline duration of under 10 minutes for the core feedback loop"
]
},
{
"id": "high-pr-review-time",
"signature": "PR review time > 24 hours + medium or low deployment frequency",
"interpretation": "Long PR review cycles directly increase lead time and reduce deployment frequency. They also create merge conflicts as branches diverge, further slowing the process. DORA's 2022 research specifically identified code review as a key inner loop capability. High-performing teams maintain short-lived branches and fast review cycles.",
"improvements": [
"Establish team norms for PR review response time (e.g., first review within 4 hours)",
"Break large PRs into smaller, stacked PRs that are easier to review quickly",
"Use automated checks (linting, tests, security scanning) to reduce cognitive load on reviewers",
"Adopt trunk-based development to eliminate long-lived branches entirely"
]
},
{
"id": "ai-adoption-stability-risk",
"signature": "high AI tool adoption + increasing change failure rate",
"interpretation": "DORA 2024 research found that increased AI adoption correlates with a 7.2% reduction in delivery stability for every 25% increase in AI adoption. The hypothesis is that AI's code generation speed leads teams to create larger changesets, which DORA consistently shows increases failure rates. The benefit of AI is real at the individual level but requires process adaptation.",
"improvements": [
"Enforce changelist size limits β AI makes it easy to generate more code, but smaller changes are safer",
"Invest in automated test coverage for AI-generated code, which may have subtle correctness issues",
"Treat AI adoption as a process change requiring deliberate adjustment of deployment practices",
"Monitor CFR and lead time closely during periods of AI tool adoption"
]
},
{
"id": "platform-engineering-throughput-drop",
"signature": "recent internal developer platform adoption + decreasing deployment frequency or increasing lead time",
"interpretation": "DORA 2024 found that platform engineering teams see +8% productivity but -8% throughput and -14% change stability during adoption. This aligns with the J-Curve pattern identified in DORA 2022 for SRE adoption β transformations often show short-term regressions before long-term gains. This is expected, not a signal to abandon the platform.",
"improvements": [
"Acknowledge the J-Curve: set stakeholder expectations for a temporary performance dip",
"Measure developer experience (SPACE metrics) alongside DORA metrics during platform rollout",
"Prioritise golden paths that reduce toil for the most common developer tasks first",
"Ensure platform teams apply a product mindset β treat internal developers as users"
]
},
{
"id": "generative-culture-reliability",
"signature": "strong delivery metrics + poor reliability (MTTR > 1 day, high incident frequency)",
"interpretation": "DORA 2022 research found that without reliability, software delivery performance does not predict organisational success. The 'Flowing' cluster had strong delivery metrics but poor reliability, and scored lower on organisational performance than expected. Reliability is described as the most important 'feature' of any product β keeping promises to users is a necessary condition for delivery speed to generate business value.",
"improvements": [
"Define SLOs for your most critical user journeys and treat breaches as P0 incidents",
"Invest in observability before further increasing deployment frequency",
"Establish error budgets β when the budget is exhausted, freeze feature work and fix reliability",
"Build a generative team culture: trust, blameless retrospectives, and psychological safety predict better reliability outcomes"
]
},
{
"id": "scheduled-release-high-lead-time",
"signature": "scheduled release strategy + lead time > 2 weeks",
"interpretation": "Scheduled releases create artificial batch boundaries that accumulate risk. Changes that are ready are held waiting for the release window, extending lead time and increasing the size of each release. Larger releases contain more changes, making it harder to diagnose failures and increasing MTTR when they occur.",
"improvements": [
"Decouple your deployment schedule from your release schedule using feature flags",
"Identify which aspects of the scheduled release exist for coordination reasons vs. technical reasons",
"Move toward release trains with shorter intervals (bi-weekly β weekly β continuous)",
"Use canary releases or percentage rollouts to reduce the risk of each release"
]
}
],
"keyInsights": [
"Elite performers deploy 182x more frequently than low performers and have 127x faster lead times (DORA 2024)",
"Elite performers have 2293x faster failed deployment recovery times than low performers (DORA 2024)",
"Elite performers have 8x lower change failure rates than low performers (DORA 2024)",
"In 2024, only 19% of teams achieved elite performance; 25% were in the low cluster (DORA 2024)",
"High performers in 2022 were estimated to have 417x more deployments than low performers (DORA 2022)",
"Throughput and stability are correlated β the best teams do well on all four DORA metrics simultaneously (DORA 2024)",
"The 2024 medium performance cluster has lower throughput but higher stability than the high cluster β neither is universally better (DORA 2024)",
"Improving is more important than reaching a specific performance level; the best teams achieve elite improvement, not necessarily elite performance (DORA 2024)",
"Industry does not meaningfully affect performance levels β high-performing teams exist in every industry (DORA 2024)",
"Without reliability, software delivery performance does not predict organisational success (DORA 2022)",
"Reliability is the most important 'feature' of any product β keeping promises to users is a necessary condition for delivery speed to generate value (DORA 2022)",
"Teams with generative culture (trust, collaboration) are more likely to achieve good reliability outcomes (DORA 2022)",
"The top technical capabilities driving high performance are: version control, continuous integration, continuous delivery, and loosely-coupled architecture (DORA 2022)",
"High performers who meet reliability targets are 33% more likely to use version control, 39% more likely to practice CI, 46% more likely to practice CD, and 40% more likely to have loosely-coupled architecture (DORA 2022)",
"Continuous Integration is 1.4x more likely to be used by high performers who meet reliability targets (DORA 2022)",
"Trunk-based development decreases change failure rate, error-proneness, and unplanned work β but requires experience to implement successfully (DORA 2022)",
"Teams with 16+ years of experience who use trunk-based development see increased delivery performance and decreased change failure rate (DORA 2022)",
"Cloud users are 14% more likely to exceed organisational performance goals than non-cloud peers (DORA 2022)",
"Hybrid and multi-cloud have a negative impact on delivery performance indicators (MTTR, lead time, deployment frequency) unless teams also have high reliability (DORA 2022)",
"SRE adoption follows a J-Curve: early adoption may not predict better reliability, but teams that persist through the inflection point see strong reliability gains (DORA 2022)",
"Respondents using higher-than-average levels of all technical capabilities have 3.8x higher organisational performance (DORA 2022)",
"AI adoption improves individual flow (+2.6%), productivity (+2.1%), and job satisfaction (+2.2%) per 25% increase in AI reliance (DORA 2024)",
"AI adoption reduces delivery stability by approximately 7.2% and throughput by 1.5% for every 25% increase in AI adoption β the hypothesis is that AI increases changelist size (DORA 2024)",
"AI improves code quality, documentation, and review speed, but these gains do not automatically translate to better delivery performance (DORA 2024)",
"The DORA 2024 'vacuum hypothesis': AI helps people finish valuable work faster, creating time that gets filled with more work rather than reducing toil (DORA 2024)",
"Platform engineering users see 8% higher individual productivity and 10% higher team performance, but 8% lower throughput and 14% lower change stability during adoption (DORA 2024)",
"Platform engineering success requires a user-centred (developer-centred) product mindset β without it, the platform becomes a hindrance (DORA 2024)",
"AI adoption increases organisational performance (+2.3%) and team performance (+1.4%) per 25% increase, but has no clear impact on product performance (DORA 2024)",
"Nearly 90% of software professionals were using AI tools as of the 2025 DORA AI Capabilities Model research",
"Seven AI capabilities identified by DORA 2025: clear AI stance, healthy data ecosystems, quality internal platforms, strong version control, working in small batches, user-centric focus, AI-accessible internal data",
"Working in small batches is a foundational AI capability β AI amplifies productivity, but larger changesets created by AI tools increase delivery risk without small batch discipline",
"The percentage of high performers was at a 4-year low in 2022, while low performers rose from 7% to 19% β suggesting pandemic-related knowledge sharing disruption (DORA 2022)",
"Change failure rate is strongly correlated with rework rate β failures require remediation changes, creating a compounding effect on throughput (DORA 2024)",
"The 2024 report introduced 'Failed Deployment Recovery Time' as the evolved metric replacing MTTR, measured specifically from failed deployment to recovery (DORA 2024)"
]
}