The following assessment is designed to gauge the maturity of your team(s) or organization as they relate to specific DORA Capabilities.
For each capability, choose the statement that best reflects your current experience within your team(s) or organization. The number next to that statement is your maturity score for that capability. Generally, score yourself a 1 if the capability is completely missing from your team, a 2 if there is a lot of room for improvement, a 3 if there is some room for improvement, and a 4 if your team is exemplary in the capability. Don't worry if the description doesn't exactly match your situation. The descriptions are meant as examples of situations that would qualify for the associated score.
Most capabilities only have one set of statements to consider. For those few capabilities with multiple sets of statements, average your scores and you'll have your overall maturity score for the capability.
Resist pressuring yourself to select a high rating. The goal of this assessment is to get an honest reading of the current state of things within your team(s) and to pinpoint areas where improvements are likely to yield benefits for your organization. Taking this assessment may also spur helpful conversations with your team(s).
To improve in a capability, navigate to its page by either clicking on its title or visiting the capabilities list. Once on the capability page, review the Supporting Practices. These are fresh, pragmatic, and actionable ideas you can begin experimenting with today to support the capability. After you've implemented some practices, take this assessment again to track your progress and find your next area of focus.
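For capabilities with multiple statement sets, the averaging described earlier is simple arithmetic; a minimal sketch (the scores are illustrative):

```python
def maturity_score(scores):
    """Average the scores from a capability's statement sets, one 1-4 score per set."""
    return sum(scores) / len(scores)

# A capability with a single statement set keeps its score as-is.
print(maturity_score([3]))     # 3.0

# A capability with two statement sets averages them.
print(maturity_score([2, 3]))  # 2.5
```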
- Fragmented & Manual: Data is scattered across various tools (e.g., Slack, Jira, Google Docs, email). Finding information requires manual searching or asking individuals. There is no AI interface for internal data.
- Centralized but Static: Most data is in a central wiki or repo and is accessible with a basic keyword search. Some experiments with AI exist, but they are prone to hallucinations and lack access to real-time updates.
- Integrated & Useful: An AI-powered search or chatbot exists that can access most technical documentation and code. It provides citations for its answers. Accuracy is high, though it occasionally misses very recent changes or restricted data.
- Ubiquitous & Trusted: AI has secure, real-time access to all relevant internal data sources. It respects granular permissions and is the first place employees go for answers. Feedback loops are in place to correct the AI and update the underlying documentation simultaneously.
- Absent or Hidden: No formal stance exists, or it is buried in legal documentation that is not shared with technical teams. Developers are unsure what is allowed, leading to either total avoidance or "underground" usage.
- Reactive & Vague: A stance exists but is mostly reactive (e.g., "Don't put passwords in ChatGPT."). Guidelines are unclear, and there is no centralized place to find updates or ask questions about new tools.
- Clear & Communicated: There is a well-documented AI policy that is easily accessible. Most team members understand the boundaries of AI use, and there is a clear process for requesting or vetting new AI tools.
- Integrated & Iterative: The AI stance is part of the daily engineering culture. It is regularly updated based on team feedback and technological shifts. There is high confidence in using AI because the legal and security guardrails are clear and supportive.
- Growing Tech Debt: Code is rarely refactored, leading to steady growth of tech debt.
- Occasional Maintenance: Teams sometimes prioritize feature delivery over maintainability.
- Reactive Maintenance: Code is regularly maintained as problems become bottlenecks or pain points.
- Proactive Maintenance: Teams have a sharpened design sense and proactively refactor the codebase to minimize the impact of future changes.
- Brittle Codebase: Changing any code is time-consuming, complex, and prone to error.
- Fairly Complex Codebase: Most changes require significant refactoring, and it's difficult to predict the impact of changes on the overall system.
- Partially Modular Codebase: Most parts of the system are modular and easy to update, but some are complex and difficult to work with.
- Well-organized Codebase: When changes are made to the existing codebase, they don't tend to require much rework.
- Infrequent Updates: User-facing updates are delivered less than once a quarter, with long gaps between releases. Deployments that are devoid of business value or are sitting behind feature flags are not considered user-facing.
- Occasional Updates: Teams deliver user-facing updates quarterly or monthly.
- Regular Updates: User-facing updates are delivered bi-weekly or weekly.
- Continuous Updates: User-facing updates are delivered multiple times a week (sometimes even multiple times a day), with a fully automated process and minimal manual intervention.
- Months: Changes typically take multiple months to go from code commit to production.
- Weeks: Changes typically take multiple weeks to go from code commit to production.
- Days: Changes typically take multiple days to go from code commit to production.
- Hours: Changes typically take hours or a day to go from code commit to production.
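Lead time for changes, as described in the levels above, is typically measured from code commit to production deployment; a minimal sketch using illustrative timestamps:

```python
from datetime import datetime

def lead_time_hours(commit_time: datetime, deploy_time: datetime) -> float:
    """Hours elapsed between a code commit and its production deployment."""
    return (deploy_time - commit_time).total_seconds() / 3600

# Illustrative timestamps only: a morning commit deployed the same afternoon.
commit = datetime(2024, 5, 1, 9, 0)
deploy = datetime(2024, 5, 1, 15, 30)
print(lead_time_hours(commit, deploy))  # 6.5
```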
- High Failure Rate: More than 15% of changes to production result in a degraded service and require immediate remediation.
- Moderate Failure Rate: Between 3% and 15% of changes to production result in a degraded service and require immediate remediation.
- Low Failure Rate: Less than 3% of changes to production result in a degraded service and require immediate remediation.
- Very Low Failure Rate: Less than 1% of changes to production result in a degraded service and require immediate remediation.
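Change failure rate is the share of production changes that degrade service and need immediate remediation; a minimal sketch, assuming each deployment record carries a `failed` flag (the records are illustrative):

```python
def change_failure_rate(deployments):
    """Percentage of deployments flagged as having caused a degraded service."""
    if not deployments:
        return 0.0
    failures = sum(1 for d in deployments if d["failed"])
    return 100 * failures / len(deployments)

# Illustrative records: 2 failures out of 40 deployments.
records = [{"failed": i < 2} for i in range(40)]
print(change_failure_rate(records))  # 5.0
```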
- Days: It typically takes multiple days to restore service after a change failure.
- Hours: It typically takes multiple hours to restore service after a change failure.
- Under An Hour: It typically takes somewhere between 10 and 60 minutes to restore service after a change failure.
- A Couple of Minutes: It typically takes under 10 minutes to restore service after a change failure.
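Time to restore is commonly tracked as the median elapsed time from failure detection to recovery; a minimal sketch over hypothetical incident durations:

```python
from statistics import median

def time_to_restore_minutes(durations_minutes):
    """Median minutes to restore service across a set of change failures."""
    return median(durations_minutes)

# Illustrative incident durations in minutes.
print(time_to_restore_minutes([8, 12, 45, 30]))  # 21.0
```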
- Infrequent & Painful: Integration is done rarely, with large batches of changes, requiring multiple levels of approval, and often resulting in merge conflicts and uncertain outcomes.
- Routine & Coordinated: Integration happens regularly (e.g., weekly), with moderate-sized changes, requiring some approval and coordination, and occasional merge conflicts, but with a good understanding of the outcome.
- Regular & Smooth: Integration happens frequently (e.g., daily), with small, incremental changes, requiring minimal approval, and rare painful merge conflicts, with a high degree of confidence in the outcome.
- Continuous & Seamless: Integration happens continuously, with tiny, incremental changes, rarely requiring approval, and virtually no painful merge conflicts, with complete confidence in the outcome and immediate feedback.
- Meaningless: User feedback from releases isn't collected.
- Reactive: User feedback is gathered, but usually only after significant issues arise, and it's acted upon sporadically.
- Informative: User feedback is regularly gathered and may influence our prioritization, but meaningful shifts in priority don't happen frequently.
- Impactful: User feedback on recent changes is gathered regularly and acted upon often.
- Manual & Error-prone: Database changes are made manually, with a high risk of errors. Deployments are slow, sometimes taking hours to complete, and sometimes requiring downtime.
- Partially Automated: Some database changes are automated, but many changes require manual intervention and/or testing to complete.
- Mostly Automated: Most database changes are made using a fully automated process, with some manual review and/or testing. Changes are generally deployed quickly, taking minutes. Reliability is fairly good, with few failed changes.
- Fully Automated & Zero-downtime: All database changes are made using a fully automated process, with no manual intervention or approval required. Changes are deployed rapidly, taking seconds or minutes, and the process is highly reliable, with zero downtime to dependent applications. When failures are introduced, they're automatically and safely reverted.
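The automated levels above imply database changes are expressed as versioned, ordered migrations that a pipeline applies without manual steps; a minimal sketch of the core idea (the version names and applied-versions list are illustrative, not any specific migration tool's schema):

```python
def pending_migrations(available, applied):
    """Return migrations not yet applied, in version order, so a pipeline
    can run them automatically and stop cleanly if one fails."""
    applied_set = set(applied)
    return sorted(v for v in available if v not in applied_set)

# Illustrative migration versions.
available = ["001_create_users", "002_add_email_index", "003_widen_name"]
applied = ["001_create_users"]
print(pending_migrations(available, applied))
# ['002_add_email_index', '003_widen_name']
```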
- Manual: Deployments are manual, time-consuming, and error-prone.
- Partially Automated: Some aspects of deployment are automated, but manual steps are still required.
- Mostly Automated: Deployments are mostly automated, with minimal manual intervention.
- Fully Automated: Deployments are fully automated, including rollback mechanisms and verification steps.
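The "fully automated" level above means verification and rollback are part of the pipeline itself, not an operator's judgment call; a minimal sketch with hypothetical hook functions:

```python
def deploy(build, apply, health_check, rollback):
    """Apply a build, verify it with a health check, and roll back
    automatically if verification fails."""
    apply(build)
    if health_check():
        return "deployed"
    rollback()
    return "rolled back"

# Illustrative hooks: a deploy whose health check fails is reverted.
print(deploy("v1.2.3", apply=lambda b: None,
             health_check=lambda: False,
             rollback=lambda: None))  # rolled back
```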
- Minimal: The technical documentation is often outdated, incomplete, or inaccurate, making it difficult to rely on when working with the services or applications. It's hard to find what is needed, and others are often asked for help.
- Basic: The technical documentation is somewhat reliable, but it's not always easy to find what is needed. Updates are sporadic, and multiple sources must be dug through to get the required information. In times of crisis, the documentation might be glanced at, but it's not always trusted.
- Good: The technical documentation is generally reliable, and what is needed can usually be found with some effort. Updates are made regularly, but not always immediately. The documentation is used to help troubleshoot issues, but clarification from others might still be needed.
- Excellent: The technical documentation is comprehensive, accurate, and up-to-date. What is needed can easily be found, and the documentation is relied on heavily when working with the services or applications. When issues arise, the documentation is confidently reached for to help troubleshoot and resolve problems.
- Insufficient Tools: The current tools are inadequate for getting the job done, and there is no clear way to evaluate or adopt new ones.
- Adequate but Limited: The current tools are sufficient but limited, and new tools are occasionally adopted through an informal process.
- Capable & Evolving: The current tools are capable of meeting needs, and a standardized process is in place for evaluating and adopting new tools should the need arise.
- Best-in-Class Tools: The best tools available are used to get the job done, and new tools are proactively researched and teams are empowered to recommend their adoption via a standardized process.
- Rigid & Manual: Infrastructure changes are slow and labor-intensive, requiring manual intervention and taking weeks or months to complete.
- Limited Automation: Some routine infrastructure tasks are automated, but provisioning, scaling, and resource allocation still require manual effort and human interaction, and there is limited visibility into resource utilization and costs.
- Advanced Automation: Infrastructure changes are largely automated, with self-service capabilities and rapid scalability, but different teams and functions may still work in silos, with some manual handoffs and coordination required.
- On-Demand & Elastic: Infrastructure is fully automated, with seamless collaboration and alignment between teams and functions, enabling rapid scaling and flexibility, and providing a unified, on-demand experience for users.
- Top-down: Teams operate under a highly directive approach, with leadership providing explicit instructions and priorities. Autonomy for decision-making is limited. In the event of failure, the focus is on individual accountability and administering corrective action.
- Bureaucratic: Teams follow established procedures and protocols, with clear roles and responsibilities. Specific instructions are sometimes vague or incomplete, with no clear product leader. Teams have some flexibility to adapt to changing circumstances, but leadership approval is still required for most meaningful decisions.
- Collaborative: Teams seek input and expertise from other teams to inform their decisions, but maintain clear ownership and responsibility for their work. Each team has some autonomy to make decisions within established boundaries. However, strategic direction is set by leadership, and teams are expected to align their work with these top-down priorities.
- Generative: Teams seek input and expertise from other teams to inform their decisions, but maintain clear ownership and responsibility for their work. Each team has some autonomy to make decisions within established boundaries. Strategic direction is set by leadership, but factors in ground-level feedback from the teams.
- Fragmented & Untrusted: Data is trapped in silos. Access requires manual approvals and long waits. No one is sure if the data is accurate, and "data cleaning" is a massive, manual chore for every project.
- Coordinated but Manual: Data is documented but often outdated. You have some central repositories (like a data warehouse), but integrating new data types is slow. Testing often uses stale or "hand-rolled" data that doesn't reflect reality.
- Accessible & Reliable: Most data is discoverable via a catalog or API. Automated pipelines handle basic cleaning and transformation. There is high confidence in data quality and privacy masking is largely automated.
- Fluid & Self-service: Data is treated as a product. Teams can self-serve the data they need through well-defined interfaces. Data source tracking is fully transparent, and data quality issues are caught by automated "data tests" before they affect downstream systems.
- Unfulfilling Work: Employees often feel undervalued, overworked, and disconnected from the organization's purpose.
- Limited Engagement: Employees are somewhat satisfied but lack autonomy, opportunities for growth, and a sense of accomplishment.
- Satisfactory Engagement: Employees are generally content, with some opportunities for growth and a sense of fulfillment. They may lack excitement or challenge, though.
- Exceptional Engagement: Employees are highly motivated, empowered, and passionate about their work. They demonstrate a strong sense of purpose and fulfillment.
- Static Knowledge: Learning is limited to onboarding and initial training, with little emphasis on ongoing development or skill-building.
- Ad Hoc Learning: Teams occasionally attend conferences or workshops, but learning is not a prioritized or structured part of the organization's culture.
- Encouraged Learning: Learning is valued and encouraged, with some resources and opportunities provided for professional development, but it's not a core part of the organization's identity.
- Learning as a Competitive Advantage: Learning is deeply ingrained in the organization's culture, viewed as a key driver of improvement and innovation. It is actively prioritized and invested in, helping the team to stay ahead of the curve.
- Tightly Coupled: Teams are heavily dependent on other teams for design decisions and deployment. Frequent, fine-grained communication and coordination are required.
- Somewhat Coupled: Teams have some independence, but still require regular coordination with other teams. Deployment and design changes often need permission or resources from outside the team.
- Moderately Coupled: Teams have a moderate level of independence. They can make some design changes and deploy without permission, but they may still need to coordinate with other teams routinely.
- Loosely Coupled: Teams have full autonomy to make most large-scale design changes and deploy on-demand. They can test independently and release with negligible downtime, without needing fine-grained communication or coordination with other teams.
- Limited Visibility: There is little to no monitoring or visibility into system performance, making it difficult to identify issues.
- Basic Monitoring: Some monitoring tools are in place, providing basic metrics and alerts, but with limited visibility into system behavior. These tools are usually only referenced when there is a problem with the system.
- Comprehensive Monitoring: A comprehensive monitoring system is in place, providing standardized metrics and visibility into system performance, enabling teams to identify trends and simple issues quickly.
- Integrated Observability: Monitoring data is integrated with logs, traces, and other data sources, enabling teams to debug and understand complex issues, and gain deep insights into system behavior.
- Ad Hoc Monitoring: Monitoring is done on an as-needed basis, with little formal process or visibility into system performance. Data is not used to inform business decisions.
- Basic Reporting: Some monitoring data is collected and reported, but it is not regularly used to inform business decisions.
- Data-driven Decision Making: Monitoring data is regularly collected and used to inform business decisions, but there is room for improvement in terms of data quality and scope.
- Strategic Monitoring: Monitoring is a key part of the organization's strategy, with high-quality data collected and used to drive business decisions and optimize system performance.
- Reactive Security: Security is addressed only after issues arise, and there is little consideration of security concerns during development.
- Basic Security: Some security best practices are followed, but security is not a primary consideration during development, and security reviews are infrequent.
- Integrated Security: Security is a key consideration during development, with internal security training and some use of automated security tooling.
- Pervasive Security: Security is deeply ingrained in the development culture, with continuous security testing, automated security tooling, and routine security reviews throughout the software development lifecycle.
- Ticket-ops & Fragmented Tooling: The platform is a collection of infrastructure tickets and manual gates rather than a well-functioning ecosystem. Individual AI coding gains are lost to downstream disorder, as security reviews, testing, and deployments remain manual bottlenecks that increase cognitive load.
- Standardized but Rigid: Initial "golden paths" exist, but they function as a "golden cage" with little flexibility for diverse team needs. While some automation is present, developer feedback is often unclear and the lack of self-service means AI-generated code frequently stalls at the integration phase.
- Product-centric & Self-service: A dedicated platform team treats developers as customers, providing self-service interfaces that "shift down" complexity. Automated pipelines ensure AI-amplified throughput is consistently tested and secured, allowing teams to focus on user value rather than infrastructure hurdles.
- Fluid, Extensible & AI-ready: The platform is an extensible ecosystem where "golden paths" are the easiest choice but allow for contribution and flexibility. Real-time feedback and automated guardrails make experimentation cheap and recovery fast. The platform leverages AI's potential to accelerate the entire delivery lifecycle without sacrificing stability.
- No Notifications: There is no automated system of notifying teams that a failure has occurred in deployed environments. Failures are typically caught via manual QA or reported by users.
- Rudimentary Notifications: Some alerting rules are in place, but thresholds are not well-defined. Notifications are often irrelevant or too frequent.
- Threshold-based Notifications: Alerting rules are well-defined, with failure thresholds tuned to accurately spot issues. Notifications are relevant and timely.
- Proactive Notifications: Rate of change metrics are tracked to proactively spot potential issues. There are automated responses to many notifications, and teams continuously review and refine alerting rules to anticipate and prevent failures.
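Rate-of-change alerting, mentioned in the top level above, fires on how fast a metric is moving rather than on its absolute level; a minimal sketch (the samples and threshold are illustrative):

```python
def rate_of_change_alert(samples, threshold):
    """Alert when the change between consecutive samples exceeds the
    threshold, even if the absolute value still looks healthy."""
    deltas = [b - a for a, b in zip(samples, samples[1:])]
    return any(abs(d) > threshold for d in deltas)

# Error-rate samples per interval: the level is low, but the jump is sharp.
print(rate_of_change_alert([0.1, 0.2, 0.9], threshold=0.5))  # True
print(rate_of_change_alert([0.1, 0.2, 0.3], threshold=0.5))  # False
```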
- Manual & Gatekeeping: Changes require manual approval from a centralized Change Advisory Board (CAB) or external reviewers, creating a bottleneck and slowing down the delivery process.
- Peer-reviewed & Coordinated: Changes are manually verified, reviewed, and subsequently approved by peers. Changes require high levels of coordination when they affect multiple teams. It usually takes close to a week or more to get approval.
- Semi-automated & Efficient: Changes are reviewed and approved through a mix of automated and manual processes, with peer review still in place. Coordination is more efficient and approval times are faster. When change approval is required, feedback is typically provided within a day or two.
- Streamlined: The high level of automation in change approvals significantly reduces, and in some cases eliminates, the burden of peer review. A Change Advisory Board (CAB) may still exist, but their role is to simply advise and facilitate important discussions. When change approval is required, feedback is typically provided in under 24 hours.
- Minimal or No Experimentation: Teams follow a strict, rigid plan with little room for deviation or experimentation and must seek approval for even minor changes. They have limited visibility into the organization's overall goals and context, and they must pull information from leadership on an ad hoc basis.
- Highly Controlled Experimentation: Teams are allowed to explore new ideas but within tightly defined parameters and with close oversight from leadership. Deadline pressure regularly takes priority over idea exploration. Teams must request access to relevant context and information, which is provided on an ad hoc basis.
- Emerging but Limited Experimentation: Teams have some flexibility to try new approaches but must seek permission from leadership or follow established protocols for most changes. They have access to some organizational context, including goals and objectives, but may not have direct access to customer feedback or the financial performance of the company.
- Self-directed Innovation: Teams have autonomy to pursue new ideas and make decisions. Their experiments are informed by direct access to customer feedback and relevant context that is proactively shared by leadership, including the organization's vision, goals, strategic priorities, and financial state.
- Limited: Test automation is minimal, slow, and/or unreliable. There is heavy reliance on manual testing.
- Basic: Test automation is somewhat reliable, but is either slow or has gaps in coverage and/or value. Manual testing is required to achieve high levels of confidence.
- Mature: Test automation is reliable, fast, and valuable, with good risk coverage. Occasional gaps are still discovered.
- Optimized: Test automation provides a high degree of assurance that code changes are correct, stable, and won't introduce significant issues if deployed to production. It provides this assurance quickly and reliably.
- No Data Management: Teams manually set up and tear down data required for test scenarios, sometimes with great difficulty.
- Shared Test Environments with Limited Control: Teams share test environments with varied data, but face occasional challenges with environment contamination and access restrictions.
- Scripted Data Seeding: Teams can run automated data seeding scripts locally or in ephemeral environments, but they may not cover all possible scenarios.
- Scrubbed Prod Data: Ephemeral environments are easily created and torn down, seeded with production data that has sensitive information scrubbed from it, representing all existing states.
- Mocked Data: Automated tests rely on stubs and mocks for data setup; they don't support complex integration-style testing.
- Fragmented Static Data: Automated integration-style tests are supported, but they rely on static test data that is often scattered across multiple sources. As a result, automated tests are difficult to maintain and update, and prone to failures when data is altered.
- Scripted Data Seeding: Automated tests use hand-written data seeding scripts to set up and tear down their data; these scripts may not cover all production scenarios.
- All Categories of Automated Tests Are Supported: In addition to supporting scripted data seeding, ephemeral environments are easily created and torn down. They can be seeded with realistic synthetic data or production data that has sensitive information scrubbed from it. This enables advanced testing categories like performance, load, and anomaly detection.
- Crisis Management: The organization is in a state of crisis or chaos, requiring leaders to take a direct and hands-on approach. Leaders focus on short-term goals, with limited opportunities to communicate a clear long-term vision or inspire team members.
- Transactional Leadership: Leaders focus on managing scenarios that deviate from the norm. They prioritize meeting urgent goals and objectives, providing clear direction and guidance to team members. They begin to communicate a vision and hold team members accountable for working toward common goals.
- Supportive Leadership: Leaders work closely with team members to develop their skills and abilities, sometimes exhibiting other transformational leadership behaviors like clear vision, inspirational communication, intellectual stimulation, and personal recognition.
- Transformational Leadership: Leaders create a culture of trust, empowerment, and autonomy, consistently demonstrating all five dimensions of transformational leadership: clear vision, inspirational communication, intellectual stimulation, support, and personal recognition.
- Long-lived Branches: Development happens on long-lived feature branches that are rarely merged to trunk, resulting in complex and painful integrations.
- Regular Merges: Development happens on feature branches that are regularly merged to trunk (e.g., weekly), with some manual effort required to resolve conflicts.
- Short-lived Branches: Development happens on short-lived feature branches (e.g., 1-3 days) that are frequently merged to trunk, with minimal manual effort required to resolve conflicts.
- Trunk-based: Development happens either directly on trunk or on very short-lived feature branches (e.g., 1-3 hours). Changes are committed and validated continuously with immediate feedback.
- The Feature Factory: Teams focus on output volume and use AI to ship more features without validating user impact or feedback.
- Reactive & Proxy-led: Teams rely on siloed feedback and manual hand-offs, using AI to accelerate ticket completion rather than generate code that reflects validated user requirements.
- Integrated & Spec-driven: Teams use spec-driven development and direct user observation to ensure AI outputs are grounded in verified requirements.
- User-invested & Self-correcting: Teams treat AI as a discovery partner, using real-time user metrics and rapid prototyping to pivot toward maximum value.
- Limited or No Adoption: Version control is not used, or its use is limited to select teams, with no organization-wide adoption or standardization.
- Basic Code & Data Storage: Version control is used primarily for code and data backups, with limited or no version control for infrastructure and other assets.
- Standard Version Control: Version control is used consistently across teams for code, configuration, data, infrastructure, and documentation. Disaster recovery is fully supported.
- Advanced Version Control: Version control is optimized for small, comprehensible changes, with a focus on making it easy to traverse and understand the history of changes across code, configurations, documentation, data, and infrastructure.
- Limited Visibility: Teams have little understanding of the flow of work from idea to customer. They lack visibility into the current state of products and features.
- Partial Visibility: Teams have some visibility into the flow of work, but it's limited to their own area of responsibility. They lack a comprehensive understanding of the entire value stream.
- Managed Visibility: Teams use visual displays and dashboards to track the flow of work. They have a good understanding of the current state of products and features, but may not have a complete view of the entire value stream.
- End-to-End Visibility: Teams have a complete and up-to-date understanding of the flow of work from idea to customer, with real-time visibility into the current state of products and features. They use data to improve the flow of work.
- No Visibility: No visual management displays or dashboards are used, and teams lack visibility into their processes and progress.
- Basic Dashboards: Simple dashboards or visual displays are used, but they are not regularly updated, and teams do not actively use them to inform their work.
- Informative Displays: Visual management displays are used to track key metrics and progress, and teams regularly review and update them to inform their work and identify areas for improvement.
- Real-time Feedback: Advanced visual management displays provide real-time feedback and insights, enabling teams to quickly identify and address issues, and make data-driven decisions to adjust their priorities and drive continuous improvement.
- Overwhelmed & Undervalued: Employees are consistently overwhelmed by work demands, have little control over their work, and feel undervalued and unrewarded, with a breakdown in community and a lack of fairness in decision-making processes.
- Managing the Load: Teams are coping with work demands, but some employees are still struggling with a lack of control and autonomy, and rewards and recognition are inconsistent. While there are some efforts to build a sense of community, fairness and values alignment are still a work in progress.
- Finding Balance: Employees are generally happy and engaged, with a good work-life balance, and teams are making progress in addressing work overload, increasing control and autonomy, and providing sufficient rewards and recognition, but there is still room for improvement in building a sense of community and fairness.
- Thriving Culture: Employees are highly engaged, motivated, and happy, with a strong sense of well-being. Teams consistently deliver high-quality work in a supportive and fair work environment, with clear alignment between organizational and individual values and opportunities for growth and development.
- No Limits: No WIP limits are set, and teams work on multiple tasks simultaneously, leading to inefficiencies and burnout.
- Loose Limits: WIP limits are set, but they are not enforced, and teams often exceed them, resulting in delays and inefficiencies.
- Managed Limits: WIP limits are set and enforced, and teams prioritize work based on capacity, but there is still room for improvement in reducing lead times and increasing flow.
- Optimized Flow: WIP limits are optimized and continuously refined to minimize lead times, reduce variability, and achieve single-piece flow, with a focus on continuous improvement and removing obstacles.
- Large Batches: Work is done in large batches that take a long time (months) to complete, resulting in reduced visibility into progress, increased integration effort, delayed value, and high variability.
- Moderate Batches: Batches are moderately sized, taking several weeks to complete, which can lead to some delays in integration and value delivery, and moderate variability, making it difficult to track progress.
- Small Batches: Work is broken down into small batches that can be completed and integrated quickly (days), allowing for clear visibility into progress, relatively low integration effort, and faster value delivery, with some variability.
- Minimal Viable Batches: Work is decomposed into extremely small, minimal viable batches that can be completed and integrated rapidly (hours), providing clear and continuous visibility into progress, minimal integration effort, and fast value delivery, with low variability.