Blog

What are the Best AppSec KPIs? It Depends.

Pick the right metrics for your program based on what's most important to your organization.

Published on
17 June 2025

Jenn Gile

Head of Community at Endor Labs

At the May 2025 edition of LeanAppSec Live, a participant asked "What are your favorite KPIs to measure the impact of your AppSec programs?"

To answer this question in a way that’s useful, first you need to answer “What’s the goal of my AppSec program?” In other words, what does my company expect me to achieve? If you don’t start there, you run the risk of your KPIs being misaligned with stakeholder expectations.

The four most common motivators are:

  • Compliance: Meeting customer and/or regulatory requirements
  • Developer Experience: Empowering devs to address security issues with minimal friction
  • Risk Tracking: Getting accurate visibility into application security posture
  • Risk Reduction: Fixing risks and preventing new risks from entering the codebase

All four may be priorities, but sometimes they can be contradictory, so it’s important to know how they rank for your organization.

Mapping KPIs to AppSec Motivators

Let's look at KPIs that can help measure impact within each of these areas. I’ll showcase examples from four security teams that have written about this topic: Relativity, Jellyfish, Starburst Data, and Grip Security.

Compliance KPIs

Meeting compliance requirements is often non-negotiable, especially for companies with FedRAMP obligations (Relativity) or a business imperative to build customer trust (Grip Security). KPIs for compliance often focus on demonstrating adherence to standards and responding to audits.

Meeting Vulnerability Remediation SLAs: Organizations often have service level agreements (SLAs) for fixing vulnerabilities based on their severity, and these can be driven by a customer requirement or regulation. For example, Relativity must remediate 100% of exploitable risks to keep their FedRAMP authorization. A KPI could be tracking the percentage of such required remediations completed within required SLAs.
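As a sketch of how this KPI might be computed, here is a minimal example assuming each finding records a detection date, a fix date, and a severity. The data, field names, and SLA windows are all hypothetical.

```python
from datetime import date

# Hypothetical SLA windows (days) by severity
SLA_DAYS = {"critical": 7, "high": 30, "medium": 90}

# Hypothetical remediated findings
findings = [
    {"severity": "critical", "detected": date(2025, 1, 1), "fixed": date(2025, 1, 5)},
    {"severity": "high",     "detected": date(2025, 1, 1), "fixed": date(2025, 2, 15)},
    {"severity": "medium",   "detected": date(2025, 1, 1), "fixed": date(2025, 3, 1)},
]

def sla_compliance(findings):
    """Percentage of remediated findings fixed within their severity-based SLA."""
    within = sum(
        1 for f in findings
        if (f["fixed"] - f["detected"]).days <= SLA_DAYS[f["severity"]]
    )
    return 100 * within / len(findings)

print(f"{sla_compliance(findings):.1f}% remediated within SLA")
```

For a FedRAMP-style requirement like Relativity's, the target for this number on exploitable risks would simply be 100%.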

Speed and Accuracy of Responding to Security Assessments/Questionnaires: Grip Security highlights the need to quickly and accurately respond to self-assessments, audited frameworks (like SOC 2, ISO 27001), and detailed security questionnaires from customers. The speed of these assessment processes, accelerated through readily available data (e.g., mean time to remediation, confirmation of no open exploitable findings), can serve as a KPI.

Confidence in Risk Reporting for Leadership/Auditors: Jellyfish uses risk models, supported by reliable data on reachable vulnerabilities and other inputs, to report risk predictions to their C-suite and board. This ability to substantiate claims and demonstrate control is a valuable outcome for compliance and governance.

SBOM and VEX Document Generation Capability: Compliance frameworks often require Software Bills of Materials (SBOMs) and Vulnerability Exploitability eXchange (VEX) documents. The ability to generate these quickly, accurately, and in standard formats is a functional capability supporting compliance.

Developer Experience (DevEx)

A key aspect of modern AppSec programs is ensuring that security doesn't hinder developer productivity but instead becomes an enabler. KPIs for DevEx might focus on how easily developers can work with security tools and processes.

Time Spent on Security-Related Tasks: Jellyfish tracks “interrupt work” related to unplanned activities, including handling SCA findings. Reducing the time developers (or security engineers) spend on this type of work due to improved processes or tooling can indicate better DevEx. One anonymous Endor Labs customer uses the number of Jira tickets as a proxy for how much burden is placed on Engineering.

Reduction in Cognitive Load: While difficult to measure directly, Jellyfish looks for indicators of blockers and disruptions (such as permissions issues or CI configuration changes causing build failures) using their own tool, an engineering management platform. For example: Did developers encounter permissions issues? Did a CI configuration change cause a build failure that had to be debugged? Tracking the volume of alerts developers need to handle, or the time spent investigating false positives, could also be relevant metrics.

Impact on Build/Development Cycles: Jellyfish uses indicators of blockers and disruptions for a second purpose: to determine whether a tool is having a positive or negative impact on Engineering.

Security Team Intervention Frequency: Relativity considers it a "minor failure" if a developer has to reach out to the security team with a question about a finding, and Grip Security aims for engineers to take action without additional help. The volume of inquiries or questions would help you measure how this is going.

Reduction in False Positives: A low false positive rate shows that developer time is not being wasted on non-issues. (Also see this KPI in the “Risk Tracking” section.)

Developer Adoption and Engagement with Security Tools: Both Starburst and Jellyfish see tool adoption as a key indicator of positive sentiment and increased probability of meeting other goals. A powerful way to measure engagement is through Security Champions program metrics.

Risk Tracking

Effective risk tracking means understanding the current state of security risks, where they originate, and how they evolve. KPIs in this area focus on visibility and accuracy.

Mean Time to Detect (MTTD): The average time it takes to discover a vulnerability from the moment it's introduced. (Pairs nicely with MTTR in “Risk Reduction”.)

Vulnerability Density: The number of vulnerabilities found per unit of code (e.g., per 1,000 lines) can help normalize risk across projects of different sizes.
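The normalization in this KPI is simple division, as in this minimal sketch (the project figures are hypothetical):

```python
def vulnerability_density(vuln_count: int, lines_of_code: int) -> float:
    """Vulnerabilities per 1,000 lines of code (KLOC)."""
    return vuln_count / (lines_of_code / 1000)

# Two hypothetical projects with the same raw finding count:
# the smaller project carries much more risk per line of code.
print(vulnerability_density(12, 48_000))  # 0.25 per KLOC
print(vulnerability_density(12, 6_000))   # 2.0 per KLOC
```

This is why density is more useful than a raw count when comparing projects of different sizes.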

Total Number of Reachable Vulnerabilities: Starburst tracks the total number of reachable vulnerabilities, and whether that number is trending up or down, because it’s a good proxy for how much new risk is entering the codebase.

Reduction in False Positives: Many teams track how well they’re able to identify false positives to determine if security tools are well-tuned. (Also see this KPI in the “Developer Experience” section.)

Accuracy of Risk Prioritization: Measuring how well the AppSec program identifies and prioritizes the most significant risks can also show the effectiveness of a tool or program.

Organizational Risk Levels: Relativity has organization-wide visibility into risk levels to help them see outliers and prompt conversations about security posture.

Note: What you don’t see in this list is “total number of vulnerabilities.” While many programs do track this metric, beware: many application security tools aspire to identify as many potential vulnerabilities as possible, and the keyword is potential. Tracking the total number of risks they identify doesn’t provide meaningful information about your application’s security posture.

Risk Reduction

The ultimate goal for many AppSec programs is to reduce the organization's exposure to security threats. KPIs for risk reduction measure the effectiveness of preventative and remediation efforts.

Threat Modeling Coverage: Track the percentage of new features that are analyzed for security flaws before a single line of code is written, preventing entire classes of vulnerabilities from ever entering the system.

Preventing Risks from Getting into Production: Jellyfish, Starburst, and Relativity all use policies to block PRs with risky findings. Measuring the number or percentage of risky dependencies prevented from entering the codebase shows the effectiveness of preventative controls like policies.

Mean Time to Resolution (MTTR): MTTR measures how quickly identified risks are addressed, demonstrating efficiency in reducing the window of exposure. Similarly, Jellyfish measures their ability and time to mitigate incidents.
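As an illustration, MTTR can be computed from finding records like this. The data and field names are hypothetical; most security tools report this metric directly, so treat this as a sketch of the underlying calculation.

```python
from datetime import datetime
from statistics import mean

def mttr_days(findings):
    """Mean time to resolution in days, over resolved findings only."""
    durations = [
        (f["resolved"] - f["detected"]).total_seconds() / 86400
        for f in findings
        if f.get("resolved")
    ]
    return mean(durations) if durations else None

# Hypothetical findings: two resolved (3 and 7 days), one still open
findings = [
    {"detected": datetime(2025, 3, 1), "resolved": datetime(2025, 3, 4)},
    {"detected": datetime(2025, 3, 2), "resolved": datetime(2025, 3, 9)},
    {"detected": datetime(2025, 3, 5), "resolved": None},  # open, excluded
]
print(mttr_days(findings))  # 5.0
```

MTTD from the “Risk Tracking” section is computed the same way, substituting introduction and detection timestamps.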

Percentage of High-Priority Risks Remediated First: Relativity tracks whether they are fixing the highest risks first. Prioritizing and successfully remediating critical and high-severity vulnerabilities demonstrates effective risk reduction strategy.

Reduction in Risks Requiring Remediation: Measuring the overall reduction in the volume of actionable findings is a direct measure of reduced workload and potentially reduced risk.

What About Business Metrics?

While the previous four categories are vital for running an effective program, this category focuses on translating AppSec activities into the language of the business: money, risk, and growth. These are the KPIs you bring to the CFO, the CEO, and the board to justify your budget and demonstrate your program's strategic value.

Many of these require you to know your “cost per incident,” which can be estimated from industry averages and analyst reports, or calculated as Direct Costs + Labor Costs + Business Impact Costs.

Reduction in Time to Close Deals with Security Requirements: Tracking the time saved (e.g., "We've reduced the security review stage of the sales cycle by an average of 10 days") is a powerful metric that directly ties AppSec to faster revenue recognition. You might also track % of deals won where security was key, but that can be harder to determine since it’s unlikely to be tracked by your sales team.

Total Financial Risk Exposure: Instead of reporting "we have 50 critical open vulnerabilities," you can report "we are currently carrying an estimated $2.5M in financial risk exposure from our open critical vulnerabilities." This is calculated by multiplying the number of open critical issues by your established "Cost Per Incident." This metric helps leadership make informed decisions about risk appetite and resource allocation in a language they understand perfectly.

AppSec Program Return on Investment (ROI): You may be able to calculate the value your team brings to the company with a formula like this one:

Value = (Number of critical vulnerabilities blocked from production) x (Estimated cost per incident) + (Avoided compliance fines) + (Value of accelerated sales deals)
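Putting the value formula into code makes the inputs explicit. This is a minimal sketch with made-up numbers; the post gives only the value formula, so the program-cost denominator used to turn value into an ROI percentage is an added assumption.

```python
def appsec_roi(blocked_criticals, cost_per_incident,
               avoided_fines, accelerated_deal_value, program_cost):
    """ROI of an AppSec program based on the value formula above.

    Value = blocked criticals x cost per incident
            + avoided fines + value of accelerated deals.
    ROI expresses that value relative to program cost (assumption).
    """
    value = (blocked_criticals * cost_per_incident
             + avoided_fines + accelerated_deal_value)
    return (value - program_cost) / program_cost

# Hypothetical inputs: 8 criticals blocked at $250k estimated cost each,
# $100k in avoided fines, $300k in accelerated deals, $1.2M program cost
print(f"{appsec_roi(8, 250_000, 100_000, 300_000, 1_200_000):.0%}")  # 100%
```

With these (hypothetical) numbers, the program generates $2.4M in value against $1.2M in cost, i.e., every dollar spent returns a dollar of net value.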

More resources

Fireside Chat: A Different Way to Threat Model
Video

Join us for an insightful discussion with Adam Shostack (President of Shostack & Associates, author of many threat modeling books), a renowned expert in threat modeling, as he explains the basics and importance of threat modeling in security. Learn about the four fundamental questions of threat modeling, its application in agile and lean environments, and tips for keeping it lightweight and effective. Adam also shares his journey into the field, his teaching experiences, and how to start a threat modeling program.

The Four Question Framework for Threat Modeling
Video

Adam Shostack is one of the best known thought leaders and instructors in threat modeling. At the October 2025 LeanAppSec Live, we invited him to deliver a lightning talk on the four question framework.

Discover how to efficiently incorporate threat modeling into your security processes without extra budget or headcount.

Applying Lean Principles to Application Security
Blog

What is LeanAppSec, and how can you use it?