Findings from security tests must be triaged and the outcomes persisted/documented to:
- Prevent re-analysis of known issues in subsequent test runs
- Track accepted risks vs. false positives
- Enable consistent decision-making across teams

At this maturity level, a simple tracking system suffices: tools need only distinguish between "triaged" and "untriaged" findings, without complex categorization. Some tools refer to this as "suppression" of findings.

Samples for false positive handling:
- OWASP Dependency-Check suppression files
- Kubescape with VEX
- OWASP DefectDojo risk acceptance and false positive handling
Risk: As false positives occur during each test run, all vulnerabilities might be ignored, especially if tests are automated and run daily.
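A minimal triage store can be sketched as follows. This is an illustrative example, not a specific tool's API: findings get a stable fingerprint so the same issue is recognized across runs, decisions are persisted to a JSON file (the file name `triage.json` and the field names `tool`, `rule`, `location` are assumptions), and only untriaged findings are surfaced to a human.

```python
import hashlib
import json
from pathlib import Path

TRIAGE_FILE = Path("triage.json")  # hypothetical persisted triage store

def fingerprint(finding: dict) -> str:
    """Stable ID so the same finding is recognized across test runs."""
    key = f"{finding['tool']}|{finding['rule']}|{finding['location']}"
    return hashlib.sha256(key.encode()).hexdigest()

def load_triaged() -> dict:
    return json.loads(TRIAGE_FILE.read_text()) if TRIAGE_FILE.exists() else {}

def triage(finding: dict, status: str) -> None:
    """Persist a decision, e.g. 'false_positive' or 'accepted_risk'."""
    triaged = load_triaged()
    triaged[fingerprint(finding)] = status
    TRIAGE_FILE.write_text(json.dumps(triaged, indent=2))

def untriaged(findings: list[dict]) -> list[dict]:
    """Only findings without a persisted decision need human attention."""
    triaged = load_triaged()
    return [f for f in findings if fingerprint(f) not in triaged]
```

Because decisions are keyed by fingerprint rather than by scan run, a finding triaged once stays suppressed in every subsequent run.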
Vulnerabilities with severity high or higher are added to the quality gate.
Risk: Vulnerabilities with severity high or higher are not visible.
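A severity-based quality gate can be sketched as a small script run in the build pipeline. The severity names and finding fields below are assumptions; the idea is simply that the build fails (non-zero exit code) when findings at or above the threshold exist.

```python
import sys

SEVERITY_ORDER = ["low", "medium", "high", "critical"]

def gate(findings: list[dict], threshold: str = "high") -> list[dict]:
    """Return the findings that violate the quality gate."""
    limit = SEVERITY_ORDER.index(threshold)
    return [f for f in findings if SEVERITY_ORDER.index(f["severity"]) >= limit]

def main(findings: list[dict]) -> None:
    blocking = gate(findings)
    for f in blocking:
        print(f"BLOCKING {f['severity']}: {f['title']}")
    # Non-zero exit code fails the build step on typical CI servers.
    sys.exit(1 if blocking else 0)
```

Lowering the `threshold` parameter to `"medium"` or `"low"` implements the stricter quality gates described later in this section.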
Vulnerabilities are simply visualized.
Risk: The security level of a component is not visible. Therefore, the motivation to enhance its security is missing.
Implement a simple risk-based prioritization framework for vulnerability remediation based on the accessibility of the applications.
Risk: The overwhelming volume of security findings from automated testing tools might lead to findings being ignored.
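One way to sketch such a prioritization: weight each finding's severity score by how reachable the affected application is. The weight values and the `internet`/`intranet`/`internal` categories are illustrative assumptions, not a standard.

```python
# Hypothetical accessibility weights: internet-facing applications first.
ACCESSIBILITY_WEIGHT = {"internet": 3, "intranet": 2, "internal": 1}

def priority(finding: dict, app: dict) -> float:
    """Combine a CVSS-like severity score (0-10) with app reachability."""
    return finding["severity_score"] * ACCESSIBILITY_WEIGHT[app["accessibility"]]

def remediation_order(items: list[tuple[dict, dict]]) -> list[tuple[dict, dict]]:
    """items: (finding, app) pairs, returned most urgent first."""
    return sorted(items, key=lambda fa: priority(*fa), reverse=True)
```

Note that with these weights, a medium finding on an internet-facing application can outrank a high finding on an internal-only one, which is exactly the accessibility-based trade-off the framework is meant to make explicit.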
Validating Findings by Security Engineers Pros:
Validating Findings by Security Engineers Cons:
Pushing Findings Directly to Product Teams Cons:
Risk: Not integrating vulnerability handling into the development process may result in product teams ignoring findings.
Vulnerabilities are tracked in the team's issue tracking system (e.g. Jira).
Risk: Reading the console output of the build server to search for vulnerabilities can be difficult. Also, checking a vulnerability management system might not be part of a developer's daily routine.
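Mapping a scanner finding onto an issue can be sketched as below. The payload shape follows the Jira REST API "create issue" convention (`fields`, `project`, `summary`, `issuetype`); the project key `SEC`, the finding fields, and the label scheme are assumptions, and the actual HTTP call to the issue tracker is omitted.

```python
def to_issue(finding: dict, project_key: str = "SEC") -> dict:
    """Map a scanner finding onto a Jira-style 'create issue' payload."""
    return {
        "fields": {
            "project": {"key": project_key},
            "summary": f"[{finding['severity'].upper()}] {finding['title']}",
            "description": finding.get("description", ""),
            "issuetype": {"name": "Bug"},
            # Labels let teams filter security work in their normal backlog views.
            "labels": ["security", finding["tool"]],
        }
    }
```

Creating issues where developers already work daily addresses the risk above: findings appear in the backlog instead of only in build logs or a separate vulnerability management system.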
The protection requirements for an application should consider, for example, the confidentiality, integrity, and availability needs of the data it processes.
Risk: Not defining the protection requirements of applications can lead to wrong prioritization and delayed remediation of critical security issues, increasing the risk of exploitation and potential damage to the organization.
Vulnerabilities with severity medium are added to the quality gate.
Risk: Vulnerabilities with severity medium are not visible.
For known vulnerabilities, a process to estimate the exploitability of a vulnerability is recommended. Implementing a security culture, including training, office hours, and security champions, can help integrate security scanning at scale. Such activities help teams understand why a vulnerability is potentially critical and needs handling.
Risk: Maintaining false positives in each tool separately causes a high workload. In addition, correlating the same finding across different tools is not possible.
Findings are visualized per component/project/team.
Risk: The vulnerabilities from different tools are not correlated, so an overview of the overall security level per component/project/team is missing.
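A per-component overview can be sketched by deduplicating findings across tools before counting them. The dedup key (`component`, `rule`, `location`) is an assumed convention; real correlation usually needs fuzzier matching, but this shows the shape of the aggregation behind such a visualization.

```python
from collections import Counter

def findings_per_component(findings: list[dict]) -> Counter:
    """Count cross-tool-deduplicated findings per component for a dashboard."""
    seen: set[tuple] = set()
    counts: Counter = Counter()
    for f in findings:
        # Two tools reporting the same rule at the same location count once.
        key = (f["component"], f["rule"], f["location"])
        if key in seen:
            continue
        seen.add(key)
        counts[f["component"]] += 1
    return counts
```

The resulting counts can be rendered as a simple bar chart per component/project/team, which is usually enough at this maturity level.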
Vulnerabilities include the test procedure to give staff from operations and development the ability to reproduce them. This enhances the understanding of vulnerabilities, and therefore the fixes have higher quality.
Risk: Vulnerability descriptions are hard for operations and development staff to understand.
All vulnerabilities are added to the quality gate.
Risk: Vulnerabilities with severity low are not visible.