To optimize software development in today's fast-paced economy, teams need more tool automation than ever before. One of the most productive technologies to adopt is static analysis, which must balance the ability to find important defects against analysis time.
After the automated analysis runs, a human must interpret each reported warning to determine whether action is warranted. The criteria for judging warnings vary significantly with the role of the analyst, the security risk, the nature of the defect, the deployment environment, and many other factors. Given these considerations, it can be difficult to decide how best to configure even a single tool.
This paper presents a model for computing the value of using a static analysis tool. Using inputs such as engineering effort, the cost of an exploited security vulnerability, and some easily measured tool properties, the model lets users make rational decisions about how best to deploy static analysis.
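To make the kind of trade-off concrete before the model is developed, consider a back-of-the-envelope calculation. The sketch below is purely illustrative: the function, its parameters, and the formula are assumptions for exposition, not the model this paper presents.

```python
# Illustrative sketch only. The parameter names and the formula are
# hypothetical, chosen to show the shape of the trade-off between
# triage effort, defects avoided, and residual security risk.

def net_value(true_positives, false_positives,
              triage_cost_per_warning, avoided_defect_value,
              expected_exploit_cost, escape_probability):
    """Estimate the net dollar value of running a static analysis tool.

    benefit  : value of the real defects the tool finds and the team fixes
    triage   : engineering effort spent reviewing every reported warning
    residual : expected cost of vulnerabilities the tool fails to find
    """
    benefit = true_positives * avoided_defect_value
    triage = (true_positives + false_positives) * triage_cost_per_warning
    residual = escape_probability * expected_exploit_cost
    return benefit - triage - residual

# Example: 40 true and 160 false warnings, $50 to triage each warning,
# $2,000 saved per real defect fixed, and a 10% chance that a missed
# vulnerability is exploited at a cost of $100,000.
print(net_value(40, 160, 50.0, 2000.0, 100000.0, 0.10))  # → 60000.0
```

Even this toy version shows why a single configuration is hard to choose: a noisier setting that raises both true and false positives can either gain or lose value depending on the triage cost and the exploit cost, which is exactly the tension the model quantifies.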