In a September 2016 Financial Times interview conducted from a Moscow hotel, Edward Snowden said, “We are living through a crisis in computer security the likes of which we’ve never seen.”
He’s right. On a global basis, the total number of reported cybersecurity incidents has increased from 3.4 million in 2009 to 59.1 million in 2015, according to PwC. Cybersecurity Ventures similarly predicts that annual global cybercrime costs will grow from $3 trillion USD in 2015 to $6 trillion USD by 2021.
Traditional Application Protection Is Not Up to the Task
Application protection as a separate effort or discipline is relatively new. The first attempts at creating application defenses were limited to two approaches:
- Write more secure software code. Ensure the software is as secure as possible before going live through a combination of programmer training; software security testing tools, known as SAST and DAST; and penetration testing teams. (A brief illustration of the kind of flaw these tools catch follows this list.)
- Install a Web Application Firewall (WAF). WAFs run on a separate server from the application, and because they inspect traffic from outside the application itself, they are notoriously inaccurate.
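The source does not define these approaches further, but a minimal sketch may help make the first one concrete. Assuming a Java application (the `AccountLookup` class, `accounts` table, and column names here are hypothetical), the kind of flaw that SAST tools flag in source code, and DAST tools probe for at runtime, looks like this:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class AccountLookup {

    // Vulnerable: user input is concatenated directly into the SQL string.
    // A SAST tool flags this pattern during static analysis; a DAST tool
    // finds it at runtime by sending payloads such as "' OR '1'='1".
    static ResultSet findAccountUnsafe(Connection db, String userId) throws SQLException {
        Statement stmt = db.createStatement();
        return stmt.executeQuery("SELECT * FROM accounts WHERE id = '" + userId + "'");
    }

    // Fixed: a parameterized query keeps the input out of the SQL grammar,
    // so the attacker's payload is treated as data, not as a command.
    static ResultSet findAccountSafe(Connection db, String userId) throws SQLException {
        PreparedStatement stmt = db.prepareStatement("SELECT * FROM accounts WHERE id = ?");
        stmt.setString(1, userId);
        return stmt.executeQuery();
    }
}
```

Finding and fixing every such flaw before go-live is exactly the labor-intensive work the first approach demands.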
Both approaches are extremely labor-intensive, expensive, and largely ineffective.
As the number of applications and the sheer volume of known vulnerabilities continue to grow – 2 billion vulnerable components were downloaded from software libraries in 2015, according to Sonatype – writing better code is a losing proposition.
Likewise, WAFs are costly to purchase, difficult to operate, and generate a high level of false positives that must be investigated.
A 2014 NSS Labs report comparing popular WAFs found that the average Total Cost of Ownership exceeded $5 per Connection per Second (CPS) and that the products produced an average false positive rate of 0.77%. That translates into millions of false positives per month for a high-transaction-volume organization, along with significant labor costs and lost productivity from the resulting investigations. A 2015 Ponemon Institute report estimates that an average of 395 hours is wasted each week detecting and containing malware because of false positives and false negatives.
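To see how a sub-1% error rate becomes millions of alerts, consider a back-of-the-envelope sketch. The 1,000 requests-per-second traffic volume is an illustrative assumption, not a figure from the NSS Labs report, and the math treats the 0.77% rate as applying per request:

```java
public class FalsePositiveMath {
    public static void main(String[] args) {
        // Assumed traffic profile for a high-volume site (illustrative only);
        // the 0.77% false positive rate is the NSS Labs average cited above.
        double requestsPerSecond = 1_000;
        double falsePositiveRate = 0.0077;
        double secondsPerMonth = 60 * 60 * 24 * 30;

        double requestsPerMonth = requestsPerSecond * secondsPerMonth;
        double falsePositivesPerMonth = requestsPerMonth * falsePositiveRate;

        // Prints roughly 2.6 billion requests/month and
        // about 20 million false positives/month.
        System.out.printf("%,.0f requests/month -> %,.0f false positives/month%n",
                requestsPerMonth, falsePositivesPerMonth);
    }
}
```

Even if each alert took only seconds to triage, a queue of that size is far beyond what any security team can investigate.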
The Role of Time
Time plays two key roles in application security: 1) the length of time a vulnerability remains unprotected; and 2) the length of time required to mitigate the flaw. Both have significant cost and risk implications.
Oracle, Microsoft, other software developers, and the security community routinely report serious flaws (an average of one every 100 hours in Oracle’s Java), prompting critical patch updates for immediate action. Each new update starts a race between attackers and defenders: can the patch be applied before an attacker exploits the newly publicized weakness?
“It takes, on average, three to six days for an attacker to successfully exploit a vulnerability, more than 250 days to discover an attack is underway, and an additional 82 days to contain the attack.” – Ponemon Institute 2015 Cost of Data Breach Study: Global Analysis
The size of critical patch updates is steadily rising – from an average of 128 fixes per update in 2014 to more than 250 in 2016 – so it’s not surprising that a large number of known vulnerabilities are never corrected.
The constraints of limited manpower and stressed budgets are compounded by the risks of reopening old source code, where the original developers are gone and documentation may not be up-to-date or accurate.
“99% of all successful attacks through 2020 will exploit a vulnerability known for at least one year.” – Gartner 2016 Top Predictions, June 2016
The Role of Protection
The early focus of application security programs was to protect only a small number of applications with the highest levels of security risk.
Now, the best practice required by regulators is to protect all applications. A similar expansion is underway in the range of vulnerability severities against which applications should be protected.