If you look at the intersection of public policy and cybersecurity, it appears we are on the verge of a fundamental breakthrough that could result in greater cyber safety. New laws and regulations in Europe and the United States are designed to force changes to the way organizations protect information.
With all these actions converging, it’s easy to conclude that companies will soon make a quantum leap in protecting valuable data and that cyber attacks will swiftly decline. Without a doubt, these government actions will help form a stronger baseline, but they are not enough to fix the fundamental flaws in the way we approach cybersecurity today.
A Good Start
Enforcement of the European Union’s General Data Protection Regulation (GDPR) begins next month. Any business that holds the personal data of EU citizens, no matter where the business is based or its systems are located, must protect consumer information “by design.” Regulators must be notified of breaches within 72 hours; compliance must be certified annually; and audits are possible. Organizations that fail to comply face stiff fines of up to 20 million EUR or four percent of annual global revenue, whichever is higher.
Many GDPR concepts can also be found in the New York Department of Financial Services (NYDFS) regulations for banks, insurance companies, and other institutions the department oversees. The New York rules took effect in 2017 and require breach notices within 72 hours, annual cybersecurity plans, and the appointment of a chief information security officer. Penalties for violating the regulations, however, are not as severe as those in the GDPR.
Meanwhile, newly proposed cybersecurity rules from the US Securities and Exchange Commission (SEC) and several states are far less stringent than either the GDPR or the NYDFS regulations.
More Rules Do Not Equal Better Security
EU privacy regulations and U.S. breach notification laws have been around for nearly 20 years, yet they failed to deter Uber from concealing a major breach, and the payoff to the attackers behind it, for a year. Yahoo! was not prodded into adopting best-practice cybersecurity controls that might have prevented data on three billion user accounts from being stolen. Equifax left a known Apache Struts flaw unpatched despite laws and SEC guidance urging public companies to be diligent in protecting consumer data.
If the GDPR had been enforceable in 2017, Equifax would have faced fines of up to $125 million based on its 2016 revenue; Uber, a whopping $260 million. Only time will tell whether the threat of nine-digit fines will change company behavior. What we do know is this: the largest barriers to improved cybersecurity can’t be legislated away.
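To make the arithmetic behind those estimates concrete, here is a minimal sketch of the GDPR fine ceiling: the greater of 20 million EUR or four percent of annual global revenue. The revenue figures (roughly $3.1 billion for Equifax and $6.5 billion for Uber in 2016) and the euro-to-dollar parity are simplifying assumptions for illustration, not legal calculations.

```python
# Illustrative sketch of the GDPR maximum-fine formula:
# the greater of a 20 million EUR flat cap or 4% of annual global revenue.
# Revenue figures are approximate 2016 numbers (assumptions, not audited
# financials), and EUR/USD parity is assumed for simplicity.

GDPR_FLAT_CAP_EUR = 20_000_000
GDPR_REVENUE_RATE = 0.04

def max_gdpr_fine(annual_global_revenue: float) -> float:
    """Return the GDPR fine ceiling: max of the flat cap or 4% of revenue."""
    return max(GDPR_FLAT_CAP_EUR, GDPR_REVENUE_RATE * annual_global_revenue)

approx_2016_revenue = {
    "Equifax": 3.1e9,  # ~$3.1B reported for 2016 (assumed)
    "Uber": 6.5e9,     # ~$6.5B net revenue reported for 2016 (assumed)
}

for company, revenue in approx_2016_revenue.items():
    fine = max_gdpr_fine(revenue)
    print(f"{company}: up to {fine / 1e6:.0f}M")
# Equifax: up to 124M  (close to the ~$125M figure above)
# Uber: up to 260M
```

For companies of this size, the four-percent prong always dominates the flat 20 million EUR cap, which is why large firms face nine-digit exposure.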
Fixing Old Flaws with New Tricks
There are any number of ways organizations try to improve cybersecurity. DevSecOps advocates claim integrated development is the way forward, but few organizations have successfully operationalized the concept. Most still operate in separate orbits, with AppSec and DevOps teams vying for resources and priorities. Flawed apps go into production, and security patches are delayed (if applied at all) because of competing interests.
Continuous development, too, sounds like a good way to address security flaws. More frequent releases offer more opportunities to fix bugs, right? Not so fast. CAST Research Labs reports that “Java-EE applications released more than 6 times per year had significantly higher CWE densities than those released less than 6 times per year.”
Here’s the rub: none of these solutions capitalizes on the promise of emerging technologies to significantly improve cyber protections. A recent study from the CyberEdge Group underscores what is keeping us from making meaningful headway against cyber attacks.
When 1,200 security professionals in 17 countries were asked about “the barriers to establishing effective defenses,” the number one answer was “lack of skilled personnel.” Asked “what is preventing…patching systems more rapidly?” respondents ranked a lack of skilled personnel second. The number one answer: “infrequent windows to take systems offline.”
All three responses point to an over-reliance on people to solve issues that can be better addressed with automation. With NIST tracking more than 100,000 known software vulnerabilities, there simply are not enough hands on keyboards to address the growing number of flaws inherent in a software-driven economy.
Traditional and emerging approaches that rely on instrumentation and web filters in application security tools are only incremental improvements on 30-year-old network protections. They are error-prone, they slow application performance, and they are labor-intensive, aggravating staffing issues and failing to deliver on the promise of improved security. Stretched-thin staff must still find, fix, and manually patch flawed code.
On the other hand, newer virtualization and deterministic approaches not only increase protection but also reduce the effort required to operate and maintain applications. They eliminate the need to constantly rewrite code, take applications offline for patching, or tune application security tools. Existing staff can spend more time on higher-value activities that have a positive impact on security rather than on routine manual tasks.
It’s well past time to embrace new ways of achieving strong data security to meet the outcomes policymakers expect and consumers deserve.
This article first appeared in Security Info Watch.