Project Zero’s team mission is to “make zero-day hard”, i.e. to make it more costly to discover and exploit security vulnerabilities. They primarily achieve this by performing their own security research, but at times they also study external instances of zero-day exploits that were discovered “in the wild”. These cases provide an interesting glimpse into real-world attacker behavior and capabilities, in a way that nicely augments the insights they gain from their own research.
Today, they shared their tracking spreadsheet for publicly known cases of detected zero-day exploits, in the hope that this can be a useful community resource:
Spreadsheet link: 0day “In the Wild”
The data described in the spreadsheet is nothing new, but they think that collecting it together in one place is useful.
For example, it shows that:
On average, a new “in the wild” exploit is discovered every 17 days (but in practice these often clump together in exploit chains that are all discovered on the same date);
Across all vendors, it takes 15 days on average to patch a vulnerability that is being used in active attacks;
A detailed technical analysis of the root cause of the vulnerability is published for 86% of listed CVEs;
Memory corruption issues are the root cause of 68% of listed CVEs.
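The summary statistics above are straightforward to reproduce from the spreadsheet. As a minimal sketch, assuming hypothetical rows with discovery date, patch date, and a memory-corruption flag (the real sheet has many more entries and columns), the averages and shares can be computed like this:

```python
from datetime import date

# Hypothetical sample rows loosely modeled on the spreadsheet's columns;
# these dates and CVE IDs are placeholders, not real entries.
entries = [
    {"cve": "CVE-0000-0001", "discovered": date(2019, 1, 10),
     "patched": date(2019, 1, 22), "memory_corruption": True},
    {"cve": "CVE-0000-0002", "discovered": date(2019, 2, 14),
     "patched": date(2019, 3, 4), "memory_corruption": True},
    {"cve": "CVE-0000-0003", "discovered": date(2019, 3, 1),
     "patched": date(2019, 3, 11), "memory_corruption": False},
]

# Average gap in days between successive discovery dates
# (the "new exploit every N days" figure).
dates = sorted(e["discovered"] for e in entries)
gaps = [(b - a).days for a, b in zip(dates, dates[1:])]
avg_gap = sum(gaps) / len(gaps)

# Average time from discovery to patch, across all entries.
patch_times = [(e["patched"] - e["discovered"]).days for e in entries]
avg_patch = sum(patch_times) / len(patch_times)

# Share of entries whose root cause is memory corruption.
mem_share = sum(e["memory_corruption"] for e in entries) / len(entries)

print(avg_gap, avg_patch, mem_share)
```

Note that discovery-date gaps understate clustering: exploit chains found on the same date contribute zero-day gaps, pulling the average down.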
This data poses an interesting question: what is the detection rate of 0day exploits?
In other words, at what rate are 0day exploits being used in attacks without being detected?
This is a key “unknown parameter” in security, and how you model it will greatly inform your views, plans, and priorities as a defender.
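To make the modeling question concrete, here is a deliberately toy back-of-the-envelope sketch (my illustration, not anything from the spreadsheet): if each in-the-wild 0day is independently detected with probability p, then observing n detected exploits implies roughly n / p exploits actually in use.

```python
def implied_total(n_detected: int, detection_rate: float) -> float:
    """Implied true number of in-the-wild 0days, under the (strong)
    assumption that each one is detected independently with the
    given probability."""
    return n_detected / detection_rate

# One detection every ~17 days is roughly 21 exploits per year;
# vary the assumed detection rate to see how the picture changes.
for p in (0.1, 0.25, 0.5, 0.9):
    print(f"detection rate {p:.0%}: ~{implied_total(21, p):.0f} 0days/year")
```

Even this crude model shows why the parameter matters: a 90% detection rate implies the observed data is nearly the whole picture, while a 10% rate implies an order of magnitude more undetected activity.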