It started with an innocent, semi-tongue-in-cheek question on #offsec last week.

anyone root any machines with dirtycow in the labs yet? :)

The DirtyCOW Linux 0day exploit (CVE-2016-5195) had just hit the public consciousness, despite the underlying vulnerability having been in the kernel source for the past nine years. (It is consequently probably the single biggest Android vulnerability in existence, and one that will likely never get patched on most devices.)

The basic argument against using it wasn't that it was against OffSec's rules, but:

"when doing a real pentest, you can't fault the IT team for not patching brand new vulnerabilites"

That really struck a nerve with me, because it implied that what I do as a security consultant is run around blaming or faulting people, rather than helping companies find their weak spots in order to fix them.

That goes against the main premise with which I have gone into every security consulting engagement: I want the people we interact with to know that we are not there to blame anyone, but to uncover issues they might not have visibility into. Or issues that IT is fully aware of but is not getting the resources to fix until some external party comes in and validates the need for more money, time, or people.

The retort went something along the lines of:

Executives only read the executive summary, will see "x out of y systems compromised", and will then proceed to fire people.
And that I should find some other way to root the machine and report on that.

This felt absurd to me at first glance. But now, with more time to ponder it, I can see that this can be, and probably is, a real worry in some organizations.

But that's a systemic problem in management.

It also paints the situation as really black and white, which is quite often what happens when you try to argue any subject rationally over text-based chat. Chat strips out the nuance required to make sure both parties' assumptions about the situation are aligned. For example, if the other person has only ever seen pentest reports that did not put the number of vulnerabilities on any sort of scale against other companies in the same vertical, it might be quite alarming to see 99 CRITICAL VULNERABILITIES FOUND!

But if you give some background information, such as "in an organization of 100-300 machines/people we usually expect to find one or two machines with tens of critical vulnerabilities, which can skew the statistics", it gives the executive context. They might then understand what exactly "12 out of 53 systems compromised" means, or at least not start summarily terminating the very people who are supposed to fix the situation. (Sidenote: I also hope the executive summary is longer than a few one-liner bullet points.)

If I get a local shell on a server and THEN a local 0day priv-esc lands in my lap, then yes, I will use it.

I will then explain which steps should be taken to mitigate the way I got local access in the first place. The most critical point is usually gaining local execution, and there are many steps you can take to try to prevent that.

In addition, there are mitigations (grsecurity, SELinux, EMET, etc.) that can prevent local execution for certain classes of exploits.
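Some of these are easy to check for. Here is a minimal sketch, assuming a Linux host with the standard /proc and /sys interfaces (which of these pseudo-files exist depends on the kernel and distribution), that reports whether a few common mitigations are active:

```python
#!/usr/bin/env python3
"""Minimal sketch: report whether a few common Linux mitigations are on.

Illustrative only; which of these pseudo-files exist depends on the
kernel and distribution.
"""
from pathlib import Path


def read_first_line(path):
    """Return the first line of a pseudo-file, or None if unreadable."""
    try:
        return Path(path).read_text().splitlines()[0].strip()
    except (OSError, IndexError):
        return None


# ASLR: 2 = full randomization, 1 = partial, 0 = disabled.
aslr = read_first_line("/proc/sys/kernel/randomize_va_space")
print(f"ASLR (randomize_va_space): {aslr or 'unknown'}")

# SELinux: 1 = enforcing, 0 = permissive; the file is absent entirely
# if SELinux is not loaded on this kernel.
selinux = read_first_line("/sys/fs/selinux/enforce")
print(f"SELinux enforcing: {selinux if selinux is not None else 'not loaded'}")

# NX: on x86, the 'nx' CPU flag signals non-executable page support.
try:
    nx = "yes" if "nx" in Path("/proc/cpuinfo").read_text().split() else "no"
except OSError:
    nx = "unknown"
print(f"NX flag present: {nx}")
```

None of these stops every exploit, but each one shrinks the set of exploits that work, which is exactly the layering point below.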

There are tripwires you can set up to know when a compromise happens, even if you cannot prevent it. That should hopefully translate to a faster response to quarantine and remedy the situation. How fast of course depends on what sort of SLAs your IT/security people have for responding to incidents, but that is beside the point. The point is that you cannot rely on one facet of your security posture; you need layers. So me using a 0day to punch through one of those layers does not invalidate all of the other mitigations that could and should be in place.
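As a toy illustration of such a tripwire, here is a minimal file-integrity check. The watched paths and baseline location are made up for the example; real deployments would reach for something like AIDE, auditd, or OSSEC instead.

```python
#!/usr/bin/env python3
"""Toy tripwire: record SHA-256 hashes of sensitive files, then report
any file whose hash no longer matches the recorded baseline.

The watched paths and baseline location are made up for illustration;
production equivalents are tools like AIDE, auditd, or OSSEC.
"""
import hashlib
import json
import sys
from pathlib import Path

WATCHED = ["/etc/passwd", "/etc/shadow", "/etc/sudoers"]  # example targets
BASELINE = Path("/var/lib/tripwire-demo/baseline.json")   # hypothetical path


def hash_file(path):
    """SHA-256 of the file contents, or None if the file can't be read."""
    try:
        return hashlib.sha256(Path(path).read_bytes()).hexdigest()
    except OSError:
        return None


def snapshot():
    return {p: hash_file(p) for p in WATCHED}


if __name__ == "__main__":
    if len(sys.argv) > 1 and sys.argv[1] == "init":
        BASELINE.parent.mkdir(parents=True, exist_ok=True)
        BASELINE.write_text(json.dumps(snapshot(), indent=2))
        print(f"Baseline written to {BASELINE}")
    else:
        baseline = json.loads(BASELINE.read_text())
        changed = [p for p, h in snapshot().items() if h != baseline.get(p)]
        if changed:
            # A real deployment would alert or page here, not just print.
            print("TRIPWIRE: changed files: " + ", ".join(changed))
            sys.exit(1)
        print("No changes detected.")
```

Run it once with `init` to record the baseline, then periodically from cron or a systemd timer; an unexpected hash change on, say, /etc/passwd is your signal to start incident response.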

Using a 0day in an engagement solidifies your hypothetical attack scenarios for phishing awareness and for understanding layered security practices. It means that when I show you a vulnerability in the Nessus scan that "may lead to local unprivileged execution", it effectively becomes "remote code execution with root privileges" on every system that is also affected by a local privilege escalation exploit.

Is it "fair" to use 0day?

I think that anything that adversaries can use is fair game.

But as I said, what is contractually fair may vary. Ethically, I have no qualms about using 0days.

Whether or not people get fired due to the findings of a pentest is up to management. It is not the pentester's job to sugarcoat findings. The only thing we do is give context and raise awareness. 0days happen. Some of them are local priv-escs, some of them are remote code execution (think Shellshock).

Not using one when you find one that fits seems like a wasted opportunity to underline how volatile our notions of "secure" are. Not to blame, but to educate.

And as I said, text-based arguments are quite often futile, given how easily misaligned our perceptions of reality are. Those misalignments are obviously colored by our experiences. If you have only ever experienced lousy management instead of good leadership, and have worked under constant worry of being replaced by outsourcing, downsized, or fired for any and all mistakes, then yeah, I can totally understand why you might hold such a perspective. I have my own experiences as the reporter of faults, and I know how I have conveyed findings, both in writing and in person; as far as I know, none of those led to firings. And no, no 0days dropped into my lap during those engagements.

Thoughts? Tweet at me, shout at me, email me, or comment wherever I have linked this.
