Wednesday, August 12, 2015

How To Bury Your Message - And Get Shot in the Process


Oracle hit the news today for all the wrong reasons. The company's Chief Security Officer, Mary Ann Davidson, wrote a blog post on the topic of "security researchers" who are hired by enterprises - in this case, Oracle's own customers - to go hunting for vulnerabilities in their systems. Because those systems are very often running on Oracle's platforms and products - not just the database, but all the stuff Oracle acquired from Sun, like Solaris and Java - the hired bug-hunters turn their attention to those products, using disassemblers to reverse engineer the code and automated tools to scan for vulnerabilities.

Unfortunately, Davidson chose to bang the drum a little too hard on the reverse-engineering aspect, predictably triggering a response from some "security researchers". Which, in turn, led Oracle to delete the blog post. However, nothing ever dies on the Internet, and so the post can still be read via the Internet Archive's Wayback Machine, here (https://web.archive.org/web/20150811052336/https://blogs.oracle.com/maryanndavidson/entry/no_you_really_can_t).

End-user licence agreements which prohibit tracing, debugging and other reverse engineering techniques do pose something of a problem in the security world. Yes, we ought to honour them, especially since we accepted those terms and conditions. No, we shouldn't use these kinds of techniques to snoop around our suppliers' intellectual property. But the bad guys couldn't give a rat's patootie about these moral quibbles - they're hacking away for fun and profit, and if there's a vulnerability to be found, so much the better.

However, "security researchers" ought to know better. A junior "security researcher" armed with some automated tools for vulnerability is the white hat equivalent of a script kiddy - he knows just enough to be dangerous. This is one of the two points Davidson really ought to have emphasized - hiring this kind of person just encourages them. And here comes the second point: it's counterproductive.

Hiring "security researchers" of this type in an attempt to secure systems is the kind of futile endeavour that the much-misunderstood Kind Canute would have railed against. How many vulnerabilities are there in a complex business system? A handful? A few dozen? Hundreds? You don't know. You'll never know.

So you hire a "security researcher" to hammer on the code with his automated tools and Bingo! He finds one. So what? Out of the unknown n vulnerabilities in that subsystem, you've found one. Which leaves n-1 vulnerabilities - still an unknown number. What are you going to do about them?

The answer is, you're going to deploy a variety of other controls in a defense-in-depth strategy to prevent entire classes of exploits. You'll use a DMZ configuration, an application-layer firewall, an IPS (Intrusion Prevention System) and a whole bunch of other things to make your systems and the business processes they support as resilient as possible. You have to do that.
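
That second line of thinking also applies inside the application itself. A good control kills a whole class of bugs rather than one instance: a parameterised query, for example, removes SQL injection as a category, no matter how many individual injection points a scanner might have found one by one. Here's a minimal Java sketch - the accounts table, column names and method are invented for illustration:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class AccountLookup {

        // One of the "unknown n": user input concatenated straight into SQL.
        //   String sql = "SELECT balance FROM accounts WHERE owner = '" + owner + "'";

        // The class-level fix: a parameterised query. The driver sends the
        // input as data, never as SQL text, so this call site cannot be
        // injected no matter what the caller supplies.
        static ResultSet lookupBalance(Connection conn, String owner) throws SQLException {
            PreparedStatement ps = conn.prepareStatement(
                    "SELECT balance FROM accounts WHERE owner = ?");
            ps.setString(1, owner);
            return ps.executeQuery();
        }
    }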

Searching for, reporting, and patching vulnerabilities one by one is an inefficient strategy which ignores the fundamental asymmetry of the information security business:
The bad guys only have to be lucky once. The defenders have to be lucky every time.

Trying to do exactly the same thing - find vulnerabilities - faster than all the hackers out there is not a sensible strategy, unless and until you've done everything else you can to make your systems and business processes resilient. Otherwise, your resources - time and money - can be better employed elsewhere.

As Davidson explains, Oracle and other software developers already use static source code analysis tools to scan the original source code for vulnerabilities (among other errors). There's not much point in doing it all over again. There's a bit more point to performing dynamic testing against complete systems - that's much more likely to turn up architectural issues and configuration problems, and - as Davidson unfortunately chose to over-emphasise - it doesn't violate the EULA.
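
For the curious, here's the flavour of the difference. A static analyser reads source code; dynamic testing pokes at the running system, and turns up things no source scan can see - a default admin console left deployed, say. A minimal Java sketch - the host name and paths are hypothetical, and it should go without saying that you only point this at systems you're authorised to test:

    import java.net.HttpURLConnection;
    import java.net.URL;

    public class ConfigProbe {
        public static void main(String[] args) throws Exception {
            // Hypothetical test instance - substitute a system you own.
            String host = "https://test.example.com";
            String[] paths = { "/console", "/manager/html", "/admin" };
            for (String path : paths) {
                HttpURLConnection conn =
                        (HttpURLConnection) new URL(host + path).openConnection();
                // Anything other than 401/403/404 suggests a management
                // interface was left reachable - a configuration problem
                // that no scan of the application's source would reveal.
                System.out.println(path + " -> HTTP " + conn.getResponseCode());
            }
        }
    }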

So why do it? Because software companies pay bounties; the vigilante bug-hunter is in it for the money, and a little fame might be nice, too. But if you're going to play that game, do it properly - if you're going to hunt for vulnerabilities, follow through: document each finding and develop a proof-of-concept that shows it really is a vulnerability. Don't just submit the output of an automated scan and sit back with hand outstretched. If you do the whole job properly, then you actually will be a Security Researcher.
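
For the sake of illustration, here's the difference between a scanner's one-line suspicion and doing the whole job: a self-contained demonstration that the finding is real. A minimal sketch, assuming the H2 in-memory database is on the classpath; the vulnerable string concatenation mirrors the pattern every scanner flags:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class InjectionPoC {
        public static void main(String[] args) throws Exception {
            Connection conn = DriverManager.getConnection("jdbc:h2:mem:poc");
            Statement st = conn.createStatement();
            st.execute("CREATE TABLE accounts(owner VARCHAR(50), balance INT)");
            st.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 200)");

            // Attacker-controlled input that should match no rows at all...
            String owner = "nobody' OR '1'='1";
            ResultSet rs = st.executeQuery(
                    "SELECT owner, balance FROM accounts WHERE owner = '" + owner + "'");

            // ...yet the injected OR clause dumps every row. That, not a
            // scanner's log line, is proof of a vulnerability.
            while (rs.next()) {
                System.out.println(rs.getString(1) + ": " + rs.getInt(2));
            }
        }
    }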

(And I hope Mary Ann Davidson enjoys her next job - one where the Marketing Department puts an approval process in place for her blog articles.)