[Discuss] AeroFS
Mike Small
smallm at panix.com
Sun Apr 20 15:33:14 EDT 2014
Richard Pieri <richard.pieri at gmail.com> writes:
> Mike Small wrote:
>> So you're left with only black box testing. No static analysis tools, no
>> runtime memory debuggers, no discussing the problem and the general code
>> quality in public forums, no forking the project and trimming the awful
>> 300,000 lines down to something more manageable with the "exploit
>> mitigation countermeasures" removed (...)
>
> None of these told us about the Heartbleed flaw in OpenSSL. As a
> matter of fact, it was Codenomicon attacking their own servers that
> led to the worldwide revelation. Black box testing worked where open
> source philosophy utterly, completely, catastrophically failed.
Black box testing, combined with access to the source and a license
to fork, worked in a way that now has people improving matters: code
auditing is getting easier and mitigation techniques are being made
to work better. Maybe the next bug will be found by black box testing
again, or maybe it will be noticed by a savvy user running with a
certain malloc debugging flag (there's a sketch of that below).
Browse the source history of OpenBSD's OpenSSL fork and you'll see a
lot of other bad things being fixed. Irrelevant to the point, but
it's also been quite entertaining and educational for me personally
to read about Heartbleed in a way that browsing Patch Tuesday MS
Windows updates never is.
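
To make the malloc-flag point concrete, here is a sketch of the
Heartbleed bug pattern: trusting an attacker-supplied length while
copying out of a heap buffer. It is not OpenSSL's actual code, just
the shape of the bug. Under AddressSanitizer (cc -fsanitize=address
overread.c) the bad memcpy faults immediately; OpenBSD's guard-page
and junk-fill options (MALLOC_OPTIONS=GJ) make this class of bug far
more likely to crash than to quietly leak whatever follows the
buffer on the heap.

  /* Heartbleed-shaped overread: the "peer" claims a length far
   * larger than the data it actually sent. */
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>

  int main(void)
  {
      char *payload = malloc(16);           /* real data: 16 bytes */
      if (payload == NULL)
          return 1;
      memcpy(payload, "hello, world!!!", 16);

      size_t claimed_len = 4096;            /* attacker says 4096 */
      char *reply = malloc(claimed_len);
      if (reply == NULL)
          return 1;

      /* The bug: copy claimed_len bytes out of a 16-byte buffer.
       * Whatever sits after payload on the heap goes to the peer. */
      memcpy(reply, payload, claimed_len);

      fwrite(reply, 1, claimed_len, stdout);
      free(reply);
      free(payload);
      return 0;
  }

The bitter joke, as I read the OpenBSD commit messages, is that
OpenSSL wrapped malloc in its own freelist, which defeated exactly
this kind of tooling; ripping that wrapper out is part of what the
fork is doing.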
See Ilja van Sprundel's talk on problems in X for another example of
access to source being used to good effect:
https://www.youtube.com/watch?v=2l7ixRE3OCw
But was I arguing that free software won't have serious security
problems, or that its bugs will necessarily be found quickly? I call
strawman. I was asking, in particular, how you trust code you can't
see, not making a statement about the quality of free software
vs. proprietary software in general. I question your response, which
seemed to say that black box testing is everything: white box
testing, code scanning, and auditing are obviously useful too. But
mostly I question it because I don't see how black box testing
protects you from purposeful evasion. It's very easy to write code
whose output looks fine 999 runs out of 1000 (see the sketch below).
If an insider leaks that fact to the press, what do you get from the
company except a denial? If you don't have the source in question,
how do you get past he-said-she-said?
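
To show how easy, here is a sketch: a key generator that silently
draws from a 256-value space roughly one run in a thousand. The
names are made up and the trigger is deliberately crude (a real
backdoor would hide it far better); the point is that the weak
output is formatted exactly like the strong output, so no inspection
of outputs from the outside will flag it.

  #include <stdint.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <time.h>

  /* Stand-in for a strong random source. rand() is NOT
   * cryptographic; it only keeps the sketch self-contained. */
  static uint64_t strong_random(void)
  {
      return ((uint64_t)rand() << 32) ^ (uint64_t)rand();
  }

  /* Expand an 8-bit seed into 64 bits that look random but come
   * from a space of only 256 values (splitmix64-style mixer). */
  static uint64_t weak_key_from(uint8_t seed)
  {
      uint64_t z = seed + 0x9e3779b97f4a7c15ULL;
      z = (z ^ (z >> 30)) * 0xbf58476d1ce4e5b9ULL;
      z = (z ^ (z >> 27)) * 0x94d049bb133111ebULL;
      return z ^ (z >> 31);
  }

  /* Roughly 1 call in 1000 returns a key an attacker can recover
   * by trying 256 candidates; the other 999 are honest. Both
   * paths produce identically formatted output. */
  static uint64_t generate_key(void)
  {
      if (rand() % 1000 == 0)
          return weak_key_from((uint8_t)(rand() % 256));
      return strong_random();
  }

  int main(void)
  {
      srand((unsigned)time(NULL));
      for (int i = 0; i < 8; i++)
          printf("%016llx\n", (unsigned long long)generate_key());
      return 0;
  }

Black box testing would need luck or an enormous sample to catch
that, and even a suspicious result can be waved away as a glitch.
With the source in hand, the backdoor is a short diff anyone can
point at.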
With cloud services maybe there's a further question: how do you
validate that the server is running the code they say it is? But I
was thinking more in general. (I don't use cloud services much
myself.)
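
And that question is harder than it looks. Here is a sketch of the
trap, with a made-up hash value: if "validation" means asking the
server for the hash of what it is running, a dishonest server simply
parrots the published answer. Absent some hardware root of trust,
the reply is just a string the operator chooses to send.

  #include <stdio.h>
  #include <string.h>

  /* Hash the vendor published for the audited build (made-up). */
  static const char published_hash[] =
      "9f2c0aa1e47b06d3c5584c1a7e9b2d4f";

  /* A dishonest server's "attestation" handler: it reports the
   * expected hash no matter what binary is actually executing. */
  static const char *attest(void)
  {
      return published_hash;
  }

  int main(void)
  {
      /* Client-side check that any remote party can satisfy. */
      if (strcmp(attest(), published_hash) == 0)
          puts("server claims to run the published build");
      return 0;
  }

Reproducible builds let you check a binary in your own hands against
the source; they say nothing about what is executing on somebody
else's machine.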