ZFS and block deduplication
Edward Ned Harvey
blu-Z8efaSeK1ezqlBn2x/YWAg at public.gmane.org
Wed Apr 27 17:00:20 EDT 2011
> From: discuss-bounces-mNDKBlG2WHs at public.gmane.org [mailto:discuss-bounces-mNDKBlG2WHs at public.gmane.org] On Behalf
> Of Bill Bogstad
>
> > The only difficulty is working up an exploit with a matching hash
> > before 6:25 AM tomorrow.
It's even more difficult than that ... Yes, many files span multiple
blocks, and therefore begin at the start of one block and end in the
middle of another, but the hashes are calculated on a per-block basis,
up to the 128k recordsize. So any file smaller than 128k *might* occupy
a block by itself, but since files are probably being written a whole
bunch at a time, most likely write aggregation is consolidating many
small writes into a single block.
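To make the per-block point concrete, here is a minimal sketch of what
per-record dedup hashing looks like. This is a toy in-memory model, not
ZFS's actual on-disk logic; the function name and the dict standing in
for the dedup table are illustrative only. The point it shows: the hash
covers one record of at most 128k, so what an attacker would need to
collide with depends on how bytes happen to fall into records, not on
whole files.

    import hashlib

    RECORDSIZE = 128 * 1024  # ZFS default recordsize; hashes cover records, not files

    def dedup_blocks(data, recordsize=RECORDSIZE):
        """Toy model: map each record's hash to its payload, storing dupes once."""
        table = {}
        for off in range(0, len(data), recordsize):
            block = data[off:off + recordsize]
            digest = hashlib.sha256(block).digest()
            # A digest already present means the record is stored once
            # and simply referenced again.
            table.setdefault(digest, block)
        return table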
So even if you have a technique for calculating data that will generate
a hash collision, you can't be sure what data you need to collide with,
because that ultimately depends on what activity is taking place on the
target machine at the time the updates are applied... And of course,
simply generating a collision isn't enough to do anything useful (unless
your goal as an attacker is simply to cause random corruption). You have
to generate a specific collision that corrupts the target data in just
the right way to give yourself an exploitable flaw...
And of course, the countermeasure to all of the above is trivial: enable
verification. ;-)
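In ZFS that's the dedup=verify setting (e.g. zfs set dedup=sha256,verify
tank, where the pool name is hypothetical), which falls back to a
byte-for-byte comparison whenever hashes match. Continuing the toy model
above, a rough sketch of the idea:

    import hashlib

    def write_block(table, block):
        """Dedup write with verification: share only on a byte-for-byte match."""
        digest = hashlib.sha256(block).digest()
        existing = table.get(digest)
        if existing is None:
            table[digest] = block   # new hash: store the record
            return "stored"
        if existing == block:
            return "deduped"        # true duplicate: safe to reference
        # Hash collision with different contents: verification catches it,
        # so the colliding block gets written out separately instead of
        # silently replacing the victim's data.
        return "collision detected"

With verify enabled, a collision buys the attacker nothing beyond an
ordinary write; the cost is an extra read-and-compare on every dedup
hit.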