[Discuss] TrueCrypt with SSD
Edward Ned Harvey
blu at nedharvey.com
Fri Aug 26 21:29:42 EDT 2011
> From: Derek Atkins [mailto:warlord at MIT.EDU]
> Sent: Friday, August 26, 2011 11:19 AM
>
> > Simply write a file. Eliminate the possibility of external drive
> > slowdown.
> > time dd if=/dev/zero of=10Gfile bs=1024k count=10240
>
> I did this a few times with various counts and noticed that the
> speed declined significantly once I started writing more data than
> my RAM cache could hold:
>
> [warlord at mocana mocana]$ time dd if=/dev/zero
> of=/home/warlord/TestDataWrite bs=1k count=20000
> 20000+0 records in
> 20000+0 records out
> 20480000 bytes (20 MB) copied, 0.0662049 s, 309 MB/s
> 0.002u 0.063s 0:00.10 60.0% 0+0k 128+40000io 2pf+0w
Hehehe, yes, of course. :-) The number I suggested above was around 10G.
That wasn't based on anything specific, and it may need to be bigger on
your system, depending on your specs. Really, this test should be as large
as you can bear to let it be. But don't go over approximately 50% of the
drive, or else you might start getting hurt by fragmentation, etc.
Hint: any benchmark that completes in 0.06 seconds isn't going to be very
useful. ;-) Try something that runs for at least 5-10 minutes.
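For example, reusing your path from above, the 10G version of the test I
suggested earlier should run for several minutes on a 50 MB/sec disk
(10240 MB / 50 MB/sec is about 3.5 minutes):
time dd if=/dev/zero of=/home/warlord/TestDataWrite bs=1024k count=10240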
> Still, 50MB/s is a SIGNIFICANT reduction in I/O throughput from what I
> think I should be seeing w/o encryption.
You're also using a 1k blocksize. Try increasing that to at least 128k; I
usually say 1024k. Given that "dd" is actually topping your cpu charts,
you're probably only generating your data at 50 MB/s.
Try running dd directly from /dev/zero into /dev/null, and see how your
blocksize affects the rate. That way you can ensure you're at least
running dd efficiently... And then you can write something to disk. Are
you familiar with pv? It's useful to stick into your pipeline, so you can
see what's going on.
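Something like this, for example (each run pushes 1 GiB through memory
with no disk involved, so any difference you see is pure per-call
overhead):
dd if=/dev/zero of=/dev/null bs=1k count=1048576
dd if=/dev/zero of=/dev/null bs=1024k count=1024
And if you have pv installed, sticking it in the middle of the pipeline
shows the current rate while the file is being written (path and count as
before):
dd if=/dev/zero bs=1024k count=10240 | pv > /home/warlord/TestDataWrite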
Probably not relevant in this case, but I have certainly benchmarked
systems where dd was incapable of generating data fast enough. It's not
very efficient, actually, because it's general purpose (it needs to
actually read data from the device /dev/zero, rather than simply repeating
the previous buffer). I have a C program that generates data much faster,
if you actually reach that speed limit. But for a single drive, it should
be no problem.
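The idea is roughly this (not my actual program, just a shell sketch of
the same trick): build a chunk once so it sits in the page cache, then
write it out repeatedly, so the big run never touches /dev/zero at all:
dd if=/dev/zero of=/tmp/chunk bs=1024k count=128   # 128 MB chunk, small enough to stay cached
for i in $(seq 1 80); do cat /tmp/chunk; done | pv > /home/warlord/TestDataWrite   # 80 x 128 MB, ~10 GB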
I agree, 50 MB/sec is not stellar. Any typical 7200rpm SATA drive should
sustain about 1 Gbit/sec, which is roughly 125 MB/sec. SSDs should sustain
about the same throughput, but with much faster IOPS.
If you look up the specs of SSDs, they will report much higher
throughputs: 250 MB/sec, 400 MB/sec. While that's not technically a lie,
it's basically a lie. You can get that speed whenever you read, or the
first time you write data in the access pattern that specific drive is
optimized for, while the drive is entirely unused (entirely TRIM'd). After
a little time and normal operation, the read speed doesn't really degrade
much, but the write speed quickly drops to a half or a quarter of that.
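If your kernel and filesystem support it (an assumption on my part about
your setup), you can sometimes recover part of that fresh-drive write
speed by TRIMming the free space, e.g.:
sudo fstrim -v /home
(Substitute whatever mountpoint the volume actually lives on.) Whether
TRIM commands make it through an encryption layer like TrueCrypt is a
separate question I haven't verified, so treat that as a starting point
rather than a fix.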