Kernel version 2.6 -- RAID performance woes?
gboyce
gboyce at badbelly.com
Wed Nov 16 10:55:22 EST 2005
On Wed, 16 Nov 2005, Rich Braun wrote:
> Nov 15 16:33:18 cumbre kernel: hda: DMA timeout retry
> Nov 15 16:33:18 cumbre kernel: hda: timeout waiting for DMA
> Nov 15 16:33:18 cumbre kernel: hda: status timeout: status=0xd0 { Busy }
> Nov 15 16:33:18 cumbre kernel: ide: failed opcode was: unknown
> Nov 15 16:33:18 cumbre kernel: hda: no DRQ after issuing MULTWRITE_EXT
> Nov 15 16:33:19 cumbre kernel: ide0: reset: success
<snip>
> But my main gripe about 2.6 is software RAID performance. It's stunningly
> worse than under 2.4. On version 2.4, you see a process named raid1d that
> never racks up any runtime, and a couple of related ones (bdflush,
> mdrecoveryd) that have mere seconds of runtime after 3 weeks of uptime. On
> version 2.6, a process called md0_raid1 sucks up so much runtime (at nice
> level minus-5) during file creation that the system is brought to its knees.
Is this software RAID using IDE disks?
I think your RAID problems may track back to this error in dmesg. Often
when you get IDE errors like this, your disk will fall back from DMA to
PIO mode. PIO mode causes MASSIVE slowdowns, and increased CPU
consumption. Here's a demonstration using my system using :
DMA Mode:
/dev/hda:
Timing buffered disk reads: 126 MB in 3.02 seconds = 41.66 MB/sec
PIO Mode:
/dev/hda:
Timing buffered disk reads: 10 MB in 3.32 seconds = 3.02 MB/sec
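(The timings above look like output from hdparm's buffered-read benchmark.
Assuming hdparm is installed -- the device name /dev/hda is taken from the
log lines in the original post -- you can check whether the drive has
dropped out of DMA mode and switch it back like so; these commands need
root and real IDE hardware, so treat this as a sketch:)

```shell
# Show the current DMA setting for the drive (using_dma = 1 means DMA on,
# 0 means the kernel has fallen back to PIO)
hdparm -d /dev/hda

# Re-enable DMA if it has been switched off
hdparm -d1 /dev/hda

# Re-run the buffered-read benchmark to confirm the difference
hdparm -t /dev/hda
```

(Note that if the underlying problem is bad cabling or a flaky controller,
the kernel will likely log the same timeout errors and drop back to PIO
again, so this only confirms the diagnosis rather than fixing the cause.)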
I'm not sure why your IDE system is having problems, but it's probably the
cause of your RAID woes as well. If you provide details as to which IDE
controller you're using, and a copy of your .config file, I may be able to
figure it out.