XFS or ext4
David Miller
david3d-Re5JQEeQqe8AvxtiuMwx3w at public.gmane.org
Mon Nov 29 18:30:59 EST 2010
On Sat, Nov 27, 2010 at 12:00 PM, sgoldman <sgoldman-3s7WtUTddSA at public.gmane.org> wrote:
> Happy Thanksgiving,
> I am in the process of purchasing a 48 TB mid-size storage box from a
> vendor called Scaleable Informatics.
>
> I am not looking to create one huge filespace, but the vendor suggests
> I would get greater performance if I do and use XFS.
>
> The plan was to cut the system up into smaller file systems and use
> reliable ext4.
>
> Looking for thoughts on using XFS and the experience of others who
> use it.
>
> Thanks in advance,
> Stephen
>
>
Well, since no one has given you any feedback I'll chime in with my
experience, for what it's worth. At my previous job I ran a file server
with about 30TB of storage. This was back when most drives were 250GB,
so we were talking about a significant portion of a rack's worth of
storage in those days, made up of 8 external RAID arrays.
Our storage capacity didn't start out at 30TB; we grew to that over time
as needed. With this in mind I decided to use LVM, exposing each
external RAID array as multiple LUNs to keep each LUN under 2TB. So as
drive sizes increased over the years, the number of LUNs from each box
increased. But this was in the days before 64-bit, so that was really
just a matter of working around the 2TB limits of the era. Today I would
just expose each box as a single LUN. From an LVM storage-management
point of view, each of these LUNs was a PV (Physical Volume) with a
single partition of type 8e (Linux LVM), and all of them went into one
VG (Volume Group), which was then carved up into LVs (Logical Volumes)
and formatted with a filesystem as needed.
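To make the layering concrete, the commands look roughly like this
(device and volume names are hypothetical; this is only a sketch, not
our exact setup):

    # Each LUN already carries one partition of type 8e (Linux LVM).
    pvcreate /dev/sdb1 /dev/sdc1              # initialize LUNs as PVs
    vgcreate storage_vg /dev/sdb1 /dev/sdc1   # pool them into one VG

    # Carve an LV out of the VG and format it as needed.
    lvcreate -L 500G -n images_lv storage_vg
    mkfs.xfs /dev/storage_vg/images_lv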
At the beginning of this I did have some time to test various
filesystems under our workload. That's the beauty of using LVM: I could
create as many LVs as I wanted, format each with a different filesystem,
populate them with data, and run my workload on each of them.
Unfortunately ext4 wasn't an option back then, so I can't compare it to
XFS, but at the time XFS was much faster than the other filesystems I
tested. Ext4 added extents, so I'd bet the speed gap between ext* and
XFS has narrowed a bit. We were storing large 64-megapixel images in
lossless compressed formats, so each image was between 500KB and 120MB
depending on the content, and each directory contained 45k or so images.
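That style of side-by-side test is roughly this (names and sizes
hypothetical; the "workload" should be your real application, not a
canned benchmark):

    # One scratch LV per filesystem under test, from the same VG
    # so both sit on the same spindles.
    lvcreate -L 200G -n test_xfs  storage_vg
    lvcreate -L 200G -n test_ext4 storage_vg
    mkfs.xfs  /dev/storage_vg/test_xfs
    mkfs.ext4 /dev/storage_vg/test_ext4

    mkdir -p /mnt/test_xfs /mnt/test_ext4
    mount /dev/storage_vg/test_xfs  /mnt/test_xfs
    mount /dev/storage_vg/test_ext4 /mnt/test_ext4

    # Populate each mount with representative data, then time the
    # real workload against each one.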
I can't speak to Scaleable Informatics' product as I don't have any
experience with them. But most of the modern storage I've administered
has had some level of abstraction between the drives and the RAID
volumes, using the concept of RAID sets. Basically, drives are assigned
to a RAID set, and then the RAID volumes, or LUNs, are carved out of
those. So the RAID set dictates the number of spindles involved,
regardless of the RAID level or the size of any volume on it. This
approach tends to give you the performance of the full number of drives
no matter how small a RAID volume you create. Based on what this vendor
is saying, it sounds like the drives are assigned to RAID volumes
directly, in which case a larger volume would mean more spindles and
increased speed. Although I'd be a bit wary of putting 48 drives into a
single volume due to rebuild time. It may not be too bad with RAID10 or
another hybrid RAID level. I would make sure to keep at least 1,
preferably 2, hot spares in a box with 48 drives, though, which makes
splitting it up a bit complicated. The vendor should have some
suggestions for you.
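The vendor's RAID is presumably done in the controller, but if you
sketched the same spindles-plus-spares arithmetic with Linux software
RAID it would look something like this (purely illustrative, with
hypothetical device names):

    # 46 data drives in RAID10 plus 2 hot spares fills 48 bays.
    # On a failure md pulls in a spare and re-mirrors just the one
    # failed drive, rather than rebuilding across all spindles.
    mdadm --create /dev/md0 --level=10 \
          --raid-devices=46 --spare-devices=2 \
          /dev/sd[b-y]1 /dev/sda[a-x]1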
As far as the filesystem and how to manage the space: it really depends
on your needs, but given that you were planning on using smaller
volumes, I would say LVM would be a good option for you. It also makes
it easy to deal with the situation where you outgrow this storage and
have to add a 2nd storage box to the system.
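Growing onto a second box later is then straightforward (hypothetical
names again; note that XFS can be grown online but never shrunk):

    # The new box shows up as a new LUN; fold it into the same VG.
    pvcreate /dev/sdz1
    vgextend storage_vg /dev/sdz1

    # Extend an LV and grow the filesystem while it's mounted.
    lvextend -L +10T /dev/storage_vg/images_lv
    xfs_growfs /mnt/images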
Hope this helps.
--
David