performance tuning on perc6 (LSI) controller


performance tuning on perc6 (LSI) controller

Benjeman J. Meekhof
Hello,

I think this might be useful information, and am also hoping for a
little input.

We've been doing some FreeBSD benchmarking on Dell PE2950 systems with
Perc6 controllers (dual quad-core Xeon, 16GB, Perc6 = LSI card, mfi
driver, 7.0-RELEASE).  There are two controllers in each system, and
each has two MD1000 disk shelves attached via its two 4x SAS interfaces
(so 30 PD available to each controller, 60 PD on the system).

My baseline was this - on linux 2.6.20 we're doing 800MB/s write and
greater read with this configuration:  2 raid6 volumes striped into a
raid0 volume using linux software raid, XFS filesystem.  Each raid6 is
a volume on one controller using 30 PD.  We've spent more time tuning
that setup than I have with FreeBSD so far.
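
For reference, the Linux side is assembled roughly like this - the
device names, chunk size and mountpoint below are placeholders rather
than our exact commands:

mdadm --create /dev/md0 --level=0 --raid-devices=2 --chunk=128 \
    /dev/sda /dev/sdb
mkfs.xfs /dev/md0
mount /dev/md0 /test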

Initially I was getting strangely poor read results.  Here is one
example (before launching into quicker dd tests, I already had similarly
bad results from some more complete iozone tests):

time dd if=/dev/zero of=/test/deletafile bs=1M count=10240
10737418240 bytes transferred in 26.473629 secs (405589209 bytes/sec)
  time dd if=/test/deletafile of=/dev/null bs=1M count=10240
10737418240 bytes transferred in 157.700367 secs (68087465 bytes/sec)

To make a very long story short, much better results were achieved in
the end simply by increasing the filesystem blocksize to the maximum
(same dd commands).  I'm now running a more thorough test on this setup
using iozone:

#gstripe label -v -s 128k test /dev/mfid0 /dev/mfid2
#newfs -U -b 65536 /dev/stripe/test

#write:  19.240875 secs (558052492 bytes/sec)
#read:  20.000606 secs (536854644 bytes/sec)
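
The iozone runs are along these lines - the filesystem is mounted at
/test, the file size is picked to be well over the 16GB of RAM, and the
test file name and record size are just examples rather than anything
definitive:

mount /dev/stripe/test /test
iozone -e -s 32g -r 64k -i 0 -i 1 -f /test/iozone.tmp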

I also did this in /boot/loader.conf - it didn't affect anything very
much in any test, but the settings seemed reasonable so I kept them:
kern.geom.stripe.fast=1
vfs.hirunningspace=5242880
vfs.read_max=32
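
As far as I can tell these are also plain runtime sysctls, so they can
be inspected and adjusted on the fly between test runs instead of
rebooting, e.g.:

sysctl kern.geom.stripe.fast
sysctl vfs.hirunningspace=5242880
sysctl vfs.read_max=32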

Any other suggestions to get the best throughput?  There is also the
HW RAID stripe size to adjust larger or smaller.  ZFS is also on the
list for testing.  Should I perhaps be running -CURRENT or -STABLE to
get the best results with ZFS?
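
When I get to ZFS I'm picturing a pool striped across the two hardware
raid6 volumes, something along these lines (the pool and dataset names
are just placeholders):

zpool create tank mfid0 mfid2
zfs create tank/test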

-Ben







--
Benjeman Meekhof - UM ATLAS/AGLT2 Computing
[hidden email]


Re: performance tuning on perc6 (LSI) controller

Ivan Voras
Benjeman J. Meekhof wrote:

> My baseline was this - on linux 2.6.20 we're doing 800MB/s write and
> greater read with this configuration:  2 raid6 volumes striped into a
> raid0 volume using linux software raid, XFS filesystem.  Each raid6 is
> a volume on one controller using 30 PD.  We've spent more time tuning
> that setup than I have with FreeBSD so far.

> time dd if=/dev/zero of=/test/deletafile bs=1M count=10240
> 10737418240 bytes transferred in 26.473629 secs (405589209 bytes/sec)
>  time dd if=/test/deletafile of=/dev/null bs=1M count=10240
> 10737418240 bytes transferred in 157.700367 secs (68087465 bytes/sec)

I had a similar ratio of results when comparing FreeBSD+UFS to most
high-performance Linux file systems (XFS is really great!), so I'd guess
it's about as fast as you can get with this combination.

> Any other suggestions to get the best throughput?  There is also the
> HW RAID stripe size to adjust larger or smaller.  ZFS is also on the
> list for testing.  Should I perhaps be running -CURRENT or -STABLE to
> get the best results with ZFS?

ZFS will be up to 50% faster on tests such as yours, so you should
definitely try it. Unfortunately it's not stable and you probably don't
want to use it in production. AFAIK there are no significant differences
between ZFS in -current and -stable.
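
If you do decide to try it on 7.0/amd64, you'll almost certainly want
to raise the kernel memory limits and cap the ARC in /boot/loader.conf
first, or you risk the well-known "kmem_map too small" panics under
memory pressure.  The values below are only a rough starting point for
a 16GB box, not something I've verified on your hardware:

vm.kmem_size="1536M"
vm.kmem_size_max="1536M"
vfs.zfs.arc_max="512M"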





Re: performance tuning on perc6 (LSI) controller

Benjeman J. Meekhof
Hi Ivan,

Thanks for the response.  You quoted my initial uneven results, but
are you also implying that I most likely can't do better than the
later results obtained with the larger filesystem blocksize?

gstripe label -v -s 128k test /dev/mfid0 /dev/mfid2
#newfs -U -b 65536 /dev/stripe/test
#write:  19.240875 secs (558052492 bytes/sec)
#read:  20.000606 secs (536854644 bytes/sec)

(iozone showed reasonably similar results - depending on the record
size it would mostly write/read around 500MB/s, though lows of 300MB/s
were recorded in some read cases).

I suppose my real question is whether there is some inherent limit in
UFS2 or FreeBSD or geom that would prevent going higher than this.
Maybe that's really not possible to answer, but certainly I plan to
explore a few more configurations.

Most of my tuning so far has been trial and error to get to this point,
and all I ended up doing to finally get good results was changing the
filesystem blocksize to the maximum possible (I wanted to go to 128k,
but newfs doesn't allow that).  Apparently UFS2 and/or geom interact
differently with the controller than Linux/XFS does.  This is no great
surprise.
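
For reference, what I ended up with is effectively 64k blocks with the
matching 8k fragments - as far as I know the 64k ceiling comes from
MAXBSIZE in the kernel - and dumpfs will confirm what newfs actually
created:

newfs -U -b 65536 -f 8192 /dev/stripe/test
dumpfs /dev/stripe/test | head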

thanks,
Ben




Ivan Voras wrote:

> Benjeman J. Meekhof wrote:
>
>> My baseline was this - on linux 2.6.20 we're doing 800MB/s write and
>> greater read with this configuration:  2 raid6 volumes striped into a
>> raid0 volume using linux software raid, XFS filesystem.  Each raid6 is
>> a volume on one controller using 30 PD.  We've spent more time tuning
>> that setup than I have with FreeBSD so far.
>
>> time dd if=/dev/zero of=/test/deletafile bs=1M count=10240
>> 10737418240 bytes transferred in 26.473629 secs (405589209 bytes/sec)
>>  time dd if=/test/deletafile of=/dev/null bs=1M count=10240
>> 10737418240 bytes transferred in 157.700367 secs (68087465 bytes/sec)
>
> I had a similar ratio of results when comparing FreeBSD+UFS to most
> high-performance Linux file systems (XFS is really great!), so I'd guess
> it's about as fast as you can get with this combination.
>
>> Any other suggestions to get the best throughput?  There is also the
>> HW RAID stripe size to adjust larger or smaller.  ZFS is also on the
>> list for testing.  Should I perhaps be running -CURRENT or -STABLE to
>> get the best results with ZFS?
>
> ZFS will be up to 50% faster on tests such as yours, so you should
> definitely try it. Unfortunately it's not stable and you probably don't
> want to use it in production. AFAIK there are no significant differences
> between ZFS in -current and -stable.

--
Benjeman Meekhof - UM ATLAS/AGLT2 Computing
office: 734-764-3450 cell: 734-417-6312


Re: performance tuning on perc6 (LSI) controller

Ivan Voras
On 26/03/2008, Benjeman J. Meekhof <[hidden email]> wrote:

> Hi Ivan,
>
>  Thanks for the response.  You quoted my initial uneven results, but
>  are you also implying that I most likely can't do better than the
>  later results obtained with the larger filesystem blocksize?
>
>  gstripe label -v -s 128k test /dev/mfid0 /dev/mfid2
>  #newfs -U -b 65536 /dev/stripe/test
>  #write:  19.240875 secs (558052492 bytes/sec)
>  #read:  20.000606 secs (536854644 bytes/sec)
>
>  (iozone showed reasonably similar results - depending on the record
>  size it would mostly write/read around 500MB/s, though lows of 300MB/s
>  were recorded in some read cases).

Yes, that was my meaning.  If I understood you correctly, Linux manages
~800 MB/s on the array, right?

>  I suppose my real question is whether there is some inherent limit in
>  UFS2 or FreeBSD or geom that would prevent going higher than this.
>  Maybe that's really not possible to answer, but certainly I plan to
>  explore a few more configurations.

I'd guess it's UFS(2), but I don't really know.  My own benchmarking
was on a different controller (IBM ServeRAID 8) and I got a similar
ratio between Linux and FreeBSD, so I don't think it's the driver's
fault.  ZFS achieves noticeably better results, so it's probably not
GEOM's fault either.