ZFS, Dell PE2950

ZFS, Dell PE2950

Benjeman J. Meekhof
Hi,

I posted earlier about some results with this same system using UFS2.
Now trying to test ZFS.  This is a Dell PE2950 with two Perc6
controllers and 4 md1000 disk shelves with 750GB drives.  16GB RAM, dual
quad core Xeon. I recompiled our kernel to use the ULE scheduler instead
of default.

I could not get through an entire run of iozone without a system
reboot/crash.  ZFS is clearly labeled experimental, of course.

It seems to die for sure around 10 processes, sometimes less (this is
the end of my output from iozone):

  Children see throughput for 10 readers          =  135931.72 KB/sec
         Parent sees throughput for 10 readers           =  135927.24 KB/sec
         Min throughput per process                      =   13351.26 KB/sec
         Max throughput per process                      =   14172.05 KB/sec
         Avg throughput per process                      =   13593.17 KB/sec
         Min xfer                                        = 31586816.00 KB

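(The exact iozone invocation isn't shown in the mail; a throughput run
producing output like the above would look roughly like this -- the
record size, per-process file size and file paths are assumptions:)

#iozone -t 10 -i 0 -i 1 -r 512k -s 30g -F /test/f1 /test/f2 /test/f3 /test/f4 /test/f5 /test/f6 /test/f7 /test/f8 /test/f9 /test/f10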

Some zpool info below - each volume is a hardware RAID-6 of 30 physical
disks on one controller.  I may try different hardware volume configs
for fun.

zpool create test mfid0 mfid2
# pool is automatically mounted at /test

#  pool: test
# state: ONLINE
# scrub: none requested
#config:
#
#       NAME        STATE     READ WRITE CKSUM
#       test        ONLINE       0     0     0
#         mfid0     ONLINE       0     0     0
#         mfid2     ONLINE       0     0     0
#
#errors: No known data errors

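(For crashes like this, the usual suspect on FreeBSD ZFS of this era was
kernel memory ("kmem") exhaustion; the commonly suggested starting point
was to cap the ARC and enlarge the kmem map in /boot/loader.conf.  A
sketch only, with values that are merely illustrative for a 16GB amd64
box, not a tested recommendation:)

# /boot/loader.conf (illustrative values)
vm.kmem_size="1536M"
vm.kmem_size_max="1536M"
vfs.zfs.arc_max="512M"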

-Ben

Re: ZFS, Dell PE2950

Ivan Voras
Benjeman J. Meekhof wrote:

> Hi,
>
> I posted earlier about some results with this same system using UFS2.
> Now trying to test ZFS.  This is a Dell PE2950 with two Perc6
> controllers and 4 md1000 disk shelves with 750GB drives.  16GB RAM, dual
> quad core Xeon. I recompiled our kernel to use the ULE scheduler instead
> of default.
>
> I could not get through an entire run of iozone without a system
> reboot/crash.  ZFS is clearly labeled experimental, of course.
>
> It seems to die for sure around 10 processes, sometimes less (this is
> the end of my output from iozone):
>
>  Children see throughput for 10 readers          =  135931.72 KB/sec
>         Parent sees throughput for 10 readers           =  135927.24 KB/sec
>         Min throughput per process                      =   13351.26 KB/sec
>         Max throughput per process                      =   14172.05 KB/sec
>         Avg throughput per process                      =   13593.17 KB/sec
>         Min xfer                                        = 31586816.00 KB
Can you tell us how this compares to the UFS2 results you posted
previously? (Since you used dd for UFS2 and now iozone for ZFS, what are
your conclusions?)


Re: ZFS, Dell PE2950

Benjeman J. Meekhof
Sure, here is an example iozone output when I tested UFS2.  Same
hardware config as with ZFS test.

#gstripe label -v -s 128k test /dev/mfid0 /dev/mfid2
#newfs -U -b 65536 /dev/stripe/test

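(The mount step isn't shown; presumably something like the line below,
with /test as the mount point the later dd test uses.  For reference,
-U enables soft updates and -b 65536 is the maximum UFS2 block size.)

#mount /dev/stripe/test /test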

"Throughput report Y-axis is type of test X-axis is number of processes"
"Record size = 512 Kbytes "
"Output is in Kbytes/sec"

"  Initial write "  576953.38   613457.19   627091.58   626818.95
641763.54   626005.11   579073.92   553088.47   557498.71   556188.95
553340.92   548801.62

"        Rewrite "  580755.31   621562.75   638209.55   573171.69
633114.27   616501.92   542520.70   541662.21   525603.64   529194.37
506230.25   490589.89

"           Read "  505978.28   546837.12   565786.34   310994.23
309813.64   329930.91   351162.15   376940.64   408561.11   432157.69
452106.75   470176.39

"        Re-read "  523917.72   581796.50   592393.28   314724.70
308485.12   327409.40   350913.96   381370.12   408105.25   434168.93
458742.49   475407.44

-Ben

Ivan Voras wrote:

> Benjeman J. Meekhof wrote:
>> Hi,
>>
>> I posted earlier about some results with this same system using UFS2.
>> Now trying to test ZFS.  This is a Dell PE2950 with two Perc6
>> controllers and 4 md1000 disk shelves with 750GB drives.  16GB RAM, dual
>> quad core Xeon. I recompiled our kernel to use the ULE scheduler instead
>> of default.
>>
>> I could not get through an entire run of iozone without a system
>> reboot/crash.  ZFS is clearly labeled experimental, of course.
>>
>> It seems to die for sure around 10 processes, sometimes less (this is
>> the end of my output from iozone):
>>
>>  Children see throughput for 10 readers          =  135931.72 KB/sec
>>         Parent sees throughput for 10 readers           =  135927.24 KB/sec
>>         Min throughput per process                      =   13351.26 KB/sec
>>         Max throughput per process                      =   14172.05 KB/sec
>>         Avg throughput per process                      =   13593.17 KB/sec
>>         Min xfer                                        = 31586816.00 KB
>
> Can you tell us how this compares to the UFS2 results you posted
> previously? (Since you used dd for UFS2 and now iozone for ZFS, what are
> your conclusions?)
>


Re: ZFS, Dell PE2950

Ivan Voras
Benjeman Meekhof wrote:

> Sure, here is an example iozone output when I tested UFS2.  Same
> hardware config as with ZFS test.
>
> #gstripe label -v -s 128k test /dev/mfid0 /dev/mfid2
> #newfs -U -b 65536 /dev/stripe/test
>
>
> "Throughput report Y-axis is type of test X-axis is number of processes"
> "Record size = 512 Kbytes "
> "Output is in Kbytes/sec"
>

> "           Read "  505978.28   546837.12   565786.34   310994.23
> 309813.64   329930.91   351162.15   376940.64   408561.11   432157.69
> 452106.75   470176.39
>
> "        Re-read "  523917.72   581796.50   592393.28   314724.70
> 308485.12   327409.40   350913.96   381370.12   408105.25   434168.93
> 458742.49   475407.44

This is hard to compare to what you've posted before, but if it means
that you get 136 MB/s with ZFS and 300-593 MB/s with UFS, something is
wrong.

Per my discussion with Scott Long: can you repeat the test for UFS, but
create the gstripe with a really small stripe size, like 4 KB?


> -Ben
>
> Ivan Voras wrote:
>> Benjeman J. Meekhof wrote:
>>> Hi,
>>>
>>> I posted earlier about some results with this same system using UFS2.
>>> Now trying to test ZFS.  This is a Dell PE2950 with two Perc6
>>> controllers and 4 md1000 disk shelves with 750GB drives.  16GB RAM, dual
>>> quad core Xeon. I recompiled our kernel to use the ULE scheduler instead
>>> of default.
>>>
>>> I could not get through an entire run of iozone without a system
>>> reboot/crash.  ZFS is clearly labeled experimental, of course.
>>>
>>> It seems to die for sure around 10 processes, sometimes less (this is
>>> the end of my output from iozone):
>>>
>>>  Children see throughput for 10 readers          =  135931.72 KB/sec
>>>         Parent sees throughput for 10 readers           =  135927.24 KB/sec
>>>         Min throughput per process                      =   13351.26 KB/sec
>>>         Max throughput per process                      =   14172.05 KB/sec
>>>         Avg throughput per process                      =   13593.17 KB/sec
>>>         Min xfer                                        = 31586816.00 KB
>>
>> Can you tell us how this compares to the UFS2 results you posted
>> previously? (Since you used dd for UFS2 and now iozone for ZFS, what are
>> your conclusions?)
>>
>


Re: ZFS, Dell PE2950

Ivan Voras
Ivan Voras wrote:

> Per my discussion with Scott Long: can you repeat the test for UFS, but
> create the gstripe with a really small stripe size, like 4 KB?

Actually, no need to do that - it looks like iozone is doing quite
random IO ops so it won't help you.


Re: ZFS, Dell PE2950

Benjeman J. Meekhof
I have some old dd numbers from when I was experimenting to find a
UFS/gstripe combination that wasn't horrifyingly slow to read.  At that
point I was not adjusting the filesystem block size; initial results
didn't seem worth resuming iozone tests until I raised the UFS2 block
size to its maximum.  The RAID HW stripe width is 128k, FWIW.

# using 4k stripe, same as above
#gstripe label -v -s 4k test /dev/mfid0 /dev/mfid2
#newfs -U /dev/stripe/test

#time dd if=/dev/zero of=/test/deletafile bs=1M count=10240
#time dd if=/test/deletafile of=/dev/null bs=1M count=10240

#write: 26.5s  403665800 bytes/sec
#read:  157s   68343843 bytes/sec
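(For reference, that works out to roughly 404 MB/s for the write and
roughly 68 MB/s for the read of the 10240 MiB test file.)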

-Ben


Ivan Voras wrote:
> Ivan Voras wrote:
>
>> Per my discussion with Scott Long: can you repeat the test for UFS, but
>> create the gstripe with a really small stripe size, like 4 KB?
>
> Actually, no need to do that - it looks like iozone is doing quite
> random IO ops so it won't help you.
>