Re: Bad performance of 7.0 nfs client with Solaris nfs server

Re: Bad performance of 7.0 nfs client with Solaris nfs server

Kris Kennaway-3
Valerio Daelli wrote:

> Hi list
>
> we have a FreeBSD 7.0 NFS client (csup today, built world and kernel).
> It mounts a Solaris 10 NFS share.
> We have bad performance with 7.0 (3MB/s).
> We have tried both UDP and TCP mounts, both sync and async.
> This is our mount:
>
> nest.xx.xx:/data/export/hosts/bsd7.xx.xx/ /mnt/nest.xx.xx nfs
> noatime,async,-i,rw,-T,-3
>
> Both our servers (7.0 and Solaris 10) are Gigabit Ethernet; both are
> HP ProLiant DL360 i386 (NIC bge0):
>
> ---
> FreeBSD 7.0:
>
> [eldon@bsd7 ~]$ uname -a
> FreeBSD bsd7.xx.xx 7.0-RC2 FreeBSD 7.0-RC2 #1: Mon Feb 18 17:46:46 CET
> 2008     [hidden email]:/usr/obj/usr/src/sys/BSD7  i386
>
> ---
> This is our performance with iozone:
> command line:
>
> iozone -+q 1 -i 0 -i 1 -n 2048 -g 2G -Raceb iozone.xls -f
> /mnt/nest.xx.xx/iozone.bsd7/iozone.tmp
>
> FreeBSD 7:
>
> File stride size set to 17 * record size.
>               KB  reclen   write rewrite    read    reread
>             2048    1024  109883  101289   769058   779880
>             2048    2048    3812    3674   760479   767066
>             4096    1024  111156  106788   724692   728040
>             4096    2048    3336    2241   157132   733417
>             4096    4096    2829    3364   699351   699807
>
>
> As you can see, while with record lengths up to 1024KB the speed is
> 'fast', with records of 2048KB or more (I've tried with much bigger
> records) we get only 3MB/s.
> Is this a known issue? If you need more details please contact me; I
> am willing to do more tests to resolve this problem.

Can you characterize what is different about the NFS traffic when
FreeBSD and Solaris are the clients?  Does other network traffic to the
server perform well?  2048-byte records are larger than the 1500-byte
MTU, so you will be invoking packet fragmentation and reassembly.
Maybe you are getting packet loss for some reason.  What does netstat
say about interface errors on the bge, and for the protocol layers?
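
For example (assuming the interface really is bge0), something like:

    netstat -i
    netstat -s -p tcp
    netstat -s -p ip

should show whether errors or drops are piling up at each layer.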

Follow-ups set to performance@

Kris

Re: Bad performance of 7.0 nfs client with Solaris nfs server

Valerio daelli-2
On Feb 19, 2008 8:53 PM, Kris Kennaway <[hidden email]> wrote:

>
> Valerio Daelli wrote:
> > [...]
>
> Can you characterize what is different about the NFS traffic when
> FreeBSD and Solaris are the clients?  Does other network traffic to the
> server perform well?  2048 bytes records are > the 1500 byte MTU, so you
> will be invoking packet fragmentation and reassembly.  Maybe you are
> getting packet loss for some reason.  What does netstat say about
> interface errors on the bge, and for the protocol layers?
>
> Follow-ups set to performance@
>
> Kris
>

Hi,
thanks for responding. I did a couple of tests on 7.0, and I've not
seen errors on the interfaces.
Attached are netstat -s and netstat -i.


TCP mount
---
root@bsd7:~ netstat -i
Name    Mtu Network       Address              Ipkts Ierrs    Opkts Oerrs  Coll
bge0   1500 <Link#1>      00:0b:cd:37:45:e8    45444     0   110188     0     0
bge0   1500 xx.xx.xx.xx bsd7                 45427     -   110183     -     -
---
tcp:
        1536 packets sent
                396 data packets (68516 bytes)
                0 data packets (0 bytes) retransmitted
                0 data packets unnecessarily retransmitted
                0 resends initiated by MTU discovery
                1021 ack-only packets (6 delayed)
                0 URG only packets
                0 window probe packets
                101 window update packets
                18 control packets
        2555 packets received
                413 acks (for 68487 bytes)
                1 duplicate ack
                0 acks for unsent data
                2173 packets (763594 bytes) received in-sequence
                0 completely duplicate packets (0 bytes)
                0 old duplicate packets
                0 packets with some dup. data (0 bytes duped)
                0 out-of-order packets (0 bytes)
                0 packets (0 bytes) of data after window
                0 window probes
                0 window update packets
                7 packets received after close
                0 discarded for bad checksums
                0 discarded for bad header offset fields
                0 discarded because packet too short
                0 discarded due to memory problems
        10 connection requests
        1 connection accept
        0 bad connection attempts
        0 listen queue overflows
        0 ignored RSTs in the windows
        11 connections established (including accepts)
        10 connections closed (including 1 drop)
                7 connections updated cached RTT on close
                7 connections updated cached RTT variance on close
                0 connections updated cached ssthresh on close
        0 embryonic connections dropped
        413 segments updated rtt (of 358 attempts)
        0 retransmit timeouts
                0 connections dropped by rexmit timeout
        0 persist timeouts
                0 connections dropped by persist timeout
        0 Connections (fin_wait_2) dropped because of timeout
        0 keepalive timeouts
                0 keepalive probes sent
                0 connections dropped by keepalive
        0 correct ACK header predictions
        2122 correct data packet header predictions
        1 syncache entries added
                0 retransmitted
                0 dupsyn
                0 dropped
                1 completed
                0 bucket overflow
                0 cache overflow
                0 reset
                0 stale
                0 aborted
                0 badack
                0 unreach
                0 zone failures
        1 cookie sent
        0 cookies received
        0 SACK recovery episodes
        0 segment rexmits in SACK recovery episodes
        0 byte rexmits in SACK recovery episodes
        0 SACK options (SACK blocks) received
        0 SACK options (SACK blocks) sent
        0 SACK scoreboard overflow
[...]
ip:
        99904 total packets received
        0 bad header checksums
        0 with size smaller than minimum
        0 with data size < data length
        0 with ip length > max ip packet size
        0 with header length < data size
        0 with data length < header length
        0 with bad options
        0 with incorrect version number
        61441 fragments received
        0 fragments dropped (dup or out of space)
        1 fragment dropped after timeout
        10240 packets reassembled ok
        48703 packets for this host
        0 packets for unknown/unsupported protocol
        0 packets forwarded (0 packets fast forwarded)
        0 packets not forwardable
        0 packets received for unknown multicast group
        0 redirects sent
        47693 packets sent from this host
        0 packets sent with fabricated ip header
        0 output packets dropped due to no bufs, etc.
        0 output packets discarded due to no route
        34819 output datagrams fragmented
        208914 fragments created
        0 datagrams that can't be fragmented
        0 tunneling packets that can't find gif
        0 datagrams with bad address in header

[...]


UDP mount
---
root@bsd7:~ netstat -i
Name    Mtu Network       Address              Ipkts Ierrs    Opkts Oerrs  Coll
bge0   1500 <Link#1>      00:0b:cd:37:45:e8    63685     1   119978     0     0
bge0   1500 xx.xx.xx.xx bsd7                 63628     -   119973     -     -
---
udp:
        25880 datagrams received
        0 with incomplete header
        0 with bad data length field
        0 with bad checksum
        0 with no checksum
        0 dropped due to no socket
        7 broadcast/multicast datagrams undelivered
        0 dropped due to full socket buffers
        0 not for hashed pcb
        25873 delivered
        25882 datagrams output
        0 times multicast source filter matched
[...]
ip:
        62379 total packets received
        0 bad header checksums
        0 with size smaller than minimum
        0 with data size < data length
        0 with ip length > max ip packet size
        0 with header length < data size
        0 with data length < header length
        0 with bad options
        0 with incorrect version number
        41473 fragments received
        0 fragments dropped (dup or out of space)
        1 fragment dropped after timeout
        6912 packets reassembled ok
        27818 packets for this host
        0 packets for unknown/unsupported protocol
        0 packets forwarded (0 packets fast forwarded)
        0 packets not forwardable
        0 packets received for unknown multicast group
        0 redirects sent
        27056 packets sent from this host
        0 packets sent with fabricated ip header
        0 output packets dropped due to no bufs, etc.
        0 output packets discarded due to no route
        18438 output datagrams fragmented
        110628 fragments created
        0 datagrams that can't be fragmented
        0 tunneling packets that can't find gif
        0 datagrams with bad address in header
[...]

Then I tried with different versions of FreeBSD and I discovered that:
-FreeBSD 6.2-p3 has good performance with reclen 2048 and bad with
4096 and 8192:

---
root@litio:~ uname -a
FreeBSD litio.xx.xx 6.2-RELEASE-p3 FreeBSD 6.2-RELEASE-p3 #1: Thu Mar
15 14:52:08 CET 2007     [hidden email]:/usr/obj/usr/src/sys/LITIO
amd64

TCP
              KB  reclen   write rewrite    read    reread
            2048    1024   90041   97352    82001  1019428
            2048    2048   87030   88329  1009366  1053046
            4096    1024   96139   95218    74186  1032306
            4096    2048  103038   99700    79950  1003190
            4096    4096    2897    3354   970821   522038
            8192    1024   97978  100020    80681   946699
            8192    2048   95260  104290    82395  1032127
            8192    4096    3300    3169   149505   860882
            8192    8192    3371    3090   998300  1012987
           16384    1024  100072   98446    80503   962737
           16384    2048   99360  105249    80983   924453
           16384    4096    3162    3173   106750   936178
           16384    8192    3233    3223   150394   998913
           16384   16384    3250    3316   955004   944153

UDP similar


-FreeBSD 6.2-p4 and 6.2-p6 have the same performance as 7.0:
---

root@xtl:~ uname -a
FreeBSD xtl.xx.xx 6.2-RELEASE-p6 FreeBSD 6.2-RELEASE-p6 #0: Wed Jul 25
14:32:07 CEST 2007     [hidden email]:/usr/obj/usr/src/sys/XTL  i386

TCP
              KB  reclen   write rewrite    read    reread
            2048    1024   15544   14560   846597   841044
            2048    2048    2769    2649   852225   851211
            4096    1024   17366   16851   833229   835417
            4096    2048    2777    2457    49038   841226
            4096    4096    2527    2813   847868   847492




root@rubidio:~ uname -a
FreeBSD rubidio.xx.xx 6.2-RELEASE-p4 FreeBSD 6.2-RELEASE-p4 #0: Tue
May  8 11:24:10 CEST 2007
[hidden email]:/usr/obj/usr/src/sys/PROXYCARP1  amd64

TCP

              KB  reclen   write rewrite    read    reread
            2048    1024   93010   89366   996997   970635
            2048    2048   94740   90820   943347   946048
            4096    2048  100036   94668    81065   927548
            4096    4096    3274    3319  1074493  1047309
            8192    2048  100990   98212    80574  1054050
            8192    4096    3078    3071   149890   990428
            8192    8192    3074    3055   992029   995391



---
Now I will try with a Solaris client and with rsync as the protocol.
Other suggestions?
Bye

Valerio Daelli

Re: Bad performance of 7.0 nfs client with Solaris nfs server

kometen
In reply to this post by Kris Kennaway-3
> > we have a FreeBSD 7.0 NFS client (csup today, built world and kernel).
> > It mounts a Solaris 10 NFS share.
> > We have bad performance with 7.0 (3MB/s).
> > [...]

I have a Solaris 9 nfs-server (on sparc) with some TB on HDS attached
to it with two QLogic HBAs. These partitions are shared to our
webservers via nfs; according to my mrtg-graph I get approx. 8 MB/s at
peak. I can probably get more but the requirement is not there.

With four-way servers and FreeBSD 6.2 I had a read- and write-size of
8192. I ended up with this size by copying to and from the nfs-server
until I no longer got the "nfs server not responding; is alive again"
message. When I upgraded to FreeBSD 7.0 in October 2007 on a new
eight-way server, I started to get "not responding; alive again" during
load. So I decreased the rw-size to the current 2048.

When I decreased the size I also avoided another problem (by accident
:-) ). When uploading images I sometimes saw ImageMagick's convert go
into an (almost) infinite loop, consuming 100% cpu (on one core)
until killed. Reducing the rw-size eliminated this issue.

fstab-entry:

my.nfs.server:/archive   /archive      nfs
rw,nfsv3,-w=2048,-r=2048        0       0

I'm using udp-mounts; it does not appear to change performance in my case.

HTH.
--
regards
Claus

When lenity and cruelty play for a kingdom,
the gentlest gamester is the soonest winner.

Shakespeare

Re: Bad performance of 7.0 nfs client with Solaris nfs server

Eric Anderson-13
In reply to this post by Valerio daelli-2
Valerio Daelli wrote:

> [...]
>
> Now I will try with a Solaris client and with rsync as the protocol.
> Other suggestions?

If possible, it might help a lot to send the output of a tshark capture
from the client (or a snippet of it), or post it somewhere on the web.
You might also try UDP-mounting the NFS export, just to try it.
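
Something along these lines should capture just the NFS traffic on the
client side (interface and hostname are examples; adjust to your setup):

    tshark -i bge0 -f "host nest.xx.xx and port 2049" -w /tmp/nfs-trace.pcap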

Eric

Re: Bad performance of 7.0 nfs client with Solaris nfs server

Eric Anderson-13
In reply to this post by kometen
Claus Guttesen wrote:

> [...]
>
> With four-way servers and FreeBSD 6.2 I had a read- and write-size of
> 8192. I ended up with this size by copying to and from the nfs-server
> until I no longer got the "nfs server not responding; is alive again"
> message. When I upgraded to FreeBSD 7.0 in October 2007 on a new
> eight-way server, I started to get "not responding; alive again" during
> load. So I decreased the rw-size to the current 2048.
> [...]


If FreeBSD is your NFS server, you should increase the number of nfsd
threads to help with the "not responding" error.  I usually run one nfsd
thread per active client.
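
On a FreeBSD server that would be something like the following in
/etc/rc.conf (the -n value is only an example; size it to your client
count):

    nfs_server_enable="YES"
    nfs_server_flags="-u -t -n 32"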

Eric


Re: Bad performance of 7.0 nfs client with Solaris nfs server

Chuck Swiger-2
In reply to this post by Valerio daelli-2
Hi--

On Feb 20, 2008, at 3:23 AM, Valerio Daelli wrote:
> 99904 total packets received
[ ... ]
>
> 61441 fragments received

[ ... ]
> 34819 output datagrams fragmented
> 208914 fragments created

Take a look at the level of packet fragmentation you are encountering;
yes, this is expected and things will work, but there is extra latency
added when the IP stack has to reassemble packets before the data can
be delivered.  Try setting the NFS rsize/wsize to 1024 or perhaps 1400
and see whether that improves performance.
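
For example, something along these lines (a sketch; adjust paths to
your setup):

    mount_nfs -3 -r 1024 -w 1024 nest.xx.xx:/data/export/hosts/bsd7.xx.xx /mnt/nest.xx.xx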

Or, if your switch and NICs support it, see whether you can get Gb
Ethernet jumbo frames working so that you don't have to fragment for
2K or 4K data packets.
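
That would be roughly:

    ifconfig bge0 mtu 9000

on both ends (9000 is a common jumbo value; whether the bge revision
actually supports it varies, so check first).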

--
-Chuck


Re: Bad performance of 7.0 nfs client with Solaris nfs server

Alfred Perlstein-2
* Chuck Swiger <[hidden email]> [080220 10:35] wrote:

> [...]
>
> Take a look at the level of packet fragmentation you are encountering;  
> yes, this is expected and things will work but there is extra latency  
> added when the IP stack has to reassemble packets before the data can  
> be delivered.  Try setting the NFS rsize/wsize to 1024 or perhaps 1400  
> and see whether that improves performance.
>
> Or, if your switch and NICs support it, see whether you can get Gb  
> Ethernet jumbo frames working so that you don't have to fragment for  
> 2K or 4K data packets....

TCP mounts do not have this problem.  You can safely use
32k or higher sizes with TCP without fragmentation.
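
For example, an fstab entry along these lines (an untested sketch;
adjust the paths):

    nest.xx.xx:/data/export/hosts/bsd7.xx.xx /mnt/nest.xx.xx nfs rw,nfsv3,-T,-r=32768,-w=32768 0 0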

-Alfred

Re: Bad performance of 7.0 nfs client with Solaris nfs server

Chuck Swiger-2
On Feb 20, 2008, at 1:01 PM, Alfred Perlstein wrote:

>> [...]
>
> TCP mounts do not have this problem.  You can safely use
> 32k or higher sizes with TCP without fragmentation.

Oh, sure.  But there is a bit more overhead with TCP transport than
UDP: for local (switched) networks, UDP generally seems to be a win,
while TCP seems to be a better choice over a VPN or some similar kind
of WAN.

--
-Chuck


Re: Bad performance of 7.0 nfs client with Solaris nfs server

Kris Kennaway-3
Chuck Swiger wrote:

> On Feb 20, 2008, at 1:01 PM, Alfred Perlstein wrote:
>> [...]
>> TCP mounts do not have this problem.  You can safely use
>> 32k or higher sizes with TCP without fragmentation.
>
> Oh, sure.  But there is a bit more overhead with TCP transport than
> UDP-- for local (switched) networks, UDP generally seems to be a
> win...TCP seems to be a better choice over a VPN or some similar kind of
> WAN.

Actually this is no longer true.  At modern LAN speeds (e.g. gige) you
can transmit packets fast enough that two things happen:

1) UDP socket buffer overruns are common, leading to packet loss.

2) the 16-bit IP ID (fragment sequence) numbers wrap *much* faster than
the IP fragment lifetime (30 seconds).

These combine to cause data corruption when fragmented packets are
dropped and then reassembled with a later transmission of the same
sequence number.
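
(Back-of-the-envelope: at gigabit speed a 1500-byte frame takes about
12 microseconds on the wire, i.e. on the order of 80,000 packets/sec,
so the 65536 possible ID values can wrap in about a second; far inside
that 30 second fragment lifetime.)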

TCP mounts should be used whenever possible these days (I flipped the
default mode in 8.0 the other day).

Kris

Re: Bad performance of 7.0 nfs client with Solaris nfs server

Alfred Perlstein-2
* Kris Kennaway <[hidden email]> [080220 13:42] wrote:

> [...]
>
> TCP mounts should be used whenever possible these days (I flipped the
> default mode in 8.0 the other day).

Additionally, with smaller read/write sizes you must generate more
RPCs: for instance, using a TCP read size of 32k allows the client to
generate a single NFS_READ request for that 32k, whereas a UDP mount
at 1.2k will need to generate approximately 26 requests, and the
server will have to service that many requests as well.



--
- Alfred Perlstein

Re: Bad performance of 7.0 nfs client with Solaris nfs server

Valerio daelli-2
In reply to this post by Eric Anderson-13
>  Can you post it somewhere for me to download and look at?  I'm not sure
>  my mail server will take a 30MB attachment :)
>
>  Eric
>
>

Hi

I have done a test with rsync. These are the results:


root@bsd7:/var/rsync RSYNC_PASSWORD='xxx' time rsync -av
rsync://backup@nest/data/FILE .
receiving file list ... done
FILE

sent 126 bytes  received 1048704154 bytes  25893932.84 bytes/sec
total size is 1048576000  speedup is 1.00
       39.56 real         6.70 user         5.48 sys

root@bsd7:/var/rsync RSYNC_PASSWORD='xxx' time rsync -av FILE
rsync://backup@nest/data/FILE1
building file list ... done
FILE

sent 1048704085 bytes  received 38 bytes  83896329.84 bytes/sec
total size is 1048576000  speedup is 1.00
       12.64 real         5.31 user         3.79 sys
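
(By rsync's own numbers that is roughly 25 MB/s receiving and about
80 MB/s sending.)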


As you can see, rsync is much faster than NFS.
Then I have done a test with a Solaris 10 client and a Solaris 10 server:

---
SOLARIS

root@max 17:00:34:~ /usr/local/bin/iozone -r 2m -+q 1 -i 0 -n 2048 -g
8m -Raceb iozone.xls -f
/mnt/nest.ifom-ieo-campus.it/iozone.solaris/iozone.tmp
              KB  reclen   write rewrite
            2048    2048   38507   38278
            4096    2048   54309   63908
            8192    2048   60082   69817


They are quite fast as well.

Valerio

Re: Bad performance of 7.0 nfs client with Solaris nfs server

Kris Kennaway-3
Valerio Daelli wrote:

> As you can see, rsync is much faster than NFS.
> Then I have done a test with a Solaris 10 client and a Solaris 10 server:
> [...]
>
What NFS parameters?

Kris

Re: Bad performance of 7.0 nfs client with Solaris nfs server

Valerio daelli-2
On Thu, Feb 21, 2008 at 10:57 AM, Kris Kennaway <[hidden email]> wrote:

> [...]
>
>  What NFS parameters?
>
>  Kris
>

Sorry:

NFS via TCP
/mnt/nest.ifom-ieo-campus.it on
nest.ifom-ieo-campus.it:/data/export/hosts/bsd7.ifom-ieo-campus.it/
remote/read/write/setuid/devices/rsize=32768/wsize=32768/xattr/dev=4700003
on Thu Feb 21 11:01:04 2008
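
For reference, that corresponds to a Solaris mount command along these
lines (a sketch, assuming NFSv3 over TCP with 32k transfer sizes):

    mount -F nfs -o vers=3,proto=tcp,rsize=32768,wsize=32768 \
        nest.ifom-ieo-campus.it:/data/export/hosts/bsd7.ifom-ieo-campus.it \
        /mnt/nest.ifom-ieo-campus.it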

Re: Bad performance of 7.0 nfs client with Solaris nfs server

Kris Kennaway-3
Valerio Daelli wrote:

> On Thu, Feb 21, 2008 at 10:57 AM, Kris Kennaway <[hidden email]> wrote:
>> [...]
>>  What NFS parameters?
>
> Sorry:
>
> NFS via TCP
> /mnt/nest.ifom-ieo-campus.it on
> nest.ifom-ieo-campus.it:/data/export/hosts/bsd7.ifom-ieo-campus.it/
> remote/read/write/setuid/devices/rsize=32768/wsize=32768/xattr/dev=4700003
> on Thu Feb 21 11:01:04 2008

Thanks, and can you remind me what your FreeBSD performance numbers are
with TCP + 32k rsize/wsize?

Kris

Re: Bad performance of 7.0 nfs client with Solaris nfs server

David O'Brien
In reply to this post by Kris Kennaway-3
On Wed, Feb 20, 2008 at 10:42:45PM +0100, Kris Kennaway wrote:
> [...]
> TCP mounts should be used whenever possible these days (I flipped the
> default mode in 8.0 the other day).

And I made TCP mounts the default for Amd over a year ago.  NFS really
has moved on to TCP these days.
 
--
-- David  ([hidden email])

Re: Bad performance of 7.0 nfs client with Solaris nfs server

Chuck Swiger-2
On Feb 22, 2008, at 1:58 AM, David O'Brien wrote:
> On Wed, Feb 20, 2008 at 10:42:45PM +0100, Kris Kennaway wrote:
>> [...]
>> TCP mounts should be used whenever possible these days (I flipped the
>> default mode in 8.0 the other day).
>
> And I made TCP mounts the default for Amd over a year ago.  NFS really
> has moved on to TCP these days.

Thanks for the feedback, gentlemen.  Hopefully it will also help the  
OP...

--
-Chuck


Re: Bad performance of 7.0 nfs client with Solaris nfs server

Valerio daelli-2
In reply to this post by Eric Anderson-13
>  Just now got a chance to look at the trace.  It looks like FILE_SYNC is
>  enabled on the write, which will cause the filer to fully commit the
>  block (8k in this case) to disk before replying.  This will usually hurt
>  performance.  I'm not certain where it is getting set, but you might try
>  some mount options, like 'async' mode.  This might also be a bug in
>  FreeBSD that is forcing it to be enabled all the time.  I'll look
>  through some source code and see what I can find.
>
>  Eric
>
>

Hi

I have yes solved this issue and I have another test.
Now the mount is sync (not async) and the iozone run includes
the -D flag.
With that, write performance jumps from 3MB/s to 30MB/s.

---
root@bsd7:~ iozone -D -+q 1 -i 0 -i 1 -r 2048 -n 2048 -g 2G -Raceb
iozone.xls -f /mnt/nest.ifom-ieo-campus.it/iozone/file.tmp
        Iozone: Performance Test of File I/O
                Version $Revision: 3.283 $
                Compiled for 32 bit mode.
                Build: freebsd

        Contributors:William Norcott, Don Capps, Isom Crawford, Kirby Collins
                     Al Slater, Scott Rhine, Mike Wisner, Ken Goss
                     Steve Landherr, Brad Smith, Mark Kelly, Dr. Alain CYR,
                     Randy Dunlap, Mark Montague, Dan Million,
                     Jean-Marc Zucconi, Jeff Blomberg, Benny Halevy,
                     Erik Habbinga, Kris Strecker, Walter Wong.

        Run began: Mon Mar 17 11:06:28 2008

        Using msync(MS_ASYNC) on mmap files
        Delay 1 seconds between tests enabled.
        Record Size 2048 KB
        Using minimum file size of 2048 kilobytes.
        Using maximum file size of 2097152 kilobytes.
        Excel chart generation enabled
        Auto Mode
        Include close in write timing
        Include fsync in write timing
        Command line used: iozone -D -+q 1 -i 0 -i 1 -r 2048 -n 2048 -g 2G
-Raceb iozone.xls -f /mnt/nest.ifom-ieo-campus.it/iozone/file.tmp
        Output is in Kbytes/sec
        Time Resolution = 0.000004 seconds.
        Processor cache size set to 1024 Kbytes.
        Processor cache line size set to 32 bytes.
        File stride size set to 17 * record size.
              KB  reclen   write rewrite    read    reread
            2048    2048   49419   49755   629565   632905
            4096    2048    7713   47431   625536   616224
            8192    2048   28479   49564   630012   620276
           16384    2048   26492   49515   631681   621500
           32768    2048   13030   49572   631771   617552
           65536    2048   24907   37586^C
---

Notice that iozone now reports "Using msync(MS_ASYNC) on mmap files"
(I am not a kernel expert, so I am not sure whether that is related to
our problem). Without the -D flag we get 3MB/s with iozone.
Thanks for your help!

Valerio Daelli

Re: Bad performance of 7.0 nfs client with Solaris nfs server

Valerio daelli-2
>
>  I have yes solved this issue and I have another test.

^^^ I haven't yet solved this issue

Sorry.
