Re: mbuf_jumbo_9k & iSCSI failing


Re: mbuf_jumbo_9k & iSCSI failing

Ryan Stone-2
Is this setup using the mlx4_en driver?  If so, recent versions of that
driver have a regression when using MTUs greater than the page size (4096 on
i386/amd64).  The bug will cause the card to drop packets when the system
is under memory pressure, and in certain cases the card can get into a
state in which it is no longer able to receive packets.  I am working on a
fix; I can post a patch when it's complete.
_______________________________________________
[hidden email] mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-scsi
To unsubscribe, send any mail to "[hidden email]"

Re: mbuf_jumbo_9k & iSCSI failing

Ben RUBSON
> On 25 Jun 2017, at 17:14, Ryan Stone <[hidden email]> wrote:
>
> Is this setup using the mlx4_en driver?  If so, recent versions of that driver have a regression when using MTUs greater than the page size (4096 on i386/amd64).  The bug will cause the card to drop packets when the system is under memory pressure, and in certain cases the card can get into a state in which it is no longer able to receive packets.  I am working on a fix; I can post a patch when it's complete.

Thank you very much for your feedback, Ryan.

Yes, my system is using the mlx4_en driver, directly from the FreeBSD 11.0 source tree.
Is there any indicator I could check to be sure I'm experiencing the issue you are working on?

It sounds like I may be suffering from it anyway...
Of course I would be glad to help test your patch when it's complete.

Thank you again,

Ben


Re: mbuf_jumbo_9k & iSCSI failing

Ryan Stone-2
In reply to this post by Ryan Stone-2
Having looked at the original email more closely, I see that you showed an
mlxen interface with a 9020 MTU.  Seeing allocation failures of 9k mbuf
clusters increase while you are far below the zone's limit means that
you're definitely running into the bug I'm describing, and this bug could
plausibly cause the iSCSI errors that you describe.

The issue is that the newer version of the driver tries to allocate a
single buffer to accommodate an MTU-sized packet.  Over time, however,
memory will become fragmented and eventually it can become impossible to
allocate a 9k physically contiguous buffer.  When this happens the driver
is unable to allocate buffers to receive packets and is forced to drop
them.  Presumably, if iSCSI suffers too many packet drops it will terminate
the connection.  The older version of the driver limited itself to
page-sized buffers, so it was immune to issues with memory fragmentation.

Re: mbuf_jumbo_9k & iSCSI failing

Ben RUBSON
> On 25 Jun 2017, at 17:32, Ryan Stone <[hidden email]> wrote:
>
> Having looked at the original email more closely, I see that you showed an mlxen interface with a 9020 MTU.  Seeing allocation failures of 9k mbuf clusters increase while you are far below the zone's limit means that you're definitely running into the bug I'm describing, and this bug could plausibly cause the iSCSI errors that you describe.
>
> The issue is that the newer version of the driver tries to allocate a single buffer to accommodate an MTU-sized packet.  Over time, however, memory will become fragmented and eventually it can become impossible to allocate a 9k physically contiguous buffer.  When this happens the driver is unable to allocate buffers to receive packets and is forced to drop them.  Presumably, if iSCSI suffers too many packet drops it will terminate the connection.  The older version of the driver limited itself to page-sized buffers, so it was immune to issues with memory fragmentation.

Thank you for your explanation, Ryan.
You say "over time", and you're right: I have to wait several days (here 88) before the problem occurs.
Strange, however, that with 2500 MB of free memory the system is unable to find 9k of physically contiguous memory. But we never know :)

Let's wait for your patch then!
(and reboot for now)

Many thx !

Ben

Re: mbuf_jumbo_9k & iSCSI failing

Edward Tomasz Napierała
In reply to this post by Ryan Stone-2
2017-06-25 16:32 GMT+01:00 Ryan Stone <[hidden email]>:

> Having looked at the original email more closely, I see that you showed an
> mlxen interface with a 9020 MTU.  Seeing allocation failures of 9k mbuf
> clusters increase while you are far below the zone's limit means that
> you're definitely running into the bug I'm describing, and this bug could
> plausibly cause the iSCSI errors that you describe.
>
> The issue is that the newer version of the driver tries to allocate a
> single buffer to accommodate an MTU-sized packet.  Over time, however,
> memory will become fragmented and eventually it can become impossible to
> allocate a 9k physically contiguous buffer.  When this happens the driver
> is unable to allocate buffers to receive packets and is forced to drop
> them.  Presumably, if iSCSI suffers too many packet drops it will terminate
> the connection.  [..]


More specifically, it will terminate the connection when there's no "ping
reply" from the other side for the configured amount of time, which
defaults to five seconds.  It can be changed using the
kern.iscsi.ping_timeout sysctl, as described in iscsi(4).

Re: mbuf_jumbo_9k & iSCSI failing

Andrey V. Elsukov
In reply to this post by Ryan Stone-2
On 25.06.2017 18:32, Ryan Stone wrote:

> Having looked at the original email more closely, I see that you showed an
> mlxen interface with a 9020 MTU.  Seeing allocation failures of 9k mbuf
> clusters increase while you are far below the zone's limit means that
> you're definitely running into the bug I'm describing, and this bug could
> plausibly cause the iSCSI errors that you describe.
>
> The issue is that the newer version of the driver tries to allocate a
> single buffer to accommodate an MTU-sized packet.  Over time, however,
> memory will become fragmented and eventually it can become impossible to
> allocate a 9k physically contiguous buffer.  When this happens the driver
> is unable to allocate buffers to receive packets and is forced to drop
> them.  Presumably, if iSCSI suffers too many packet drops it will terminate
> the connection.  The older version of the driver limited itself to
> page-sized buffers, so it was immune to issues with memory fragmentation.
I think it is not an mlxen-specific problem; we see the same symptoms with
the ixgbe(4) driver too. To avoid the problem we have patches that disable
the use of 9k mbufs and use only 4k mbufs instead.

--
WBR, Andrey V. Elsukov



Re: mbuf_jumbo_9k & iSCSI failing

Andrey V. Elsukov
On 26.06.2017 16:27, Ben RUBSON wrote:

>
>> On 26 Jun 2017, at 15:13, Andrey V. Elsukov <[hidden email]> wrote:
>>
>> I think it is not an mlxen-specific problem; we see the same symptoms with
>> the ixgbe(4) driver too. To avoid the problem we have patches that disable
>> the use of 9k mbufs and use only 4k mbufs instead.
>
> Interesting feedback, Andrey, thank you!
> The problem may then be "general".
> So as a workaround you still use a large MTU (>=9000) but allocate only 4k mbufs?
Yes.

--
WBR, Andrey V. Elsukov



Re: mbuf_jumbo_9k & iSCSI failing

Ben RUBSON

> On 26 Jun 2017, at 15:25, Andrey V. Elsukov <[hidden email]> wrote:
>
> On 26.06.2017 16:27, Ben RUBSON wrote:
>> [..]
>> So as a workaround you still use a large MTU (>=9000) but allocate only 4k mbufs?
>
> Yes.

Is it a kernel patch or a driver/ixgbe patch?

Re: mbuf_jumbo_9k & iSCSI failing

Andrey V. Elsukov
On 26.06.2017 16:29, Ben RUBSON wrote:

> [..]
> Is it a kernel patch or a driver/ixgbe patch ?
I attached it.

--
WBR, Andrey V. Elsukov

Attachment: 0004-Add-m_preferredsize-and-use-it-in-all-intel-s-driver.patch (6K)

Re: mbuf_jumbo_9k & iSCSI failing

Julien Cigar-4
In reply to this post by Andrey V. Elsukov
On Mon, Jun 26, 2017 at 04:13:33PM +0300, Andrey V. Elsukov wrote:

> On 25.06.2017 18:32, Ryan Stone wrote:
> > [..]
>
> I think it is not an mlxen-specific problem; we see the same symptoms with
> the ixgbe(4) driver too. To avoid the problem we have patches that disable
> the use of 9k mbufs and use only 4k mbufs instead.
I had the same issue on a lightly loaded HP DL20 machine (BCM5720
chipset), 8 GB of RAM, running 10.3.  The problem usually happens
within 30 days, with 9k jumbo cluster allocation failures.


--
Julien Cigar
Belgian Biodiversity Platform (http://www.biodiversity.be)
PGP fingerprint: EEF9 F697 4B68 D275 7B11  6A25 B2BB 3710 A204 23C0
No trees were killed in the creation of this message.
However, many electrons were terribly inconvenienced.


Re: mbuf_jumbo_9k & iSCSI failing

Ben RUBSON
In reply to this post by Andrey V. Elsukov
> On 26 Jun 2017, at 15:36, Andrey V. Elsukov <[hidden email]> wrote:
>
> [..]
>
> I attached it.

Thank you!
The idea of new sysctls to enable/disable the workaround is nice.
It should be easy to adapt to mlx4_en while waiting for Ryan's specific work on this driver.

I found a similar issue, reported on 2013-10-28:
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=183381

FreeBSD certainly needs a solid general patch!

Re: mbuf_jumbo_9k & iSCSI failing

Ben RUBSON
In reply to this post by Andrey V. Elsukov

> On 26 Jun 2017, at 15:13, Andrey V. Elsukov <[hidden email]> wrote:
>
> I think it is not an mlxen-specific problem; we see the same symptoms with
> the ixgbe(4) driver too. To avoid the problem we have patches that disable
> the use of 9k mbufs and use only 4k mbufs instead.

Another workaround is to decrease the MTU until 9k mbufs are no longer used.
On my systems this gives a 4072-byte MTU.
It solved the issue without having to reboot.
Of course it's just a workaround, as decreasing the MTU increases overhead...

Re: mbuf_jumbo_9k & iSCSI failing

YongHyeon PYUN
In reply to this post by Julien Cigar-4
On Mon, Jun 26, 2017 at 03:44:58PM +0200, Julien Cigar wrote:

> On Mon, Jun 26, 2017 at 04:13:33PM +0300, Andrey V. Elsukov wrote:
> > [..]
>
> I had the same issue on a lightly loaded HP DL20 machine (BCM5720
> chipset), 8 GB of RAM, running 10.3.  The problem usually happens
> within 30 days, with 9k jumbo cluster allocation failures.
>
>

This looks strange to me.  If I recall correctly, bge(4) does not
request physically contiguous 9k jumbo buffers for the BCM5720, so it
wouldn't suffer from memory fragmentation.  (It uses m_cljget() and
takes advantage of extended RX BDs to handle up to 4 DMA segments.)
If your controller is a BCM5714/BCM5715 or BCM5780, however, it does
require physically contiguous 9k jumbo buffers to handle jumbo
frames.

Re: mbuf_jumbo_9k & iSCSI failing

Matt Joras
In reply to this post by Andrey V. Elsukov
On Mon, Jun 26, 2017 at 6:36 AM, Andrey V. Elsukov <[hidden email]> wrote:

> On 26.06.2017 16:29, Ben RUBSON wrote:
>> [..]
>> Is it a kernel patch or a driver/ixgbe patch ?
>
> I attached it.

I didn't think that ixgbe(4) still suffered from this problem, and we
use it in the same situations rstone mentioned above. Indeed, ixgbe(4)
doesn't presently suffer from this problem (you can see that in your
patch, as it only effectively changes the other drivers), though
it used to. It looks like it was first fixed in r280182.
_______________________________________________
[hidden email] mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-scsi
To unsubscribe, send any mail to "[hidden email]"
Reply | Threaded
Open this post in threaded view
|  
Report Content as Inappropriate

Re: mbuf_jumbo_9k & iSCSI failing

Andrey V. Elsukov
On 26.06.2017 19:26, Matt Joras wrote:
> I didn't think that ixgbe(4) still suffered from this problem, and we
> use it in the same situations rstone mentioned above. Indeed, ixgbe(4)
> doesn't presently suffer from this problem (you can see that in your
> patch, as it is only effectively changing the other drivers), though
> it used to. It looks like it was first fixed in r280182.
>

Yes, we have actually carried this patch since 8.x. Recent drivers aren't
affected by this problem. iflib also has this code:

#ifndef CONTIGMALLOC_WORKS
        else
                fl->ifl_buf_size = MJUMPAGESIZE;
#else
        else if (sctx->isc_max_frame_size <= 4096)
                fl->ifl_buf_size = MJUMPAGESIZE;
        else if (sctx->isc_max_frame_size <= 9216)
                fl->ifl_buf_size = MJUM9BYTES;
        else
                fl->ifl_buf_size = MJUM16BYTES;
#endif

which, it seems, does not use 9k-16k mbufs by default.

--
WBR, Andrey V. Elsukov



Re: mbuf_jumbo_9k & iSCSI failing

Zaphod Beeblebrox-2
In reply to this post by Ben RUBSON
Don't forget that, as I understand it, the network stack generally suffers
from the same problem with 9k buffers.

On Sun, Jun 25, 2017 at 12:56 PM, Ben RUBSON <[hidden email]> wrote:

> [..]

Re: mbuf_jumbo_9k & iSCSI failing

Ryan Stone-2
I've just put up a review that fixes mlx4_en to no longer use clusters
larger than PAGE_SIZE in its receive path.  The patch is based on the
older version of the driver, which did the same, but keeps all of the
changes to the driver since then (including support for bus_dma).  The
review can be found here:

https://reviews.freebsd.org/D11560