tap on lagg ?

Vincent Olivier
Hi,

Has anyone succeeded in having a bhyve VM tap interface on an aggregate interface (lagg)? From what I have read so far, this is a known problem, and my experience shows that it is still an issue with FreeBSD 11, so I would like to know whether that is because tapping/bridging a lagg is not a reasonable thing to do anyway…

Regards,

Vincent

Re: tap on lagg ?

Vincent Olivier
BTW, I found this a couple of weeks ago and it has just been updated: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=157182


Re: tap on lagg ?

Ruben
Hi Vincent,

I have multiple machines on which 2 or more NICs make up a LACP lagg
with vlans on it. Those vlan interfaces are in bridges together with the
tap interfaces that are in use by bhyve VMs.

It works as long as I "up" the NICs in a specific fashion ("-tso4 -lro
-vlanhwtag"). This works on 10.3 and 11.0 as far as I'm aware, and I
have never experienced problems with it (Intel/AMD, em driver, bce
driver, re driver, all kinds of combinations).

I have no experience in comparable setups without the vlan "layer" though.
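
For illustration, a minimal rc.conf sketch of such a setup; the em0/em1 port names, the lagg unit, and vlan tag 10 are placeholders, not a verbatim copy of this config:

    # two ports in an LACP lagg, a vlan on top, bridged to a bhyve tap
    ifconfig_em0="up -tso4 -lro -vlanhwtag"
    ifconfig_em1="up -tso4 -lro -vlanhwtag"
    cloned_interfaces="lagg0 vlan10 bridge0 tap0"
    ifconfig_lagg0="laggproto lacp laggport em0 laggport em1 up"
    ifconfig_vlan10="vlan 10 vlandev lagg0 up"
    ifconfig_bridge0="addm vlan10 addm tap0 up"

The tap0 created here would then be handed to bhyve as the VM's network backend.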

What seems to be your predicament?

Regards,

Ruben

Re: tap on lagg ?

Vincent Olivier
Hi Ruben,

Thanks for this.

> On 6 Feb 2017 at 17:14, Ruben <[hidden email]> wrote:
>
> Hi Vincent,
>
> I have multiple machines on which 2 or more NICs make up a LACP lagg
> with vlans on it. Those vlan interfaces are in bridges together with the
> tap interfaces that are in use by bhyve VMs.
>
> It works as long as I "up" the NICs in a specific fashion ("-tso4 -lro
> -vlanhwtag"). This works on 10.3 and 11.0 as far as I'm aware, and I
> have never experienced problems with it (Intel/AMD, em driver, bce
> driver, re driver, all kinds of combinations).


I didn't try disabling the tso/lro/vlanhwtag features. I will try again with those disabled.


> I have no experience in comparable setups without the vlan "layer" though.


My setup didn't involve vlans, only this: tap <-> bridge <-> lagg <-> igb0, igb1, igb2, igb3

Do you think that could be it? I have no need for a vlan here, though…
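
As a sketch, that vlan-less topology corresponds to one-off ifconfig commands roughly like these (the tap would normally be created by the bhyve start script, but is shown manually here):

    ifconfig lagg0 create laggproto lacp \
        laggport igb0 laggport igb1 laggport igb2 laggport igb3 up
    ifconfig tap0 create up
    ifconfig bridge0 create addm lagg0 addm tap0 up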


> What seems to be your predicament?

The tap would fail to "up", with an error message (that I forgot to note).


I will try to do it again with the aforementioned features disabled (but without a vlan layer) and report back here.

Vincent

Re: tap on lagg ?

Dustin Marquess
In reply to this post by Ruben
On Mon, Feb 6, 2017 at 4:14 PM, Ruben <[hidden email]> wrote:

> I have multiple machines on which 2 or more NICs make up a LACP lagg
> with vlans on it. Those vlan interfaces are in bridges together with the
> tap interfaces that are in use by bhyve VMs.
>
> It works as long as I "up" the NICs in a specific fashion ("-tso4 -lro
> -vlanhwtag"). This works on 10.3 and 11.0 as far as I'm aware, and I
> have never experienced problems with it (Intel/AMD, em driver, bce
> driver, re driver, all kinds of combinations).
>
> I have no experience in comparable setups without the vlan "layer" though.

I've gotten it to work both with and without vlan just by using
-vlanhwtag.  Leaving lro & tso4 enabled works fine.  In fact,
everything works fine with vlanhwtag enabled until I throw epair into
the mix.  This is using cxgbe/cxl.
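
As a sketch of this variant, each member port then only needs the one capability cleared, e.g. in rc.conf (cxl0 as an example name):

    ifconfig_cxl0="up -vlanhwtag"   # lro and tso4 deliberately left enabled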

-Dustin

Re: tap on lagg ?

Vincent Olivier
> I've gotten it to work both with and without vlan just by using
> -vlanhwtag.  Leaving lro & tso4 enabled works fine.  In fact,
> everything works fine with vlanhwtag enabled until I throw epair into
> the mix.  This is using cxgbe/cxl.

For the sake of completeness: I have Chelsio cxgb devices in the machine I'm testing this on, but I am pretty certain I was only using the Intel igb devices in the aggregate. I am also pretty sure that was without the -vlanhwtag flag, which I will now test ASAP.

Re: tap on lagg ?

Ruben
In reply to this post by Dustin Marquess
Hi Dustin,


On 07/02/17 03:52, Dustin Marquess wrote:

> I've gotten it to work both with and without vlan just by using
> -vlanhwtag.  Leaving lro & tso4 enabled works fine.
Interesting, Dustin; I will try re-enabling those features in the months
to come.
Kind regards,

Ruben

Re: tap on lagg ?

Vincent Olivier
In reply to this post by Vincent Olivier
Hello,

Sorry for waiting so long. I don't know if I'm doing it right, but I tried "-vlanhwtag" on all the interfaces and I'm still having problems. Namely (I didn't have this information before): all participating interfaces in the bridge put themselves in promiscuous mode, and (if that is related) I cannot ssh into the host machine from any bhyve virtual machine. My goal is to be able to ssh in and mount host NFS exports on the VMs. Doing "-promisc" on all the interfaces doesn't change anything. Can someone help? Please find an ifconfig dump below.

Regards,

Vincent


igb0: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 1500
        options=6403ab<RXCSUM,TXCSUM,VLAN_MTU,JUMBO_MTU,VLAN_HWCSUM,TSO4,TSO6,VLAN_HWTSO,RXCSUM_IPV6,TXCSUM_IPV6>
        ether 54:a0:50:88:88:c6
        nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
        media: Ethernet 1000baseT <full-duplex>
        status: active
igb1: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 1500
        options=6403ab<RXCSUM,TXCSUM,VLAN_MTU,JUMBO_MTU,VLAN_HWCSUM,TSO4,TSO6,VLAN_HWTSO,RXCSUM_IPV6,TXCSUM_IPV6>
        ether 54:a0:50:88:88:c6
        nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
        media: Ethernet 1000baseT <full-duplex>
        status: active
igb2: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 1500
        options=6403ab<RXCSUM,TXCSUM,VLAN_MTU,JUMBO_MTU,VLAN_HWCSUM,TSO4,TSO6,VLAN_HWTSO,RXCSUM_IPV6,TXCSUM_IPV6>
        ether 54:a0:50:88:88:c6
        nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
        media: Ethernet 1000baseT <full-duplex>
        status: active
igb3: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 1500
        options=6403ab<RXCSUM,TXCSUM,VLAN_MTU,JUMBO_MTU,VLAN_HWCSUM,TSO4,TSO6,VLAN_HWTSO,RXCSUM_IPV6,TXCSUM_IPV6>
        ether 54:a0:50:88:88:c6
        nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
        media: Ethernet 1000baseT <full-duplex>
        status: active
cxl0: flags=8802<BROADCAST,SIMPLEX,MULTICAST> metric 0 mtu 1500
        options=ec07bb<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,TSO4,TSO6,LRO,VLAN_HWTSO,LINKSTATE,RXCSUM_IPV6,TXCSUM_IPV6>
        ether 00:07:43:37:47:70
        nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
        media: Ethernet none
        status: no carrier
cxl1: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 9000
        options=ec07bb<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,TSO4,TSO6,LRO,VLAN_HWTSO,LINKSTATE,RXCSUM_IPV6,TXCSUM_IPV6>
        ether 00:07:43:37:47:78
        inet 192.168.11.5 netmask 0xffffff00 broadcast 192.168.11.255
        nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
        media: Ethernet 10Gbase-Twinax <full-duplex>
        status: active
lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> metric 0 mtu 16384
        options=600003<RXCSUM,TXCSUM,RXCSUM_IPV6,TXCSUM_IPV6>
        inet6 ::1 prefixlen 128
        inet6 fe80::1%lo0 prefixlen 64 scopeid 0x7
        inet 127.0.0.1 netmask 0xff000000
        nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
        groups: lo
lagg0: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 1500
        options=6403ab<RXCSUM,TXCSUM,VLAN_MTU,JUMBO_MTU,VLAN_HWCSUM,TSO4,TSO6,VLAN_HWTSO,RXCSUM_IPV6,TXCSUM_IPV6>
        ether 54:a0:50:88:88:c6
        inet 192.168.1.23 netmask 0xffffff00 broadcast 192.168.1.255
        nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
        media: Ethernet autoselect
        status: active
        groups: lagg
        laggproto lacp lagghash l2,l3,l4
        laggport: igb0 flags=1c<ACTIVE,COLLECTING,DISTRIBUTING>
        laggport: igb1 flags=1c<ACTIVE,COLLECTING,DISTRIBUTING>
        laggport: igb2 flags=1c<ACTIVE,COLLECTING,DISTRIBUTING>
        laggport: igb3 flags=0<>
bridge0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
        description: vm-lan1g
        ether 02:f7:d6:01:1a:00
        nd6 options=1<PERFORMNUD>
        groups: bridge
        id 00:00:00:00:00:00 priority 32768 hellotime 2 fwddelay 15
        maxage 20 holdcnt 6 proto rstp maxaddr 2000 timeout 1200
        root id 00:00:00:00:00:00 priority 32768 ifcost 0 port 0
        member: tap1 flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
                ifmaxaddr 0 port 11 priority 128 path cost 2000000
        member: tap0 flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
                ifmaxaddr 0 port 10 priority 128 path cost 2000000
        member: lagg0 flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
                ifmaxaddr 0 port 8 priority 128 path cost 6666
tap0: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 1500
        description: vmnet-unifi-0-lan1g
        options=80000<LINKSTATE>
        ether 00:bd:b9:51:fa:00
        nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
        media: Ethernet autoselect
        status: active
        groups: tap
        Opened by PID 1523
tap1: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 1500
        description: vmnet-docker-0-lan1g
        options=80000<LINKSTATE>
        ether 00:bd:41:36:d7:01
        nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
        media: Ethernet autoselect
        status: active
        groups: tap
        Opened by PID 16378

> On 7 Feb 2017 at 03:53, Ruben <[hidden email]> wrote:
>
> Hi Vincent,
>
>> I didn't try disabling the tso/lro/vlanhwtag features. I will try again with those disabled.
>>
>>
>>> I have no experience in comparable setups without the vlan "layer" though.
>>
>> My setup didn't involve vlans, only this: tap <-> bridge <-> lagg <-> igb0, igb1, igb2, igb3
>>
>> Do you think that could be it? I have no need for a vlan here, though…
>>
>>
>>> What seems to be your predicament?
>> The tap would fail to "up", with an error message (that I forgot to note).
>
> I haven't had any trouble "upping" taps (even with the offloading
> features enabled) but since I mostly use the
>
> net.link.tap.up_on_open=1
>
> sysctl setting I can't say I have manually upped them a lot (and didn't
> look at logfiles that much since stuff just worked).
>
>
>>
>>
>> I will try to do it again with the aforementioned features disabled (but without a vlan layer) and report back here.
>
> I'm curious about your findings!
>
> Regards,
>
> Ruben
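
For persistence, the net.link.tap.up_on_open setting Ruben quotes would go in /etc/sysctl.conf; a minimal sketch:

    # bring tap interfaces up automatically when a VM opens them
    net.link.tap.up_on_open=1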


Re: tap on lagg ?

Harry Schmalzbauer
Regarding Vincent Olivier's message from 20.03.2017 23:32 (localtime):

I'd go for tcpdump.
First, check that routing is not an issue. In your setup I guess the
VMs' IPs are in the 192.168.1.0/24 network, correct?
Otherwise, make sure your default gateway does
routing/deflection/ICMP redirection.

Then watch 'tcpdump -n -e -s 150 -i bridge0' on the host, and the like
inside your VM (vtnet?).
Start with ping and check whether ARP is working.
Also, 'arp -a' on both host and VM provides fundamental information for
finding the problem.
If ARP and ICMP (ping) work but TCP (ssh) does not, it is most likely
PMTU or offloading related.
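
Collected as commands (the VM-side interface name vtnet0 is an assumption; 192.168.1.23 is the host's lagg0 address from the dump above):

    # on the host
    tcpdump -n -e -s 150 -i bridge0
    arp -a
    # inside the VM
    tcpdump -n -e -s 150 -i vtnet0
    ping 192.168.1.23
    arp -a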

-harry



Re: tap on lagg ?

Harry Schmalzbauer
Regarding Vincent Olivier's message from 21.03.2017 16:51 (localtime):
> Hi, I can confirm that ping works, but ssh, and (I haven't tried anything else, but I assume you are right) TCP as a whole, doesn't work.
>
> From there I guess that, since I haven't changed the MTU on the 1G interfaces (only on the 10G ones, which are isolated from the 1G network), this leaves offloading.
>
> Should I disable it (and which ones)? On all the physical interfaces, or also on the lagg and maybe the bridge?

You seem to have the following problem:
if_bridge(4) tries to disable TXCSUM on all members added.
But you add if_lagg(4), which doesn't pass those requests on to its
members; it simply ignores the request.
So you need to manually set -txcsum (-txcsum6), e.g. in rc.conf when you
bring them "up".

Unfortunately I don't know how offloading is implemented in general, nor
how it works for if_igb(4), so I haven't yet thought about the reason
why you need to disable TXCSUM.
Much more important: does it also affect TSO?
I can't tell; maybe someone with more knowledge can jump in.

-harry


Re: tap on lagg ?

Vincent Olivier
Hi,

So all in all, I added "-vlanhwtag -txcsum -txcsum6" on all 4 igb interfaces. They still all put themselves in promiscuous mode, but TCP from a VM to the host now works. Please find the relevant dmesg and ifconfig output below.
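
For reference, a sketch of the whole layout as rc.conf lines, reconstructed from the ifconfig dump below (the taps are created by the VM tooling rather than rc.conf, so they are omitted):

    ifconfig_igb0="up -vlanhwtag -txcsum -txcsum6"
    ifconfig_igb1="up -vlanhwtag -txcsum -txcsum6"
    ifconfig_igb2="up -vlanhwtag -txcsum -txcsum6"
    ifconfig_igb3="up -vlanhwtag -txcsum -txcsum6"
    cloned_interfaces="lagg0 bridge0"
    ifconfig_lagg0="laggproto lacp laggport igb0 laggport igb1 laggport igb2 laggport igb3 192.168.1.23 netmask 255.255.255.0"
    ifconfig_bridge0="addm lagg0 up"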

But I'd still appreciate more background information on this. Is "-vlanhwtag" still required? Should I expect a traffic slowdown on these interfaces? Will it bring the CPU to its knees? Why do I still get "bridge0: error setting interface capabilities on lagg0"?

Thanks!

Vincent

lagg0: link state changed to UP
igb3: link state changed to UP
igb1: link state changed to UP
igb2: link state changed to UP
bridge0: Ethernet address: 02:f7:d6:01:1a:00
bridge0: link state changed to UP
igb0: promiscuous mode enabled
igb1: promiscuous mode enabled
igb2: promiscuous mode enabled
igb3: promiscuous mode enabled
lagg0: promiscuous mode enabled
tap0: Ethernet address: 00:bd:29:bf:f8:00
bridge0: error setting interface capabilities on lagg0
tap0: promiscuous mode enabled
tap0: link state changed to UP
tap1: Ethernet address: 00:bd:1a:e6:f8:01
bridge0: error setting interface capabilities on lagg0
tap1: promiscuous mode enabled
tap1: link state changed to UP



igb0: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 1500
        options=2403a9<RXCSUM,VLAN_MTU,JUMBO_MTU,VLAN_HWCSUM,TSO4,TSO6,VLAN_HWTSO,RXCSUM_IPV6>
        ether 54:a0:50:88:88:c6
        nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
        media: Ethernet 1000baseT <full-duplex>
        status: active
igb1: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 1500
        options=2403a9<RXCSUM,VLAN_MTU,JUMBO_MTU,VLAN_HWCSUM,TSO4,TSO6,VLAN_HWTSO,RXCSUM_IPV6>
        ether 54:a0:50:88:88:c6
        nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
        media: Ethernet 1000baseT <full-duplex>
        status: active
igb2: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 1500
        options=2403a9<RXCSUM,VLAN_MTU,JUMBO_MTU,VLAN_HWCSUM,TSO4,TSO6,VLAN_HWTSO,RXCSUM_IPV6>
        ether 54:a0:50:88:88:c6
        nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
        media: Ethernet 1000baseT <full-duplex>
        status: active
igb3: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 1500
        options=2403a9<RXCSUM,VLAN_MTU,JUMBO_MTU,VLAN_HWCSUM,TSO4,TSO6,VLAN_HWTSO,RXCSUM_IPV6>
        ether 54:a0:50:88:88:c6
        nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
        media: Ethernet 1000baseT <full-duplex>
        status: active
cxl0: flags=8802<BROADCAST,SIMPLEX,MULTICAST> metric 0 mtu 1500
        options=ec07bb<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,TSO4,TSO6,LRO,VLAN_HWTSO,LINKSTATE,RXCSUM_IPV6,TXCSUM_IPV6>
        ether 00:07:43:37:47:70
        nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
        media: Ethernet none
        status: no carrier
cxl1: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 9000
        options=ec07bb<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,TSO4,TSO6,LRO,VLAN_HWTSO,LINKSTATE,RXCSUM_IPV6,TXCSUM_IPV6>
        ether 00:07:43:37:47:78
        inet 192.168.11.5 netmask 0xffffff00 broadcast 192.168.11.255
        nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
        media: Ethernet 10Gbase-Twinax <full-duplex>
        status: active
lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> metric 0 mtu 16384
        options=600003<RXCSUM,TXCSUM,RXCSUM_IPV6,TXCSUM_IPV6>
        inet6 ::1 prefixlen 128
        inet6 fe80::1%lo0 prefixlen 64 scopeid 0x7
        inet 127.0.0.1 netmask 0xff000000
        nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
        groups: lo
lagg0: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 1500
        options=2403a9<RXCSUM,VLAN_MTU,JUMBO_MTU,VLAN_HWCSUM,TSO4,TSO6,VLAN_HWTSO,RXCSUM_IPV6>
        ether 54:a0:50:88:88:c6
        inet 192.168.1.23 netmask 0xffffff00 broadcast 192.168.1.255
        nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
        media: Ethernet autoselect
        status: active
        groups: lagg
        laggproto lacp lagghash l2,l3,l4
        laggport: igb0 flags=1c<ACTIVE,COLLECTING,DISTRIBUTING>
        laggport: igb1 flags=1c<ACTIVE,COLLECTING,DISTRIBUTING>
        laggport: igb2 flags=1c<ACTIVE,COLLECTING,DISTRIBUTING>
        laggport: igb3 flags=0<>
bridge0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
        description: vm-lan1g
        ether 02:f7:d6:01:1a:00
        nd6 options=1<PERFORMNUD>
        groups: bridge
        id 00:00:00:00:00:00 priority 32768 hellotime 2 fwddelay 15
        maxage 20 holdcnt 6 proto rstp maxaddr 2000 timeout 1200
        root id 00:00:00:00:00:00 priority 32768 ifcost 0 port 0
        member: tap1 flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
                ifmaxaddr 0 port 11 priority 128 path cost 2000000
        member: tap0 flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
                ifmaxaddr 0 port 10 priority 128 path cost 2000000
        member: lagg0 flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
                ifmaxaddr 0 port 8 priority 128 path cost 6666
tap0: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 1500
        description: vmnet-docker-0-lan1g
        options=80000<LINKSTATE>
        ether 00:bd:29:bf:f8:00
        nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
        media: Ethernet autoselect
        status: active
        groups: tap
        Opened by PID 1417
tap1: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 1500
        description: vmnet-unifi-0-lan1g
        options=80000<LINKSTATE>
        ether 00:bd:1a:e6:f8:01
        nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
        media: Ethernet autoselect
        status: active
        groups: tap
        Opened by PID 1698

