
Conflict between EVPN L3 multicast and EVPN L2 multicast #3462

Open
krikoon73 opened this issue Dec 27, 2023 · 4 comments
Labels
state: stale type: bug Something isn't working

Comments

@krikoon73

Issue Summary

Hi,
Context:

  • 1x Mcast VRF with 10x SVIs/VLANs (210 to 219)
  • 1x mcast underlay group for the Mcast VRF
  • 1x underlay group for just one VLAN (210)

Issue:

  • The generated configuration is the following:
interface vxlan 1
...
   vxlan vlan 210 multicast group 232.231.0.210
   vxlan vrf MCAST multicast group 232.232.0.21
...
router bgp 65101
...
   vlan 210
      rd 10.255.0.3:10210
      route-target both 10210:10210
      redistribute igmp
      redistribute learned
   vrf MCAST
      rd 10.255.0.3:21
      evpn multicast
      route-target import evpn 21:21
      route-target export evpn 21:21
      router-id 10.255.0.3
      redistribute connected
  • According to the TOI => https://www.arista.com/en/support/toi/eos-4-25-1f/14670-multicast-evpn-irb, for this VLAN:
    • redistribute igmp should not be configured in the control plane, because there is an SVI associated with that VLAN
    • for the data plane, there are two options (see the sketch below):
      • vxlan vlan 210 multicast group 232.231.0.210 => takes precedence over the VRF mcast group for OISM+BUM
      • vxlan vlan 210 flood group 232.231.0.210 => optimizes BUM traffic only
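
For reference, here is a minimal sketch of what the corrected output could look like under the first option (keeping the per-VLAN underlay group, which per the TOI takes precedence, while dropping redistribute igmp under the VLAN). This is only my reading of the TOI applied to the generated config above, not verified AVD or EOS output:

interface vxlan 1
   vxlan vlan 210 multicast group 232.231.0.210
   vxlan vrf MCAST multicast group 232.232.0.21
!
router bgp 65101
   vlan 210
      rd 10.255.0.3:10210
      route-target both 10210:10210
      redistribute learned
      ! no "redistribute igmp" here, since VLAN 210 has an SVI in the OISM-enabled VRF
   vrf MCAST
      rd 10.255.0.3:21
      evpn multicast
      route-target import evpn 21:21
      route-target export evpn 21:21
      router-id 10.255.0.3
      redistribute connected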

Can you confirm this and, if needed, fix it in the model?

Best Regards,
Christophe

Which component(s) of AVD impacted

eos_designs

How do you run AVD?

Ansible CLI (with virtual-env or native python)

Steps to reproduce

tenants:
  # Definition of tenants. Additional level of abstraction to VRFs
  - name: Tenant-A
    evpn_l2_multicast:
      underlay_l2_multicast_group_ipv4_pool: 232.231.0.0/20
      underlay_l2_multicast_group_ipv4_pool_offset: 1
    evpn_l3_multicast:
      evpn_underlay_l3_multicast_group_ipv4_pool: 232.232.0.0/20
      evpn_underlay_l3_multicast_group_ipv4_pool_offset: 1
...
    l2vlans:
    vrfs:
...
      - name: MCAST
        vrf_id: 21
        vrf_vni: 21
        vtep_diagnostic:
          loopback: 21
          loopback_description: vtep_diagnostic
          loopback_ip_range: 10.254.0.0/24
        evpn_l3_multicast: 
          enabled: true
          evpn_peg:
            - nodes: [ dc1-blm1a-mrv356, dc1-blm1b-mrv357 ]
              # transit: true 
        svis:
          - id: 210
            name: Flow_Multicast
            profile: GENERIC
            tags: [MCAST]
            evpn_l2_multicast:
              enabled: true
            ip_address_virtual: 100.20.210.1/24
            ipv6_address_virtuals: ["fc00:100:20:210::1/64"]
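
As a hedged illustration only (not a confirmed workaround), the interaction can be isolated by flipping the per-SVI knob already used in the inputs above, so that only the VRF-level OISM group is generated for VLAN 210; my assumption is that this also drops the per-VLAN vxlan multicast group and redistribute igmp from the output:

        svis:
          - id: 210
            name: Flow_Multicast
            profile: GENERIC
            tags: [MCAST]
            evpn_l2_multicast:
              enabled: false  # assumption: no per-VLAN underlay group and no "redistribute igmp" generated
            ip_address_virtual: 100.20.210.1/24
            ipv6_address_virtuals: ["fc00:100:20:210::1/64"]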

Relevant log output

No response

Contributing Guide

  • I agree to follow this project's Code of Conduct
krikoon73 added the type: bug label on Dec 27, 2023
@mtache
Contributor

mtache commented Jan 2, 2024

#2031 was split into two features; we are interested in #2033.

@ClausHolbechArista
Contributor

Thank you for reporting this.
It is not clear to me if this is actually a bug or a request to get support for flood groups. Please update the issue accordingly. Thanks!

If this is in fact a request to get #2033 implemented, please comment on that issue instead and close this one. We are stretched thin these days, so we will need volunteers to take on the implementation of flood groups.

@krikoon73
Author

Hey Claus,
Thank you for your feedback!
I think there are multiple subjects here:

I'll have cycles beginning in February to discuss this.
Cheers,
/cc


This issue is stale because it has been open 90 days with no activity. The issue will be reviewed by a maintainer and may be closed.
