Monday, March 5, 2018

IOS and XR Multicast VPN with Profile 1 - MLDP MP2MP PIM Default MDT with BSR and AutoRP


Multipoint GRE tunnels between the PEs were the first flavor of mVPN. Since LDP is already deployed in the SP core, we can leverage the LSPs that have been built there and use mLDP as the transport for the default MDT instead of GRE tunnels between the PEs. This is Default MDT-mLDP, also referred to as Multidirectional Inclusive Provider Multicast Service Interface (MI-PMSI) or Profile 1. The end result is a bidirectional MP2MP LSP that allows the PEs to exchange the customers' PIM communication, instead of the traffic being forwarded over mGRE.

BSR is used in the customer network and Auto RP is used in the SP network.



This is configured on all the IOS XE routers. It specifies the source interface used for the MPLS (Lspvif) interfaces that carry PIM between the PEs when mLDP, rather than GRE, is the transport.

ip pim mpls source Loopback0

One device in the SP core needs to be chosen as the root of the MP2MP tree. Its address is referenced under the VRF address family on each PE for the address families that mLDP applies to, and the VPN ID is specified at this point as well.

IOS-XE
mdt default mpls mldp x.x.x.x 

IOS-XR
mdt default mldp ipv4 x.x.x.x


IOS XE PE routers
vrf definition MCAST
 rd 1:1
 vpn id 1:1
 !
 address-family ipv4
  mdt default mpls mldp 172.16.100.1


*Mar  5 23:49:35.292: MLDP: Reevaluating peers for nhop: 10.1.9.1
*Mar  5 23:49:36.239: %LINEPROTO-5-UPDOWN: Line protocol on Interface Lspvif1, changed state to up
*Mar  5 23:49:36.703: %PIM-5-DRCHG: VRF MCAST: DR change from neighbor 0.0.0.0 to 172.16.100.9 on interface Lspvif1
*Mar  5 23:49:56.565: %PIM-5-NBRCHG: VRF MCAST: neighbor 172.16.100.3 UP on interface Lspvif1 

*Mar  5 23:50:26.520: %PIM-5-NBRCHG: VRF MCAST: neighbor 172.16.100.110 UP on interface Lspvif1 



R9#sh mpls mldp neighbors

 MLDP peer ID    : 172.16.100.1:0, uptime 1d07h Up,
  Target Adj     : No
  Session hndl   : 1
  Upstream count : 1
  Branch count   : 0
  Path count     : 1
  Path(s)        : 10.1.9.1          LDP GigabitEthernet1.19
  Nhop count     : 1
  Nhop list      : 10.1.9.1


R9#sh mpls mldp root

 Root node    : 172.16.100.1
  Metric      : 20
  Distance    : 115
  Interface   : GigabitEthernet1.19 (via unicast RT)
  FEC count   : 1
  Path count  : 1
  Path(s)     : 10.1.9.1         LDP nbr: 172.16.100.1:0    GigabitEthernet1.19


R9#sh mpls mldp bindings
System ID: 1
Type: MP2MP, Root Node: 172.16.100.1, Opaque Len: 14
Opaque value: [mdt 1:1 0]
lsr: 172.16.100.1:0, remote binding[U]: 37, local binding[D]: 44 active


R9#show mpls  forwarding-table labels 44 detail
Local      Outgoing   Prefix           Bytes Label   Outgoing   Next Hop 
Label      Label      or Tunnel Id     Switched      interface           
44         No Label   [mdt 1:1 0][V]   244           aggregate/MCAST
        MAC/Encaps=0/0, MRU=0, Label Stack{}, via Ls1
        VPN route: MCAST
        No output feature configured
    Broadcast

As you can see on R9, local label 44 is bound to the mLDP MDT, and after pinging 224.1.1.1 from R5 we see that 244 bytes of traffic were label switched. This indicates that the customer traffic is being forwarded over the MP2MP LSP.
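If you want to tie the pieces together further, a couple of additional IOS XE show commands are useful on a PE such as R9 (listed here for illustration only; I have not captured their output in this post):

show mpls mldp database
show ip mroute vrf MCAST 224.1.1.1

The first lists the MP2MP FEC with its root and replication branches; the second should show the customer (S,G) entry using the Lspvif interface that mLDP created.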


IOS XR PE routers
mpls ldp
 mldp
 !
vrf MCAST
 vpn id 1:1
 address-family ipv4 unicast
 !
route-policy MDT_mLDP
  set core-tree mldp
end-policy
!
multicast-routing
 vrf MCAST
  address-family ipv4
   mdt default mldp ipv4 172.16.100.1
   !
router pim
 vrf MCAST
  address-family ipv4
   rpf topology route-policy MDT_mLDP


RP/0/0/CPU0:XR9#sh mpls mldp root        
Tue Mar  6 00:13:47.947 UTC
mLDP root database
 Root node    : 172.16.100.1 
  Metric      : 30
  Distance    : 115
  FEC count   : 1
  Path count  : 1

  Path(s)     : 10.11.19.11      LDP nbr: 172.16.100.11:0  



RP/0/0/CPU0:XR9#sh mpls mldp bindings 
Tue Mar  6 00:14:00.916 UTC
mLDP MPLS Bindings database

LSP-ID: 0x00001 Paths: 2 Flags: Pk
 0x00001 MP2MP  172.16.100.1 [mdt 1:1 0]
   Local Label: 24025 Remote: 24021 NH: 10.11.19.11 Inft: GigabitEthernet0/0/0/0.1119 Active

   Local Label: 24024 Remote: 1048577 Inft: LmdtMCAST RPF-ID: 3 TIDv4/v6: 0xE0000011/0xE0800011


Even though the IOS XR PEs are configured, it doesn't appear that Profile 1 is supported in the data plane on IOS XRv, for either XRv 5.3 or XRv 6.0. After I added the configuration to XR9 and XR8 and tested multicast group pings from R5, neither R13's nor R14's interface addresses responded.

I received this message from the console after enabling "debug mpls mldp all"

RP/0/0/CPU0:Mar  6 00:28:18.084 : mpls_ldp[1049]: %ROUTING-MLDP-5-BRANCH_ADD : 0x00001 [mdt 1:1 0] MP2MP 172.16.100.1, Add PIM MDT branch remote label no_label, local label no_label
RP/0/0/CPU0:Mar  6 00:28:20.153 : config[65738]: %MGBL-CONFIG-6-DB_COMMIT : Configuration committed by user 'Rob'. Use 'show configuration commit changes 1000000028' to view the changes.


RP/0/0/CPU0:XR8(config-pim-MCAST-ipv4)#RP/0/0/CPU0:Mar  6 00:28:28.453 : netio[312]: %PKT_INFRA-PAK-3-OWNERSHIP_PROCESS_FREE : Client with pid 434260 tried to access freed packet b0c08a0c (pak pid -434260)  : pkg/bin/netio : (PID=434260) :  -Traceback= e7de732 f7f6997 e7d18b4 f475f26 4225597 422265e d17d445 d17b518 42219ee d1c7050

This indicates to me that there is an issue on the device, or some other problem, that inhibits XRv from forwarding the traffic. The IOS XRv 9000 images behave the same way for L2VPN data plane forwarding. This tells me that data plane support is not available for this profile on these images.

IOS and IOS XR Multicast VPN with the Rosen Model Default MDT and Data MDT with BSR and AutoRP

The default MDT is used for control plane communication between the PEs: PIM joins, prunes, and other messages. It is the equivalent of PIM dense mode in that every PE receives the traffic.
The data MDT is primarily used for bandwidth optimization. Additional groups are created dynamically so that only the interested PEs forward or respond to the traffic, rather than all of them. The idea is that when customer routers send a C-PIM join into the provider network, the PE takes that information and forwards it toward the provider's RP over the provider's own multicast infrastructure; the customer's RP information is likewise passed transparently over the provider network. The data MDT is not a required configuration if the network only sees low bandwidth flows, but if high bandwidth flows are present, moving them from the shared tree to the shortest path tree is necessary to take the optimal path through the network.
On IOS and IOS XE, the command "ip pim ssm default" is needed. It is used to create (S, G) entries in the MRIB for optimized communication.
IOS XR listens for the SSM range by default.
All the IOS routers need their mdt default configuration updated with the mdt data capability, plus the ip pim ssm default command. In our case I am already using the 232.0.0.1 multicast address, which falls in the SSM range, so we don't have to change the address; if it were 239.0.0.1 we would need to move to the 232.x.x.x block. Since BGP IPv4 MDT is already enabled, we don't need to add that. We will use the 232.10.10.0/24 range for the dynamically created data MDT groups.
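As a side note, the mdt data command also accepts an optional bandwidth threshold that controls when a flow is moved off the default MDT. A minimal sketch for an IOS XE PE, assuming the same MCAST VRF (the 2 kbps threshold is just an illustration and is not used in this lab):

vrf definition MCAST
 address-family ipv4
  mdt default 232.0.0.1
  ! flows exceeding 2 kbps are moved from the default MDT onto a dynamically chosen data MDT group
  mdt data 232.10.10.0 0.0.0.255 threshold 2

In this lab the threshold keyword is simply left off, as shown in the configurations below.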
All IOS XE routers in the SP core:

ip pim ssm default



All IOS XE routers that are PEs in the SP:

vrf definition MCAST
 address-family ipv4
  mdt default 232.0.0.1
  mdt data 232.10.10.0 0.0.0.255


All IOS XR PEs in the SP, XR 5.3 and 6.0 (XR 5.3 supports the data plane):

XR8
multicast-routing
 address-family ipv4
  mdt source Loopback0
  interface all enable
 !
 vrf MCAST
  address-family ipv4
   mdt source Loopback0
   interface all enable
   mdt default ipv4 232.0.0.1
   mdt data 232.18.18.0/24
!

XR9
multicast-routing
 address-family ipv4
  mdt source Loopback0
  interface all enable
 !
 vrf MCAST
  address-family ipv4
   mdt source Loopback0
   interface all enable
   mdt default ipv4 232.0.0.1
   mdt data 232.19.19.0/24
!

XR3
multicast-routing
 address-family ipv4
  mdt source Loopback0
  interface all enable
 !
 vrf MCAST
  address-family ipv4
   mdt source Loopback0
   interface all enable
   mdt default ipv4 232.0.0.1
   mdt data 232.13.13.0/24



Now that we have configured all the routers in the SP core, we need to go around and test the functionality. We'll add another multicast group by changing the join group from 224.1.1.1 to 224.1.2.3 on a handful of routers. The idea is that PIM control plane messages are sent via the MDT default tree and the actual multicast flows use the MDT data tree. Depending on the placement of the RP, which in this case is XR9, the traffic should be moved from the shared tree to the shortest path tree. In other words, the traffic should not flow through the RP for the entire flow; rather, it should be pruned off the shared tree and onto the shortest path tree.



This is accomplished with a PIM shared tree prune and a PIM shortest path tree join. During the switchover a slight increase in latency is expected. Not all flows will be switched over: since the RP is XR9, traffic from R14 will never be switched over, as it has to flow through the RP to reach R5 anyway. R5 is the source and R14 is a receiver. Because of this, when R5 pings 224.1.2.3, the ping goes out to all PEs that have joined the appropriate groups.
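To see whether a flow has actually been moved onto a data MDT, a couple of IOS XE commands on the PEs are handy (shown here for illustration; their output isn't captured in this post):

show ip pim vrf MCAST mdt send
show ip pim vrf MCAST mdt receive
show ip mroute vrf MCAST 224.1.2.3

The first shows the data MDT groups this PE is sending on, the second shows the data MDT groups it has joined as a receiver, and the mroute entry shows which tunnel or MDT the customer (S,G) is currently using.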



Typically the SPT switchover happens during the second ping for each receiver (the replies to request 1). Here the response times drop off after the second ping, showing that a switchover occurred.



R5#ping 224.1.1.1 repeat 3
Type escape sequence to abort.
Sending 3, 100-byte ICMP Echos to 224.1.1.1, timeout is 2 seconds:

Reply to request 0 from 172.16.100.120, 147 ms
Reply to request 0 from 10.12.9.12, 139 ms
Reply to request 0 from 10.4.7.7, 104 ms
Reply to request 0 from 10.10.11.11, 93 ms
Reply to request 1 from 10.4.7.7, 108 ms
Reply to request 1 from 10.10.11.11, 76 ms
Reply to request 1 from 10.4.7.7, 54 ms
Reply to request 1 from 172.16.100.120, 39 ms
Reply to request 1 from 10.12.9.12, 28 ms
Reply to request 2 from 10.4.7.7, 27 ms
Reply to request 2 from 10.10.11.11, 50 ms
Reply to request 2 from 172.16.100.120, 33 ms
Reply to request 2 from 10.12.9.12, 33 ms

Same thing here; the responses from R13 and R14 stay consistent because their traffic has to flow through the RP, and through the router just above it, for the whole test.

R5#ping 224.1.2.3 repeat 3
Type escape sequence to abort.
Sending 3, 100-byte ICMP Echos to 224.1.2.3, timeout is 2 seconds:

Reply to request 0 from 10.4.8.8, 63 ms
Reply to request 0 from 10.18.13.13, 329 ms
Reply to request 0 from 10.14.19.14, 130 ms
Reply to request 1 from 10.4.8.8, 25 ms
Reply to request 1 from 10.18.13.13, 262 ms
Reply to request 1 from 10.14.19.14, 95 ms
Reply to request 2 from 10.18.13.13, 151 ms
Reply to request 2 from 10.14.19.14, 98 ms
Reply to request 2 from 10.4.8.8, 73 ms

IOS and IOS XR Multicast VPN with the Rosen Model Default MDT with BSR and AutoRP

Auto-RP is the Cisco proprietary RP distribution mechanism. It uses the same logic as BSR, or I should say, BSR uses the same logic as Auto-RP. Auto-RP leverages a mapping agent and RP candidates, much like BSR, and IOS XR supports it as well. I have configured R1 as the Auto-RP mapping agent and an RP candidate, and XR9 as another RP candidate. Since we are not running PIM dense mode anywhere, we need to add the command "ip pim autorp listener" so the two Auto-RP groups can be flooded.

Candidate RPs (C-RP):
  • Routers willing to be the RP
  • Announce themselves to the MA via 224.0.1.39
Mapping Agent (MA):
  • Decides who the RP will be from the candidate RPs
  • Informs the rest of the network about the elected RP via 224.0.1.40 (this exchange can be watched with the commands sketched below)
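A couple of commands can be used to watch this exchange on any IOS router in the core (illustrative only; I haven't captured the output here):

show ip pim autorp
show ip mroute 224.0.1.39
show ip mroute 224.0.1.40

The first confirms Auto-RP is enabled and counts the announce/discovery messages seen; the mroute entries for 224.0.1.39 and 224.0.1.40 show the C-RP announcements and the mapping agent discovery messages being flooded.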



I configured Auto RP in the SP core and left BSR configured in the customer network.

R1 is a Mapping agent and a RP candidate. XR9 is a RP candidate. 

R1

ip pim autorp listener
ip pim send-rp-announce Loopback0 scope 255
ip pim send-rp-discovery Loopback0 scope 255

XR9
router pim
 address-family ipv4
 auto-rp candidate-rp Loopback0 scope 255 group-list 224-4 interval 60

Now we can check R3 to see what the RP mapping info has to say.

R3#sh ip pim rp mapping 
PIM Group-to-RP Mappings

Group(s) 224.0.0.0/4
  RP 172.16.100.19 (?), v2
    Info source: 172.16.100.1 (?), elected via Auto-RP
         Uptime: 01:36:48, expires: 00:02:35

RP/0/0/CPU0:XR8#sh pim rp mapping 
Mon Mar  5 11:28:24.979 UTC
PIM Group-to-RP Mappings
Group(s) 224.0.0.0/4
  RP 172.16.100.19 (?), v2
    Info source: 172.16.100.1 (?), elected via autorp
      Uptime: 00:47:56, expires: 00:02:40
Group(s) 224.0.0.0/4
  RP 172.16.100.1 (?), v2
    Info source: 0.0.0.0 (?), elected via config
      Uptime: 14:53:30, expires: never

XR9 is the RP for the SP core. XR9 communicated with R1 to let R1 know XR9 is an RP candidate. R1 is the distributor of information in the SP core. 


So now we'll check R12 to see what it sees as an RP for the customer network.

R12#sh ip pim rp map
PIM Group-to-RP Mappings

Group(s) 224.0.0.0/4
  RP 172.16.100.5 (?), v2
    Info source: 172.16.100.5 (?), via bootstrap, priority 0, holdtime 150
         Uptime: 02:19:19, expires: 00:01:41

We see that R12 sees R5 as the BSR and RP candidate. Let's make sure that reachability is still there.

R5#ping 224.1.1.1
Type escape sequence to abort.
Sending 1, 100-byte ICMP Echos to 224.1.1.1, timeout is 2 seconds:

Reply to request 0 from 10.4.8.8, 51 ms
Reply to request 0 from 10.18.13.13, 374 ms
Reply to request 0 from 10.14.19.14, 227 ms
Reply to request 0 from 10.10.11.11, 195 ms
Reply to request 0 from 10.4.7.7, 138 ms
Reply to request 0 from 172.16.100.120, 127 ms
Reply to request 0 from 10.12.9.12, 119 ms

We see that when R5 pings the multicast group 224.1.1.1, all the customer routers respond, including those connected to the XR 5.3 PEs (but not those behind the XR 6.0 PEs).

IOS and IOS XR Multicast VPN with the Rosen Model Default MDT with Boot Strap Router or BSR

BSR, or Bootstrap Router, is the open standard (RFC 5059) for distributing RP information throughout a PIM-enabled multicast network. BSR messages are carried inside PIM version 2 messages wherever PIM is enabled and flowing. PIM builds the connections, router by router, that allow routed multicast to flow. We've already taken a look at static RP, which needs to be configured on every router that will participate in multicast forwarding; that requirement makes static RP challenging to maintain. The nice thing about an SP mVPN providing transport for customer multicast is that the RP distribution method chosen by the provider is independent of the customer's. The provider may choose Auto-RP and the customer could choose BSR, or vice versa. In this post we will use BSR for both the customer and provider multicast networks. For ease of migration from static RP to BSR, I have configured BSR to use the same loopback interfaces that static RP previously used: R1's Loopback0 and R5's Loopback0.

BSR and Auto-RP operate differently than static RP: both have an RP information distributor and an RP assigner. Those are terms I made up; the "ip pim bsr-candidate loopback 0" command is the information distributor and the "ip pim rp-candidate loopback 0" command defines the RP itself. This means that many different routers could be the RP. Typically an RP filter is used, where one RP services the first half of the 224/4 range and another the back half (a hedged sketch of that follows below); we won't be testing that in this post. The BSR candidate and RP candidate can be the same router, which is what I configured here.
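For reference, splitting the 224/4 range between two candidate RPs can be done with a group-list on the rp-candidate command. A minimal sketch, assuming IOS/IOS XE and a numbered ACL of my own choosing (none of this is configured in this lab):

! on the RP covering the first half of the range (224.0.0.0 - 231.255.255.255)
access-list 10 permit 224.0.0.0 7.255.255.255
ip pim rp-candidate Loopback0 group-list 10
!
! on the RP covering the back half (232.0.0.0 - 239.255.255.255)
access-list 10 permit 232.0.0.0 7.255.255.255
ip pim rp-candidate Loopback0 group-list 10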



R5 - Customer RP
ip pim bsr-candidate Loopback0 0
ip pim rp-candidate Loopback0


R1 - Service Provider RP
ip pim bsr-candidate Loopback0 0
ip pim rp-candidate Loopback0


Not configured in our topology or scenario but worth mentioning:
IOS XR BSR Configuration
router pim
 address-family ipv4
  bsr candidate-bsr 172.16.100.19 hash-mask-len 30 priority 1
  bsr candidate-rp 172.16.100.19 priority 192 interval 60


R4 - PE Router servicing R7 and R8
R4#sh ip pim rp mapping
PIM Group-to-RP Mappings

Group(s) 224.0.0.0/4
  RP 172.16.100.1 (?), v2
    Info source: 172.16.100.1 (?), via bootstrap, priority 0, holdtime 150
         Uptime: 00:30:52, expires: 00:01:39


XR9 - PE Router servicing R14
RP/0/0/CPU0:XR9#sh pim rp mapping
Mon Mar  5 09:31:40.742 UTC
PIM Group-to-RP Mappings
Group(s) 224.0.0.0/4
  RP 172.16.100.1 (?), v2
    Info source: 10.11.19.11 (?), elected via bsr, priority 0, holdtime 150
      Uptime: 00:31:39, expires: 00:01:56


Customer multicast receiver
R12#sh ip pim rp mapping
PIM Group-to-RP Mappings

Group(s) 224.0.0.0/4
  RP 172.16.100.5 (?), v2
    Info source: 172.16.100.5 (?), via bootstrap, priority 0, holdtime 150
         Uptime: 00:31:41, expires: 00:02:01

Now that we have configured BSR and verified that the RP information has been distributed, we can test multicast.

R5#ping 224.1.1.1
Type escape sequence to abort.
Sending 1, 100-byte ICMP Echos to 224.1.1.1, timeout is 2 seconds:

Reply to request 0 from 10.10.11.11, 59 ms
Reply to request 0 from 10.18.13.13, 321 ms
Reply to request 0 from 10.14.19.14, 257 ms
Reply to request 0 from 10.4.7.7, 193 ms
Reply to request 0 from 172.16.100.120, 130 ms
Reply to request 0 from 10.12.9.12, 118 ms
Reply to request 0 from 10.4.8.8, 79 ms

We can see that it does work using the BSR configuration.

Sunday, March 4, 2018

IOS and IOS XR Multicast VPN with the Rosen Model Default MDT with Static RP

In this post we will follow on from the previous post, where we looked at Multicast VPN on just IOS, and expand on it. Just a couple of things before we get into the fun stuff. The topology has grown significantly since the last post; this was done because the original two IOS XR PE routers running IOS XR 6.0 code don't appear to support multicast in the data plane. I have attempted mVPN on both the IOS XR 6.0 OVA and .VMDK variations (the same .VMDK used for VIRL, which is where I downloaded it from). I ended up using the IOS XR 5.3 OVA, which is the only version I tested that supports the data plane. I have not tested any other versions, so you may get lucky if you find one that works. However, for those with an interest in the CCIE SPv4 lab exam, IOS XR 6.0 is the version listed for use in the lab exam.

So like the previous post, this is the MDT default, which means that, like PIM dense mode, every PE gets the traffic even if the traffic isn't destined for it.

XR9 is the bottom left router, connected to R14 and XR1. IOS XR uses two different constructs to enable mVPN, and it supports both enterprise and service provider multicast. We'll configure multicast on IOS XR and also show the configuration of R3, just to have IOS and IOS XR in the same post. R5 is the multicast source; a ping to 224.1.1.1 from there should get responses from six receivers.

PIM, or Protocol Independent Multicast, is used to build the PIM trees between all the routers. Without PIM enabled, routed multicast can't flow between the routers and is restricted to a TTL of 1, i.e. link-local multicast. The PIM construct is used to define the RP address, where to source PIM traffic from, and VRF-specific information. The RP address of 172.16.100.1 is the RP of the SP, and the VRF RP address is the RP of the customer.

The multicast-routing construct enables multicast to be forwarded. The MDT, or multicast distribution tree, is used to build the PIM tunnels. To avoid RPF issues, it is a best practice to enable multicast on all interfaces enabled for the IGP. Specify the MDT source; since this is a PE router, the MDT source should be the loopback that is also the LDP and BGP source. Under the VRF, specify the MDT default multicast group address.

XR9
multicast-routing
 address-family ipv4
  mdt source Loopback0
  interface all enable
 !
 vrf MCAST
  address-family ipv4
   mdt source Loopback0
   interface all enable
   mdt default ipv4 232.0.0.1
!
router pim
 address-family ipv4
  rp-address 172.16.100.1
  interface Loopback0
 !
 vrf MCAST
  address-family ipv4
   rp-address 172.16.100.5
   interface GigabitEthernet0/0/0/0.1419
!
router bgp 1
 address-family ipv4 unicast
 !
 address-family vpnv4 unicast
 !
 address-family ipv6 unicast
 !
 address-family vpnv6 unicast
 !
 address-family ipv4 mdt
 !
 neighbor 172.16.100.1
  remote-as 1
  update-source Loopback0
  address-family vpnv4 unicast
  !
  address-family vpnv6 unicast
  !
  address-family ipv4 mdt
  !
 !
 vrf MCAST
  rd 1:1
  address-family ipv4 unicast
  !
  address-family ipv6 unicast


RP/0/0/CPU0:XR9#sh pim neighbor
Sun Mar  4 23:56:32.026 UTC

Neighbor Address             Interface              Uptime    Expires  DR pri   Flags

10.11.19.11                  GigabitEthernet0/0/0/0.1119 01:38:18  00:01:31 1      B
10.11.19.19*                 GigabitEthernet0/0/0/0.1119 01:38:23  00:01:29 1 (DR) B P E
172.16.100.19*               Loopback0              01:38:23  00:01:21 1 (DR) B P

This validates that there are PIM neighbors; in this case it is XR1.

RP/0/0/CPU0:XR9#sh pim vrf MCAST neighbor
Sun Mar  4 23:56:41.246 UTC

Neighbor Address             Interface              Uptime    Expires  DR pri   Flags

10.14.19.14                  GigabitEthernet0/0/0/0.1419 01:36:38  00:01:36 1      P
10.14.19.19*                 GigabitEthernet0/0/0/0.1419 01:38:32  00:01:20 1 (DR) B P E
172.16.100.3                 mdtMCAST               01:29:31  00:01:18 1      P
172.16.100.9                 mdtMCAST               01:29:58  00:01:19 1      P
172.16.100.18                mdtMCAST               01:29:42  00:01:31 1   
172.16.100.19*               mdtMCAST               01:38:28  00:01:34 1      P
172.16.100.110               mdtMCAST               01:29:57  00:01:21 1 (DR) P

There are also VRF MCAST PIM neighbors: one to R14 on G0/0/0/0.1419, while the others are MDT peers learned over the PIM tunnels, which are mGRE tunnels.
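A couple of extra IOS XR commands are useful here if you want to poke further (illustrative only; output not shown):

show pim vrf MCAST interface
show mrib vrf MCAST route 224.1.1.1

The first lists the VRF's PIM-enabled interfaces, including the MDT tunnel interface, and the second shows the customer mroute with that MDT interface in the outgoing list once traffic is flowing.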

RP/0/0/CPU0:XR9#sh bgp ipv4 mdt | b Network
Sun Mar  4 23:57:28.713 UTC
   Network            Next Hop            Metric LocPrf Weight Path
Route Distinguisher: 1:1
*>i172.16.100.3/96    172.16.100.3             0    100      0 ?
*>i172.16.100.4/96    172.16.100.4             0    100      0 ?
*>i172.16.100.9/96    172.16.100.9             0    100      0 ?
*>i172.16.100.14/96   172.16.100.14                 100      0 i
*>i172.16.100.18/96   172.16.100.18                 100      0 i
*> 172.16.100.19/96   0.0.0.0                                0 i
*>i172.16.100.110/96  172.16.100.110           0    100      0 ?

Processed 7 prefixes, 7 paths


R14#sh ip pim neighbor | b Address
Address                                                            Prio/Mode
10.14.19.19       GigabitEthernet1.1419    01:43:29/00:01:28 v2    1 / DR P G



R3
ip multicast-routing distributed
!
ip multicast-routing vrf MCAST distributed
!
interface GigabitEthernet1.13
 ip pim sparse-mode
!
interface Loopback0
 ip pim sparse-mode
!
interface GigabitEthernet1.35
 vrf forwarding MCAST
 ip pim sparse-mode
!
ip pim rp-address 172.16.100.1
!
ip pim vrf MCAST rp-address 172.16.100.5
!
router bgp 1
 bgp log-neighbor-changes
 no bgp default ipv4-unicast
 neighbor 172.16.100.1 remote-as 1
 neighbor 172.16.100.1 update-source Loopback0
!
 address-family ipv4 mdt
  neighbor 172.16.100.1 activate
  neighbor 172.16.100.1 send-community both


R3#sh bgp ipv4 mdt all 
BGP table version is 25, local router ID is 172.16.100.3
Status codes: s suppressed, d damped, h history, * valid, > best, i - internal, 
              r RIB-failure, S Stale, m multipath, b backup-path, f RT-Filter, 
              x best-external, a additional-path, c RIB-compressed, 
Origin codes: i - IGP, e - EGP, ? - incomplete
RPKI validation codes: V valid, I invalid, N Not found

     Network          Next Hop            Metric LocPrf Weight Path
Route Distinguisher: 1:1 (default for vrf MCAST)
 *>  172.16.100.3/32  0.0.0.0                                0 ?
 *>i 172.16.100.4/32  172.16.100.4             0    100      0 ?
 * i                  172.16.100.4             0    100      0 ?
 * i 172.16.100.9/32  172.16.100.9             0    100      0 ?
 *>i                  172.16.100.9             0    100      0 ?
 *>i 172.16.100.13/32 172.16.100.13                 100      0 i
 *>i 172.16.100.14/32 172.16.100.14                 100      0 i
 *>i 172.16.100.18/32 172.16.100.18                 100      0 i
 * i                  172.16.100.18                 100      0 i
 *>i 172.16.100.19/32 172.16.100.19                 100      0 i
 *>i 172.16.100.110/32

                       172.16.100.110           0    100      0 ?


R1
router bgp 1
 bgp log-neighbor-changes
 no bgp default ipv4-unicast
 neighbor MCAST peer-group
 neighbor MCAST remote-as 1
 neighbor MCAST update-source Loopback0
 neighbor 172.16.100.19 peer-group MCAST
!
address-family ipv4 mdt
  neighbor MCAST route-reflector-client
  neighbor 172.16.100.19 activate


R1#sh bgp ipv4 mdt all | b Network
     Network          Next Hop            Metric LocPrf Weight Path
Route Distinguisher: 1:1
 *>i 172.16.100.3/32  172.16.100.3             0    100      0 ?
 *>i 172.16.100.4/32  172.16.100.4             0    100      0 ?
 *>i 172.16.100.9/32  172.16.100.9             0    100      0 ?
 *>i 172.16.100.14/32 172.16.100.14                 100      0 i
 *>i 172.16.100.18/32 172.16.100.18                 100      0 i
 *>i 172.16.100.19/32 172.16.100.19                 100      0 i
 *>i 172.16.100.110/32
                       172.16.100.110           0    100      0 ?

We can see that R1, the BGP Route Reflector, has formed MDT peerings with all the PE routers.


R5
ip multicast-routing distributed
!
ip pim rp-address 172.16.100.5
!
interface GigabitEthernet1.35
 ip pim dr-priority 0
 ip pim sparse-mode
!
interface Loopback0
 ip pim sparse-mode



R5#ping 224.1.1.1 repeat 2
Type escape sequence to abort.
Sending 2, 100-byte ICMP Echos to 224.1.1.1, timeout is 2 seconds:

Reply to request 0 from 10.10.11.11, 64 ms
Reply to request 0 from 10.18.13.13, 318 ms
Reply to request 0 from 10.4.7.7, 263 ms
Reply to request 0 from 10.4.8.8, 246 ms
Reply to request 0 from 10.14.19.14, 229 ms
Reply to request 0 from 172.16.100.120, 188 ms
Reply to request 0 from 10.12.9.12, 64 ms

Saturday, March 3, 2018

IOS Multicast VPN with the Rosen Model Default MDT with Static RP

In the topology below, only the IOS routers in the SP core are leveraged. I am systematically testing out mVPN for all profiles on IOS and then XR. The idea is to keep the multicast configuration rather simple and easy to verify and troubleshoot. The lab I have built allows the flexibility of adding and removing interfaces as I choose.

So the topology uses R3 and R4 as the PE routers and R1 and R2 as the provider core routers. R1 is a BGP route reflector for all applicable address families. Not shown here is the MPLS L3 VPN configuration: all the routers in the SP core are running IS-IS with LDP, with MP-BGP from all PEs back to R1. PE-CE routing is accomplished with the PEs redistributing the connected PE-CE link via a route-map or RPL and the CE using an IPv4 or IPv6 default route; a hedged sketch of that piece follows below.
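Since that piece isn't shown, here is a minimal sketch of what it could look like on the IOS XE side, assuming the MCAST VRF and the R3-R5 link; the route-map name and the static default on the CE are my own illustration, not the exact lab configuration:

R3 - PE side of the sketch
router bgp 1
 address-family ipv4 vrf MCAST
  ! advertise the connected PE-CE subnet into the VPN
  redistribute connected route-map CONNECTED-TO-BGP
!
route-map CONNECTED-TO-BGP permit 10
 match interface GigabitEthernet1.35

R5 - CE side of the sketch
ip route 0.0.0.0 0.0.0.0 10.3.5.3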

The goal we are trying to achieve: R5 is the sender for the 224.1.1.1 multicast group, and R7 and R8 have joined 224.1.1.1 as receivers. After we configure the core, the CEs, and PE-CE connectivity, a ping from R5 to 224.1.1.1 should get responses from both R7 and R8.

Multicast VPN, if you have never played with it before, is daunting to get working. The reason is that the provider runs its own multicast infrastructure and the customer runs its own multicast infrastructure. The customer could just use GRE tunneling techniques, P2P or DMVPN, and achieve multicast reachability that way, but MPLS L3 VPN can also be used to route multicast traffic.

There are 5 major sections to get this working.
1. Configuring the SP core on the P and PE routers
2. Configuring the PEs for the MDT
3. Configuring PE-CE connections to be multicast aware
4. Configuring the customer's equipment to support multicast
5. Test reachability


---------------------Configuring the SP core on the P and PE routers---------------------
The transport in the core is native PIM sparse mode; the customer multicast is encapsulated and carried over the provider's own trees.

R1 - Provider core router
ip multicast-routing distributed
!
interface Loopback0
 ip pim sparse-mode
!
interface GigabitEthernet1.12
 ip pim sparse-mode
!
interface GigabitEthernet1.13
 ip pim sparse-mode


R1#sh ip pim interface

Address          Interface                Ver/   Nbr    Query  DR         DR
                                          Mode   Count  Intvl  Prior
10.1.3.1         GigabitEthernet1.13      v2/S   1      30     1          10.1.3.3
10.1.2.1         GigabitEthernet1.12      v2/S   1      30     1          10.1.2.2
172.16.100.1     Loopback0                v2/S   0      30     1          172.16.100.1


R1#sh ip pim neighbor | b Address
Address                                                            Prio/Mode
10.1.3.3          GigabitEthernet1.13      04:02:48/00:01:41 v2    1 / DR S P G
10.1.2.2          GigabitEthernet1.12      04:02:19/00:01:23 v2    1 / DR S P G



R2 - Provider core router
ip multicast-routing distributed
!
interface GigabitEthernet1.12
 ip pim sparse-mode
!
interface GigabitEthernet1.24
 ip pim sparse-mode


R2#sh ip pim interface 

Address          Interface                Ver/   Nbr    Query  DR         DR
                                          Mode   Count  Intvl  Prior
10.1.2.2         GigabitEthernet1.12      v2/S   1      30     1          10.1.2.2
10.2.4.2         GigabitEthernet1.24      v2/S   1      30     1          10.2.4.4


R2#sh ip pim neighbor | b Address
Address                                                            Prio/Mode
10.1.2.1          GigabitEthernet1.12      04:03:44/00:01:34 v2    1 / S P G
10.2.4.4          GigabitEthernet1.24      03:36:17/00:01:28 v2    1 / DR S P G


R3 - Provider Edge router
ip multicast-routing distributed
!
interface Loopback0
 ip pim sparse-mode
!
interface GigabitEthernet1.13
 ip pim sparse-mode


R3#sh ip pim interface 

Address          Interface                Ver/   Nbr    Query  DR         DR
                                          Mode   Count  Intvl  Prior
10.1.3.3         GigabitEthernet1.13      v2/S   1      30     1          10.1.3.3
172.16.100.3     Loopback0                v2/S   0      30     1          172.16.100.3


R3#sh ip pim neighbor | b Address
Address                                                            Prio/Mode
10.1.3.1          GigabitEthernet1.13      04:04:48/00:01:27 v2    1 / S P G


R4 - Provider Edge router
ip multicast-routing distributed
!
interface Loopback0
 ip pim sparse-mode
!
interface GigabitEthernet1.24
 ip pim sparse-mode


R4#sh ip pim interface 

Address          Interface                Ver/   Nbr    Query  DR         DR
                                          Mode   Count  Intvl  Prior
10.2.4.4         GigabitEthernet1.24      v2/S   1      30     1          10.2.4.4
172.16.100.4     Loopback0                v2/S   0      30     1          172.16.100.4


R4#sh ip pim neighbor | b Address
Address                                                            Prio/Mode
10.2.4.2          GigabitEthernet1.24      03:37:49/00:01:30 v2    1 / S P G




---------------------Configuring PE-CE connections to be multicast aware---------------------

R3
ip multicast-routing vrf MCAST distributed
!
interface GigabitEthernet1.35
 vrf forwarding MCAST
!
ip pim vrf MCAST rp-address 172.16.100.5


R3#sh ip pim vrf MCAST interface 

Address          Interface                Ver/   Nbr    Query  DR         DR
                                          Mode   Count  Intvl  Prior
10.3.5.3         GigabitEthernet1.35      v2/S   1      30     1          10.3.5.3
172.16.100.3     Tunnel0                  v2/S   1      30     1          172.16.100.4


R3#sh ip pim vrf MCAST neighbor | b Address
Address                                                            Prio/Mode
10.3.5.5          GigabitEthernet1.35      04:10:48/00:01:40 v2    0 / S P G
172.16.100.4      Tunnel0                  02:13:10/00:01:29 v2    1 / DR S P G


R3#sh ip pim vrf MCAST rp mapping
Group(s): 224.0.0.0/4, Static
    RP: 172.16.100.5 (?)


R4
ip multicast-routing vrf MCAST distributed
!
interface GigabitEthernet1.48
 vrf forwarding MCAST
!
interface GigabitEthernet1.47
 vrf forwarding MCAST
!
ip pim vrf MCAST rp-address 172.16.100.5


R4#sh ip pim vrf MCAST interface 

Address          Interface                Ver/   Nbr    Query  DR         DR
                                          Mode   Count  Intvl  Prior
10.4.8.4         GigabitEthernet1.48      v2/S   1      30     1          10.4.8.4
172.16.100.4     Tunnel0                  v2/S   1      30     1          172.16.100.4
10.4.7.4         GigabitEthernet1.47      v2/S   1      30     1          10.4.7.4


R4#sh ip pim vrf MCAST neighbor | b Address
Address                                                            Prio/Mode
10.4.8.8          GigabitEthernet1.48      04:12:07/00:01:22 v2    0 / S P G
172.16.100.3      Tunnel0                  02:14:06/00:01:32 v2    1 / S P G
10.4.7.7          GigabitEthernet1.47      02:00:00/00:01:19 v2    0 / S P G


R4#sh ip pim vrf MCAST rp mapping
Group(s): 224.0.0.0/4, Static
    RP: 172.16.100.5 (?)


---------------------Configuring the customer's equipment to support multicast---------------------


R5
ip multicast-routing distributed
!
interface Loopback0
 ip pim sparse-mode
!
interface GigabitEthernet1.35
 ip pim dr-priority 0
 ip pim sparse-mode
!
ip pim rp-address 172.16.100.5



R8
ip multicast-routing distributed
!
interface G1.48
 ip pim dr-priority 0
 ip pim sparse-mode
 ip igmp join-group 224.1.1.1
!
ip pim rp-address 172.16.100.5

The "dr-priority 0" command is used on any link between 2 PIM neighbors where the PIM neighbor that is forwarding away from the RP has become the DR. The DR should always be in the forwarding direction towards the RP. If that is not the case, issue can occur. 


R8#sh ip pim rp map
Group(s): 224.0.0.0/4, Static
    RP: 172.16.100.5 (?)

R7
ip multicast-routing distributed
!
interface G1.47
 ip pim dr-priority 0
 ip pim sparse-mode
 ip igmp join-group 224.1.1.1
!
ip pim rp-address 172.16.100.5


R7#sh ip pim rp mapping
Group(s): 224.0.0.0/4, Static
    RP: 172.16.100.5 (?)



---------------------Configuring the PEs for the IPv4 MDT---------------------
This builds the multicast distribution trees between the PEs and, in this case, the RR. Once completed, R3's (172.16.100.3, 232.0.0.1) and R4's (172.16.100.4, 232.0.0.1) entries should both be visible in the "ip mroute" table. These entries live in the global MRIB and show all the multicast endpoints the provider can send traffic to. R1 is the route reflector, so nothing significant shows up there. R3 and R4 are the PEs; looking at them we can see the "Z" flag indicating a multicast tunnel.

R1 - BGP Route Reflector
router bgp 1
address-family ipv4 mdt
  neighbor MCAST route-reflector-client
  neighbor 172.16.100.3 activate
  neighbor 172.16.100.4 activate
  neighbor 172.16.100.13 activate
  neighbor 172.16.100.14 activate


R1#sh bgp ipv4 mdt all | b Network
     Network          Next Hop            Metric LocPrf Weight Path
Route Distinguisher: 1:1
 *>i 172.16.100.3/32  172.16.100.3             0    100      0 ?
 *>i 172.16.100.4/32  172.16.100.4             0    100      0 ?

R1#sh ip mroute | b \(
(*, 232.0.0.1), 02:08:30/00:03:13, RP 172.16.100.1, flags: S
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    GigabitEthernet1.13, Forward/Sparse, 02:08:21/00:03:13

(172.16.100.3, 232.0.0.1), 02:08:17/00:01:35, flags: T
  Incoming interface: GigabitEthernet1.13, RPF nbr 10.1.3.3
  Outgoing interface list:
    GigabitEthernet1.12, Forward/Sparse, 02:08:17/00:03:08

(172.16.100.4, 232.0.0.1), 02:08:28/00:02:20, flags: T
  Incoming interface: GigabitEthernet1.12, RPF nbr 10.1.2.2
  Outgoing interface list:
    GigabitEthernet1.13, Forward/Sparse, 02:08:21/00:03:15

(*, 224.0.1.40), 02:08:43/00:02:14, RP 172.16.100.1, flags: SJCL
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    GigabitEthernet1.13, Forward/Sparse, 02:08:36/00:02:40




R3 - PE
router bgp 1
address-family ipv4 mdt
  neighbor 172.16.100.1 activate
  neighbor 172.16.100.1 send-community both


R3#sh bgp ipv4 mdt all | b Network
     Network          Next Hop            Metric LocPrf Weight Path
Route Distinguisher: 1:1 (default for vrf MCAST)
 *>  172.16.100.3/32  0.0.0.0                                0 ?
 *>i 172.16.100.4/32  172.16.100.4             0    100      0 ?

R3#sh ip mroute | b \(
(*, 232.0.0.1), 03:37:54/stopped, RP 172.16.100.1, flags: SJCFZ
  Incoming interface: GigabitEthernet1.13, RPF nbr 10.1.3.1
  Outgoing interface list:
    MVRF MCAST, Forward/Sparse, 03:37:52/00:00:46

(172.16.100.4, 232.0.0.1), 02:07:16/00:01:18, flags: JTZ
  Incoming interface: GigabitEthernet1.13, RPF nbr 10.1.3.1
  Outgoing interface list:
    MVRF MCAST, Forward/Sparse, 02:07:16/00:00:46

(172.16.100.3, 232.0.0.1), 03:37:54/00:03:25, flags: FT
  Incoming interface: Loopback0, RPF nbr 0.0.0.0
  Outgoing interface list:
    GigabitEthernet1.13, Forward/Sparse, 02:07:16/00:03:06

(*, 224.0.1.40), 04:09:06/00:02:08, RP 172.16.100.1, flags: SJPCL
  Incoming interface: GigabitEthernet1.13, RPF nbr 10.1.3.1
  Outgoing interface list: Null


R3#sh ip pim vrf MCAST neighbor | b ^Neighbor
Neighbor          Interface                Uptime/Expires    Ver   DR
Address                                                            Prio/Mode
10.3.5.5          GigabitEthernet1.35      13:55:17/00:01:19 v2    0 / S P G
172.16.100.4      Tunnel0                  11:57:39/00:01:40 v2    1 / DR S P G

This is the connection point between the provider and customer for vrf MCAST, showing that there is a PIM neighbor down to the customer and a tunnel built to the other PE or R4. The tunnel is a multipoint GRE tunnel, allowing many PEs to connect to many other PEs enabled for IPv4 MDT.



R3#show derived-config interface tunnel0
interface Tunnel0
 ip unnumbered Loopback0
 no ip redirects
 ip mtu 1500
 tunnel source Loopback0
 tunnel mode gre multipoint

This output shows that the tunnel is in fact a GRE tunnel. We didn't configure it; it is implemented automatically when MP-BGP IPv4 MDT is enabled.
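If you want to confirm where the tunnel endpoints came from, the MDT information that BGP learned can be checked directly on the PE (shown for illustration; output not captured here):

show ip pim mdt bgp

This lists the MDT group and source addresses advertised by the other PEs via the IPv4 MDT address family, which is what the multipoint GRE tunnel uses to discover its peers.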




R4 - PE
router bgp 1
address-family ipv4 mdt
  neighbor 172.16.100.1 activate
  neighbor 172.16.100.1 send-community both


R4#sh bgp ipv4 mdt all | b Network
     Network          Next Hop            Metric LocPrf Weight Path
Route Distinguisher: 1:1 (default for vrf MCAST)
 *>i 172.16.100.3/32  172.16.100.3             0    100      0 ?
 *>  172.16.100.4/32  0.0.0.0                                0 ?

R4#sh ip mroute | b \(
(*, 232.0.0.1), 03:38:43/stopped, RP 172.16.100.1, flags: SJCFZ
  Incoming interface: GigabitEthernet1.24, RPF nbr 10.2.4.2
  Outgoing interface list:
    MVRF MCAST, Forward/Sparse, 03:38:42/stopped

(172.16.100.3, 232.0.0.1), 02:07:58/00:02:29, flags: JTZ
  Incoming interface: GigabitEthernet1.24, RPF nbr 10.2.4.2
  Outgoing interface list:
    MVRF MCAST, Forward/Sparse, 02:07:58/stopped

(172.16.100.4, 232.0.0.1), 03:38:43/00:02:39, flags: FT
  Incoming interface: Loopback0, RPF nbr 0.0.0.0
  Outgoing interface list:
    GigabitEthernet1.24, Forward/Sparse, 02:08:09/00:03:24

(*, 224.0.1.40), 04:08:08/00:02:20, RP 172.16.100.1, flags: SJPCL
  Incoming interface: GigabitEthernet1.24, RPF nbr 10.2.4.2
  Outgoing interface list: Null


R4#sh derived-config interface tun0
interface Tunnel0
 ip unnumbered Loopback0
 no ip redirects
 ip mtu 1500
 tunnel source Loopback0
 tunnel mode gre multipoint


R4#sh ip pim vrf MCAST neighbor | b ^Neighbor
Neighbor          Interface                Uptime/Expires    Ver   DR
Address                                                            Prio/Mode
10.4.8.8          GigabitEthernet1.48      14:02:08/00:01:23 v2    0 / S P G
172.16.100.3      Tunnel0                  12:04:08/00:01:36 v2    1 / S P G
10.4.7.7          GigabitEthernet1.47      11:50:02/00:01:26 v2    0 / S P G



---------------------Test reachability---------------------


R5#ping 224.1.1.1 repeat 2
Type escape sequence to abort.
Sending 2, 100-byte ICMP Echos to 224.1.1.1, timeout is 2 seconds:

Reply to request 0 from 10.4.8.8, 43 ms
Reply to request 0 from 10.4.8.8, 51 ms
Reply to request 0 from 10.4.7.7, 46 ms
Reply to request 0 from 10.4.7.7, 43 ms
Reply to request 1 from 10.4.7.7, 13 ms
Reply to request 1 from 10.4.8.8, 13 ms
Reply to request 1 from 10.4.8.8, 13 ms
Reply to request 1 from 10.4.7.7, 13 ms