Monday, March 5, 2018

IOS and IOS XR Multicast VPN with the Rosen Model: Default MDT and Data MDT with BSR and Auto-RP

The default MDT is used for control-plane communication between the PEs: PIM hellos, joins, prunes, and other messages. It is the equivalent of PIM dense mode in that all PEs receive the traffic.
The data MDT is primarily used for bandwidth optimization: it allows additional groups to be created so that not every PE has to receive and forward the traffic. When customer routers send C-PIM join messages into the provider network, the PEs take that information and forward it toward the provider's RP; the customer also has an RP configured, and that information is passed over the provider network as well. The data MDT is not a required configuration if the network only carries low-bandwidth flows; if high-bandwidth flows are present, then moving from the shared tree to the shortest path tree is necessary to take the optimal path through the network.
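For reference, the IOS/IOS XE syntax for both trees lives under the VRF address-family; a minimal sketch, where the VRF name, group addresses, and threshold value (in kbps, the rate at which a flow is moved from the default MDT onto a data MDT) are illustrative:

vrf definition EXAMPLE
 address-family ipv4
  ! one default MDT group per VPN, joined by every PE
  mdt default 232.0.0.1
  ! pool of data MDT groups, with an optional switchover threshold
  mdt data 232.10.10.0 0.0.0.255 threshold 2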
On IOS and IOS XE, the command "ip pim ssm default" is needed. It enables SSM for the default 232.0.0.0/8 range, which is used to create (S, G) entries in the MRIB for optimized communication.
IOS XR listens on the default SSM range (232.0.0.0/8) out of the box, so no equivalent configuration is needed there.
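If the MDT groups could not be renumbered into 232.0.0.0/8, the SSM range itself can be extended instead. A hedged sketch for IOS/IOS XE, with an illustrative ACL name and extra range (on IOS XR, if I recall correctly, the equivalent is the ssm range command under the multicast-routing address-family):

! standard ACL defining which groups are treated as SSM
ip access-list standard SSM-RANGE
 permit 232.0.0.0 0.255.255.255
 permit 239.1.0.0 0.0.255.255
! replaces the default behavior of 232.0.0.0/8 only
ip pim ssm range SSM-RANGE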
All the IOS PEs need their mdt default configuration updated to add the mdt data capability, along with the ip pim ssm default command. In our case the default MDT already uses 232.0.0.1, which falls in the SSM range, so the address does not have to change; if it were 239.0.0.1, we would need to move it into the 232.0.0.0/8 block. Since the BGP IPv4 MDT address-family is already enabled, we don't need to add that in. We will use the 232.10.10.0/24 range for the dynamically created data MDT groups.
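For completeness, the BGP piece that is already in place looks roughly like this on an IOS/IOS XE PE; the AS number and neighbor address here are hypothetical:

router bgp 65000
 neighbor 10.0.0.9 remote-as 65000
 ! the MDT SAFI carries each PE's loopback-to-default-MDT-group binding
 address-family ipv4 mdt
  neighbor 10.0.0.9 activate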
All IOS XE routers in the SP core:

ip pim ssm default



All IOS XE routers that are PEs in the SP:

vrf definition MCAST
 address-family ipv4
  mdt default 232.0.0.1
  mdt data 232.10.10.0 0.0.0.255
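Once this is in, the MDTs can be checked from an IOS PE; a few useful commands, using the VRF name from above:

! MDT group-to-PE bindings learned via the BGP IPv4 MDT address-family
show ip pim mdt bgp
! data MDT groups this PE is currently sourcing
show ip pim vrf MCAST mdt send
! data MDT groups this PE has joined
show ip pim vrf MCAST mdt receive detail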



All IOS XR PEs in the SP, running XR 5.3 and 6.0 (XR 5.3 already supports the data MDT):

XR8

multicast-routing
 address-family ipv4
  mdt source Loopback0
  interface all enable
 !
 vrf MCAST
  address-family ipv4
   mdt source Loopback0
   interface all enable
   mdt default ipv4 232.0.0.1
   mdt data 232.18.18.0/24
!

XR9

multicast-routing
 address-family ipv4
  mdt source Loopback0
  interface all enable
 !
 vrf MCAST
  address-family ipv4
   mdt source Loopback0
   interface all enable
   mdt default ipv4 232.0.0.1
   mdt data 232.19.19.0/24
!

XR3

multicast-routing
 address-family ipv4
  mdt source Loopback0
  interface all enable
 !
 vrf MCAST
  address-family ipv4
   mdt source Loopback0
   interface all enable
   mdt default ipv4 232.0.0.1
   mdt data 232.13.13.0/24
!
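On the XR PEs, the data MDT cache and the VRF multicast routing table can be checked with the following, using the VRF name from above:

! data MDT groups cached on this PE
show pim vrf MCAST mdt cache
! customer multicast routes inside the VRF
show mrib vrf MCAST route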



Now that we have configured all the routers in the SP core, we need to go around and test the functionality. We'll add another multicast group by changing the join group from 224.1.1.1 to 224.1.2.3 on a handful of routers. The idea is that PIM control-plane messages are sent via the MDT default tree while the actual multicast flows use the MDT data tree. Depending on the placement of the RP, which in this case is XR9, the traffic should be moved from the shared tree to the shortest path tree. In other words, the traffic should not flow through the RP for the entire flow; rather, it should be pruned off the shared tree and joined onto the shortest path tree.
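On an IOS receiver, changing the join is a one-liner on the LAN-facing interface; a sketch with a hypothetical interface name:

interface GigabitEthernet0/1
 no ip igmp join-group 224.1.1.1
 ip igmp join-group 224.1.2.3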



This is accomplished by a PIM prune on the shared tree and a PIM join on the shortest path tree. During the switchover, a slight increase in latency is expected. Not all flows will be switched over: since the RP is XR9, traffic from R14 will never be switched over, as its path has to pass through the RP to reach R5 anyway. R5 is the source and R14 is a receiver. Because of this, when R5 pings 224.1.2.3, the ping goes out to all PEs that have joined the appropriate groups.
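The switchover behavior is controlled by the SPT threshold on the last-hop routers. On IOS the default is 0 kbps, meaning the router joins the SPT as soon as the first packet arrives down the shared tree; it can also be pinned to the shared tree instead, as in this sketch:

! default behavior: switch to the SPT immediately (0 kbps)
ip pim spt-threshold 0
! alternative: never leave the shared tree
ip pim spt-threshold infinity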



Typically the SPT switchover happens during the second ping for each receiver, indicated by request sequence 1. Here all of the initial response times drop after the second ping, showing that a switchover occurred.



R5#ping 224.1.1.1 repeat 3
Type escape sequence to abort.
Sending 3, 100-byte ICMP Echos to 224.1.1.1, timeout is 2 seconds:

Reply to request 0 from 172.16.100.120, 147 ms
Reply to request 0 from 10.12.9.12, 139 ms
Reply to request 0 from 10.4.7.7, 104 ms
Reply to request 0 from 10.10.11.11, 93 ms
Reply to request 1 from 10.4.7.7, 108 ms
Reply to request 1 from 10.10.11.11, 76 ms
Reply to request 1 from 10.4.7.7, 54 ms
Reply to request 1 from 172.16.100.120, 39 ms
Reply to request 1 from 10.12.9.12, 28 ms
Reply to request 2 from 10.4.7.7, 27 ms
Reply to request 2 from 10.10.11.11, 50 ms
Reply to request 2 from 172.16.100.120, 33 ms
Reply to request 2 from 10.12.9.12, 33 ms
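To confirm the switchover on a receiver-side router, the mroute entry can be inspected; in IOS output, the T flag on the (S, G) entry indicates the flow is on the shortest path tree:

show ip mroute 224.1.1.1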

The same thing happens here: the responses from R13 and R14 stay consistent because those flows pass through the RP, and through the router just above the RP, on both trees.

R5#ping 224.1.2.3 repeat 3
Type escape sequence to abort.
Sending 3, 100-byte ICMP Echos to 224.1.2.3, timeout is 2 seconds:

Reply to request 0 from 10.4.8.8, 63 ms
Reply to request 0 from 10.18.13.13, 329 ms
Reply to request 0 from 10.14.19.14, 130 ms
Reply to request 1 from 10.4.8.8, 25 ms
Reply to request 1 from 10.18.13.13, 262 ms
Reply to request 1 from 10.14.19.14, 95 ms
Reply to request 2 from 10.18.13.13, 151 ms
Reply to request 2 from 10.14.19.14, 98 ms
Reply to request 2 from 10.4.8.8, 73 ms
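mtrace can also be used to walk the reverse path a flow takes through the customer network; a sketch with hypothetical source and receiver addresses:

! trace the multicast path from the source toward a receiver for the group
mtrace 10.5.5.5 10.14.19.14 224.1.2.3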
