Friday, January 27, 2017

CCIE SPv4 - MPLS Traffic Engineering - TE Attributes - Bandwidth

Software versions:
IOS XE 15.5
IOS XR 5.3

The topology for this demo:
In this post we will be looking at bandwidth as a TE attribute. This is the most common attribute I have seen deployed, and it makes the most sense: most customers want to make sure the applications riding over the MPLS core have enough bandwidth to meet SLAs. There are two pieces to it, the global bandwidth pool and the sub-pool. We'll examine the global pool first. Typically the sub-pool is used for traffic that carries some QoS marking, EF or AF41 for instance.

In IOS, when you tell an interface that it will participate in TE tunneling, RSVP is enabled on that interface, much like PIM is enabled on an interface to build MDTs. By default, IOS makes 75% of the configured interface bandwidth reservable, so on a GigE interface you get 750 Mbps. I won't change this, but you could manually configure the reservable bandwidth down to, say, 50 Mbps to test reoptimization later on.
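For reference, this is roughly what the interface-level TE enablement looks like on the IOS side (the subinterface name here matches what shows up in the outputs later, and `ip rsvp bandwidth` with no value is what gives you the 75% default):

interface GigabitEthernet1.34
 mpls traffic-eng tunnels
 ip rsvp bandwidth

This sketch assumes the global mpls traffic-eng tunnels command and the IGP TE extensions are already in place from the earlier posts in this series.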

In the above topology, I have brought up the right side of the topology and re-designed the affinities to give us more to work with than just 6 routers; I will eventually start wrapping multiple TE attributes together to see the complexity. Right now I am only matching on the "BROWN" path, 0x0010/0x0010. I have labeled each link with the appropriate affinity values. I also enabled MPLS OAM on all the routers so we can trace inside the LSP as we build it; since we aren't leveraging the IGP for forwarding, it's best to test the TE-defined path directly.
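As a side note, on the XR boxes the LSP ping/traceroute machinery won't respond until MPLS OAM is turned on; on the versions used here it's a single command:

mpls oam

IOS has the LSP verification feature available by default on the trains I've worked with, so only the XR side needs this step.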

We'll configure a TE tunnel to reserve 200 Mbps. I'll debug RSVP on the headend so we can watch the RESV processing, but I'll only show the relevant output. Bandwidth is reserved in kilobits per second, so we'll reserve 200000 and fire away.

interface Tunnel1
 ip unnumbered Loopback0
 tunnel mode mpls traffic-eng
 tunnel destination 192.168.1.13
 tunnel mpls traffic-eng priority 7 7
 tunnel mpls traffic-eng bandwidth 200000
 tunnel mpls traffic-eng affinity 0x10 mask 0x10
 tunnel mpls traffic-eng path-option 1 dynamic

sub 0 from global tdb_bw_avail pool on link 10.12.13.12 hop_gen 0 link_gen 83 in forward link
    before:
    rrr_pcalc_print_bw_values nbr_p = 0x7EFF4C7BA398 pri 7
    Global inprogress 0, Global avail 0

So we can see that on the link between XR2 and XR3, affinity is NOT the issue: the global bandwidth available is ZERO. Unlike IOS, XR does not auto-reserve bandwidth; we have to manually tell XR how much bandwidth to make reservable. We'll use 750 Mbps to match IOS.

XR1-6
rsvp
 interface type/number.subinterface
  bandwidth 750000


sub 0 from global tdb_bw_avail pool on link 10.12.13.12 hop_gen 0 link_gen 101 in forward link
    before:
    rrr_pcalc_print_bw_values nbr_p = 0x7EFF4C35A2D8 pri 7
    Global inprogress 0, Global avail 750000
    Sub-pool inprogress 0, Sub-pool avail 0
        after:
    rrr_pcalc_print_bw_values nbr_p = 0x7EFF4C35A2D8 pri 7
    Global inprogress 0, Global avail 750000
    Sub-pool inprogress 0, Sub-pool avail 0
    192.168.1.13: 10.12.13.13
    192.168.1.13: 192.168.1.13
TE-PCALC-API: 192.168.1.3_538->192.168.1.13_1 {7}: P2P LSP Path Lookup result: success
MPLS_TE-5-LSP: LSP 192.168.1.3 1_538: UP
MPLS_TE-5-TUN: Tun1: installed LSP 1_538 (popt 1) for 1_511 (popt 1), reopt. LSP is up

So we can see from the debug output that there was bandwidth available and the LSP was successfully signaled end to end. Note "reopt" in the last line, which means the LSP was reoptimized onto this path.

R3#sh mpls traffic-eng tunnels tunnel 1

Name: R3_t1                               (Tunnel1) Destination: 192.168.1.13
  Status:
    Admin: up         Oper: up     Path: valid       Signalling: connected
    path option 1, type dynamic (Basis for Setup, path weight 5)

  Config Parameters:
    Bandwidth: 200000   kbps (Global)  Priority: 7  7   Affinity: 0x10/0x10

So we can see two things: first, the tunnel is up, meaning the affinity values matched along the path; and second, there was enough bandwidth available to allocate to this LSP. The setup and hold priority are both 7, the lowest priority; more on that in another post.

R3#$p reservation detail filter session-type 7 tunnel-id 1 | in Label|Bitrate
  Label: 1 (outgoing)
  Average Bitrate is 0 bits/sec, Maximum Burst is 1K bytes
  Label: 19 (outgoing)
  Average Bitrate is 200M bits/sec, Maximum Burst is 1K bytes

We can see that signaling reserved the right amount of bandwidth, and the downstream neighbor advertised outgoing label 19 for this LSP.

R3#sh ip rsvp interface
interface    rsvp       allocated  i/f max  flow max sub max  VRF
Gi1          ena        0          750M     750M     0
Gi1.34       ena        200M       750M     750M     0
Gi1.143      ena        0          750M     750M     0

We can see that we will use G1.34 to forward the traffic towards XR3. 

Now we add in a static route to map the customer traffic to the TE tunnel.

ip route 192.168.1.13 255.255.255.255 Tunnel1

R3#sh ip route 192.168.1.13
Routing entry for 192.168.1.13/32
  Known via "static", distance 1, metric 0 (connected)
  Routing Descriptor Blocks:
  * directly connected, via Tunnel1
      Route metric is 0, traffic share count is 1

We can see that any VPN based traffic that has a next hop of XR3, 192.168.1.13, will use the TE tunnel.

R3#sh ip cef 192.168.1.13 detail
192.168.1.13/32, epoch 2, flags [attached]
  local label info: global/36
  3 RR sources [non-eos indirection, heavily shared]
  attached to Tunnel1

We see that there is a CEF entry as well. The global/36 entry is the locally assigned label, not the transport or VPN label, but we can use that label as a reference to look at the MPLS forwarding table.

R3#sh mpls forwarding-table labels 36
Local      Outgoing   Prefix           Bytes Label   Outgoing   Next Hop
Label      Label      or Tunnel Id     Switched      interface
36    [T]  Pop Label  192.168.1.13/32  0             Tu1        point2point

[T]     Forwarding through a LSP tunnel.
        View additional labelling info with the 'detail' option

Since we are using a TE tunnel, we see the [T] flag, which indicates forwarding through a TE tunnel. To learn more we need to tack on the "detail" option.

R3#sh mpls forwarding-table labels 36 detail
Local      Outgoing   Prefix           Bytes Label   Outgoing   Next Hop
Label      Label      or Tunnel Id     Switched      interface
36         Pop Label  192.168.1.13/32  0             Tu1        point2point
        MAC/Encaps=18/22, MRU=1500, Label Stack{19}, via Gi1.34
        000C2924DCA2000C29062644810000228847 00013000
        No output feature configured

We can see that label 19 is used and is pointing out G1.34. Let's trace from R3 to XR3 in the core and see what happens. 

R3#traceroute mpls traffic-eng tunnel 1
Type escape sequence to abort.
  0 10.3.4.3 MRU 1500 [Labels: 19 Exp: 0]
L 1 10.3.4.4 MRU 1500 [Labels: 24000 Exp: 0] 11 ms
L 2 10.15.4.15 MRU 1500 [Labels: 24012 Exp: 0] 8 ms
L 3 10.15.16.16 MRU 1500 [Labels: 24013 Exp: 0] 9 ms
L 4 10.12.16.12 MRU 1500 [Labels: implicit-null Exp: 0] 15 ms
! 5 10.12.13.13 13 ms

Our first label is 19 which was allocated by TE.

I'm going to create another TE tunnel, telling the affinity to use the same values as before, but this time I need 600 Mbps.

interface Tunnel2
 ip unnumbered Loopback0
 tunnel mode mpls traffic-eng
 tunnel destination 192.168.1.13
 tunnel mpls traffic-eng priority 7 7
 tunnel mpls traffic-eng bandwidth 600000
 tunnel mpls traffic-eng affinity 0x10 mask 0x10
 tunnel mpls traffic-eng path-option 1 dynamic

%MPLS_TE-5-LSP: LSP 192.168.1.3 2_1: No path to destination, 192.168.1.13 (bw or affinity)

TE checks each hop along the way, making sure the affinity values match and bandwidth is available. In this case we asked for more than is left: the links shared with Tunnel 1 have 750 Mbps reservable minus the 200 Mbps already reserved, leaving 550 Mbps, and I asked for 600. I'll change the bandwidth to 550000 and see if that changes anything.

interface Tunnel2
 tunnel mpls traffic-eng bandwidth 550000

%MPLS_TE-5-TUN: Tun2: installed LSP 2_8 (popt 1) for nil, got 1st feasible path opt
%MPLS_TE-5-LSP: LSP 192.168.1.3 2_8: UP
%MPLS_TE-5-TUN: Tun2: LSP path change 2_8 for nil, normal
%LINEPROTO-5-UPDOWN: Line protocol on Interface Tunnel2, changed state to up

The tunnel comes up this time, which means that a path with the affinity applied matched and there was bandwidth available. 

R3#sh mpls traffic-eng tunnels tunnel 2 | b Status
  Status:
    Admin: up         Oper: up     Path: valid       Signalling: connected
    path option 1, type dynamic (Basis for Setup, path weight 5)

  Config Parameters:
    Bandwidth: 550000   kbps (Global)  Priority: 7  7   Affinity: 0x10/0x10
RSVP Path Info:
      My Address: 10.14.3.3
      Explicit Route: 10.14.3.14 10.14.15.14 10.14.15.15 10.15.16.15
                      10.15.16.16 10.12.16.16 10.12.16.12 10.12.13.12
                      10.12.13.13 192.168.1.13

In this example we aren't using the same path as Tunnel 1; the first hop goes to XR4 rather than R4.

R3#sh ip rsvp interface
interface    rsvp       allocated  i/f max  flow max sub max  VRF
Gi1          ena        0          750M     750M     0
Gi1.34       ena        200M       750M     750M     0
Gi1.143      ena        550M       750M     750M     0

G1.143 has an allocation of 550 Mbps. The documentation I have read doesn't spell out a mechanism for spreading reservations to avoid saturating a single link, but as I understand it, when dynamic paths tie on weight, CSPF prefers the path with the highest minimum available bandwidth, which would explain why Tunnel 2 landed on the other interface.

R3#$reservation detail filter session-type 7 tunnel-id 2 | in Label|Bitrate
  Label: 1 (outgoing)
  Average Bitrate is 0 bits/sec, Maximum Burst is 1K bytes
  Label: 24001 (outgoing)
  Average Bitrate is 550M bits/sec, Maximum Burst is 1K bytes

We can see that label 24001 is allocated for this tunnel and 550 Mbps are reserved. 

R3#traceroute mpls traffic-eng tunnel 2
Type escape sequence to abort.
  0 10.14.3.3 MRU 1500 [Labels: 24001 Exp: 0]
L 1 10.14.3.14 MRU 1500 [Labels: 24003 Exp: 0] 10 ms
L 2 10.14.15.15 MRU 1500 [Labels: 24011 Exp: 0] 10 ms
L 3 10.15.16.16 MRU 1500 [Labels: 24012 Exp: 0] 10 ms
L 4 10.12.16.12 MRU 1500 [Labels: implicit-null Exp: 0] 14 ms
! 5 10.12.13.13 14 ms

I'll configure a third tunnel, this one asking for 500 Mbps. With Tunnel 1 holding 200 Mbps and Tunnel 2 holding 550 Mbps, the brown links they share deeper in the core are already fully reserved (200 + 550 = 750), so there is no brown path left with 500 Mbps free.

interface Tunnel3
 ip unnumbered Loopback0
 tunnel mode mpls traffic-eng
 tunnel destination 192.168.1.13
 tunnel mpls traffic-eng priority 7 7
 tunnel mpls traffic-eng bandwidth 500000
 tunnel mpls traffic-eng affinity 0x10 mask 0x10
 tunnel mpls traffic-eng path-option 1 dynamic

*Jan 28 00:35:42.088: %MPLS_TE-5-TUN: Tun3: installed LSP 3_10 (popt 1) for nil, got 1st feasible path opt
*Jan 28 00:35:42.137: %MPLS_TE-5-LSP: LSP 192.168.1.3 3_10: Path Error from 10.15.4.15: Admission control Failure: Requested bandwidth unavailable (flags 0)
*Jan 28 00:35:42.138: %MPLS_TE-5-LSP: LSP 192.168.1.3 3_10: DOWN: path error
*Jan 28 00:35:42.138: %MPLS_TE-5-TUN: Tun3: installed LSP nil for 3_10 (popt 1), path error
*Jan 28 00:35:42.138: %MPLS_TE-5-TUN: Tun3: LSP path change nil for 3_10

We get a clear syslog message stating that the requested bandwidth isn't reservable, so the tunnel fails to come up. The message repeats as the headend keeps retrying the path.
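Part of the reason Tunnel 3 is simply stuck is that all three tunnels use setup/hold priority 7 7, so it has no way to preempt the existing reservations. As a teaser for the priority post, giving it a better (numerically lower) setup priority than the other tunnels' hold priority would let it preempt one of them, something like:

interface Tunnel3
 tunnel mpls traffic-eng priority 5 5

I'm leaving the priorities alone in this lab and will cover preemption behavior separately.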

So as you can see, the bandwidth attribute is pretty straightforward. As long as the bandwidth is available, the tunnel should come up easily, assuming the other TE attributes can be met, of course.

Thanks for stopping by!
Rob Riker, CCIE #50693
