DMVPN redundancy

In this post I want to show you how to implement redundancy in a DMVPN network. The Cisco DMVPN design guide describes two kinds of redundancy for DMVPN networks:

1- Dual hub/Single DMVPN cloud
2- Dual hub/Dual DMVPN cloud

Cisco does not recommend the first one, in which we have two central hubs that both use IP addresses from the same L3 network on their mGRE tunnels; for example, 192.168.1.1/24 for the first hub, 192.168.1.2/24 for the second hub and 192.168.1.3 to 192.168.1.254 for the spokes. The disadvantages of this design are discussed in that document, and you can refer to it if you want more details.
But in this post I want to show you how to use the second method in a sample DMVPN topology. Using two hubs with two different DMVPN clouds means creating two different mGRE tunnels on the hubs and spokes (mGRE on the spokes in the case of DMVPN phase 2 or phase 3, which is ideal too). In other words, we need to assign IP addresses from two different networks to the two mGRE tunnel interfaces on each router, whether it is a hub or a spoke. So to implement dual hub/dual DMVPN, we first build the first DMVPN cloud, just as we are used to. Our topology consists of 14 routers, with R12 and R13 as our hub routers (R12 as the primary and R13 as the backup hub) and R2, R6, R8 and R10 as our spokes.
R2 only advertises a summary route for its loopbacks to the other routers, so the rest of the network just knows how to reach R2’s loopback interfaces via BGP. This means the network behind R2, including the link between R2 and R1 and the routes that exist on R1, is not advertised into the public network, and nobody knows about these addresses. The same goes for the other spokes. So it is better to list our publicly accessible networks first:

1- R2 advertises 2.2.2.1/32 (a sample BGP snippet follows this list)
2- R6 advertises 6.6.6.1/32
3- R8 does not run BGP; instead, R12 advertises the link between itself and R8 into BGP.
4- R10 does not run BGP either; instead, R13 advertises the link between itself and R10 into BGP.
5- R12 as our primary hub just advertises its loopback, 12.12.12.1/32
6- R13 as our backup hub just advertises its loopback, 13.13.13.1/32
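
As an illustration of item 1, the BGP configuration on R2 might look like the following. This is only a sketch; the AS numbers and the neighbor address are assumptions, since the BGP setup itself is not the focus of this post:

R2:
router bgp 65002
 ! advertise only the public loopback into BGP
 network 2.2.2.1 mask 255.255.255.255
 ! upstream/ISP neighbor address and AS are assumed values
 neighbor 192.0.2.1 remote-as 65000
 no auto-summary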

Customer internal networks (which are not reachable from the public internet) at each site run an IGP to exchange internal routes and to reach the internet. So, briefly:

1- We are running EIGRP #1 between R2 and R1.
2- We are running OSPF area 0 inside all the other customer internal networks (between R4, R5 and R6; between R7 and R8; and between R9 and R10); see the sketch after this list.
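
For example, the internal routing on R1 and R7 might look like this. The network statements are assumptions based on the addressing used in this lab:

R1:
router eigrp 1
 ! the internal link between R1 and R2
 network 192.168.12.0 0.0.0.255
 no auto-summary

R7:
router ospf 1
 ! the internal link between R7 and R8
 network 100.1.78.0 0.0.0.255 area 0
 ! R7's internal loopbacks
 network 7.7.7.0 0.0.0.255 area 0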

First of all, we need to test reachability between the publicly reachable networks of the spokes and the hubs.

R8(config)#do ping 2.2.2.1
Sending 5, 100-byte ICMP Echos to 2.2.2.1, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 40/52/76 ms
R8(config)#do ping 6.6.6.1
Sending 5, 100-byte ICMP Echos to 6.6.6.1, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 108/187/300 ms
R8(config)#do ping 100.1.131.10
Sending 5, 100-byte ICMP Echos to 100.1.131.10, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 8/27/68 ms
R8(config)#do ping 12.12.12.1
Sending 5, 100-byte ICMP Echos to 12.12.12.1, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/27/68 ms
R8(config)# do ping 13.13.13.1
Sending 5, 100-byte ICMP Echos to 13.13.13.1, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 4/24/40 ms

Everything is OK and we have full connectivity between the publicly accessible addresses. The second step is to configure the hub. We configure the hub first so that it is ready to listen to the NHRP registration requests that trigger the whole process. Just remember that NHRP registration is triggered by the spoke routers, not the hub. This means that if you shut down the hub router’s loopback interface (we will use a loopback on the hub router as the mGRE tunnel source, but you can use whatever you want) or clear the NHRP resolution table on the hub, NHRP will not work again until you disable and re-enable the tunnel interfaces on the spokes to re-trigger the process. Let’s configure R12, our primary DMVPN hub router:

R12(config-if)#do show run inter tunnel 1
Building configuration...

Current configuration : 298 bytes
!
interface Tunnel1
 ip address 100.1.1.12 255.255.255.0
 no ip redirects
 no ip next-hop-self eigrp 1
 ip nhrp authentication cisco1
 ip nhrp map multicast dynamic
 ip nhrp network-id 1
 no ip split-horizon eigrp 1
 tunnel source Loopback0
 tunnel mode gre multipoint
 tunnel key 1

I don’t want to explain every detail of the configuration above, because I wrote another post about DMVPN a couple of months ago and you can refer to that if you need more explanation.
The configuration that is shared by all the spokes is as follows:

interface Tunnel1
 ip address 100.1.1.2 255.255.255.0
 no ip redirects
 ip nhrp authentication cisco1
 ip nhrp map 100.1.1.12 12.12.12.1
 ip nhrp map multicast 12.12.12.1
 ip nhrp network-id 1
 ip nhrp nhs 100.1.1.12
 tunnel source Loopback0
 tunnel mode gre multipoint
 tunnel key 1

What differs between the spokes is the IP address assigned to the tunnel interface, which must be unique on each spoke, and in some cases the tunnel source (see the R8 example after this list). Our addresses and mGRE tunnel sources on the spokes will be:

1- R2: 100.1.1.2/24, source of tunnel: loopback0
2- R6: 100.1.1.6/24, source of tunnel: loopback0
3- R8: 100.1.1.8/24, source of tunnel: serial 0/0
4- R10: 100.1.1.10/24, source of tunnel: serial 0/0
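
For example, on R8 the shared template above is adjusted like this (a sketch based on the addressing just listed):

interface Tunnel1
 ip address 100.1.1.8 255.255.255.0
 no ip redirects
 ip nhrp authentication cisco1
 ip nhrp map 100.1.1.12 12.12.12.1
 ip nhrp map multicast 12.12.12.1
 ip nhrp network-id 1
 ip nhrp nhs 100.1.1.12
 ! R8 sources the tunnel from its serial link instead of a loopback
 tunnel source Serial0/0
 tunnel mode gre multipoint
 tunnel key 1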

After completing this step, the NHRP table on the hub must contain some data. The “show ip nhrp” command reveals that data, if any exists:

R12(config-if)#do sh ip nhrp
100.1.1.2/32 via 100.1.1.2, Tunnel1 created 00:01:24, expire 01:59:26
  Type: dynamic, Flags: unique registered used 
  NBMA address: 2.2.2.1 
100.1.1.6/32 via 100.1.1.6, Tunnel1 created 00:01:23, expire 01:59:22
  Type: dynamic, Flags: unique registered used 
  NBMA address: 6.6.6.1 
100.1.1.8/32 via 100.1.1.8, Tunnel1 created 00:01:25, expire 01:59:19
  Type: dynamic, Flags: unique registered used 
  NBMA address: 100.1.128.8 
100.1.1.10/32 via 100.1.1.10, Tunnel1 created 00:01:24, expire 01:59:14
  Type: dynamic, Flags: unique registered used 
  NBMA address: 100.1.131.10
R12(config-if)#do sh dmvpn
Legend: Attrb --> S - Static, D - Dynamic, I - Incomplete
        N - NATed, L - Local, X - No Socket
        # Ent --> Number of NHRP entries with same NBMA peer

Tunnel1, Type:Hub, NHRP Peers:4,
 # Ent  Peer NBMA Addr Peer Tunnel Add State  UpDn Tm Attrb
 ----- --------------- --------------- ----- -------- -----
     1 2.2.2.1               100.1.1.2    UP    never     D
     1 6.6.6.1               100.1.1.6    UP    never     D
     1 100.1.128.8           100.1.1.8    UP    never     D
     1 100.1.131.10         100.1.1.10    UP    never     D

By now, we can say that we have successfully created our first DMVPN tunnel between R12 as the hub and R2, R6, R8 and R10 as the spokes. The third step is to make these sites exchange their internal networks (the ones that are not reachable from internet routers) with each other, so that every client inside a customer site is able to reach the clients in the other sites. I emphasize that the internal networks should be reachable from the other internal networks only, as this is the goal of implementing any kind of VPN solution.
We will run EIGRP on the tunnel interfaces of every router; a minimal sketch of that configuration might look like the following (the network statements are assumptions based on this lab’s addressing):
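
R12:
router eigrp 1
 ! the first DMVPN tunnel subnet
 network 100.1.1.0 0.0.0.255
 no auto-summary

R2:
router eigrp 1
 network 100.1.1.0 0.0.0.255
 ! the internal link toward R1
 network 192.168.12.0 0.0.0.255
 no auto-summary

Let’s check the neighborships: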

R12(config-if)#do sh ip eigrp neigh
IP-EIGRP neighbors for process 1
H   Address                 Interface   SRTT   RTO  Q    Seq
                                        (ms)        Cnt  Num
3   100.1.1.2               Tu1          893  5000  0    58
2   100.1.1.8               Tu1          194  5000  0    14
1   100.1.1.6               Tu1          303  5000  0    17
0   100.1.1.10              Tu1          184  5000  0    18

You can see that the EIGRP neighborships between the hub router and the spokes have been established successfully. Given that we have enabled DMVPN phase 2 on the hub router, each spoke router will have (and should have) all the other customer routes in its routing table. But if you remember, we enabled OSPF in some sites, and this necessitates redistribution between EIGRP 1 and OSPF on R6, R8 and R10. Because we used EIGRP 1 itself between R2 and R1, no redistribution is needed on R2. On R6, R8 and R10, a mutual redistribution sketch could look like the following (the process numbers and seed metric values are assumptions; use whatever fits your design):
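
R8:
router eigrp 1
 ! seed metric: bandwidth, delay, reliability, load, MTU
 redistribute ospf 1 metric 10000 100 255 1 1500
!
router ospf 1
 redistribute eigrp 1 subnets

Let’s check the routing table on the routers: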

R1(config)#do sh ip route  ei 
     100.0.0.0/24 is subnetted, 7 subnets
D EX    100.1.109.0 
           [170/310070272] via 192.168.12.2, 00:17:48, FastEthernet0/0
D EX    100.1.78.0 [170/310070272] via 192.168.12.2, 00:17:48, Fe0/0
D EX    100.1.45.0 [170/310070272] via 192.168.12.2, 00:17:48, Fe0/0
D EX    100.1.46.0 [170/310070272] via 192.168.12.2, 00:17:48, Fe0/0
D EX    100.1.56.0 [170/310070272] via 192.168.12.2, 00:17:48, Fe0/0
D       100.1.1.0 [90/297270016] via 192.168.12.2, 00:17:51, Fe0/0
D EX    100.1.2.0 [170/540469760] via 192.168.12.2, 01:34:05, Fe0/0
     4.0.0.0/32 is subnetted, 3 subnets
D EX    4.4.4.1 [170/310070272] via 192.168.12.2, 00:17:48, Fe0/0
D EX    4.4.4.2 [170/310070272] via 192.168.12.2, 00:17:48, Fe0/0
D EX    4.4.4.3 [170/310070272] via 192.168.12.2, 00:17:48, Fe0/0
     5.0.0.0/32 is subnetted, 3 subnets
D EX    5.5.5.1 [170/310070272] via 192.168.12.2, 00:17:48, Fe0/0
D EX    5.5.5.3 [170/310070272] via 192.168.12.2, 00:17:49, Fe0/0
D EX    5.5.5.2 [170/310070272] via 192.168.12.2, 00:17:49, Fe0/0
     7.0.0.0/32 is subnetted, 3 subnets
D EX    7.7.7.3 [170/310070272] via 192.168.12.2, 00:17:49, Fe0/0
D EX    7.7.7.2 [170/310070272] via 192.168.12.2, 00:17:49, Fe0/0
D EX    7.7.7.1 [170/310070272] via 192.168.12.2, 00:17:49, Fe0/0
     8.0.0.0/32 is subnetted, 3 subnets
D EX    8.8.8.8 [170/310070272] via 192.168.12.2, 00:17:49, Fe0/0
D EX    8.8.8.9 [170/310070272] via 192.168.12.2, 00:17:50, Fe0/0
D EX    8.8.8.10 [170/310070272] via 192.168.12.2, 00:17:50, Fe0/0
     9.0.0.0/32 is subnetted, 3 subnets
D EX    9.9.9.9 [170/310070272] via 192.168.12.2, 00:17:50, Fe0/0
D EX    9.9.9.11 [170/310070272] via 192.168.12.2, 00:17:50, Fe0/0
D EX    9.9.9.10 [170/310070272] via 192.168.12.2, 00:17:50, Fe0/0
     10.0.0.0/32 is subnetted, 3 subnets
D EX    10.10.10.2 [170/310070272] via 192.168.12.2, 00:17:50, Fe0/0
D EX    10.10.10.3 [170/310070272] via 192.168.12.2, 00:17:50, Fe0/0
D EX    10.10.10.1 [170/310070272] via 192.168.12.2, 00:17:50, Fe0/0

You can see that R1 has full reachability to the other internal networks, so a simple ping test confirms it:

R1(config)#do ping 7.7.7.1
Sending 5, 100-byte ICMP Echos to 7.7.7.1, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 60/112/208 ms
R1(config)#do ping 9.9.9.9
Sending 5, 100-byte ICMP Echos to 9.9.9.9, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 56/117/228 ms
R1(config)#do ping 4.4.4.1
Sending 5, 100-byte ICMP Echos to 4.4.4.1, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 12/83/244 ms

I tested reachability to a few remote networks, but you can run the same ping test across the whole network; all of them will succeed.

Up to this point, we have done what we did before, as described in the previous DMVPN post. But the final goal of this post is to provide redundancy, so that disabling R12 does not affect reachability between the customer sites. For this purpose I’m going to configure another DMVPN cloud between R13 and the spokes. This cloud will be almost identical to the first one; even the configuration of the routers will be nearly the same. But in a dual hub/dual DMVPN design we need another mGRE tunnel interface on each router to build the second cloud. So let’s start with R13, the proposed backup DMVPN hub router.

R13(config-if)#do sh run inter tun 2 
interface Tunnel2
 ip address 100.1.2.13 255.255.255.0
 no ip redirects
 no ip next-hop-self eigrp 2
 ip nhrp authentication cisco2
 ip nhrp map multicast dynamic
 ip nhrp network-id 2
 no ip split-horizon eigrp 2
 tunnel source Loopback0
 tunnel mode gre multipoint
 tunnel key 2

You can see that there is no big difference between the tunnel interface configurations of R12 and R13, except the tunnel key, the NHRP network-id and, of course, the IP address, which must come from a different IP network than the one used for tunnel 1. I selected 100.1.2.0/24 for the second DMVPN tunnel.
The shared configuration of the spokes:

interface Tunnel2
 ip address 100.1.2.2 255.255.255.0
 no ip redirects
 ip nhrp authentication cisco2
 ip nhrp map 100.1.2.13 13.13.13.1
 ip nhrp map multicast 13.13.13.1
 ip nhrp network-id 2
 ip nhrp nhs 100.1.2.13
 delay 999999
 tunnel source Loopback0
 tunnel mode gre multipoint
 tunnel key 2

As with tunnel 1, only the IP address of the tunnel interface needs to change from spoke to spoke; the tunnel key and NHRP network-id must match the hub router, R13. Also notice the “delay 999999” command, which raises the interface delay and therefore the EIGRP metric of routes learned over this tunnel, making the second cloud less attractive as a path. At this point we should have the second DMVPN cloud up and running.

R13(config-if)#do sh dmvpn
Legend: Attrb --> S - Static, D - Dynamic, I - Incomplete
        N - NATed, L - Local, X - No Socket
        # Ent --> Number of NHRP entries with same NBMA peer

Tunnel2, Type:Hub, NHRP Peers:4,
 # Ent  Peer NBMA Addr Peer Tunnel Add State  UpDn Tm Attrb
 ----- --------------- --------------- ----- -------- -----
     1 2.2.2.1               100.1.2.2    UP    never     D
     1 6.6.6.1               100.1.2.6    UP    never     D
     1 100.1.128.8           100.1.2.8    UP    never     D
     1 100.1.131.10         100.1.2.10    UP    never     D
R13(config-if)#do sh ip nhrp
100.1.2.2/32 via 100.1.2.2, Tunnel2 created 02:31:41, expire 01:28:18
  Type: dynamic, Flags: unique registered 
  NBMA address: 2.2.2.1 
100.1.2.6/32 via 100.1.2.6, Tunnel2 created 02:31:44, expire 01:28:15
  Type: dynamic, Flags: unique registered 
  NBMA address: 6.6.6.1 
100.1.2.8/32 via 100.1.2.8, Tunnel2 created 02:31:47, expire 01:28:12
  Type: dynamic, Flags: unique registered 
  NBMA address: 100.1.128.8 
100.1.2.10/32 via 100.1.2.10, Tunnel2 created 02:31:50, expire 01:28:09
  Type: dynamic, Flags: unique registered 
  NBMA address: 100.1.131.10 

And now it is time to enable routing on the new tunnel interfaces. To keep this routing information isolated from what the first EIGRP instance has learned, I’m going to run a separate process, EIGRP 2, on the new tunnel interfaces. It mirrors the EIGRP 1 sketch shown earlier; a minimal version might be (again, just a sketch):
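
R13:
router eigrp 2
 ! the second DMVPN tunnel subnet
 network 100.1.2.0 0.0.0.255
 no auto-summary

The spokes get the same network statement under their own “router eigrp 2”. After doing this, we must have the EIGRP neighborships in place: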

R13(config-if)#do sh ip eigrp neigh
IP-EIGRP neighbors for process 2
H   Address                 Interface   SRTT   RTO  Q    Seq
                                        (ms)        Cnt  Num
3   100.1.2.2               Tu2          384  5000  0    20
2   100.1.2.6               Tu2          181  5000  0    13
1   100.1.2.8               Tu2          181  5000  0    14
0   100.1.2.10              Tu2          221  5000  0    15

To make the second DMVPN cloud act as a backup path between the spoke routers, we need to adjust some routing details, such as the administrative distance, on all routers. So before enabling EIGRP 2 on the links to the customer sites, something like this is enough:

R2, R6, R8, R10, R12, R13:

router eigrp 2
 no auto-summary
 distance eigrp 92 172

This makes the routers prefer the routes learned via EIGRP 1 (with the default AD of 90 for internal and 170 for external routes) over those from EIGRP 2 (now 92 and 172). But any failure in EIGRP 1 will make the routers fall back to the next best path between the spokes, which runs through our second hub, R13.
The process of redistributing the regional OSPF domains into EIGRP 2 is the same as before. But remember that we need some additional changes on R2 and/or R1 so that R1 can still route traffic to the remote networks if DMVPN cloud 1 goes down. You can enable EIGRP 2 between R1 and R2, or you can redistribute between EIGRP 1 and EIGRP 2 on R2; you could even use a simple static default route on R1 pointing toward R2. A mutual redistribution sketch on R2 might look like this (no seed metric is required, because the metric components are carried between EIGRP processes):
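
R2:
router eigrp 1
 ! hand routes learned from the backup cloud down to R1
 redistribute eigrp 2
!
router eigrp 2
 redistribute eigrp 1
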
As the final step we need to verify that the network keeps working in case of a failure. For this, I’m going to start a long ping from some customer internal routers toward some remote networks, and in the middle of the test shut down tunnel 1 on R12 to see what happens to the ping traffic. An uninterrupted ping will show that our configuration works.

R1(config)#do ping 7.7.7.1 rep 2000
R4(config)#do ping 9.9.9.9 rep 2000
R12(config-if)#inter tunnel 1
R12(config-if)#shut
R1(config)#do ping 7.7.7.1 rep 2000
Sending 2000, 100-byte ICMP Echos to 7.7.7.1, timeout is 2 seconds:
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!!.!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!!!!!!!!!!!!!!!
R4(config)#do ping 9.9.9.9 rep 2000 
Sending 2000, 100-byte ICMP Echos to 9.9.9.9, timeout is 2 seconds:
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

Great! Let’s check the routing table on a sample spoke router:

R2(config-router)#do sh ip route 7.7.7.1 255.255.255.255
Routing entry for 7.7.7.1/32
  Known via "eigrp 2", distance 172, metric 553244416, type external
  Redistributing via eigrp 1, eigrp 2
  Advertised by eigrp 1
  Last update from 100.1.2.8 on Tunnel2, 00:01:44 ago
  Routing Descriptor Blocks:
  * 100.1.2.8, from 100.1.2.13, 00:01:44 ago, via Tunnel2
      Route metric is 553244416, traffic share count is 1
      Total delay is 10500000 microseconds, minimum bandwidth is 9 Kbit
      Reliability 100/255, minimum MTU 1472 bytes
      Loading 10/255, Hops 2

You can see that the routing information for our sample remote network (7.7.7.1/32) is now learned from 100.1.2.13, which is the IP address of tunnel 2 on R13, our backup DMVPN hub.
