Requirements

Cisco recommends that you have knowledge of these topics:

  • Ethernet Virtual Connection (EVC) configuration
  • Basic L2 and L3 configuration on the ASR platform
  • Basic Internet Group Management Protocol (IGMP) Version 3 and Protocol Independent Multicast (PIM) configuration knowledge

Components Used

The information in this document is based on the ASR1002 with Cisco IOS® Version asr1000rp1-adventerprise.03.09.00.S.153-2.S.bin.

Your system must meet these requirements in order to implement the OTV feature on the ASR 1000:

  • Cisco IOS-XE Version 3.5S or later
  • Maximum Transmission Unit (MTU) of 1542 or higher

    Note: OTV adds a 42-byte header with the Do Not Fragment bit (DF-bit) set to all encapsulated packets. In order to transport 1500-byte packets through the overlay, the transit network must support an MTU of 1542 or higher. In order to allow fragmentation across OTV, you must enable otv fragmentation join-interface <interface>, as shown in the example after this list.
  • Unicast and multicast reachability between sites
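
For example, if the transit network cannot carry 1542-byte packets end to end, this is a minimal sketch of the fragmentation option, shown here on the join interface used later in this document:

ASR-1(config)#otv fragmentation join-interface GigabitEthernet0/0/0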

The information in this document was created from the devices in a specific lab environment. All of the devices used in this document started with a cleared (default) configuration. If your network is live, make sure that you understand the potential impact of any command.

Configure

This section describes how to configure OTV multicast mode.

Network Diagram with Basic L2/L3 Connectivity


Start with a base configuration. The internal interface on the ASR is configured with service instances for dot1q traffic. The OTV join interface is the external WAN L3 interface.

ASR-1
interface GigabitEthernet0/0/0
 description OTV-WAN-Connection
 mtu 9216
 ip address 172.17.100.134 255.255.255.0
 negotiation auto
 cdp enable

ASR-2
interface GigabitEthernet0/0/0
 description OTV-WAN-Connection
 mtu 9216
 ip address 172.16.64.84 255.255.255.0
 negotiation auto
 cdp enable

Since OTV adds a 42-byte header, you must verify that the Internet Service Provider (ISP) passes the minimum MTU size from site to site. In order to verify this, send a 1542-byte ping with the DF-bit set from one join interface to the other. This simulates an OTV-encapsulated packet: the full payload plus the Do Not Fragment flag. If you cannot ping without the DF-bit, then you have a routing problem. If you can ping without it, but cannot ping with the DF-bit set, you have an MTU problem. Once the ping succeeds, you are ready to add the OTV multicast mode configuration to your site ASRs.

ASR-1#ping 172.16.64.84 size 1542 df-bit
Type escape sequence to abort.
Sending 5, 1542-byte ICMP Echos to 172.16.64.84, timeout is 2 seconds:
Packet sent with the DF bit set
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/1/2 ms

The internal interface is an L2 port configured with service instances for the L2 dot1q-tagged packets. It also builds an internal site bridge domain; in this example, it is the untagged VLAN 1. The internal site bridge domain is used for communication between multiple OTV devices at the same site, which allows them to determine which device is the Authoritative Edge Device (AED) for each bridge domain.

The service instance must be configured into a bridge domain that uses the overlay.

ASR-1
interface GigabitEthernet0/0/1
 no ip address
 negotiation auto
 cdp enable
 service instance 1 ethernet
  encapsulation untagged
  bridge-domain 1
 !
 service instance 50 ethernet
  encapsulation dot1q 100
  bridge-domain 200
 !
 service instance 51 ethernet
  encapsulation dot1q 101
  bridge-domain 201


ASR-2
interface GigabitEthernet0/0/2
 no ip address
 negotiation auto
 cdp enable
 service instance 1 ethernet
  encapsulation untagged
  bridge-domain 1
 !
 service instance 50 ethernet
  encapsulation dot1q 100
  bridge-domain 200
 !
 service instance 51 ethernet
  encapsulation dot1q 101
  bridge-domain 201

OTV Multicast Minimum Configuration

This is a basic configuration that requires only a few commands in order to set up OTV and the join and internal interfaces.

Configure the local site bridge domain. In this example, it is VLAN 1 on the LAN. The site identifier is specific to each physical location. In this example, there are two remote locations that are physically independent of each other, so Site 1 and Site 2 are configured accordingly. Multicast must also be configured in accordance with the OTV requirements.

ASR-1

Config t
otv site bridge-domain 1
otv site-identifier 0000.0000.0001
ip multicast-routing distributed
ip pim ssm default

interface GigabitEthernet0/0/0
  ip pim passive
  ip igmp version 3


ASR-2

Config t
otv site bridge-domain 1
otv site-identifier 0000.0000.0002
ip multicast-routing distributed
ip pim ssm default
interface GigabitEthernet0/0/0
 ip pim passive
 ip igmp version 3
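
As an optional sanity check before you build the overlay, you can confirm that PIM passive mode and IGMP Version 3 are active on the join interface. These are standard show commands; the exact output varies by release and is not shown in this document:

ASR-1#show ip pim interface GigabitEthernet0/0/0
ASR-1#show ip igmp interface GigabitEthernet0/0/0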

Build the overlay interface on each side: configure the overlay, apply the join interface, and add the control and data groups.

Add the two bridge domains that you want to extend. Notice that you do not extend the site bridge domain, only the two VLANs that are needed. On the overlay interface, build a separate service instance for each extended bridge domain (200 and 201) and apply the dot1q tags 100 and 101, respectively.

ASR-1

Config t
interface Overlay1
 no ip address
 otv join-interface GigabitEthernet0/0/0
 otv control-group 225.0.0.1
 otv data-group 232.10.10.0/24
 service instance 10 ethernet
  encapsulation dot1q 100
  bridge-domain 200
 !
 service instance 11 ethernet
  encapsulation dot1q 101
  bridge-domain 201


ASR-2

Config t
interface Overlay1
 no ip address
 otv join-interface GigabitEthernet0/0/0
 otv control-group 225.0.0.1
 otv data-group 232.10.10.0/24
 service instance 10 ethernet
  encapsulation dot1q 100
  bridge-domain 200
 !
 service instance 11 ethernet
  encapsulation dot1q 101
  bridge-domain 201

Note: Do NOT extend the site VLAN on the overlay interface. If you do, the two ASRs detect a conflict because each believes that the remote device is in the same site.
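
If you suspect a site bridge-domain or site-identifier problem, the show otv site command displays the local site bridge domain and any other edge devices discovered on it, so an accidental overlap such as this is visible there (output not shown in this document):

ASR-1#show otv site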

At this stage, the ASR-to-ASR OTV multicast adjacency is complete and functional. The neighbors are found, and each ASR should be AED capable for the VLANs that need to be extended.

ASR-1#show otv
Overlay Interface Overlay1
 VPN name                 : None
 VPN ID                   : 2
 State                    : UP
 AED Capable              : Yes
 IPv4 control group       : 225.0.0.1
 Mcast data group range(s): 232.10.10.0/24
 Join interface(s)        : GigabitEthernet0/0/0
 Join IPv4 address        : 172.17.100.134
 Tunnel interface(s)      : Tunnel0
 Encapsulation format     : GRE/IPv4
 Site Bridge-Domain       : 1
 Capability               : Multicast-reachable
 Is Adjacency Server      : No
 Adj Server Configured    : No
 Prim/Sec Adj Svr(s)      : None


ASR-2#show otv
Overlay Interface Overlay1
 VPN name                 : None
 VPN ID                   : 2
 State                    : UP
 AED Capable              : Yes
 IPv4 control group       : 225.0.0.1
 Mcast data group range(s): 232.10.10.0/24
 Join interface(s)        : GigabitEthernet0/0/0
 Join IPv4 address        : 172.16.64.84
 Tunnel interface(s)      : Tunnel0
 Encapsulation format     : GRE/IPv4
 Site Bridge-Domain       : 1
 Capability               : Multicast-reachable
 Is Adjacency Server      : No
 Adj Server Configured    : No
 Prim/Sec Adj Svr(s)      : None
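
In addition to show otv, the show otv adjacency command lists each remote edge device learned over the overlay along with its adjacency state, which confirms that the hello exchange over the multicast control group succeeded (output not shown in this document):

ASR-1#show otv adjacency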

OTV Verification

Use this section in order to confirm that your configuration works properly.

Network Diagram with OTV

Verification Commands and Expected Output

This output shows that VLANs 100 and 101 are extended. The ASR is the AED, and the internal interface and the service instance that map each VLAN to its bridge domain are also shown.