Time and time again I have customers wanting to understand the true benefit of a Cisco Nexus switch versus a Cisco Catalyst switch for connecting servers in the data center. Customers may argue that they only need simple 1G or 10G speeds, dual-homed with a port-channel, and that they can achieve this with a stack of Catalyst 9300 or Catalyst 9500 switches. So here are a few reasons to ponder:
If your servers are dual-homed for redundancy across two separate Cisco Catalyst switches, and those switches are stacked so you can leverage a port-channel, that sounds fine and dandy. But when it comes time to upgrade the switches, and there will come a time when you have to upgrade them, the whole stack must be reloaded, resulting in an outage for your servers. This is not the case with Cisco Nexus: with vPC, the two switches run independent control planes, so you can upgrade one switch at a time while the servers stay up.
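For reference, here is a minimal sketch of what that looks like on a pair of Nexus switches. The domain ID, port-channel numbers, interface, and peer-keepalive address are all assumed values for illustration, not taken from any particular deployment:

```
feature vpc
feature lacp

vpc domain 10
  peer-keepalive destination 192.168.1.2

! Peer-link between the two Nexus switches
interface port-channel1
  switchport mode trunk
  vpc peer-link

! Server-facing port-channel, dual-homed across both switches
interface port-channel20
  switchport mode trunk
  vpc 20

interface Ethernet1/20
  channel-group 20 mode active
```

The server sees one LACP port-channel, but each Nexus switch remains an independent device that can be reloaded or upgraded on its own.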
After a few hours of troubleshooting, I found that when using the 3.3 brownfield CloudFormation template, entering the VCO as an IP address does not work. You must use the FQDN instead of the IP for the VCO. I also made sure to set the version to 331 instead of 321, and used the c5.4xlarge instance type. After the vEdge joins the orchestrator, you can upgrade it to newer code.
This happens because your NVE interface is down. Shut down the NVE source loopback and the NVE interface, then unshut the loopback followed by the NVE interface.
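The bounce sequence looks like this on NX-OS; loopback1 as the NVE source interface is an assumption, so substitute whatever your nve1 `source-interface` actually points at:

```
configure terminal
interface loopback1
  shutdown
interface nve1
  shutdown
interface loopback1
  no shutdown
interface nve1
  no shutdown
```

The order matters: bring the loopback back up first so the NVE interface has a valid source address when it comes up.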
The border leaf receives the advertisement from the external router and advertises it to the spine, but the spine does not advertise it to the other leafs. A review of the BGP L2VPN EVPN routing table shows: “Path type: internal, path is invalid(no RMAC or L3VNI), no labeled nexthop”.
Why is this happening? Because the L3VNI is not configured properly. On the border leaf, verify that the L3VNI VLAN is defined, the VNI is assigned to that VLAN, and the corresponding SVI is configured with the VRF membership and ip forward:
vlan 2500
  name L3VNI-VLAN
  vn-segment 50000

vrf context PROD
  vni 50000
  rd auto
  address-family ipv4 unicast
    route-target both auto
    route-target both auto evpn
  address-family ipv6 unicast
    route-target both auto
    route-target both auto evpn

interface Vlan2500
  description L3VNI-SVI
  no shutdown
  mtu 9216
  vrf member PROD
  no ip redirects
  ip forward
  no ipv6 redirects

interface nve1
  no shutdown
  host-reachability protocol bgp
  source-interface loopback1
  member vni 50000 associate-vrf
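Once the L3VNI configuration is in place, a few standard NX-OS show commands will confirm that the VNI is up and the routes are now valid (the vrf name PROD matches the config above; adjust for your environment):

```
show nve vni
show nve peers
show bgp l2vpn evpn
show ip route vrf PROD
```

In the BGP L2VPN EVPN output, the previously invalid paths should now show a valid path type with an RMAC and the L3VNI label.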
In order to configure the TGW attachment in appliance mode, you must perform this from the AWS CLI. Create an access key for your IAM user and record the secret key. Then configure your AWS CLI client with these keys so you can access the AWS APIs. From the AWS CLI, enter the following:
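A sketch of the commands, assuming your credentials are already created; the attachment ID shown is a placeholder, so substitute the ID of your own TGW VPC attachment:

```
aws configure

aws ec2 modify-transit-gateway-vpc-attachment \
  --transit-gateway-attachment-id tgw-attach-0123456789abcdef0 \
  --options ApplianceModeSupport=enable
```

Appliance mode keeps both directions of a flow on the same AZ-local ENI of the attachment, which is what you want when traffic hashes through a firewall or SD-WAN appliance.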