Reasons to use Cisco Nexus instead of Cisco Catalyst in the Data Center

Time and time again I have customers wanting to understand the true benefit of a Cisco Nexus switch versus a Cisco Catalyst switch in the data center for connecting servers. Customers may argue that they only need simple 1G or 10G speeds, dual-homed with a port-channel, and that they can achieve this with a stack of Catalyst 9300 or Catalyst 9500 switches. Here are a few reasons to ponder:

Code upgrade

If your servers are dual-homed for redundancy across two Cisco Catalyst switches that are stacked so you can run a port-channel across them, that sounds fine and dandy until it comes time to upgrade the switch code, because there will be a time you have to upgrade. The whole stack must be reloaded, resulting in an outage to your servers. This is not the case with Cisco Nexus: with vPC, the two switches keep independent control planes, so you can upgrade one peer at a time while the server's port-channel stays up on the other.
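As a rough sketch of what that looks like (interface numbers, VLANs, and addresses here are illustrative, not from any specific deployment), a minimal vPC setup on a pair of Nexus switches is along these lines, with the same vPC number configured on both peers:

feature vpc
feature lacp

vpc domain 10
  peer-keepalive destination 10.0.0.2 source 10.0.0.1

interface port-channel1
  switchport mode trunk
  vpc peer-link

interface port-channel20
  switchport mode access
  switchport access vlan 100
  vpc 20

interface Ethernet1/20
  channel-group 20 mode active

The server sees one LACP port-channel, but each Nexus peer is a separate switch that can be reloaded independently.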

Lower latency = better performance

Nexus switches are purpose-built for data center traffic and generally offer lower port-to-port latency than campus-oriented Catalyst platforms, which matters for east-west server-to-server traffic.

VeloCloud in AWS

After a few hours of troubleshooting, I found out that when using the 3.3 brownfield CloudFormation template, entering the VCO as an IP does not work. You must use the FQDN instead of the IP for the VCO. I also made sure to set the version to 331 instead of 321, and the instance type to c5.4xlarge. After the vEdge joins the orchestrator, you can then upgrade to a newer code version.

Cisco VXLAN troubleshooting

ERROR after you configure EVPN

No VLAN id configured, unable to generate auto RD

This happens because your NVE interface is down. Shut down the NVE source loopback and the NVE interface, then bring the loopback back up, followed by the NVE interface.
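In NX-OS terms, that recovery sequence looks like the following (loopback1 and nve1 are example interface names; use whichever loopback is your NVE source-interface):

interface loopback1
  shutdown
interface nve1
  shutdown
interface loopback1
  no shutdown
interface nve1
  no shutdown

The order matters: the loopback must be up before the NVE interface comes up so the VTEP source address is usable.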

Border leaf receiving advertisements from an external router and advertising them to the spine, but the spine is not advertising them to the other leafs. A review of the BGP L2VPN EVPN routing table indicates “Path type: internal, path is invalid(no RMAC or L3VNI), no labeled nexthop”.

Why is this happening? Because the L3VNI is not configured properly. On the border leaf, verify that you have the L3VNI VLAN defined, the VNI assigned to that VLAN, and the VLAN interface configured with the VRF and ip forward:


vlan 2500
  vn-segment 50000

vrf context PROD
  vni 50000
  rd auto
  address-family ipv4 unicast
    route-target both auto
    route-target both auto evpn
  address-family ipv6 unicast
    route-target both auto
    route-target both auto evpn

interface Vlan2500
  description L3VNI-SVI
  no shutdown
  mtu 9216
  vrf member PROD
  no ip redirects
  ip forward
  no ipv6 redirects

interface nve1
  no shutdown
  host-reachability protocol bgp
  source-interface loopback1
  member vni 50000 associate-vrf


If you're wondering why you can't get nodes past “QUEUED” in CML, it's because the images aren't loaded.

  1. Make sure your refplat-xxx-fcs file is mounted under the CD/DVD drive
  2. Log in as sysadmin at ip:9090
  3. Open a terminal and run sudo /usr/local/bin/

How to configure appliance mode on AWS Transit Gateway

To configure the TGW attachment in appliance mode, you must do it from the AWS CLI. Go to IAM, create an access key, and record the secret key. Then configure your AWS CLI client with these keys so it can authenticate. Once in the AWS CLI, enter the following:

aws ec2 modify-transit-gateway-vpc-attachment --options "ApplianceModeSupport=enable" --transit-gateway-attachment-id <YOUR TGW ATTACHMENT HERE> --region <YOUR REGION HERE>

Replace the <...> placeholders with your actual attachment ID and region.
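To confirm the change took effect (a hedged example using the same placeholders), you can describe the attachment and check the ApplianceModeSupport option in the output:

aws ec2 describe-transit-gateway-vpc-attachments --transit-gateway-attachment-ids <YOUR TGW ATTACHMENT HERE> --region <YOUR REGION HERE> --query "TransitGatewayVpcAttachments[0].Options.ApplianceModeSupport"

It should return "enable" once appliance mode is active, which keeps flows symmetric through a single appliance in the inspection VPC.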