Reasons to use Cisco Nexus instead of Cisco Catalyst in the Data Center

Time and time again I have customers wanting to understand the true benefit of a Cisco Nexus switch versus a Cisco Catalyst switch in the data center for connecting servers. Customers may argue that they just need simple 1G or 10G speeds, dual-homed with a port-channel, and that a stack of Catalyst 9300 or Catalyst 9500 switches gets them there. So here are a few reasons to ponder:

Code upgrade

If you have servers dual-homed for redundancy across two separate Cisco Catalyst switches, and those switches are stacked so you can run a port-channel across them, that sounds fine and dandy until it comes time to upgrade the code, and there will be a time you have to upgrade. The stack shares a single control plane, so the whole stack must be reloaded, resulting in an outage to your servers. This is not the case with Cisco Nexus: with vPC, the two switches run independent control planes, so you can upgrade one peer at a time while the server keeps forwarding over its remaining link.
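For context, here is a minimal vPC sketch; the domain ID, VLAN, port-channel numbers, and keepalive addresses are made-up placeholders, and the same configuration (with the keepalive addresses mirrored) goes on both peers:

feature lacp
feature vpc

vlan 100

vpc domain 10
  peer-keepalive destination 10.0.0.2 source 10.0.0.1

interface port-channel1
  switchport
  switchport mode trunk
  vpc peer-link

interface port-channel20
  switchport
  switchport access vlan 100
  vpc 20

interface Ethernet1/10
  switchport
  switchport access vlan 100
  channel-group 20 mode active

Because each vPC peer is an independent switch with its own control plane, you can reload or upgrade one peer while port-channel 20 stays up through the other, which a single logical stack cannot do.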

Lower latency = better performance

Nexus data-center platforms are also built for lower port-to-port latency (many support cut-through switching), which pays off for latency-sensitive east-west server traffic in a way campus-focused Catalyst switches were never designed for.

VeloCloud in AWS

After a few hours of troubleshooting, I found out that when using the 3.3 brownfield CloudFormation template, entering the VCO as an IP does not work. You must use the FQDN instead of the IP for the VCO. I also made sure to set the version to 331 instead of 321, with an instance type of c5.4xlarge. After the vEdge joins the orchestrator, you can upgrade it to newer code.
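For reference, launching the stack from the AWS CLI looks roughly like this; the template file name and the parameter keys (VCO, SoftwareVersion, InstanceType) are hypothetical placeholders, so pull the real keys from the actual 3.3 brownfield template:

aws cloudformation create-stack \
  --stack-name vedge-brownfield \
  --template-body file://velocloud-3.3-brownfield.yaml \
  --parameters \
    ParameterKey=VCO,ParameterValue=vco.example.com \
    ParameterKey=SoftwareVersion,ParameterValue=331 \
    ParameterKey=InstanceType,ParameterValue=c5.4xlarge

The point being that the VCO value is the FQDN (vco.example.com here), never the raw IP.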

CMLv2 Node QUEUED

If you're wondering why you can't get the nodes past "QUEUED" in CML, it's because the images aren't loaded.

  1. Make sure your refplat-xxx-fcs file is mounted under the VM's CD/DVD drive.
  2. Log in as sysadmin to the Cockpit system administration UI at ip:9090.
  3. Open up a terminal and run sudo /usr/local/bin/copy-refplat-iso-to-disk.sh
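If the script complains that it can't find the ISO, you can sanity-check from that same terminal that the virtual CD/DVD is actually attached; /dev/sr0 as the device name is an assumption about how your hypervisor maps the drive:

lsblk -f

The refplat ISO should show up as an sr0 device with an iso9660 filesystem; if it doesn't, re-check the CD/DVD mapping in the hypervisor before re-running the copy script.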

PAN and BFD

Setting up a BFD session between a Palo Alto firewall and a Cisco ACI leaf or general Nexus switch

If Device A (e.g., a Palo Alto) does not support BFD Echo and only supports BFD Control packets, Device B (e.g., a Cisco switch) will not use BFD Echo either; the session falls back to Control packets only. In that mode, the detection (hold-down) time is the higher of the two peers' transmit intervals multiplied by the multiplier. Without BFD Echo, that hold-down time is how long a BFD peer waits without hearing a Control packet before declaring the session down. For example, at 900 ms intervals with a multiplier of 3, the session drops after 2.7 seconds of silence.
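As a minimal sketch of the Nexus side, assuming Ethernet1/1 faces the firewall and that the addresses and AS numbers (10.1.1.0/30, 65001/65002) are made-up values:

feature bfd
feature bgp

interface Ethernet1/1
  no switchport
  ip address 10.1.1.1/30
  no ip redirects
  bfd interval 900 min_rx 900 multiplier 3

router bgp 65001
  neighbor 10.1.1.2
    remote-as 65002
    bfd

These timers match the 900 x 3 settings that proved stable in the testing described below.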

Another consideration is that, depending on the Palo Alto model, heavy control-plane (CPU) traffic will affect BFD and may tear down your adjacency/peering.

I have tested 16 eBGP peers on a Palo Alto PA-3220 connected to ACI leaf-A and 16 more eBGP peers on the same Palo Alto connected to ACI leaf-B. With BFD timers anything below 900 x 3, a reload of ACI leaf-A or leaf-B would cause the Palo Alto to randomly bring down eBGP neighbors on ACI leaf-B, even though nothing was wrong between the PAN and ACI leaf-B. BFD tore down because of a control-plane spike, presumably because the PAN processes BFD in software. The only acceptable timers were 900 x 3; anything lower and the Palo Alto would tear down BFD, which brought down the eBGP peering.
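When experimenting with timers, the intervals and multiplier actually negotiated (and whether echo is in use) can be confirmed from the Nexus/ACI side with the standard NX-OS command:

show bfd neighbors details

The output lists the local and remote TX/RX intervals in effect, which is what the detection-time math above is based on.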

Check08 – FPGA/BIOS out of sync test [FAIL] Error: Can't find firmwareCardRunning Mo


You may hit this if show discoveryissues on the switch returns the error above, or if the APIC reports "FPGA version mismatch detected. Running version: 0x10 Expected version: 0x11".

Here is the fix:

# dir bootflash:

Copy the .bin image file name from the output, then point the boot variables at it and reload:

# setup-bootvars.sh <code-version>
# reload

After the reload, run the commands below.

# /bin/check-fpga.sh FpGaDoWnGrAdE
# /usr/sbin/chassis-power-cycle.sh

Note that the second script power-cycles the switch, so expect it to go down and come back up one more time.
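Once the switch comes back from the power-cycle, re-run the same check that flagged the problem to confirm the FPGA/BIOS test now passes:

# show discoveryissues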