By default, most Cisco routing platforms do not advertise a network prefix to an eBGP peer whose AS number already appears in that prefix's AS_PATH attribute. This is a sender-side loop prevention mechanism, and the knob that turns the check off is known as disable-peer-as-check.
Recently I deployed a number of Fortinet FortiGate firewalls within Cisco ACI and noticed that FortiGates (running FortiOS 5.6) do not have any option for disabling this check. I was actually surprised, because even Palo Alto firewalls have this as an option: in Palo Alto it is known as “Enable Sender Side Loop Detection” and it is found directly under the neighbor configuration. When you deploy a lot of VRFs with BGP in ACI, this is an option you need to be aware of.
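For reference, this is how the check is relaxed per neighbor on a Cisco NX-OS device. A minimal sketch, assuming placeholder AS numbers and a peer address of 10.0.0.2 (substitute your own values):

```
router bgp 65001
  neighbor 10.0.0.2
    remote-as 65002
    address-family ipv4 unicast
      disable-peer-as-check
```

With this configured, the leaf will advertise prefixes to the peer even when the peer's AS is already present in the AS_PATH, which is exactly the behavior you need when the same firewall AS appears behind multiple VRFs.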
After trying to join APICs from a secondary POD to the cluster, I got the “APIC Data Layer Partially Diverged” error:
In my case this was because I had the Contract Viewer app installed on the primary APICs. When the secondary-site APICs tried to join the cluster, the third-party app caused this error. I had to remove the app, let the secondary POD APICs join properly, and then reinstall it.
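If you hit this, the cluster and data-layer state can be checked from the APIC CLI before removing the app. A sketch using the built-in acidiag diagnostic tool (the exact output fields vary by release):

```
apic1# acidiag avread    # cluster membership and overall appliance health
apic1# acidiag rvread    # data-layer replica status per service/shard
```

A partially diverged data layer shows up here as replicas that are not fully in sync across the cluster, which helps confirm whether the divergence clears after the offending app is removed.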
Recently I deployed a Cisco ACI 4.2 Multi-Pod fabric, or rather, I added a secondary POD to an existing Cisco ACI 4.2 deployment. One of the things I noticed is that you now have to click Add a Pod under Fabric Inventory, and it is all wizard-based, which is a pain, especially if you already have GOLF configured and you want to use the same Multi-Pod L3OUT.
If you want to leverage GOLF on the same interfaces Multi-Pod is using, you cannot use the wizard. You have to manually specify the POD2 spine interfaces and pod information under the GOLF L3OUT as well as in the Fabric Access Policies, and assign the interface profiles to the spine switch profiles.
Why is this secondary POD Spine information important? Because if you do not enter the secondary POD Spine information into the L3OUT and the Fabric External Connection Policies, then when you register the Spine under Fabric Membership, the switch will be stuck in the Discovering stage and will never be assigned an Infra TEP IP. It will look like it is working, but the spine will remain stuck in Discovery. If you troubleshoot, you will notice discovery issues on the switch: the APIC never issues a DHCP IP and the spine stays stuck in discovery.
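As a sketch of what the manual configuration touches, the secondary POD spines are added as nodes under the existing GOLF/infra L3OUT logical node profile. The L3OUT name, node IDs, and router IDs below are placeholders for illustration, not values from this deployment:

```
<!-- Hypothetical example: l3out name, node IDs 2101/2102, and router IDs are placeholders -->
<l3extOut name="multipod-golf-l3out">
  <l3extLNodeP name="pod2-spine-nodes">
    <l3extRsNodeL3OutAtt tDn="topology/pod-2/node-2101" rtrId="10.2.0.1"/>
    <l3extRsNodeL3OutAtt tDn="topology/pod-2/node-2102" rtrId="10.2.0.2"/>
  </l3extLNodeP>
</l3extOut>
```

The key detail is the pod-2 path in the tDn: until the new spines are referenced under pod-2 here and in the Fabric External Connection Policies, the fabric has no reason to hand them an Infra TEP.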
Hopefully Cisco will make adding a POD to an existing GOLF configuration easier.
There are multiple scenarios where DHCP is a requirement in the data center; VDI is one major use case among others. VDI clients request DHCP IP addresses from local data center DHCP servers, and a large percentage of data centers leverage Windows Server for DHCP. With Cisco ACI, DHCP Option 82 must additionally be processed in order to properly assign DHCP IP addresses. Legacy Windows DHCP servers such as Server 2003, 2008, and 2012 have caveats around supporting Cisco ACI fabric networks; newer releases such as Windows Server 2016 or 2019 are needed in order to fully support ACI network topologies.
Option 82 is the DHCP Relay Agent Information option. It provides additional context in the DHCP request, such as the physical interface and VLAN ID where the client resides behind the DHCP relay gateway, the TEP address of the relay gateway (the leaf switch), the VRF name/VPN ID, the server ID, and link-selection information. These sub-options of Option 82 carry information that is critical to assigning IP addresses to the correct location on the Cisco ACI fabric. Keep in mind that ACI has tenants, VRFs, and bridge domains, which can all have overlapping constructs and are not as straightforward as traditional networks. There are also new concepts such as pervasive gateways, where the default gateway IP and MAC can exist on all of the leaf switches. All of this has to be factored in for DHCP in VXLAN networks.
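To make the sub-option structure concrete, here is a minimal Python sketch that walks the TLV-encoded sub-options of a Relay Agent Information option. The sub-option codes come from RFC 3046 and its extensions (link selection from RFC 3527, server ID override from RFC 5107, VSS from RFC 6607); the sample bytes at the bottom are invented for illustration:

```python
# Minimal sketch: decode the sub-options of DHCP Option 82 (Relay Agent
# Information, RFC 3046). Each sub-option is a simple TLV: code, length, value.
import ipaddress

SUBOPTION_NAMES = {
    1: "circuit-id",            # RFC 3046: interface/VLAN where the client sits
    2: "remote-id",             # RFC 3046: identifies the relay agent
    5: "link-selection",        # RFC 3527: subnet the address should come from
    11: "server-id-override",   # RFC 5107
    151: "vss",                 # RFC 6607: virtual subnet selection (VRF/VPN)
}

def parse_option82(data: bytes) -> dict:
    """Walk the TLV-encoded sub-options and return {name: value}."""
    out, i = {}, 0
    while i + 2 <= len(data):
        code, length = data[i], data[i + 1]
        value = data[i + 2 : i + 2 + length]
        name = SUBOPTION_NAMES.get(code, f"suboption-{code}")
        if code in (5, 11) and length == 4:      # these carry IPv4 addresses
            out[name] = str(ipaddress.IPv4Address(value))
        else:
            out[name] = value                    # opaque bytes otherwise
        i += 2 + length
    return out

# Example: link-selection pointing at subnet 10.1.1.0, plus a VSS sub-option
# carrying an ASCII VRF name (type byte 0x00 = NVT ASCII per RFC 6607).
raw = bytes([5, 4, 10, 1, 1, 0]) + bytes([151, 5]) + b"\x00PROD"
print(parse_option82(raw))
```

The link-selection sub-option is the one that makes or breaks ACI deployments: it tells the DHCP server which bridge domain subnet to allocate from, regardless of which leaf TEP relayed the request, and it is exactly this sub-option that older Windows DHCP servers do not honor.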
A few years back, when I was learning Cisco ACI, I read through a lot of white papers and configuration guides and banged my head against the wall trying to sort through all the information, which I must admit was challenging. I realized that since this was still a fairly new technology, the information I sought was either not readily available on the Internet or simply hard to find. I would have to spend the time researching the topics myself and, with hands-on experience, develop the standards and best practices that worked best for my customer deployments.
As I deployed more and more ACI fabrics, I realized that a poor design of your foundational building blocks, namely Fabric Access Policies, makes it more difficult to manage the ACI environment and to migrate it later to a hybrid or application-centric policy model. Unfortunately, ACI has many relationships between constructs, and renaming objects is very difficult and in many cases impossible. Organizing and naming ACI constructs properly from the beginning, at the foundational level, is essential to a simpler and more scalable architecture.
In my experience, proper ACI construct design begins with Cisco ACI Fabric Access Policies. These consist of physical domains, VLAN pools, AEPs, interface policy groups, interface selectors, interface profiles, switch policies, etc. These are the constructs used to attach endpoints to the Cisco ACI fabric.
During the build of the Cisco ACI fabric, it is very important to design the Fabric Access Policies properly for ease of management, simplified troubleshooting, and, more importantly, scalability. These constructs are heavily utilized in ACI Tenant policies and must be organized and named accordingly.