Cisco ACI lets you integrate physical and virtual workloads in a programmable, multi-hypervisor fabric for building a multiservice or cloud data centre. The ACI fabric comprises discrete components that operate as switches and routers, but it is provisioned and monitored as a single entity. Below are five easy steps for understanding the basic technical aspects of Cisco ACI.
When discussing a new architecture like ACI, it helps to draw parallels to familiar network infrastructure. Many customers today run the NEXUS 7K with VDCs (Virtual Device Contexts). This technology lets you take one large physical NEXUS 7K switch and carve out portions of the port ASICs, memory, and CPU from different line cards into smaller logical, or virtual, switches.
ACI Tenants can be understood through this same NEXUS VDC concept. For example, suppose we create two VDCs, Tenant_A and Tenant_B, each using a portion of the parent switch's CPU and memory. Suppose we also have a 12-port NEXUS line card and allocate ports 1/1 to 1/8 to the Tenant_A VDC and ports 1/9 to 1/12 to the Tenant_B VDC.
RBAC (Role-Based Access Control) can then be used to control which users log into which virtual switch. A user who logs into Tenant_A sees only ports 1/1 to 1/8, and a user who logs into Tenant_B sees only ports 1/9 to 1/12. These ports can be configured as L2 or L3 ports using the complete functionality of NX-OS.
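As a rough illustration, the VDC-to-port mapping above can be modelled in a few lines of Python. This is an illustrative model only, not real NX-OS or ACI code, and the function name is hypothetical:

```python
# Illustrative model: how VDC port allocation restricts what each
# tenant's operators can see after logging in (RBAC-style visibility).
vdc_allocation = {
    "Tenant_A": [f"1/{p}" for p in range(1, 9)],   # ports 1/1 to 1/8
    "Tenant_B": [f"1/{p}" for p in range(9, 13)],  # ports 1/9 to 1/12
}

def visible_ports(vdc_name: str) -> list[str]:
    """An operator logged into a VDC sees only that VDC's allocated ports."""
    return vdc_allocation.get(vdc_name, [])

print(visible_ports("Tenant_A"))  # → ['1/1', '1/2', ..., '1/8']
```

The point of the model is simply that allocation happens once, centrally, and every later lookup is scoped to the tenant.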
The next step is how we interact with these ports and virtual switches when configuring them. As with any other standard network device, you log into the device (or supervisor) and apply user-friendly, human-readable configuration for Layer 3 routing, VLANs, and so on. The supervisor then translates that configuration into binary microcode that programs the various ASICs, enabling the device to manipulate packet forwarding.
This is how stand-alone network devices work today. Programming the ASICs directly with actual microcode is technically possible, but only OEM developers do so.
When ACI was created, the overall architectural goal was a 'big switch' that could be carved up logically into smaller switches, much like VDCs. In ACI, the APIC replaces the supervisor role of the NEXUS 7K.
The fabric modules of the NEXUS 7K are replaced by spine switches that connect the leaf switches. Finally, the leaf switches take the place of the NEXUS 7K line cards and their ASICs, and their ports can be allocated to different Tenants just as line-card ports are allocated to VDCs.
Looking at the ACI backplane now shows the similarity between ACI's 'giant switch' and a single NEXUS 7K with Virtual Device Contexts (VDCs). Following the VDC example above, we could allocate ports 1/1 to 1/4 of Leaf 101 and ports 1/5 to 1/8 of Leaf 102 to Tenant_A, and ports 1/9 to 1/12 of Leaf 103 to Tenant_B. (A real-world design would map the ports more consistently.) A Tenant and the connectivity inside it can span hundreds of switches in a data centre, and even multiple data centres and public clouds.
The APICs likewise translate human-readable, user-friendly configuration into microcode that programs the leafs and spines, just as a single NEXUS 7K programs its ASICs and fabric modules. This is a critical point for network engineers to understand, as many feel they are giving up control of their infrastructure.
In reality, instead of configuring things box by box, we create one large switch, log into it, and let the APICs program it by generating the microcode. This is no different from the single-NEXUS step above: letting a NEXUS 7K convert NX-OS command-line configuration into microcode applied to the switch and its VDCs is no different from the APIC performing that same function for hundreds of ACI Tenants, leafs, and spines.
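To make "human-readable configuration" concrete: in the APIC REST API, a tenant is expressed as an fvTenant managed object posted to the APIC. The sketch below only builds that JSON payload; the APIC address is a hypothetical placeholder, and the authentication and posting steps are noted in comments rather than performed:

```python
import json

APIC_URL = "https://apic.example.com"  # hypothetical APIC address

def tenant_payload(name: str) -> dict:
    # The APIC object model expresses a tenant as an fvTenant managed object.
    # POSTing this JSON to /api/mo/uni.json creates (or updates) the tenant;
    # the APIC then compiles it into microcode for the leafs and spines.
    return {"fvTenant": {"attributes": {"name": name, "status": "created,modified"}}}

payload = tenant_payload("Tenant_A")
print(json.dumps(payload))
# A real deployment would first authenticate (POST /api/aaaLogin.json) and
# then POST this payload with an HTTP client; both steps are omitted here.
```

Note how little the operator describes: a name and intent, not per-switch commands.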
Now that we understand how the APIC translates configuration into microcode to program the 'giant switch,' let's look at how the ACI infrastructure assembles itself. The first step is to bootstrap the APICs, configuring each one via the console and out-of-band (OOB) management.
The leaf switches are discovered via LLDP by the APICs to which they are attached; those leafs are registered, given a DHCP address by the APIC, and become part of the fabric. The spines are then discovered by the leaf switches, given DHCP addresses, and become part of the fabric. Finally, the remaining leafs (those not attached to any APIC) are discovered by the spines, registered, given a DHCP address, and become part of the 'giant switch.'
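The wave-by-wave discovery above can be sketched as a breadth-first walk outward from the APIC-attached leafs. This is an illustrative model only (real discovery uses LLDP and DHCP), and the topology below is hypothetical:

```python
from collections import deque

def discovery_order(neighbors: dict[str, list[str]], apic_attached: list[str]) -> list[str]:
    """Model fabric bring-up: nodes join in waves outward from the
    APIC-attached leafs (leafs -> spines -> remaining leafs)."""
    order, seen, queue = [], set(apic_attached), deque(apic_attached)
    while queue:
        node = queue.popleft()
        order.append(node)                     # node registers, gets a DHCP address
        for peer in neighbors.get(node, []):   # node's LLDP neighbours come next
            if peer not in seen:
                seen.add(peer)
                queue.append(peer)
    return order

# Hypothetical fabric: three leafs, two spines, APIC attached to Leaf101.
fabric = {
    "Leaf101": ["Spine1", "Spine2"],
    "Leaf102": ["Spine1", "Spine2"],
    "Leaf103": ["Spine1", "Spine2"],
    "Spine1": ["Leaf101", "Leaf102", "Leaf103"],
    "Spine2": ["Leaf101", "Leaf102", "Leaf103"],
}
print(discovery_order(fabric, ["Leaf101"]))
# → ['Leaf101', 'Spine1', 'Spine2', 'Leaf102', 'Leaf103']
```

The APIC-attached leaf joins first, the spines it uncovers join next, and the remote leafs join last, exactly the ordering described above.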
A good analogy is inserting a line card into a NEXUS 7K: registering a leaf is like being prompted to confirm whether the new line card should be added, and registering a spine is like adding a fabric module. All of this happens under the covers, just as it does when a fabric module or line card is inserted into a NEXUS 7K.
After the fabric is discovered and registered, the fabric turns on IS-IS routing to create ECMP (Equal-Cost Multi-Path) routes between the DHCP-assigned addresses. The IS-IS routing is pre-configured and tuned for fast convergence.
This ECMP routing of the loopback (DHCP-assigned) addresses is what is known as the underlay network. To create the overlay network, VXLAN encapsulation is used for connectivity between leaf switches and their ports. The VXLAN Tunnel Endpoints are bound to the loopbacks, which are reachable via the IS-IS underlay's ECMP routing.
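The VXLAN encapsulation itself is standardised in RFC 7348: an 8-byte header carrying a 24-bit VNI (VXLAN Network Identifier) that keeps each tenant's traffic separate across the shared underlay. A minimal Python sketch of building that header:

```python
import struct

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header (RFC 7348): flags byte with the
    'valid VNI' bit set, 3 reserved bytes, 24-bit VNI, 1 reserved byte."""
    flags = 0x08 << 24            # bit 3 of the first byte = I (valid VNI) flag
    return struct.pack("!II", flags, vni << 8)

hdr = vxlan_header(10000)         # e.g. a VNI representing one tenant segment
print(len(hdr), hex(hdr[0]))      # → 8 0x8
```

The fabric wraps each tenant frame in this header plus an outer UDP/IP packet addressed between the loopback tunnel endpoints, which is why the underlay only ever needs to route loopback-to-loopback.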
Just as we don't create the connectivity between fabric modules, ASICs, supervisors, and the backplane in a NEXUS 7K, we let ACI handle this for us as well. We simply create connectivity policies on the APIC to allow the various ports and leafs to communicate over L2 and L3, and the APIC pushes the microcode, just like a NEXUS 7K supervisor.
Talk to our experts!
If you want to learn more about the power of Cisco ACI, or how digital transformation can help you accelerate your outcomes, click on the button below: