Appliance Ports in Cisco UCS

So, I recently had a customer who wanted to enable “Jumbo Frames” on a UCS server that had the Cisco Virtual Interface Card (VIC) installed in it (this also applies to the Palo/M81KR, VIC-1240, and VIC-1280). You might also know this process as “maximizing the MTU”. In this particular situation, the customer had an iSCSI appliance directly connected to the fabric interconnects (Whiptail in this case, which is not officially supported by Cisco as of this writing, but the process is the same for any iSCSI appliance – supported or unsupported). It’s not the first time this has come up, so I thought I’d write it down so that everyone can benefit (including me, when I forget how I did all of this). This article will be helpful if you’re using any IP-based storage such as NFS, CIFS/SMB, or iSCSI.

In Cisco UCS, we support certain storage appliances directly connected to the fabric interconnect via an Appliance Port (see supported arrays here: http://bit.ly/teL5Pb). The Appliance Port type was introduced in the 1.4(1) release of UCS Manager. Prior to that release, you had to put UCS in “switch mode” to attach an appliance directly to it.

As a side note, appliance ports share some characteristics with server ports: they learn MAC addresses and they have an uplink (border) port assigned to them (static or dynamic). Because they learn MAC addresses like any normal switch port does, as soon as a device is “heard”, its MAC is considered “learned” and the switch no longer needs to flood packets destined for that MAC to all ports. This is good to know because when configuring an Appliance Port, you are given the opportunity to add an “Ethernet Target Endpoint” – which is the MAC address of the connected appliance. This is an optional field and is not required for most appliances (since they broadcast their MAC address when connected), but if you have an appliance that doesn’t, or if you have connectivity issues, you should enter the MAC in this field (see Figure 2 below). It should also be noted that appliance ports do not support connected switches – at all. The port will shut down when it detects a switch on the far side.
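
If you ever want to confirm that the fabric interconnect has actually learned the appliance’s MAC, you can check from the FI’s NX-OS shell. Here is a quick read-only example; “UCS-A” and Ethernet 1/20 are just placeholders from my lab, so substitute your own FI and appliance port:

UCS-A# connect nxos a
UCS-A(nxos)# show mac address-table interface ethernet 1/20

If the appliance’s MAC shows up in that output, the FI has learned it and you should not need to fill in the Ethernet Target Endpoint field.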

So, let’s dig right in and set up an appliance port, step by step.

  1. QoS Configuration
    1. This step is only needed if the appliance uses a protocol or file system that benefits from a large MTU (Jumbo Frames), such as iSCSI or NFS. If your storage does not, you can move on to Step 2 and leave the MTU at the default Ethernet size of 1500 bytes.
    2. Go to the LAN tab, expand Policies, and look for the QoS Policies item.
    3. Right click “QoS Policies” and choose Create QoS Policy.
    4. Give it a name.
    5. Select an unused Priority from Bronze, Silver, Gold, or Platinum. I will use Bronze (most likely none of these are in use on your system, but make sure).
    6. Click OK to save the changes to the new policy.
    7. While still on the LAN tab, select “QoS System Class” under the “LAN Cloud” item.
    8. You will see a dialog similar to Figure 1.

    Figure 1

    9. Change the MTU from normal to 9000 (again, I used Bronze).
    10. Check the Enabled box for Bronze.
    11. Click the Save Changes button.
    12. Set the MTU on the storage appliance to 9000.

I should mention here that some storage appliances will allow an MTU of 9216, and if so, you can choose that so long as you set the MTU of the priority class in UCS to 9216 as well. We won’t get into it in this article, but if the appliance is not directly attached and is instead somewhere north of the fabric interconnects, you would need to match the MTU of the priority class in UCS to the MTU of the switch port connected to the fabric interconnect. That scenario would not involve Appliance Ports, though, as they require the array to be directly connected.
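
For those who prefer the command line, here is a rough UCS Manager CLI equivalent of the QoS System Class portion of Step 1. I am writing this from memory, so double-check the exact scope and command names against the UCS CLI Configuration Guide for your release; “UCS-A” is just my lab FI, and Bronze/9000 match the GUI example above:

UCS-A# scope eth-server
UCS-A /eth-server # scope qos
UCS-A /eth-server/qos # scope eth-classified bronze
UCS-A /eth-server/qos/eth-classified # enable
UCS-A /eth-server/qos/eth-classified # set mtu 9000
UCS-A /eth-server/qos/eth-classified # commit-buffer

Nothing takes effect until the commit-buffer at the end.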

If the appliance is plugged into both fabric interconnects (and it should be, unless this is a lab like mine), you will need to repeat Step 2 below on the opposite fabric interconnect. My suggestion is to get one side working at a time.

 

  2. Appliance Port Configuration
    1. On the Equipment tab, select the fabric interconnect that the appliance is plugged into.
    2. Right-click the correct port from the unconfigured Ethernet ports and select “Configure as Appliance Port” (see Figure 2).
    3. If you’re using iSCSI storage AND you want to maximize the MTU, choose the same Priority class from the drop-down menu that you configured in Step 1 above (we used Bronze in my example). Otherwise, just continue.
    4. Don’t use Pin Groups unless you’re very familiar with how they work and why you’re doing it.
    5. There should not be any need for a Network Control Policy here in this example.
    6. Select the correct port speed based on the speed of your storage appliance.
    7. Decide if the port is a trunk or access port.

      Note: I should mention that if this is a trunk port connected to the appliance, the VLANs you enter here are not stored in the standard VLAN database that the fabric interconnect uses for server and uplink traffic. You can see this on the LAN tab: there are VLANs under the Appliances cloud as well as the LAN cloud, and these entries are separate from one another. So, if you are trying to put the appliance on an existing VLAN already configured on your servers, you will need to create an identical appliance VLAN for the appliance port using the same VLAN ID you use for the server’s vNIC (alternatively, you could create this VLAN ahead of time on the LAN tab under Appliances).

    8. Click OK to save the changes. (For CLI fans, a rough equivalent of this step is sketched below, after Figure 2.)

       

    Optional: As explained above, if you know the MAC address of the appliance, you can enter it in the Ethernet Target Endpoint field of this dialog box (this simply pre-populates the address into the MAC address table on the fabric interconnect). Most appliances will broadcast their MAC when they are connected, but it will not hurt to enter it here. My suggestion is to leave the endpoint blank and revisit it if you cannot get things working.

Figure 2
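
As promised above, here is roughly what Step 2 looks like from the UCS Manager CLI, creating Ethernet 1/20 on Fabric A as a trunked appliance port. This is from memory and I have not re-verified it against the CLI Configuration Guide for your release, so treat the scope and command names as an approximation and check the guide before relying on it:

UCS-A# scope eth-storage
UCS-A /eth-storage # scope fabric a
UCS-A /eth-storage/fabric # create interface 1 20
UCS-A /eth-storage/fabric/interface # set portmode trunk
UCS-A /eth-storage/fabric/interface # commit-buffer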

 

At this point, assuming you have configured the uplink and appliance VLANs correctly, and that you have enabled the chosen storage VLAN within your northbound infrastructure, you should be able to ping the appliance IP address from a workstation outside the UCS system. If you cannot, check the VLAN configuration for both the LAN Cloud and the Appliances Cloud (both on the LAN tab) as well as the MTU size on the appliance port and on the appliance itself. If your intention is only to make the storage available within the UCS system, it may not be reachable from outside systems at all, since the VLAN may not be carried outside the fabric interconnects.
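
If you want to rule the fabric interconnect in or out while you troubleshoot, a couple of read-only NX-OS commands are handy (again, “UCS-A” and Ethernet 1/20 are placeholders for your FI and appliance port):

UCS-A# connect nxos a
UCS-A(nxos)# show interface ethernet 1/20 trunk
UCS-A(nxos)# show queuing interface ethernet 1/20

The first shows whether the port is up and which VLANs are actually being trunked on it; the second shows the MTU programmed per QoS class. Don’t be alarmed if plain “show interface” still reports an MTU of 1500 – in my experience the fabric interconnect applies jumbo MTU per class, not per port, so the queuing output is the one to trust.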

It’s now time to set up the VIC-enabled servers to access the storage. The plan is to add a vNIC on the server that is in the same VLAN as the storage. If using iSCSI, you need to match the MTU size of the vNIC to the size already configured in Step 1. The steps are as follows:

  3. Server Configuration (this is disruptive and will reboot the server). See Figure 3.
    1. Locate the desired Service Profile on the Servers tab.
    2. Right-click the vNICs item and choose “Create vNIC”.
    3. Name the vNIC (e.g., “eth2”).
    4. Fill in the dialog with all required information such as the MAC pool, the correct Fabric, and the VLAN that matches the appliance port created in Step 2 above.
    5. Be sure to select the appropriate VLAN that matches the appliance.
    6. If using iSCSI and maximizing MTU, change the MTU of the vNIC to 9000 (the vNIC MTU maximum is 9000, not 9216).
    7. If using iSCSI and maximizing MTU, select the QoS Policy you defined in Step 1.
    8. Click OK to save the changes.

Figure 3

TIP: If this procedure were being done on an HP, IBM, or Dell server, you would have the additional step of going into the OS to set the MTU to match (9000). Depending on the OS/Hypervisor, this involves a registry hack, ifconfig, esxcfg-vswitch, or setting the MTU manually within the Windows adapter properties, and it would be required on every server that plans to use the iSCSI appliance. With UCS and the Cisco VIC, it involves none of these because Cisco has strong integration with the OS/Hypervisor, and the VIC driver will inform the installed OS/Hypervisor of the new MTU size automatically. Whatever MTU size you designate on the vNIC itself will be used by the OS/Hypervisor. So, in this case, once you create the storage vNIC and reboot the server, the MTU size will already be at the correct value of 9000. How cool is that? Regardless of what OS you install, you don’t have to worry about finding the right command to set the MTU!
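
For reference, the OS-side commands on a non-VIC adapter would look something like the lines below. The interface names (“Local Area Connection”, eth2, vSwitch1) are just examples from my lab, so substitute your own, and on ESX remember that the VMkernel port needs its MTU raised as well, not just the vSwitch:

Windows:

netsh interface ipv4 set subinterface "Local Area Connection" mtu=9000 store=persistent

Linux:

ifconfig eth2 mtu 9000

ESX:

esxcfg-vswitch -m 9000 vSwitch1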

The designated MTU can be verified as follows:

Windows:

netsh interface ipv4 show subinterfaces

Linux:

ifconfig

ESX:

esxcfg-vswitch -l
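
Depending on your versions, there are a couple of alternatives (interface names are again just examples): on newer Linux distributions, “ip link show eth2” also displays the MTU, and on ESXi 5.x, “esxcli network ip interface list” shows the MTU of each VMkernel interface.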

If you would like to test the end-to-end MTU, there is an easy process for that as well:

Windows:

ping -f -l 8000 <storage appliance ip address>

If you get replies, it’s working. If you get “Packet needs to be fragmented but DF set”, Windows is telling you that the packet is too large to pass through and that you specifically told ping not to fragment it.

Linux:

ping -M do -s 8000 <storage appliance ip address>

ESX:

vmkping -s 8000 <storage appliance ip address>
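
One note on the ESX test: vmkping does not set the don’t-fragment bit by default, so an oversized packet can be silently fragmented and the test will still “pass”. If your build supports it, add -d to make it as strict as the Windows and Linux versions:

vmkping -d -s 8000 <storage appliance ip address>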

I use 8000 to keep it common between Windows, Linux, and ESX. The largest payload you can send over a 9000-byte MTU is 8972 bytes, which becomes a 9000-byte packet once you add the 20-byte IP header and the 8-byte ICMP header (28 bytes total). Some OS’s accept a parameter of 9000 and others max out at 8972, but 8000 works on all of them and demonstrates that the MTU is most likely working as you expect it to.

Note: This article is written to the lowest common denominator. It does not involve LAN Pin Groups or vNIC Templates. Those are topics that I hope to write articles on in the future, but are covered well in our product documentation.

As always, thanks for stopping by.

-Jeff

P.S. If you’re using a different CNA than the Cisco VIC and would like specific direction on setting the MTU for it (inside the OS/Hypervisor), drop me a note below and let me know which one. I’ll do my best to dig up the instructions.

13 thoughts on “Appliance Ports in Cisco UCS”

  1. I actually went through a setup like this recently with an EMC VNX 5300 (iSCSI), and vSphere 5. I did notice something a little weird though. I set up two vNICs in each Service Profile to be used for iSCSI traffic, one vNIC on Fabric A and the other Fabric B (no failover enabled). I noticed that I had to add each SP port’s IP address in ESXi (instead of 1 in the Dynamic Discovery) for it to be able to see all the paths to the storage. I suspected this was because all initiators and SP ports were in the same VLAN, and ESXi was trying to use one VMkernel port to find all of them (each vmkernel port could only see 2 SP ports). Thoughts?

  2. I tried all of the steps in this post but ended up having a configuration failure. Currently I am trying to test the configuration using the UCS manager simulator.

    The description of error that I get is provided in faults tab as:

    “Service profile 11 configuration failed due to connection-placement,insufficient-resources,vnic-capacity”

    I have tried to do this using the following NICs:
    1. Cisco UCS NIC M51KR-B Broadcom BCM57711 Network Adapter
    2. Cisco UCS M81KR Virtual Interface Card (VIC)

    Would you have any clue on what could have gone wrong?

    • If you are using the M81KR, I’m not sure why this happened. I would create a new service profile for that blade with the M81KR and try it again. Do not change any of the default screens beyond selecting the pools (which are required for M81KR).

  3. Pingback: Cisco UCS MTU Sizing with VIC | Jeff Said So

  4. Jeff:

    Tomorrow I will raise this question with Cisco TAC – why does Cisco release firmware like 2.0(3a) with bugs like these? I wish I still had you to accompany us on our continued adventures on the UCS edge.

    Cheers.

    JM

  5. Pingback: UCS Appliance Ports : Or in my case UCS NetApp ports | FlexPod.org

  6. Hey Jean – we need to have lunch. Let’s try that mom & pop home cooking place we talked about downtown. Can’t remember the name but I know you will. I wish I could say that we’re bug free – every company does. But I would love to talk more specifically with you about what you’re facing and how we might make things easier going forward.

  7. Pingback: Cisco UCS Appliance Ports for NFS Storage | Wahl Network

  8. Do you also have a document on how to configure dynamic vNICs (VM-FEX)?
    In more detail, how to configure/change the dynamic vNIC MTU size.

  9. This did not work for us. I lost all iSCSI connectivity to my array following these instructions. What did work was all of the above plus setting the “Bronze” and “Best Effort” QoS classes to 9216. With Bronze alone it was a no go with this exact config. My consultant says that the inbound traffic from the storage must tag the traffic; if it does not, the switch will not accept it. Setting the default allows any large-MTU traffic to enter the fabric. Of course we would not need the Bronze setting at that point.

  10. This is almost working for me. I am running ESXi on the UCS blades, and running a Windows server with the MTU set to 9000. The LAN QoS policy on Best Effort is set to 9216, the service policies for the NICs have the MTU set to 9000, and the VMware virtual switches have MTUs set to 9000.

    When I ping from the attached Nexus 5500, I am limited to an MTU size of 8976. I am thinking it is some kind of VLAN header on the VMware virtual switch that is limiting the packet size. It would seem that the UCS NICs would have to have a higher MTU than 9000 to allow for VLAN trunking.

    Maybe I should limit the Windows server MTU size to 8976. Would appreciate your thoughts on the issue.

    Anyway, here is the ping from the Nexus that shows the failure to reach MTU size of 8977:

    demo-5548# ping 10.124.12.132 packet-size 8976 df-bit
    PING 10.124.12.132 (10.124.12.132): 8976 data bytes
    8984 bytes from 10.124.12.132: icmp_seq=0 ttl=127 time=1.109 ms
    8984 bytes from 10.124.12.132: icmp_seq=1 ttl=127 time=7.503 ms
    8984 bytes from 10.124.12.132: icmp_seq=2 ttl=127 time=18.689 ms
    8984 bytes from 10.124.12.132: icmp_seq=3 ttl=127 time=18.214 ms
    8984 bytes from 10.124.12.132: icmp_seq=4 ttl=127 time=18.671 ms

    --- 10.124.12.132 ping statistics ---
    5 packets transmitted, 5 packets received, 0.00% packet loss
    round-trip min/avg/max = 1.109/12.837/18.689 ms
    demo-5548# ping 10.124.12.132 packet-size 8977 df-bit
    PING 10.124.12.132 (10.124.12.132): 8977 data bytes
    Request 0 timed out
    Request 1 timed out
    Request 2 timed out
    Request 3 timed out
    Request 4 timed out

    --- 10.124.12.132 ping statistics ---
    5 packets transmitted, 0 packets received, 100.00% packet loss
    demo-5548#
