Getting FCoE on a RHEL Linux Server working with Cisco NX-OS

Sunday, 26 Jan 2020

Recently I had a great deal of "fun" getting the following technologies to actually work together, so I thought this might be a good blog post if anyone else out there is still working with circa 2012 technology and needs to get stuff working:

  • Fibre Channel over Ethernet (FCoE)
  • Fibre Channel Fabric Login (FLOGI)
  • Virtual Port Channels (vPC - the non-AWS one...) in "Individual Mode" (hack)
  • Red Hat Enterprise Linux (RHEL)

What are we working with then (The Topology)?


  • Storage-side
    • 1x Hitachi G200 SAN Array
      • Dual-attached via 8 Gigabit Fibre Channel (GFC) to two FC SANs
        • SAN_A = FCOE-SWITCH-01 VSAN10/VLAN20
        • SAN_B = FCOE-SWITCH-02 VSAN11/VLAN21
  • Server-side
    • 1x Dell R620 with RHEL 6.9 installed as Baremetal OS (no Virtualisation)
    • 2x Intel 82599ES Converged Network Adapters (CNAs) running at 10 Gigabit Ethernet
    • FCoE yum package installed
      • yum install fcoe-utils
  • Network-side
    • 2x Cisco N5596UP Converged Ethernet/FC/FCoE Network Switches running Cisco NX-OS
    • 2x Virtual Fibre Channel (VFC) Bindings
      • Each Switch runs binding of VFC48 to Physical Interface Eth1/5
    • 1x vPC 48 mapped into both Eth1/5 instances (vPC 48 = PortChannel 48)

What didn't work then (The Problem Statement)?

This was half-setup (the best kind of setup, because it's me who's getting set-up...) when I got involved, and the issues were:

  1. Sporadic/intermittent pings to Server01
    1. Which could be restored by disabling one of the two Switch-Server Uplinks (Eth1/5<->Em1/Em2), from either the Switch side (Eth1/5) or the Server side (Em1/Em2)
  2. Storage Array not seeing the Server01 WWNs
    1. Which had already been set up in the SAN Zoning, and whose WWN values had been confirmed

What did you do to fix it (The Poirot Moment)?

Firstly, I flexed my Google muscles and found this lovely Configuring a Fibre Channel over Ethernet Interface in RHEL 7 Guide, which came in very handy. The first thing to know is that your FCoE Subinterfaces shouldn't show up as /etc/sysconfig/network-scripts objects - those are for IP/Ethernet NICs (i.e. OSI Model L2/L3), whereas we're dealing with FCoE/Ethernet (i.e. OSI Model L1/L2).

In my case, there were some rogue ifcfg-em1.20, ifcfg-em1.21 (which doesn't even make sense - VLAN 21 belongs on em2) and ifcfg-em2.21 definitions I had to delete:

cd /etc/sysconfig/network-scripts
rm ifcfg-em1.20
rm ifcfg-em1.21
rm ifcfg-em2.21

Configuring the FCoE NICs (CNAs)

Upon reading the RHEL Guide, I was expecting to find a VLAN definition here - because, handily, the previous person who set up the Converged SAN didn't believe in matching Virtual Storage Area Network (VSAN) and VLAN numbering schemes, so I've got this:

Switch          VSAN  VLAN
FCOE-SWITCH-01  10    20
FCOE-SWITCH-02  11    21

On Cisco NX-OS, you Trunk-through the VLAN ID, not the VSAN ID, so my 802.1q Trunk to the Server looks like this:

FCOE-SWITCH-01# sh run int po48
interface port-channel48
  description Po48 - Server01
  switchport mode trunk
  switchport trunk allowed vlan 20,380-381
  spanning-tree port type edge trunk
  speed 10000
  vpc 48

So I'm happy VLAN20 (VSAN10) is being Trunked-through, but upon inspection of the /etc/fcoe/cfg-ethx example file, I find no reference to a "VLAN ID", only this option relating to VLAN:

AUTO_VLAN="no"

Which is disabled, and there's no config present for my interfaces, em1 and em2:

[root@server01 ~]# ip a | grep em
2: em1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether 44:a8:42:2b:4c:39 brd ff:ff:ff:ff:ff:ff
3: em2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether 44:a8:42:2b:4c:3a brd ff:ff:ff:ff:ff:ff

So I do the following:

cp /etc/fcoe/cfg-ethx /etc/fcoe/cfg-em1
cp /etc/fcoe/cfg-ethx /etc/fcoe/cfg-em2
nano /etc/fcoe/cfg-em1
[Edit line AUTO_VLAN to be equal to "yes"]
[Ctrl+O to save]
nano /etc/fcoe/cfg-em2
[Edit line AUTO_VLAN to be equal to "yes"]
[Ctrl+O to save]
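If you'd rather script those edits than nano them (handy across a fleet), the same change can be made non-interactively with sed. A sketch below, demonstrated on a scratch copy of the template so it's safe to dry-run; the FCOE_ENABLE/DCB_REQUIRED lines are just representative cfg-ethx content, and on the real server the files live in /etc/fcoe:

```shell
# Demo of the non-interactive edit on a scratch copy of the template;
# on the real server the files live in /etc/fcoe, and the sed line is
# the only bit you actually need.
dir=$(mktemp -d)
printf '%s\n' 'FCOE_ENABLE="yes"' 'DCB_REQUIRED="yes"' 'AUTO_VLAN="no"' \
    > "${dir}/cfg-ethx"

for nic in em1 em2; do
    # Copy the template per-NIC, then flip AUTO_VLAN to "yes"
    cp "${dir}/cfg-ethx" "${dir}/cfg-${nic}"
    sed -i 's/^AUTO_VLAN=.*/AUTO_VLAN="yes"/' "${dir}/cfg-${nic}"
done

grep AUTO_VLAN "${dir}/cfg-em1" "${dir}/cfg-em2"
```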

Then restart the FCoE Daemon:

[root@server01 ~]# service fcoe restart
[root@server01 ~]# service fcoe status
/usr/sbin/fcoemon -- RUNNING, pid=28331
Created interfaces: em1.20 em2.21

Then wait a bit, a few Kernel Syslog messages appear, et voila my two FCoE Interfaces magically come up in the fcoeadm tool:

[root@server01 ~]# fcoeadm -i
    Description:      82599ES 10-Gigabit SFI/SFP+ Network Connection
    Revision:         01
    Manufacturer:     Intel Corporation
    Serial Number:    246E966B0120
    Driver:           ixgbe 4.2.1-k
    Number of Ports:  1

        Symbolic Name:     fcoe v0.1 over em1.20
        OS Device Name:    host12
        Node Name:         0x2000246E966B0121
        Port Name:         0x2001246E966B0121
        FabricName:        0x200A8C604F332001
        Speed:             10 Gbit
        Supported Speed:   1 Gbit, 10 Gbit
        MaxFrameSize:      2112
        FC-ID (Port ID):   0x0103C0
        State:             Online

        Symbolic Name:     fcoe v0.1 over em2.21
        OS Device Name:    host13
        Node Name:         0x2000246E966B0123
        Port Name:         0x2001246E966B0123
        FabricName:        0x200B8C604F2DE381
        Speed:             10 Gbit
        Supported Speed:   1 Gbit, 10 Gbit
        MaxFrameSize:      2112
        FC-ID (Port ID):   0x0103A0
        State:             Online

And they also pop up as normal Ethernet Interfaces in the normal place:

[root@server01 ~]# ip a
13: em1.20@em1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether 24:6e:96:6b:01:20 brd ff:ff:ff:ff:ff:ff
       valid_lft forever preferred_lft forever
17: em2.21@em2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether 24:6e:96:6b:01:22 brd ff:ff:ff:ff:ff:ff
       valid_lft forever preferred_lft forever

And if I check on the Cisco N5K side, I can see Fabric Login events occurring for the Virtual FC interfaces:

FCOE-SWITCH-01# sh flogi database | inc vfc48
vfc48            10    0x0103a0  20:01:24:6e:96:6a:f3:01 20:00:24:6e:96:6a:f3:01

So onward to the next bit of the challenge: intermittent pings/Network Connectivity.

Fixing the pings (IP/Ethernet issues)

A quick rack of the brains reveals something I've hit before - when you don't set an explicit mode on Cisco PortChannels, they default to unconditional ("on") mode, which can cause problems with some Server OSes/Hypervisors (I'm looking at you, ESXi without a dvSwitch...):

FCOE-SWITCH-01# sh run int eth1/5
interface Ethernet1/5
  description Server01 Em1
  switchport mode trunk
  switchport trunk allowed vlan 20,380-381
  channel-group 48

Sure enough, there it is. On the RHEL side I'd maybe expect a "Bond0" or equivalent interface, but there isn't one - all the configuration is done as VLAN Subinterfaces on Em1/Em2. So out comes the "hack" to get the PortChannel up, but leave it with a vPC Parent (in case the Server ever gets properly configured for LAG - i.e. LACP Active): add "mode active" to your PortChannel/vPC Member Interfaces (note you have to "no" the previous channel-group command first, otherwise NX-OS will bitch at you):

FCOE-SWITCH-01# conf t
FCOE-SWITCH-01(config)# interface Ethernet1/5
FCOE-SWITCH-01(config-if)# no channel-group 48
FCOE-SWITCH-01(config-if)# channel-group 48 mode active
FCOE-SWITCH-01(config-if)# end
FCOE-SWITCH-01# copy run start

Et voila, pings are restored when both Eth1/5 interfaces in vPC48 are up - but it's as hacky as they come, as the ports come up in non-PortChannel "Individual" Mode, hence these unusual-looking outputs (for something that works):

FCOE-SWITCH-01# sh port-channel summary | inc Protocol|48|I
        I - Individual  H - Hot-standby (LACP only)
Group Port-       Type     Protocol  Member Ports
48    Po48(SD)    Eth      LACP      Eth1/5(I)

FCOE-SWITCH-01# sh int po48
port-channel48 is down (No operational members)
 vPC Status: Down, vPC number: 48 [packets forwarded via vPC peer-link]
  Hardware: Port-Channel, address: 8c60.4f33.200c (bia 8c60.4f33.200c)
  Description: Po48 - Server01
  MTU 1500 bytes, BW 10000000 Kbit, DLY 10 usec

Why even bother with a vPC if you're not actually LAGing?

A great question, and one I'll have to explore now the connectivity is back and working. Conceptually, all I've done above is bypass the PortChannel (for the FCoE this is fine, as FC Multipaths anyway; for the IP/Ethernet, you either LAG or you don't - this is some frankenstate). So while the setup is "PortChannel-eligible" (hence vPC48 and Po48 make sense), there's no 802.3ad configuration on the RHEL OS (something like in this How to configure LACP 802.3ad with bonded interfaces RHEL guide), and the fact it comes up as an "I" port means it's really two standalone interfaces - which negates the need for a vPC or PortChannel at all.
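For reference, the "proper" fix on the RHEL side would look something like the ifcfg files below - a sketch only (device names reused from this box, IP config omitted), along the lines of the RHEL bonding guide linked above. One caveat I'd flag: the FCoE VLAN Subinterfaces ride directly on em1/em2 and must not go through the bond, so bonding here would only be for the IP/Ethernet side:

```shell
# /etc/sysconfig/network-scripts/ifcfg-bond0 - a sketch, not deployed
DEVICE=bond0
TYPE=Bond
BONDING_MASTER=yes
BONDING_OPTS="mode=802.3ad miimon=100"
ONBOOT=yes
BOOTPROTO=none

# /etc/sysconfig/network-scripts/ifcfg-em1 - em2 would be identical
DEVICE=em1
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none
```

With mode=802.3ad negotiating LACP against "channel-group 48 mode active" on the N5Ks, Po48/vPC48 would come up as a real LAG rather than "I" ports.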

So the PortChannel isn't required, but maybe I'd still need the vPC? Or maybe I'm losing the plot and don't need either, much like this lovely post on why LACP and vSphere (ESXi) hosts are not a very good marriage.

One for another day :).


Merging a Cisco NX-OS SAN with an IBM Brocade SAN (trying to use NPIV)

Wednesday, 18 Sep 2019

SAN to the Future

Storage Area Networking (SAN) is something I'd guess most Network Engineers have heard of, or had some limited exposure to, but not much; maybe you've done some zoning for the Storage Guys on your Cisco N5K boxes, but otherwise it's a bit of a dark art. Well, same here - but recently I was posed an interesting problem that, in the IP/Ethernet world, is a fairly trivial undertaking:

Can we merge our IBM SAN with our Cisco/Hitachi SAN, so that Servers on one can access Storage on the other, and vice-versa?

Ever the idiot optimist, I immediately responded "Sure, that's like 10 minutes of work or something right?", and so dear reader, we begin.

Being prepared (FC Learnings)

Optimistic as I am, I've been burned before by playing with stuff I only dabble in. So a hasty £4 transaction was made on fleeBay to procure this fine tome of knowledge from the early 2000's:


I can highly recommend this book. A few bedtime reading sessions later, and I've already learned an awful lot more about Fibre Channel (FC) and undone some misconceptions I'd brought in from the IP/Ethernet world, like:

  • A Fibre Channel Domain (collection of interconnected FC Switches) can only work if each Switch has a unique FCID (strictly, a unique Domain ID - the high byte of the FCIDs it hands out)
    • By default, like VLANs defaulting to VLAN 1, this is Domain ID 1
    • Two Domain IDs of 1 on the same FC Network ("Domain") mean you're gonna have a bad time (one of the FC Switches will be "segmented" from the rest of the world)
  • A SAN Fabric is the collection of Switches in an FC Domain
  • HBA is a Host Bus Adapter (for FC)
    • This is the NIC of the FC world
  • CNA is a Converged Network Adapter (for FCoE)
    • This is a NIC, but now it's also an HBA (the "Converged" refers to the fact that the same physical port is both an HBA and a NIC)
  • Normally, there are no more than two SAN Fabrics (A and B) per Deployment of a given set of Compute/Storage Array
    • But each SAN Fabric (i.e. the A Leg or B Leg) could have lots of FC Switches within it, and a Hub-and-Spoke setup, where the "Core Switch" is an FC Director-class Switch, and the "Access Switches" are Pizzabox-like FC Access Switches
    • "Ghostbusters Rule" applies here, the two streams (A Fabric and B Fabric) must never cross/talk to each other
  • Fibre Channel comes in 1, 2, 4, 8, 16 and 32 Gbps speeds, typically called "<x> GFC" (i.e. 8 GFC is 8 Gbps Fibre Channel)
    • Cisco N5Ks only go up to 8 GFC; I'm convinced 16 and 32 GFC are unicorns
    • Each is their own OSI Layer 1/2 Protocol pairing, although my brain approximates them to equivalent-tier on the OSI Model to, say, 1 Gbps Ethernet vs 10 Gbps Ethernet (i.e. an 8 GFC SFP will normally be backward-compatible for 1/2/4 GFC as well)
      • There's some optical magic where the OTU/OTN "encapsulating wavelength" is the same for, say a 8 GFC SFP as a 10 GbE SFP, it's just that an 8 GFC SFP "wastes" the 2.5 Gbps of this bandwidth (the world of optical is made up of 1.25 Gbps Wavelengths it seems)
  • FC uses FSPF (Fabric Shortest Path First), an IS-IS/SPF-like link-state algorithm, to construct a Network Tree and pick the active paths
    • A large Blue/Red-hatted company who trIed Bloody hard to iMplement this on one of our SANs had completely misunderstood this, and thought that 4x 8 GFC uplinks makes 1x 32 GFC uplink
    • You can typically see which is active on, say, Brocade kit by looking at the "(upstream)" or "(downstream)" flag against a "fabricshow" or "switchshow" command
  • FC Interswitch Links are called ISLs
  • FC has sets of features - such as the FC Name Service - not all manufacturers/products support all features
    • This is hard to swallow, as it's a bit like Cisco and Juniper still competing on commonly-done features, at the "Ah yeah, we do Ethernet, but not with STP as an option" level (i.e. you can't take FC features for granted between vendors/products like you can in the IP/Ethernet world)
  • FC has various terms for the types of port (many more than "Access" vs "Trunk")
    • E_Port (Expansion Port) is the Trunk-like link between FC Switches (i.e. an ISL end)
    • F_Port (Fabric Port) is the Switch-side Access-like port towards a Server or Array
    • N_Port (Node Port, on the HBA) is the Server/Array Port, which connects to a Switch F_Port
  • All FC Switches in a Domain can see all others and know the topology
    • On Brocade FOS, you can quickly get this with the following CLI (which looks like squashed-up Cisco IOS, with the spaces between keywords removed):
      • switchshow
  • All Zoning/LUN/Fabric Login ("FLOGI") information is held in the Fibre Channel Name Service (FCNS), which every FC Switch automagically syncs to the others as soon as it is updated on any one FC Switch
    • I like to think of it as being to the FC Zoning Database what VTP is to VLANs in the IP/Ethernet world
  • World Wide Names (WWNs) are the equivalent of a MAC Address
    • Some are for the physical Port, others are for the Node (Switch/Server/Storage Array) itself
    • As well as the OUI-like "Vendor Identifier" concept on MAC Addresses, WWNs have a "Usage Identifier" to show if that WWN belongs to a Server or Storage Array
  • Logical Unit Numbers (LUNs) identify Virtual Disks ("Logical Units"), which the Storage Array abstracts away onto multiple Physical Disks for redundancy
  • Everybody calls it a SAN Array although really it's a Storage Array
  • Fibre Channel over Ethernet (FCoE) is its own thing, and aside from using the same Ethernet Medium/Cabling, can be viewed as a complete foreigner hitching a lift on the last-mile bit (i.e. Server-to-Switch) of the IP/Ethernet Network
    • FCoE requires a host of other stuff, like DCBX (Adapters that can negotiate FCoE parameters/Switches that can do something useful with the Ethernet "PAUSE" frame, rather than ignoring it; QoS parameters that prioritise FCoE frames...)
    • There's a reason FCoE never really took off (it's a pain in the arse to do right, even more than FC)
  • Targets (i.e. where the Storage LUN lives, the Storage Array) can't live on the same N_Port as an Initiator (i.e. the Server wanting to put/pull from that Storage LUN)
  • VSANs are another level of abstraction (unnecessary for most) where a VSAN acts as a container for a SAN, which in turn has Zones, which in turn only allow certain FC Aliases (human-friendly names for WWNs) to speak to certain other FC Aliases/WWNs
  • Everything in FC Zoning configs is an Inception-style "mapping to something else, which maps to something else" that only ends when you swallow the blue pill
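The WWN structure in the bullets above can be poked at from the shell. A sketch below, using em1's Port WWN from the fcoeadm output in the FCoE post; the byte offsets assume the common NAA format 1/2 layout (NAA format 5 packs the OUI across nibble boundaries instead):

```shell
# Pull the NAA "Usage Identifier" nibble and the IEEE vendor OUI out of
# a colon-formatted WWN. Offsets assume NAA format 1/2, where the OUI
# occupies bytes 3-5.
wwn="20:01:24:6e:96:6b:01:21"

naa=$(printf '%s' "$wwn" | cut -c1)        # first nibble: 1/2 = IEEE (Extended)
oui=$(printf '%s' "$wwn" | cut -d: -f3-5)  # IEEE vendor OUI

echo "NAA format: ${naa}"
echo "Vendor OUI: ${oui}"
```

For this WWN that yields NAA format 2 (IEEE Extended) and OUI 24:6e:96, which is why WWNs feel so MAC-Address-like.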

Applying the theory to reality

Now armed (and definitely dangerous), let's look at what it is we've got in terms of the two SAN Fabrics to merge today, focussing only on the "A Leg" (for visual simplicity, but the same exists again for a "B Leg"):


If you're not familiar with an IBM FlexSystem/PureFlex Blade Server, think of a Cisco UCS but with much less functionality. For those of you unfamiliar with the world of the Blade Switch (you lucky, lucky people) - it's a module within the Blade Chassis that takes power/hosting from the Chassis, and has some ports on it as invisible internal ports (i.e. maybe Eth1/1-48 map 1:1 to the respective Backplane NIC on each Server in Blade Chassis Slots 1-8 - so Eth1/3 on Blade Switch #1 is NIC0 on Blade Server #3), and other ports on it as physically-connected uplinks (i.e. maybe Eth1/9-12 are 4x 1 Gbps Uplinks to the Top of Rack Switch, via 1000BaseSX Multimode Fibre patch lead).

Relevant for NPIV/NPV (when we get onto it), the IBM FlexStor V7000 is an in-Blade Chassis Storage Array, which utilises some of the Blade Chassis Server Slots, but acts as an FC Target (Storage) rather than a typical Blade Server Compute Node (as an FC Initiator, Compute Server).

As with many things in Large Enterprises, the cool kid unicorns don't exist here; is it daft that we've got two distinct Data Centre Stacks (one IBM and one Cisco/Hitachi), siloed from each other? Absolutely. Would a cool kid hipster DevOps tell me this is impossible in the real world? Probably. Is there a technical reason for it existing? Not at all. Why is it there? Big Company politics and Project silos.

As for the IBM kit: it's all re-badged Brocade, running Brocade Fabric Operating System (FOS), namely:

  • IBM SAN24B = Brocade 300
  • IBM FC5022 = Brocade 6547

IBM make this hard to discover, for some reason; I can't think why their Customers have left them in droves since the early 2000's, everyone must be wrong.

Raising Vendor TACs

Looking at the above, you're probably thinking - "Not too hard then; cable up some OM3/OM4 8 GFC from the IBM SAN24B to the Cisco N5K, job done?". Sadly, no - there are a few pre-requisites we need to cover first; so I'll leverage the expensive IBM-side and Cisco-side Technical Assistance Centre (TAC) Contracts I've got, and cover my back. The main caveat I'm aware of is the uniqueness of the FCIDs, so I go around and do the following to glean these:

  • IBM/Brocade
    • Login to each Switch via SSH/Telnet, and issue the following to glean the FC Topology, FCIDs and SFP Inventory/Status for each Switch
      • fabricshow
    • Record them all in a big ol' spreadsheet
      • Including the Hostname, which handily for me, Big Blue have made completely different from the sticker on the front of the kit/documentation; thanks for that, IBM - again, it *really* hurts me that you're slowly going under in the Cloud Era, I can't think why your Cloud offering isn't even on the leaderboard...
    • Pull out the FC Alias (human-friendly name:WWN) and FC Zoning information
      • alishow
    • Record this all in a big ol' notepad
      • Because I think I might have to transpose this into Cisco NX-OS/SANOS syntax
  • Cisco
    • Login to each Switch via SSH, and issue the following to glean the FC Topology (not much, there's 1x N5K per SAN Fabric), FCIDs and SFP Inventory/Status for each Switch
      • show fcdomain
        show vsan membership
        show inventory
    • Record them all in the same big ol' spreadsheet
    • Pull out the FC Alias and FC Zoning information
      • show flogi database
        show zoneset active
        show zone
        show fcalias
    • Record this in a big ol' notepad
      • To get the syntax I need to translate into (Brocade FOS -> Cisco NX-OS)

With Vendor TACs in progress, I go around and complete the above, and am happy that the FCIDs are unique on each FC Switch, so a SAN Merge isn't going to cause a problem. Having read this fantastic blog post on Merging Brocade SAN Fabrics, my understanding is that the SAN Fabric with the highest (in ASCII terms, so "Z" trumps "A", for instance) Effective Configuration name (Brocade speak; "Zoneset Name" in Cisco speak) wins/goes active. As I want to minimise the outage, and have the Cisco N5K "win" as the FCNS Master, my thinking is:

  1. Convert all the Brocade (IBM) FC Aliases/Zones from Brocade FOS into Cisco NX-OS
    1. Easily achieved file-by-file with Notepad++ and some Regular Expressions (RegEx)
  2. Pre-apply this to the Active Zoneset on the Cisco N5Ks
    1. Won't do anything, but won't harm anything/go FC Active Zone until the applicable WWNs are seen on the Cisco N5K fabric
  3. Arrange an Outage Window "just in case", and plug in the IBM SAN24B to the Cisco N5K, and allow the ISL to form
  4. Ensure the Cisco Zoneset is active, and no FC Switches have Segmented
    1. Merge them with the applicable CLI command on the Brocade/Cisco if they have
  5. Party on down
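Step 1's Notepad++/RegEx job can equally be scripted. A sketch below with awk; the input lines, alias names and target VSAN of 10 are all illustrative assumptions - real alishow output would first need flattening to one "name wwn" pair per line:

```shell
# Turn flattened Brocade "alias wwn" pairs into Cisco NX-OS fcalias
# config stanzas. Alias names, WWNs and VSAN 10 are illustrative only.
awk -v vsan=10 'NF == 2 {
    printf "fcalias name %s vsan %d\n", $1, vsan
    printf "  member pwwn %s\n", $2
}' <<'EOF'
Server01_HBA0 20:01:24:6e:96:6b:01:21
V7000_Ctrl1_P1 50:05:07:68:01:23:45:67
EOF
```

The output pastes straight into config mode on the N5K, ready for the Zoneset activation in step 2.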

Response of the Vendor TACs

Cisco are the first to come back; they're not too sure the IBM (Brocade side) will ISL with their N5K kit. Initially, I'm confused - "Surely FC is FC, like Ethernet is Ethernet, if both bits of kit speak FC, even if you've not tested the interoperability, it'll work right?". Sadly, as per the Brocade Community Forums post on "Can I connect a 300e to a Cisco Nexus 5548", the answer is no for me, because:

  1. I'm running Brocade Fabric OS (FOS) greater than 7.0.0
    1. After this point, Brocade disabled the ability to turn on so-called "interop mode", which means it can't ISL with anything other than a Brocade
    2. The lack of this means FCNS-type stuff, like the ability to use FC Aliases, will fail miserably on me (and both the Cisco and IBM Fabrics already make extensive use of FC Aliases)
  2. Neither Cisco nor Brocade guarantee it will work

So back to the drawing board then; but now running with the suggestion someone made in the Forums about Access Gateway (AG) mode.

Brocade Access Gateway (AG) Mode

Access Gateway is Brocade's renaming of what Cisco calls N_Port Virtualisation (NPV) - because, as I'm now finding, FC Vendors are arseholes and don't believe in notions like standardisation or consistent naming. Access Gateway basically turns the Brocade Switch (in this case, the FC5022 Blade Switch) into a "dumb FC Hub", which has no Zoning configuration of its own, and consolidates a given number of F_Ports into 1x shared N_Port - relying on N_Port ID Virtualisation (NPIV) on the upstream Cisco N5K, which then sees multiple WWNs (Servers) logging in via 1x F_Port (rather than the normal 1x F_Port per WWN). It's better described by The SAN Guy in his Configuring a Brocade Switch for Access Gateway (AG) Mode post, but visually it does this:



Given that I've got enough spare FC ports on the GEMs on my Cisco N5Ks, this is a perfect opportunity to kill-off the useless IBM SAN24B Top of Rack (ToR) Switches I've got, and just cable the 4x Uplinks from each IBM FC5022 (Brocade 6547) directly into the Cisco N5K, so I end up with this:


Implementing Brocade AG to Cisco NPIV

I'll need an outage on the Brocade (IBM) side to achieve this, as after Access Gateway Mode is enabled, the Brocade forgets all its FCNS/Config. There is also a very important note in the Brocade Fabric OS Administrator Guide, which basically says FC Initiators and FC Targets can't live on the same N_Port; this matters to me, as I have an IBM FlexStor V7000 Storage Array in the same Blade Chassis as the IBM Flex Compute Nodes (Blade Servers) that want to access it via FC as a LUN. To overcome this, I'll need to ensure the N_Port Groupings ("AG Port Groupings") of the Blade Backplane Ports for any given Blade Compute Node end up on different N_Ports to those of any given V7000 Array ports.

This all looks like:

  1. Cisco N5K preparation (non-disruptive)
    1. Copy-mutate-paste over the Brocade (IBM) FC Aliases and FC Zoning into the Active Zoneset on the Cisco N5K, and activate it in advance ready
    2. Enable "feature npiv" (non-disruptive; not to be confused with "feature npv", which turns the Cisco N5K into a "dumb FC Hub" and is disruptive - it does to the Cisco side what Access Gateway does to the IBM/Brocade side)
  2. Brocade cutover (disruptive/needs an Outage Window)
    1. Re-cable the 4x Uplinks from each IBM FC5022 -> IBM SAN24B to instead go IBM FC5022 -> Cisco N5K
      1. Use OM3/OM4 as it's 8 GFC over a short distance
      2. Cisco-side SFPs are DS-SFP-FC8G-SW
      3. IBM/Brocade-side SFPs are XBR-000147
    2. Take the FC Switch out of the FC Domain
      1. switchdisable
    3. Enable the Brocade (IBM FC5022) for Access Gateway Mode
      1. ag --modeenable
    4. Verify NPIV (AG) is done/running on the Brocade (IBM FC5022)
      1. ag --modeshow
    5. Show the port mappings (F_Port -> N_Port), and verify that the V7000 Blade Chassis Ports/WWNs are in differing N_Port Groups to any Blade Compute Servers
      1. ag --mapshow
      2. If they aren't (i.e. WWN from a V7000 and a Blade Compute Node mapped to same N_Port), split them out:
        1. ag --mapdel 0 "13;14"
          ag --mapadd 13 "1;2;5;6"
  3. Cisco N5K post-cutover check
    1. Check copied-over FC Zones using Brocade/IBM WWNs/Hosts are now active (have a "*" against them)
      1. show zone active
        show zoneset active
    2. Check Brocade WWNs are logged into the FLOGI Database
      1. show flogi database
  4. Hit the old IBM SAN24B repeatedly with a large lump hammer and/or baseball bat for all the pain it has caused
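The "are any Targets and Initiators sharing an N_Port" check in step 2.5 can be pre-flighted before the window. A sketch below over a hand-built F_Port,N_Port,role table - the rows here are hypothetical (loosely echoing the mapdel/mapadd example ports above); you'd assemble the real ones from ag --mapshow plus the WWN inventory spreadsheet:

```shell
# Flag any N_Port carrying both an initiator (Blade Compute Node) and a
# target (V7000) - those mappings need splitting with ag --mapdel and
# ag --mapadd. Input rows are F_Port,N_Port,role and are hypothetical.
awk -F, '
    { roles[$2] = roles[$2] " " $3 }
    END {
        for (np in roles)
            if (roles[np] ~ /initiator/ && roles[np] ~ /target/)
                print "N_Port " np ": initiator+target mix - split this"
    }' <<'EOF'
13,0,initiator
14,0,target
1,13,initiator
2,13,initiator
EOF
```

Anything it prints is a mapping to rework before cutover; silence means the AG Port Groupings are already clean.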

I've not had chance to navigate the "Politics of ITIL" (TM) yet to tell you if this is the correct way; I'll let you know.