A fair number of people have reported that when they want to provision storage to a host, it doesn’t seem to work: only after bouncing an FC port or rebooting the host do the LUNs become visible. Others reported that it only works when they provision LUNs and zones in a particular order. So how is this possible?
The problem sits with the sequence and behaviour of hosts discovering targets and LUNs in a Fibre Channel environment. Normally, when a host starts up, the HBA issues a FLOGI to the switch, after which it receives its 24-bit FCID. It then logs into the name-server to register this FCID and queries the name-server for targets. The name-server responds with a list of FCIDs corresponding to the members of the zone(s) this HBA has access to. The HBA then sends a PLOGI to each FCID the name-server provided. If the target accepts the PLOGI, the HBA initiates a PRLI (Process Login), which sets up a session on the FC-4 level (in most cases FCP, i.e. SCSI over FC). This allows the SCSI subsystem in the host to send SCSI inquiry commands for discovery purposes, after which normal read and write commands can be issued.
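To make the sequence above concrete, here is a toy Python model of a host booting: FLOGI, name-server query, then PLOGI per target and LUN discovery. All class and method names are illustrative assumptions, not a real FC/SCSI stack.

```python
# Toy model of boot-time discovery: FLOGI -> name-server query ->
# PLOGI (then PRLI) -> SCSI discovery. Illustrative only.

class Fabric:
    def __init__(self):
        self._next_fcid = 0x010100
        self._name_server = {}              # wwpn -> fcid
        self.zones = []                     # list of sets of WWPNs

    def flogi(self, wwpn):
        """Fabric login: assign a 24-bit FCID and register the port."""
        self._name_server[wwpn] = self._next_fcid
        self._next_fcid += 1
        return self._name_server[wwpn]

    def query(self, wwpn):
        """Name-server query: FCIDs of ports zoned with this WWPN."""
        peers = set()
        for zone in self.zones:
            if wwpn in zone:
                peers |= zone - {wwpn}
        return [self._name_server[w] for w in peers if w in self._name_server]


class ArrayPort:
    def __init__(self, fabric, wwpn, luns):
        self.wwpn, self.luns = wwpn, luns
        self.fcid = fabric.flogi(wwpn)      # target also logs into the fabric

    def plogi(self, initiator_wwpn):
        return True                         # this toy port accepts all logins

    def report_luns(self, initiator_wwpn):
        return self.luns                    # stands in for SCSI discovery


def host_boot(fabric, hba_wwpn, ports_by_fcid):
    fabric.flogi(hba_wwpn)                  # FLOGI: obtain an FCID
    visible = []
    for fcid in fabric.query(hba_wwpn):     # name-server hands back zoned FCIDs
        port = ports_by_fcid[fcid]
        if port.plogi(hba_wwpn):            # PLOGI accepted -> PRLI -> FCP session
            visible += port.report_luns(hba_wwpn)
    return visible
```

The point of the sketch is the ordering: the host only ever sends PLOGI to FCIDs the name-server returned, and only a successful PLOGI/PRLI leads to a LUN list.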
Now the issue sits at the access level on the array port. Every array in the world supports access by more than one host (HBA/initiator) to a single port. The firmware behind this port provides a method of grouping these HBAs so you can differentiate which initiator has access to which LUNs and which FC and SCSI characteristics should be enabled or disabled. (VAAI is a good example of functionality that only needs to be enabled for an initiator belonging to an ESX host.) If, however, such a group does not exist, or the WWN of the HBA is not registered in one of those groups, the initiator will not be able to log in because the array port rejects the PLOGI. The initiator is therefore unable to set up an FCP session and thus cannot obtain a list of LUNs from that port.
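The host-group check can be sketched like this: a PLOGI from an unregistered WWPN is simply rejected, so the FCP session (and with it the LUN list) never comes into existence. The class and group names are made up for illustration.

```python
# Sketch of array-port host-group masking: PLOGI is only accepted for
# WWPNs registered in some host group. Illustrative, not a real array API.

class MaskedArrayPort:
    def __init__(self):
        self.host_groups = {}   # group name -> {"wwpns": set, "luns": list}

    def plogi(self, initiator_wwpn):
        """Accept the login only if the WWPN is registered in a host group."""
        return any(initiator_wwpn in g["wwpns"]
                   for g in self.host_groups.values())

    def report_luns(self, initiator_wwpn):
        """Only reachable after a successful PLOGI/PRLI; masked per group."""
        for g in self.host_groups.values():
            if initiator_wwpn in g["wwpns"]:
                return g["luns"]
        return []
```

A registered initiator logs in and sees its group's LUNs; an unregistered one never gets past PLOGI, which is exactly the failure mode described above.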
You might think, “OK, let’s create such a group, issue a rescan and the LUNs should pop up”, but that is wrong. A rescan from the host’s perspective does not issue a new PLOGI (remember: since no host group was present, the original PLOGI was rejected). Only a state change on the FC level will trigger an RSCN (Registered State Change Notification) informing a port that something has changed in the fabric or its zone, so that it can send a so-called ADISC to discover new ports and a PDISC to existing ports in order to register with them. The creation of a new host group does NOT trigger an RSCN, so the HBA is still unable to establish an FCP/SCSI session and obtain a list of LUNs. We’re a bit in a catch-22 situation here.
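The catch-22 can be demonstrated with a small state machine: a host-level rescan only talks FCP over an existing session, while only an RSCN makes the host retry its PLOGI. All names are illustrative.

```python
# Toy model of the catch-22: "rescan" re-probes only ports the host already
# has an FCP session with; only an RSCN triggers a fresh PLOGI attempt.

class Port:
    """Array port whose PLOGI acceptance depends on a host group existing."""
    def __init__(self):
        self.group_luns = None              # None = no host group yet

    def plogi(self):
        return self.group_luns is not None  # reject if unregistered

    def report_luns(self):
        return self.group_luns


class Host:
    def __init__(self, port):
        self.port = port
        self.session = False                # FCP session established?
        self.luns = []

    def _try_login(self):
        if self.port.plogi():               # PLOGI (then PRLI)
            self.session = True

    def boot(self):
        self._try_login()                   # discovery at boot time
        self.rescan()

    def rescan(self):
        """Host-level rescan: FCP layer only, no new PLOGI."""
        if self.session:
            self.luns = self.port.report_luns()

    def on_rscn(self):
        """Fabric state change: rediscover the port, retry PLOGI, rescan."""
        self._try_login()
        self.rescan()
```

Booting before the host group exists leaves the host without a session; creating the group and rescanning changes nothing, because the rescan never retries the PLOGI. Only the RSCN path gets the LUNs through.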
Now if we change the sequence in which we create host groups, create zones and boot the host, the phenomenon does not occur.
Three things do work:
- Create the host group, create the zone and then boot up the host.
- Create the zone, create the host group and then boot up the host.
- Create the host group, boot up the host and then add/change the zone.
Option 3 works because a zone change for that initiator does trigger an RSCN and hence causes the initiator to discover the new target, submit a PLOGI and PRLI, and thus obtain a list of LUNs it can access. If you then add new LUNs to that same host group, the host-level “rescan” does work, since that command checks for changes on the FCP layer. Because the host is already successfully logged in to the array port, any additions or removals will be picked up.
You could argue that the array should send a notification to all members in a zone if anything changes on that particular port, but there are risks attached to that. If you have 200 initiators (physical or virtual NPIV) attached and logged in to a single array port and something changes for a particular host group, ALL those initiators will receive the RSCN and thus ALL of them will issue PDISC frames and subsequent SCSI inquiry commands, which might overload an array controller and can have severe consequences. There is no such thing as a “targeted” RSCN towards a single initiator in this case, so the cure might be worse than the disease.

A safer option is to add a bogus WWN to the zone containing the initiator and target, apply the zoning configuration (only members of that zone will receive the RSCN from the fabric controller) so that the initiator gains access, and then remove the bogus WWN from the zone again. I agree this is a fairly cumbersome workaround, but for now it’s the simplest and safest option for non-disruptive provisioning if you are in such a situation. If you are able to toggle an HBA port, that will have the same end result; however, be aware that all outstanding exchanges will be aborted, so you are likely to see I/O errors or MPIO error messages.
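The fan-out risk and the zone-scoping that makes the bogus-WWN workaround safe can both be shown with one small function. The WWPNs and zone sizes below are made up for illustration.

```python
# Toy model of RSCN fan-out: a state change is delivered to every member of
# every zone containing the changed port. Illustrative WWPNs throughout.

def rscn_fanout(zones, changed_wwpn):
    """Set of WWPNs the fabric controller notifies for this state change."""
    notified = set()
    for zone in zones:
        if changed_wwpn in zone:
            notified |= zone - {changed_wwpn}
    return notified

array_port = "50:00:00:00:00:00:00:01"
initiators = {f"10:00:00:00:00:00:{i:02x}:00" for i in range(200)}
big_zone = initiators | {array_port}

# A change affecting the shared array port notifies all 200 initiators,
# each of which then re-probes the port with PDISC and SCSI inquiries.
hit = rscn_fanout([big_zone], array_port)

# The bogus-WWN workaround exploits zone scoping: adding (and later
# removing) a dummy member changes state only within that one zone, so
# only its members receive the RSCN.
bogus = "de:ad:be:ef:00:00:00:00"
small_zone = {"10:00:00:00:00:00:00:00", array_port, bogus}
scoped = rscn_fanout([big_zone, small_zone], bogus)
```

In this model the array-port change fans out to all 200 initiators, while the bogus-WWN change reaches only the two real members of the small zone, which is why the workaround keeps the blast radius down.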
Regards,
Erwin
Hello Biju,
Some arrays do allow the initiator to log in even though the WWN is not registered. This can pose a security risk, since it also allows WWN spoofing and unauthorised access to LUNs, with all the nasty issues that come with that.
I don’t have an EVA at my disposal, so I cannot analyse a trace to see what its behaviour is. There are pros and cons to both options; it depends on the choices the vendors make.
Hope this explains it.
So how is the EVA different? The customer claims that he used to:
1. Zone the host to the port so that the WWPNs are visible on the port.
2. Mask the LUNs using the visible WWPN
At least the claim is that the LUNs were visible right after, with no reboots or flapping paths.
As one of my colleagues pointed out, is it because the EVA has the concept of a LUN 0 visible to all hosts, some sort of access device which allows every PLOGI to be registered?