Buffer Credits for Newbies

Pretty often I get into a mode where I forget that there are many youngsters getting their feet wet in the storage arena, and some things are hard to grasp. Traffic flow is one of them, so here we go.

As I described in my “Rotten Apple” series, the buffer-credit methodology is where one side of a link provides the other side an authorization to send a certain number of frames. The receiving side has reserved a guaranteed number of buffers to store the incoming frames. As soon as the receiving side has processed a frame, or has sent it further down the track, it sends a so-called “R_RDY” primitive. This special bit pattern is recognized by the other side, which then increases its credits again by one. I always like to bring in an analogy, and in this case I use a parking area where the entrance is the sending side and the exit is the receiving side. The parking lot has, let’s say, 50 spaces to store 50 cars. It doesn’t matter how big the cars are, as long as they are not larger than a predefined size. In our case Fibre Channel frames have a maximum size of 2148 bytes (2112 bytes of payload plus the frame header, CRC, start- and end-of-frame delimiters, and maybe an extended header), so let’s compare this to a Cadillac Escalade (for non-US citizens, look here). In most cases, though, the payload is way smaller than this, so a parking space can also be occupied by a Suzuki Alto (for US citizens, look here). Even though the Alto is much smaller than the physical parking space, it still occupies an entire spot.
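The credit mechanism above can be sketched in a few lines of code. This is a simplified illustrative model, not real switch firmware: the `Port` class and its method names are my own invention, but the logic follows the description — a port may only transmit while it has credits, and each R_RDY from the peer returns exactly one credit.

```python
# Simplified sketch of BB_Credit flow control (illustrative names only).
# A port starts with the credit value granted by its peer and may only
# transmit while credits remain; each R_RDY from the peer returns one credit.

from collections import deque

class Port:
    def __init__(self, bb_credit):
        self.credits = bb_credit      # credits granted by the peer at login
        self.pending = deque()        # frames held back because credits ran out

    def send_frame(self, frame):
        if self.credits > 0:
            self.credits -= 1         # one receive buffer consumed on the far side
            return True               # frame goes onto the wire
        self.pending.append(frame)    # no credit: hold the frame back
        return False

    def receive_r_rdy(self):
        self.credits += 1             # peer freed a buffer
        if self.pending:              # a waiting frame can now be sent
            self.send_frame(self.pending.popleft())

port = Port(bb_credit=2)
print(port.send_frame("A"))  # True  - credit available
print(port.send_frame("B"))  # True  - last credit used
print(port.send_frame("C"))  # False - held until an R_RDY arrives
port.receive_r_rdy()         # frees a credit; "C" is sent automatically
print(port.credits)          # 0     - the returned credit was spent on "C"
```

Note that the sender never launches a frame without a guaranteed buffer on the far side; that is the whole point of the scheme.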

As mentioned, the parking lot has 50 spaces (buffers) and it can receive cars during its operating hours. Once the entrance has counted 50 cars in, it cannot let the next car pass because there is no space. If during the operating period one car leaves, the exit tells the entrance (via an electronic message, i.e. an R_RDY), so the entrance knows it can let another car pass beyond the original 50 it already had. This can go on and on during the operating period.

There are three ways this can go wrong:

  1. The exit is unable to tell the entrance that a car has left, or the message (R_RDY) gets lost (encoding error or corruption of the R_RDY).
  2. The number of cars stacking up at the entrance is far larger than the number of cars able to leave (throughput performance problem).
  3. The cars cannot leave the car park quickly enough because the exit ramp cannot offload them onto the road fast enough, also causing cars to stack up at the entrance (latency).
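Failure mode 1 is worth a quick illustration: every R_RDY lost on the wire is a credit the sender never gets back, so the usable window shrinks over time until the link stalls entirely. The function below is a sketch with made-up names, just doing the arithmetic:

```python
# Illustrative sketch of failure mode 1: each lost R_RDY is a credit the
# sender never recovers, so the usable credit window shrinks over time.

def usable_credits(granted, frames_sent, r_rdys_sent, r_rdys_lost):
    # Credits the sender still believes are outstanding:
    # frames sent minus acknowledgements that actually arrived.
    acked = r_rdys_sent - r_rdys_lost
    return granted - (frames_sent - acked)

# 50-space car park: 1000 cars in, 1000 "a car left" messages sent,
# but 3 of those messages were lost in transit.
print(usable_credits(50, 1000, 1000, 3))  # 47 - three spaces "leaked"
```

Run this long enough with a non-zero loss rate and the count reaches zero, which is exactly the stall the credit-recovery mechanism below exists to fix.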

With option one, Fibre Channel has mechanisms to recover from this problem. One is credit recovery, which can also be applied to the car park: it basically lets the entrance and exit tell each other how many cars they have let past. This creates a new baseline, so both sides know what’s going on and how many spots are available again. With option two there is really only one solution, and that is to increase the capacity of both the entrance and the exit so that a more efficient flow can accommodate the number of cars going in and out. With option three there is not really anything you can do from a car-park (or switch) perspective. The problem is external, so the issue needs to be addressed there: either provide additional exits to different sections of the off-loading road, create different exits to other roads, or increase the capacity of the roads. With the car-park analogy you will also see that if people need to wait in front of an entrance, they will pretty quickly drive off. With Fibre Channel it is the same: if a frame cannot be sent to its destination quickly enough, it will time out and the switch will discard the frame, requiring the upper-level protocol (in most cases SCSI) to re-submit the IO request. (A bit like the car-park manager asking people to come back and re-join the queue.)
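The timeout behaviour at the end of that paragraph can be sketched as follows. The function name and the 0.5-second hold time are illustrative assumptions, not values from any real switch; the point is only that a switch holds a frame for a bounded time before giving up:

```python
# Sketch of the timeout behaviour described above: a switch only holds a
# frame for a limited time while it waits for a transmit credit; past that
# it discards the frame, and the upper-layer protocol (usually SCSI) must
# re-submit the IO. The 0.5 s hold time is a made-up illustrative value.

HOLD_TIME = 0.5  # seconds a frame may wait for a credit (assumed value)

def forward_or_discard(queued_at, now, credits_available):
    if now - queued_at > HOLD_TIME:
        return "discarded"            # upper layer must resend the IO
    if credits_available > 0:
        return "forwarded"            # credit available, frame moves on
    return "waiting"                  # still queued for a credit

print(forward_or_discard(0.0, 0.1, 1))   # forwarded
print(forward_or_discard(0.0, 0.1, 0))   # waiting
print(forward_or_discard(0.0, 0.6, 1))   # discarded
```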

In a well-designed FC fabric where initiators and targets behave and no errors are observed, it is very unlikely you will run into these kinds of situations.

“And how is this different with Ethernet, then?” you likely ask. With Ethernet you do not have a buffer system as described above, so you have to look at it through another analogy. It’s like moving water between two basins at different heights, where the upper basin opens a valve and lets water flow to the lower basin. Once the lower basin is full, it sends a messenger back to notify the upper basin operator to shut the valve (in Ethernet terms this is called a PAUSE). All water still in transit between the upper and lower basin is discarded via an overflow at the lower basin and thus becomes useless. The same happens with Ethernet: all packets in transit that arrive at the full destination port are sent into la-la land. On busy networks this happens an awful lot, and when you transfer IP packets it is not something to worry about; TCP/IP provides a bullet-proof method of recovering and has its own buffering mechanisms as well. If, however, you transport SCSI (as with FCoE), you are in for a big roller-coaster ride.

If you want to know more about buffer credits and Fibre Channel frame flow, check out the other articles here that describe this, as well as the articles on FCoE.

Hope this helps.

Constructive feedback is welcome in the comment section or in the survey >>here<<


Erwin van Londen


About Erwin van Londen

Master Technical Analyst at Hitachi Data Systems