SCSI UNMAP and performance implications

While listening to Greg Knieriemen's Nekkid Tech podcast there was some debate about VMware's decision to disable the SCSI UNMAP command in vSphere 5.something. Chris Evans (www.thestoragearchitect.com) had some questions about why this happened, so I'll try to give a short description.

Be aware that, although I work for Hitachi, I have no insight into the internal algorithms of any vendor, but the T10 (INCITS) specifications are public and every vendor has to adhere to them, so here we go.

With the introduction of thin provisioning in the SBC-3 specs, a whole new can of options, features and functions came out of the T10 (SCSI) committee which enabled applications and operating systems to do all sorts of nifty stuff on storage arrays. Basically it meant you could give a host a 2 TB volume whilst in the background you only had 1 TB physically available. The assumption with thin provisioning (TP) is that a host or application won't use that 2 TB in one go anyway, so why pre-allocate it?

So what happens is that the storage array provides the host with a range of addressable LBAs (Logical Block Addresses) which the host can use to store data. On the back-end the array only allocates these LBAs upon actual use. The array has one or more so-called disk pools where it can physically store the data. The mapping between the "virtual addressable LBAs" the host sees and the back-end physical storage is done via mapping tables. Depending on the vendor's implementation, certain "chunks" out of these pools are reserved as soon as one LBA is allocated. This prevents performance bottlenecks from a housekeeping perspective since the array doesn't need to manage each individual LBA mapping. Each vendor has different page/chunk/segment sizes and different algorithms to manage these, but the overall method of TP stays the same.
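To make that a bit more concrete, here is a toy sketch of such a mapping table in Python. The segment size, names and data structures are my own illustration, not any vendor's implementation:

```python
# Toy model of a thin-provisioning mapping table (my illustration, not any vendor's code).
# The host sees a range of virtual LBAs; only segments that have actually been written
# get a physical segment out of the shared pool.

SEGMENT_SIZE_BLOCKS = 42 * 1024 * 1024 // 512     # assumed 42 MB segments, 512-byte blocks

def segment_index(lba: int) -> int:
    """Which TP segment a virtual LBA falls into."""
    return lba // SEGMENT_SIZE_BLOCKS

# mapping table: virtual segment index -> physical segment id in the pool
mapping_table: dict[int, int] = {}

def resolve(lba: int):
    """Translate a host LBA to its physical segment, or None if never written."""
    return mapping_table.get(segment_index(lba))

# A freshly created 2 TB thin LUN exposes ~4.3 billion LBAs but has an empty table:
print(resolve(123_456_789))     # -> None, nothing allocated yet
```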

So let's say the segment size on an array is 42 MB (:-)) and an application writes to an LBA which falls into this chunk. The array updates the mapping tables, allocates cache slots and does all the other housekeeping that comes with an incoming write IO. From that moment the entire 42 MB is allocated to that particular LUN which is presented to that host. Any subsequent write to any LBA which falls into this 42 MB segment is just a regular IO from an array perspective; no additional overhead is needed or generated w.r.t. TP maintenance. As you can see this is a very effective way of maintaining an optimum capacity usage ratio, but as with everything there are some things you have to consider as well, like over-provisioning and its ramifications when things go wrong.
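Continuing the toy example, allocate-on-first-write looks roughly like this (again purely illustrative; the 42 MB and the pool are made up):

```python
# Toy allocate-on-first-write behaviour (numbers and names are made up).
SEGMENT_SIZE = 42 * 1024 * 1024            # 42 MB segment granularity, in bytes
free_pool = list(range(1000))              # physical segments still unused in the pool
mapping: dict[int, int] = {}               # virtual segment index -> physical segment

def write(byte_offset: int, data: bytes) -> None:
    seg = byte_offset // SEGMENT_SIZE
    if seg not in mapping:
        # First touch: the whole 42 MB segment gets bound to this LUN,
        # even though the host may only have written a few bytes.
        mapping[seg] = free_pool.pop()
    # From here on it is a normal write IO; no further TP bookkeeping is needed.

write(10 * 1024 * 1024, b"hello")          # first write: allocates one segment
write(11 * 1024 * 1024, b"world")          # same segment: plain IO, no new allocation
print(len(mapping), "segment(s) allocated for a handful of bytes")
```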

Let's assume that is all under control and move on.

Now what happens if data is no longer needed or deleted? Let's assume a user deletes a 200 MB file (a video for example). In theory this file occupied at least 5 TP segments of 42 MB. But since many filesystems try to avoid unnecessary IO, they do not scrub the file's blocks back to zero; they just delete the FS entry pointer and remove the inodes from the inode table. This means that only a couple of bytes have effectively been changed on the physical disk and in the array cache.
The array has no way of knowing that these couple of bytes, which have been returned to 0, represent an entire 200 MB file, so those bytes stay allocated in cache, on disk and in the TP mapping table. This also means that these TP segments can never be re-mapped to other LUNs for more effective use if needed. There have been some solutions to overcome this, like host-based scrubbing (putting all bits back to 0), de-fragmentation to re-align all used LBAs and scrub the rest, and some array-based solutions that check whether segments contain only zeros and, if so, remove them from the mapping table and thereby make them available for re-use.
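The array-based zero-detect variant boils down to something like this sketch (illustrative only; the structures and names are mine):

```python
# Toy zero-detect reclaim: scan allocated segments and hand the all-zero ones back
# to the free pool. Structures and names are illustrative only.

def reclaim_zero_segments(segments: dict[int, bytearray], free_pool: list) -> int:
    """Return how many segments were released back to the pool."""
    reclaimed = 0
    for seg_index, data in list(segments.items()):
        if not any(data):                  # every single byte is still zero
            free_pool.append(data)
            del segments[seg_index]        # drop the virtual -> physical binding
            reclaimed += 1
    return reclaimed

# The catch described above: deleting a 200 MB file usually only touches filesystem
# metadata, so the underlying segments are *not* all-zero and this scan reclaims
# nothing unless the host scrubs the freed blocks to zero first.
```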

As you can imagine, this is not a very effective way of using TP. You can be busy cleaning things up on a fairly regular basis, so there had to be another solution.

So the T10 friends came up with two new things, namely WRITE SAME and UNMAP. WRITE SAME does exactly what it says: it sends one write command with a block of data and tells the array to write that same bit stream to a whole range of LBAs. The array executes this itself, offloading the host from issuing all the individual write commands so it can do more useful stuff than pushing bits back and forth between itself and the array. This can be very useful if you need to deploy a lot of VMs which by definition have a very similar (if not exactly the same) pattern. The other way around it has a similar benefit: if you need to delete VMs (or just one), the hypervisor can instruct the array to clear all LBAs associated with that particular VM, and if the UNMAP command is used in conjunction with the WRITE SAME command you basically end up with the situation you want. The UNMAP command instructs the array that certain LBAs are no longer in use by this host and can therefore be returned to the free pool.
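Conceptually the two commands do something like this on the array side (a toy model, not real firmware; all names are my own):

```python
# Toy model of what the two commands mean on the array side (not real firmware).

def write_same(lun: dict[int, bytearray], start_block: int, count: int,
               pattern: bytes, block_size: int = 512) -> None:
    """One command from the host; the array repeats the block pattern over
    the whole range itself instead of receiving every block over the wire."""
    for block in range(start_block, start_block + count):
        lun[block] = bytearray(pattern.ljust(block_size, b"\x00"))

def unmap(lun: dict[int, bytearray], start_block: int, count: int) -> None:
    """Tell the array these blocks are no longer in use; it may return the
    backing capacity to the free pool."""
    for block in range(start_block, start_block + count):
        lun.pop(block, None)

# e.g. a hypervisor cleaning up after a deleted VM:
vmfs_lun: dict[int, bytearray] = {}
write_same(vmfs_lun, start_block=0, count=8, pattern=b"\x00")    # zero the range
unmap(vmfs_lun, start_block=0, count=8)                          # then release it
```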

As you can imagine, if you just use the UNMAP command this is very fast from a host perspective and the array can handle it very quickly, but here comes the catch. If the host instructs the array to UNMAP the association between the LBAs and the LUN, it is basically only a pointer that is removed from the mapping table; the actual data still exists, either in cache or on disk. If that same segment is then re-allocated to another host, in theory that host can issue a read command to any given LBA in that segment and retrieve the data that was previously written by the other system. Not only can this confuse the operating system, it also implies a huge security risk.
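A toy illustration of why that matters:

```python
# Toy illustration of the stale-data problem when UNMAP is not followed by scrubbing.
pool = {0: bytearray(b"host A: confidential payroll data")}   # physical segment 0
lun_a = {5: 0}      # host A's LUN: virtual segment 5 -> physical segment 0
lun_b = {}          # another host's LUN on the same pool

# Host A unmaps: only the pointer disappears, the bytes stay on disk and in cache.
released = lun_a.pop(5)

# If the pool hands the unscrubbed segment straight to host B...
lun_b[2] = released

# ...host B can simply read host A's old data back.
print(pool[lun_b[2]])       # bytearray(b'host A: confidential payroll data')
```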

In order to prevent this, the array has one or more background threads that clear out these segments before they are effectively returned to the pool for re-use. These tasks normally run at a pretty low priority so as not to interfere with normal host IO. (Remember that it is still the same CPU(s) that have to take care of this.) If the CPUs are fast and the background threads are smart enough, under normal circumstances you hardly see any difference in performance.
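Such a background scrubber could be sketched like this (the queue, the zeroing and the "low priority" sleep are all just illustrative):

```python
# Sketch of a low-priority background scrubber (everything here is illustrative).
import queue
import threading
import time

pending_scrub: "queue.Queue[bytearray]" = queue.Queue()   # filled by UNMAP handling
free_pool: list = []                                      # safe-to-reuse segments

def scrub_worker() -> None:
    while True:
        segment = pending_scrub.get()          # a segment released by an UNMAP
        segment[:] = bytes(len(segment))       # overwrite the old data with zeros
        free_pool.append(segment)              # only now may another LUN get it
        pending_scrub.task_done()
        time.sleep(0.01)                       # crude stand-in for "low priority":
                                               # step aside so host IO is served first

threading.Thread(target=scrub_worker, daemon=True).start()
```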

As with all instruction-based processing, the work has to be done either way, be it by the array or by the host. So if there is a huge amount of demand, where hypervisors move a lot of VMs around between LUNs and/or arrays, there will be a lot of deallocation (UNMAP), clearing (WRITE SAME) and re-allocation of these segments going on. It depends on the scheduling algorithm at what point the array decides to reschedule the background and front-end processes such that there will be a delay in the status response to the host. On the host it looks like a performance issue, but in essence you have overloaded the array with too many commands for work that normally (without thin provisioning) would have to be done by the host itself.
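Some made-up numbers show how quickly that demand can outrun the background budget:

```python
# Made-up numbers to show how reclaim demand can outrun the background budget.
segment_mb = 42
vm_size_gb = 40
vms_moved_per_hour = 500

segments_per_vm = vm_size_gb * 1024 // segment_mb                 # ~975 segments
scrub_demand_mb_s = vms_moved_per_hour * segments_per_vm * segment_mb / 3600
scrub_budget_mb_s = 2000                                          # assumed "free" scrub capacity

print(f"scrub demand {scrub_demand_mb_s:,.0f} MB/s vs background budget {scrub_budget_mb_s:,} MB/s")
# If demand stays above the budget the array has to steal cycles from front-end IO
# or delay status responses, which the host experiences as a performance problem.
```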

You can debate whether a larger or smaller segment size would be beneficial, but it is a trade-off either way: with a smaller segment size the CPU has much more overhead in managing the mapping tables, whereas with bigger segment sizes the array needs to scrub more space on deallocation.
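A quick back-of-the-envelope comparison (figures invented for illustration) shows that trade-off:

```python
# The trade-off in numbers (all figures invented for illustration).
pool_tb = 100

for segment_mb in (1, 42, 256):
    table_entries = pool_tb * 1024 * 1024 // segment_mb      # mappings to manage
    print(f"{segment_mb:>4} MB segments: {table_entries:>12,} mapping-table entries, "
          f"{segment_mb} MB to scrub per released segment")

# Smaller segments mean a far bigger mapping table to search and update on every IO;
# larger segments mean more data to scrub (and more capacity bound) per allocation.
```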

So this is the reason why VMware disabled the UNMAP command in this patch: a lot of "performance problems" were seen across the world when this feature was enabled. Given that it was VMware that disabled it, you can imagine that multiple arrays from multiple vendors might be impacted in some sense; otherwise they would have been more specific about array vendors and types, which they haven't been.


5 responses on "SCSI UNMAP and performance implications"

  1. Damian

    Hi,
    thanks for the article, now I understand why VMware had problems with it. I agree with Chris that it's the storage array's fault, but I also think that there is a simple solution:
    The array should maintain a bit-flag "not_written_since_last_reassignment" per physical block, stating that the physical block has been unmapped and not written to since then. As long as the flag is set (i.e. true), the array should return all-zero blocks on any read request. With such flags there would be no need to physically zero unmapped blocks.

  2. Erwin

    Hi Chris,

    Indeed, 42 MB is not a bad number. Because of this we don't have as much CPU overhead, plus we can use a very good sequential algorithm to get this sorted, whereas much smaller segments could be scattered all over a disk pool, which causes much more randomness and hence a significant performance impact. (Yes, this 42 MB didn't just drop out of the sky; it brings benefits all over the place w.r.t. external storage, HDP, HDT etc. We always told you we have smart folk on board. :-))

    W.r.t. the T10 UNMAP being secure or insecure, or whether it should have this option, that's up for debate. Be aware that pools can be shared between hypervisors and non-hypervisors as well as raw applications like Oracle. Also, how does a hypervisor determine whether a certain LBA needs to be securely erased or not? If the OS does some memory swapping, the hypervisor really cannot tell whether this is due to the swap process or because an application is deleting files. As I mentioned in one of my first blog posts, the lack of data intelligence mapped onto the infrastructure is massive, to such an extent that every piece of storage equipment out there has absolutely no clue what's going on at the application level (some exceptions apply). This is where the real integration should be. That means the apps should determine whether an UNMAP operation needs to be wiped or not, rather than having the hypervisors interfere with these kinds of operations.

    Anyway, let's see what the future brings.

    Thanks again for your reply.

    Cheers
    Erwin

  3. Chris M Evans

    Erwin

    This raises a number of questions.

    First, 42 MB might not be such a bad number after all, as the number of block releases and remappings would be significantly less than with, say, 16 KB. Next, you would expect VMware would *not* be releasing blocks back to the free pool on a regular basis unless some kind of threshold was met. For example, as VMFS marks a block as free, if it kept (say) 10% of blocks in a local queue and reused them, then there would be less need to push the release off to the array.

    I also think it means that the implementation of UNMAP on the underlying array is flawed. T10, VMware and the storage vendors should have implemented both a secure and an unsecure UNMAP, one which wipes data and one which doesn't, rather than assuming everything is so critical that it needs to be wiped at release. Where arrays are only connected to VMware, an unsecure UNMAP might be acceptable as new blocks would be zeroed out on usage.

    Perhaps we’ll see some of these functions coming along as they mature.

    Cheers
    Chris