
ViPR – Frankenstorage Revisited | Architecting IT

Fellow blogger and keen dissector of fluff Chris Evans really hit the nail on the head with this one.

ViPR – Frankenstorage Revisited | Architecting IT:

Even after reading the announcement from EMC a couple of times I really struggle to work out what has actually been announced. It looks like they crammed in every existing technology available in the storage world and overlaid it with every other piece of existing technology in the storage world.

I am wondering if EMC's market watchers have been living under a rock for the past six years, because the ability to virtualise storage in multiple north- and south-bound ways has been offered by HDS for a long time. With the introduction of HCP (Hitachi Content Platform), HDS provided multiple access methods (HTTP/NFS/REST) to object-based storage, which can retrieve and store information from multiple sources including block, NFS, etc. HNAS added a high-performance file-based platform on top of that. The storage virtualisation stack was invented by them, and the overall management is a single pane of glass covering all of this, even for hardware three generations back. The convergence of storage products and protocols, and making them available to hook into cloud platforms like OpenStack, vSphere, EC2, etc., is a natural evolution. It seems, however, that EMC needs an entire marketing department to obscure the fact that they have a very disparate set of products which were either designed in-house by different engineering teams who do not talk to each other or, as has been EMC's preferred way, obtained as foreign technologies via acquisitions which by design never talk to other technologies, all in the name of having a market-differentiating product.

As Chris said, for years EMC have been contemptuous of storage virtualisation and didn't have an answer to what the rest of the industry, most notably HDS and IBM, was doing. The significant ramp-up of cloud platforms left them in a position where they would lose out on customers who were looking at this for their next-generation cloud and storage platforms. The speed with which this has now been brought to market is almost evidence that the quality of the whole is considerably less than the sum of its components, which leads you to ask: "Do I really want this?"

ViPR needed a significant project (Bourne) to a) try and tie all this stuff together from all the EMC product dungeons and b) ramp up the entire marketing department and create a soup that has been stirred long enough to look massive and new but leaves a very bitter aftertaste.

I would go for some nice Japanese sushi instead: well balanced, thought through, looks great and doesn't require stomach pills afterwards.

Cheers,
Erwin

SCSI UNMAP and performance implications

While listening to Greg Knieriemen's podcast on Nekkid Tech there was some debate on VMware's decision to disable the SCSI UNMAP command in vSphere 5.something. Chris Evans (www.thestoragearchitect.com) had some questions as to why this happened, so I'll try to give a short description.

Be aware that, although I work for Hitachi, I have no insight into the internal algorithms of any vendor, but the T10 (INCITS) specifications are public and every vendor has to adhere to them, so here we go.

With the introduction of thin provisioning in the SBC-3 specs, a whole new can of options, features and functions came out of the T10 (SCSI) committee which enabled applications and operating systems to do all sorts of nifty stuff on storage arrays. Basically it meant you could give a host a 2 TB volume whilst in the background you only had 1 TB physically available. The assumption with thin provisioning (TP) is that a host or application won't use that 2 TB in one go anyway, so why pre-allocate it?

So what happens is that the storage array provides the host with a range of addressable LBAs (Logical Block Addresses) which the host can use to store data. On the back end of the array these LBAs are only allocated upon actual use. The array has one or more so-called disk pools where it can physically store the data. The mapping between the "virtual addressable LBAs" which the host sees and the back-end physical storage is done via mapping tables. Depending on the vendor's implementation, certain "chunks" out of these pools are reserved as soon as one LBA is allocated. This prevents performance bottlenecks from a housekeeping perspective, since the array doesn't need to manage each individual LBA mapping. Each vendor has different page/chunk/segment sizes and different algorithms to manage these, but the overall method of TP stays the same.

So let's say the segment size on an array is 42 MB (:-)) and an application writes to an LBA which falls into this chunk. The array updates the mapping tables, allocates cache slots and does all the other housekeeping that happens when a write IO comes in. From that moment the entire 42 MB is allocated to that particular LUN presented to that host. Any subsequent write to any LBA which falls into this 42 MB segment is just a regular IO from an array perspective; no additional overhead is needed or generated w.r.t. TP maintenance. As you can see this is a very effective way of maintaining an optimum capacity usage ratio, but as with everything there are some things you have to consider as well, like over-provisioning and its ramifications when things go wrong.
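To make the mapping idea a bit more concrete, here is a minimal sketch (in Python) of how an array might allocate back-end segments only when a virtual LBA is first written. The 42 MB segment size matches the example above; the class and variable names are purely illustrative, real arrays do this in firmware with far more bookkeeping.

```python
# Toy model of a thin-provisioned LUN: back-end segments are only
# allocated the first time a host writes into them.

SEGMENT_SIZE = 42 * 1024 * 1024   # 42 MB, the example segment size used above
BLOCK_SIZE   = 512                # bytes per LBA

class ThinLun:
    def __init__(self, virtual_size, pool):
        self.virtual_size = virtual_size
        self.pool = pool              # free back-end segment ids in the disk pool
        self.mapping = {}             # virtual segment index -> back-end segment id

    def write(self, lba, data):
        seg_index = (lba * BLOCK_SIZE) // SEGMENT_SIZE
        if seg_index not in self.mapping:
            # First write into this 42 MB region: allocate a segment
            # from the shared pool and update the mapping table.
            self.mapping[seg_index] = self.pool.pop()
        # Subsequent writes into the same segment are "just a regular IO":
        # the mapping already exists, no extra TP housekeeping is needed.
        return self.mapping[seg_index]    # where the data would physically land

# A 2 TB virtual LUN backed by a pool of only 1 TB of real capacity:
pool = list(range((1 * 2**40) // SEGMENT_SIZE))
lun = ThinLun(virtual_size=2 * 2**40, pool=pool)
lun.write(lba=0, data=b"hello")       # allocates the first segment mapping
lun.write(lba=1000, data=b"world")    # same 42 MB segment, no new allocation
print(len(lun.mapping), "segments allocated")
```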

Let's assume that is all under control and move on.

Now what happens if data is no longer needed or deleted? Let's assume a user deletes a file which is 200 MB in size (a video, for example). In theory this file occupied at least five TP segments of 42 MB. But since many filesystems try to avoid unnecessary IO, they do not scrub the deleted data back to zero; they just delete the FS entry pointer and remove the inodes from the inode table. This means that only a couple of bytes have effectively been changed on the physical disk and in the array cache.
The array has no way of knowing that these couple of bytes, which have been returned to 0, represent an entire 200 MB file, and as such the segments are still allocated in cache, on disk and in the TP mapping table. This also means that these TP segments can never be re-mapped to other LUNs for more effective use if needed. To get around this there have been various workarounds, like host-based scrubbing (putting all bits back to 0), defragmentation to re-align all used LBAs and scrub the rest, and some array-based solutions which check whether segments contain only zeros and, if so, remove them from the mapping table and make them available for re-use.
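As an illustration of that "zero detect" style of array-based reclamation, the sketch below extends the toy model from above: a background pass scans allocated segments and returns any segment that now contains only zeros to the free pool. The function names are made up, but it shows why data has to be scrubbed back to zero before the array can reclaim anything.

```python
def reclaim_zeroed_segments(lun, read_segment):
    """Background 'zero detect' pass over a ThinLun: any allocated segment
    that now contains only zeros is unmapped and handed back to the free
    pool. read_segment(backend_id) is assumed to return the segment's bytes."""
    for seg_index, backend_id in list(lun.mapping.items()):
        data = read_segment(backend_id)
        if not any(data):                  # every byte in the segment is zero
            del lun.mapping[seg_index]     # remove the TP mapping entry
            lun.pool.append(backend_id)    # segment is free for re-use
```

Note that if the filesystem only removed the inode and left the 200 MB of file data in place, `any(data)` stays true and nothing is reclaimed, which is exactly the problem described above.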

As you can imagine this is not a very effective way of using TP: you would be busy cleaning things up on a fairly regular basis, so there had to be another solution.

So the T10 friends came up with two new things, namely WRITE SAME and UNMAP. WRITE SAME does exactly what it says: it issues a write command to a certain LBA and tells the array to also write this bit stream to a certain set of LBAs. The array then executes this, offloading the host from keeping track of all the write commands so it can do more useful stuff than pushing bits back and forth between itself and the array. This can be very useful if you need to deploy a lot of VMs, which by definition have a very similar (if not exactly the same) pattern. The other way around there is a similar benefit: if you need to delete VMs (or just one), the hypervisor can instruct the array to clear all LBAs associated with that particular VM, and if the UNMAP command is used in conjunction with WRITE SAME you basically end up with the situation you want. The UNMAP command instructs the array that certain LBAs are no longer in use by this host and can therefore be returned to the free pool.
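For the curious, UNMAP carries its LBA ranges in a small parameter list defined in SBC-3: an 8-byte header followed by 16-byte block descriptors (8-byte starting LBA, 4-byte block count, 4 reserved bytes). The sketch below only builds that payload in Python; actually sending it to a device needs a SCSI pass-through (on Linux, sg3_utils' sg_unmap does this for you), so treat it purely as an illustration of the format.

```python
import struct

def build_unmap_parameter_list(extents):
    """Build an SBC-3 style UNMAP parameter list for a list of
    (starting_lba, number_of_blocks) tuples: an 8-byte header followed
    by one 16-byte block descriptor per extent."""
    descriptors = b"".join(
        struct.pack(">QL4x", lba, num_blocks)   # 8-byte LBA, 4-byte count, 4 reserved
        for lba, num_blocks in extents
    )
    unmap_data_length = 6 + len(descriptors)    # bytes following the length field
    header = struct.pack(">HH4x", unmap_data_length, len(descriptors))
    return header + descriptors

# Tell the array that two LBA ranges are no longer in use by this host:
payload = build_unmap_parameter_list([(0x1000, 2048), (0x200000, 8192)])
print(len(payload), "bytes:", payload.hex())
```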

As you can imagine, if you just use the UNMAP command this is very fast from a host perspective and the array can handle it very quickly, but here comes the catch. When the host instructs the array to UNMAP the association between the LBAs and the LUN, it is basically only a pointer that is removed from the mapping table; the actual data still exists, either in cache or on disk. If that same segment is then re-allocated to another host, in theory that host can issue a read command to any given LBA in the segment and retrieve the data that was previously written by the other system. Not only can this confuse the operating system, it also implies a huge security risk.

In order to prevent this the array has one or more background threads to clear out these segments before they are effectively returned to the pool for re-use. These tasks normally run at a pretty low priority so they don't interfere with normal host IO. (Remember that it is still the same CPU(s) that have to take care of this.) If the CPUs are fast and the background threads are smart enough, under normal circumstances you will hardly see any difference in performance.

As with all instruction-based processing, the work has to be done either way, be it by the array or by the host. So if there is a huge amount of demand, with hypervisors moving a lot of VMs between LUNs and/or arrays, there will be a lot of deallocation (UNMAP), clearance (WRITE SAME) and re-allocation of these segments going on. It depends on the scheduling algorithm at which point the array decides to reschedule the background and front-end processes such that there will be a delay in the status response to the host. On the host it looks like a performance issue, but in essence you have overloaded the array with too many commands for work which normally (without thin provisioning) would have been done by the host itself.
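To get a feel for why this shows up as a "performance problem" on the host, here is a rough back-of-the-envelope calculation of the scrubbing work a burst of VM deletions can generate. Every number below is made up purely for illustration.

```python
# Hypothetical numbers, purely to illustrate the order of magnitude.
segment_size_mb = 42      # the example segment size used above
vms_deleted     = 50      # VMs cleaned up in one maintenance window
vm_size_gb      = 40      # average space each VM had allocated
scrub_rate_mbps = 500     # MB/s the array can spend on background scrubbing
                          # without hurting host IO

segments_to_clear = (vms_deleted * vm_size_gb * 1024) // segment_size_mb
scrub_time_min    = segments_to_clear * segment_size_mb / scrub_rate_mbps / 60

print(f"{segments_to_clear} segments to scrub, "
      f"roughly {scrub_time_min:.0f} minutes of background work")
# If UNMAPs keep arriving faster than this, the array has to either delay
# host IO or delay reclamation - which the host perceives as a slowdown.
```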

You can debate whether a larger or smaller segment size is beneficial, but either way there is a trade-off: with a smaller segment size the CPU has much more overhead in managing mapping tables, whereas with bigger segment sizes the array needs to scrub more space on deallocation.
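The trade-off becomes visible with the same kind of rough arithmetic (again, the pool size and segment sizes below are made up): smaller segments mean more mapping-table entries for the controller to manage, larger segments mean more data to scrub every time one is reclaimed.

```python
# Hypothetical comparison of segment sizes for a 100 TB physical pool.
pool_bytes = 100 * 2**40

for segment_mb in (4, 42, 256):
    segment_bytes = segment_mb * 2**20
    mapping_entries = pool_bytes // segment_bytes   # table the controller must manage
    print(f"{segment_mb:>4} MB segments: {mapping_entries:>10,} mapping entries, "
          f"{segment_mb} MB scrubbed per reclaimed segment")
```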

So this is the reason why VMware disabled the UNMAP command in this patch: a lot of "performance problems" were seen across the world when the feature was enabled. Given that it was VMware that disabled it, you can imagine that multiple arrays from multiple vendors were impacted in some sense; otherwise they would have been more specific about which array vendors and types were affected, which they haven't been.

SNWUSA Pre-conference schedule

As this week is going to be a fairly interesting one at SNWUSA in Santa Clara, I thought I might give some short daily updates. In the twittersphere there is already some great noise with all the schedules around #storagebeers and other social interactions. Yesterday was travel day for me. Getting on a massive Airbus A380 is always quite an experience. Since I used my points on other gadgets (mainly for my kids),
I had to fly tourist class, and 14 hours cranked into a chair is not the best way to travel, but on an A380 it's still doable. At least they have a pretty good entertainment (or annoyance-distraction) system on that aircraft, so time "flew" by pretty fast. The flying Kangaroo brought me safely to Los Angeles.
After a short wait at LAX the next plane (an Embraer, what a difference) took me to San Jose, and I stopped by the office on Central Expressway to catch up on some emails and check out the HDS corporate HQ.

Tonight I’m going to have a catchup with buddy Heff (no not that Heff, Michael Heffernan or @virtualheff for insiders) and have some dinner.

I think it’s going to be a good and exiting week.

More to come.

Cheers
E

The Smarter Storage Admin (Work Smarter not Longer)

Let's start off with a question: who is the best storage admin?
1. The one that starts at 07:00 and leaves at 18:00
2. The one that starts at 09:00 and leaves at 16:00

Two simple answers, but they can make a world of difference to employers. Whenever an employer answers with no. 1, they often remark that this admin does a lot more work and is more loyal to the company. They might be right; however, daily time spent at work is not a good measure of productivity, so the amount of work done might well be less than no. 2's. This means that an employer has to measure other things and define clear milestones that have to be met.


Whenever I visit customers I often hear the complaint that they spend too much time on day-to-day administration: digging through log files, checking status messages, restoring files or emails, etc. These activities can occupy more than 60% of an administrator's day, and much of that can be avoided.
To be more efficient one has to change the mindset from knowing everything to knowing what doesn't work. It's a very simple principle; however, to get there you have to do a lot of planning.
An example: when a server reboots, do I want to know that its switch port goes offline? Maybe I do, maybe I don't. It all depends on the impact of that server. Is the reboot planned or not? Or maybe the server belongs to a test environment, in which case I don't want to get a phone call in the middle of the night at all.
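As a trivial illustration of that "know what doesn't work" mindset, the sketch below filters events by environment and planned changes before anyone gets paged, rather than forwarding every port-down message. All host names and rules are hypothetical; real environments would drive this from a CMDB or change calendar.

```python
# Hypothetical event filter: only wake someone up for unplanned events
# on production equipment; everything else goes into a daily report.

PLANNED_CHANGES = {"server42"}            # hosts in an approved change window
TEST_HOSTS      = {"labhost1", "labhost2"}

def route_event(host, event):
    if host in TEST_HOSTS:
        return "ignore"                   # test environment: no midnight phone calls
    if host in PLANNED_CHANGES and event == "switch_port_offline":
        return "daily_report"             # expected side effect of a planned reboot
    return "page_on_call"                 # everything else is worth knowing about

print(route_event("server42", "switch_port_offline"))   # daily_report
print(route_event("dbprod01", "switch_port_offline"))   # page_on_call
```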

The software and hardware in a storage environment consist of many different components, and they all have to work together. The primary goal of such an environment is to move bytes back and forth to disk, tape or another medium, and they do that pretty well nowadays. The problem, however, is the management of all these different components, each requiring its own management tools, learning tracks and operating procedures. Even if we shift our mindset to "what doesn't work", we still have to spend a lot of time and effort on things we often don't want to know about.

Currently there are no tools available that support the whole range of hardware and software, so for specific tasks we still need the tools the vendors provide. However, for day-to-day administration there are some good tools which can be very beneficial for administrators. These tools can save more than 40% of an administrator's time, so they can do more work in less time. It takes a simple calculation to determine the ROI, and another plus is that the chance of making mistakes is drastically reduced.
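That "simple calculation" could look something like the sketch below. The figures are placeholders only, so plug in your own staffing numbers, hourly costs and tool pricing.

```python
# Placeholder figures - substitute your own rates and tool pricing.
admins         = 3
hours_per_year = 1700        # working hours per admin
hourly_cost    = 75          # fully loaded cost per hour
time_saved     = 0.40        # the ~40% time saving mentioned above
tool_cost_year = 60000       # annual licence and support for the tool

savings = admins * hours_per_year * hourly_cost * time_saved
roi     = (savings - tool_cost_year) / tool_cost_year * 100
print(f"Yearly savings: {savings:,.0f}, ROI: {roi:.0f}%")
```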

Another thing to consider is whether these tools fit into the business processes, where these are defined within a company. Does the company have ITIL, PRINCE2 or any other IT service management method in place? If so, the storage management tool has to align with these processes, since we don't want to do things twice.

Last but not least is support for open standards. The SNIA (Storage Networking Industry Association) is a non-profit organization which was founded by a number of storage vendors in the late 90s. The SNIA works with its members around the globe to make storage networking technologies understandable, simpler to implement, easier to manage, and recognized as a valued asset to business. One of its standards, recently ratified by ANSI, is SMI-S. This standard defines a very large subset of storage components which can be managed through a single common methodology. This means that you get one common view of all your storage assets, with the ability to manage them through a single interface independent of the vendor. If your storage management tool is based on this standard you do not have a vendor lock-in, and day-to-day operations will be more efficient.
This does imply that the vendor has to support the SMI-S standard as well, so if you are looking for a storage solution, make sure you make the right choice: ask the vendor whether they support SMI-S and to what extent.
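To give an idea of what "a single common methodology" looks like in practice, the snippet below uses the open-source pywbem library to ask an SMI-S provider for its storage volumes over standard CIM-XML. The provider URL, credentials and namespace are placeholders and differ per vendor, so treat this as a sketch of the approach rather than a recipe for any particular array.

```python
import pywbem

# Placeholder connection details - every vendor's SMI-S provider has its
# own URL, credentials and namespace, so check the provider documentation.
conn = pywbem.WBEMConnection(
    "https://smis-provider.example.com:5989",
    creds=("admin", "secret"),
    default_namespace="root/smis",
)

# The same CIM classes apply regardless of which vendor sits behind the
# provider; some properties may be empty depending on the implementation.
for volume in conn.EnumerateInstances("CIM_StorageVolume"):
    size_bytes = volume["BlockSize"] * volume["NumberOfBlocks"]
    print(volume["ElementName"], size_bytes, "bytes")
```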

Greetz,
Erwin