
Help, my Thin Provisioning is not working

On many occasions I’ve seen posts from storage administrators who mapped some LUNs to hosts, and on first use the entire pool got whacked with all bells and whistles going off. (Yes, we can control bells and whistles. :-))


The administrator did nothing wrong; however, he should have communicated with the server admin what the LUNs were for and how they were going to be used. As I mentioned in my previous post on Thin Provisioning, the array doesn’t really know what’s going on from a host perspective. It knows, due to HMO (port group settings), which type of host is connected and adjusts some internal knobs to accommodate the commands from that particular host or application.
What it does not know is how that application is using the array.

Remember that a storage array just knows about reads and writes (besides the special commands specific for management).

Under normal circumstances a LUN is mapped, and on the host this LUN is then formatted with a specific filesystem. Some filesystems use only the first couple of sectors of a disk to outline the mapping of the blocks, so if the application wants to write a chunk of data, the filesystem creates the inode, registers the mapping in the filesystem table at the beginning of the disk, and away we go.

When we look at the disk from this perspective after formatting, it looks like this:

————————————————————————————
|************   |            |               |             |            |
————————————————————————————

Only the first sector is written and the rest is still empty.

The same would happen if this LUN was mapped out of a thin provisioned pool. Only the first couple of sectors on the virtual disk would be written, and therefore only the page occupying those sectors would be marked as used in the pool. The rest would still be empty, so the array would not allocate any pages to this particular LUN.

So far all is well.
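
To make that page-allocation behaviour concrete, below is a minimal sketch in Python of a toy thin pool that only marks a page as used the first time a write lands in it. The ThinPool class, the 32 MiB page size and the 1000-page capacity are all assumptions made up for this illustration, not any vendor’s actual implementation.

# Toy model, not any vendor's implementation: a thin-provisioned pool that
# allocates a page only when a write touches it for the first time.
# The 32 MiB page size is an assumption; real page sizes are vendor specific.

PAGE_SIZE = 32 * 1024 * 1024

class ThinPool:
    def __init__(self, capacity_pages):
        self.capacity_pages = capacity_pages
        self.allocated = set()        # page numbers written at least once

    def write(self, offset, length):
        """Record a host write; every page the write lands in becomes 'used'."""
        first = offset // PAGE_SIZE
        last = (offset + length - 1) // PAGE_SIZE
        for page in range(first, last + 1):
            self.allocated.add(page)  # marked used no matter how few bytes hit it

    def used_pages(self):
        return len(self.allocated)

# A TP-friendly filesystem keeps its metadata at the start of the LUN:
pool = ThinPool(capacity_pages=1000)
pool.write(0, 8 * 1024 * 1024)        # 8 MiB of metadata written up front
print(pool.used_pages())              # -> 1: only the first page is consumed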

The problem begins when the same LUN is formatted with a filesystem that does interleaved formatting. The concept here is that the filesystem mapping table is spread over the entire disk, which might improve performance if you do this on a single physical disk.

————————————————————————————
|**          | **           | **           | **            | **           | **         |
————————————————————————————

On writes, the chance that you’re able to update the mapping table, create the inodes and write the data in one stroke is fairly good.

Now compare the interleaved method to the one I described before and you will be able to figure out why this really renders Thin Provisioning useless. Since the chance is near 100% that every page backing that LUN will be “touched” at least once, each entire page will be marked as used in the pool even though the net amount of data written is next to nothing.
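
Continuing the toy ThinPool sketch from above (the 16 MiB interleave interval and the 8 KiB metadata chunks are made-up numbers, not any real filesystem’s geometry), the contrast is easy to see:

# Same 1000-page LUN, now "formatted" with its metadata interleaved across the disk.
lun_size = 1000 * PAGE_SIZE

interleaved = ThinPool(capacity_pages=1000)
for offset in range(0, lun_size, 16 * 1024 * 1024):   # a small metadata chunk every 16 MiB
    interleaved.write(offset, 8 * 1024)                # only 8 KiB actually written each time...

print(interleaved.used_pages())   # -> 1000: every page was touched, so the whole pool is "used"
# Net data written: 2000 chunks of 8 KiB, roughly 16 MiB, yet all 1000 pages are now allocated.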

Now you might think: “OK, I’ll choose a filesystem which is TP friendly and I’m sorted”.

Well, not quite. Server administrators very often like to have their own “storage management tool” in the form of a volume manager. This allows them to virtualise “physical” LUNs mapped out of an array into a single entity on their systems.
The problem with this is that it behaves the same as the TP-unfriendly filesystem, with the difference that it’s no longer the filesystem doing the interleaving of metadata but the volume manager doing the same thing.

In both cases a TP pool will fill up pretty quickly without an application having written a single bit of actual data.

All storage vendors have whitepapers and instructions available on how to plan for these situations. If you don’t want to run into surprises, I suggest you have a look at them.

Regards,
Erwin van Londen

Maintenance

Why do I keep wondering why companies don’t maintain their infrastructure? It seems to be the exception rather than the rule to come across software and firmware that is newer than six months old. True, I admit, it’s a beefy task to keep this all lined up, but then again you know this in advance, so why isn’t there a plan to get it sorted every six months or even more frequently?


In my day-to-day job I see a lot of information from many customers around the world. During implementation phases there is often a lot of focus on interoperability and software certification stacks: does server A with HBA B work with switch Y on storage platform Z? This is very often a rubber-stamping exercise which goes out the door with the project team. The moment a project has been done and dusted, this very important piece is often neglected for many reasons: time constraints, risk aversion, change freezes, fear that something will go wrong, and so on. However, the chance businesses take by not properly maintaining their software is like walking a tightrope over the Grand Canyon in wind gusts over 10 Beaufort. You might get a long way, but sooner or later you will fall.

Vendors do actually fix things, although some people might think otherwise. Remember that a storage array contains around 800,000 pieces of hardware and a couple of million lines of software that make this thing run and store data for you. If you compared that to a car and ran the same maintenance schedule, you would be requiring the car to run for 120 years non-stop without changing oil, filters, tyres, exhaust, etc. So would you trust your children in such a car after two years, or even after six months? I don’t, but businesses still seem to take that chance.

So is it fair for vendors to ask (or demand) that you maintain your storage environment? I think it is. Back in the days when I had my feet wet in the data centres (figuratively speaking, that is) I managed a storage environment of around 250 servers, 18 FC switches and 12 arrays, a pretty beefy environment in those days. I set myself a threshold that firmware and drivers would not exceed a four-month lifetime. That meant that when newer code came out from a particular vendor, it was always implemented before those four months were over.
I spent around two days every quarter generating the paperwork (change requests, vendor engineer appointments, etc.) and another two days getting it all implemented. Voilà, done. The more experienced you become in this process, the better and faster it gets.
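
As a purely hypothetical illustration of that four-month rule, a small script along the lines of the sketch below could walk an inventory list and flag anything that has outlived the threshold. The device names, code levels and dates are invented for the example; in practice you would pull them from whatever asset or configuration management system you use.

# Illustrative sketch of the four-month rule described above: flag any firmware
# or driver level that has been running longer than the threshold.
# The inventory entries below are made up for the example.
from datetime import date, timedelta

THRESHOLD = timedelta(days=4 * 30)   # roughly the four-month lifetime mentioned above

inventory = [
    # (device, component, level, date the level was installed)
    ("array01", "microcode",  "70-04-31", date(2011, 1, 10)),
    ("fcsw03",  "fabric OS",  "6.4.1a",   date(2010, 6, 2)),
    ("esx112",  "HBA driver", "8.2.0.4",  date(2010, 11, 23)),
]

today = date.today()
for device, component, level, installed in inventory:
    age = today - installed
    if age > THRESHOLD:
        print(f"{device}: {component} {level} is {age.days} days old - schedule an upgrade")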

Another problem lies with storage service providers, or service providers in general. They too depend on their customers to get all the approvals stamped, signed and delivered, which is very often seen as a huge burden, so they just let it go and eat the risk of something going wrong. The problem is that during RFP/RFI processes the customers do not ask for this, the sales people are not interested since it might become a show-stopper for them, and as a result nothing around ongoing infrastructure maintenance is documented or contractually agreed in delivery or sales documents.

As a storage or service provider I would turn this “obligation” to my advantage and say: “Dear customer, this is how we maintain our infrastructure environment, so that we, and you, are assured of immediate and accurate vendor support and have an up-to-date infrastructure that minimises the risk of anything going downhill at some point in time.”

I’ve seen it happen where high-severity issues with massive outages were logged with vendors, and those vendors came back and said: “Oh yes sir, we know of this problem and we fixed it a year ago, but you haven’t implemented that level of firmware; yours is far outdated”.

How would that look to your customers if you’re a storage/service provider, or a bank whose online banking outage got some coverage in the newspapers?

Anyway, the moral is “Keep it working by maintaining your environment”.

BTW, for SNWUSA Spring 2011 I wrote a SNIA tutorial, “Get Ready for Support”, which outlines some of the steps you need to take before contacting your vendor’s support organisation. Just log on to www.snia.org/tutorials; it will be posted there after the event.

Cheers,
Erwin van Londen