Tag Archives: disk

Short stroking disk drives to improve performance

I was reading a post from Hans DeLeenheer (VEEAM) which ramped up quite a bit, including responses from Calvin Zito (HP), Alex McDonald (NetApp) and Nigel Poulton. The discussion started with a comment that X-IO had “special” firmware which improved IO performance. Immediately the term “short-stroking” came up, which leads one to believe X-IO is cheap-skating on its technology. I was under the same impression at first, right until the moment I saw that Richard Lary is (more or less) the head of tech at X-IO, together with Clark Lubbers and Bill Pagano who come out of the same DEC stable. For those of you who don’t know Richie, he’s the one who ramped up Digital StorageWorks back in the late 70’s/early 80’s and also stood at the cradle of VAX/VMS. (Yeah yeah, I’m getting old, google it if you don’t know what I’m talking about.)

Continue reading

Why partition alignment on disk matters (Linux)

Linux has been pretty good with and for storage. The sheer volume of options w.r.t. filesystems, volume managers, access methods (FC, iSCSI, NFS, DAS etc.) and multi-pathing, but also the very broad support of the hardware ecosystem, is something to be proud of. The issue with storage support is that you ALWAYS have to maintain a massive backward-compatibility trail with previous generations of technology. Not only from a hardware perspective, but the soft side also needs to retain the older technology. I saw a video featuring Linus, Greg Kroah-Hartman, Sarah Sharp and Ted Ts’o over here where Ted mentioned that KVM helped him massively with regression testing for the storage projects he’s involved in. (As you may know Ted maintains the ext(2/3/4) filesystems among other things.) That brings me to the bottleneck of history in a technology environment and why the topic in the title is important.

Continue reading

One rotten apple spoils the bunch – 3

In the previous two blog posts we looked at some reasons why a Fibre Channel fabric may still run into problems even with all redundancy options available and MPIO checking for link failures.
The challenge is to identify any problematic port and act upon indications that certain problems might be apparent on a link.

So how do we do this in Brocade environments? Brocade has some features built into its FOS firmware which allow you to identify certain characteristics of your switches. One of them (Fabric Watch) I briefly touched upon previously. Two other features which utilize Fabric Watch are bottleneckmon and portfencing. Let’s start with bottleneckmon.

Bottleneckmon was introduced in the FOS code stream to identify two different kinds of bottlenecks: latency and congestion.

Latency is caused by a device that cannot cope with the offered load whilst that load does not exceed the capabilities of the link. As an example, let’s say that a link has a synchronized speed of 4G but the load on that link reaches no higher than 20MB/s and already the switch is unable to send more frames due to credit shortages. A situation like this will most certainly cause the sort of credit issues we’ve talked about before.

Congestion is when a link is overloaded with frames beyond the capabilities of the physical link. This often occurs on ISLs and target ports when too many initiators are mapped on those links. This is often referred to as an oversubscribed fan-in ratio.

A congestion bottleneck is easily identified by comparing the offered load to the capability of the link. Extending the connection with additional links (ISLs, trunk ports, HBAs) and spreading the load over other links, or localizing/confining the load on the same switch or ASIC, will most often help. Latency however is a very different ballgame. You might argue that Brocade also has a port counter called tim_txcrd_zero and that when this counter increments frequently (i.e. the port is out of transmit credits fairly often) you also have a latency device, but that’s not entirely true. It may also mean that the link is very well utilized and is simply using all its credits. In that case you should also see a fair link utilization w.r.t. throughput, but be aware this also depends on frame size.
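As a simple illustration of the difference between the two, the decision roughly looks like the sketch below. This is a heuristic of my own for illustration only, not Brocade’s actual detection logic, and the thresholds are made-up numbers.

def classify_port(throughput_mbps, link_speed_gbps, pct_time_zero_credit):
    """Rough illustration of the latency vs congestion distinction.

    throughput_mbps      : observed throughput on the port (MB/s)
    link_speed_gbps      : negotiated link speed (e.g. 4 for a 4G FC link)
    pct_time_zero_credit : percentage of time the port had zero transmit credits
    """
    # Roughly 100 MB/s of usable bandwidth per Gbit of FC line rate (approximation)
    utilisation = throughput_mbps / (link_speed_gbps * 100.0)

    if pct_time_zero_credit > 20 and utilisation < 0.3:
        # Out of credits while the wire is mostly idle: the device on the other
        # end is not returning credits fast enough -> latency bottleneck
        return "latency bottleneck (slow-drain device)"
    if utilisation > 0.9:
        # The wire itself is the limit -> congestion bottleneck
        return "congestion bottleneck (oversubscribed link)"
    if pct_time_zero_credit > 20:
        # High credit-zero time on a busy link may simply mean the link is
        # well utilised, as noted above
        return "busy link, needs further investigation"
    return "healthy"

print(classify_port(throughput_mbps=20, link_speed_gbps=4, pct_time_zero_credit=40))   # latency
print(classify_port(throughput_mbps=390, link_speed_gbps=4, pct_time_zero_credit=60))  # congestion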

So how do we define a link as a high-latency bottleneck? The bottleneckmon configuration utility provides a vast number of parameters you can use, however I would advise using the default settings as a start by just enabling bottleneck monitoring with the “bottleneckmon --enable” command. Also make sure you configure the alerting with the same command, otherwise the monitoring will be passive and you’ll have to check each switch manually.

If a high-latency device is caused by physical issues like encoding/decoding errors you will get notified by the bottleneckmon feature, however when this happens in the middle of the night you will most likely not be able to act upon the alert in a timely fashion. As I mentioned earlier it is important to isolate such a badly behaving device as soon as possible to prevent it from having an adverse effect on the rest of the fabric. The portfencing utility will help with that. You can configure certain thresholds on port types and errors, and if such a threshold is reached the firmware will disable the port and alert you of it.

I know many administrators are very reluctant to have a switch take these kinds of actions on its own, and for a long time I agreed with that, however having seen the massive devastation and havoc a single device can cause I would STRONGLY advise turning this feature on. It will save you long hours of troubleshooting and elongated conference calls whilst your storage network is bringing your applications to a halt. I’ve seen it many times, and even after pointing to a problem port the decision to disable such a port is very often subject to change-management politics. I would strongly suggest that if you have such guidelines in your policies, NOW is the time to revise them and let the intelligence of the switches prevent these problems from occurring.

For a comprehensive overview, options and configuration examples I suggest you first take a look at the FOS admin guide of the latest FOS release. Brocade has also published some white papers with more background information.

Regards,
Erwin


Help, my Thin Provisioning is not working

On many occasions I’ve seen posts from storage administrators who mapped some LUNs to hosts and on first use the entire pool got whacked with all bells and whistles going off. (Yes, we can control bells and whistles. :-))


The administrator did nothing wrong, however he should have communicated with the server admin about what the LUNs were for and how they were going to be used. As I mentioned in my previous post on Thin Provisioning, the array doesn’t really know what’s going on from a host perspective. It knows, due to HMO (port group settings), which type of host is connected and adjusts some internal knobs to accommodate the commands from that particular host or application.
What it does not know is how that application is using the array.

Remember that a storage array just knows about reads and writes (besides the special commands specific for management).

Normally a LUN is mapped and on the host this LUN is then formatted with a specific filesystem. Some filesystems use only the first couple of sectors of a disk to outline the mapping of the blocks, so if the application wants to write a chunk of data the filesystem creates the inode, registers the mapping in the filesystem table at the beginning of the disk and away we go.

When we look at the disk from this perspective after formatting, it looks like this:

------------------------------------------------------------------
|************|            |            |            |            |
------------------------------------------------------------------

Only the first couple of sectors are written and the rest is still empty.

The same would happen if this LUN was mapped out of a thin provisioned pool. Only the first couple of sectors on the virtual disk would be written, therefore only the page holding these sectors would be marked as used in the pool; the rest would still be empty and thus the array would not allocate any further pages to this particular LUN.

So far all is well.

The problem begins when the same LUN is formatted with a filesystem which does interleaved formatting. The concept here is that the filesystem mapping table is spread over the entire disk, which might improve performance if you do this on a single physical disk.

------------------------------------------------------------------
|**          |**          |**          |**          |**          |
------------------------------------------------------------------

On writes the chances are fairly good that you’re able to update the mapping table, create the inodes and write the data in one stroke.

Now compare the interleaved method to the one I described before and you will be able to figure out why this really renders Thin Provisioning useless. Since the chance is near 100% that every page backing that LUN will be “touched” at least once, each entire page will be marked as used in the pool even though the net amount of data written is next to nothing.
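To make this visible, here is a minimal sketch that counts how many pool pages get touched by the two formatting styles. The LUN size, pool page size and metadata spacing are made-up numbers for illustration; real values depend on your array and filesystem, but the effect shows up whenever the pool page is larger than the distance between the interleaved metadata blocks.

# Illustrative only: how many thin-pool pages are allocated when a LUN is
# formatted, for a "metadata at the start" layout vs an interleaved layout.
LUN_SIZE     = 10 * 1024**3     # 10 GiB thin LUN (hypothetical)
PAGE_SIZE    = 256 * 1024**2    # 256 MiB pool page (array dependent, assumption)
META_SPACING = 128 * 1024**2    # one small metadata block every 128 MiB (assumption)

def pages_touched(write_offsets):
    # A page is allocated from the pool as soon as any byte in it is written.
    return len({offset // PAGE_SIZE for offset in write_offsets})

total_pages = LUN_SIZE // PAGE_SIZE

# Layout 1: all filesystem metadata sits in the first few sectors of the LUN
start_layout = [0]
# Layout 2: a small metadata block at the start of every allocation group
interleaved_layout = list(range(0, LUN_SIZE, META_SPACING))

print(f"Pool pages backing the LUN          : {total_pages}")
print(f"Pages allocated, metadata at start  : {pages_touched(start_layout)}")
print(f"Pages allocated, interleaved layout : {pages_touched(interleaved_layout)}")

With these (made-up) numbers the interleaved layout allocates every single page of the LUN while the net amount of metadata written is only a few megabytes.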

Now you might think: “OK, I’ll choose a filesystem which is TP friendly and I’m sorted”.

Well, not quite. Server administrators very often like to have their own “storage management tool” in the form of a volume manager. This allows them to virtualise “physical” LUNs mapped out of an array into a single entity on their systems.
The problem is that this behaves the same as the TP-unfriendly filesystem, with the difference that it’s not the filesystem doing the interleaving of metadata but the volume manager.

In both cases a TP pool will fill up pretty quickly without having an application write a single bit.

All storage vendors have white papers and instructions available on how to plan for these scenarios. If you don’t want to run into surprises I suggest you have a look at them.

Regards,
Erwin van Londen

Why disk drives have become slower over the years

What is the first question vendors get (or at least used to get) when a (non-technical) customer calls?
I’ll spare you the guesswork: “What does a TB of disk go for at your place?”. Usually I’ll google around for the cheapest disk at a local PC store and say “Well Sir, that would be about 80 dollars”. I then hear somebody falling off the chair, trying to get up again, reaching for the phone and with a resonating voice asking “Why are your competitors so expensive then?”. “They most likely did not give a direct answer to your question.”, I reply.
The thing is that an HDD should be evaluated on multiple factors, and when you spend 80 bucks on a 1TB disk you get capacity and that’s about it. Don’t expect performance or extended MTBF figures, let alone all the stuff that comes with enterprise arrays like large caches, redundancy in every sense and a lot more. That is what makes up the price per GB.


“Ok, so why have disk drives become so much slower in the past couple of years?”. Well, they haven’t. The RPM, seek time and latency have remained the same over the last couple of years. The problem is that the capacity has increased so much that the so-called “access density” has increased along with it, so the disk has to service a massive number of bytes with the same nominal IOPS capability.


I did some simple calculations which show the decrease in performance on larger disks. I didn’t assume any RAID or cache accelerators.


I first calculated a baseline based on a 100GB disk drive (I know, they don’t exist, but it’s just for the calculation) with 500GB of data that I need to read or write.


The assumption was a 100% random read profile. Although the host can theoretically read or write in increments of 512 bytes, this doesn’t mean the disk will handle this IO in one sequential stroke. An 8K host IO can be split up into the smallest supported sector size on disk, which is currently around 512 bytes. (Don’t worry, every disk and array will optimize this, but again this is just to show the nominal differences.)


So a 100GB disk drive translates to a little over 190 million sectors. In order to read 500GB of data this would take a theoretical 21.7 minutes. The number of disks is calculated based on the capacity required for that 500GB. (Also remember that disks use a base-10 capacity value whereas operating systems, memory chips and other electronics use a base-2 value, so that’s 10^3 vs 2^10.)

Baseline

GB   Sectors      RPM    Avg delay (ms)  Max IOPS  Disks required  Total IOPS  Time required (sec)  Time (min)
100  190,734,863  10000  8               125       6               750         1302                 21.7

 If you now take this baseline and map this to some previous and current disk types and capacities you can see the differences.

GB   Sectors        RPM    Disks for 500GB  Total IOPS  Time (sec)  Time (min)  % of baseline  x baseline
9    17,166,138     7200   57               4731        206         3.44        15.83          6.32
18   34,332,275     7200   29               2407        406         6.77        31.19          3.21
36   68,664,551     10000  15               1875        521         8.69        40.02          2.50
72   137,329,102    10000  8                1000        977         16.29       75.04          1.33
146  278,472,900    10000  4                500         1953        32.55       150            0.67
300  572,204,590    10000  2                250         3906        65.1        300            0.33
450  858,306,885    10000  2                250         3906        65.1        300            0.33
600  1,144,409,180  10000  1                125         7813        130.22      600.08         0.17

You can see here that, capacity-wise, to store the same 500GB on 146GB disks you need fewer disks, but you also get fewer total IOPS. This translates into slower performance. As an example, a 300GB drive at 10000RPM triples the time compared to the baseline to read this 500GB.


Now these are relatively simple calculations, however they do apply to all disks including the ones in your disk array.
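For those who want to play with the numbers themselves, here is a minimal sketch of the same back-of-the-envelope calculation. The per-disk IOPS figures, the ~512KB back-end IO size and the rounding of the disk count are my assumptions, so the results may deviate slightly from the table above.

import math

# Rough per-disk IOPS derived from the average service times used above
# (8 ms at 10000 RPM, ~12 ms at 7200 RPM). Assumptions, not vendor specs.
IOPS_PER_DISK = {10000: 125, 7200: 83}

DATA_GB         = 500    # amount of data to read (base-10 GB)
IO_SIZE_SECTORS = 1000   # assumed ~512 KB per back-end IO
SECTOR_BYTES    = 512

def time_to_read(disk_capacity_gb, rpm, data_gb=DATA_GB):
    # Disks needed just to hold the data, with a small base-10 vs base-2 allowance
    disks = math.ceil(data_gb * 1.024 / disk_capacity_gb)
    total_iops = disks * IOPS_PER_DISK[rpm]
    num_ios = data_gb * 10**9 / SECTOR_BYTES / IO_SIZE_SECTORS
    return disks, total_iops, num_ios / total_iops

for capacity, rpm in [(100, 10000), (9, 7200), (146, 10000), (300, 10000), (600, 10000)]:
    disks, iops, seconds = time_to_read(capacity, rpm)
    print(f"{capacity:4d} GB @ {rpm:5d} RPM: {disks:3d} disks, {iops:5d} IOPS, "
          f"{seconds / 60:6.1f} minutes for {DATA_GB} GB")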


I hope this also makes you start thinking about performance as well as capacity. I’m pretty sure your business finds it most annoying when your users need to get a cup of coffee after every database query. 🙂

The end of spinning disks (part 2)

Maybe you found the previous article a bit hypothetical, not substantiated by facts but merely some guesstimates?

To put some beef into the equation I’ll try to substantiate it with some simple calculations. Read on.


As shown in Cornell University’s report the expected amount of data generated will reach 1700 exabytes in 2011, with an additional 2500 exabytes in 2012. 1700 exabytes equates to 1.7 trillion gigabytes in short-scale notation (say what…, look here)

So number-wise it looks like this: 1.700.000.000.000 GB

The average capacity of a disk drive in 2011 is around 1400 GB (the average of the high-RPM enterprise drive of 600GB and the largest capacity commercially available for enterprise environments, a 2TB HDD). In consumer land WD has a 6TB drive but these will not become mainstream until the end of 2011 or the beginning of 2012. Maybe storage vendors will use the 3 and 4 TB versions but I do not have visibility of that currently.

1700EB / 1400GB = 1.214.285.714 disk drives are needed to store this amount of information. (Ohh, in 2012 we need 1.785.714.286 units :-))

This leads us to have a look at production capabilities and HDD vendors. Currently there are two major vendors in the HDD market: Seagate (which shipped 50 million HDDs in FQ3 2011) and WD (shipping 49 million). (WD is acquiring HGST and Seagate is taking over the HDD division of Samsung.) Those 4 companies combined have a production capacity of around 150 million disk drives per quarter. This means on an annual basis a shortage of: 1.214.285.714 - 600.000.000 = 614.285.714 HDDs
So who says the HDD business isn’t a healthy one? 🙂
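For the sceptics, here is the arithmetic in a few lines (the 1400 GB average and the production estimate of roughly 150 million drives per quarter are the figures mentioned above):

data_2011_eb           = 1700           # estimated data generated in 2011 (exabytes)
avg_drive_gb           = 1400           # assumed average drive capacity (GB)
production_per_quarter = 150_000_000    # combined HDD output of all vendors (estimate)

drives_needed = data_2011_eb * 10**9 / avg_drive_gb   # 1 EB = 10^9 GB
drives_built  = production_per_quarter * 4            # per year
shortage      = drives_needed - drives_built

print(f"Drives needed : {drives_needed:,.0f}")   # ~1,214,285,714
print(f"Drives built  : {drives_built:,}")       # 600,000,000
print(f"Shortage      : {shortage:,.0f}")        # ~614,285,714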

OK, I agree, not everything is stored on HDD and the offload to secondary media like DVD, Blu-ray, tape etc. will cut a significant piece out of this pie, however the instantiation of new data will primarily be done on HDDs. Adoption of newer, larger-capacity HDDs is restricted for enterprise use because the access density is getting too high, which equates to higher latency and lower performance, which is not acceptable in these kinds of environments.

This means new techniques will need to be adopted in all areas. From a performance perspective a lot can be gained with SSDs (Solid State Drives), which have extremely good read performance but still lack somewhat in write performance as well as long-term reliability. I’m sure over time this will be resolved. SSD will however not fill the capacity gap needed to accommodate the data growth.

As mentioned before my view is that this gap can and will be filled by advanced 3D optical media which provides new levels of capacity, performance, reliability and cost savings.

I’m open to constructive comments.

Cheers,
Erwin

The end of spinning disks

Did you ever wonder how long this industry will rely on spinning disks? I do, and I think that within 5 to 15 years we’ll have reached the end of the ability of disks to keep up with demand and data growth ratios. A report from Andrey V. Makarenko of Cornell University estimates that around 1700 exabytes (yes, EXA-bytes) will be generated in 2011 alone, growing to over 2500 exabytes next year.

With new technologies invented and implemented in science, space exploration, health care and, last but not least, consumer electronics, this growth ratio will increase exponentially. Although disk drive technology has pretty much kept pace with Moore’s law, you can see that the advances in the development of this technology are slowing down. Rotational speed has been steady for years and the edges of perpendicular recording have almost been reached. This means that within the foreseeable future there will be a tipping point where demand will outgrow capacity. Even if production facilities were increased to keep up with demand, do we as a society want these massive infrastructures, which are very expensive to build and maintain and place a huge burden on our environment? So where does this leave us: do we have to stop generating data, generate it in a far more efficient way, or should we also combine this with aggressive data life cycle management? I wrote an article earlier in this blog which shows how this could be achieved, and it doesn’t take a scientist to understand it.
To go back to the subject: there is talk that SSD will take over a significant share of magnetic-based drives, and maybe so, however it still lacks in reliability in one form or another. I’m sure this will be resolved in the not-so-distant future, but will this technology be as cost effective as spinning disks have been in the last decades? I think it will take a significant amount of time to reach that point. So where do we go from here? It is my take that, in addition to the uptake of SSD-based drives, significant advances will be made in 3D optical storage. This will not only allow for a massive increase in capacity per cubic inch but also a reduction in cost and energy as well as a massive increase in performance.
Advancements in laser technology and photonic behavior, as well as optical media, will clear the path for adoption in data centers the moment this becomes commercially attractive.

There are numerous scientific studies as well as commercial entities working on this type of technology, and market demand adds significant pressure to its development. Check out this Wikipedia article on 3D optical storage to get some more information on the technicalities.

Let me know your opinion.

Regards,
Erwin van Londen