Category Archives: Storage

Performance misconceptions on storage networks

The piece of spinning Fe3O4 (i.e. rust) is by far the slowest piece of equipment in the IO stack. Heck, they didn't invent SSD and Flash for nothing, right? To overcome the terrible latency involved when a host system requests a block of data, there are numerous layers of software and hardware that try to reduce the impact of physical disk drag.

One of the most important is caching, whether that is CPU L2/L3 cache, DRAM cache, some hardware buffering device in the host system, or even the huge caches in the storage subsystems. All of these can, and will, be used by numerous layers of the IO stack, as each cache hit prevents a fetch from a disk. (As an intro to this post you might read one I've written over here, which explains what happens where when an IO request reaches a disk.)
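To make this concrete, here is a minimal read-through LRU cache sketch in plain Python. It is an illustration only: the capacity and the `backing_read` callback are made-up stand-ins, not how any particular controller or operating system implements its cache.

```python
from collections import OrderedDict

class BlockCache:
    """Toy read-through cache: serve hits from memory, fetch misses once."""

    def __init__(self, capacity_blocks, backing_read):
        self.capacity = capacity_blocks
        self.backing_read = backing_read  # function: block number -> data
        self.blocks = OrderedDict()       # LRU order: oldest entry first
        self.hits = self.misses = 0

    def read(self, block_no):
        if block_no in self.blocks:
            self.hits += 1
            self.blocks.move_to_end(block_no)  # refresh LRU position
            return self.blocks[block_no]
        # Miss: pay the milliseconds-long disk penalty, then remember the block.
        self.misses += 1
        data = self.backing_read(block_no)
        self.blocks[block_no] = data
        if len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)    # evict least recently used
        return data
```

Every read served out of `self.blocks` is one seek-plus-rotational-latency penalty that never reaches the spindle, which is exactly why every layer of the stack tries to answer the request before the layer below it has to.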


Open Source Storage (part 2)

Six years ago I wrote this article: Open Source Storage, in which I described how storage would become "Software Defined". Basically I predicted Software Defined Storage before the acronym SDS was even invented. What I did not see coming is that Oracle would buy SUN and, in doing so, basically kill off the entire "Open Source" part of that article. But hey, at least you can call yourself an America's Cup sponsor and Larry Ellison's yacht maintainer. 🙂

Fast-forward six years to 2015 and we see that the software-defined storage landscape has expanded massively. Not only is there a huge number of different solutions available now, but the majority of them have evolved into mature storage platforms with near-infinite scalability in both capacity and performance.


Open Source Storage

Storage vendors are getting nervous. The time has come that SMB/SME-level storage systems can be built from scratch with just servers, JBODs and some sort of connectivity.

Most notably SUN (or Oracle these days) has been very busy in this area. Most of the IP was already within SUN: the Solaris source code has been made available, and they have an excellent file system (ZFS) which scales enormously and has a very rich feature set. Now extend that with Lustre ** and you're steaming away. Growth is easily accomplished by adding nodes to the cluster, which simultaneously increases IO processing power as well as throughput, as the toy model below illustrates.
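As a back-of-envelope illustration of that scale-out behaviour, the little Python model below assumes near-linear scaling (which real clusters only approximate) and entirely invented per-node figures:

```python
# Toy scale-out model; per-node numbers are invented for illustration.
NODE_CAPACITY_TB = 48        # raw capacity of one JBOD-backed node
NODE_THROUGHPUT_MBS = 800    # sustained throughput of one node

for nodes in (4, 64, 1024, 2048):
    capacity_pb = nodes * NODE_CAPACITY_TB / 1024
    throughput_gbs = nodes * NODE_THROUGHPUT_MBS / 1024
    print(f"{nodes:5d} nodes -> {capacity_pb:7.1f} PB, "
          f"{throughput_gbs:8.1f} GB/s aggregate")
```

Adding a node buys you both more spindles (capacity) and another controller's worth of CPU and bandwidth (performance), which is the whole appeal of scale-out over scale-up.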


But for me the absolute killer app is COMSTAR. This way you can create your own storage array with commodity hardware and turn your HBAs into Fibre Channel targets. Present your LUNs and connect other systems to them over a Fibre Channel network. Better yet, even iSCSI and FCoE are part of it now. Absolutely fabulous. These days there would be no reason to buy an expensive proprietary array; just use the kit that you have. Oh yes, talking about scalability: is 8 exabytes on one filesystem, spread over a couple of thousand nodes in a cluster, enough? If you don't have those requirements it works on a single server as well.
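To give a feel for how little it takes, here is a hedged Python sketch that drives the standard ZFS and COMSTAR command-line tools (zfs, sbdadm, stmfadm, itadm) to carve out a volume and present it as an iSCSI LUN. The pool and volume names are placeholders, and the GUID parsing is an assumption about sbdadm's usual table output; treat it as a sketch, not a production script.

```python
#!/usr/bin/env python3
# Sketch: provision an iSCSI LUN on a Solaris-style host with COMSTAR
# (stmf and iscsi/target services) enabled. "tank/luns/vol0" is made up.
import subprocess

def run(*cmd):
    """Run a CLI command, echo it, and return its stdout as text."""
    print("#", " ".join(cmd))
    return subprocess.check_output(cmd, text=True)

# 1. Carve a 100 GB ZFS volume (zvol) out of an existing pool.
run("zfs", "create", "-V", "100g", "tank/luns/vol0")

# 2. Register the zvol as a COMSTAR logical unit (LU).
out = run("sbdadm", "create-lu", "/dev/zvol/rdsk/tank/luns/vol0")
# sbdadm prints a table whose last line starts with the new LU's GUID
# (parsing assumption; check the output on your own system).
guid = out.strip().splitlines()[-1].split()[0]

# 3. Export the LU to all initiators (skipping host/target groups here).
run("stmfadm", "add-view", guid)

# 4. Create an iSCSI target so initiators can log in over the network.
#    Fibre Channel would instead put the HBA's driver in target mode.
run("itadm", "create-target")
```

From there, any initiator on the network can discover the target and see your commodity box as just another array.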

The only thing lacking is mainframe support, but since the majority of systems in data-centres run Windows or some sort of Unix farm anyway, this can be an excellent candidate for large-scale Open Source storage systems. Now that should make some vendors pretty nervous.

Regards,
Erwin

** ZFS is not yet supported in Lustre clusters, but it is on the roadmap for next year.