Does SNIA matter?

If you're not into enterprise storage and live in France, you might confuse the acronym with the National Union of Nurse Anaesthetists (Syndicat National des Infirmiers-Anesthésistes), but if you relate it to storage you obviously end up at the Storage Networking Industry Association.

This organisation was founded on the core values of developing vendor-neutral storage technologies and open standards that enhance the overall usability of storage in general.

In addition, the SNIA organises events such as Storage Networking World, the Storage Developer Conference and various summits, and it also provides a lot of vendor-neutral education with its own certification path. There are worldwide chapters which each organise their local gigs and can provide help and support on general storage-related issues.

The question, though, is to what extent SNIA is able to steer the storage industry to a point on the horizon that is beneficial to both customers and vendors. The biggest issue is that the entire SNIA organisation lives by the grace of its members, which are primarily vendors. Although you, as a customer, system integrator or anyone else with an interest, can become a member and make proposals, you have to bring a fairly large bag of coins to become a voting member and gain the ability to somewhat influence the pathways of storage evolution.

The SNIA does not directly steer the development of technologies which fall under the umbrella of the INCITS, IEEE, IETF and ISO standards bodies. Although many vendors are members of both SNIA and these organisations, you will find that well-established standards such as Fibre Channel, SCSI, Ethernet and TCP/IP are developed in those respective bodies.

So should you care about SNIA?

Yes!!! You certainly need to. The SNIA is a not-for-profit organisation which provides a very good overview of where storage technology stands at every stage. It started off in 1997, shortly after storage went from DAS to SAN. Over the years it has provided the industry with numerous exciting technologies which have enhanced storage networking in general; some examples are SMI-S, CDMI, CSI and XAM. Some of these technologies evolved into products used by vendors, while others have ceased to exist due to a lack of vendor support or customer demand.

If you’re fairly new to the storage business, the SNIA is an excellent place to start getting acquainted with storage concepts, protocols and general storage technologies without any bias towards vendors. This allows you to keep a clear view of your options and provides the ability to start off your career in this exciting, fast-paced business. I would advise having a look at the course and certification track, and I recommend getting certified. It gives you a good start with some credibility, and at least you will know what the pundits in the industry are talking about when they mention distributed filesystems, FC, block vs. file and so on.

I briefly mentioned the events they organise. If you want to know who’s who in the storage zoo, a great place to visit is SNW (Storage Networking World), an event organised twice a year in the US on both the east and west coast. All major vendors are around (at least they should be, in my view) and it gives you a great opportunity to check out what they have on their product list.
The next great event is SDC (the Storage Developer Conference), which quite easily outsmarts most other geek events. This is where everyone who knows storage down to the binary level comes together. This is the event where individual file-system blocks are unravelled, HBA APIs are discussed and all the new and exciting cloud technologies are debunked. So if you’re into some real technical deep-dives, this is the event to visit.

Although questions have been raised as to whether SNIA is relevant at all, I think it is, and it should be supported by anyone with an interest in storage technologies.

I’m curious about your thoughts.

Regards,
E

Storage in 2013 and beyond.

It comes as no surprise that a couple of technologies really made their mark in 2012. Flash drives, and specifically all-flash arrays, have gone mainstream. One more technology still clinging on is converged networking, and then of course there is Big Data.

Big Data has become such a hype-word that many people have different opinions and descriptions of it. What it basically boils down to is that too many people have too much stuff hanging around which they never clean up or remove. This undeniably places a huge burden on many IT departments, who only have one answer: add more disks…

So where do we go from here? There is no denying that exabyte-scale storage environments are becoming more common in many companies and government agencies. The question is what is being done with all these “dead” bytes. Will they ever be used again? What is being done to safeguard this information?

Some studies show that the cost of managing this old data outgrows the benefit one could obtain from it. The problem is that there are many really useful and beneficial pieces of data in this enormous pile of bits, but none of them are classified and tagged as such. This makes the “delete all” option a no-go, yet the cost of actually determining what needs to be kept can rival the cost of keeping it all. We can be fairly certain that neither of the two options can hack it in the long run. Something has to be done to actually harvest the useful information and finally get rid of the old stuff.

The process of classification needs to happen via heuristic mathematical deterministics. A mouthful, but what it actually means is that every piece of information needs to be tagged with a value. Let’s call this value X. This X is generated based upon business requirements related to the type of business we’re actually in. Whilst indexing the entire information base, certain words, values and other pieces of information appear more often than others. These indicators can cause a certain information type to obtain a higher value than others and therefore rank higher (i.e. the X value increases). Of course you can have a multitude of information streams where one is by definition larger and causes its data to appear more frequently, in which case it ranks higher even though the actual business value is not that great, whilst you might have a very small project going on that could generate a fair chunk of your annual revenue. To identify those cases, the data needs to be tagged with a second value called Y, which reflects the business importance of the stream it belongs to. And last but not least we have age: since all data loses its accuracy, and therefore its value, over time, the data needs to be tagged with a third value called Z.

Based upon these three values we can create three-dimensional value maps which can be projected onto different parts of the organization. This outlines and quantifies where the most valuable data resides and where the most savings can be obtained. This allows for a far more effective process of data elimination and therefore huge cost savings. The underlying mathematical algorithms already exist; however, they have not been applied in this way, and therefore such technologies do not exist yet. Maybe something for someone to pick up. Good luck.
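Purely as an illustration of the idea (as said, such technology does not exist yet as a product), here is a minimal Python sketch of the X/Y/Z tagging. Everything in it is assumed for the example: the term-frequency measure for X, the per-stream weight for Y and the exponential age decay for Z are just one of many possible choices.

```python
from dataclasses import dataclass
from collections import Counter
from math import exp

@dataclass
class Record:
    text: str             # raw content of the piece of information
    stream_weight: float  # Y: business importance of the stream/project it belongs to
    age_days: float       # used to derive Z (age decay)

def classify(records, half_life_days=365.0):
    """Tag every record with X (content relevance), Y (stream weight) and
    Z (age decay) and return a combined score per record."""
    # X: how common a record's terms are across the whole information base
    corpus = Counter()
    for r in records:
        corpus.update(r.text.lower().split())

    scored = []
    for r in records:
        terms = r.text.lower().split()
        x = sum(corpus[t] for t in terms) / max(len(terms), 1)  # X value
        y = r.stream_weight                                     # Y value
        z = exp(-r.age_days / half_life_days)                   # Z value, decays with age
        scored.append((r, x, y, z, x * y * z))                  # combined "keep or purge" score
    return scored
```

Aggregating the combined scores per department or project then gives something like the three-dimensional value map described above, and anything whose score stays below a chosen threshold becomes a candidate for archiving or deletion.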

As for the logical parts of the Big Data question, in 2013 we will see a bigger shift towards object-based storage. If you go back to one of my first articles you will see that I predicted this shift six years ago. Data objects need to get smarter and more intelligent by nature in order to increase value and manageability. By doing this we can think of all sorts of smarts to utilize the information to the fullest extent.
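To make that a little more concrete, below is a purely hypothetical sketch of what such a “smarter”, self-describing data object could look like, with metadata (including the X/Y/Z tags from the previous section) travelling alongside the payload. None of the field names come from an actual product or standard, although SNIA’s CDMI takes a broadly comparable metadata-on-object approach.

```python
import json
from datetime import datetime, timezone

# A self-describing object: the payload reference travels together with its
# metadata, so value, retention and placement decisions can be made without
# consulting an external catalogue. All field names are illustrative only.
smart_object = {
    "id": "proj-alpha/report-2012-q4",
    "created": datetime(2012, 12, 31, tzinfo=timezone.utc).isoformat(),
    "owner": "finance",
    "retention_days": 2555,                             # e.g. a seven-year hold
    "classification": {"x": 4.2, "y": 0.9, "z": 0.97},  # the X/Y/Z tags from above
    "payload_ref": "cas://sha256/...",                  # content-addressed payload
}

print(json.dumps(smart_object, indent=2))
```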

As for the other, more tangible technologies, my take on them is as follows.

Flash

Flash technology will continue to evolve, and price erosion will, at some point, cause it to compete with normal disks, but that is still a year or two away. R&D costs will still place a major burden on the price point of these drives/arrays, so as the uptake of flash continues this will level out. Reliability has mostly been tackled by advances in redundancy and cell technology, so that argument can be largely negated. My take on dedicated flash arrays is that they are too limited in their functions and therefore overpriced. The only benefit they provide is performance, but that is easily countered by the existing array vendors by adding dedicated flash controllers and optimized internal data paths to their equipment. The benefit is that these can utilize the same proven functions that have been available for years. One of the most useful and cost-effective is of course auto-tiering, which allows for optimum usage and gives the most bang for your buck.
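For what it’s worth, the basic idea behind auto-tiering fits in a few lines. The sketch below is deliberately naive and not any vendor’s actual algorithm; the extent names, I/O counts and capacities are all invented for the example.

```python
def retier(extents, io_counts, flash_slots):
    """Toy auto-tiering pass: place the hottest extents on the flash tier
    until it is full and leave the rest on spinning disk. Real arrays use
    far more elaborate heuristics; this only shows the basic idea."""
    ranked = sorted(extents, key=lambda e: io_counts.get(e, 0), reverse=True)
    return {e: ("flash" if i < flash_slots else "hdd") for i, e in enumerate(ranked)}

# Example: six extents and room for two of them on flash
extents = ["e0", "e1", "e2", "e3", "e4", "e5"]
ios = {"e3": 900, "e1": 750, "e0": 40, "e2": 12, "e4": 3}
print(retier(extents, ios, flash_slots=2))  # e3 and e1 end up on flash
```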

Converged networking

Well, what can I say? If designed and implemented correctly it just works, but many companies are simply not ready, from a knowledge standpoint, to adopt it. There are just too many differences in processes, knowledge and many other points that conflict between the storage and networking folks. The arguments I put forward in my previous post have still not been countered by anyone, and as such my standpoint has not changed. If reliability and uptime are among your priorities, then don’t start with converged networking. Of course there are some exceptions. If, for instance, you buy a Cisco UCS, then that system runs converged networking internally from front to back, but there is not really much that is configurable, so the “oops” factor is significantly minimized.

Processor and overall system requirements

More and more focus will be placed upon power requirements and companies will be forcing vendors to the extreme to reduce the amount of watts their systems suck from the wall socket. Software developers are strongly encouraged (and that’s an understatement) to sift through their code and check if optimizations can be achieved in this area.

Legal

A short look at the techno news sites in 2012 and you’ve probably noticed an increase in court cases where people are held responsible for breaches in the confidentiality and availability of information infrastructures. This will become a real battle with outsourced cloud services in the very near future. Cloud providers like AWS, Rackspace and Microsoft disclaim all responsibility with regard to service/data availability and uptime in their terms of use and contracts, but just how far can they stretch this? There will come a point in time where courts hold these providers accountable, and then you will see a major shift in the requirements these providers build into their infrastructures. All this will of course have significant ramifications for pricing, and cloud expectations will have to be adjusted.

I hope you all have a good 2013, and we’ll see whether some of these predictions gain some uptake.

Regards,
Erwin