Tag Archives: power

FOS 8.2.1 – the one that is required.. or not?

Like clockwork, Brocade releases a new FOS version roughly every six months. No news there. FOS 8.2.1 is, however, a release you may need to pay special attention to, especially if you have X6 director-class switches hooked up to a 240 volt, 50 Hz power mains, or if you're sitting between a rock and a hard place with the 7800 extension switch but don't have the budget to go to the relatively pricey 7840. One other thing is the licensing hardening on the pizza-box switches, which makes the upgrade to this release a one-way street with no way back.


Energy Efficient Fibre Channel and related cost savings

For years many storage environments have used both active-active and active-passive multipath (MPIO) access mechanisms to reach their storage arrays in either a dispersed or a linear fashion. On enterprise-class arrays with a global cache the active-active method is most often used, while on modular arrays you’ll frequently see the active-passive scenario applied. Inherently this means that in the absence of IO, whether on the passive path or because no IO is happening at all (i.e. no application or operating system is sending or receiving any data), the Fibre Channel links are only sending IDLE or ARB(ff) fillwords to maintain bit and word synchronization. It also means that both the transmitter and receiver are always fully powered and thus draw the same amount of power as if they were transmitting data at full line-rate. Obviously this is a waste of scarce resources, and that is what is being addressed in the upcoming FC standards. The FC framing and signalling standard will be enhanced so that traffic diagnostics can determine whether an SFP should be at full operating power or in a reduced-power mode. Below are the details, including some cost-savings calculations.
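
To give a feel for the kind of cost-savings calculation referred to above, here is a minimal back-of-the-envelope sketch in Python. Every number in it (per-SFP power draw, the fraction saved in reduced-power mode, port count, idle time and electricity price) is my own assumption for illustration only, not a figure taken from the standard or from any vendor.

# Rough, illustrative estimate of the power/cost saving when idle
# Fibre Channel links drop their SFPs into a reduced-power mode.
# All figures below are assumptions, not vendor or standard numbers.

WATTS_PER_SFP_FULL = 1.0       # assumed draw of one optic at full power (W)
REDUCED_MODE_SAVING = 0.6      # assumed fraction of that power saved in reduced mode
PORTS = 2 * 384                # assumed ports: two fabrics with one 384-port director each
IDLE_FRACTION = 0.5            # assumed fraction of time the links carry no traffic
PRICE_PER_KWH = 0.25           # assumed electricity price in $/kWh
HOURS_PER_YEAR = 24 * 365

# Energy saved per year = ports * watts saved per port * hours spent idle
watts_saved = PORTS * WATTS_PER_SFP_FULL * REDUCED_MODE_SAVING
kwh_saved_per_year = watts_saved * IDLE_FRACTION * HOURS_PER_YEAR / 1000.0
dollars_saved_per_year = kwh_saved_per_year * PRICE_PER_KWH

print(f"Energy saved : {kwh_saved_per_year:,.0f} kWh/year")
print(f"Cost saved   : ${dollars_saved_per_year:,.2f}/year (before cooling overhead)")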


Storage in 2013 and beyond.

It comes as no surprise that a couple of technologies really took off in 2012. Flash disk drives, and specifically flash in arrays, have gone mainstream. Two other technologies still hanging around are converged networking and, of course, Big Data.

Big Data has become such a hype word that many people have different opinions and descriptions of it. What it basically boils down to is that too many people have too much stuff hanging around which they never clean up or remove. This undeniably places a huge burden on many IT departments, who only have one answer: add more disks…

So where do we go from here? There is no denying that exabyte-scale storage environments are becoming more common in many companies and government agencies. The question is what is being done with all these “dead” bytes. Will they ever be used again? What is being done to safeguard this information?

Some studies show that the cost of managing this old data outgrows the benefit one could obtain from it. The problem is that there are many really useful and beneficial pieces of data in this enormous pile of bits, but none of them are classified and tagged as such. This makes the “delete all” option a no-go, yet the cost of actually determining what needs to be kept can run side-by-side with the cost of keeping it all. We can be fairly certain that neither of the two options will hack it in the long run. Something has to be done to actually harvest the useful information and finally get rid of the old stuff.

The process of classification needs to happen via heuristic mathematical determination. A mouthful, but what it actually means is that every piece of information needs to be tagged with a value. Let’s call this value X. This X is generated based upon business requirements related to the type of business we’re actually in. Whilst indexing the entire information base, certain words, values and other pieces of information appear more often than others. These indicators can cause a certain information type to obtain a higher value than others and therefore rank higher (i.e. the X value increases). Of course you can have a multitude of information streams where one is by definition larger and causes data to appear more frequently, in which case it ranks higher even though the actual business value is not that great, whilst you might have a very small project going on that could generate a fair chunk of your annual revenue. To identify those, the data needs to be tagged with a second value, called Y, that reflects this business weight. And last but not least we have age. Since all data loses its accuracy, and therefore its value, over time, the data needs to be tagged with a third value, called Z.

Based upon these three values we can create three-dimensional value maps which can be projected onto different parts of the organization. This outlines and quantifies where the most valuable data resides and where the most savings can be obtained, allowing for a far more effective process of data elimination and therefore huge cost savings. The mathematical algorithms already exist, but they have not been applied in this way, and therefore such technologies do not exist yet. Maybe something for someone to pick up. Good luck.
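
Purely as a thought experiment, here is a minimal sketch of what tagging data with the X, Y and Z values could look like in Python. The keyword weights, the exponential age decay and the way the three values are multiplied into a single score are all assumptions of mine for illustration; as noted above, no such product or standard algorithm exists.

from dataclasses import dataclass
from datetime import date

# Hypothetical business keywords and their weights for the X value (content relevance).
BUSINESS_TERMS = {"contract": 3.0, "invoice": 2.0, "customer": 1.5, "draft": 0.5}

@dataclass
class DataObject:
    name: str
    text: str
    stream_weight: float   # Y: assumed relative business value of the information stream
    last_modified: date

def x_value(obj: DataObject) -> float:
    """X: content relevance, derived from how often business terms appear in the object."""
    words = obj.text.lower().split()
    return sum(BUSINESS_TERMS.get(w, 0.0) for w in words)

def z_value(obj: DataObject, today: date, half_life_days: float = 365.0) -> float:
    """Z: age factor, decaying from 1.0 towards 0.0 as the data gets older."""
    age_days = (today - obj.last_modified).days
    return 0.5 ** (age_days / half_life_days)

def value_score(obj: DataObject, today: date) -> float:
    """Combine X, Y and Z into one score; a value map is these scores per department/project."""
    return x_value(obj) * obj.stream_weight * z_value(obj, today)

if __name__ == "__main__":
    today = date(2013, 1, 1)
    objs = [
        DataObject("q3_contract.doc", "customer contract renewal", 2.0, date(2012, 10, 1)),
        DataObject("old_draft.doc", "draft draft draft", 0.5, date(2006, 3, 1)),
    ]
    for o in sorted(objs, key=lambda o: value_score(o, today), reverse=True):
        print(f"{o.name:20s} score={value_score(o, today):6.2f}")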

As for the logical parts of the Big Data question, in 2013 we will see a bigger shift towards object-based storage. If you go back to one of my first articles you will see that I predicted this shift six years ago. Data objects need to get smarter and more intelligent by nature in order to increase value and manageability. By doing this we can think of all sorts of smarts to utilize the information to the fullest extent.

As for the other, more tangible technologies, my take is as follows.

Flash

Flash technology will continue to evolve, and price erosion will at some point cause it to compete with normal disks, but that is still a year or two away. R&D costs will continue to weigh heavily on the price point of these drives/arrays, so as the uptake of flash continues this will level out. Reliability has mostly been tackled by advances in redundancy and cell technology, so that argument can largely be negated. My take on dedicated flash arrays is that they are too limited in their functions and therefore overpriced. The only benefit they provide is performance, but that is easily countered by the existing array vendors by adding dedicated flash controllers and optimized internal data paths to their equipment. The benefit is that these can utilize the same proven functions that have been available for years. One of the most useful and cost-effective is of course auto-tiering, which ensures optimum usage and gives the most bang for your buck.

Converged networking

Well, what can I say? If designed and implemented correctly it just works, but many companies are simply not ready from a knowledge standpoint to adopt it. There are just too many differences in processes, knowledge and many other points of conflict between the storage and networking folks. The arguments I aired in my previous post have still not been countered by anyone, and as such my standpoint has not changed: if reliability and uptime are among your priorities, then don’t start with converged networking. Of course there are some exceptions. If, for instance, you buy a Cisco UCP, that system runs converged networking internally from front to back, but there is not really much that is configurable, so the “oops” factor is significantly minimized.

Processor and overall system requirements

More and more focus will be placed upon power requirements and companies will be forcing vendors to the extreme to reduce the amount of watts their systems suck from the wall socket. Software developers are strongly encouraged (and that’s an understatement) to sift through their code and check if optimizations can be achieved in this area.

Legal

A short look at the tech news sites in 2012 and you’ve probably noticed an increase in court cases where people are held responsible for breaches in confidentiality and availability of information infrastructures. This will become a real battle with outsourced cloud services in the very near future. Cloud providers like AWS, Rackspace and Microsoft disclaim all responsibility w.r.t. service/data availability and uptime in their terms of use and contracts, but just how far can they stretch this? At some point courts will hold these providers accountable, and you will then see a major shift in the requirements these providers put on their infrastructures. All this will of course have significant ramifications on pricing, and cloud expectations will have to be adjusted.

I hope you all have a good 2013, and we’ll see whether some of these predictions gain uptake.

Regards,
Erwin