Server virtualisation is the result of software development incompetence

Voila, there it is. The fox is in the hen-house.

Now let me explain before the entire world comes down on me. 🙂
First, I am not saying that software developers are incompetent. In fact, I think they are extremely smart people.
Second, the main reason for my statement is this: since Moore’s law is still in effect, we have more or less gotten used to somewhat unlimited resources w.r.t. CPU, memory, bandwidth etc., and developers most of the time write their own ideas into the code without looking at better or more appropriate alternatives.


So let’s take a look at why this server virtualization got started in the first place.

The mainframe guys in the good old days had already acknowledged the problem that application developers didn’t really give a “rat’s ass” about what else had to be installed on a system. They assumed that their application was the most important and deserved a dedicated system with resources to match. Mind you, this was a mainframe environment, which already had strict rules regarding system utilization. The problem remained that conflicts between shared libraries caused application havoc. So instead of flicking the application back to the developers, IBM had to come up with something else, and virtual instances were born. Now I’m too young to recollect the year this was introduced, but I assume it was somewhere in the 70s.

When Bill Gates came to power in the desktop industry, and later in the server market, you would assume they had learned something from the mistakes made in the past. Instead they came out with MS-DOS (I’m ignoring the OS/2 bit, which they had some involvement in as well).
Now I’m fully aware that an Intel 8086 CPU had nowhere near the capabilities of the CPUs in the mainframe systems or minis, but the entire architecture was still built for single-system, single-application use.
They ignored the fact that one system could do more than one task at the same time, and application developers wrote whatever they deemed fit for their particular needs. Even today, with Windows and the Unixes, you are very often stuck with conflicting dependencies on libraries, compiler versions and so on. Some administrators have called this DLL Hell.

I’ve been personally involved in sorting out this mess with different applications that had to run on a single system, so in that sense I know what I’m talking about.

So since the OS developers were constrained by business requirements (in the sense that they could not enforce hard restrictions on application development), they more or less had no means to overcome this problem.

Then there came some smart guys and dolls from Stanford who started with a product that let you install an entire operating system in a software container, with every resource needed by that operating system directed through that container, and voila: VMware was born.

From my perspective this design was the stupidest move ever made. I’m not saying the software is not good, but from an architectural point of view it was a totally wrong decision. Why should I waste 30% or more of my resources by having to install the same kernel, libraries and functionality two, three, ten, twenty times over?

What they should have done was build an application abstraction layer that makes an inventory of the underlying OS type, functionality, libraries and so on. (You can safely assume that in current server farms each server has the same inventory if deployed from a central repository; even if not, this abstraction layer could detect and fix that.) This way you can create lightweight application containers that share all common libraries and functionality from the OS that sits below this layer, but if that is not enough, or an application conflicts with these shared libraries, it uses a library or other settings locked inside its own container.
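
To make this a little more concrete, here is a rough sketch of the resolution logic such an abstraction layer could use: for each library an application needs, either share the host’s copy or lock a private copy inside the container. This is purely illustrative; the Requirement structure, the host inventory mapping and the function names are my own assumptions, not an existing product.

    # Hypothetical sketch: decide, per required library, whether an application
    # container can share the host's copy or must carry its own locked-in copy.

    from dataclasses import dataclass

    @dataclass
    class Requirement:
        name: str       # e.g. "libssl"
        version: str    # exact version the application was tested against

    def resolve(requirements, host_inventory):
        """Split requirements into host-shared and container-private libraries.

        host_inventory maps library name -> version installed on the host OS
        (the inventory the abstraction layer would build at deploy time).
        """
        shared, private = [], []
        for req in requirements:
            if host_inventory.get(req.name) == req.version:
                shared.append(req)      # reuse the host's copy, nothing duplicated
            else:
                private.append(req)     # version conflict or missing library:
                                        # lock a private copy inside the container
        return shared, private

    # Example: two of three libraries match the host, one must travel with the app.
    host = {"libssl": "1.0.2", "libxml2": "2.9.1", "glibc": "2.17"}
    app_needs = [Requirement("libssl", "1.0.2"),
                 Requirement("libxml2", "2.7.8"),   # older than the host's copy
                 Requirement("glibc", "2.17")]

    shared, private = resolve(app_needs, host)
    print("shared with host:", [r.name for r in shared])
    print("bundled in container:", [r.name for r in private])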

Now here comes the fun part. If I need to move applications to another server, I don’t need to move entire operating systems that rely on the underlying storage infrastructure; instead I can move, or even copy, the application container to one or multiple servers. After that has been done, you should be able to keep that application container in sync, so if one copy gets a corrupt file for whatever reason, the abstraction software should be able to correct that. This way you’re also assured that if I need to change anything, I only need to do it in the configuration within that container.
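
Purely as an illustration of the sync and self-heal idea, here is a minimal sketch that checks a container copy against a manifest of file checksums and restores anything that doesn’t match from a healthy replica. The manifest layout and the reference_dir stand-in are hypothetical; a real implementation would pull the good copy over the network.

    # Hypothetical sketch: keep replicated application containers in sync by
    # comparing file hashes against a manifest and repairing corrupted files.

    import hashlib
    import shutil
    from pathlib import Path

    def sha256_of(path: Path) -> str:
        # Stream the file in chunks so large files don't blow up memory.
        h = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    def verify_and_repair(container_dir: Path, manifest: dict, reference_dir: Path) -> None:
        """Check one container copy against a manifest of expected hashes.

        manifest maps relative file path -> expected sha256 hex digest.
        reference_dir stands in for a known-good replica of the same container;
        in a real implementation it would be fetched from another server.
        """
        for rel_path, expected in manifest.items():
            target = container_dir / rel_path
            if not target.exists() or sha256_of(target) != expected:
                # Missing or corrupt file: restore it from the healthy replica.
                good = reference_dir / rel_path
                target.parent.mkdir(parents=True, exist_ok=True)
                shutil.copy2(good, target)
                print(f"repaired {rel_path}")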

This architecture is far more flexible and can save organizations a lot of money.

The problem is: this software doesn’t exist yet. 🙂 (except maybe in development labs that I don’t have visibility into.)

You can’t compare this to cloud computing, since currently that is far too limited in functionality. Clouds are built with a certain subset of functionality, so although on the front end you see everything through a web browser, that doesn’t mean the back end in the data centers operates the same way. Don’t make the mistake of thinking that setting up a cloud infrastructure will solve your problems. You need a serious amount of real estate to even think about cloud computing.

The application container architecture described above lets you grow far more easily.

Cheers,
Erwin

P.S. I used VMware as an example since they are pretty well known, but the same goes for all other server virtualisation technologies like Xen, Hyper-V, etc.
