In case you missed the boat, or your eyes glaze over whenever the word "virtualization" is spoken, here's a fairly simplified explanation of what it is and what it means.
When we first deployed computers, they were so large that we had to house them in their own rooms with their own dedicated power and cooling. Then came the mini-computer, which was (as the name suggests) smaller and simpler to operate. Finally (at least by the year 2000) we had the PC or Mac: a computing device with an operating system (OS, as in OS X, Linux, or Windows), its own display, memory, and input/output (I/O) devices like a keyboard, mouse, scanner, printer, etc.
Then a start-up named VMware (read that: Virtual Machine Ware) created a revolutionary product which radically changed the computing landscape, especially inside the datacenter. For the first time (actually, similar functionality had existed since the older days of the mainframe, but it was complicated and expensive) the server itself – CPU(s), memory, I/O – could be carved up into "virtual machines," which were actually data "images" of servers loaded and run simultaneously on a single physical piece of server hardware. In fact, it was the same hardware that could have hosted a single operating system and its applications, as in the old days. Except in the case of VMware, a "hypervisor" was loaded onto the machine BEFORE the operating system, providing a platform from which to launch multiple server images.
Since VMware's introduction, its acquisition by storage vendor EMC, and the development of competing products from Citrix (Xen) and Microsoft (Hyper-V), the platform has evolved rapidly and become unbelievably successful (including the development of virtual workstations).
The end result was a highly efficient and scalable platform, pieces of which were dedicated to the tasks associated with being a server – memory, storage, I/O, etc. The benefit is that all these tasks occur simultaneously even though there may be no logical relationship between them. In other words, in the case of website hosting, one virtual machine could be yours, another your neighbor's, and there would never be any communication or conflict between them. You would both share the physical resources, each in turn (clock cycles, memory access, storage reads/writes), but otherwise remain entirely separate.
Now comes network virtualization – the newest kid on the block. Network virtualization is a method of combining the available resources in a network by splitting the available bandwidth into channels, each independent of the others, and each of which can be assigned (or reassigned) to a particular server or device in real time. Each channel is independently secured, and every subscriber has shared access to all the resources on the network from a single computer. The first network virtualization was accomplished in switching, through the creation of VLANs – virtual local area networks. More sophisticated systems now automate the tasks required to add or reduce bandwidth (capacity) and/or bring additional resources online automatically – more servers, data centers, or storage systems – to meet the demand of thousands of individual, remote users simultaneously.
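To make the VLAN idea concrete, here is a minimal sketch of carving one physical network card into two isolated channels on a Linux box, using the standard iproute2 tools. The interface name (eth0), VLAN IDs, and addresses are illustrative assumptions, not taken from any particular setup, and the commands require root privileges.

```shell
# Create two virtual interfaces (channels) on top of the one
# physical NIC, each tagged with its own VLAN ID
ip link add link eth0 name eth0.10 type vlan id 10
ip link add link eth0 name eth0.20 type vlan id 20

# Give each channel its own address range and bring it up --
# traffic on eth0.10 never mixes with traffic on eth0.20
ip addr add 192.168.10.1/24 dev eth0.10
ip addr add 192.168.20.1/24 dev eth0.20
ip link set eth0.10 up
ip link set eth0.20 up
```

Both channels share the same wire and the same card, but a switch that understands VLAN tags keeps them logically separate – the network analogue of two virtual machines sharing one server.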
Why is this important?
Because the Cloud is huge, and it now needs on-demand expansion and flexible resources, quickly. These are capabilities that must be automated in order to be efficient and cost-effective.
After all, when you want your files, you want your files, right? And when 800 million people want to use Facebook, it takes systems that are responsive when needed – and removed when not.
Hope this helps.
For further information about getting to the Cloud, please see transition IT.