Cloud computing is an operations model, and a cloud server (or VM) is the productization of that operations model.
There is no Cloud Layer
When it comes to the actual cloud server, there is no cloud layer or cloud software. Instead, software called a hypervisor abstracts the physical resources through virtualization. Cloud itself is not a single technology, but a bundle of technologies and procedures.
With your average physical server, the physical hardware along with its operating system defines the minimum and maximum resources available. However, because cloud servers are abstractions, they are not bound by those physical limits. Instead, cloud servers started out as user-defined: every time you spun up a cloud server, you defined its resource usage. That is all well and good, but abstraction can lead to so much more.
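A user-defined server can be sketched as a request where the operator hand-picks every value. This is a minimal illustration, not any real provider's API; `create_server()` and the spec keys are hypothetical.

```python
# Sketch of "user-defined" provisioning: the operator chooses every
# resource value at creation time. create_server() is a hypothetical
# stand-in that validates the request instead of calling a real cloud.

def create_server(spec):
    """Pretend to provision a VM; just validate and echo the request."""
    required = {"vcpus", "ram_gb", "disk_gb", "image"}
    missing = required - spec.keys()
    if missing:
        raise ValueError(f"user must define: {sorted(missing)}")
    return {"status": "provisioned", **spec}

server = create_server({
    "vcpus": 2,        # every number is picked by hand, every time
    "ram_gb": 4,
    "disk_gb": 80,
    "image": "debian-12",
})
print(server["status"])  # prints "provisioned"
```

The point of the sketch is the friction: nothing happens until a human fills in the spec, which is exactly what software-defined clouds remove.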
Moving from User-Defined to Software-Defined Cloud
Cloud servers eventually moved from user-defined to software-defined. In other words, instead of a user directly orchestrating cloud server deployments, the user creates a list of rules, and cloud servers are deployed and defined by those rules. These rules create an "elastic" server: one that can both expand and contract based on the rules. The rules also give us cattle servers, servers that start, stop, and transfer data entirely according to the rules. We call them cattle because they are servers we are not attached to (whereas I named every one of my personal computers after computer villains, true story, GLaDOS is my current).
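The rules behind an elastic fleet can be sketched as a simple policy function. Everything here is illustrative: the thresholds, the rule names, and `desired_count()` are assumptions, not a real autoscaler's API.

```python
# Sketch of a software-defined scaling rule: the user writes the rules
# once, and software decides how many servers should exist at any moment.

def desired_count(current, cpu_pct, rules):
    """Apply simple threshold rules to grow or shrink the fleet."""
    if cpu_pct > rules["scale_up_above"]:
        return min(current + 1, rules["max_servers"])
    if cpu_pct < rules["scale_down_below"]:
        return max(current - 1, rules["min_servers"])
    return current

rules = {"scale_up_above": 75, "scale_down_below": 20,
         "min_servers": 1, "max_servers": 4}

assert desired_count(2, 90, rules) == 3  # expand under load
assert desired_count(2, 10, rules) == 1  # contract when idle
```

Because the rules, not the user, decide which servers live or die, any individual server is disposable: that is what makes them cattle rather than pets.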
APIs and Playing Nice in the Cloud
The next technology that folds into the cloud is the API (Application Programming Interface). Let's say you have a piece of software and I have a piece of software. My software creates cloud servers through a set of rules; your software installs software packages onto servers. Through an API, your software can connect to mine, take the system requirements of an application, and use them to provision a server. Now when a user clicks Create a WordPress site on your system, your system calls my API to provision a cloud server with the appropriate resources and to install PHP, Apache (I would say NGINX, but today I feel like saying Apache), MySQL, and WordPress. This process happens in seconds.
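The two systems above can be sketched as two functions joined by one call. Both sides are hypothetical stand-ins: `size_for()` plays my provisioning service and `deploy_app()` plays your installer calling it; the requirement numbers are made up for illustration.

```python
# Sketch of two systems talking over an API: the installer side asks the
# provisioning side for a server sized to the app, then installs packages.

APP_REQUIREMENTS = {
    "wordpress": {"vcpus": 2, "ram_gb": 4,
                  "packages": ["php", "apache", "mysql", "wordpress"]},
}

def size_for(app):
    """Provisioning side: turn app requirements into a server."""
    req = APP_REQUIREMENTS[app]
    return {"vcpus": req["vcpus"], "ram_gb": req["ram_gb"], "installed": []}

def deploy_app(app):
    """Installer side: request a server via the API, then install onto it."""
    server = size_for(app)                       # the API call
    server["installed"] = list(APP_REQUIREMENTS[app]["packages"])
    return server

site = deploy_app("wordpress")
print(site["installed"])  # ['php', 'apache', 'mysql', 'wordpress']
```

The user only ever clicked one button; the sizing and installation negotiated themselves through the API.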
Lastly, it is not just servers that can be provisioned with the cloud operations model, but network gear as well. Going back to the previous example, instead of one cloud server, we can spin up four cloud servers with two load balancers and two firewalls. Of the four servers, two run WordPress and two run MySQL. Each component is orchestrated automatically, and at speed, to create a complex solution.
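That whole topology can be expressed as one declaration that an orchestrator expands into components. The role names and `orchestrate()` are illustrative assumptions, not a real orchestration tool.

```python
# Sketch of orchestrating servers AND network gear from one declaration:
# two firewalls, two load balancers, two WordPress and two MySQL servers.

TOPOLOGY = {
    "firewall": 2,
    "load_balancer": 2,
    "wordpress_server": 2,
    "mysql_server": 2,
}

def orchestrate(topology):
    """Expand the declaration into individually named components."""
    fleet = []
    for role, count in topology.items():
        fleet.extend(f"{role}-{n}" for n in range(1, count + 1))
    return fleet

fleet = orchestrate(TOPOLOGY)
print(len(fleet))  # 8 components, spun up as one solution
```

One declaration in, eight coordinated components out, which is the seed of the software-defined datacenter described next.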
And all of this leads to software-defined datacenters. Full orchestrations of company IT infrastructure using a set of rules.
Standardizing the Cloud
Now, the inherent strengths or flaws of a cloud system depend on the service provider. The base server hardware of the service provider's cloud carries tremendous weight in the overall system health of the customer's cloud servers. Internal network connection speed is also vital in determining overall efficiency and speed. Lastly, the often-ignored physical proximity of the servers that make up your cloud can also have a lasting effect on your system's overall health.
GigeNET standardized our cloud servers on the Xeon-D platform. The Xeon-D family grew out of a partnership between Facebook and Intel (for more information on that partnership, check Facebook's blog) in a play to increase computing power while reducing power consumption. The Xeon-D allows us to increase the capacity of our cloud and provide resources at impressive scale.
And if the public cloud is not to your liking, we are currently working on automated orchestrations of private cloud infrastructure to turn a months-long project into just a handful of hours.
Our customers talked to us about what bothers them most about the cloud, and not only did we listen, but we are making the solution a reality. Learn more about the GigeNET cloud or receive a free consultation.