Cloud Computing

A cloud server is a virtualized server that provides storage and shared computing resources. Cloud server networks offer nearly unlimited resources and diverse capabilities.

Virtualization allows the demand for a particular set of resources, or an entire network, to be spread across multiple servers at once. This improves security and keeps downtime to a practical minimum.

In addition, customers do not have to worry about upgrades, as the hosting environment is highly scalable and can accommodate networks and sites of any size.

Advantages of Cloud Servers

  • Traffic is routed automatically around network outages
  • Support for multiple public IP addresses
  • Unlimited free private bandwidth
  • R1Soft backup servers
  • Effective management of IP network brownouts
  • Continuous identification and selection of the most favorable network path
  • No need to acquire and maintain your own network and servers
  • Scalability: add RAM, CPU capacity, or storage as applications, data, or website traffic grow

This flexibility, reliability, and affordability make cloud server networks especially beneficial for e-commerce sites, which is why cloud servers are now in such demand.

Functionally unlimited power and storage. Usage-based billing, like a utility. Rapid deployment. Unparalleled flexibility.

The hype around the cloud isn’t new, but the reality is that it still offers incredible advantages for the right applications.

Defining the cloud

It’s always worth clearly defining the cloud, since it’s such a slippery concept: the cloud is a set of virtualized servers.

This means the cloud “server” isn’t confined to one physical box, but is instead a software-defined set of computing resources. It’s a virtual tool, specifically designed to meet the needs of your unique application.

This means that you leverage the distributed computing power of multiple servers—rather than relying on one server to perform your needed tasks.

Who avoids revenue loss with cloud hosting?

The number one reason for businesses to utilize cloud hosting as revenue insurance is to thwart downtime caused by surges in traffic and computing loads.

During a spike in activity, resources can be rapidly allocated to cope with the strain. Since bandwidth and capacity are functionally limited with traditional servers, there’s significant lag in getting new servers online.

With a cloud model, bandwidth and capacity are constantly available at a moment’s notice.
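To make that concrete, here is a minimal sketch of the kind of threshold rule a scaling system might apply during a spike. The function name, target utilization, and pool cap are purely illustrative assumptions, not any particular provider’s implementation.

```python
import math

# Minimal sketch of a threshold-based scaling rule. The target utilization
# and pool cap below are illustrative, not a real provider's defaults.

def instances_needed(current_instances: int, avg_cpu_percent: float,
                     target_cpu_percent: float = 60.0,
                     max_instances: int = 20) -> int:
    """Return how many instances the pool should run to bring average
    CPU utilization back toward the target."""
    if avg_cpu_percent <= 0:
        return max(1, current_instances)
    desired = current_instances * (avg_cpu_percent / target_cpu_percent)
    # Round up so a spike always adds capacity, and cap the pool size.
    return min(max_instances, max(1, math.ceil(desired)))

# During a spike: 4 instances averaging 90% CPU -> grow the pool to 6.
print(instances_needed(4, 90.0))
```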

Avoiding revenue loss by facilitating rapid expansion for seasonal businesses continues to be a growth area for cloud service providers.

So who benefits the most from cloud services? Organizations with business models that rely on high-traffic periods.

Some examples of organizations that benefit from the cloud:

  • Tax preparers
  • Seasonal tourist businesses
  • Colleges & universities
  • Florists
  • Sports teams
  • Ticket and reservation providers
  • Rental agencies
  • Development and marketing agencies
  • Medical providers

Through a cloud model, the initial investment in IT infrastructure is lower and the payoff is immediately tangible.

Since the service is funded like a utility (you pay for what you use), businesses that experience drastic shifts in revenue can avoid spending large amounts of capital and reduce their overall operating costs.

That isn’t to say the cloud is always less expensive—in fact, that’s patently false. Cloud computing can be far more expensive than traditional dedicated servers when it’s mismanaged, and not every business can effectively leverage the potential cost savings of cloud hosting.

Pay for what you use: more about billing & the cloud

While Amazon’s (awesome!) marketing makes the cloud seem cheaper by every metric, this isn’t actually the case.

Since cloud hosting is based on a contracted usage model, it can wind up being orders of magnitude more expensive than traditional dedicated servers. As we’ve explained elsewhere, the cloud is really an evolution of dedicated servers, but the brunt of the capital investment is borne by the hosting company instead of the individual business.
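As a rough illustration of how the two billing models diverge, the sketch below compares an hourly, usage-based bill with a flat monthly contract. Every rate in it is invented purely for the sake of the arithmetic.

```python
# Back-of-the-envelope comparison of usage-based vs. flat contract billing.
# All rates below are made up for illustration only.

HOURS_PER_MONTH = 730

def cloud_cost(hourly_rate: float, hours_used: float) -> float:
    """Usage-based: you are billed only for the hours you actually run."""
    return hourly_rate * hours_used

def dedicated_cost(flat_monthly_rate: float) -> float:
    """Contract-based: the bill is the same regardless of utilization."""
    return flat_monthly_rate

# Left running 24/7, a $0.20/hr instance overtakes a $120/mo flat server...
print(cloud_cost(0.20, HOURS_PER_MONTH))   # 146.0
print(dedicated_cost(120.00))              # 120.0

# ...but used only for a two-week seasonal push, it costs far less.
print(cloud_cost(0.20, 14 * 24))           # 67.2
```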

By paying for what you use, a smart organization can reduce their operating costs. They can rapidly experiment with the intention of failing fast—for example, deploying a new application or service for a limited time as a test case, testing different landing pages or scaling up their business during a push for more sales.

The cloud model also allows smaller businesses to access powerful resources for short periods at very low cost. This can be extremely useful for things like market research and other projects that require computing power and bandwidth beyond the scope of their budget, without investing in costly servers.

But as we’ve hinted, the pay-per-use model can make overspending easy. It’s crucial to understand how to manage your cloud effectively. If that’s too daunting, GigeNET offers managed cloud services that can help alleviate the stress of gambling with your budget.

Some businesses find the unpredictability of cloud billing to be a hassle and instead opt for managed services or a dedicated server—it’s worth having a candid discussion with an expert systems architect before you jump into something expensive that seems like a good fit.

The technical benefits of cloud computing: uptime, uptime, uptime

There are practical real-world benefits to utilizing the cloud, which we absolutely don’t want to downplay. These include:

  • Routing web traffic around network outages and bottlenecks
  • Avoiding hardware outages through distribution of resources
  • Multiple public IP addresses for redundancy
  • Management of IP network brownouts
  • No need to navigate the procurement process for servers
  • Powerful scaling capabilities
  • Continuous best-path network routing

For each instance, you gain uptime and operational stability. It’s generally intuitive—distributing your data over multiple servers means you aren’t susceptible to a single point of failure crippling your organization.

A hidden benefit of cloud computing: highly redundant backups

The other benefit of the cloud service model is that, by its nature, your data is distributed across many servers.

While it’s advisable to use redundant backups (like combining R1Soft backups and RAID arrays), the cloud is generally much safer because of its many nodes. Additionally, since the cloud’s servers are kept off site, your data is protected from on-site disasters, power outages and downtime.

The cloud, however, is not automatically a method for backing up your valuable data. We’d recommend investing carefully in a backup service that can meet the specific needs of your business instead of imagining that the cloud is foolproof. There have been cloud failures at scale, even for providers like Google, so ensuring that you have your own unique backup strategy is crucial to data security.

However, the cloud model still offers a kind of simple backup that can help reduce downtime significantly, particularly if your current server hardware is outdated or unreliable.

Dare to compare: GigeNET’s competitive cloud services

We’ve developed our cloud to compete with the big cloud providers. We’ve been developing and refining our cloud capacity for nearly a decade—we were early adopters.

If your organization needs the flexibility and reliability that a cloud service offers, contact one of our expert systems architects today for help with planning and implementing an ideal cloud solution.

GigeNET has been in business for more than 20 years, and our goal is to make the internet better for everyone. We aim to be your hosting partner for life, and we have the servers and talent to back up our claims.

Try GigeNET cloud for yourself. 

Do you need your own dedicated server, or can you trust a cloud service for hosting?

Before you make a decision, it’s crucial to understand the differences between dedicated server hosting & cloud hosting—which are rapidly emerging as the dominant modes for organizational hosting.

There are real budgeting and operational implications to consider before settling on either solution.

Cloud servers: scalable flexibility—at a premium

With rapid deployment and a virtualized nature, cloud hosting models allow your organization to circumvent a direct investment in hardware. This benefits businesses that require a high degree of operational flexibility, want to experiment in the short-term or have highly seasonal demands.

The click of a button can deploy a new virtual server, allowing you to rapidly scale up your operations during times of high load, product releases, seasonal rushes and other demanding applications that require instantly available bursts of computing power. This can typically be accomplished from anywhere with an internet connection and a web browser—meaning that cloud hosting is a truly portable solution.
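Under the hood, that “click of a button” is typically a single authenticated call to the provider’s provisioning API. The sketch below shows the general shape of such a request; the endpoint, token, and payload fields are hypothetical placeholders, not GigeNET’s actual API.

```python
# Hypothetical sketch of a provisioning call; the endpoint, token, and
# payload fields are placeholders, not any real provider's API.
import requests

API_URL = "https://api.example-host.com/v1/servers"   # placeholder URL
API_TOKEN = "YOUR_API_TOKEN"                           # placeholder credential

payload = {
    "name": "seasonal-web-01",
    "vcpus": 4,
    "ram_gb": 8,
    "disk_gb": 80,
    "image": "ubuntu-22.04",
}

response = requests.post(
    API_URL,
    json=payload,
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    timeout=30,
)
response.raise_for_status()
print("New server ID:", response.json().get("id"))
```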

Since the cloud is virtualized, your organization isn’t tethered to the strength of a single server or limited in its computing power—you can scale up, on demand. Load balancing allows for rapid distribution during demanding workloads, tapping into the power of multiple servers instead of relying on one powerful server.
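The load-balancing idea itself is simple: incoming requests are spread across a pool of identical servers rather than queued on one machine. A bare-bones round-robin sketch follows; real balancers also weigh health checks and current load, and the backend addresses here are illustrative.

```python
# Bare-bones round-robin load balancing: each incoming request is handed
# to the next server in the pool. Backend addresses are illustrative.
from itertools import cycle

backends = cycle(["10.0.0.11", "10.0.0.12", "10.0.0.13"])

def route() -> str:
    """Pick the next backend for an incoming request."""
    return next(backends)

for request_id in range(6):
    print(f"request {request_id} -> {route()}")
```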

Cloud hosting is also exceptionally resilient. By running numerous instances, cloud hosting is inherently redundant. Coupled with a secure backup strategy, the cloud provides a degree of restorability and operational stability that other solutions struggle to match—with the caveat that this is highly dependent on the hosting company’s decisions about which servers host its cloud.

This degree of flexibility and decreased initial investment comes at a price: cloud services are billed like a utility—so you pay for what you use. This means that the cloud, while a genuinely powerful and useful option for many organizations—can rapidly deteriorate into a wildly expensive, budget-busting affair without proper planning and skillful management.

Explore GigeNET’s cloud servers.

Dedicated servers: reliability & performance—with the right planning

Deploying a dedicated server is a more complicated task than deploying a cloud server. It requires procuring, installing and configuring a physical server in a data center—while this process usually takes only a few days, it is less than ideal when an organization needs instantly available computing capacity because of a rush.

For organizations with consistent demands or operational IT requirements that don’t change rapidly over time, dedicated servers represent an opportunity to introduce high-performing capabilities and an exceptional degree of business continuity.

Since the server is controlled solely by your organization, you don’t risk bad neighbors or unscrupulous actors introducing instability—the server is yours to do with as you please, and not shared with others who may mismanage their resources or interrupt your services.

That isn’t to say that dedicated servers can’t be designed to meet the needs of growing businesses—a skilled systems architect will build a strategy that meets your needs today and your needs in the coming years as well.

Managing a dedicated server requires a competent IT team capable of overseeing server maintenance and creating a strategy for keeping software and security updates under control. For organizations that need the benefits of a dedicated server but don’t have an IT team, there are managed services that let you leverage our veteran support team.

However, there is one clear operational benefit to a dedicated server: consistent, stable, and predictable billing based on a contract, rather than usage-based pricing structures that can confound budgets.

Explore GigeNET’s dedicated servers.

Deciding between the cloud and dedicated servers

Choosing your hosting solution requires taking stock of your organization: what are its goals? Where do you see your internet presence in 1, 3 and 5 years? What sort of IT requirements do you really have?

We have experienced systems architects who will work with you to develop an affordable, reasonable strategy for scalable, flexible cloud hosting—or a reliable, consistent dedicated server.

If you’re looking to resolve the challenge of finding the right hosting strategy for your organization, receive a free consultation.

The evolution of private cloud

There are a variety of computing workloads that support successful business operations. If we were to maximize resource usage for these workloads, we would end up with a vast assortment of computer chassis, hard drives, RAM chips, and a host of various other computer parts. And although this assortment would provide coverage for the current needs, it would not be able to keep up with business growth (or shrinkage), nor would it be economical to maintain.

But that is where many on-premises businesses find themselves: a large collection of mish-mashed gear that was perfect when purchased but has fallen behind in covering business needs economically. Unfortunately, when many of these businesses turn to the cloud, they find that the over-standardization of the market has left them with fewer options than their needs dictate, and at a higher price tag.

Private cloud was designed to address the needs of businesses that find themselves in these situations. Private cloud is an efficient methodology for defining workload space within a fixed-cost environment. That being said, private cloud is not found within the public cloud business model. As many analysts and industry insiders have said before, private cloud is almost the antithesis of public cloud, for it shackles public cloud expansion by placing VMs in a much smaller, fixed environment. In fact, some analysts have gone so far as to reject the private cloud operations model completely.

So why is it that many businesses are finding more success with private cloud than with public cloud?

What is a workload?

Workload is a very generic industry term that means an independent collection of code, service (or app), or similarly packaged process. A defining factor in the definition, especially when looking at today’s infrastructure technologies, is the independent nature. Can I pick up this service, as is, off of the current server and run it on a different one? In a cloud environment, would I be able to move the service from one VM to the next?

A few examples of computing workloads (be it cloud or otherwise) include batch, database, mobile application, backup, website, and analytic workloads.

A batch workload, as an example, includes processing large volumes of data and can be run off hours at scheduled intervals. Batches include data reconciliations, audits, and system syncing. These workloads rely on predetermined scripts, access to the data, and a pool of compute and memory (whether that pool is fixed such as on a full server or dynamic such as in the cloud is irrelevant). As long as the originating system has access to the data or systems involved, those scripts can be picked up and moved to a new server.
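A minimal sketch of that pattern follows, assuming a couple of illustrative CSV exports: a self-contained reconciliation script that can be scheduled off-hours and moved to any server with access to the same data.

```python
# Sketch of a batch reconciliation job: self-contained, reads its inputs,
# and can be moved to any server with access to the same data.
# The CSV file names and the "txn_id" column are illustrative.
import csv
from datetime import datetime

def reconcile(ledger_path: str, bank_path: str) -> list:
    """Return transaction IDs present in the ledger but missing from the bank export."""
    with open(ledger_path, newline="") as f:
        ledger_ids = {row["txn_id"] for row in csv.DictReader(f)}
    with open(bank_path, newline="") as f:
        bank_ids = {row["txn_id"] for row in csv.DictReader(f)}
    return sorted(ledger_ids - bank_ids)

if __name__ == "__main__":
    # Typically invoked off-hours by a scheduler, e.g. cron: 0 2 * * * python reconcile.py
    missing = reconcile("ledger.csv", "bank_export.csv")
    print(f"{datetime.now().isoformat()} missing transactions: {missing}")
```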

The Evolution of Dedicated Servers

The reason that more businesses are finding success with private cloud is that private cloud is not the evolution of public cloud, it is the evolution of dedicated servers.

Dedicated server environments grew in popularity as the need for root access, dedicated static IP addresses, and dedicated resource pools increased. In the early days of hosting, both root access and static IPs were firmly out of reach in a shared environment. The unfortunate consequence was that those who had smaller workloads but required either a dedicated IP or root access had to move to a dedicated server. These scenarios helped push forward VPS hosting and, later, the cloud.

On the other end of the spectrum, many dedicated server users had multiple workloads to run, and many placed them on the same server. Although this helped from a cost standpoint and reduced complexity, it also increased the potential for performance issues (compute and storage bottlenecks) as well as security and business continuity problems. The fix was to purchase multiple servers, which increased cost but solved the performance issues.

Although the cloud was an answer to these problems, it wasn’t always the most economical option, even compared with purchasing multiple dedicated servers.

Cloud pricing is based on overselling the hardware. If you took 10 VMs from any number of cloud hosts and compared their price to an equal measure of server hardware purchased from a dedicated hosting outfit, you would find that, without automated processes for spinning VMs down and up, the cloud host’s pricing was greatly inflated.
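The arithmetic behind that claim is easy to sketch. The prices below are invented for illustration, but they show why always-on VMs priced by the hour can cost more than comparable dedicated hardware, and why spin-down automation changes the picture.

```python
# Illustrative only: ten always-on VMs billed hourly vs. one dedicated box
# with comparable total capacity. All prices are invented.

HOURS_PER_MONTH = 730
vm_hourly = 0.08            # hypothetical price per VM-hour
vm_count = 10
dedicated_monthly = 350.00  # hypothetical dedicated server of equal capacity

always_on = vm_count * vm_hourly * HOURS_PER_MONTH
print(f"10 VMs, never spun down:  ${always_on:,.2f}/mo")      # $584.00

# Automation that keeps idle VMs off 60% of the time changes the picture.
automated = always_on * 0.4
print(f"10 VMs, 60% spun down:    ${automated:,.2f}/mo")      # $233.60

print(f"Equivalent dedicated box: ${dedicated_monthly:,.2f}/mo")
```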

Now, if we apply what we learned from cloud operations to the dedicated server world, we find something remarkable. The ability to take your current server configuration and streamline it so that each workload receives the proper amount of resources is game-changing for dedicated servers. It means consolidating servers. It means real flexibility in provisioning resources to fit each workload. And it gives DevOps engineers and sysadmins the ability to automate provisioning across their servers based on a set of predetermined criteria.

With Dedicated Private Cloud, a user can take a dedicated server environment and carve it up into the appropriate VMs necessary to handle their current workloads, with no need to pay for overhead or per-VM licensing costs.
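A toy example of that carving-up exercise, with made-up host capacity and workload requirements:

```python
# Toy example of carving one dedicated box into right-sized VMs.
# Host capacity and workload requirements are made up for illustration.

server = {"vcpus": 32, "ram_gb": 128, "disk_gb": 2000}

workloads = {
    "web-frontend": {"vcpus": 8,  "ram_gb": 16, "disk_gb": 100},
    "database":     {"vcpus": 12, "ram_gb": 64, "disk_gb": 800},
    "batch-jobs":   {"vcpus": 4,  "ram_gb": 16, "disk_gb": 200},
    "monitoring":   {"vcpus": 2,  "ram_gb": 8,  "disk_gb": 50},
}

def fits(host, vms):
    """Check that the requested VMs fit inside the host's fixed resource pool."""
    return all(sum(vm[key] for vm in vms.values()) <= host[key] for key in host)

used = {key: sum(vm[key] for vm in workloads.values()) for key in server}
print("requested:", used, "| fits on host:", fits(server, workloads))
```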

Whereas public cloud is amorphous, with nigh-limitless resource pools, private cloud is more like slapping a perfectly formed organizer on your resources. Some companies need the ability to grow and shrink in seconds, or the lowered costs of operating a few cloud servers. Most companies, however, just need a way to organize their workloads, providing each with the right balance of resources to keep the engine running at peak efficiency.

What is cloud?

Cloud computing is an operations model, and a cloud server (or VM) is the productization of that operations model.

When it comes to the actual cloud server, there is no cloud layer or cloud software. Software called a hypervisor is used to abstract resources through virtualization. Cloud itself is not a technology, but a bundle of technologies and procedures.

With your average physical server, the physical hardware along with its operating system defines the minimum and maximum resources available. However, because cloud servers are abstractions, they are not bound by these predefined limits. Instead, cloud servers started out as user-defined: every time you spun up a cloud server, you defined its resource usage. That is all well and good, but abstraction can lead to so much more.

Cloud servers eventually moved from user-defined to software-defined. In other words, instead of a user directly orchestrating cloud server deployments, the user creates a list of rules, and cloud servers are deployed and defined by those rules. These rules create an “elastic” server: one that can both expand and contract as conditions change. The rules also create “cattle” servers, servers that are started, stopped, and have their data transferred entirely according to rules. We call them cattle because they are servers we are not attached to (whereas I named every one of my personal computers after computer villains; true story, GLaDOS is my current one).
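A minimal sketch of what such a rule might look like, with illustrative tiers and thresholds: observed load maps to the size the elastic server should expand or contract to.

```python
# Illustrative "elastic" sizing rule: recent load maps to the resources the
# server should expand or contract to. Tiers and thresholds are made up.

RULES = [
    # (max average CPU %, vCPUs, RAM in GB)
    (25, 2, 4),
    (60, 4, 8),
    (85, 8, 16),
    (100, 16, 32),
]

def desired_size(avg_cpu_percent: float):
    """Return the (vcpus, ram_gb) tier for the observed CPU utilization."""
    for ceiling, vcpus, ram_gb in RULES:
        if avg_cpu_percent <= ceiling:
            return vcpus, ram_gb
    return RULES[-1][1], RULES[-1][2]

print(desired_size(18))   # (2, 4)  -> contract during quiet hours
print(desired_size(78))   # (8, 16) -> expand under load
```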

The next technology that folds into the cloud is the API (Application Programming Interface). Let’s say you have a piece of software and I have a piece of software. My software creates cloud servers through a set of rules; your software installs software packages onto servers. Through the use of an API, your software can connect to mine, take the system requirements of an application, and use that to provision a server. Now, when a user clicks “Create a WordPress site” on your system, your API tells my system to provision a cloud server with the appropriate resources and to install PHP, Apache (I would say NGINX, but today I feel like saying Apache), MySQL, and WordPress. This process happens in seconds.
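Sketched in code, with hypothetical names, the hand-off looks something like this: one side knows the application’s requirements, the other knows how to turn a spec into a running server (see the earlier provisioning-request sketch).

```python
# Hypothetical sketch of the hand-off described above: one system knows the
# application's requirements, the other turns a spec into a provisioning
# request. All names and sizes are illustrative.

WORDPRESS_REQUIREMENTS = {
    "vcpus": 2,
    "ram_gb": 4,
    "disk_gb": 40,
    "packages": ["php", "apache2", "mysql-server", "wordpress"],
}

def build_provision_request(app_name: str, requirements: dict) -> dict:
    """Translate an application's needs into the body a provisioning API would receive."""
    return {
        "name": f"{app_name}-server",
        "vcpus": requirements["vcpus"],
        "ram_gb": requirements["ram_gb"],
        "disk_gb": requirements["disk_gb"],
        "post_install": requirements["packages"],
    }

print(build_provision_request("wordpress", WORDPRESS_REQUIREMENTS))
```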

Lastly, it is not just servers that can be provisioned with the cloud operations model, but network gear as well. Going back to the previous example, instead of one cloud server we can spin up four cloud servers, two load balancers, and two firewalls: two of the servers running WordPress and two running MySQL, with each component orchestrated automatically and at speed to create a complex solution.

And all of this leads to software-defined datacenters: full orchestration of a company’s IT infrastructure using a set of rules.

Now, the inherent strengths or flaws of a cloud system depend on the service provider. The base server hardware of the service provider’s cloud carries tremendous weight in the overall health of the customer’s cloud servers. Internal network connection speed is also vital in determining overall efficiency and speed. Lastly, the often-ignored physical proximity of the servers that make up your cloud can also have a lasting effect on your system’s overall health.

GigeNET standardized our cloud servers on the Xeon-D platform. The Xeon-D family was based on a partnership between Facebook and Intel (for more information on that partnership, check Facebook’s blog) in a play to increase computing power while reducing power consumption. The Xeon-D allows us to increase the capacity of our cloud and provide resources at impressive scale.

And if the public cloud is not to your liking, we are currently working on automated orchestrations of private cloud infrastructure to turn a months-long project into just a handful of hours.

Our customers told us what bothers them most about the cloud, and not only did we listen, but we are making the solution a reality. Learn more about the GigeNET cloud or receive a free consultation.

Underutilizing Cloud Computing Resources

The cloud is a computing wonder that will change IT forever. For the cloud to be revolutionary, cloud computing resources must be integrated into IT infrastructure in a manner that allows them to be managed and controlled. However, Osterman Research and Electric Cloud® recently released the results of a survey of senior-level IT professionals on public and private cloud use and implementation, and the findings revealed an important issue: although cloud computing is now widely implemented, many companies and organizations have yet to fully leverage their cloud infrastructure.

Of the companies surveyed that use either public or private cloud computing, it was found that “52% have cloud infrastructure resources that are hardly ever or never used.” As well, 47% of the companies reported some or a lot of excess capacity. These findings reveal that more than half of companies are underutilizing cloud computing.

The essential issue with productive use of cloud technologies is efficient management of cloud resources. Software development tasks that should leverage the private cloud to best effect include systems testing, requirements planning and tracking, static analysis, deployment automation, and source code control. For developers to receive the full benefits of the cloud environment, including better and faster performance, they have to be able to harness software development and testing tools suited to private cloud computing. For effective adoption and management of cloud resources, the same techniques that data center administrators employ must also be used. By doing so, developers will be able to fully leverage the cloud’s resources.

Cloud computing has evolved to become a vital component of a growing number of companies’ computing resources, and it is forecast to become one of the most essential computing resources in the future. Proper configuration for cloud development, testing, and training is essential for a company to fully leverage the power and performance of the cloud. Given the monitoring, management, integration, and automation capabilities of the cloud, it is essential that each type of resource is leveraged for maximum benefit.

To attain the clear benefits offered by the cloud, companies should implement a clearly defined strategy for using and managing it. The Osterman Research and Electric Cloud study will help IT developers realize the size of the cloud opportunity and encourage them in the development of their own cloud resources. Find out more from GigeNET about cloud server choices for your company.

12 Essential System Administration Cheat Sheets

When it comes to websites and your online presence, there is an abundance of styles, ideas, and languages you can use. Some languages are more important, versatile, and common than others. These languages will make sure your website looks and works correctly in all browsers and operating systems, which will save you time and frustration. Look over the list below to see which languages you should have in your arsenal and how they are used.

First in this lineup is CSS. Cascading Style Sheets (CSS) are used for the presentation and layout of your website. This language is used in partnership with other languages to lay out tables, divs, and lists, and to set the style or color of certain elements. Well-formed CSS gives your website great presentation and makes coding much easier and cleaner. Instead of setting colors and spacing in the body of the code, these can be placed in a CSS rule that can simply be referenced. This keeps your code clean and easy to follow. To add to the ease of use and functionality, you can use a framework such as Bootstrap, which makes spacing and layout much easier.

Next in line is HTML. This language is the most basic and is the bread and butter of most websites on the web today. HTML structures the content of your website, including tables, divs, lists, links, and text, and references the CSS rules you created to style those elements. HTML has come a long way and has evolved its functionality over the years, doing away with outdated practices.

Last but certainly not least is PHP. PHP is a very popular, multi-purpose scripting language. It creates, calls, and uses functions to push and pull data to and from the database. A great example, one many of us use several times a day, is logging into a website. When you log in, PHP controls the fields: when you enter your username and password, it checks the database against the information you entered and either completes the login or informs you that your information is incorrect. This is the most common example, but definitely not the only one. PHP powers blogs, images, data, and the list continues. It is one of the most powerful and important languages for web development.

These are not the only important languages for web development, but they are a great place to start. With these fundamentals, you can begin programming your website and build on from there. A few other notable tools are JavaScript, jQuery, and Ruby on Rails. Learning a programming language used to mean reading large, boring books and practicing in an IDE; recently, the learning process has become much simpler and can be done online. A great place to begin is codecademy.com, which lets you start lessons on the languages you want to learn and creates projects based on those languages. Once you feel comfortable with a language, you can then use codefights.com. This site lets you code-battle other users and learn from your wins and losses.

You will also need a program to code in. If you are using a platform like WordPress, the coding can be done inside the admin interface itself. If you are looking to code directly on a server, you can use an editor or IDE such as Adobe Dreamweaver, Notepad++, Sublime Text, PhpStorm, or Aptana Studio.

Most of the time, when hosting providers refer to cloud hosting, they tend to leave out the fact that the cloud is primarily built on a hardware-based infrastructure of dedicated servers. The strength and resiliency of your cloud infrastructure depend on its underlying core hardware and the network where it is hosted.

For the purpose of this discussion, I am going to stick to the definition of cloud as an off-premises solution built on a cluster of dedicated servers where resource pooling and sharing exist through virtualization. The cloud is not an instance of a virtual machine built on a single dedicated server.

Why should a startup go with cloud hosting?

I would hope that any startup has at least taken the time to vet its technology budget against the odds. Even when financially prepared, the “unknown” factor remains: the company may scale higher than expected, or simply fail. If a company grows quickly, resources need to be allocated with little to no downtime in order to satisfy demand. Cloud answers the question of uncertainty by allowing users to add resources on the fly in the event of significant growth. You are not limited to a single machine and its resources; the sky is the limit if you choose a good platform that promotes business agility and continuity. The last thing you need is to shut down your business in order to increase capacity. This is exactly what VMware, the #1 virtualization software vendor, evangelizes: focus on your day-to-day business and don’t let technology be the reason for failure.

It’s possible that a startup may not have the budget to hire a CTO to build out clusters and load-balanced solutions, or to troubleshoot on-site internal technology. Many cloud providers have now made a virtual CTO available to their customers: someone who works alongside the organization to address all of its technology needs and promote productivity and business continuity for a fraction of the cost. Get rid of your local server room and cut costs in half!

Why would a startup choose dedicated server hosting?

Before there was cloud, there were dedicated servers. The hardware-based environment precedes the cloud! All cloud implementation and design need to take into consideration the underlying hardware: processing power, RAM capacity, and the drives involved. That’s good engineering.

Depending on the nature of what’s being hosted, a dedicated server fully dedicated to a single entity or organization may be a compliance requirement. FISMA (Government), PCI-DSS (Credit card), HIPAA (Health) may require that a single tenant solution be deployed as opposed to adopting a cloud infrastructure that’s also housing other organizations.

While a private cloud infrastructure may sometimes satisfy the above, the dedicated world truly promotes logical and physical security. Back in the day, we used to think servers were expensive, but now the cost is starting to come down to the cloud level. A single server chassis can now house multiple modular nodes, like Supermicro’s MicroCloud, making it convenient for an organization to host all of its IT infrastructure in a single chassis.

How about situations where money-hungry providers tend to oversell their cloud infrastructure? How many other users are you truly sharing resources with? What’s your uplink connection to the internet? (Usually unknown). What type of security risks are you exposing your organization to? The list goes on and on.

Conclusion

In order to run your business the best way possible, it’s important to understand the implications of both sides of the hosting platform. While the cloud might be the easiest to scale with, dedicated servers can help address certain security issues. Don’t fall into the trap of moving to the cloud just because the masses are doing so while ignoring your specific business requirements. One size does not fit all.

The GigeNET Cloud difference 

With the proliferation of cloud services across the internet and business verticals, there’s a high level of confusion and panic when it comes to selecting the right type of cloud for your organization. It’s crucial to work with a provider who understands best practices for cloud deployment while putting the customer in the driver’s seat. Why should anyone choose the GigeNET Cloud platform?

  • GigeNET’s shared (public) cloud offering is built on a cluster of enterprise servers connected at 10 Gbps, redundant across the board, where resources are not over-utilized. Be careful and ask questions about the cloud you are about to purchase, because much of the competition will sell you a VPS instead of a cloud.
  • GigeNET cloud is not limited to a single location! Chicago, Los Angeles, and Washington DC all have cloud offerings to cater to the needs of clients looking to set up geographically redundant solutions.
  • GigeNET cloud is not confined to a single virtualization platform. Our engineers have the ability to think outside the box and deploy solutions like Proxmox, VMware, and Citrix XenServer, unlike competitors where you have to use what’s offered.
  • When setting up a private cloud solution, GigeNET does not share the network or the server nodes with any other clients. The entire infrastructure is dedicated to your organization.

Because of the above, GigeNET continues to position itself as the mothership of all things custom when it comes to the cloud. Receive free consultation services about your project from our experts.

Demand for Public Cloud Computing Services Remains Strong, but Adoption Hurdles Persist

NEW YORK, Aug. 20, 2013 /PRNewswire/ — Market Monitor, a service of 451 Research, projects that Cloud market revenue will increase at a 36% compound annual growth rate (CAGR), putting the cloud computing market just shy of $20 billion at the end of 2016.

The recently published Cloud as-a-Service overview report provides current market size and five-year growth rates for the infrastructure-as-a-service (IaaS), platform-as-a-service (PaaS) and infrastructure software-as-a-service (SaaS) segments; a competitive landscape analysis for each category; and forecasts for revenue generated by 309 cloud-services providers and technology vendors across 14 sectors.

Leveraging 451 Research’s deep insight into established cloud vendors and startups, Market Monitor employs a pure bottom-up approach, with active participation from sector analysts. The resulting forecast incorporates the unique traits, strengths and weaknesses of each market participant, and when used with in-depth qualitative research from 451 Research, Market Monitor provides a deep, holistic view of the cloud computing marketplace.

“Cloud computing is on the upswing and demand for public cloud services remains strong,” stated Yulitza Peraza, Analyst, Quantitative Services, 451 Research and coauthor of the report. “However, public cloud adoption continues to face hurdles including security concerns, transparency and trust issues, workload readiness and internal non-IT-related organizational issues.”

Additional report highlights include:

  • IaaS accounted for the majority of total market revenue in 2012, with more than half of the total public cloud market share, and a 37% CAGR through 2016.
  • The PaaS layer accounted for nearly a quarter (24%) of the total public cloud revenue in 2012, and will experience the fastest growth – a projected CAGR of 41% between 2012 and 2016.
  • The infrastructure SaaS sector, which does not include enterprise SaaS revenue, represented 25% of total cloud revenue in 2012 and is expected to generate a 29% CAGR through 2016.
  • Publicly traded companies comprise 23% of the cloud vendors tracked, and generate 78% of the total revenue. The majority of vendors are still below the $5 million revenue threshold; vendors that constitute the cloud ‘midmarket’ (between $5 million and $50 million in revenue) accounted for 25% of total revenue in 2012.
  • Only a dozen vendors generated more than $75 million each in revenue in 2012.
  • 83% of all service providers generated $15 million or less each in 2012 revenue.

“Several vendors currently included in the cloud ‘midmarket’ are titans in their core IT sectors,” said Greg Zwakman, Research Director, Quantitative Services, 451 Research. “It is still early days for the cloud divisions at these vendors, and running the same revenue distribution analysis against our 2016 forecasts paints a different picture.”

About Market Monitor: Cloud Computing
Market Monitor: Cloud Computing as-a-Service is a market-sizing and forecasting service that offers a bottom-up market size, share and growth forecast for the rapidly evolving marketplace for cloud computing products delivered ‘as a service.’ The service covers infrastructure as a service (IaaS), platform as a service (PaaS) and infrastructure software as a service (online backup and recovery, cloud archiving and IT management as a service). Market Monitor provides a five-year forecast of market size and growth, full market and sector competitive landscapes, a summary of vendor revenue distribution, and geographic and vertical breakouts for each segment.

About 451 Research
451 Research, a division of The 451 Group, is focused on the business of enterprise IT innovation. The company’s analysts provide critical and timely insight into the competitive dynamics of innovation in emerging technology segments. Business value is delivered via daily concise and insightful published research, periodic deeper-dive reports, data tools, market-sizing research, analyst advisory, and conferences and events. Clients of the company – at vendor, investor, service-provider and end-user organizations – rely on 451 Research’s insight to support both strategic and tactical decision-making. 451 Research is headquartered in New York, with offices in key locations, including San Francisco, Washington DC, London, Boston, Seattle and Denver.

How Cloud Could Help Cure Cancer

Computer clouds have been credited with making the workplace more efficient and giving consumers anytime-anywhere access to emails, photos, documents, and music as well as helping companies crunch through masses of data to gain business intelligence.

Now it looks like the cloud might help cure cancer too.

The National Cancer Institute plans to sponsor three pilot computer clouds filled with genomic cancer information that researchers across the country will be able to access remotely and mine for information.

The program is based on a simple revelation, George Komatsoulis, interim director and chief information officer of the National Cancer Institute’s Center for Biomedical Informatics and Information Technology, told Nextgov. It turns out the gross physiological characteristics we typically use to describe cancer — a tumor’s size and its location in the body — often say less about the disease’s true character and the best course of treatment than genomic data buried deep in cancer’s DNA.

That’s sort of like saying you’re probably more similar to your cousin than to your neighbor, even though you live in New York and your cousin lives in New Delhi. It means treatments designed for one cancer site might be useful for certain tumors at a different site, but, in most cases, we don’t know enough about those tumors’ genetic similarities yet to make that call.

The largest barrier to gaining that information isn’t medical but technical, said Komatsoulis who’s leading the cancer institute’s cloud initiative. The National Cancer Institute is part of the National Institutes of Health.

The largest source of data about cancer genetics, the cancer institute’s Cancer Genome Atlas, contains half a petabyte of information now, he said, or the equivalent of about 5 billion pages of text. Only a handful of research institutions can afford to store that amount of information on their servers let alone manipulate and analyze it.

By 2014, officials expect the atlas to contain 2.5 petabytes of genomic data drawn from 11,000 patients. Just storing and securing that information would cost an institution $2 million per year, presuming the researchers already had enough storage space to fit it in, Komatsoulis told a meeting of the institute’s board of advisers in June.

To download all that data at 10 gigabits per second would take 23 days, he said. If five or 10 institutions wanted to share the data, download speeds would be even slower. It could take longer than six months to share all the information.
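As a rough check of that figure, assuming the full 2.5-petabyte atlas moving over a 10-gigabit-per-second link:

```python
# Rough check of the 23-day figure: 2.5 petabytes over a 10 Gb/s link.
bytes_total = 2.5e15          # 2.5 petabytes
link_bits_per_second = 10e9   # 10 gigabits per second
seconds = bytes_total * 8 / link_bits_per_second
print(seconds / 86400)        # ~23.1 days
```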

That’s where computer clouds — the massive banks of computer servers that can pack information more tightly than most conventional data centers and make it available remotely over the Internet — come in. If the genomic information contained inside the atlas could be stored inside a cloud, he said, researchers across the world would be able to access and study it from the comfort of their offices. That would provide significant cost savings for researchers. More importantly, he said, it would democratize cancer genomics.

“As one reviewer from our board of scientific advisers put it, this means a smart graduate student someplace will be able to develop some new, interesting analytic software to mine this information and they’ll be able to do it in a reasonable timeframe,” Komatsoulis said, “and without requiring millions of dollars of investment in commodity information technology.”

It’s not clear where all this genomic information will ultimately end up. If one or more of the pilots prove successful, a private sector cloud vendor may be interested in storing the information and making it available to researchers on a fee-for-service basis, Komatsoulis said. This is essentially what Amazon has done for basic genetic information captured by the international Thousand Genomes Project.

A private sector cloud provider will have to be convinced that there’s a substantial enough market for genomic cancer information to make storing the data worth its while, Komatsoulis said. The vendor will also have to adhere to rigorous privacy standards, he said, because all the genomic data was donated by patients who were promised confidentiality.

One or more genomic cancer clouds may also be managed by university consortiums, he said, and it’s possible the government may have an ongoing role.

The cancer institute is seeking public input on the cloud through the crowdsourcing website Ideascale. The University of Chicago has already launched a cancer cloud to store some of that information. It’s not clear yet whether the university will apply to be one of the institute’s pilot clouds.

Because the types of data and the tools used to mine it differ so greatly, it’s likely there will have to be at least two cancer clouds after the pilot phase is complete, Komatsoulis said. As genomic research into other diseases progresses, it’s possible that information could be integrated into the cancer clouds as well, he said.

“Cancer research is on the bleeding edge of really large-scale data generation,” he said. “So, as a practical matter, cancer researchers happen to be the first group to hit the point where we need to change the paradigm by which we do computational analysis on this data . . . But much of the data that I think we’re going to incorporate will be the same or similar as in other diseases.”

As scientists’ ability to sequence and understand genes improves, genome sequencing may one day become part of standard care for patients diagnosed with cancer, heart problems and other diseases with a genetic component, Komatsoulis said.

“As we learn more about the molecular basis of diseases, there’s every reason to believe that in the future if you present with cancer, the tumor will be sequenced and compared against known mutations and that will drive your physician’s treatment decisions,” he explained. “This is a very forward-looking model but, at some level, the purpose of things like The Cancer Genome Atlas is to develop a knowledge base so that kind of a future is possible.”

Does cloud computing really reduce energy demands and greenhouse gas emissions? This has been a subject of debate for some time.

The large data centres used for cloud computing are usually more energy efficient than in-house data centres, but there are transmission costs and other overheads.

Add to that the difficulty of defining proper metrics for cloud computing, and comparing its energy consumption with that of older styles of computing becomes hard. Now a study sponsored by GeSI (the Global e-Sustainability Initiative – the people who brought you the Smart 2020 and Smarter 2020 reports) and Microsoft has put some firm numbers on the energy savings from cloud computing.

They are substantial. The study shows that increased use of cloud computing services has the potential to save over US$2.2 billion (€1.65 billion) a year. The savings come from a reduction in energy consumption and reduced global environmental damage. The study finds cloud computing is 95% more efficient – just 1 tonne of greenhouse gas (GHG) created by cloud leads to 20 tonnes abated from customers.

The study, called ‘The Enabling Technologies of a Low-Carbon Economy – a Focus on Cloud Computing’, examines both the energy savings and GHG abatement potential of cloud computing in 11 countries – Brazil, Canada, China, the Czech Republic, France, Germany, Indonesia, Poland, Portugal, Sweden and the UK.

Download it here.

It was conducted by a research team from Harvard University, Imperial College and Reading University. The study says that 11.2 TWh less energy will be consumed annually if 80% of public and private organisations in the countries studied opt to provide cloud-based email, customer relationship management (CRM) and groupware solutions to their staff, beyond current levels of adoption.

This translates to 75% of the energy consumed by the Capital Region of Brussels, or 25% of the energy consumed by London. It is equivalent to abating 4.5 megatonnes of CO2 emissions annually, or taking more than 1.7 million cars off the road, with 60% of these potential savings coming from small or micro-sized firms.

Dr Peter Thomond, who led the study, explains: “The findings show, contrary to the perception of power hungry data centres, that the energy efficiency of cloud infrastructure and its embedded carbon outperform on-site services by an order of magnitude. And that is only with these three applications – there are hundreds more.”

But the study says there are many hurdles to the broad adoption of cloud-based services. National policy-making creates uncertainty, even in positive policy documents such as China’s 12th Five Year Plan of Social and Economic Development and Britain’s Carbon Reduction Commitment, which provide a strong public pledge to reduce GHG emissions.

“Few government policies genuinely embrace the enabling potential of the ICT sector; they treat it more as part of the problem and less as part of the solution, and government intent and targets are often neither clear nor justified,” says Dr Thomond.

“Governments often fail to embrace the full range of policy instruments at their disposal, in particular, there is an under-utilisation of government leading by example.

“It would help market adoption if more governments walked their talk when providing their services, or considering service procurement. This said, the ultimate responsibility for the spread of enabling technologies, such as cloud, naturally lies with vendors, and they too need to act fast to overcome barriers to adoption.

“We need stronger economic cases for cloud, more credible, impartial evidence of how specific services enable GHG abatement, less one-size-fits all marketing approaches and more acknowledgment that the shift to cloud creates behaviour change challenges, too.”

Luis Neves, GeSI Chairman, said: “Cloud-based email, CRM and groupware are only the tip of the iceberg. In 2012, GeSI published the SMARTer2020 study, which found that large-scale, systems-enabled broadband and information and communication technologies could deliver a 16.5% reduction in global greenhouse gas emissions and generate up to US$1.9 trillion in savings by 2020.

“GeSI has taken a strong commitment to demonstrate the enabling potential of cloud computing in how it can tackle the difficult issue of climate change and boost economies”, said Neves. “This GeSI-supported study on the carbon abatement potential of cloud computing offers the first academically rigorous and industrially relevant study of its kind.”


Graeme Philipson

Graeme Philipson is senior associate editor at iTWire and editor of sister publication CommsWire. He is also founder and Research Director of Connection Research, a market research and analysis firm specialising in the convergence of sustainable, digital and environmental technologies. He has been in the high tech industry for more than 30 years, most of that time as a market researcher, analyst and journalist. He was founding editor of MIS magazine, and is a former editor of Computerworld Australia. He was a research director for Gartner Asia Pacific and research manager for the Yankee Group Australia. He was a long time IT columnist in The Age and The Sydney Morning Herald, and is a recipient of the Kester Award for lifetime achievement in IT journalism.

 
