
An Introductory Guide to The InterPlanetary File System (IPFS)

I’ve always found peer-to-peer applications interesting. Central points of failure aren’t fun! Protocols like BitTorrent are widely used and well known. However, there’s something relatively new that uses BitTorrent-like technology, and it’s much more impressive.

What is IPFS?

The InterPlanetary File System (IPFS) caught my eye during research. It’s a peer-to-peer, distributed file system with file versioning (similar to git), deduplication, cryptographic hashes instead of file names and much more. It’s very different from the traditional file systems we’ve grown to love, and it could even possibly replace HTTP.

What’s amazing about IPFS is that if you share a file or site on IPFS, the network (anyone else running IPFS) can distribute that file or site globally. Other peers can then retrieve the same file or set of files from anyone who has cached it, and can even fetch them from the closest peer, similar to a CDN with anycast routing but without any of the complexity.

This has the potential to ensure data on the web can be retrieved faster than ever before and is never lost the way it has been in the past. GeoCities is a famous example of that kind of loss; on IPFS, a single entity wouldn’t have the ability to shut down thousands of sites the way Yahoo did.

I’m not going to get too far into the complexity of what IPFS can do, though; there is too much to explain in this short blog post. A good breakdown of what IPFS is and can do can be found here.

How to install and begin with IPFS

Starting off, I spun up two VMs from GigeNET Cloud running Debian 9 (Stretch): one in our Chicago datacenter and another in our Los Angeles datacenter.

To get the installation of IPFS rolling, we’ll go to this page and install ipfs-update, an easy tool for installing IPFS. We’re running 64-bit Linux, so we’ll wget the proper tar.gz and extract it. Make sure you always fetch the latest version of ipfs-update!

IPFS distribution download

wget -qO- https://dist.ipfs.io/ipfs-update/v1.5.2/ipfs-update_v1.5.2_linux-amd64.tar.gz | tar xvz

Now let’s cd to the extracted directory and run the install script from our cwd (current working directory). Make sure you’re running this with sudo or root privileges.

cd ipfs-update/ && ./install.sh

Once ipfs-update is installed (it should be very quick), we’ll install IPFS itself with:

ipfs-update install latest

The output should look something like this.

ipfs root installation

Now that IPFS is installed, we need to initialize it and generate a keypair, which in turn gives you a unique identity hash. This hash is what identifies your node. Run the following command:

ipfs init

The output should look similar to this.

initializing ipfs node

With this identity hash you can now interact with the IPFS network, but first let’s get online. The following command starts the IPFS daemon and sends it to the background (that’s what the trailing & does). It’s probably not advisable to run this as root, or with elevated privileges. Keep this in mind!

ipfs daemon &

ipfs daemon
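Before sharing anything, one quick optional check is to list the peers your daemon is connected to. This uses a standard IPFS command and isn’t strictly required:

ipfs swarm peers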

Now that we’re connected to the IPFS swarm we’ll try sharing a simple text file. I’ll be adding the file to IPFS which generates a hash that’s unique to that file and becomes its identifier. I’ll then pin the file on 2 servers so that it never disappears from the network as long as those servers are up. People can also pin your files if they run IPFS to distribute them!

Adding and pinning the file on my Chicago VM.

hello ipfs
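For reference, the commands in that screenshot boil down to something like the following sketch. The file name and contents are placeholders, and note that ipfs add already pins the content on the local node:

echo "hello ipfs" > hello.txt
ipfs add hello.txt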

Now that we have the file’s hash from the other VM we can pin it on our VM in Los Angeles to add some resiliency.

ipfs pin add
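On the Los Angeles VM the pin command looks roughly like this, where the hash is whatever ipfs add printed on the Chicago node (shown here as a placeholder):

ipfs pin add <hash-from-chicago>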

Now to test this we’ll cat the file from the IPFS network on another node!

ipfs hello cat
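The test itself is simply ipfs cat against that same hash, for example:

ipfs cat <hash-from-chicago>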

That was a pretty simple test, but it gives you an idea of what IPFS can do in basic situations. The inner workings of IPFS take some effort to understand, but it’s a fairly new technology and it has a lot of potential.

Do you need your own dedicated server, or can you trust a cloud service for hosting?

Before you make a decision, it’s crucial to understand the differences between dedicated server hosting & cloud hosting—which are rapidly emerging as the dominant modes for organizational hosting.

There are real budgeting and operational implications to consider before settling on either solution.

Cloud servers: scalable flexibility—at a premium

With rapid deployment and a virtualized nature, cloud hosting models allow your organization to circumvent a direct investment in hardware. This benefits businesses that require a high degree of operational flexibility, want to experiment in the short-term or have highly seasonal demands.

The click of a button can deploy a new virtual server, allowing you to rapidly scale up your operations during times of high load, product releases, seasonal rushes and other demanding applications that require instantly available bursts of computing power. This can typically be accomplished from anywhere with an internet connection and a web browser—meaning that cloud hosting is a truly portable solution.

Since the cloud is virtualized, your organization isn’t tethered to the strength of a single server or limited in its computing power—you can scale up, on demand. Load balancing allows for rapid distribution during demanding workloads, tapping into the power of multiple servers instead of relying on one powerful server.

Cloud hosting is also exceptionally resilient. Because it spreads workloads across numerous instances, cloud hosting is inherently redundant. Coupled with a secure backup strategy, the cloud provides a degree of restorability and operational stability that other solutions struggle to match—with the caveat that it’s highly dependent on the hosting company’s decisions about which servers host its cloud.

This degree of flexibility and decreased initial investment comes at a price: cloud services are billed like a utility—so you pay for what you use. This means that the cloud, while a genuinely powerful and useful option for many organizations—can rapidly deteriorate into a wildly expensive, budget-busting affair without proper planning and skillful management.

Explore GigeNET’s cloud servers.

Dedicated servers: reliability & performance—with the right planning

Deploying a dedicated server is a more complicated task than deploying a cloud server. It requires procuring, installing and configuring a physical server in a data center—while this process usually takes only a few days, it is less than ideal when an organization needs instantly available computing capacity because of a rush.

For organizations with consistent demands or operational IT requirements that don’t change rapidly over time, dedicated servers represent an opportunity to introduce high-performing capabilities and an exceptional degree of business continuity.

Since the server is controlled solely by your organization, you don’t risk bad neighbors or unscrupulous actors introducing instability—the server is yours to do with as you please, and not shared with others who may mismanage their resources or interrupt your services.

That isn’t to say that dedicated servers can’t be designed to meet the needs of growing businesses—a skilled systems architect will build a strategy that meets your needs today and your needs in the coming years as well.

Managing a dedicated server requires a competent IT team that is capable of overseeing server maintenance and creating a strategy for keeping software and security updates under control. For organizations that need the benefits of a dedicated server but don’t have an IT team, there are managed services that let you leverage our veteran support team.

However, there is one clear operational benefit to a dedicated server: consistent, stable and predictable billing based on a contract, rather than usage-based pricing structures that can confound budgets.

Explore GigeNET’s dedicated servers.

Deciding between the cloud and dedicated servers

Choosing your hosting solution requires taking stock of your organization: what are its goals? Where do you see your internet presence in 1, 3 and 5 years? What sort of IT requirements do you really have?

We have experienced system architects who will work with you to develop an affordable and reasonable strategy for scalable and flexible cloud hosting—or a reliable and consistent dedicated server.

If you’re looking to resolve the challenge of finding the right hosting strategy for your organization, receive a free consultation.

A cloud server is a virtualized server that provides storage and shared computing resources. Cloud server networks provide nearly unlimited resources and diverse capabilities.

The virtualization technology allows the demand for a particular set of resources or network to be spread over multiple servers at the same time. This allows for maximized security and practically no downtime.

In addition, customers do not have to worry about upgrading, as the hosting environment’s unique features are highly scalable and can accommodate networks and sites of all sizes.

Advantages of Cloud Servers

  • Route traffic automatically around network outages
  • Support for multiple public IP addresses
  • Unlimited free private bandwidth
  • R1Soft backup servers
  • Effective management of IP network brownouts
  • Continuous identification and selection of the most favorable path
  • No need to acquire and keep your own network and server
  • Scalability: add RAM, CPU capacity, and storage as your applications, data, or website traffic grow

Its flexibility, reliability, and affordability make cloud server networks very beneficial for any e-commerce site, which is why cloud servers are now in great demand.

Functionally unlimited power & storage. Use-based billing, like a utility. Rapid deployment speed. Unparalleled flexibility.

The hype around the cloud isn’t new, but the reality is that it still offers incredible advantages for the right applications.

Defining the cloud

It’s always worth clearly defining the cloud, since it’s such a slippery concept: the cloud is a set of virtualized servers.

This means the cloud “server” isn’t confined to one physical box, but is instead a software-defined set of computing resources. It’s a virtual tool, specifically designed to meet the needs of your unique application.

This means that you leverage the distributed computing power of multiple servers—rather than relying on one server to perform your needed tasks.

Who avoids revenue loss with cloud hosting?

The number one reason for businesses to utilize cloud hosting as revenue insurance is to thwart downtime caused by surges in traffic and computing loads.

During a spike in activity, resources can be rapidly allocated to cope with the strain. Since bandwidth and capacity are functionally limited with traditional servers, there’s significant lag in getting new servers online.

With a cloud model, bandwidth and capacity are constantly available at a moment’s notice.

Avoiding revenue loss by facilitating rapid expansion for seasonal businesses continues to be a growth area for cloud service providers.

So who benefits the most from cloud services? Organizations with business models that rely on high-traffic periods.

Some examples of organizations who benefit from cloud:

  • Tax preparers
  • Seasonal tourist businesses
  • Colleges & universities
  • Florists
  • Sports teams
  • Ticket and reservation providers
  • Rental agencies
  • Development and marketing agencies
  • Medical providers

Through a cloud model, the initial investment in IT infrastructure is lower and the payoff is immediately tangible.

Since the service is funded like a utility (you pay for what you use), businesses that experience drastic shifts in revenue can avoid spending large amounts of capital and reduce their overall operating costs.

That isn’t to say the cloud is always less expensive—in fact, that’s patently false. Cloud computing can be far more expensive than traditional dedicated servers when it’s mismanaged, and not every business can leverage the overall reduced cost of cloud hosting effectively.

Pay for what you use: more about billing & the cloud

While Amazon’s (awesome!) marketing makes the cloud seem cheaper by every metric, this isn’t actually the case.

Since cloud hosting is based on a contracted usage model, it can wind up being orders of magnitude more expensive than traditional dedicated servers. As we’ve explained elsewhere, the cloud is really an evolution of dedicated servers, but the brunt of the capital investment is borne by the hosting company instead of the individual business.

By paying for what you use, a smart organization can reduce its operating costs. It can rapidly experiment with the intention of failing fast—for example, deploying a new application or service for a limited time as a test case, testing different landing pages or scaling up its business during a push for more sales.

The cloud model also allows for smaller businesses to access powerful resources for short amounts of time at a very low cost. This can be extremely useful for things like market research and other projects that require computing power and bandwidth out of the scope of their budget, but without investing in costly servers.

But as we’ve hinted at – the pay-per-use model can mean that overspending is easy. It’s crucial to understand how to effectively manage your cloud. If that’s too daunting, GigeNET offers managed cloud services that can help alleviate the stress of gambling with your budget.

Some businesses find the unpredictability of cloud billing to be a hassle and instead opt for managed services or a dedicated server—it’s worth having a candid discussion with an expert systems architect before you jump into something expensive that seems like a good fit.

The technical benefits of cloud computing: uptime, uptime, uptime

There are practical real-world benefits to utilizing the cloud, which we absolutely don’t want to downplay. These include:

  • Routing web traffic around network outages and bottlenecks
  • Avoiding hardware outages through distribution of resources
  • Multiple public IP addresses for redundancy
  • Management of IP network brownouts
  • No need to navigate the procurement process for servers
  • Powerful scaling capabilities
  • Continuous best-path network routing

For each instance, you gain uptime and operational stability. It’s generally intuitive—distributing your data over multiple servers means you aren’t susceptible to a single point of failure crippling your organization.

A hidden benefit of cloud computing: highly redundant backups

The other benefit of the cloud service model is that your data is distributed across many servers.

While it’s advisable to use redundant backups (like combining R1Soft backups and RAID arrays), the cloud is generally much safer because of its many nodes. Additionally, since the cloud’s servers are kept off site, your data is protected from on-site disasters, power outages and downtime.

The cloud, however, is not automatically a method for backing up your valuable data. We’d recommend investing carefully in a backup service that can meet the specific needs of your business instead of imagining that the cloud is foolproof. There have been cloud failures at scale, even for providers like Google, so ensuring that you have your own unique backup strategy is crucial to data security.

However, the cloud model still offers a kind of simple backup that can help reduce downtime significantly, particularly if your current server hardware is outdated or unreliable.

Dare to compare: GigeNET’s competitive cloud services

We’ve developed our cloud to compete with the big cloud providers. We’ve been developing and refining our cloud capacity for nearly a decade—we were early adopters.

If your organization needs the flexibility and reliability that a cloud service offers, contact one of our expert systems architects today for help with planning and implementing an ideal cloud solution.

GigeNET has been in business for more than 20 years, and our goal is to make the internet better for everyone. We aim to be your hosting partner for life, and have the servers & talent to back up our claims.

Try GigeNET cloud for yourself. 

Managed Dedicated Server Hosting Solutions

IT teams are expensive.

Luckily, there are options besides hiring (and paying!) a team just to manage your server.

The most flexible and reliable options for organizations that don’t have the budget for their own in-house IT team—but have extensive IT requirements—remain the same as they ever were: managed dedicated servers.

Why would you want managed dedicated servers? The benefits.

The most immediate concern of any organization is going to be cost. With managed dedicated hosting, your organization acquires astonishing resilience without a large initial outlay.

Lowering the investment ceiling for a powerful server is the simplest benefit that managed dedicated hosting offers. Your organization gets around much of the costs associated with:

  • Hiring and retaining IT talent
  • Evaluating the right-sized IT infrastructure
  • Procuring and purchasing servers and hardware
  • Establishing maintenance and update protocols
  • Secure premises for your infrastructure

The overall costs associated with this kind of large-scale, long-term investment can be digested by large enterprises—but many small-to-medium organizations struggle with the knowledge gap and capital gap that accompanies this project.

Managed dedicated server hosting means lowering the bar for entrance into a high-performance and high-availability infrastructure. You’ll have access to expertise without needing to pay a premium for retaining talented IT staff. You’ll harness the stability and security of a data center rather than the risk of an on-premises deployment. You’ll be able to build it right the first time and plan for growth in the future.

And more importantly: you’ll see consistent and stable costs, backed by a contract, that you can predict and plan for.

How managed dedicated servers work: from idea to execution

Initially, your organization takes stock of its operations and meets with an expert systems architect to develop a unique strategy for your organization. We’ll analyze your prospects for near-term and long-term growth and create an infrastructure strategy that exceeds your current expectations and anticipates plausible future scenarios.

Next, your existing systems can be migrated—or re-developed—onto your new architecture. Your hardware and software is kept up-to-date from Day 1, and you are granted access to the servers through server management software that you choose among viable options.

From there, you’ll have managed service from a staff of experts. You should expect on-demand service, as well as omnipresent systems monitoring to ensure “four-nines” (99.99%) uptime. You’ll be able to issue support tickets that should come with a rapid response.

The best part of managed dedicated servers? You get to focus on your operations instead of maintaining and updating your infrastructure. By accessing long-time industry professionals with expertise at a fraction of the cost of hiring your own IT team, you leverage the knowledge and experience that a company like GigeNET already has. It’s an investment in the stability of your operations without the large initial cost.

What the benefits of managed dedicated servers look like

Instead of combing Google for knowledge and straining to understand a field outside of your own expertise, managed dedicated server hosting means you have access to:

  • Hardware, software, bandwidth and power at a discount
  • Secure backups to ensure operational continuity
  • Highly managed security updates and omnipresent monitoring
  • Insurance against disasters through secure data center locales
  • Fastest-route optimized networks for high-speed service and content delivery
  • Customized service and responses to your unique problems

These benefits play out in a variety of ways: you’ll see reduced outlays for downtime, you’ll increase your capacity for growth and you’ll be able to focus on nurturing your organization’s primary goals.

The web hosting industry is a complex & constantly shifting landscape: things are quickly outdated, outmoded and out of favor—security threats rapidly evolve and often appear in novel forms that require expensive and time-consuming adaptation. That’s why managed dedicated hosting is so powerful: it doesn’t require the level of IT competency that other forms of hosting demand from organizations.

To get the most out of your host, you need a company that aligns with your organization’s skill level and understands the burden that a modern-day web presence places on operations.

The benefits of managed dedicated servers look like an organization that’s laser-focused on its primary objectives, rather than stretched thin by a poorly managed online presence.

Run a business, not a server

The real point of managed dedicated servers is not to get out of hiring an IT team or reduce your capital expenditures: it’s to create a business that functions as well as it possibly can.

We’ve partnered with organizations for more than 20 years to develop solutions for their online infrastructure—longer than a majority of the large web hosts have even existed.

There’s a real need for affordable, reliable and high-performing hosting solutions like managed dedicated servers: organizations run leaner and with greater specificity than ever before, requiring them to stay focused on their operational tasks rather than checking to see if their server software is up to date and their hardware is running like it should.

The reason GigeNET partners with organizations is to improve their ability to execute their goals. Our high-performing servers, low-utilization support staff, industry-leading SLAs and nationwide data centers mean we can offer tremendous value for organizations of highly varying sizes with diverse operational needs: our bread-and-butter is creating powerful infrastructure so you can focus on the day-to-day efforts required to reach your goals.

When you’re ready to move toward a cost-conscious and trustworthy partnership with a company that offers managed dedicated hosting solutions, contact us or check out our managed dedicated servers.

Traefik Guide

As someone interested in following DevOps practices it is my goal to find the best solutions that work with our company’s principles.

At its core, DevOps is a strategy of teaming together administrators and developers to form a single unit with a common goal of working together to provide faster deployments, and better standards through automation. To make this strategy work, it’s essential to continuously explore new tools that can potentially provide you with better orchestration, deployments, or coding practices.

In my search for a more programmable HTTPD load balancer I spotted Traefik. Intrigued by the various features and backend support for Docker, I quickly spun up a VM and jumped straight into learning how-to integrate Traefik with Docker.

My first glance at the file-based configuration for Traefik had me a little uneasy. It was the first time I had encountered TOML formatting. It wasn’t the YAML I encounter in most projects, or the bracket-based formatting I had run into in the past with Nginx, Apache or HAProxy.

Traefik Terminology

Before I jump into setting up the basic demonstration of Traefik with Nginx on Docker, I’ll go over the new terminology that has been introduced with Traefik:

  • Entrypoints – The network entry points into the Traefik daemon, such as the listening address, listening port, SSL certificates, and basic endpoint routes.
  • Frontend – A routing subset that handles the traffic coming in from the Entrypoints and sends it to a specified Backend depending on the traffic’s Header and Path.
  • Backend – The configuration subset that sends traffic from the Frontend to the actual webserver. The webserver selected is based on the load balancing technique configured.

A basic demonstration of Traefik with Nginx on Docker

To demonstrate a basic Traefik setup, we will only focus on the file-based configuration of Traefik Entrypoints. This is because Docker dynamically builds the Frontend and Backend configurations through Traefik’s native Docker Swarm integration.

Let’s start by defining two basic Entrypoints with the “defaultEntryPoints” flag. Under this configuration flag we create two Entrypoints labeled ‘http’ and ‘https’. These labels represent the everyday traffic we see within our browsers. Under the “[entryPoints]” field within the configuration, we define the entry point labeled ‘http’ to listen on all interfaces and assign it port eighty for inbound traffic. Under the same Entrypoint label we instruct that all traffic entering port eighty be redirected to our second Entrypoint, labeled ‘https’. The Entrypoint labeled ‘https’ follows the same syntax as the one labeled ‘http’, with a slight deviation in how it handles the actual web traffic.

Instead of a redirect, it instructs Traefik to accept the traffic on port 443. We also instruct Traefik to utilize secure certificates for web encryption and tell it where to load the SSL certificates from. A full example is shown below, and can also be found within our Github repository.

defaultEntryPoints = ["http", "https"]

[entryPoints]
  [entryPoints.http]
  address = ":80"
    [entryPoints.http.redirect]
    entryPoint = "https"
  [entryPoints.https]
  address = ":443"
    [entryPoints.https.tls]
      [[entryPoints.https.tls.certificates]]
      certFile = "/certs/website.crt"
      keyFile  = "/certs/website.key"

The TOML configuration we have built is all that is required for a dynamic Docker demonstration. Save it as traefik.toml on each of the management nodes in your Docker Swarm, since those are the nodes that will run Traefik.

To keep the focus on Traefik we won’t go into the setup and configuration of a Docker Swarm cluster; that will be a topic for a future blog post and will be back-referenced here at some point. We will design a basic Docker compose file to demonstrate the dynamic loading of Traefik with a default Nginx instance. Within the Docker Swarm configuration file we will focus on the Traefik image and the Traefik flags required for basic backend load balancing.

Let’s start by downloading the docker-compose.yaml file that can be found on our github.com page. To download the file, you can go to https://github.com/gigenet-projects/blog-data/tree/master/traefikblog1.

The entire code repository can also be cloned with the following commands:

[root@dockermngt ~]#  git clone https://github.com/gigenet-projects/blog-data.git

[root@dockermngt ~]#  cd blog-data/traefikblog1

The docker-compose.yaml:

version: '3.3'

services:
  nginx:
    image: nginx
    ports:
      - target: 80
        protocol: tcp
        mode: host
    networks:
      - distributed
    deploy:
      restart_policy:
        delay: 10s
        max_attempts: 10
        window: 120s
      labels:
        traefik.frontend.rule: "Host:traefik,demo.gigenet.com"
        traefik.port: "80"

  traefik:
    image: traefik
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /opt/traefik/certs/:/certs/
      - /opt/traefik/traefik.toml:/traefik.toml
    deploy:
      restart_policy:
        delay: 120s
        max_attempts: 10
        window: 120s
      placement:
        constraints: [node.role == manager]
    networks:
      - distributed
    ports:
      - 80:80
      - 443:443
      - 8080:8080
    command: --docker --docker.swarmmode --docker.domain=demo --docker.watch --web --loglevel=DEBUG

networks:
  distributed:
    driver: overlay

Under the Traefik service section we will specifically focus on the Docker compose flags labeled volumes, deploy, placement, networks, ports, and command. These flags have a direct impact on how the Traefik Docker image will operate, and need to be configured properly.

Within the volumes flag we pass in the Docker socket and the Traefik TOML configuration file we built previously. The Docker socket is used by Traefik for API purposes, such as grabbing the Traefik label flags assigned to other Docker services; we will go into this in further detail in a few steps. The Traefik configuration we built earlier utilized both http and https. We defined these Entrypoints to tell the Traefik container image the base configuration it will use when starting for the first time. These configuration flags can be overwritten with Docker labels, but we will not be going into such advanced configurations within this blog. Since the configuration file had a focus on encryption, we also mount our SSL certificates under the /certs directory on the Traefik container. The Traefik TOML configuration file and SSL certificates should be installed on every Docker management node.

Under the deploy section we focus on the placement flag. Traefik requires the Docker socket to get API data, and only management nodes can provide this data in a Docker Swarm. We constrain the placement of the Traefik container to nodes with a manager role, and this simple technique enforces the requirement.

The networks flag is a must-have for Traefik to work properly. With Docker clusters we can build overlay networks that are internal to just the services attached to the network. This provides containers with network isolation. For the load balancing to work, the Traefik container needs to be on the same network as every webhost we plan to load balance traffic to. In this case we named our network “distributed” and set it to use the overlay driver.

How inbound traffic is passed to the overlay network

The ports flag is very simple and straightforward. In the Traefik configuration we assigned ports 80 and 443 to take in and forward traffic; the ports flag maps those published ports to the matching ports inside the container. We also expose port 8080 in this example so we can demonstrate the web dashboard that Traefik provides. A snippet of the dashboard of a live cluster is shown below:

Lastly, the command flag passes any additional options that we did not set in the Traefik TOML configuration file directly to the Traefik binary on boot. With this Docker compose demonstration we tell Traefik to utilize the Docker backend and enable the dashboard.

Now that we understand the Traefik section of the Docker compose file we can go into detail on how the other services such as Nginx are dynamically connected to the Traefik load balancer. Within the Nginx service we will focus on the ports, networks, and labels flags.

With this specific Nginx container image our web browser will only see the default Nginx “Welcome to nginx!” web page. Looking at the ports flag, you’ll notice we are opening port eighty so that packets are not firewalled off by the Docker management service. In the Traefik service section we mentioned that every backend service has to share a network with Traefik; within our Nginx service you will notice the required network “distributed” has been assigned.

The labels flag is the most interesting section of the Nginx service. Within the labels we set a few flags for Docker to register on the Docker management API. This is how Traefik knows which backend to assign, and whether that backend is alive. To keep this demonstration simple, we tell Traefik that the Nginx service has a single virtual host named ‘demo.gigenet.com’. We assign this to the Nginx service with the ‘traefik.frontend.rule’ flag under the labels section, as follows: ‘traefik.frontend.rule: "Host:traefik,demo.gigenet.com"’. Notice how the Traefik Frontend is defined within the Docker compose configuration file, and not in the file-based Traefik configuration. With this flag set, Traefik will be able to get every IP address assigned under the network overlay. Traefik also needs to know which ports the Nginx service is listening on, and this is done with the “traefik.port” flag. In our example we assigned port eighty to the “traefik.port” flag, which is also the port we opened for network traffic.

With this configuration explained and ready, it’s now time to launch the Docker stack we built and test out the load balancing. To launch the Docker stack, run the following command on a Docker management node.

[root@dockermngt ~]#  docker stack deploy --compose-file=docker-compose.yaml demo

You should now see the “Welcome to nginx” page in your browser when going to the domain name you specified. You can also review the actual load balancing rules by appending :8080 to this domain, as shown in the previous picture.
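If you prefer to test from the command line, a rough equivalent of that browser check is a curl request with the Host header set to the domain from the labels section (the manager IP below is a placeholder for whichever Swarm manager you exposed ports 80 and 443 on):

curl -H "Host: demo.gigenet.com" http://<manager-ip>/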

So you’ve navigated the plethora of hosting options out there, and determined that colocation is right for your organization.

But your diligence shouldn’t end there. You’ve got to choose the right host, with the right data center to secure your server.

This blog will arm you with questions to ask and issues to consider as you choose your colocation data center & hosting company.

Why you need to know what you need to know

While servers are expensive tools, the data on your servers is what is valuable.

As Jennifer Svensson, GigeNET’s head of support explains in her blog about data loss:

“Countless work hours have gone into making each server unique, with custom set-ups, modified WordPress templates, blog posts going back years, etc. This is where the value of a server lies: in the data.”

Choosing the right data center means keeping the risk of data loss low. Colocation is a real time and money saver, but—you’ve got to ensure that your valuables are under the watchful eye of an experienced and competent staff.

Otherwise, your investment becomes a costly lesson in how not to manage your infrastructure.

Pick your location wisely: be ready to go there if needed

With colocation, you are ultimately responsible for the installation and maintenance of your server.

Because of this requirement, it’s wise to choose a data center that’s easily accessible to your organization.

Data centers tend to cluster around geographically secure areas that aren’t prone to natural disasters, have a steady and inexpensive supply of water (for cooling) and power (for the servers, naturally)—so check and see if your provider is near other data centers.

This can be a cue that the data center has been carefully planned to take advantage of the area’s existing infrastructure.

What to consider for your location:

  1. Does your potential host have a data center that’s close enough to access in an emergency?
    • Being able to access your physical server when you need to is an oft-overlooked element to choosing your colocation host: you may need to upgrade or service your servers!
  2. Does your provider have multiple data centers across the United States?
    • This offers redundancy and reliability, as well as the potential for faster network connectivity.
  3. Is the provider in a physically secure location?
    • Areas prone to natural disasters (like earthquakes, floods, tornadoes or hurricanes) are hard to avoid entirely—for example, choosing a central US location versus a coastal US location can be a positive trade-off.

Knowing where your servers live creates security by ensuring you can access them—and through a stable outside environment.

Internet connectivity: how fast do you want to go?

Perhaps it’s obvious, but not all data centers are created equal.

Since your server has to connect to the internet to do anything at all, it’s crucial to ask about the level of connectivity—and reliability—from the data center.

Here’s what we recommend asking the company’s representatives about the hosting company’s network:

  1. How much bandwidth is available?
    • The more, the better. Does the host offer a network speed test so you can determine it yourself?
  2. How much packet loss is there?
    • Less is more. You’re shooting for as little packet loss as possible.
  3. What’s the average uptime?
    • Anything less than 99% is below the current industry standards. Is their claim also backed up by their Service Level Agreement?
  4. Who are their transit providers?
    • The Tier 1 ISPs in the US (they have access to the entirety of the internet, directly through their physical infrastructure) are AT&T, Cogent, Verizon, Telia, Level 3 & Comcast. Do they have all of these, and others, as transit providers?
  5. What sort of routing optimization does the data center utilize?
    • Not all routing is created equal. With data centers, the fastest route is not always a straight line. They should be able to competently and clearly explain how their routing is optimized and why it’s so fast.

You should demand clear and concise answers to your questions about internet connectivity.

However—it’s vital that you understand your own bandwidth and internet requirements before you seek out requests for proposals. Take stock of your business goals—and current requirements—before shopping around.

Redundancy and backup requirements: your insurance policy

Besides a high-speed connection, data centers typically offer a degree of physical security beyond what your business could provide on-site.

With a good data center, you are protected against natural disasters—as well as insulated from downtime and data loss due to mismanagement.

This security is achieved through two specific areas: power and backups. So what should you consider about power consumption and backups before jumping into a colocation contract?

  1. Data centers need lots of power—and a plan for when power fails.
    • While power loss events at a data center are rare, they do happen. It’s vital that you ask about the data center’s power backup strategy: do they have an on-site generator? What’s their overall power loss strategy?
  2. Understand what your growth requirements are for the next 5 years
    • If you’re anticipating drastic growth (or stable needs), ask about how the data center plans to grow its power capacity. The more computing power per square foot that the provider plans for, the easier growth will be.
    • Servers built on processors like Intel’s Xeon D have been specifically engineered to harness maximum computing power with minimal energy expenditure—does the data center offer similar high-performing, high-density servers?
  3. Inquire about backup strategies
    • Data archival for the data center’s systems is worth inquiring about.

By ensuring that your data center has covered its bases regarding redundancy and power, you’re ensuring your own uptime—think of it as insurance against costly downtime.

The bells & whistles: consider managed services, even if you don’t think you’ll need them

The effect of turnover in the IT industry means that it’s vital to anticipate a loss of talent. What if your “server pro” leaves the company?

We recommend ensuring that your data center offers some degree of managed services—even if you don’t plan on utilizing them, you can prevent problems by having managed services as a fallback plan in case your talent leaves for greener pastures.

Additionally—managed services allow experts with long-standing experience to see to the health and administration of your servers. If you wind up in a situation where your IT experts can’t solve a problem (or are on vacation when disaster strikes), having managed services as an option—even if you don’t utilize it immediately—can be invaluable.

Another point of consideration: the self-service portal

How you access your servers makes a difference: not all data centers offer the same level of customer-centered software for controlling and managing your servers.

Some don’t offer any custom software at all—which can be fine—but a custom-designed software suite for accessing your servers is more than a luxury, it’s a powerful tool for accomplishing your business goals.

Inquire about how your servers can be accessed, what sorts of granular controls exist and how easy-to-use the software is. You should expect something built with the customer in mind—user friendly with a consistent user experience.

Colocation: still an affordable and reliable option

Colocation offers benefits and savings for organizations that have adequate IT staffing, a realistic vision of their infrastructure requirements and a clear vision for how they’ll meet their current and future goals.

At GigeNET, we’ve got more than 20 years of experience with server hosting—a very long time in this industry. We’ve got a staff of experienced industry veterans backed by three nationwide data centers designed for growth and adaptation to the coming changes in our industry.

If you’re looking for a colocation provider, shop around and utilize the questions we’ve provided here to poke & prod for the truth—but remember that long-time, stable providers are always the best bet.

If you’re ready to try GigeNET’s high-speed network and super-secure data centers, let’s discuss your needs and work toward crafting a custom colocation solution for your organization.

GlusterFS

Introduction and use cases

GlusterFS is a clustered file system designed to increase the speed, redundancy, and availability of network storage. When configured correctly with several machines, it can greatly decrease downtime due to maintenance and failures.

Gluster has a variety of use cases, with most configurations being small three server clusters. I’ve personally used Gluster for VM storage in Proxmox, and as a highly available SMB file server setup for Windows clients.

Configurations and requirements

For the purposes of this demonstration, I’ll be using Gluster along with Proxmox. The two work very well with each other when set up in a cluster.

Before using Gluster, you’ll need at least three physical servers or three virtual machines, which I also recommend be on separate servers. This is the minimal configuration for setting up high availability storage. Each server will need a minimum of two drives: one for the OS and the other for Gluster.

Gluster operates on a quorum based system in order to maintain consistency across the cluster. In a three server scenario, at least two of the three servers must be online in order to allow writes to the cluster. Two node clusters are possible, but not recommended. With two nodes, the cluster risks a scenario known as split-brain, where the data on the two nodes isn’t the same. This type of inconsistency can cause major issues on production storage.

For demonstration purposes, I’ll be using 3 CentOS 7 virtual machines on a single Proxmox server.

There are two ways we can go about high availability and redundancy, one of which saves more space than the other.

  1. The first way is to set up gluster to simply replicate all the data across the three nodes. This configuration provides the highest availability of the data and maintains a three-node quorum, but also uses the most amount of space.
  2. The second way is similar, but takes up ⅓ less space. This method involves making the third node in the cluster into what’s called an arbiter node. The first two nodes will hold and replicate data. The third node will only hold the metadata of the data on the first two nodes. This way a three-node quorum is still maintained, but much less storage space is used. The only downside is that your data only exists on two nodes instead of three. In this demo I’ll be using the latter configuration, as there are a few extra steps to configuring it correctly.

Configuration

Start by setting up three physical servers or virtual machines with CentOS 7. In my case, I set up three virtual machines with 2 CPU cores, 1GB RAM, and 20GB OS disks. Through the guide, I’ll specify what should be done on all nodes, or one specific node.


All three machines should be on the same subnet/broadcast domain. After installing CentOS 7 on all three nodes, my IP configurations are as follows:

Gluster1: 10.255.255.21
Gluster2: 10.255.255.22
Gluster3: 10.255.255.23

All Nodes:

The first thing we’ll do after the install is edit the /etc/hosts file. We want to add the hostname of each node along with its IP into the file; this prevents Gluster from having issues if a DNS server isn’t reachable.

My hosts file on each node is as follows:

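As a minimal sketch based on the hostnames and IPs above (adjust to your own addressing), the added entries look like this:

10.255.255.21 gluster1
10.255.255.22 gluster2
10.255.255.23 gluster3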

All Nodes:

After configuring the hosts file, add the secondary disks to the hosts. I added an 80GB disk to Gluster1 and Gluster2, and a 10GB disk to Gluster3, which will be the arbiter node. If the Gluster nodes are VMs, the disks can simply be added live without shutting down.

Gluster 1&2  Drive configuration:

Gluster 3 Drive configuration:

After adding the disks, run lsblk to ensure they show up on each node:

sda is the OS disk, sdb is the newly added storage disk. We’ll want to format and mount the new storage disk for use. In the case of this demo, I’ll be using xfs for the storage drive.

fdisk /dev/sdb        # open the new disk in fdisk
n                     # create a new partition
p                     # make it a primary partition
                      # press Enter at the remaining prompts to accept the default partition number and sectors
w                     # write the partition table and exit
mkfs.xfs /dev/sdb1    # format the new partition with xfs

You should now see sdb1 when you run an lsblk:

We’ll now create a mount point and add the drive into /etc/fstab in order for it to mount on boot:

My mountpoint will be named brick1, I’ll explain bricks in more detail after we mount the drive.

mkdir -p /data/brick1

After creating the mountpoint directory, we’ll need to pull the UUID of the drive. You can do this with blkid:

blkid /dev/sdb1

Copy down the long UUID string, then go into /etc/fstab and add a similar line:

UUID=<UUID without quotes> /data/brick1 xfs defaults 1 2

Save the file, then run mount -a

Then run df -h

You should now see /dev/sdb1 mounted on /data/brick1

Make sure you format and mount the storage drives on each of the three nodes.

Gluster volumes are made up of what are called bricks. These bricks can be treated almost like the virtual hard drives we’d use in a RAID array.

This depiction gives an idea of what a two server cluster with two replicated Gluster volumes would look like:

And this is closer to what the Gluster volume we’re creating will look like:

Now it’s time to install and enable GlusterFS, run the following on all three nodes:

yum install centos-release-gluster

yum install glusterfs-server -y

systemctl enable glusterd

Gluster doesn’t play well with SELinux and the firewall, so we’ll disable the two for now. Since connecting to services such as NFS and Gluster doesn’t require authentication, the cluster should be on a secure internal network in the first place.

Open up /etc/selinux/config with a text editor and change the following:

SELINUX=enforcing

to

SELINUX=disabled

Save and exit the file, then disable the firewall service:

systemctl disable firewalld.service
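If you’d rather not open an editor for the SELinux change, a one-line sketch of the same edit (assuming the stock config shown above) is:

sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config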

At this point, reboot your nodes in order for the SELINUX config change to take effect.

On Node Gluster1:

Now it’s time to link all the Gluster nodes together. From the first node, run the following:

gluster peer probe gluster2

gluster peer probe gluster3

Remember to run the above commands using the hostnames of the other nodes, not the IPs.

Now let’s check and see if the nodes have successfully connected to each other:
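The usual command for this is gluster peer status; run it from any node and the other two hostnames should show up as connected:

gluster peer status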

We can see that all of our nodes are communicating without issue.

Finally, we can create the replicated Gluster volume. It’s a long command, so ensure there aren’t any errors:

gluster volume create stor1 replica 3 arbiter 1 gluster1:/data/brick1/stor1 gluster2:/data/brick1/stor1 gluster3:/data/brick1/stor1

  • “stor1” is the name of the replicated volume we’re creating
  • “replica 3 arbiter 1” specifies that we wish to create a three node cluster with a single arbiter node, the last node specified in the command will become the arbiter
  • “gluster1:/data/brick1/stor1” creates the brick on the mountpoint we created earlier, I’ve named the bricks stor1 in order to reflect the name of the volume, but this isn’t imperative.

After running the command, you’ll need to start the volume, do this from node 1:

gluster volume start stor1

Then check the status and information:

gluster volume status stor1

gluster volume info stor1

As you can see, the brick created on the third node is specified as an arbiter, and the other two nodes hold the actual data. At this point, you are ready to connect to GlusterFS from a client device.

I’ll demonstrate this connection from Proxmox, as the two naturally work very well together.

Ensure that the Gluster node hostnames are in your Proxmox /etc/hosts file or available through a DNS server prior to starting.

Start by logging into the Proxmox web gui, then go to Datacenter>Storage>Add>GlusterFS:


Then input a storage ID (this can be any name), the first two node hostnames, and the name of the Gluster volume to be used. You can also specify what file types you wish to store on the Gluster volume:


Don’t worry about the fact that you weren’t able to add the third node in the Proxmox menu; Gluster will automatically discover the rest of the nodes after it connects to one of them.

 

Click add, and Proxmox should automatically mount the new Gluster volume:


As we can see, we have a total of 80GB of space, which is now redundantly replicated. This newly added storage can now be used to store VM virtual hard disks, ISO images, backups, and more.

In larger application scenarios, Gluster is able to increase the speed and resiliency of your storage network. Physical Gluster nodes combined with enterprise SSDs and 10/40G networking can make for an extremely high-end storage cluster.

Similarly, GigeNET uses enterprise SSDs with a 40GbE private network for our storage offerings.

Managed Hosting vs Colocation Hosting

Deciding which hosting plan is right for your organization requires a good grasp of the options. In this blog, we’ll help you compare two common web hosting plans that organizations choose between: managed web hosting and colocation hosting.

Colocation hosting and managed hosting offer tremendous advantages compared to hosting your server on-site, but the details matter – and can determine exactly which strategy is most sensible for your business model.

Managed hosting services: what are they?

Managed web hosting is a form of dedicated hosting.

You’ll purchase your own server, with full administrative control over the details. However, the hosting provider manages the essential physical tasks, while also implementing specialized software and dealing with the difficult tasks of micromanaging a server.

Fully Managed hosting services typically include the following elements, tended to by industry experts:

  1. Server installation and setup at the data center
  2. Approved software installations, according to your specifications
  3. Security monitoring
  4. Comprehensive customer support included
  5. Software updates and management
  6. Data backup and protection

The host signs a contract with the organization called a Service Level Agreement (SLA) which dictates the terms of the provided service. SLAs detail the exact parameters of the depth of service required and provide measurable metrics that the provider must meet.

Managed hosting services offer a real opportunity for small-to-medium sized organizations that lack the capital to keep and maintain their servers on-site, don’t have an appropriate IT team in place, or are time-constrained due to the demands of their business operations.

What are the cost benefits of managed services?

Managed hosting offers an immediate real-world budgetary benefit. It lowers IT overhead by outsourcing the expertise required to manage and host the organization’s infrastructure to seasoned experts at a specialized hosting company.

Finding, hiring, and paying an industry appropriate salary to IT experts can quickly balloon into a headache-inducing project. It’s time consuming to onboard an IT department. It’s difficult to retain true IT talent. Even more prohibitively, true hosting experts are in short supply – so there’s a highly competitive labor market.

Lowering costs through managed hosting services offers a clear benefit: instead of building your own costly IT department, or stretching your existing IT department to its breaking point, you can create a coherent and predictable cost structure for your hosting needs. Unlike cloud services, which are billed like a utility and can introduce unpredictable economic demands, managed hosting services typically have consistent and reliable costs. This means you can plan ahead and focus on your organization’s goals – instead of micromanaging your hosting, or becoming cost-constrained by unexpected demand.

So – what is colocation hosting?

While managed hosting services allow some degree of control and accessibility, colocation hosting allows complete control of all aspects of server hosting to be sourced within your organization.

Colocation hosting means your server sits in the third-party hosting provider’s rack and utilizes the data center’s plethora of power and bandwidth, but is entirely managed and maintained by your IT staff.

Compared to managed hosting services, colocation:

  1. Requires your organization to set up and install the server at the data center
  2. Allows your IT team complete control of all software setup and installation according to your operating procedures
  3. Doesn’t include support except for pay-per-use remote hands in case of emergency
  4. Lets you completely control software updates and management
  5. Puts data protection and security in your hands

While colocation is similar to dedicated hosting (you own – rather than lease – the server), the difference between colocation and managed dedicated hosting is the level of control. With colocation, your IT team has unilateral control over all aspects of your server’s management and implementation.

The cost benefits of colocation web hosting

We’ve discussed the difference between colocation and managed hosting – so how does colocation keep costs lower?

Colocation cuts costs compared to on-premise deployments through:

  1. Lowering the price of power consumption
  2. Cutting the cost of owning and operating networking hardware
  3. Offering significantly more bandwidth compared to a typical business location
  4. Offering superior physical security measures compared to an on-site deployment
  5. Enabling IT departments to expand their expertise through remote-hands services

Simply put: a colocation data center offers organizations steep discounts on the intrinsic physical costs of keeping servers on-site.

Compared to a managed or dedicated hosting provider, colocation also offers cost-saving benefits:

  1. Over the long term, an equipment lease will usually cost more than purchasing the equipment
  2. When migrating from an on-premises deployment, you already have the hardware

Data centers like GigeNET’s nationwide locations offer deep discounts on power consumption, superior bandwidth availability and security against emergencies like fires, theft and damage. Data centers like ours are specifically designed with the environment that servers need by providing proper cooling, inexpensive power, fire mitigation systems and expert 24/7 oversight.

There’s an additional benefit, as well: colocation significantly reduces the risk associated with keeping your valuable servers and data on-site and merely hoping a disaster never occurs.

Colocation is ideal for an enterprise that has robust IT requirements, a competent IT team and a specific awareness of their needs. If any of these elements are missing, GigeNET highly recommends our powerful managed hosting solutions.

How to choose what’s best for your organization

Colocation and managed hosting are two effective options for organizations that demand high-quality internet infrastructure. The difference comes down to the level of granular control that’s required for your organization’s tasks and the level of expertise you have access to within your organization.

Ultimately, both colocation and managed services offer tangible time and money savings compared to keeping your server on-site. The choice comes down to evaluating whether your organization can meet the technical demands of colocation hosting, or if it needs the expertise that managed hosting offers.

At GigeNET, we’re focused on providing our partners with the resources and expertise they need so they can focus on what really matters: achieving their organization’s goals instead of babysitting their server and worrying about their infrastructure.

We’ve got more than 20 years of experience in the server hosting industry. Our data centers and managed services are designed to meet and exceed the needs of organizations of every size – big, medium or small.

Let us help you customize your hosting plan so you can get back to business.

Unsure which hosting solution is best for you? Receive a free consultation.

If you’re determined to spend as little as possible – just choose shared hosting and hope for the best. 

But if you want stability and security, it’s time to take a serious look at dedicated servers.

The key difference? 

A dedicated server hosting plan means that your website is the only site hosted on the server, while shared hosting splits the server’s resources – and its limits – among many customers.

When choosing between shared hosting and dedicated hosting, the decision comes down to understanding what your organization requires. While there are pros and cons to both options, it’s also important to understand the differences between shared hosting and dedicated server hosting to clarify this vital choice in establishing and maintaining your business.

Sites Hosted on the Server

With a shared hosting package, there are other organizations that host their sites on the server, right alongside your organization.

A dedicated hosting plan means that your organization is the only user hosted on the server.

Bandwidth & Disk Space

With shared hosting, the amount of disk space and bandwidth you are allotted is limited since there are others sharing the server. You will be charged more if you surpass your allotted amount of bandwidth, and penalized if you exceed your amount of disk space – just like a utility.

Even if you stay within the resources you’ve paid for, some hosts add extra rules that penalize you for hosting elements like videos or music – regardless of whether you hit your bandwidth cap!

With dedicated hosting, bandwidth and disk space are dedicated entirely to your organization and its server. There’s no resource sharing, so limitations on the amount of disk space and bandwidth are up to your organization’s requirements.

Costs

With shared hosting, the server’s resources are shared among several users – so operating costs are divided up among the users. This makes shared hosting more affordable, and ideal for smaller organizations or businesses just beginning to establish their web presence.

Because a dedicated server is dedicated solely to one user, it costs more. However – there’s a benefit! With a dedicated server, you’ve got far more operational flexibility to deal with traffic spikes, customize your server or install specialized software to meet your needs.

Required Technical Skill

With shared hosting, your organization doesn’t need a staff with specialized technical skills. Maintenance, administration and security are managed by the shared hosting provider. This dramatically simplifies operating the server. The tradeoff is that it limits what your organization can do.

With your own dedicated server, your organization should anticipate needing IT & webmaster skills to set up, install, administer and manage the server’s overall health.

If that’s too daunting for your organization because of time or money constraints – but you still need the power and space of a dedicated server – fully managed dedicated hosting plans are available at a higher cost.

Fully managed dedicated hosting plans are more expensive than colocated dedicated servers. However, it’s important to understand that the cost of managed services is typically still far less than building, staffing and onboarding your own IT department.

Security

With shared hosting, the hosting company installs firewalls, server security applications and programs. Experts in security are tasked with providing a safe & stable operating environment for the organizations on shared servers.

Securing a dedicated server will be your organization’s responsibility. Configuring software to detect and mitigate threats falls to your IT department, while your hosting company is only responsible for keeping your server powered and physically secured.
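
As a rough illustration of what that responsibility can look like in practice, here’s a minimal hardening sketch for a Debian-based dedicated server, assuming the ufw firewall and fail2ban are available from your package manager. Your distribution, open ports and policies will differ – treat this as a starting point, not a complete security plan.

apt-get update && apt-get install -y ufw fail2ban

# Permit remote administration and web traffic, deny all other inbound connections.
ufw allow ssh
ufw allow 443/tcp
ufw default deny incoming
ufw enable

# fail2ban typically ships with an SSH jail; confirm the jail name on your system.
fail2ban-client status sshd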

On a dedicated server, your IT team controls the security programs you install. And since your organization is the only user, there are fewer chances of picking up viruses, malware and spyware from careless neighbors or misconfigured security.

While it seems counterintuitive, there is actually a higher risk of attack vectors being exploited through shared hosting. As the adage goes: “Good fences make good neighbors,” and your own dedicated server is the ultimate “fence.”

Website & IP Blacklisting

Shared servers introduce an interesting risk vector: there’s a chance that Google and other search engines will blacklist your websites because someone else on the server engaged in illegal or discouraged practices like spamming.

Bad neighbors on a shared server can get the entire IP address blacklisted, making your websites practically invisible.
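
If you suspect a shared IP has picked up a bad reputation, one quick and purely illustrative check is to query a public DNS blocklist such as Spamhaus ZEN: reverse the IP’s octets and look them up as a hostname. The documentation address 203.0.113.7 stands in here for your server’s IP.

dig +short 7.113.0.203.zen.spamhaus.org
# An answer in the 127.0.0.x range means the address is listed; an empty answer means it isn't.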

On your own dedicated server, it’s extremely unlikely that you’ll get blacklisted – unless your organization engages in unethical or illegal internet practices. We really don’t recommend that!

Server Performance and Response Time

On shared hosting, unexpected bursts of web traffic could drain the server’s limited bandwidth resources. This leads to slow response times and slow loading times, through no direct fault of your own – frustrating customers and employees alike.

You’re at the whims of someone else’s customers. If your neighbor suddenly and unexpectedly gets popular, you’re stuck in a traffic jam with nowhere to go.

This same traffic jam scenario is very unlikely on a dedicated server. Since you’re not sharing resources, you can count on your server to be highly responsive, with adequate bandwidth when you need it.

Level of Control

Shared hosting means less control. The hosting company ultimately holds the keys to the kingdom, and makes choices on your behalf. While hosting companies do their best to keep things running smoothly, many organizations require more granular control over how exactly their server is utilized.

A dedicated server offers a great deal of custom options and settings. Your organization will have full control over the server. You can add your preferred programs, applications and scripts to meet your operational requirements.

Dedicated servers offer tremendous latitude over operational flexibility and security – which is very beneficial for businesses with the requisite knowledge and skills.

If you’re looking for a sweet spot somewhere in the middle, fully managed hosting services offer the speed and flexibility of a dedicated server combined with expert management from seasoned IT veterans – the best of both worlds, at a small premium.

Make an Informed Decision

Choosing the right kind of hosting solution involves evaluating your operation’s budget, understanding the options that exist, realistically grasping your needs and comprehending what degree of control is appropriate for your organization.

No matter which type of server hosting you choose, we want you to make an informed decision. If you’re looking for help, contact our expert system architects to evaluate your organization’s requirements. We’ve helped hundreds of businesses develop a comprehensive hosting strategy to meet their needs – big, medium or small.

GigeNET has over 20 years of web hosting experience. We partner with our clients for life – some of our partnerships are older than up-and-coming hosting companies that exist today! We have a seasoned, industry-leading support staff and three data centers across the United States: Chicago, Washington D.C. and Los Angeles.

If you’re ready to explore the options and see what fits your organization, we’re ready to lead you in the best direction for your future. Partner with us and help make a better internet for everyone.

Unsure which hosting solution is best for you? Explore our hosting solutions or receive a free consultation.

GigeNET backups

Why a backup strategy matters – even though it’s usually invisible

A truly beneficial backup strategy is almost invisible: it runs quietly in the background, archiving the progress of your business without impacting performance.

This is why it’s so often left to chance: in almost every case, your business can continue to function whether you have a backup plan in place or not.

But if something goes wrong—and trust us, it’s almost an inevitability—you’ll wish you had gone ahead and invested in the silent redundancy of a good backup.

There’s a real-world, bottom-line cost to failing to secure your data through backups—98% of businesses report that even an hour of downtime could cost them $100,000.

Can you afford to throw away $100,000?

The dire consequences of data loss

What exactly makes your data valuable?

It isn’t the server it sits on. The things you can easily and unexpectedly lose through a data loss event include some of your most valuable operational assets:

  1. Meticulously created content like WordPress themes, blog posts, branding imagery & PDFs
  2. Mission-critical settings for software you’ve configured to meet your specific business needs
  3. Employee records and tools like spreadsheets, handbooks, playbooks and other operation-critical documentation
  4. Contracts and legally binding documents
  5. Email contact lists and records of important correspondence with clients
  6. Sensitive and private information you’ve collected about your users, employees or business practices

It may seem like losing some of these is unlikely. After all, don’t your employees keep all of their emails on their own computers—and aren’t those stored somewhere else?

Well, what if a flustered employee deletes all of their emails? This actually happened to The Alzheimer’s Association. Google’s massive cloud wasn’t enough to protect them against human error.

Even worse—what if someone accidentally deletes the big project you’re working on? It happened to Pixar in one of the more amazing and frightening data loss events: the studio deleted almost the entirety of Toy Story 2 and only managed to save it because an employee had kept a personal backup at home.

Data loss events can also take the form of clumsiness: an employee simply misplacing a laptop with access to sensitive information has cost companies like VeriSign, the Daily Mail, Bank of America and even governmental organizations like the Department of Veterans Affairs millions of dollars. Sometimes all it takes is bad luck, an unlocked car door or an opportune thief to compromise the security of your entire organization’s data. That, too, is a data loss event.

So what, exactly, should an organization look for in its backup strategy?

The solution to preventing catastrophic data loss

It’s important to build it right the first time. What do we mean by that?

Right from the beginning, you should aim to implement the best practices for creating a genuine archive of your organization’s activities, from its settings to its valuable content to its painfully compiled knowledge.

Don’t leave it to chance—seek out the advice of true backup experts and plan for a catastrophe. If it never comes, great—but if you find yourself facing a data loss event like a hard drive failure or a catastrophic natural disaster, you’ll be thrilled to discover that your backups act like a versioned history of your organization’s activities and data.

To craft your data loss prevention plan you’ll need to determine:

  1. What needs to be backed up.
    • This requires taking a holistic view of your organization. What is mission critical? What could you never do without? What software, content, websites and data do you rely on?
  2. Where you’re going to keep your backups.
    • You’ll need to determine whether you can safely keep your backups on-site or will need to enlist an off-site host. This guards against things like fires, floods, storms, power outages and employee error. The ideal combination is utilizing both, with your least sensitive data kept on site and your more valuable data in both locations.
  3. How often you should back up your data.
    • Daily backups are great, but what if your data landscape changes rapidly? There are options that allow backup increments as fine as 15 minutes or less, while some organizations may only need quarterly backups. Determine the timeframe that’s required to secure your organization’s data (see the scheduling sketch after this list).
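
As the scheduling sketch referenced above, here are a couple of hypothetical crontab entries; /usr/local/bin/backup.sh is a placeholder for whatever backup job you actually run, and only the timing fields matter.

# Daily backup at 02:30
30 2 * * * /usr/local/bin/backup.sh daily

# Incremental backup every 15 minutes for rapidly changing data
*/15 * * * * /usr/local/bin/backup.sh incremental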

Creating a data backup plan means insulating your organization against the tremendously expensive risk of data loss. Data loss strikes organizations of any size and occurs in a variety of unpredictable and unforeseeable ways – so make a plan today, and don’t wind up wishing you had heeded our advice.

If this seems like a daunting task, or you need some help figuring out exactly what sort of plan to create, we’re more than happy to help you.

Why R1Soft backups are awesome

At GigeNET, we’ve partnered with R1Soft to provide one prong for our two-pronged backup services.

R1Soft is widely considered the fastest and most scalable – yet still affordable – server backup software. Incremental daily backups mean that if you lose your data on Friday, you can rewind to Thursday and resume operations.

But the benefit doesn’t end with R1Soft’s archival capabilities. They’ve pioneered block-level backups that are minimally intrusive and avoid hurting your server’s performance. Essentially, R1Soft’s backups only capture changes as they occur—rather than wastefully archiving all of your data, all of the time, even when it hasn’t changed.
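
R1Soft’s block-level approach is its own technology, but the underlying idea of copying only what changed can be illustrated with a simple file-level sketch using rsync hard-link snapshots. The paths below are hypothetical, and this is not how R1Soft works internally – it only demonstrates the incremental principle.

TODAY=$(date +%F)
# Copy only files that changed since the last snapshot; unchanged files are hard-linked.
rsync -a --delete --link-dest=/backups/latest /var/www/ /backups/$TODAY/
# Point "latest" at the new snapshot so the next run compares against it.
ln -sfn /backups/$TODAY /backups/latest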

You can find out a lot more about R1Soft’s awesome developments in backup technology straight from the source.

Why backups equal operational security and stability

99.99% uptime is no longer an idealistic goal; it’s widely considered an industry standard. Lost time is lost money – a self-evident truism that no manager needs explained – but with data loss, the picture seems to get a little fuzzy.

Downtime does more than damage your bottom line. It damages your reputation. Preserving your organization’s goodwill doesn’t have an easily quantifiable dollar amount, but beware of undercounting the cost of a data loss event—there are myriad sources that estimate the value of data loss as running into the trillions each year.

Data loss isn’t just a headache or a pause in operations. It’s devastating.

What a catastrophe really costs: backups are actually just insurance

The value of a strong backup strategy is that it acts like an insurance policy. If something goes wrong, you call on your backups to restore operations to normalcy with minimal lost time and disruption.

Crafting your backup strategy is a proactive step toward securing your organization. With expanding security threats from ransomware to novel DDoS attacks—and organizations becoming increasingly dependent on their IT infrastructure just to function—keeping your backups up-to-date and ensuring that they’re functioning is as basic as keeping the doors locked when you leave.
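
“Ensuring that they’re functioning” can be as simple as a scheduled spot-check that the latest archive exists and can actually be read. A minimal sketch, assuming a hypothetical nightly tarball under /backups:

BACKUP=/backups/site-$(date +%F).tar.gz
# -s: the file exists and is non-empty; tar -tzf: the archive is readable end to end.
if [ -s "$BACKUP" ] && tar -tzf "$BACKUP" > /dev/null; then
    echo "Backup $BACKUP verified"
else
    echo "Backup $BACKUP FAILED verification" >&2
fi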

Insure your organization’s data like you’d insure your home or your car. Create a backup strategy. At GigeNET, we’re experts in crafting customized backup strategies and have been in the hosting industry for more than 20 years.

Don’t wait until your data disaster strikes. Contact us for help with developing a comprehensive data backup plan.
