
Managed Dedicated Server Hosting Solutions

IT teams are expensive.

Luckily, there are options besides hiring (and paying!) a team just to manage your server.

For organizations that don't have the budget for an in-house IT team but still have extensive IT requirements, the most flexible and reliable option remains what it has always been: managed dedicated servers.

Why would you want managed dedicated servers? The benefits.

The most immediate concern for any organization is cost. With managed dedicated hosting, your organization gains enterprise-grade resilience without a large initial outlay.

Lowering the up-front investment required for a powerful server is the simplest benefit that managed dedicated hosting offers. Your organization avoids much of the cost associated with:

  • Hiring and retaining IT talent
  • Evaluating the right-sized IT infrastructure
  • Procuring and purchasing servers and hardware
  • Establishing maintenance and update protocols
  • Securing premises for your infrastructure

The overall costs associated with this kind of large-scale, long-term investment can be absorbed by large enterprises—but many small-to-medium organizations struggle with the knowledge and capital gaps that accompany such a project.

Managed dedicated server hosting means lowering the bar for entrance into a high-performance and high-availability infrastructure. You'll have access to expertise without needing to pay a premium to retain talented IT staff. You'll harness the stability and security of a data center rather than the risk of an on-premises deployment. You'll be able to build it right the first time and plan for growth in the future.

And more importantly: you’ll see consistent and stable costs, backed by a contract, that you can predict and plan for.

How managed dedicated servers work: from idea to execution

Initially, your organization takes stock of its operations and meets with an expert systems architect to develop a unique strategy for your organization. We’ll analyze your prospects for near-term and long-term growth and create an infrastructure strategy that exceeds your current expectations and anticipates plausible future scenarios.

Next, your existing systems can be migrated—or re-developed—onto your new architecture. Your hardware and software are kept up-to-date from Day 1, and you are granted access to the servers through the server management software of your choice.

From there, you'll have managed service from a staff of experts. You can expect on-demand service, as well as round-the-clock systems monitoring to ensure "four-nines" (99.99%) uptime—about 52 minutes of downtime per year. You'll be able to submit support tickets and expect a rapid response.

The best part of managed dedicated servers? You get to focus on your operations instead of maintaining and updating your infrastructure. By tapping long-time industry professionals at a fraction of the cost of hiring your own IT team, you leverage the knowledge and experience that a company like GigeNET already has. It's an investment in the stability of your operations without the large initial cost.

What the benefits of managed dedicated servers look like

Instead of combing Google for knowledge and straining to understand a field outside of your own expertise, managed dedicated server hosting means you have access to:

  • Hardware, software, bandwidth and power at a discount
  • Secure backups to ensure operational continuity
  • Highly managed security updates and omnipresent monitoring
  • Insurance against disasters through secure data center locales
  • Fastest-route optimized networks for high-speed service and content delivery
  • Customized service and responses to your unique problems

These benefits play out in a variety of ways: you’ll see reduced outlays for downtime, you’ll increase your capacity for growth and you’ll be able to focus on nurturing your organization’s primary goals.

The web hosting industry is a complex & constantly shifting landscape: things are quickly outdated, outmoded and out of favor—security threats rapidly evolve and often appear in novel forms that require expensive and time-consuming adaptation. That’s why managed dedicated hosting is so powerful: it doesn’t require the level of IT competency that other forms of hosting demand from organizations.

To get the most out of your host, you need a company that aligns with your organization’s skill level and understands the burden that a modern-day web presence places on operations.

The benefits of managed dedicated servers look like an organization that's laser-focused on its primary objectives, rather than stretched thin by a poorly managed online presence.

Run a business, not a server

The real point of managed dedicated servers is not to get out of hiring an IT team or reduce your capital expenditures: it’s to create a business that functions as well as it possibly can.

We’ve partnered with organizations for more than 20 years to develop solutions for their online infrastructure—longer than a majority of the large web hosts have even existed.

There’s a real need for affordable, reliable and high-performing hosting solutions like managed dedicated servers: organizations run leaner and with greater specificity than ever before, requiring them to stay focused on their operational tasks rather than checking to see if their server software is up to date and their hardware is running like it should.

The reason GigeNET partners with organizations is to improve their ability to execute their goals. Our high-performing servers, responsive support staff, industry-leading SLAs and nationwide data centers mean we can offer tremendous value for organizations of highly varying sizes with diverse operational needs: our bread-and-butter is creating powerful infrastructure so you can focus on the day-to-day efforts required to reach your goals.

When you’re ready to move toward a cost-conscious and trustworthy partnership with a company that offers managed dedicated hosting solutions, contact us or check out our managed dedicated servers.

Traefik Guide

As someone interested in following DevOps practices, it's my goal to find the best solutions that work with our company's principles.

At its core, DevOps is a strategy of uniting administrators and developers into a single team with a common goal: faster deployments and better standards through automation. To make this strategy work, it's essential to continuously explore new tools that can provide better orchestration, deployments, or coding practices.

In my search for a more programmable HTTP load balancer, I spotted Traefik. Intrigued by its feature set and backend support for Docker, I quickly spun up a VM and jumped straight into learning how to integrate Traefik with Docker.

My first glance at Traefik's file-based configuration left me a little uneasy: it was the first time I had encountered TOML formatting. It wasn't the YAML I encounter in most projects, nor the bracket-based formatting I'd worked with in the past with Nginx, Apache or HAProxy.

Traefik Terminology

Before I jump into the basic demonstration of Traefik with Nginx on Docker, I'll go over the new terminology that Traefik introduces:

  • Entrypoints – The network entry points into the Træfik daemon, such as the listening address, listening port, SSL certificates, and basic endpoint routes.
  • Frontend – A routing subset that handles the traffic coming in from the Entrypoints and sends it to a specified backend depending on the traffic's Header and Path.
  • Backend – The configuration subset that sends traffic from the frontend to the actual webserver. The webserver is selected based on the configured load balancing technique.

A basic demonstration of Traefik with Nginx on Docker

In this demonstration of a basic Traefik setup, we will only focus on the file-based configuration of Traefik Entrypoints, because Docker dynamically builds the Frontend and Backend configurations through Traefik's native Docker Swarm integration.

Let's start by defining two basic Entrypoints with the "defaultEntryPoints" flag. Under this configuration flag we create two Entrypoints labeled 'http' and 'https'; these labels represent the everyday traffic we see within our browsers. Under the "[entryPoints]" field within the configuration, we define the entry point labeled 'http' to listen on all interfaces and assign it port eighty for inbound traffic. Under the same Entrypoint label we instruct that all traffic entering port eighty be redirected to our second Entrypoint, labeled 'https'. The Entrypoint labeled 'https' follows the same syntax as 'http', with a slight deviation in how it handles the actual web traffic.

Without a redirect of its own, it instructs Traefik to accept traffic on port 443. We also instruct Traefik to utilize TLS certificates for web encryption, and tell it where to load the SSL certificates from. A full example is shown below, and can also be found in our GitHub repository.

defaultEntryPoints = ["http", "https"]

[entryPoints]
  [entryPoints.http]
  address = ":80"
    [entryPoints.http.redirect]
    entryPoint = "https"
  [entryPoints.https]
  address = ":443"
    [entryPoints.https.tls]
      [[entryPoints.https.tls.certificates]]
      certFile = "/certs/website.crt"
      keyFile  = "/certs/website.key"

The TOML configuration we have built is all that is required for a dynamic Docker demonstration. Save the file as traefik.toml on each management node in your Docker Swarm, since that's where Traefik will run.
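Copying the file around by hand works, but a small loop makes it repeatable. A minimal sketch, assuming root SSH access and hypothetical manager hostnames (manager1 through manager3); substitute your own:

# Push traefik.toml and the certs directory to every Swarm manager node.
# manager1, manager2, manager3 are hypothetical hostnames; replace with yours.
for host in manager1 manager2 manager3; do
  scp /opt/traefik/traefik.toml root@"$host":/opt/traefik/traefik.toml
  scp -r /opt/traefik/certs/ root@"$host":/opt/traefik/
done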

To keep the focus on Traefik, we won't go into the setup and configuration of a Docker Swarm cluster; that will be the topic of a future blog post. Instead, we will design a basic Docker compose file to demonstrate the dynamic loading of Traefik with a default Nginx instance. Within the compose file, we will focus on the Traefik image and the Traefik flags required for basic backend load balancing.

Let's start by downloading the docker-compose.yaml file from our github.com page: https://github.com/gigenet-projects/blog-data/tree/master/traefikblog1.

The entire code repository can also be cloned with the following commands:

[root@dockermngt ~]#  git clone https://github.com/gigenet-projects/blog-data.git

[root@dockermngt ~]#  cd blog-data/traefikblog1

The docker-compose.yaml:

version: '3.3'

services:
  nginx:
    image: nginx
    ports:
      - target: 80
        protocol: tcp
        mode: host
    networks:
      - distributed
    deploy:
      restart_policy:
        delay: 10s
        max_attempts: 10
        window: 120s
      labels:
        traefik.frontend.rule: "Host:traefik,demo.gigenet.com"
        traefik.port: "80"

  traefik:
    image: traefik
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /opt/traefik/certs/:/certs/
      - /opt/traefik/traefik.toml:/traefik.toml
    deploy:
      restart_policy:
        delay: 120s
        max_attempts: 10
        window: 120s
      placement:
        constraints: [node.role == manager]
    networks:
      - distributed
    ports:
      - 80:80
      - 443:443
      - 8080:8080
    command: --docker --docker.swarmmode --docker.domain=demo --docker.watch --web --loglevel=DEBUG

networks:
  distributed:
    driver: overlay

Under the Traefik service section we will focus on the compose flags labeled volumes, deploy, placement, networks, ports, and command. These flags have a direct impact on how the Traefik Docker image operates, and they need to be configured properly.

Within the volumes flag we pass in the Docker socket and the Traefik TOML configuration file we built previously. The Docker socket is used by Traefik for API purposes, such as reading the Traefik label flags assigned to other Docker services; we will go into this in further detail in a few steps. The Traefik configuration we built earlier defined both the http and https Entrypoints, which tell the Traefik container the base configuration it will use when starting for the first time. These configuration flags can be overridden with Docker labels, but we won't go into such advanced configurations in this blog. Since the configuration focuses on encryption, we also mount our SSL certificates under the /certs directory in the Traefik container. The Traefik TOML configuration file and SSL certificates should be installed on every Docker management node.

Under the deploy section, the placement flag deserves attention. Traefik requires the Docker socket to get API data, and only management nodes can provide this data in a Docker Swarm, so we constrain placement of the Traefik container to nodes with the manager role. This simple technique enforces the requirement.

The networks flag is a must-have for Traefik to work properly. With Docker clusters we can build overlay networks that are internal to just the containers assigned to the network, which provides network isolation. For the load balancing to work, the Traefik container must be on the same network as every webhost we plan to load balance traffic to. In this case we named our network "distributed" and set it to use the overlay driver.

How inbound traffic is passed to the overlay network

The ports flag is simple and straightforward. In the Traefik configuration we assigned ports 80 and 443 to take in and forward traffic; the ports flag maps each published host port to the corresponding port inside the container. We also publish port 8080 in this example so we can demonstrate the web dashboard that Traefik provides.
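If you'd like to check the dashboard from a shell rather than a browser, you can probe port 8080 directly. A minimal sketch, assuming the API that Traefik 1.x exposes with the --web flag (endpoint paths differ in later Traefik versions):

# Probe the Traefik web API on port 8080 (assumes Traefik 1.x with --web).
curl -s http://localhost:8080/health          # JSON uptime and response-time stats
curl -s http://localhost:8080/api/providers   # frontends/backends Traefik has discovered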

Lastly, the command flag passes any flags we did not set in the Traefik TOML configuration file directly to the Traefik binary at startup. Here we tell Traefik to utilize the Docker backend in Swarm mode and to enable the dashboard for this Docker compose demonstration.

Now that we understand the Traefik section of the Docker compose file, we can go into detail on how other services, such as Nginx, are dynamically connected to the Traefik load balancer. Within the Nginx service we will focus on the ports, networks, and labels flags.

With this specific Nginx container image, a web browser will only see the default "Welcome to nginx!" web page. Looking at the ports flag, you'll notice we open port eighty so that packets are not firewalled off by the Docker management service. In the Traefik service section, we mentioned that every service must share Traefik's network; within our Nginx service you will notice the required "distributed" network has been assigned.

The labels flag is the most interesting section of the Nginx service. Within it we set a few flags that Docker registers with the Docker management API; this is how Traefik knows which backend to assign, and whether that backend is alive. To keep this demonstration simple, we tell Traefik that the Nginx service has a single virtual host named 'demo.gigenet.com'. We assign this with the 'traefik.frontend.rule' flag under the labels section, as follows: 'traefik.frontend.rule: "Host:traefik,demo.gigenet.com"'. Notice how the Traefik Frontend is defined within the Docker compose file, not in the file-based Traefik configuration. With this flag set, Traefik can discover every IP address assigned on the network overlay. Traefik also needs to know which ports the Nginx services listen on, and this is done with the "traefik.port" flag. In our example we assigned port eighty to "traefik.port", which is also the port we opened for network traffic.

With this configuration explained and ready, it's now time to launch the Docker stack we built and test out the load balancing. To launch the Docker stack, run the following command on a Docker management node:

[root@dockermngt ~]#  docker stack deploy --compose-file=docker-compose.yaml demo

You should now see the "Welcome to nginx!" page in your browser when visiting the domain name you specified. You can also review the live load balancing rules in the Traefik dashboard by appending :8080 to the domain.
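If DNS for your test domain doesn't point at the cluster yet, you can still exercise the routing from a shell by supplying the Host header yourself. A quick sketch; the IP below is a placeholder for one of your Swarm node addresses:

# 203.0.113.10 is a placeholder; substitute a real node IP.
curl -i -H "Host: demo.gigenet.com" http://203.0.113.10/
# The http Entrypoint redirects to HTTPS, so follow up with a TLS request:
curl -ik -H "Host: demo.gigenet.com" https://203.0.113.10/

# Scale the Nginx service and watch Traefik spread requests across the replicas:
docker service scale demo_nginx=3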

How to Choose the Right Colocation Data Center

So you've navigated the plethora of hosting options out there, and determined that colocation is right for your organization.

But your diligence shouldn’t end there. You’ve got to choose the right host, with the right data center to secure your server.

This blog will arm you with questions to ask and issues to consider as you choose your colocation data center & hosting company.

Why you need to know what you need to know

While servers are expensive tools, it's the data on your servers that is truly valuable.

As Jennifer Svensson, GigeNET’s head of support explains in her blog about data loss:

“Countless work hours have gone into making each server unique, with custom set-ups, modified WordPress templates, blog posts going back years, etc. This is where the value of a server lies: in the data.”

Choosing the right data center means keeping the risk of data loss low. Colocation is a real time and money saver, but—you’ve got to ensure that your valuables are under the watchful eye of an experienced and competent staff.

Otherwise, your investment becomes a costly lesson in how not to manage your infrastructure.

Pick your location wisely: be ready to go there if needed

With colocation, you are ultimately responsible for the installation and maintenance of your server.

Because of this requirement, it’s wise to choose a data center that’s easily accessible to your organization.

Data centers tend to cluster around geographically secure areas that aren’t prone to natural disasters, have a steady and inexpensive supply of water (for cooling) and power (for the servers, naturally)—so check and see if your provider is near other data centers.

This can be a cue that the data center has been carefully planned to take advantage of the area’s existing infrastructure.

What to consider for your location:

  1. Does your potential host have a data center that’s close enough to access in an emergency?
    • Being able to access your physical server when you need to is an oft-overlooked element to choosing your colocation host: you may need to upgrade or service your servers!
  2. Does your provider have multiple data centers across the United States?
    • This offers redundancy and reliability, as well as the potential for faster network connectivity.
  3. Is the provider in a physically secure location?
    • Areas prone to natural disasters (like earthquakes, floods, tornadoes or hurricanes) are hard to avoid entirely—for example, choosing a central US location versus a coastal US location can be a positive trade-off.

Knowing where your servers live creates security by ensuring you can access them—and through a stable outside environment.

Internet connectivity: how fast do you want to go?

Perhaps it’s obvious, but not all data centers are created equal.

Since your server has to connect to the internet to do anything at all, it's crucial to ask about the level of connectivity—and reliability—from the data center.

Here's what we recommend asking a potential host's representatives about their network:

  1. How much bandwidth is available?
    • The more, the better. Does the host offer a network speed test so you can determine it yourself?
  2. How much packet loss is there?
    • The less, the better. You're shooting for as little packet loss as possible; see the quick test sketch after this list.
  3. What’s the average uptime?
    • Anything less than 99% is below the current industry standards. Is their claim also backed up by their Service Level Agreement?
  4. Who are their transit providers?
    • The Tier 1 ISPs in the US (they have access to the entirety of the internet, directly through their physical infrastructure) are AT&T, Cogent, Verizon, Telia, Level 3 & Comcast. Do they have all of these, and others, as transit providers?
  5. What sort of routing optimization does the data center utilize?
    • Not all routing is created equal. With data centers, the fastest route is not always a straight line. They should be able to competently and clearly explain how their routing is optimized and why it’s so fast.
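A note on item 2: packet loss is easy to sanity-check yourself with standard tools, assuming the host gives you a test address to probe (the hostname below is a placeholder):

# Send 100 probes; the summary line reports packet loss and round-trip times.
ping -c 100 lg.example-host.net

# mtr combines traceroute and ping: -r prints a report after -c cycles, -w widens output.
mtr -rwc 100 lg.example-host.net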

You should demand clear and concise answers to your questions about internet connectivity.

However—it’s vital that you understand your own bandwidth and internet requirements before you seek out requests for proposals. Take stock of your business goals—and current requirements—before shopping around.

Redundancy and backup requirements: your insurance policy

Besides a high-speed connection, data centers typically offer a degree of physical security beyond what your business could provide on-site.

With a good data center, you are protected against natural disasters—as well as insulated from downtime and data loss due to mismanagement.

This security is achieved through two specific areas: power and backups. So what should you consider about power consumption and backups before jumping into a colocation contract?

  1. Data centers need lots of power—and a plan for when power fails.
    • While power loss events at a data center are rare, they do happen. It's vital that you ask about the data center's power backup strategy: do they have an on-site generator? What's their overall power loss strategy?
  2. Understand what your growth requirements are for the next 5 years
    • If you’re anticipating drastic growth (or stable needs), ask about how the data center plans to grow its power capacity. The more computing power per square foot that the provider plans for, the easier growth will be.
    • Servers like Intel’s Xeon-D have been specifically engineered to harness maximum computing power with minimal energy expenditure—does the data center offer similar high-performing, high-density servers?
  3. Inquire about backup strategies
    • Data archival for the data center’s systems is worth inquiring about.

By ensuring that your data center has covered its bases regarding redundancy and power, you’re ensuring your own uptime—think of it as insurance against costly downtime.

The bells & whistles: consider managed services, even if you don’t think you’ll need them

High turnover in the IT industry means it's vital to anticipate a loss of talent. What if your "server pro" leaves the company?

We recommend ensuring that your data center offers some degree of managed services—even if you don’t plan on utilizing them, you can prevent problems by having managed services as a fallback plan in case your talent leaves for greener pastures.

Additionally—managed services allow experts with long-standing experience to see to the health and administration of your servers. If you wind up in a situation where your IT experts can’t solve a problem (or are on vacation when disaster strikes), having managed services as an option—even if you don’t utilize it immediately—can be invaluable.

Another point of consideration: the self-service portal

How you access your servers makes a difference: not all data centers offer the same level of customer-centered software for controlling and managing your servers.

Some don’t offer any custom software at all—which can be fine—but a custom-designed software suite for accessing your servers is more than a luxury, it’s a powerful tool for accomplishing your business goals.

Inquire about how your servers can be accessed, what sorts of granular controls exist and how easy-to-use the software is. You should expect something built with the customer in mind—user friendly with a consistent user experience.

Colocation: still an affordable and reliable option

Colocation offers benefits and savings for organizations that have adequate IT staffing, a realistic vision of their infrastructure requirements and a clear vision for how they’ll meet their current and future goals.

At GigeNET, we’ve got more than 20 years of experience with server hosting—a very long time in this industry. We’ve got a staff of experienced industry veterans backed by three nationwide data centers designed for growth and adaptation to the coming changes in our industry.

If you’re looking for a colocation provider, shop around and utilize the questions we’ve provided here to poke & prod for the truth—but remember that long-time, stable providers are always the best bet.

If you’re ready to try GigeNET’s high-speed network and super-secure data centers, let’s discuss your needs and work toward crafting a custom colocation solution for your organization.

GlusterFS

Introduction and use cases

GlusterFS is a clustered file system designed to increase the speed, redundancy, and availability of network storage. When configured correctly with several machines, it can greatly decrease downtime due to maintenance and failures.

Gluster has a variety of use cases, with most configurations being small three-server clusters. I've personally used Gluster for VM storage in Proxmox, and as a highly available SMB file server for Windows clients.

Configurations and requirements

For the purposes of this demonstration, I’ll be using Gluster along with Proxmox. The two work very well with each other when set up in a cluster.

Before using Gluster, you'll need at least three physical servers or three virtual machines, which I recommend be placed on separate physical servers. This is the minimal configuration for highly available storage. Each server needs a minimum of two drives: one for the OS, and one for Gluster.

Gluster operates on a quorum-based system in order to maintain consistency across the cluster. In a three-server scenario, at least two of the three servers must be online to allow writes to the cluster. Two-node clusters are possible, but not recommended: with two nodes, the cluster risks a scenario known as split-brain, where the data on the two nodes diverges. This type of inconsistency can cause major issues on production storage.

For demonstration purposes, I’ll be using 3 CentOS 7 virtual machines on a single Proxmox server.

There are two ways we can go about high availability and redundancy, one of which saves more space than the other.

  1. The first way is to set up gluster to simply replicate all the data across the three nodes. This configuration provides the highest availability of the data and maintains a three-node quorum, but also uses the most amount of space.
  2. The second way is similar, but takes up ⅓ less space. This method involves making the third node in the cluster into what's called an arbiter node. The first two nodes will hold and replicate data; the third node will only hold the metadata of the data on the first two nodes. This way a three-node quorum is still maintained, but much less storage space is used. The only downside is that your data only exists on two nodes instead of three. In this demo I'll be using the latter configuration, as there are a few extra steps to configuring it correctly.

Configuration

Start by setting up three physical servers or virtual machines with CentOS 7. In my case, I set up three virtual machines with 2 CPU cores, 1GB RAM, and 20GB OS disks. Through the guide, I’ll specify what should be done on all nodes, or one specific node.


All three machines should be on the same subnet/broadcast domain. After installing CentOS 7 on all three nodes, my IP configurations are as follows:

Gluster1: 10.255.255.21
Gluster2: 10.255.255.22
Gluster3: 10.255.255.23

All Nodes:

The first thing we'll do after the install is edit the /etc/hosts file. We want to add the hostname of each node along with its IP into the file; this prevents Gluster from having issues if a DNS server isn't reachable.

My hosts file on each node is as follows:

10.255.255.21 gluster1
10.255.255.22 gluster2
10.255.255.23 gluster3

All Nodes:

After configuring the hosts file, add the secondary disks to the hosts. I added an 80GB disk to Gluster1 and Gluster2, and a 10GB disk to Gluster3, which will be the arbiter node. If the Gluster nodes are VMs, the disks can simply be added live without shutting down.

Gluster 1 & 2 drive configuration: one 80GB data disk each. Gluster 3 drive configuration: one 10GB disk for the arbiter.

After adding the disks, run lsblk to ensure they show up on each node. sda is the OS disk; sdb is the newly added storage disk. We'll want to format and mount the new storage disk for use. For this demo, I'll be using xfs for the storage drive.

fdisk /dev/sdb
n          # create a new partition
p          # make it a primary partition
<enter>    # accept the default first sector
<enter>    # accept the default last sector
w          # write the partition table and exit fdisk

mkfs.xfs /dev/sdb1

You should now see sdb1 when you run lsblk.

We'll now create a mount point and add the drive to /etc/fstab so that it mounts on boot.

My mountpoint will be named brick1; I'll explain bricks in more detail after we mount the drive.

mkdir -p /data/brick1

After creating the mountpoint directory, we'll need to pull the UUID of the drive; you can do this with blkid:

blkid /dev/sdb1

Copy down the long UUID string, then go into /etc/fstab and add a similar line:

UUID=<UUID without quotes> /data/brick1 xfs defaults 1 2
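If you'd rather not copy the UUID by hand, blkid can print just the value, letting you append the whole fstab entry in one step. A minimal sketch:

# Append the mount entry using the UUID that blkid reports for /dev/sdb1.
echo "UUID=$(blkid -s UUID -o value /dev/sdb1) /data/brick1 xfs defaults 1 2" >> /etc/fstab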

Save the file, then run mount -a

Then run df -h

You should now see /dev/sdb1 mounted on /data/brick1

Make sure you format and mount the storage drives on each of the three nodes.

Gluster volumes are made up of what are called bricks. These bricks can be treated almost like the individual drives we'd use in a RAID array.

Picture a two-server cluster hosting two replicated Gluster volumes: each server holds one brick per volume, and each brick stores a full copy of that volume's data. The volume we're creating will look similar, except the third node's brick will be an arbiter holding only metadata.

Now it's time to install and enable GlusterFS. Run the following on all three nodes:

yum install centos-release-gluster

yum install glusterfs-server -y

systemctl enable glusterd

Gluster doesn't play well with SELinux and the firewall, so we'll disable the two for now. Since connecting to services such as NFS and Gluster doesn't require authentication, the cluster should be on a secure internal network in the first place.

Open up /etc/selinux/config with a text editor and change the following:

SELINUX=enforcing

to

SELINUX=disabled

Save and exit the file, then disable the firewall service:

systemctl disable firewalld.service

At this point, reboot your nodes in order for the SELINUX config change to take effect.
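If you're setting up several nodes, the same SELinux edit can be scripted rather than made by hand; a one-line sketch to run on each node:

# Flip SELINUX=enforcing to SELINUX=disabled in the config file.
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config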

On Node Gluster1:

Now it's time to link all the Gluster nodes together. From the first node, run the following:

gluster peer probe gluster2

gluster peer probe gluster3

Remember to run the above commands using the hostnames of the other nodes, not the IPs.

Now let's check and see if the nodes have successfully connected to each other:

gluster peer status

We can see that all of our nodes are communicating without issue.

Finally, we can create the replicated Gluster volume. It's a long command, so ensure there aren't any errors:

gluster volume create stor1 replica 3 arbiter 1 gluster1:/data/brick1/stor1 gluster2:/data/brick1/stor1 gluster3:/data/brick1/stor1

  • "stor1" is the name of the replicated volume we're creating
  • "replica 3 arbiter 1" specifies that we wish to create a three-node cluster with a single arbiter node; the last node specified in the command becomes the arbiter
  • "gluster1:/data/brick1/stor1" creates the brick on the mountpoint we created earlier. I've named the bricks stor1 to reflect the name of the volume, but this isn't required.

After running the command, you'll need to start the volume. Do this from node 1:

gluster volume start stor1

Then check the status and information:

gluster volume status stor1

gluster volume info stor1

As you can see, the brick created on the third node is marked as an arbiter, and the other two nodes hold the actual data. At this point, you are ready to connect to GlusterFS from a client device.
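Proxmox isn't the only possible client, either. Any Linux machine can mount the volume over the network with the GlusterFS FUSE client; a minimal sketch on CentOS 7:

# Install the FUSE client and mount the volume using any node's hostname.
yum install -y glusterfs glusterfs-fuse
mkdir -p /mnt/stor1
mount -t glusterfs gluster1:/stor1 /mnt/stor1
df -h /mnt/stor1    # should report the replicated volume's capacity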

I’ll demonstrate this connection from Proxmox, as the two naturally work very well together.

Ensure that the Gluster node hostnames are in your Proxmox /etc/hosts file or available through a DNS server prior to starting.

Start by logging into the Proxmox web GUI, then go to Datacenter > Storage > Add > GlusterFS.


Then input a storage ID (this can be any name), the first two node hostnames, and the name of the Gluster volume to be used. You can also specify which file types you wish to store on the Gluster volume.


Don't worry that you could only add two node hostnames in the Proxmox menu; Gluster will automatically discover the rest of the nodes after it connects to one of them.

Click add, and Proxmox should automatically mount the new Gluster volume.

As we can see, we have a total of 80GB of space, which is now redundantly replicated. This newly added storage can now be used to store VM virtual hard disks, ISO images, backups, and more.

In larger application scenarios, Gluster can increase the speed and resiliency of your storage network. Physical Gluster nodes combined with enterprise SSDs and 10/40G networking can make for an extremely high-end storage cluster.

Similarly, GigeNET uses enterprise SSDs with a 40GbE private network for our storage offerings.

Managed Hosting vs Colocation Hosting

Deciding which hosting plan is right for your organization requires a good grasp of the options. In this blog, we’ll help you compare two common web hosting plans that organizations choose between: managed web hosting and colocation hosting.

Colocation hosting and managed hosting offer tremendous advantages compared to hosting your server on-site, but the details matter – and can determine exactly which strategy is most sensible for your business model.

Managed hosting services: what are they?

Managed web hosting is a form of dedicated hosting.

You'll purchase your own server, with full administrative control over the details. However, the hosting provider manages the essential physical tasks, while also implementing specialized software and handling the tedious day-to-day tasks of server management.

Managed hosting services typically include the following elements, tended to by industry experts:

  1. Server installation and setup at the data center
  2. Approved software installations, according to your specifications
  3. Security monitoring
  4. Comprehensive customer support included
  5. Software updates and management
  6. Data backup and protection

The host and the organization sign a contract called a Service Level Agreement (SLA), which dictates the terms of the provided service. SLAs detail the exact depth of service required and provide measurable metrics that the provider must meet.

Managed hosting services offer a real opportunity for small-to-medium sized organizations that lack the capital to keep and maintain their servers on-site, don’t have an appropriate IT team in place, or are time-constrained due to the demands of their business operations.

What are the cost benefits of managed services?

Managed hosting offers an immediate real-world budgetary benefit. It lowers IT overhead by outsourcing the expertise required to manage and host the organization’s infrastructure to seasoned experts at a specialized hosting company.

Finding, hiring, and paying an industry-appropriate salary to IT experts can quickly balloon into a headache-inducing project. It's time-consuming to onboard an IT department. It's difficult to retain true IT talent. Even more prohibitively, true hosting experts are in short supply – so there's a highly competitive labor market.

Lowering costs through managed hosting services offers a clear benefit: instead of building your own costly IT department, or stretching your existing IT department to its breaking point, you can create a coherent and predictable cost structure for your hosting needs. Unlike cloud services, which are billed like a utility and can introduce unpredictable economic demands, managed hosting services typically have consistent and reliable costs. This means you can plan ahead and focus on your organization’s goals – instead of micromanaging your hosting, or becoming cost-constrained by unexpected demand.

So – what is colocation hosting?

While managed hosting services allow some degree of control and accessibility, colocation hosting places complete control of every aspect of server hosting within your organization.

Colocation hosting means your server sits in the third-party hosting provider’s rack and utilizes the data center’s plethora of power and bandwidth, but is entirely managed and maintained by your IT staff.

Compared to managed hosting services, colocation:

  1. Requires your organization to set up and install the server at the data center
  2. Allows your IT team complete control of all software setup and installation according to your operating procedures
  3. Doesn’t include support except for pay-per-use remote hands in case of emergency
  4. Lets you completely control software updates and management
  5. Puts data protection and security in your hands

While colocation is similar to dedicated hosting (you own – rather than lease – the server), the difference between colocation and managed dedicated hosting is the level of control. With colocation, your IT team has unilateral control over all aspects of your server’s management and implementation.

The cost benefits of colocation web hosting

We’ve discussed the difference between colocation and managed hosting – so how does colocation keep costs lower?

Colocation cuts costs compared to on-premises deployments through:

  1. Lowering the price of power consumption
  2. Cutting the cost of owning and operating networking hardware
  3. Offering significantly more bandwidth compared to a typical business location
  4. Offering superior physical security measures compared to an on-site deployment
  5. Enabling IT departments to expand their expertise through remote-hands services

Simply put: a colocation data center offers organizations steep discounts on the intrinsic physical costs of keeping servers on-site.

Compared to a managed or dedicated hosting provider, colocation also offers cost-saving benefits:

  1. Over the long term, an equipment lease will usually cost more than purchasing the equipment
  2. When migrating from an on-premises deployment, you already have the hardware

Data centers like GigeNET’s nationwide locations offer deep discounts on power consumption, superior bandwidth availability and security against emergencies like fires, theft and damage. Data centers like ours are specifically designed with the environment that servers need by providing proper cooling, inexpensive power, fire mitigation systems and expert 24/7 oversight.

There’s an additional benefit, as well: colocation significantly reduces the risk associated with keeping your valuable servers and data on-site and merely hoping a disaster never occurs.

Colocation is ideal for an enterprise that has robust IT requirements, a competent IT team and a specific awareness of their needs. If any of these elements are missing, GigeNET highly recommends our powerful managed hosting solutions.

How to choose what’s best for your organization

Colocation and managed hosting are two effective options for organizations that demand high-quality internet infrastructure. The difference comes down to the level of granular control that’s required for your organization’s tasks and the level of expertise you have access to within your organization.

Ultimately, both colocation and managed services offer tangible time and money savings compared to keeping your server on-site. The choice comes down to evaluating whether your organization can meet the technical demands of colocation hosting, or if it needs the expertise that managed hosting offers.

At GigeNET, we’re focused on providing our partners with the resources and expertise they need so they can focus on what really matters: achieving their organization’s goals instead of babysitting their server and worrying about their infrastructure.

We’ve got more than 20 years of experience in the server hosting industry. Our data centers and managed services are designed to meet and exceed the needs of organizations of every size – big, medium or small.

Let us help you customize your hosting plan so you can get back to business.

Unsure which hosting solution is best for you? Receive a free consultation.

Shared Hosting vs Dedicated Hosting

If you're determined to spend as little as possible – just choose shared hosting and hope for the best.

But if you want stability and security, it’s time to take a serious look at dedicated servers.

The key difference? 

A dedicated server hosting plan means that your website is the only site hosted on the server, while shared hosting splits the server's disk space and bandwidth among many customers.

When choosing between shared hosting and dedicated hosting, the decision comes down to understanding what your organization requires. While there are pros and cons to both options, it’s also important to understand the differences between shared hosting and dedicated hosting to clarify this vital choice in establishing and maintaining your business.

Sites Hosted on the Server

With a shared hosting package, there are other organizations that host their sites on the server, right alongside your organization.

A dedicated hosting plan means that your organization is the only user hosted on the server.

Bandwidth & Disk Space

With shared hosting, the amount of disk space and bandwidth you are allotted is limited since there are others sharing the server. You will be charged more if you surpass your allotted amount of bandwidth, and penalized if you exceed your amount of disk space – just like a utility.

Even if you’ve fairly purchased resources, some hosts will add extra rules to penalize you for having elements like videos or music—regardless of whether you hit your bandwidth cap!

With dedicated hosting, bandwidth and disk space are dedicated entirely to your organization and its server. There’s no resource sharing, so limitations on the amount of disk space and bandwidth are up to your organization’s requirements.

Costs

With shared hosting, the server’s resources are shared among several users – so operating costs are divided up among the users. This makes shared hosting more affordable, and ideal for smaller organizations or businesses just beginning to establish their web presence.

Because a dedicated server is dedicated solely to one user, it costs more. However – there’s a benefit! With a dedicated server, you’ve got far more operational flexibility to deal with traffic spikes, customize your server or install specialized software to meet your needs.

Required Technical Skill

With shared hosting, your organization doesn’t need a staff with specialized technical skills. Maintenance, administration and security are managed by the shared hosting provider. This dramatically simplifies operating the server. The tradeoff is that it limits what your organization can do.

With your own dedicated server, your organization should anticipate needing IT & webmaster skills to set up, install, administer and manage the server’s overall health.

If that’s too daunting for your organization because of time or money constraints – but you still need the power and space of a dedicated server – fully managed dedicated hosting plans are available at a higher cost.

Fully managed dedicated hosting plans are more expensive than colocated dedicated servers. However, it’s important to understand that the cost of managed services is typically still far less than building, staffing and onboarding your own IT department.

Security

With shared hosting, the hosting company installs firewalls, server security applications and programs. Experts in security are tasked with providing a safe & stable operating environment for the organizations on shared servers.

Securing a dedicated server will be your organization's responsibility. Configuring software to detect and mitigate threats falls to your IT department, while your hosting company is only responsible for keeping your server powered and physically secure.

On a dedicated server, your IT team will be able to control the security programs you install. And since your organization is the only user, there are fewer chances of acquiring viruses, malware and spyware through bad neighbors or misconfigured security.

While it seems counterintuitive, there is actually a higher risk of attack vectors being exploited through shared hosting. As the adage goes: "Good fences make good neighbors," and your own dedicated server is the ultimate "fence."

Website & IP Blacklisting

Shared servers introduce an interesting risk vector: there’s a chance that Google and other search engines will blacklist your websites because someone else on the server engaged in illegal or discouraged practices like spamming.

Bad neighbors on a shared server can get the entire IP address blacklisted, making your websites practically invisible.

On your own dedicated server, it’s extremely unlikely that you’ll get blacklisted – unless your organization engages in unethical or illegal internet practices. We really don’t recommend that!

Server Performance and Response Time

On shared hosting, unexpected bursts of web traffic could drain the server’s limited bandwidth resources. This leads to slow response times and slow loading times, through no direct fault of your own – frustrating customers and employees alike.

You’re at the whims of someone else’s customers. If your neighbor suddenly and unexpectedly gets popular, you’re stuck in a traffic jam with nowhere to go.

This same traffic jam scenario is very unlikely on a dedicated server. Since you're not sharing resources, you can count on your server to be highly responsive, with adequate bandwidth when you need it.

Level of Control

Shared hosting means less control. The hosting company ultimately holds the keys to the kingdom, and makes choices on your behalf. While hosting companies do their best to keep things running smoothly, many organizations require more granular control over how exactly their server is utilized.

A dedicated server offers a great deal of custom options and settings. Your organization will have full control over the server. You can add your preferred programs, applications and scripts to meet your operational requirements.

Dedicated servers offer tremendous latitude to control your operational flexibility and security – which is very beneficial for many businesses with the requisite knowledge and skills.

If you’re looking for a sweet spot somewhere in the middle, managed hosting services offer the speed and flexibility of a dedicated server combined with expert management from seasoned IT veterans – the best of both worlds, at a small premium.

Make an Informed Decision

Choosing the right kind of hosting solution involves evaluating your operation’s budget, understanding the options that exist, realistically grasping your needs and comprehending what degree of control is appropriate for your organization.

No matter which type of server hosting you choose, we want you to make an informed decision. If you’re looking for help, contact our expert system architects to evaluate your organization’s requirements. We’ve helped hundreds of businesses develop a comprehensive hosting strategy to meet their needs – big, medium or small.

GigeNET has over 20 years of web hosting experience. We partner with our clients for life – some of our partnerships are older than up-and-coming hosting companies that exist today! We have a seasoned, industry-leading support staff and three data centers across the United States: Chicago, Washington D.C. and Los Angeles.

If you’re ready to explore the options and see what fits your organization, we’re ready to lead you in the best direction for your future. Partner with us and help make a better internet for everyone.

Unsure which hosting solution is best for you? Explore our hosting solutions or receive a free consultation.


Why a backup strategy matters – even though it’s usually invisible

A truly beneficial backup strategy is almost invisible: it runs quietly in the background, archiving the progress of your business without impacting performance.

This is why it’s so often left to chance: in almost every case, your business can continue to function whether you have a backup plan in place or not.

But if something goes wrong—and trust us, it’s almost an inevitability—you’ll wish you had gone ahead and invested in the silent redundancy of a good backup.

There’s a real-world, bottom-line cost to failing to secure your data through backups—98% of businesses report that even an hour of downtime could cost them $100,000.

Can you afford to throw away $100,000?

The dire consequences of data loss

What exactly makes your data valuable?

It isn’t the server that it sits on. But the things that you can easily and unexpectedly lose through a data loss event include some of your most valuable operational elements.

  1. Meticulously created content like WordPress themes, blog posts, branding imagery & PDFs
  2. Mission-critical settings for software you’ve configured to meet your specific business needs
  3. Employee records and tools like spreadsheets, handbooks, playbooks and other operation-critical documentation
  4. Contracts and legally binding documents
  5. Email contact lists and records of important correspondence with clients
  6. Sensitive and private information you’ve collected about your users, employees or business practices

It may seem like losing some of these is unlikely. After all, don’t your employees keep all of their emails on their own computers—and aren’t those stored somewhere else?

Well, what if a flustered employee deletes all of their emails? This actually happened to The Alzheimer’s Association. Google’s massive cloud wasn’t enough to protect them against human error.

Even worse—what if someone accidentally deletes the big project you’re working on? It happened to Pixar, in one of the more amazing and frightening data loss events: they deleted almost the entirety of Toy Story 2 and only managed to save it because an employee had kept a personal backup at his house.

Data loss events can also take the form of clumsiness: an employee simply misplacing a laptop with access to sensitive information has cost companies like VeriSign, the Daily Mail, Bank of America and even governmental organizations like the Department of Veterans Affairs millions of dollars. Sometimes all it takes is bad luck, an unlocked car door or an opportune thief to compromise the security of your entire organization's data. That, too, is a data loss event.

So what, exactly, should an organization look for in its backup strategy?

The solution to preventing catastrophic data loss

It’s important to build it right the first time. What do we mean by that?

Right from the beginning, you should aim to implement the best practices for creating a genuine archive of your organization’s activities, from its settings to its valuable content to its painfully compiled knowledge.

Don’t leave it to chance—seek out the advice of true backup experts and plan for a catastrophe. If it never comes, great—but if you find yourself facing a data loss event like a hard drive failure or a catastrophic natural disaster, you’ll be thrilled to discover that your backups act like a versioned history of your organization’s activities and data.

To craft your data loss prevention plan you’ll need to determine:

  1. What needs to be backed up.
    • This requires taking a holistic view of your organization. What is mission critical? What could you never do without? What software, content, websites and data do you rely on?
  2. Where you’re going to keep your backups.
    • You’ll need to determine whether you can safely keep your backups on-site or will need to enlist an off-site host. This guards against things like fires, floods, storms, power outages and employee error. The ideal combination is utilizing both, with your least sensitive data kept on site and your more valuable data in both locations.
  3. How often you should backup your data.
    • Daily backups are great, but what if your data landscape changes rapidly? There are options that allow backup increments as fine as 15 minutes or less, while some organizations may only require quarterly backups. Determine the timeframe required to secure your organization's data.

Creating a data backup plan means insulating your organization against the tremendously expensive risk of data loss. Data loss strikes organizations of any size and occurs in a variety of unpredictable and unforeseeable ways – so make a plan today, and don't wind up wishing you had heeded our advice.

If this seems like a daunting task, or you need some help figuring out exactly what sort of plan to create, we’re more than happy to help you.

Why R1Soft backups are awesome

At GigeNET, we’ve partnered with R1Soft to provide one prong for our two-pronged backup services.

R1Soft is widely considered the fastest, most scalable, yet affordable server backup software. Incremental & daily backups mean that if you lose your data on Friday, you can rewind to Thursday and resume operations.

But the benefit doesn’t end with R1Soft’s archival capabilities. They’ve pioneered block-level backups that are minimally intrusive and avoid hurting your server’s performance. Essentially, R1Soft’s backups only capture changes as they occur—rather than wastefully archiving all of your data, all of the time, even when it hasn’t changed.

You can find out a lot more about R1Soft’s awesome developments in backup technology straight from the source.

Why backups equal operational security and stability

99.99% uptime is no longer an idealistic goal; it's widely considered an industry standard. Lost time is lost money, a self-evident truism that no manager needs explained – but with data loss, the picture gets a little fuzzy.

Downtime does more than damage your bottom line. It damages your reputation. Preserving your organization’s goodwill doesn’t have an easily quantifiable dollar amount, but beware of undercounting the cost of a data loss event—there are myriad sources that estimate the value of data loss as running into the trillions each year.

Data loss isn’t just a headache or a pause in operations. It’s devastating.

What a catastrophe really costs: backups are actually just insurance

The value of a strong backup strategy is that it acts like an insurance policy. If something goes wrong, you call on your backups to restore operations to normalcy with minimal lost time and disruption.

Crafting your backup strategy is a proactive step toward securing your organization. With expanding security threats from ransomware to novel DDoS attacks—and organizations becoming increasingly dependent on their IT infrastructure just to function—keeping your backups up-to-date and ensuring that they’re functioning is as basic as keeping the doors locked when you leave.

Insure your organization’s data like you’d insure your home or your car. Create a backup strategy. At GigeNET, we’re experts in crafting customized backup strategies and have been in the hosting industry for more than 20 years.

Don’t wait until your data disaster strikes. Contact us for help with developing a comprehensive data backup plan.

Chicago Colocation

Colocation hosting is a lower-cost server hosting strategy that houses the customer’s existing server inventory in a provider’s facility, leveraging the provider’s high-speed 24/7 internet services, excellent operational security, superior availability and long-term stability.

Typically, colocation hosting improves business operations, cuts capital costs and boosts the quality of service that customers experience.

Colocation takes the client’s existing server infrastructure and moves it from the customer’s on-premises deployment to the colocation provider’s datacenter.

From an organizational standpoint, the bottom-line benefits of colocation hosting are:

  • Offloading tremendously expensive network infrastructure management costs to specialized hosting experts
  • Leveraging the colocation provider’s power contracts to drastically reduce energy costs
  • Accessing the colocation provider’s multi-million dollar network

Colocation allows organizations to focus exclusively on managing their operations and improving their products while lowering their IT budget overhead, using servers that have already been purchased.

What is colocation?

Colocation is placing an organization’s physical server in a hosting company’s data center. The organization puts their server on the provider’s rack and uses the provider’s power and network connections.

The colocation hosting company provides the following services:

  • Physical security
  • High speed internet access
  • Uninterrupted power
  • Enterprise heating and cooling
  • Physical disaster mitigation

Colocation providers have in-depth knowledge about maintenance and resource provisioning for a variety of organizational needs. This specialized knowledge allows businesses to avoid costly investments in information technology infrastructure.

Hiring IT experts, paying them commensurate salaries, and simultaneously paying for expensive network hardware and an appropriately designed hosting facility is prohibitively expensive and time-consuming for virtually every organization.

Knowledgeable, expert staff with significant experience in the hosting industry tend to data centers on a round-the-clock basis. Rather than paying a high premium to keep in-house IT staff on call and trained in the latest methods and network technology, businesses gain peace of mind and lower operating costs than on-site IT operations can offer.

For example – GigeNET’s expert engineers and service technicians typically have more than 5 years of direct data center experience – and are required to keep their knowledge and skills up-to-date with ongoing training and education.

To summarize – colocation lets organizations sidestep one of the most expensive and risky elements required of any business in today’s world: hosting their own information technology.

How does colocation work?

With colocation services, your organization purchases its own server hardware and software. The hosting provider handles the network, power and housing demands for maintaining the stringent environments that modern servers require.

Colocation providers do not actually touch the physical server unless the customer purchases direct administration, typically called remote hands.

You’re responsible for setting up and configuring your server to meet your organization’s needs, as well as managing the physical aspects (such as replacing an old server).

Organizations that utilize colocation require a fairly strong grasp of what they’ll do with their server, how they’ll do it, and what hardware and software add-ons they’ll require to accomplish their hosting goals. This is why most colocation customers are organizations that have lifted their existing IT infrastructure out of an on-premises environment and moved it into the colocation provider’s datacenter.

Since the servers are housed off-site, the organization accesses and controls them remotely via console.

Colocation offers geographic flexibility with the confidence that an expert staff has boots on the ground to manage physical issues at the data center in case of an emergency. The primary benefits are leveraging a data center’s power contracts and bandwidth capabilities. In an emergency, remote hands are available for an hourly fee.

If your server requires a large amount of bandwidth and maximum uptime and your organization can manage the installation, maintenance and software supervision internally, colocation offers huge savings and risk reduction over keeping your server on-site.

How much does colocation cost?

Colocation is priced according to:

  • The physical space that the server(s) take up in the data center
  • The degree of network connectivity required
  • How power is delivered to the cluster
  • The amount of on-site support services required (on a pay-per-use basis)

With colocated hosting, your organization pays for the amount of physical space that your server requires. For an organization with robust needs that can’t summon the capital to invest in its own high-grade server environment, colocation offers a realistic and affordable solution.
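As an illustration of how those four factors combine, here is a hedged sketch of a monthly estimate. Every rate below is an invented placeholder, not a GigeNET price, so treat it as a model of the pricing structure rather than a quote:

```python
# Hypothetical model of colocation pricing. All rates are invented
# placeholders used only to show how the cost factors combine.

def monthly_colo_estimate(rack_units: int, bandwidth_mbps: int,
                          power_amps: int,
                          remote_hands_hours: float = 0.0) -> float:
    PER_RACK_UNIT = 30.00       # physical space in the data center
    PER_MBPS = 0.50             # committed network connectivity
    PER_AMP = 15.00             # power delivery to the cluster
    REMOTE_HANDS_RATE = 100.00  # on-site support, pay-per-use
    return (rack_units * PER_RACK_UNIT
            + bandwidth_mbps * PER_MBPS
            + power_amps * PER_AMP
            + remote_hands_hours * REMOTE_HANDS_RATE)

# Example: a 2U server, a 100 Mbps commit, and a 5 A power feed.
print(f"${monthly_colo_estimate(2, 100, 5):.2f}/month")  # $185.00/month
```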

To price out your specific colocation setup, contact one of our network architects or our expert sales staff. It’s likely that we’ve designed a setup for an organization similar to yours.

GigeNET offers a wide range of flexible and customizable colocation options that can enable your organization to take significant strides at a lower cost than you might imagine. Our services begin at a mere $135 a month and are backed by an industry-leading service level agreement.

How to choose the best colocation provider

Colocation hosts like GigeNET offer options based on how much space your server needs. Organizations can lease a shared cage for their server – or, for more demanding applications, an entire cage for their collection of servers.

For applications that require an additional layer of security, organizations can invest in their own cabinet. This setup helps avoid the unlikely possibility that you’ll have “bad neighbors” who overload their power circuit or require constant physical access to their server. This also means no one else is going near your organization’s server, keeping already low risk at an absolute minimum.

In colocation hosting, thwarting downtime starts with your data center. We’ve invested heavily in our Chicago data center to create an ideal environment to protect your valuable data and infrastructure in case of an unlikely emergency.

When you’re choosing a colocation provider, ask about whether they’re proactively maintaining and updating their data center. We suggest inquiring about these crucial details:

  • Do they have staff on-site to help with remote hands, or do they rely on outside contractors to handle emergencies?
  • Do they have experience hosting businesses comparable to yours?
  • How long have they been in the colocation business?
  • Can they offer customizable solutions, or are they a one-size-fits-all colocation provider?

A note about managed services in regard to colocation

Unfortunately, colocation providers rarely offer management of customer-owned servers and systems.

Why?

Because of system complexity.

Colocation deployments are typically performed according to the organization’s standard operating procedure (SOP) – which can run completely counter to the provider’s SOP.

When a colocation provider starts adding more and more organizations to its data center, these SOPs begin to introduce a high degree of complexity to the operation of the data center. Supporting all of these varying SOPs vastly reduces support efficiency.

In the end, customers suffer.

If your organization requires more support, consider recycling or selling off old server hardware and purchasing a private cloud or dedicated servers from a Managed Hosting provider like GigeNET.

We’ve already helped hundreds of companies make the transition from colocation to fully managed services.

Since colocation requires some degree of knowledge to successfully orchestrate, many businesses without a dedicated IT department choose to move their IT infrastructure to a dedicated server solution using managed services instead of relying on colocation.

With dedicated servers, the customer leases servers from the provider’s inventory, configured jointly with the customer’s use case and the provider’s standards in mind. On top of this configuration, the customer may opt to add managed services. Although the details differ from provider to provider, managed services means that system administration of the environment becomes the responsibility of the provider. In this manner, providers not only administer the network; they also administer the server hardware, and some will administer the operating system as well.

Clients that have made this transition find much greater freedom to conduct business than they would by simply moving their servers off-site via colocation.

If colocation seems like too much responsibility – we have a solution to help your organization achieve its goals.

It’s typically much less expensive to utilize a hosting company’s managed services instead of creating an IT department from scratch to handle your colocation efforts.

Are you ready for colocation?

GigeNET has been in the colocation industry for over two decades – longer than many of our competitors have existed, much less focused on the organizational IT needs of businesses. We offer a team of experts for advice and service, and pride ourselves on being champions for our customers and advocates for a better internet.

Our prices are affordable. Our network is blisteringly fast. We want to be your hosting partner for life.

Let us show you how our colocation facility and long-standing expertise can serve your organization. We’re ready.

How To Migrate EasyApache 4 Profiles Between cPanel Servers

EasyApache is a convenient utility on cPanel servers which allows you to manage much of the important software that powers your web server. It manages your installations of Apache, PHP, and many PHP extensions and modules.

A common issue encountered while migrating websites between servers is differences in the environment on the destination server. For example, you may be migrating a website that uses the mbstring PHP extension, but that extension is not installed on the new server. So, after you migrate the website, it breaks due to the missing extension and you’re left finishing off a stressful server migration by digging around and troubleshooting all of these residual issues.

Of course, there is no way to completely eliminate this problem, but if you are running EasyApache 4, it is a simple matter to migrate your profile from one server to another. This will ensure that your target server has the same Apache and PHP packages available.

What If My Old Server Is Still Running EasyApache 3?

Although it falls outside the scope of this guide, there is a migration process from EasyApache 3 to EasyApache 4, and it is very well documented.

If you are running EasyApache 3 and plan to migrate to a server with EasyApache 4 installed, your best bet is to upgrade the old server first so that you can iron out any issues with the EasyApache 4 upgrade separately from the migration.

Let’s Migrate!

1. On the old server, convert your existing settings to a Profile.

    1. Log in to your WHM panel and navigate to Software > EasyApache 4.
    2. At the top, you will see a section labeled “Currently Installed Packages”. In this section, click “Convert to profile” to create a profile from your existing settings.
    3. Enter a name and filename for your profile that will be meaningful to you, and then click the “Convert” button.
    4. Your profile will now be created!
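If you prefer to double-check from a shell, the converted profile is also written to disk on the server. cPanel documents custom EasyApache 4 profiles as living under /etc/cpanel/ea4/profiles/custom/; the path and the “pkgs” key in the sketch below are assumptions to verify on your cPanel version:

```python
# Sanity check on the old server: list the custom EA4 profiles on disk.
# Assumes the documented custom-profile directory and a "pkgs" key in
# each profile's JSON -- verify both on your cPanel version.
import json
from pathlib import Path

PROFILE_DIR = Path("/etc/cpanel/ea4/profiles/custom")

for profile in sorted(PROFILE_DIR.glob("*.json")):
    data = json.loads(profile.read_text())
    print(f"{profile.name}: {data.get('name')} "
          f"({len(data.get('pkgs', []))} packages)")
```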

2. Download your profile from the old server.

    1. Scroll down within the EasyApache 4 interface; your new profile is most likely at the bottom. You can identify it by the name you gave it when you created it.
    2. Click the “Download” button to download a copy of the profile to your computer.
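Before moving on, it can be worth confirming that the file you just downloaded is intact. A minimal sketch for your workstation, assuming the profile JSON carries “name” and “pkgs” keys (an assumption about the format, not something this guide guarantees):

```python
# Pre-upload check: confirm the downloaded profile parses as JSON and
# actually contains a package list. The "name" and "pkgs" keys are
# assumptions about the EA4 profile format -- adjust if yours differs.
import json
import sys

path = sys.argv[1]  # e.g. the .json file you just downloaded

with open(path) as f:
    profile = json.load(f)  # raises an error if the download is corrupt

assert profile.get("pkgs"), "profile contains no package list"
print(f"{profile.get('name')}: {len(profile['pkgs'])} packages, OK to upload")
```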

3. Upload your profile to the new server.

    1. On the new server, log in to your WHM panel and navigate to Software > EasyApache 4.
    2. Toward the top, you will find a button that says “Upload a profile”. Click it to begin the upload process.
    3. Browse for the .json file on your computer that you downloaded from the old server. Its filename is the one you entered while creating the profile in step 1c above.
    4. Click the Upload button to upload this profile to your new server.

4. Provision the profile on the new server.

    1. Now that you’ve uploaded the profile, scroll down in EasyApache 4, and you should find your new profile toward the bottom. You can identify it by the name you entered in step 1c above.
    2. Click the “Provision” button to apply this profile to your new server.
    3. EasyApache 4 will go through the provisioning steps and install all of the software and modules that were copied over from the old server.

You’re done!

Now your new server should have the same PHP versions and modules available, which should greatly reduce your likelihood of encountering any migration headaches!
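If you want belt-and-suspenders verification, repeat step 1 on the new server after provisioning (convert its current settings to a profile and download it), then diff the two package lists. A sketch, again assuming the “pkgs” key and using example filenames:

```python
# Verification sketch: compare the old server's profile against one
# exported from the new server after provisioning. Filenames are
# examples; the "pkgs" key is an assumption about the profile format.
import json

def pkgs(path: str) -> set[str]:
    with open(path) as f:
        return set(json.load(f).get("pkgs", []))

missing = pkgs("old-server-profile.json") - pkgs("new-server-profile.json")
print("All packages carried over." if not missing
      else f"Missing on the new server: {sorted(missing)}")
```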

GigeNET cPanel Partnership

GigeNET Marks 15 Years of Selling cPanel by Reinvesting in Relationship

GigeNET, an innovator in resilient managed cloud and dedicated hosting, reaffirms partnership with cPanel to mark 15 years of selling cPanel licensing.

“cPanel has been so instrumental in the creation of the hosting industry I think you would be hard pressed to imagine the web host industry without it. A lot of what we do today, be it Cloud, or IaaS, or shared, has roots in how cPanel defined the industry,” said Ameen Pishdadi, Founder and President of GigeNET.

Fifteen years ago, GigeNET sold its first cPanel license. A few months later (in 2004), GigeNET registered as an official cPanel partner. Since then, it has signed on numerous shared, VPS, and cloud hosting providers, along with customers who want cPanel’s DevOps-in-a-box approach.

GigeNET recently recertified its entire support staff on both the cPanel & WHM Administration and System Administration certifications. Beyond following the cPanel certification track, the GigeNET team also created its own internal study and testing materials to keep the information fresh.

“cPanel is the industry standard for server control and defined the business model for our shared hosters,” said Joe De Paolo, SVP of Products at GigeNET. “Furthermore, it is cPanel’s tiered resale-ability and ease of control that makes system administration and devops easy for the end user.”

GigeNET released new managed services last year and finalized its cPanel Managed Services offerings. By combining cPanel’s feature set with GigeNET’s support, customers can focus on what they love instead of the system administration and developer operations of their IT environments.

“GigeNET is a valued long-time cPanel Partner and has a network that is nothing short of amazing! They have proven to provide quality servers and fantastic network protection needed to succeed in hosting,” says Eric Ellis, cPanel’s VP of Customer Experience.

To learn more about GigeNET, visit www.GigeNET.com, or to learn more about how cPanel can help your company, visit www.GigeNET.com/cPanel/.

To receive a free 30-day cPanel & WHM trial license with a dedicated server purchase, click here and order now.

About GigeNET
GigeNET is your Hosting Partner for Life. We are your trusted collaborator for offloading your IT requirements and moving your business forward. As the first company to provide complete and fully automated DDoS protection, GigeNET is the leader in Resilient Cloud and Dedicated Managed Hosting. Businesses partner with GigeNET to solve their IT infrastructure problems with custom solutions that nurture continued growth.

About cPanel
Since 1997, cPanel has been delivering the web hosting industry’s most reliable and intuitive web hosting automation software. The robust automation software helps businesses thrive and allows them to focus on more important things. Customers and partners receive first-class support and a rich feature set, making cPanel & WHM® the Hosting Platform of Choice. For more information about cPanel, visit cpanel.com.
