cPanel and WHM: The Foundation of Your Website

Your website is the beating heart of your business. Whether or not your enterprise relies on eCommerce transactions, in the digital age your website’s health and wellbeing are inextricably tied to the fate of your business. For many new prospects, your website is their first taste of your business and brand, and it’s vital that you make the perfect first impression. But before you start thinking of web design, development, copy, UI, and UX, you must consider the foundation for your website: the control panel.

Here we’ll look at the most commonly used hosting control panel, cPanel, to understand how it provides a solid foundation for your website. We will explore cPanel’s symbiotic relationship with Web Host Manager (WHM) and learn the differences between the two, as well as how they work together. The better you understand this relationship, the easier you’ll find it to manage your website and ensure robust performance.

What is cPanel?

Think of cPanel as the control panel for your website. It enables you to manage all of your website’s major functions, even when multiple accounts share one dedicated server. With cPanel, you can:

  •     Upload and manage files for your website
  •     Edit individual files and webpages
  •     Set up and manage email accounts for your domain
  •     Manage email settings and security features (e.g., spam protection)
  •     Set up and manage databases
  •     Install your choice of Content Management System using an installer like Softaculous
  •     Add and remove addon domains or subdomains
  •     Edit DNS records for your domains
  •     Check website statistics and gather analytics data
  •     Manage your backups

As you can see, cPanel gives you access to all the basic features you need to build and manage your website effectively.

What is WHM?

Web Host Manager is the backend control panel for managing a server and one or more websites managed by cPanel. WHM provides an interface for the basic system administration needs of a server, access to various metrics and logs, and control over the functions and services needed to host websites. Most notably, it compartmentalizes each website controlled by a cPanel account into its own environment. This allows WHM to be used as a reseller control panel, enabling the isolation and management of any resold accounts (albeit with somewhat restricted rights compared to dedicated servers or a VPS).

WHM allows you to:

  •     Create and manage individual cPanel accounts
  •     Create custom hosting packages built around the user’s needs
  •     Manage features of the hosting packages
  •     Manage which versions of PHP, Apache, and other vital services are available to each cPanel account
  •     Set up private nameservers and modify DNS zone records for domains and subdomains
  •     Perform maintenance on basic systems and control panels
  •     Set resource limits to protect cPanel accounts from each other
  •     Access resold accounts at will without the need to enter login details

So… What’s the difference? – A comparison

As you can see, cPanel and WHM are two sides of the same coin. Both are integral to effective website management. Think of cPanel as managing the front end and WHM as managing the back end —  you cannot have one without the other.

If you have VPS Hosting or a Dedicated server through GigeNET or another hosting provider, you can add cPanel with WHM access to your service package. If the level of management involved with WHM is a concern, we offer a variety of management plans to assist you, and our support staff are all certified in cPanel and WHM management.

Don’t be afraid to use IPv6; it’s not a whole lot different from IPv4. Let’s take a look at the IPv6 specification.

IPv6 Basics

Taking a first look at IPv6 can be overwhelming, but in reality the addressing scheme works much like IPv4’s. For example, it is possible to write the all-ones IPv4 address,, in hexadecimal as FFFF:FFFF. Conversely, the equivalent all-ones IPv6 address is FFFF:FFFF:FFFF:FFFF:FFFF:FFFF:FFFF:FFFF.

That’s 128 bits of address space versus the 32 bits in IPv4. This equates to an unprecedented number of addresses: over 250 for every observable star in the known universe. So it’s going to take a while to use all of them, unless we waste them needlessly. I can’t envision us using all this address space until we populate other planets, even if we gave every grain of sand in the world an IP address.
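Python’s standard ipaddress module makes these size claims easy to verify; this is just a quick sketch, not part of any tooling discussed here:

import ipaddress

# FFFF:FFFF in hex is the 32-bit all-ones value, i.e. in IPv4
assert int(ipaddress.IPv4Address("")) == 0xFFFFFFFF

# IPv6 is 128 bits wide: 2**128 possible addresses vs 2**32 for IPv4
print(2 ** 32)    # 4294967296
print(2 ** 128)   # 340282366920938463463374607431768211456
<test>
assert int(ipaddress.IPv4Address("")) == 2 ** 32 - 1
assert 2 ** 128 == 340282366920938463463374607431768211456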

So why is IPv6 in hexadecimal format? A quick search shows a few different answers, but to me, hex is easier to compress and easier to read than 16 decimal digits would be. For example, if a large run of the address is zeros, it can be compressed: 1234::5678:1. Granted, this can only be done one time; 1234::1::45 is invalid. Just like in IPv4, leading zeros can be omitted. However, I find it easier to write it all out:

2001:1850:1:0:104::8a is also 2001:1850:0001:0000:0104:0000:0000:008a

Which also can be 2001:1850:1:0:104:0:0:8a

This can look confusing, so writing addresses out in full (with ALL the zeros) when taking notes or preparing policies will help you understand them better.
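If you’d rather not expand addresses by hand, Python’s ipaddress module applies the compression rules for you. A small sketch using the example address above:

import ipaddress

addr = ipaddress.ip_address("2001:1850:1:0:104::8a")

# Fully written out, with ALL the zeros:
print(addr.exploded)    # 2001:1850:0001:0000:0104:0000:0000:008a

# Canonical compressed form (longest zero run becomes ::, once only):
print(addr.compressed)  # 2001:1850:1:0:104::8a

# 1234::1::45 is invalid -- a second :: raises ValueError
    print("invalid: only one :: allowed")
<test>
assert addr.exploded == "2001:1850:0001:0000:0104:0000:0000:008a"
assert addr.compressed == "2001:1850:1:0:104::8a"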

IPv6 is another address family and another protocol, meaning it has a completely separate set of routing and adjacency tables and even its own Ethernet frame type. In normal terms, IPv6 is totally independent of IPv4 and doesn’t even know IPv4 exists. On an existing IPv4 network, a new IPv6 network will be created on every device as if you were setting up a completely new network installation. Think about what this means for a server: an IPv4 default gateway will NOT work for IPv6, even though it might be the same MAC address; the IPv6 gateway has to be set separately.

Other than the addressing scheme and the hexadecimal notation, IPv6 works just like IPv4 for subnetting and routing purposes. A subnet is still a subnet; a /24 in IPv4 is simply a /120 in IPv6, with the same number of IP addresses. Under the hood, IPv6 does have some technical changes which increase routing performance, such as a much simpler header format.
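A quick way to convince yourself of the /24 vs /120 equivalence, again with Python’s ipaddress module ( and 2001:db8:: are just example prefixes):

import ipaddress

# A /24 in IPv4 and a /120 in IPv6 hold the same number of addresses
v4 = ipaddress.ip_network("")
v6 = ipaddress.ip_network("2001:db8::/120")
print(v4.num_addresses)  # 256
print(v6.num_addresses)  # 256
<test>
assert v4.num_addresses == 256
assert v6.num_addresses == 256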

But wait! You read about IPv6 and it says the smallest subnet is supposed to be a /64? This is true in practice but not in principle, just like the smallest subnet in IPv4 was a “Class C” before addressing went classless (CIDR). However, there is a reason, and an RFC to back it up, why a /64 was selected: certain features of IPv6 require a /64 at the moment and may not in the future.

Question and Answer Omnibus

So, no NAT in IPv6?

Well, while it’s entirely possible to DO address translation, there’s no need for it due to the number of addresses available. A stateful firewall is all that is needed.

How about neighbor resolution?

In IPv4 we know this as ARP. Here is a fundamental difference in the way IPv6 works versus IPv4 underneath it all. While this makes no functional difference to the upper-layer protocols (e.g., TCP, UDP, or ICMP), it does change how a device forms an adjacency with a neighbor. ARP does not exist in IPv6; instead, it’s called neighbor discovery, and it uses ICMP. Many of us are probably used to filtering ICMP by now; be careful, because ICMP plays an important role in IPv6 neighbor discovery as well as in the actual operation of the IPv6 protocol itself.

For example, fragmentation is ONLY performed by endpoints in IPv6 (the hosts talking to each other), and not by any router in between. ICMP is used to determine if packets need to be fragmented or not: this is the ICMPv6 “Type 2” (Packet Too Big) message. Neighbor discovery is entirely done with ICMP via multicast and unicast.

IPv6 does not use broadcasts! There is no ‘broadcast’ IP or ‘network’ IP address in an IPv6 subnet. The last IP is usable, unlike in IPv4.

Link local IPs?

Wait, we saw these 169.254.x.x IPs in IPv4, but they were only ever used in extremely rare instances. How are they used in IPv6? This is a major difference from IPv4! It is also an annoying difference: it changes how things operate and what filters need to be put in place. Link-local IPs are in the FE80::/10 range and are, unless otherwise specified, automatically configured by devices on their interfaces. This IP range is specified as unroutable on all routing equipment and should not be forwarded, thus the name link-local, or LAN only. This means that every IPv6 interface will have at least two IP addresses configured on it for connectivity outside of the LAN. You may have noticed a link-local IP on a server where IPv6 is enabled but no IP address has been configured yet. This is normal.
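You can check link-local membership with Python’s ipaddress module; the host portion of the address below is made up for illustration:

import ipaddress

# fe80::/10 is the link-local range; the interface ID here is invented
addr = ipaddress.ip_address("fe80::1c2a:ff:fe3b:4d5e")

print(addr in ipaddress.ip_network("fe80::/10"))  # True
print(addr.is_link_local)                         # True -- the stdlib flags it too
<test>
assert addr in ipaddress.ip_network("fe80::/10")
assert addr.is_link_local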


IPv6 QoS works exactly as in IPv4, with the exception that IPv6 adds a new flow label field to the header to help with marking flows and traffic class designation. Since this is widely unused at the moment, it isn’t worth discussing here, but it is noted as a difference anyway.

Security in IPv6?

No real difference here from IPv4. Although IPv6 has built-in support for IPsec, that can’t be counted on in all circumstances (e.g., neighbor discovery still uses ICMP, and ICMP messages still need to be sent to hosts unencrypted), and IPsec is also available for IPv4. IPv6 neighbor discovery is, to some, less secure than ARP. While it is a lot more complicated to filter, the security differences are negligible.

What does this mean for system administrators and firewall managers?

For IPv6 on a server, the main difference is neighbor resolution. Certain ICMP types (133-137) need to be allowed in the firewall so that neighbor resolution can work, and FE80::/10 should be allowed for these ICMP messages as well. You cannot simply filter everything except the server’s destination IPv6 address; the link-local range must be allowed too.

If you are wondering why IPv6 seems broken when you add it to a server, check the firewall.

Firewall admins should allow ICMPv6 types 1-4 (error messages) and 128-129 (echo request and reply) at the least to allow proper operation and ping testing.
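Here is a toy Python sketch of the allow-list described above. The type numbers are real ICMPv6 assignments, but the function itself is only an illustration, not a real firewall interface:

# ICMPv6 types a host firewall should permit, per the guidance above
ND_TYPES   = set(range(133, 138))  # 133-137: neighbor discovery
ERR_TYPES  = set(range(1, 5))      # 1-4: error messages
ECHO_TYPES = {128, 129}            # echo request / echo reply

def allow_icmpv6(icmp_type):
    return icmp_type in (ND_TYPES | ERR_TYPES | ECHO_TYPES)

print(allow_icmpv6(135))  # True  (neighbor solicitation must pass)
print(allow_icmpv6(139))  # False (not on the list)
<test>
assert allow_icmpv6(135) is True
assert allow_icmpv6(139) is False
assert allow_icmpv6(1) and allow_icmpv6(128)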

In the next blog, we will talk about DHCPv6, DHCP-PD, mobility and privacy extensions, an IPv6 header breakdown, multicasting, neighbor discovery in depth, SLAAC, SEND, and IPsec.

TL;DR Differences Between IPv6 vs IPv4:

  • 128-bit address versus 32-bit address
  • Different Ethernet frame type (0x86DD) [IPv4 is 0x0800]
  • No broadcast or network address
  • Hex instead of decimal notation
  • No ARP; IPv6 uses ICMPv6 neighbor solicitation with multicast
  • Uses link-local IP addresses (which are UNROUTABLE), auto-assigned from a hardware ID (derived from the MAC address), for neighbor discovery and autoconfiguration
  • Built-in multicasting
  • IPv6 does not require IPv4 to operate, nor does it interfere with IPv4 operation, and should be treated as such: on servers, IPv6 will have its own address, gateway, mask, etc.
  • You cannot NAT directly IPv4 to IPv6 or IPv6 to IPv4 although it can be proxied **
  • For DNS, IPv6 uses AAAA records instead of A records, and reverse DNS uses the ip6.arpa zone
  • Jumbo JUMBO JUMBO datagrams, did I mention jumbo? A 32-bit jumbo payload length allows datagrams up to 4 GiB!
  • ICMP replies from routers for MTU error responses
  • The header checksum is removed at the IP level (deemed unnecessary, but I disagree)
  • Mobility and privacy extensions
  • DHCPv6 with DHCP-PD (prefix delegation)


** There are some options for NAT (port translation) and NAT64 between IPv4/IPv6, but it isn’t a direct 1-to-1 mapping

Wikipedia also has a wonderful page on IPv6.

Google Sheets Advanced Functions

Many of us work with spreadsheets every day. It’s what allows us to deal with multiple projects at once, each with reams of data. A spreadsheet helps us tame this data — and the better the spreadsheet is laid out and designed, the more it can help us be efficient when processing this data. 

Since being promoted to Manager of our support team, I find my nose buried in a spreadsheet far more often than in a system log file. In addition, I’ve found that I can be much more productive by creating well-designed spreadsheets than I could be by turning a screwdriver or tuning some PHP parameters.

There is a basic level of skill that most of us have regarding spreadsheets, but this is just barely tapping the potential of what they can do. Surprisingly, it only takes mastering a few skills and functions to greatly up your spreadsheet game — taking you to the next level in productivity, and wowing your coworkers (which is, admittedly, the real goal here).

My examples use Google Sheets, mostly because it’s what I use daily, but also because everyone using it is using the same version. Almost all of the concepts I discuss can be done in Microsoft Excel, as well, but not only does the method sometimes differ from Google Sheets, it also differs between versions of Excel.

Top 10 Google Sheets Skills and Functions

  1. Drop-Down Lists with Data Validation
  2. Conditional Formatting
  3. Freeze
  4. Referencing Cells
  5. VLookup()
  6. Autofill
  7. Clean Presentation
  8. Unique()
  9. CountIf()
  10. IfError()

Skill: Drop-Down Lists with Data Validation

Ever wonder how to add a drop-down list to a cell? This is done through Data Validation. Typically, on any spreadsheet I make, I create a “Data” sheet (tab) to hold the various tables that are needed to enable this and other functions, without cluttering up the main workbook. 

To add a drop-down list to a cell in Google Sheets (as seen in fig. 1):

  1. Create a column in the Data tab with a list of all the options you want for the list (see fig. 2).
  2. Back on the Main tab, right-click on the cell getting the drop-down list.
  3. Select Data Validation from the bottom of the right-click menu.
    • A new window with several options will show up. Don’t be alarmed.
  4. Click on the Criteria text box, ensuring your cursor is blinking in the box (see fig. 3).
  5. Now, change tabs to the Data tab and highlight the block of list items.
    • Move the Data Validation window if it’s in your way, but don’t close it.
    • The Data Validation window will change to a “What data?” window.
    • This will display the range of the block you have chosen (see fig. 4).
      • i.e. Data!B3:B5
      • This means Data tab, from B3 through B5.
    • Click OK on the “What data?” window.
    • The range you selected will now be in the Criteria field on the Data Validation window that has returned.
  6. Click Save.

…And that’s it! It really is that simple. Once you’ve done this a couple of times, it will become second nature, and then your problem will be restraining yourself from adding too many drop-down lists.

Fig 1. Add a drop-down list to a cell in Google Sheets
Fig 2. Create a column in the Data tab
Fig 3. Click on the Criteria text box
Fig. 4 Display the range of the block you have chosen

Skill: Conditional Formatting

Conditional Formatting allows you to let the spreadsheet do some of the thinking for you. It formats data in a way that makes it easier to visually digest, helping you see trends and highlighting specific data points so they are better noticed.

For example, I typically use conditional formatting to highlight duplicate items in long, unsorted lists of parts. Many of the part names are similar, so offloading the task of identifying them to the spreadsheet not only helps, but it reduces the chance of human error as well.

There are a number of conditions you can use to configure Conditional Formatting. On the shift schedule I maintain, I use conditional formatting to automatically change a cell color to an employee’s assigned color when it detects their name in the cell. All my shifts show up as cornflower blue, Kirk’s shifts are orange and Zach’s are green.

You can format based on dates — if a person’s membership is expired, mark it red. If it’s due soon, mark it orange, etc. It’s really only limited by your imagination.

In Google Sheets, Conditional Formatting is accessed from the main menu, under Format. Select the range you want the rule to apply to, then select the rule (or use a custom formula), and finally set the format to apply if the rule’s conditions are met.

Figure 5 shows some sample sales data with three Conditional Formatting rules set up for column G. The first rule identifies cells in column G with a value over 100,000 by changing the cell color to green. Next, we identify those cells with a value of over 10,000 with the color orange. Finally, anything equal to 10,000 or less is red.

If you’re paying attention, you might be wondering why values over 100,000 are green, not orange, since these cells meet the conditions for two different rules. It works because the rules are processed in order: rules higher on the list trump rules further down. When you mouse over a rule, four horizontal dots show up on the left side and a trashcan on the right. Grab the rule by the four dots to drag it up or down, changing its position. I’m going to let you figure out what the trashcan does — I know you can do it!

In Figure 6, you can see what happens when the rules are out of order. I moved the green rule down (you can see the four dots on the left of the rule that are used to drag the rule up and down), below the orange rule. As you can see, all the previously green cells are now orange, and the green rule has been made useless simply by changing the order.
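The rule-ordering behavior can be modeled in a few lines of Python. This is a toy sketch; the thresholds mirror the sales example above:

def cell_color(value, rules):
    # First matching rule wins, just like Conditional Formatting order
    for predicate, color in rules:
        if predicate(value):
            return color
    return None  # no formatting applied

rules = [
    (lambda v: v > 100_000, "green"),
    (lambda v: v > 10_000,  "orange"),
    (lambda v: True,        "red"),
]
print(cell_color(150_000, rules))  # green -- the higher rule trumps orange
print(cell_color(50_000, rules))   # orange
print(cell_color(9_000, rules))    # red

Swap the first two rules and everything over 100,000 turns orange, exactly as figure 6 shows.
<test>
assert cell_color(150_000, rules) == "green"
assert cell_color(50_000, rules) == "orange"
assert cell_color(9_000, rules) == "red"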

Fig. 5 Proper Conditional Formatting
Fig. 6 Out of Order Conditional Formatting

Skill: Freeze

When working with large sets of data, it’s easy to get lost, especially when the data in several columns is similar. One way to fight this is to freeze the header row. By doing so, the top couple of rows that contain the header are always at the top of your view, and as you scroll down, the rows scroll by, but the header is frozen at the top.

Figure 7 shows the previous example with a frozen header row. Notice how the row numbers skip from two to twenty. No matter how far down you scroll, the header will always be visible.

You can also freeze columns to the left of the sheet, and if you’re feeling adventurous you can freeze both rows and columns.

Fig. 7 Adding a frozen header row to previous example

Skill: Referencing Cells

Most spreadsheet users are somewhat familiar with how to reference cells, called A1 Notation. In this system, a cell is referenced by its column letter, followed by its row number — “B7”, for example. A range of cells is referenced by listing the upper-left cell of the range, followed by the lower-right cell, separated by a colon — “B7:D15”, for example. This example range would be three columns wide and nine rows tall. When we want to duplicate the contents of one cell in another cell, for example, we want the contents of B7 to show up in C12, the formula for C12 would be simply “=B7” where the equal sign indicates a formula to follow, and the formula is simply the cell reference.
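As a quick illustration of A1 notation, here is a toy Python function that computes the size of a range. It assumes single-letter columns, purely for brevity:

def range_size(ref):
    # 'B7:D15' -> (columns, rows); single-letter columns assumed
    start, end = ref.split(":")
    col1, row1 = start[0], int(start[1:])
    col2, row2 = end[0], int(end[1:])
    return (ord(col2) - ord(col1) + 1, row2 - row1 + 1)

print(range_size("B7:D15"))  # (3, 9): three columns wide, nine rows tall
<test>
assert range_size("B7:D15") == (3, 9)
assert range_size("B3:B5") == (1, 3)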

Where things get a bit more complex is when we start dealing with relative and absolute cell references. Relative references are what allow the spreadsheet to change cell references when a formula is copied to another cell. For example, say E3 is the sum of B3 through D3. The formula for E3 would be:

=SUM(B3:D3)


Because we are using relative cell references, this allows you to copy the formula from E3 down to E4, E5, E6, and so on, with the formulas automatically changing in each new row to add up the elements of that row, and not the original row, row 3. Relative referencing increments the row portion of the cell reference by one when the formula is pasted one row down. It increments by 5 when pasted five rows down, and by -1 when pasted one row above the original. Without this automatic adjustment, when the formula in E3 is copied to E4, rather than add up the elements in the fourth row, it shows the same total as E3 because the formula tells it to display the sum of B3 through D3, rather than B4 through D4. This is true for moving from column to column, as well as row to row.

This feature saves immeasurable time entering formulas into spreadsheets because you can simply set up the formula for one row or column, and copy it to work without modification on your other rows or columns.

In most cases, relative referencing is what you want — but there are situations where you don’t want the reference to change when copying the formula from cell to cell. To demonstrate this, imagine we’re adding tax to subtotals to get the final totals. For this example, we’ll use a mix of relative and absolute cell references. We’ll use an absolute reference to pull the tax rate from its cell, D2, and use it to calculate the tax by multiplying the tax rate by a relative reference to the subtotal. The relative reference to the subtotal will allow us to copy the formula from row to row, using the appropriate subtotal each time, while the tax rate stays fixed. To do this, we add dollar signs to the cell reference to tell the spreadsheet not to change the value, even if a formula is pasted elsewhere. In this case, our tax line for C5 looks like:

=B5*$D$2


Notice how B5 has no dollar signs, while the D2 reference does? The first dollar sign locks down the column part of the reference, while the second locks down the row. The column lock isn’t necessary in this case, but it doesn’t hurt, either. With the column locked as well, we can use the same formula anywhere on the sheet to generate tax on the cell to its left, without losing the tax rate.

If we didn’t use an absolute reference for the tax rate, the first formula we put in would still work, but if we copied it to another row, it would try to pull the tax rate from another row as well. Say we copied the formula from C5 to C6. The tax rate would be blank because it would be referencing D3, since it would be a relative reference. Row 7 would be even worse — the tax rate would be “Total” — and I thought my tax rate was bad…

Anytime you want to lock down the row on a cell or range reference, put a dollar sign in front of the row designator. Do the same with the column designator to lock down the column reference. You’ll find that many of your errors with spreadsheets are caused by incorrectly referencing a cell.
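The way relative references shift while $-locked ones stay put can be mimicked in Python. This regex-based sketch handles single-letter columns only and is purely illustrative of what the spreadsheet does for you:

import re

def shift_formula(formula, rows_down, cols_right):
    # Rewrite each A1 reference unless its row/column is locked with '$'
    def shift(match):
        col_lock, col, row_lock, row = match.groups()
        if not col_lock:
            col = chr(ord(col) + cols_right)  # single-letter columns only
        if not row_lock:
            row = str(int(row) + rows_down)
        return f"{col_lock}{col}{row_lock}{row}"
    return re.sub(r"(\$?)([A-Z])(\$?)(\d+)", shift, formula)

# Copying the tax formula one row down: B5 becomes B6, $D$2 stays put
print(shift_formula("=B5*$D$2", 1, 0))  # =B6*$D$2
<test>
assert shift_formula("=B5*$D$2", 1, 0) == "=B6*$D$2"
assert shift_formula("=B3+C3+D3", 2, 0) == "=B5+C5+D5"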

In addition to these different ways to reference a cell, you can also reference cells on different tabs (sheets). I use a Data sheet to hide a lot of my reference tables away from sight. To reference B7 on my Data sheet from another sheet, I use “Data!B7” to reference the cell. The Data! tells the spreadsheet the sheet where the cell can be found.

Function: VLookup()

The Vertical Lookup function, VLookup() is a powerful feature that will help propel your spreadsheets to the next level. This function allows you to populate a cell based on information pulled from another table (often hidden on another tab, or even on another spreadsheet, but the latter would require the ImportRange() function, which you can investigate if you’re feeling adventurous).

Fig. 8 Final Example of VLookup(), GoogleTranslate(), and Data Validation.

This can be a bit confusing until you see it in action — so let’s start with an example, a simple translator using GoogleTranslate() (I know I haven’t explained this function yet, but trust me, it’s pretty simple), VLookup() and Data Validation. See figure 8 for the final result, which has 3 instances of the translator.

The language drop-down list is generated using Data Validation with the first column of the language list on the Data tab.

The translation cell is generated by embedding a VLookup() function within the GoogleTranslate() function.

The GoogleTranslate() function has three elements: the source text, the source language abbreviation, and the target language abbreviation, resulting in the following formula structure:

=GoogleTranslate(<source text>,<source language>,<target language>)

Fig. 9 Data Tab for Data Validation

Because GoogleTranslate() uses an abbreviation to represent languages, I copied a table from their documentation and pasted it to my Data tab (see figure 9 to see a subset of this table). This table has the full Language name in the first column and the abbreviation in the second.

The final translation formula looks like:

=GoogleTranslate(D3,"en",VLookup(F3,Data!B3:C66,2))


In this example, D3 is the source text, “Hello my friend!” The source language abbreviation is “en” for English, and VLookup() is used to convert the Language name chosen in the dropdown, F3, to its abbreviation.

The VLookup() function has four elements: the search key, the range, the index, and the optional is_sorted boolean (boolean is just a fancy word meaning it is either TRUE or FALSE). Using the VLookup() function looks like:

=VLookup(<search key>, <range>, <index>,[is_sorted])

In our example, the function is embedded within the GoogleTranslate() function, with the VLookup() portion looking like:

=VLookup(F3,Data!B3:C66,2)


Here, the full name of the language chosen in the dropdown list, cell F3, is used as the search key. In the first translator, I have “Swedish” selected as this key.

The next field is the range, Data!B3:C66. As we learned in the Referencing Cells section, Data! references the Data tab, and B3:C66 refers to columns B and C on the Data tab, from row 3 through row 66. Figure 9 shows this table of the possible languages, listed by full name in column B with the associated abbreviation in column C. Make sure not to include the table headers in the range.

The index is the column number of the result we want to return. Note that this index is in relation to the range chosen in the second element. In our example, I use “2” to reference the second column in the range, column C. What is happening in simple terms is the spreadsheet looks at our table on the Data tab and looks down the first column for an entry that matches our key, “Swedish.” Once this key is found, it looks across to the index column, the second column in the range — column C — and finds the abbreviation associated with “Swedish” — “sv” (for Svenska, Swedish for “Swedish”).  The VLookup() function then returns “sv” to the GoogleTranslate() function, which allows the translation to happen.
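In plain Python, the lookup described above boils down to something like this toy sketch (the language table here is just a three-row slice for illustration):

def vlookup(search_key, table, index):
    # Scan the first column for the key; return the index-th column (1-based)
    for row in table:
        if row[0] == search_key:
            return row[index - 1]
    return "#N/A"  # what Sheets shows when the key is missing

# A small slice of the language table from the Data tab
languages = [
    ("English", "en"),
    ("Spanish", "es"),
    ("Swedish", "sv"),
]
print(vlookup("Swedish", languages, 2))  # sv
<test>
assert vlookup("Swedish", languages, 2) == "sv"
assert vlookup("Klingon", languages, 2) == "#N/A"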

To show a more complicated example, I recently used VLookup() to help fill out a spreadsheet dealing with access to various security doors in our facility. We have four doors secured with a badge reader and five levels of access, with no access represented by a blank entry. Figure 10 shows my lookup table on the Data tab and figure 11 shows the result of several VLookup() functions at work on a lookup table that is more than just two columns.

Fig. 10 Data tab lookup table
Fig. 11 VLookup() function results

In figure 11, each of the four doors has a slight variation of the VLookup() function. They end up looking like:

Front 1 =VLookup(F4,Tables!G$4:K$8,2,TRUE)

Front 2 =VLookup(F4,Tables!G$4:K$8,3,TRUE)

DC        =VLookup(F4,Tables!G$4:K$8,4,TRUE)

Dock    =VLookup(F4,Tables!G$4:K$8,5,TRUE)

One key detail to note is the addition of dollar signs “$” to the row element of the range in the VLookup() functions. This is important because if I just copied the functions from one row to the next without them, the range would automatically increment, causing errors after a few rows. This also happens when Autofill is used to replicate the rows quickly (Autofill will be discussed next), but again this can be avoided by using dollar signs to lock down the range of the VLookup().

You’ll notice the only difference between the entries for each door is the third element, the index. This tells the spreadsheet from which column within the lookup range to retrieve the value.

Skill: Autofill

Google Sheets has an autofill feature that can be used to quickly duplicate cells, or continue sequences and patterns, saving you a lot of time that would be wasted on tedious data entry. If you haven’t been using this feature, you certainly will be once you learn how it works.

The simplest Autofill feature is duplication. Say you have a list of questions in one column and answers in the next — but you don’t have any answers yet. Rather than leaving the Answers fields blank, you want to pre-populate the field with a placeholder, “<unanswered>.” Simply fill in the first Answers cell with the text you want and click on the cell to highlight it. The cell should be framed in blue, and you should see a blue, square dot in the lower right-hand corner of the cell. Grab that dot and pull down, releasing when you’ve highlighted all the answer cells. You will see that all the cells were autofilled with a duplicate of the first cell.

You can use this to fill down, or right. You can do both, but you have to do one at a time — Fill down, let go and grab the dot again, this time with the whole row still highlighted and fill right. The reverse order works as well (right, then down).

In addition, you can start with more than one cell. Say you have a column next to the Answers column that shows Answered by whom. Enter “<unanswered>” in the top Answers cell, and “<no one>” in the top Answered by whom cell right next to it. With both cells highlighted, grab the blue dot and pull down. Both columns will be filled with the appropriate text.

Granted, the times you’ll need to use this to duplicate text are likely somewhat limited — but it becomes much more useful when you realize you can duplicate formulas, not just text or numbers. When duplicating formulas, keep in mind what we learned about absolute and relative cell references — especially the use of the dollar sign. This will lock the references so they don’t increment from row to row. Some situations will call for this, and others won’t. I find I often end up with a mix of relative and absolute references in my functions (like the door access example in the VLookup() section).

The real power of Autofill is shown by its ability to iterate sequences. Say you want to number your questions 1 through 15. Simply fill in the first two numbers, highlight both cells, and pull the blue dot down until you’ve highlighted 15 cells. When you let go, you’ll see the sequence continued into the highlighted area, leaving you with the numbers 1 through 15.

Now let’s try something a bit trickier — you want to count by twos. If you want odd numbers, leave the 1 in the first cell and replace the 2 with a 3. Highlight the first two cells again and drag down — you’ll see the numbers 1 through 15 have now been replaced with the odd numbers 1 through 29.

You don’t have to start with 1, either. Start at 23 and count upwards (or downwards), if it’s what you need. Do you double-space? Start with 1 (or another number) and select that cell and the empty cell below it. Drag down, and you’re numbering every other row (see figure 12).

Google Sheets will also detect patterns in your cell for Autofill. Start with “word1” and “word2” and you can use Autofill to further the sequence with “word3,” “word4,” etc. In my role as a support manager, I use this to populate lists of IP addresses (see figure 13) frequently.

If the spreadsheet doesn’t detect a number in the highlighted fields, in most cases it will simply repeat the sequence of highlighted fields over and over again. With a single field, this simplifies to basic duplication, but if you enter “duck,” “duck,” “duck,” and “goose” into four cells and Autofill them (see figure 14), you will see that pattern repeated.
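A toy Python model of the numeric and repeating cases makes the behavior concrete. Real Autofill detects far more patterns than this sketch does:

def autofill(seed, count):
    # Numeric seeds continue their arithmetic step; anything else repeats.
    # This is a simplification of what Sheets actually detects.
    if all(isinstance(x, (int, float)) for x in seed):
        step = seed[1] - seed[0] if len(seed) > 1 else 1
        return [seed[0] + i * step for i in range(count)]
    return [seed[i % len(seed)] for i in range(count)]

print(autofill([1, 3], 15))  # the odd numbers 1 through 29
print(autofill(["duck", "duck", "duck", "goose"], 8))  # the pattern, repeated
<test>
assert autofill([1, 3], 15) == list(range(1, 30, 2))
assert autofill(["duck", "duck", "duck", "goose"], 8) == ["duck", "duck", "duck", "goose"] * 2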

In that last example, I said “in most cases” because there are a few exceptions. Google Sheets used to be able to reference a Google Labs feature called Google Sets. Google Sets allowed you to start listing items from a set and Autofill cells with additional members of the set. You could then use a function called GoogleLookup() to pull information about the items in the set. An example I saw used chemical elements as the set (starting with Hydrogen, Helium, and Lithium). Additional columns were filled in by referencing data from Google Sets (similar to a VLookup() function). These additional columns displayed each element’s Atomic Weight, Atomic Number, and Melting Point. To be honest, I’m not sure if I’d get much use out of that feature — but it was great for showing off! Unfortunately, this feature was removed several years ago.

Why bring up an outdated feature? Well, imagining how Autofill used to work with sets will help you understand how Autofill works with dates and times. Type the name of a month of the year, highlight it, and drag down. You will see that instead of repeating that word, Autofill fills in the rest of the months, in order. If you go past 12 cells, it will start repeating. You can do the same with the days of the week (see figure 15).

Dates can be Autofilled using a variety of formats (any format recognized by the spreadsheet as a date). By default, if you start on a specific date and Autofill from that one cell, it will increment each cell by one day. If you want to increment weekly or monthly, enter the first two elements of the sequence and select those two cells to Autofill from (see figure 16).

Times Autofill in a similar way. By default, the spreadsheet will use the 24-hour format, but adding “AM” or “PM” will force it to the 12-hour format for our delicate American sensibilities. Start with “12:00 PM” in a cell, and it will Autofill times in one-hour increments, using the 12-hour format. Want to count in 15-minute increments? Fill in the second cell with “12:15 PM” and start your Autofill with the first two cells (see figure 17).

Fig. 12 Autofill Patterns Auto Numbering
Fig. 13 Autofill Patterns Auto Numbering Advanced
Fig. 14 Autofill Patterns for Duck Duck Duck Goose
Fig. 15 Autofill Patterns for days of the week
Fig. 16 Autofill Patterns for weekly
Fig. 17 Autofill Patterns for time formats

Skill: Clean Presentation

This last skill is much more general than the previous skills discussed here. It is more of a collection of ideas, any number of which you can choose to incorporate in your own spreadsheets in order to clean up the presentation.

By default, spreadsheets can be daunting blocks of raw data — and it doesn’t help that many of us have picked up some bad habits along the way. It’s amazing to see the difference a few small changes make to the look and feel of a spreadsheet.

  1. Provide a buffer.
    • The first thing I tend to do with a new, blank spreadsheet is resize the first column to the same width as the height of each row (21 pixels). I then start my spreadsheet from B2, which provides a nice buffer around my tables, keeping them from running into the edges, while not giving up too much valuable work area.
    • Size your columns so your data has breathing room, without losing valuable space.
  2. Align your data.
    • Right-justify numbers, left-justify everything else.
    • Fight the urge to always center your data.
    • This is not an absolute rule. Sometimes the presentation looks better with different justification — use your best judgment.
    • One big exception to the no-center suggestion is table titles. Merge the top row of cells above the header row into one and center the title in a large font.
  3. Choose your colors wisely.
    • Try to limit yourself to two or three colors on a page. 
    • Use muted colors, unless you’re trying to highlight something to make it more noticeable.
    • In the example below, I’ve changed the Conditional Formatting of the sales numbers to change the text color, not the cell color. I prefer more subtle indicators, but sometimes bold is what’s needed. Format accordingly.
    • Choose complementary colors — see the colors used in Format Alternating Colors for examples of muted colors that go well together.
      • Format Alternating Colors can be used to highlight the header row and put a light color on alternating rows below the header to help follow a line across the page.
    • I tend to use light greys and light blues as my go-to colors. If I need to venture beyond those colors, I choose a color and match it with a color just above or below it in the color-picker. Go up and down, not left and right — unless you’re working with greys.
  4. Avoid overuse of borders.
    • I will often limit borders to a single line separating the header row from the data — however, this is unnecessary when you use a bolder cell color to highlight the header or freeze the header row.
    • I also tend to frame the table when I have more than one small table on the same tab, often with a two or three pixel wide border.

Compare the before and after spreadsheets. It’s amazing what a few small changes can do.

Fig. 18 Before Cleaning
Fig. 19 After Cleaning

Which one would you work with? Which one would you like to present to a large audience?

Function: Unique()

The Unique() function takes a range of data and returns a list with all duplicates removed. Elements are returned in the order they are encountered, so I often embed this function within a Sort() function to return a sorted list.

I frequently use this function for inventory tracking. Say there is a list of components for 40 servers, one per row, with the F column listing each server’s CPU. If we want a list of the different types of CPUs in use by these 40 servers, our function would look something like:

=Unique(F3:F42)

In this example, F3:F42 is the range covering the list of CPUs for the 40 servers. In this case, let’s say there are only four different types of CPU in use. The first type encountered would be displayed in the cell containing the formula above, and the remaining unique CPUs would fill out the three cells below it.

If you want a sorted list, the Unique() function could be placed in a Sort() function:

=Sort(Unique(F3:F42), 1, TRUE)

To cover how Sort() works in simple cases like this: the first element is the range to sort. The second element is the column within that range to sort by — in our case, there is only one column, so we use “1.” Finally, a boolean value represents whether we want the list sorted in ascending order, or not. Sort() has a few more options to play with if you want, but those are the basics.

Function: CountIf()

The function CountIf() counts the number of cells that meet a definable condition within a range of cells. I find I often use it in conjunction with Unique() to make a table summarizing how many of each part are in use. Using the example from the Unique() description, once a list of unique CPUs is generated, I use CountIf() to count how many servers have each CPU type.

Say N5 is where we start our unique CPU list, generated from the range F3:F42. Since there are four different CPUs, these are listed in N5 through N8. I use the next column, O, to show counts of each of these CPUs. The formula for O5 would be:

=CountIf(F$3:F$42, N5)

Where the first element is the range in which to count (notice the absolute row references), and the second element is the condition that triggers a count when it is met. In our example, the range is again F3:F42 — the CPU list for the 40 servers, and the condition is N5, the first CPU on the unique list.

You can use more advanced criteria than simply matching, too. If I want to count how many values in a list are greater than 20, I’d use ">20" (including the quotation marks) as the criteria for CountIf().
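CountIf() criteria can also use wildcards. As a quick sketch (the range and the prefix here are just for illustration):

```
=CountIf(A1:A20, "E5-*")
```

would count the cells in A1:A20 whose text starts with "E5-", which is handy when part names share a common prefix.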

Function: IfError()

IfError() is a simple function that helps clean up expected errors in a spreadsheet. Expected errors often turn up when some data has yet to be entered and a function referencing the empty cell throws an error because it needs data to process, or from something as simple as dividing by zero.

Some common errors you’ll see are #DIV/0!, #VALUE!, #REF!, and #NUM!. Wrapping your function in IfError() can suppress these errors. The function:

=IfError(B2/B3, "Oops!")

will return whatever the formula in the first element (here, B2/B3) would normally return, unless that return value is an error. In most cases, it returns the result of B2 divided by B3. However, if B3 happens to be zero, the formula would return the error #DIV/0!. IfError() detects the error, suppresses it, and returns the second (optional) element — in this case, “Oops!” If you leave out the second element, any error is still suppressed, but nothing is returned — the cell is left blank.

While this is a useful tool to keep errors from mucking up your nice, neat spreadsheet, it can make troubleshooting mistakes difficult. Keep that in mind when putting a spreadsheet together, and maybe add the IfError() wrappers after you’re confident in your work.


Introduction to MySQL

MySQL Replication using Binary Log File Position, as opposed to Global Transaction Identifiers (GTID), uses binary logs, relay logs, and index files to track the progress of events between the master and slave databases. GTID can be used in conjunction with binary/relay logs; however, starting with an understanding of binary log file position is beneficial. Shown here are the steps to set up new master and slave servers, including how to record the master log position for use in the slave configuration, resulting in consistent data between the master and slave servers.

This is an overview of the MySQL Replication setup process using Binary Log File Position, presented as a simplified guide to the configuration steps that follow.

Operating system and MySQL versions

CentOS 7

MySQL 5.7

MySQL Definitions

Keywords/filenames used with MySQL Replication

  • Master – primary database server data is copied from
  • Slave – one or more database servers data is copied to
  • Binary log file – contains database updates and changes written as events
  • Relay log file – contains database events read from the master’s binary log and written by the slave I/O thread
  • Index file – contains the names of all used binary log or relay log files
  • Master log info file – contains master configuration information including user, host, password, log file, master log position. Found on slave
  • Relay log info file – contains replication status information. Found on slave
  • Global Transaction Identifiers (GTID) – alternative method for tracking replication position; does not require binary logs enabled on the slave (not used with binary log file position)

1. Setup MySQL

The latest MySQL Yum repository (MySQL 8.0) also includes previous versions of MySQL. Once the repository is added, use yum-config-manager to disable mysql80-community and enable mysql57-community, or edit /etc/yum.repos.d/mysql-community.repo directly.

  • Add MySQL Yum Repository

shell> sudo rpm -Uvh mysql80-community-release-el7-1.noarch.rpm

  • Install MySQL 5.7

shell> sudo yum-config-manager --disable mysql80-community

shell> sudo yum-config-manager --enable mysql57-community

shell> sudo yum install mysql-community-server

shell> sudo systemctl start mysqld.service

  • Reset MySQL root password

shell> sudo grep 'temporary password' /var/log/mysqld.log

shell> mysql -uroot -p

mysql> ALTER USER 'root'@'localhost' IDENTIFIED BY 'MyNewPass4!';

2. Setup Master Server

  • Add the following to the [mysqld] section of /etc/my.cnf

log-bin=mysql-bin
server-id=1

log-bin – The binary log file name, default stored in the MySQL data directory /var/lib/mysql.

server-id=1 – Unique identifier for the server. Defaults to 0 if not declared. If set to 0, connections to the slave servers will be refused.

Restart MySQL

shell> sudo systemctl restart mysqld.service

  • Create MySQL Replication User

mysql> CREATE USER 'replication'@'slave_host' IDENTIFIED BY 'password';

mysql> GRANT REPLICATION SLAVE ON *.* TO 'replication'@'slave_host';

  • Record the Master binary log position for slave configuration

mysql> SHOW MASTER STATUS;

Note the File and Position values from the output; they are used when configuring the slave in step 3.

3. Setup Slave Server

  • Add the following to the [mysqld] section of /etc/my.cnf

log-bin=mysql-bin
server-id=2

log-bin – The binary log file name, default stored in the MySQL data directory /var/lib/mysql.

server-id=2 – Unique identifier for the server. Defaults to 0 if not declared. If left at 0, the slave will refuse to connect to a master.

  • Configure the slave using the master server replication position information recorded in step 2c

mysql> CHANGE MASTER TO
   ->   MASTER_HOST='master_host_name',

   ->   MASTER_USER='replication_user_name',

   ->   MASTER_PASSWORD='replication_password',

   ->   MASTER_LOG_FILE='recorded_log_file_name',

   ->   MASTER_LOG_POS=recorded_log_position;

  • Start the Slave replication process and view status.

mysql> START SLAVE;

mysql> SHOW SLAVE STATUS\G

Following these steps, the slave server should be synced with the master log position. This can be read in the SHOW SLAVE STATUS\G output, which will be discussed in upcoming blog posts. In addition, MySQL Replication GTID setup, variable configurations, and maintenance will be topics of future posts.
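As a quick sanity check from the shell, you can filter the status output down to the fields that matter most. A sketch (the helper name is ours, and credentials are placeholders; it reads the \G-formatted status on stdin so it can be piped from the mysql client):

```shell
# repl_health: print the key replication fields from SHOW SLAVE STATUS output.
# Usage: mysql -uroot -p -e 'SHOW SLAVE STATUS\G' | repl_health
repl_health() {
  grep -E 'Slave_IO_Running|Slave_SQL_Running|Seconds_Behind_Master|Last_Error'
}
```

Slave_IO_Running and Slave_SQL_Running should both read Yes, and Seconds_Behind_Master should trend toward 0.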


Not quite ready to handle it yourself? Let us handle your server maintenance with GigeNET’s fully managed services.

Linux Server Maintenance Guide

If you’re running a Linux server and you value uptime and stability, this server maintenance guide will help keep you on track. It’s best to perform maintenance and checks on a regular basis, for good reason: it’s no fun being a sysadmin and finding out that a downtime-causing issue could have been easily prevented.


  1. Check Disk Usage: One of the most common causes of downtime is a filesystem filling up and hitting 100% used. 80% used is generally a warning, and 90% is critical. It is very important that you’ve allocated enough space for your packages, databases, site files, logs, etc. If your filesystem becomes too full, you’ll have to scramble looking for files and logs to delete before it’s too late and services start to hang. To check your filesystem usage you can use the ‘df’ command; for example, df -h will display usage in human-readable format.
  2. Check RAID Array: Checking the status of your RAID array is important. If a member disk is missing from an array it should be replaced as soon as possible. Depending on your RAID controller there will be separate utilities you can download and use. For example, Adaptec controllers use arcconf, and LSI controllers may require MegaCLI or tw_cli depending on the model. It’s best to refer to the manufacturer’s documentation for guides.
  3. Check Storage Device SMART Stats: Keeping an eye on the SMART stats of your storage devices can warn you of pre-failure. Reallocated, current pending, or uncorrectable sectors are generally cause for concern; the higher the number, the sooner you should replace the disk. Power-on hours may also be something to look for. At GigeNET we replace drives with over 40,000 power-on hours. On Linux servers you can use the ‘smartctl’ command to run tests and check the stats. More info on smartctl can be found here.
  4. Verify Backups are Working: Checking if your backups are running properly is good practice. You should also be testing restores of your backups every so often and verifying that they work as intended in a test environment.
  5. Ensure Security Patches are Applied: Patching vulnerabilities in the software that runs on your server is top priority. It’s best to subscribe to your distribution’s security announcements mailing list to be notified when you need to get patching. You can use your OS package manager such as yum or apt to install and upgrade packages.
  6. Check Remote Management: Depending on your server’s manufacturer, remote management tools like IPMI, iLO and iDRAC have proven to be useful. You should have them prepared for when you need to use them. Remote console has saved many when unable to SSH into a server.
  7. Check for Hardware Issues: Looking over syslog and something like the IPMI event log can let you know when there’s something wrong. Memory errors, overheating and power supply failures are some examples that warrant swift response. Depending on the hardware component that has gone bad the logged entry will vary.
  8. Check for Software Errors: Software error logs and syslog should be monitored regularly. Software sometimes hits configured limits and OOM killer is activated when you run out of memory. Sometimes this can slip by unnoticed. Depending on the software and configured log file output where you find those logs will vary. Most logs can be found in the /var/log directory however.
  9. Review Access: Check which users and individuals should have access to the server and modify that access as needed. A good overview of what files you should look in can be found here.
  10. Use Strong Passwords: Strong passwords, whether randomly generated or made using the ‘diceware’ method, are a must. Don’t cut your passwords short, and don’t use low-entropy combinations.
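The disk check in item 1 is easy to automate. Here is a minimal sketch (the function name and the 80% threshold are just examples) that flags any filesystem at or above a threshold; it reads 'df -P' output on stdin so the logic is easy to test:

```shell
# warn_disk THRESHOLD: print any mounted filesystem at or above THRESHOLD% used.
# Reads 'df -P' output on stdin, e.g.:  df -P | warn_disk 80
warn_disk() {
  threshold=${1:-80}
  awk -v t="$threshold" 'NR > 1 {
    sub(/%/, "", $5)                 # strip the % sign from the Use% column
    if ($5 + 0 >= t) print $6 " is at " $5 "%"
  }'
}
```

Run it from cron and mail yourself the output, and the 100%-full surprise becomes much less likely.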

Don’t have the time or resources for server maintenance? Explore our fully managed dedicated servers.


Docker is a container management platform, and it represents a rather new way of thinking about administration. It’s a tool to manage containers, and GigeNET mainly uses containers to manage processes on an individual level. Let’s take a step back to discuss what a container consists of. Containers are a method of virtualization, much like virtual machines but without the emulation overhead. They all share a common kernel but maintain their own operating system configurations. Think of them as a more complicated chroot. Knowing this, it’s easy to see how containers can be used as a perfect method to isolate system services like Traefik and maintain a simplified way to build out applications in their entirety. We demonstrated this in a previous blog about Traefik.

Docker Swarm is a core feature of Docker that brings process isolation to the mainstream by allowing process scaling and high availability through its clustering software. With Docker Swarm, you have two types of nodes: manager nodes, which manage the swarm, container provisioning, overlay networks, and various other services; and worker nodes, which are purely workhorses and simply run the containers themselves.

Now that we have a slight overview, let’s jump into configuring a Docker setup with Swarm enabled. We will be running this exercise on CentOS 7, and utilize the EPEL repository. To install Docker run the following commands:

~]# yum install epel-release -y
~]# yum install docker -y
~]# systemctl start docker
~]# systemctl enable docker

Docker should now be installed, and we can start working on initializing the Docker swarm setup. To do this we need to run the docker swarm init command on the host you want to elect as the manager. The manager still runs the containers, but remember it’s also in charge of managing the clustering services.

~]# docker swarm init --advertise-addr <manager_ip>
Swarm initialized: current node (d1vm3qz3awt7vpod7hvx79r6m) is now a manager.

To add a worker to this swarm, run the following command:

docker swarm join --token TOKENABC123 <manager_ip>:2377

The instructions given by Docker are pretty clear. It provides a token and the advertised IP of the Docker manager node. If the token ever gets lost, it can simply be recreated with the command docker swarm join-token worker. Let’s run the join command on the worker node now.

~]# docker swarm join --token TOKENABC123 <manager_ip>:2377
This node joined a swarm as a worker.

The Docker cluster should now be initialized, and you can easily view the Docker nodes’ statuses with docker node ls. Looking at the node listing output, you should notice that the manager node has a status of Leader. Since Docker can have more than a single manager for high availability, the primary manager is elected as the leader. Managers that have not been elected leader are labeled with the status Reachable.

~]# docker node ls
758sih0xq5h6hgiwav3av051x dkrblog2 Ready Active
d1vm3qz3awt7vpod7hvx79r6m * dkrblog1 Ready Active Leader
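If you script around the swarm at all, even the node listing can be checked automatically. A tiny sketch (the helper name is ours) that counts nodes reporting Ready, reading the 'docker node ls' output on stdin:

```shell
# count_ready: count swarm nodes whose status column reads "Ready".
# Usage: docker node ls | count_ready
count_ready() {
  grep -c ' Ready '
}
```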

The entire setup didn’t take a lot of effort to do by hand, but I always try to automate every task I might do more than once. Sadly, Ansible still does not have a module to manage the Docker service on this level. I built a simple Ansible role to build out a Docker swarm configuration, and it can be retrieved from our git repository here. Running the playbook, you should see something similar to the output shown; refer to the git repository for a more detailed guide. If you are new to Ansible, take a look at our cheat sheet to Ansible blog here.

~]# ansible-playbook -i hosts docker.yml

Docker Tasks and Confirmations
With the live Docker cluster deployed, let’s have some fun and run a simple example against it. We will start out by running a basic Nginx service that spawns two containers serving the default Welcome to Nginx index page. This example is basically the Hello World of the Docker universe, and I personally use it sometimes to perform a quick test of whether a Docker cluster is functioning properly.

~]# docker service create --name GigeNET_Blog --replicas 2 --publish published=80,target=80 nginx

If you are familiar with Docker you should notice we are not running a container directly with docker run, but instead creating a service with the name GigeNET_Blog. We tell the service we want two of the same containers, and we want to publish Nginx’s port 80 on the Docker host’s port 80. Without the publish argument, the service would not be reachable from the outside. If the clustered setup is configured properly you should now see this page:
Welcome to nginx
To see a more advanced usage of Docker Swarm, head over to our Traefik blog, where we show you how to build a Docker compose service that heavily utilizes the Docker swarm services. It also goes into more advanced features like network overlays, and you get to learn a little more about a leading container edge router, Traefik.

Sound like a bit too much for your workload? Learn more about how GigeNET’s sysadmins can make your life easier or  chat with our specialists.


The Shell… A cloudy mystery for the novice, but an indispensable tool for any seasoned administrator. It is the underlying foundation in any operating system, and if used to its maximum potential, can be extremely effective.

You might be wondering: why is the command line considered useful? cPanel has almost every function you’d need for hosting integrated into a web panel, and operating systems like Windows are built around a user interface as the primary form of interaction. But as sysadmins, sometimes the issues we face are much deeper than a GUI allows us to troubleshoot. Other times we simply wish to automate certain parts of our daily routine.

Basic Command Line Utilities

We’ll explore a few command line utilities that make our lives easier, along with examples of how we might script certain tasks. Automating some of our daily checks and more mundane duties frees us up to focus on the more difficult issues we might face.

One of the best features of the Linux command line is the ability to easily string commands together on a single line, sending the output of one command into the next. This comes in handy when searching for a file or when trying to alter the output of a command.

command | awk (arguments) | grep (arguments)

In the above example, the base command is piped (i.e., its output is sent) into awk, which alters the data; awk’s output is then sent to grep, which searches for a specific keyword within that already-modified data.

While this works perfectly fine, what if we deem that the new output of the linked commands is useful in our daily routine? Sure, we could type out the entire command every time, but why do that if we can make our own command out of it? Some of these piped-together commands become long and tedious to type out every time:

df -h | awk '{print substr($5, 1, length($5)-1)}' | awk 'NR>1'

Let’s put that string of linked commands into a bash script, name it something memorable, and then put that script into /usr/bin on our linux system.

We can now use our script by just typing what we named it:
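A minimal sketch of that script (the name 'usage' is just an example; call it whatever you’ll remember):

```shell
#!/bin/bash
# /usr/bin/usage: print each mounted filesystem's use percentage as a bare
# number, one per line, stripping the "%" and skipping df's header row.
df -h | awk '{print substr($5, 1, length($5)-1)}' | awk 'NR>1'
```

After chmod +x /usr/bin/usage, typing usage anywhere runs the whole pipeline.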


On the other end of the spectrum, we have Windows. Modern versions of Windows include two types of shells: the standard command prompt we all know, and Powershell.

As a quick history lesson, command prompt has been around for a long time. Its functionality is built on that of the original DOS operating system. While it serves as a decent shell, it isn’t very useful for scripting and lacks functionality compared to linux bash. In response to this, Microsoft created Powershell. As the name implies, it is a much more powerful shell, designed to be highly flexible and scriptable.

Without Powershell, Windows is limited to only basic batch scripts. Unlike Linux, Windows doesn’t have an easy way to send the output of one command into another. At most, the output of a command can be sent to a file, or multiple commands can be chained together using &&:

ipconfig > networkconfig.txt
ipconfig && echo "hi"

In order to perform more complex functions, we would have to resort to using Powershell.

Another big part of what we do is monitoring servers and their health. Most of the time, this is done by having another server act as a monitoring server. In cases where we need specific statistics, a quick script to run a few commands and email the output to us does the trick. We’re able to customize the output to exactly what we’re looking for, rather than being limited to what a pre-defined command can give us.

Having this flexibility allows us to decide whether we want to see an output such as this:

----------Storage Space----------

Filesystem Size Used Avail Use% Mounted on
udev 12G 0 12G 0% /dev
tmpfs 2.4G 240M 2.2G 10% /run
/dev/mapper/pve-root 94G 7.1G 83G 8% /
tmpfs 12G 34M 12G 1% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 12G 0 12G 0% /sys/fs/cgroup
/dev/fuse 30M 16K 30M 1% /etc/pve
gluster1:stor1 80G 33M 80G 1% /mnt/pve/GlusterStorage
tmpfs 2.4G 0 2.4G 0% /run/user/0

Or an output like this:

----------Storage Space----------

All Volumes Below 80% Utilization
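The condensed report above can be produced with a few lines of shell. A sketch (the 80% threshold, the heading, and the mail recipient are assumptions; the summarizer reads 'df -P' output on stdin so it is easy to test):

```shell
# storage_report: print the full df listing, or a one-line all-clear summary,
# depending on whether any filesystem is at or above 80% used.
storage_report() {
  awk '{ lines = lines $0 "\n" }
       NR > 1 { sub(/%/, "", $5); if ($5 + 0 >= 80) full = 1 }
       END {
         print "----------Storage Space----------\n"
         if (full) printf "%s", lines
         else      print "All Volumes Below 80% Utilization"
       }'
}
# Typical use: df -P | storage_report | mail -s "Storage report" you@example.com
```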

As sysadmins, we monitor, maintain, and configure many systems. The ability to script tasks allows us to shift our focus to more important tasks, and complete more of those tasks in less time.


Not something you’re quite ready to delve into? Learn more about how GigeNET’s sysadmins can make your life easier or  chat with our specialists.


Qubes OS is a distribution of Linux with security as its focus, and it does it very well. At its core it’s based on Fedora and utilizes the Xen hypervisor to achieve security by isolation. Qubes can take a bit to get used to and it has its quirks but generally works quite well and has been stable for years. To run the latest stable version of Qubes you’ll need to have a nice x86_64 CPU that supports Intel VT-x & VT-d or AMD-V & AMD-Vi. You’ll also want a hefty amount of RAM (I have 20GB) depending on how many VMs you’ll be running and an SSD. I’ve been using Qubes as my daily driver for years and I can say it’s the real deal when it comes to desktop security.

Usability of Qubes OS

Qubes Tutorial Screenshot

One of the most important factors in a desktop OS is its usability and looks. It has to function well and look good! By default, Qubes 4.0 runs the Xfce desktop environment, which has been one of my favorites over the years.

It’s very lightweight and customizable. You’ll be able to move around and fit the Xfce panel, change the coloring and theme to your liking, and add a few widgets while you’re at it. My desktop might be plain, but it’s how I’ve had it for years and it works best for me.

Qubes Tutorial Screenshot

One of the best parts about Qubes is the Qubes Manager: it gives you an overview of all the VMs you’ve created, along with the ability to customize everything about them. While I like to use the CLI for most actions, it’s nice to customize settings in a GUI. Here is a screenshot example of the basic options.

Along with the advanced settings.

Structure of Qubes

The structure of Qubes is similar to your typical hypervisor. Specifically, if you are familiar with Xen, you will find it easy to understand how it works. Xen has domains, which are simply virtual machines, or ‘VMs’. Dom0 is the privileged/root domain that controls the DomU, or unprivileged, domains.

Qubes security domains

In this article I am just going to call DomU domains VMs, for simplicity. Dom0 is able to access hardware directly and assign controllers to VMs as needed.

For example, in a default Qubes 4.0 setup the installer will let you set up two important VMs: sys-usb and sys-net. sys-usb holds the USB controllers of your device, so that any attached USB device is only exposed to sys-usb and not directly to Dom0. The same goes for sys-net, which holds your wifi and ethernet controllers. This is important because exploitation through these controllers would be fatal if they were attached to Dom0.

If Dom0 is compromised it’s game over. The overall concept is that anything you do besides administering the VMs and installing Dom0 updates should be done through a VM. Hence security by isolation.

Qubes VM Types

In Qubes there are a few VM types: the AdminVM (Dom0), TemplateVMs, template-based AppVMs, standalone AppVMs, standalone VMs not based on a template, and DispVMs.

  • TemplateVMs: A TemplateVM is a VM that is to be used by AppVMs as a template for themselves. For example, you can install a Fedora 28 TemplateVM and configure the software you’d like for use in your AppVMs.
  • AppVMs: By default, all of the running VMs in Qubes (besides Dom0) are template-based AppVMs. These VMs are almost entirely volatile, besides the user’s /home directory, which is bound to /rw. This is an important design concept, because if a VM is compromised you can simply shut it down and start it back up with a clean slate. (You can set other files to persist with some configuration.) Standalone AppVMs are a bit different: instead of being volatile, everything persists across a shutdown. At creation time, a standalone AppVM copies the TemplateVM and makes its own image. I generally prefer these, as they’re easier to work with and you don’t need to install all of the software you need in your TemplateVMs. They are, though, arguably not as secure as a template-based AppVM.
  • Standalone VMs: Standalone VMs not based on a template will require an installation source such as an ISO and are comparable in functionality to how you would manage a VM in VirtualBox. With these VMs you can practically run anything that will boot on top of Xen and x86_64, including Windows, just be prepared to add more RAM to your system!
  • DispVMs: DispVMs, or Disposable VMs, are machines you can spin up at will. They are based on templates, and you can open anything from an existing VM in a DispVM. They’re often used to open unsafe or questionable documents that you wouldn’t trust in your normal VMs, and the DispVMs themselves are entirely volatile: nothing is written to disk, and when you close everything inside the VM, it is shut down and gone forever. I find them most useful for opening email attachments or files, such as PDFs, that I don’t trust.

Qubes Review Summary

Overall Qubes OS is my favorite distribution of Linux because it’s really all you need. You can run Linux, BSD or Windows all on one computer and you’re able to do it securely. Stop with the dual boot hacks, just use a hypervisor! Although it’s not going to run games smoothly, who uses Linux for that anyways? I find it especially useful if run on a laptop. I can bring all of my work environments everywhere I go. There’s no need to cram everything onto one cluttered desktop with Qubes.

If you’re interested in giving Qubes a try have a look at their documentation for further details. No one explains it better than the creators themselves.

Sound like a bit too much? Chat with our experts.


Being a sysadmin is no easy feat, especially for sysadmins working for a hosting provider like GigeNET. They’re tasked with overseeing the software and networks of hundreds of our clients all over the world, working day and night to ensure our customers’ systems are up and running. Unlike many other professions, there’s no one clear path to becoming a sysadmin.

I sat down with one of our sysadmin experts, Kirk, to learn all about how to become a sysadmin, why he’s so passionate about system administration, and his advice for future sysadmins.

Q: How did you discover your love for technology? When did you know you wanted to become a sysadmin?

A: I honestly don’t remember a time when I was not interested in technology.

I started messing around with old MS-DOS computers when I was in about second grade and was playing games on my mom’s old Apple IIe long before that. It was in grade school that I started to experiment with programming in BASIC and taking an interest in learning how things worked and how to make computers work for me. I was learning on second-hand hardware and software which was old at the time, but I think that learning on older systems taught me more low-level skills and understanding.

I feel like if I had started later, with newer and more “graphical” operating systems, that I wouldn’t have had the opportunity to learn as much. I think that learning so early in life may have played a great role in making technology so second nature to me as well.

My mom has an anecdote about taking me to the library as a kid to go on the Internet and explore websites, before the Internet was very mainstream in our area in homes, but around the time that kids shows were advertising things like their “AOL keywords.” This was before I could even read. She told me that she thinks a huge motivating factor for me learning how to read early on was wanting to read things on the computer without her help.

I think this has been true for a lot of things, that my passion for technology has motivated me in other ways too.

Q: How did you become a sysadmin?

A: In my junior year of high school, I got in touch with the tech department and started to volunteer. At the time, their tech department was a one-man show. This was my first real professional experience with working in a tech department.

When I graduated and set up my plans for college, the high school hired me on part time and I continued to work for their tech department, which became my first tech job. I guess you could technically have called me a sysadmin at that time.

In one of my computer science classes in college, I met Sara, who at the time was working in a development role at GigeNET. She brought this company to my attention and put me in touch with management here. The rest, as they say, is history!

Q: You’ve been with GigeNET for a few years now. What does a typical day as a sysadmin look like?

A: It is difficult to describe a typical day; I often do not know what I’ll be walking into each day.

A large part of the responsibility of GigeNET’s sysadmins is assisting with customer requests, and that is inherently very unpredictable. Some days may be relatively quiet, and others you may unexpectedly need to migrate several servers or take on some other large project for a client.

We also take on internal projects, everything from deploying new servers and infrastructure in our datacenters to managing our inventory and testing and recycling used hardware.

I personally also handle abuse complaints, so I work with our customers regularly to help them address issues with abuse matters like phishing, compromised servers, and copyright infringement complaints.

I also handle any legal compliance documentation that we receive so I will occasionally interact with the police, FBI, and other agencies to comply with law enforcement requests.

We do a lot of varied things here! The variation is partially why I enjoy my role; it definitely doesn’t get boring.

Q: I’ve heard that our customers love working with you. If they call in or submit a support ticket, you’re always insightful and provide practical ways to make their systems better. Do you like this part of the job or would you rather stay behind the scenes?

A: I enjoy being on the front lines and interacting with customers. Of course there are occasionally some customers who are difficult to work with, but overall the customer service aspect is one of the most fulfilling parts of this role for me.

As a technical person myself, I’ve had no shortage of bad customer service experiences with tech companies. For example, I’ve been in situations at home in the past where there have been issues with my Internet service, and convincing the ISP of what I believe the problem is (and that it’s not on my end) is an uphill battle.

At those companies, I’ve usually found one or two staff members who I can eventually get through to and then who act as a huge asset to me moving forward. This is the honest truth – I once left an ISP because my contact there left his job, the service deteriorated, and I couldn’t get anyone to listen to me anymore and could never get the issues resolved.

So I strive to be the person someone else is glad to have pick up their call or ticket. I like knowing that a customer who knows me can trust that their issue will be resolved completely because they see me working on it.
I know how it feels to be on the other end of that relationship, and it can really make a difference in someone’s life. That’s why I love the customer service part of the role and find it so fulfilling.

Q: What does your workspace set up look like?

A: I keep my desk very clean and spartan; I do the same at home, actually. There’s not much on my desk at any given time besides my equipment. I always keep a laptop and tablet charger here so that I can quickly dock my mobile gear when I get to the office.

My work computer is running Linux Mint with the MATE desktop interface, and I have 3 monitors attached on a monitor tree. My monitor tree is relatively high, allowing me to open my laptop on my desk without obstructing any of my workstation screens.

Usually my workflow is along these lines: Skype, music, and a notepad on the left screen; research, browser tabs, connections to any servers I’m working on, and any currently open documents on the middle screen; and the ticket queue, which I’m always watching, on the right screen.

Having a lot of screen real-estate really makes all the difference, especially when things get busy and I have multiple tickets and multiple customer servers open at the same time.

Since I work second shift and am often by myself, I brought in some nice speakers with a subwoofer so I can really turn it up after everyone heads home for the night. Music is very important and motivating to me. The right tunes can really put me in the zone sometimes.

Q: Now, I heard you have a pretty cool computer setup at home. What’s that like?

A: My setup at home is actually very similar to my setup at work, but with a few additions.

At my desk at home, I often use two computers. I use a program called Synergy to integrate their keyboard and mouse together, so that I can control both computers as if they are a single computer.

My main desktop at home is also running Linux Mint with the MATE desktop UI. I use a monitor tree at home similar to the one I have at work, also with 3 monitors, but my monitors at home are a bit larger. I use this computer most of the time for anything that I do online while I’m at home, from research to shopping to YouTube.

Above my Linux monitor tree, I mounted a 43” 4K TV which is attached to my Windows 10 gaming PC. I don’t keep this PC on all the time as it uses a fairly large amount of power and can really heat up the room, but I turn this on when I want to play some games. I have a rather large Steam library and recently upgraded this computer’s graphics card to a 1080TI, so it’s quite capable.

A picture is worth 1000 words:

sysadmin day workstation setup

I have put a lot of time and effort into my network at home. I have installed multiple ethernet runs to most rooms in my house which go back to centralized gigabit switches. I recently installed 7 IP cameras for video surveillance around my house as well. I have 3 WiFi access points, 2 of which are mounted in the attic and 1 of which is mounted in the garage rafters. These provide pretty good wireless coverage throughout my house as well as my front and back yard.

I also have my own little mini-datacenter, a 24U APC Netshelter server rack currently containing my custom built file server and 3 HP servers I picked up on eBay. I use the file server to store all of my data, backups, media, etc. The HP servers are more for “compute” than storage, and currently all 3 of them are VMware servers running virtual machines for my various internal services and projects.

In the future, I would like to look at some fiber projects, especially for the large things like my server rack, but currently everything in my house is gigabit ethernet. Since my uplink to the Internet is 500Mbps, this is fine (for now). 😉

Q: So I take it being a sysadmin is a hobby as well as a job for you? What are your other hobbies?

A: I’m sure you got that sense from my answer to the last question as well. 😛

A lot of my hobbies do revolve around technology, but in other ways.

I’ve spent a lot of time in the past experimenting with wireless. I wouldn’t call this “sysadmin” and it’s definitely not something I do at work.

I have a lot of high-powered wireless gear and antennas, and even a 2.4GHz spectrum analyzer, left over from past projects. I always found long-range wireless fascinating, and back when I lived with my parents in the more rural countryside, where there were a lot of open fields, playing around with long-range wireless was fairly practical. Not so much now that I live in the suburbs.

This became an interest of mine because this was how we got broadband at my parents’ house for years (in fact their broadband connection is still a long range wireless connection to a tower several miles away) so I wanted to know all the ins and outs so that I could troubleshoot and understand my connection and how to care for it when it was having issues.

All of this said: I do not like wireless. It is a necessary evil sometimes, but wired will always be faster and more reliable, which is why I put so much effort into wiring my house.

Of course I am also a gamer, although often this takes a back-seat as it feels “not productive” to me so I don’t spend a lot of time gaming usually. I only game on PC and have never had much of an interest in consoles, other than the original Nintendo Entertainment System, which was the first gaming system I ever owned. I still love Super Mario Bros. 3, but I usually play it on an emulator on my PC these days.

Along with the gaming hobby, I have been streaming on Twitch for the past several years. I started out by just streaming games I was playing in order to have a social experience of sharing them with friends, but it started to tie into my next thing:

About a year ago, I picked up my first professional DJ equipment and started to learn about mixing. I streamed most of my learning experience on Twitch, and eventually started to get a following from that. I still have a lot to learn, but I really enjoy mixing music and have been doing DJ sets on Twitch every weekend as much as possible. My favorite genres to mix are the type of stuff you would hear at an EDM festival, like dubstep and trap.

Q: I love reading your blogs because you’re always discovering new technologies and writing on how to use them. How do you find these and figure out what works best?

A: A lot of this just comes down to Google and involvement in the open source community. I’ve found that for every problem there are usually a handful of projects out there trying to solve it, and then it’s just about finding the best one for my use case. That can be the tricky part as usually none of the projects are perfect, and each one has a community surrounding it who will often blindly swear that their project is the best.

Usually I’ll look for some projects that do what I’m looking for, check out some documentation and see how good that is, and then maybe install 2 or 3 different ones and try them out. This is where my VM server at home comes in handy, because I’ll usually spin up separate test environments for each thing and then trash the ones I didn’t use.

For example, when I found Syncthing, I also tried out Seafile and BitTorrent Sync (which no longer exists as an open source product). I liked Syncthing the best of those three. I also considered ownCloud at the time, as I was already using it for other things, but Syncthing seemed better suited for the large file library that I wanted synced in a less centralized way.

As with anything, my preferences are my personal preferences, and sometimes they aren’t for particularly good reasons. Sometimes I’ll choose to use one product over another because I just like the way the configuration works better, or its default options worked better for me than those of another project.

One thing that working with all of the customers at GigeNET has really shown me is that there are tons of different ways to set up your service for something.

There’s no one right way or one right software for everyone.

I try not to get too tied to one ecosystem software-wise, but it’s easy to get comfortable with one particular set of software. This is what most end users do all the time and that’s why people get so stuck on their OS (Windows, Mac, etc) and are so afraid to jump ship to something else. I’ll redo my whole setup for something if I find a product I like better.

Q: What advice would you give to someone aspiring to become a sysadmin?

A: Google is your friend, don’t be afraid to break everything, and get involved in open source projects that interest you.

I’ve found that most of the best sysadmins I’ve worked with are self-taught, even if they have a degree. I have a degree but most of the stuff I use every day is self-taught. If you are not comfortable using Google to find answers, the best advice that I can give to you is to learn that first.

Break everything – no seriously.

The best sysadmins I’ve worked with are tinkering with tons of projects at home. That usually entails setting up software you’ve never used before, and you’re going to break it a lot.

Like let’s say you followed my first piece of advice, you’re trying to set up a new service, and you’ve never set anything up like it before. So you’re following a guide that you found online. Most likely it’s not going to work. There will be something different about your setup that the guide forgot to take into account. That’s good, that’s how you learn to troubleshoot!

While optional, I would suggest getting involved in any open source communities that interest you. I’m not saying that you have to do development for them or even contribute to them in any way, but get involved.

I’ve spent a lot of hours over the years hanging out in IRC channels on networks like Freenode that cater to the open source community.

I remember when I was first learning Linux, I would hang out in IRC channels like #ubuntu on Freenode and marvel at how smart some of the people in that channel were. After a while of experimenting and hanging out in the community, an evolution happened, where I started to marvel at how inexperienced some of the people there were.

The key is to be responsible with that realization, and if you have time, help some of those people. I always try to give back at least by answering questions when I can, because someone took the time to do that for me at one point too.

There’s a lot of knowledge out there on forums and IRC channels. So use that as a resource first, and then if you feel like it, give some help back later where you can. Even when you are helping others answer the simple questions that you are now feeling comfortable answering, you’ll run into more roadblocks and you’ll learn from that experience a lot of the time. This is a fantastic way to self-teach skills that you want to have.

Even with all my experience, there are still times when I get stuck, and if I can’t find an answer on Google, I will go back to the project’s IRC channel and ask about my issue there. This has led to many fun and interesting experiences. I remember one night talking with the person who wrote the open source Nouveau driver for nVidia graphics cards on Linux. It felt like such an honor to be talking with him; he was right there in the #nouveau IRC channel on Freenode and happened to respond to a question I asked. These kinds of cool experiences have shaped me and my knowledge a lot over the years.

Basically, get out there on the Internet and learn what interests you; it’s all there.

Let GigeNET lighten your workload. 

An Introductory Guide to The InterPlanetary File System (IPFS)

I’ve always found peer-to-peer applications interesting. Central points of failure aren’t fun! Protocols like BitTorrent are widely used and well known. However, there’s something relatively new that uses BitTorrent-like technology, and it’s much more impressive.

What is IPFS?

The InterPlanetary File System (IPFS) is one that caught my eye during research. It’s essentially a peer-to-peer, distributed file system with file versioning (similar to git), deduplication, cryptographic hashes instead of file names, and much more. IPFS is very different from the traditional file systems we’ve grown to love; it could even potentially replace HTTP.

What’s amazing about IPFS is that if you share a file or site on it, the network (anyone else running IPFS) can distribute that file or site globally. This means other peers can retrieve that same file, or set of files, from anyone who has cached it. It can even retrieve those files from the closest peer, similar to a CDN with anycast routing but without any of the complexity.
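The “cryptographic hashes instead of file names” idea is easy to demonstrate with plain sha256sum. (IPFS actually wraps its hashes in self-describing CIDs, so this is only an illustration of the principle: identical content always maps to the identical address, which is what makes deduplication and retrieve-from-any-peer possible.)

```shell
# Two files with identical content...
echo "hello ipfs" > a.txt
echo "hello ipfs" > b.txt

# ...produce the same digest, so a content-addressed network stores
# and serves them as a single object, no matter who added them:
sha256sum a.txt b.txt

# A one-character change produces a completely different address:
echo "hello ipfs!" > c.txt
sha256sum c.txt
```

Because the address is derived from the content itself, any peer that hands you the right bytes can be verified on the spot; you don’t have to trust the peer, only the hash.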

This has the potential to ensure data on the web can be retrieved faster than ever before and is never lost the way it has been in the past. A famous example of such data loss is GeoCities; on IPFS, a single entity wouldn’t have the ability to shut down thousands of sites like Yahoo did.

I’m not going to get too deep into the complexity of what IPFS can do, though; there is too much to explain in this short blog post. A good breakdown of what IPFS is and can do can be found here.

How to install and begin with IPFS

Starting off, I spun up two VMs from GigeNET Cloud running Debian 9 (Stretch). One in our Chicago datacenter and another in our Los Angeles datacenter.

To get the IPFS installation rolling, we’ll go to this page and install ipfs-update, an easy tool for installing IPFS. We’re running on 64-bit Linux, so we’ll wget the proper tar.gz and extract it. Make sure you always fetch the latest version of ipfs-update!

IPFS distribution download

wget -qO- | tar xvz

Now let’s cd into the extracted directory and run the install script from our cwd (current working directory). Make sure you’re running this with sudo or root privileges.

cd ipfs-update/ && ./

Once ipfs-update is installed (it should be very quick), we’ll install IPFS itself with:

ipfs-update install latest

The output should look something like this.

ipfs root installation

Now that IPFS is installed, we need to initialize it and generate a keypair, which in turn gives you a unique identity hash. This hash is what identifies your node. Run the following command:

ipfs init

The output should look similar to this.

initializing ipfs node

With this identity hash you can now interact with the IPFS network, but first let’s get online. The following command starts the IPFS daemon and, thanks to the trailing ampersand, runs it in the background. It’s probably not advisable to run this as root or with elevated privileges; keep this in mind!

ipfs daemon &

ipfs daemon

Now that we’re connected to the IPFS swarm, we’ll try sharing a simple text file. Adding the file to IPFS generates a hash that is unique to that file and becomes its identifier. I’ll then pin the file on two servers so that it never disappears from the network as long as those servers are up. Other people running IPFS can also pin your files to help distribute them!
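The add-pin-cat round trip can be sketched as a short shell session. This assumes the go-ipfs CLI installed above with its daemon running on each node; `-Q` tells `ipfs add` to print only the resulting hash, and the guard makes the script a harmless no-op on machines without the CLI:

```shell
# Round trip: add -> pin -> cat across the IPFS swarm.
if command -v ipfs >/dev/null 2>&1; then
  # On the first node: add a file; -Q prints only the content hash
  echo "Hello IPFS" > hello.txt
  HASH=$(ipfs add -Q hello.txt)

  # Pin it so the local garbage collector never drops it
  ipfs pin add "$HASH"

  # On any other node, using the same hash: pin for resiliency,
  # then read the file back straight from the swarm
  ipfs pin add "$HASH"
  ipfs cat "$HASH"
fi
```

Pinning is what turns “someone happens to be caching this” into “this node has promised to keep serving it,” which is the difference between best-effort availability and the two-server resiliency described below.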

Adding and pinning the file on my Chicago VM.

hello ipfs

Now that we have the file’s hash from the Chicago VM, we can pin it on our VM in Los Angeles to add some resiliency.

ipfs pin add

Now to test this we’ll cat the file from the IPFS network on another node!

ipfs hello cat

That was a pretty simple test, but it gives you an idea of what IPFS can do in basic situations. The inner workings of IPFS are hard to grasp, but it is a fairly new technology with a lot of potential.
