
Cyber Security Month 2018

Our team has been laser-focused on security-related topics for National Cyber Security Awareness Month this October.  If there is any big takeaway from this exercise, it’s seeing how pervasive cyber security is. Cyber security permeates every layer of technology, every node touched, every person involved — from client, to administrator, to developer, and beyond.  

Everyone, at every level, is responsible for part of the overall security of a system.  Even the most secure systems can be brought down by a weak password, an unpatched vulnerability, or a simple oversight in design.  Ensuring that all layers of a system are secure is done by having rigorous and uncompromising standards and policies in place — and making sure they are always followed.

Our guest blog this month covers how IT teams can take every precaution to secure their networks, yet individuals still play a big role in the process by using strong passwords, avoiding phishing scams, and more.

Some will say there is no such thing as too much security, but overly restrictive security policies can have an adverse effect on the usability of a system.  Think about a complex password policy that results in passwords that are nearly impossible to memorize — then force the passwords to be changed on a daily basis.  A system like this would likely result in users tracking their current password in a variety of ways, some more secure than others (a note under their keyboard, an email to themselves, a sticky note on their monitor, etc).  The way in which people cope with the security policies can make the system less secure in the long run. Thus, in all but the most extreme cases, some level of compromise must be found.

Scott’s blog this month helps deal with this exact issue — it deals with centralizing authentication, which puts in place a single sign-on system where one account grants access to multiple systems.  This limits the number of passwords an end user has to memorize while allowing for a robust password policy to ensure passwords are strong enough not to be guessed or quickly brute-forced.  In addition, he goes deeper into securing user access to systems by managing sudo through Active Directory and implementing the System Security Services Daemon (SSSD) to manage access to remote directories and authentication mechanisms.

There’s been a lot of focus on the end user, but that is hardly the sole vector of attack our systems must be able to withstand.  At the core of any security policy is locking down your servers. This month, Zach’s blog discusses the importance of promptly installing security patches.  No password policy can help you if a hacker can bypass a password and gain root access due to an outdated package.  If you know of a vulnerability that should be patched, you can safely assume that hackers are aware of this, too.

To wrap up our National Cyber Security Awareness Month blog series, we have Kirk’s blog which is a more general piece dealing with SSH security practices.  Definitely a must-read for anyone who administers a server. This piece will go a bit further in-depth than the basics by analyzing several SSH security practices, discussing the pros and cons of the different approaches — including when it is appropriate to implement them, and when it is not.

Thank you for your interest in our security blog series for National Cyber Security Awareness Month.  This will be the first of many sets of coordinated articles from us in the months to come.  Finally [cue Mission Impossible theme music], for security reasons, this message will self-destruct in 5…4…3…2…1…

Browse all of GigeNET’s security blogs or explore our security solutions.

Linux Authentication Through Active Directory

Until recently, Linux authentication through a centralized identity service such as IPA, Samba Active Directory, or Microsoft Active Directory was overly complicated. It generally required you to manually join a server or workstation to a company’s domain through a mixture of Samba winbind tools and Kerberos (krb5) utilities. These tools are not known for their ease of use and can lead to hours of troubleshooting. When Kerberos was not applicable due to a networking limitation, an administrator had to resort to an even more complicated set of configurations with OpenLDAP. This can be frustrating to deal with and has led some to deploy custom scripts for user management. I have seen administrators utilize Puppet, Chef, and Ansible to roll out user management. At GigeNET, we are guilty of this with our Solaris systems. The bulk of our architecture is Linux based, and we now manage authentication through Microsoft Active Directory.

The complexity of joining a domain has been greatly diminished. The Linux community understood these tools were not ideal to manage and came up with a new solution: the System Security Services Daemon (SSSD). SSSD is a core project that provides a set of daemons to manage remote authentication mechanisms, user directory creation, sudo rule access, SSH integration, and more. Even so, the SSSD configuration can be quite complex on its own; each component requires you to understand the underlying utilities I brought up in the introduction. While it’s good to understand each of these components, it’s not strictly necessary, as, once again, the Linux community banded together to build a few tools that wrap around SSSD. In earlier Linux distributions the tool was called adcli; on most distributions, the tool that manages the integration process is now referred to as realmd. You can do most basic user administration with the realm command. I have added a snippet showing how easily one can join a domain:

[root@server001 ~]# realm join -v -U administrator gigenet.local.
* Resolving: _ldap._tcp.gigenet.local.
* Performing LDAP DSE lookup on: 192.168.0.10
* Performing LDAP DSE lookup on: 192.168.0.11
* Successfully discovered: gigenet.local
realm: server001 has joined the domain

As the snippet above shows, realm looks up the domain’s DNS records and attempts to perform a join. On the backend, this utilizes the net join command from winbind. The command also builds out a basic SSSD configuration file. If the join was successful, we should now be able to use any user account within our domain. I normally perform a quick SSH login with my domain username; if it succeeds, you’ll be logged in, and a home directory for the user account will have been created.
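
If you’d like to confirm the join from the shell, a quick sanity check looks like this (a sketch; jdoe is a hypothetical domain account):

[root@server001 ~]# realm list
[root@server001 ~]# id jdoe
[root@server001 ~]# getent passwd jdoe

realm list shows the joined domain and its configuration, while id and getent confirm that SSSD can resolve domain accounts through NSS.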


While researching SSSD, I didn’t find any full-blown examples of a user’s SSSD configuration file. I believe in transparency, so I have included a templated example of our internal configuration file. It’s very basic in design and works on a few hundred internal systems without complaint.  Please note we have substituted gigenet.local for our real domain in this example.

[sssd]
domains = gigenet.local
config_file_version = 2
services = nss, pam, ssh, sudo

[ssh]
debug_level = 0

[sudo]
debug_level = 0

[domain/gigenet.local]
debug_level = 0
ad_domain = gigenet.local
krb5_realm = GIGENET.LOCAL
realmd_tags = manages-system joined-with-samba
cache_credentials = True
id_provider = ad
krb5_store_password_if_offline = True
default_shell = /bin/bash
ldap_id_mapping = True
use_fully_qualified_names = False
fallback_homedir = /home/%u
access_provider = simple
simple_allow_groups = Operations
sudo_provider = ad
ldap_sudo_search_base = CN=Sudors,OU=Accounts,OU=North America,DC=gigenet,DC=local
ldap_sudo_full_refresh_interval = 800
ldap_sudo_smart_refresh_interval = 800

With basic user authentication working, I want to focus on a small feature you most likely noticed within the SSSD configuration template above: the sudo integration. Documentation on sudo integration is sparse on the internet, and what exists is often conflicting; it usually amounts to a few commands being passed around without any explanation of what they do. It took me hours to piece the information together from guides, blog posts, and Linux man pages. Hopefully, the information I have detailed below doesn’t follow this pattern. I still remember the hours of going through SSSD sudo log files line by line as if it were yesterday.

To utilize sudo we have to add the sudo schema to our Active Directory domain. This requires small modifications to the global Microsoft Active Directory schema. Before you perform the adjustments, I strongly recommend doing a full domain backup; touching the global schema tends to make some administrators very uncomfortable. Our domain is not very large and doesn’t have teams managing it like some companies do. We decided the benefits of centralized user authentication with centralized sudo configurations were worth the small adjustment. To my surprise, every guide I have found on the internet omits the location of the actual schema file. To spare you a few hours of research: the files are located within the sudo documentation directory under /usr/share. We have also uploaded this to our Git repository (https://github.com/gigenet-projects/blog-data/tree/master/sssdblog1).
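
To locate the schema file on your own system, something like the following should work (a sketch; the exact path varies with your distribution and sudo version):

[root@server001 ~]# ls /usr/share/doc/sudo*/schema.ActiveDirectory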

Let’s dive into applying the schema file. Please ensure you have the ldifde command on your Windows Active Directory domain controller. To apply the schema, you will also have to copy the schema file named “schema.ActiveDirectory” to the domain controller you’re working on. Start up a PowerShell prompt and enter the command in the snippet below. Don’t forget to substitute your own domain for the gigenet.local LDAP base.

Windows PowerShell
Copyright (C) 2016 Microsoft Corporation. All rights reserved.
PS C:\Users\Administrator\Desktop> ldifde -i -f schema.ActiveDirectory -c dc=X DC=gigenet,DC=local

With the schema applied, we should be able to build out our first rule set. Don’t fret: I’ll follow this with a series of pictures showing how to actually design a sudo rule. Before we begin, you will need to create a group or a container within your domain where the rules will be stored; this will later be referenced by the SSSD configuration file. Once the group has been created, we will build our rules with the adsiedit.msc tool. Run adsiedit.msc within PowerShell to open the tool. With the tool open, traverse your domain tree to the domain group you created to store the sudo rules. To build out our sudoRole object, start by right-clicking within the Microsoft Active Directory group. Follow the pictures as a general guideline. Our guide will be adding the ping command as a sudo rule. This sudo rule has a few configuration options that we spent many hours exploring on our end. Shall we begin?

[Screenshots: right-clicking in the sudo rules group and creating a new object in adsiedit.msc]

From the right-click menu, we select New -> Object. This brings up a second window with a hundred different types of objects; in our case, we select sudoRole and move on to the next field. This object class matches most of the options one would find in the /etc/sudoers file. This leads to a section where we name the actual sudo rule, followed by the attributes we can assign to the rule we just named.

[Screenshots: selecting the sudoRole object class and naming the rule]

The next three images basically tell the story of the sudo rule we want to create. The attributes section has dozens of options to tailor rules to your own design, but we will go through the three simple attributes you would commonly see in a sudoers configuration file. The story we will describe is one of legend: our rule will allow us to run the ping command as the root user account and without a password. In the first prompt, you will notice we specify the user account, and the second prompts for which commands we want to run. Pro tip: watch for extra whitespace, because it can lead to a few extra hours of troubleshooting. Whitespace will break your rules. In the last image, we add a secret option to run the command without a sudo password. This hidden gem took me about a day to figure out, as the internet had almost no documentation on this feature.
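
For reference, here is the same rule sketched as a raw LDIF object, which can help when comparing against what adsiedit.msc produces. The attribute names come from the sudo schema we imported earlier; the rule name and user (PingRule, jdoe) are illustrative, and sudoOption: !authenticate is the hidden gem that skips the password prompt:

dn: CN=PingRule,CN=Sudors,OU=Accounts,OU=North America,DC=gigenet,DC=local
objectClass: top
objectClass: sudoRole
cn: PingRule
sudoUser: jdoe
sudoHost: ALL
sudoRunAsUser: root
sudoCommand: /usr/bin/ping
sudoOption: !authenticate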

[Screenshots: setting the sudoUser, sudoCommand, and sudoOption attributes for the ping rule]

With the basics completed, we can now save the rule. Take a little time to explore all the additional options that can be set as attributes; it’s worth an hour of exploration. Now, to get this rule working on the actual Linux host, we have to go back into our SSSD configuration. Under the [sssd] section is a services value where we add the sudo option, and we then apply a few sudo settings under the domain section. Let me explain each configuration option as defined in the configuration above:

  • sudo_provider: The provider we utilize to pull in the sudo rules. We are utilizing the Active Directory provider in this configuration.
  • ldap_sudo_search_base: The Active Directory group where we placed the ping object. This search base pulls in every rule within that domain group or container.
  • ldap_sudo_full_refresh_interval: The interval at which SSSD looks up and pulls new rules into the live sudoer configuration. This applies updates live, so you don’t need to manually clear the SSSD cache and perform a restart.
  • ldap_sudo_smart_refresh_interval: The interval at which SSSD checks for rules added or modified since the last refresh; it appears in the template above alongside the full refresh interval.

The last configuration required to get the sudo rules working is a small adjustment to the system’s NSS configuration file. Please edit “/etc/nsswitch.conf” to include the line “sudoers: files sss”.  The output below was taken from a live system:

passwd:    files sss
shadow:    files sss
group:     files sss
sudoers:   files sss

This about wraps up our entry-level introduction to Linux authentication through Active Directory. This security practice lets you maintain centralized identity services so you don’t have to constantly push new users to a host or clean up suspended user accounts, and the passwd and group files on the Linux system stay clean. We will publish a follow-up blog in the future that goes through the painful details of adding a public SSH key to each account and storing the key within Active Directory.


Cybersecurity: Data on Vulnerabilities in Web Applications

At the speed that information travels, it’s easy to forget that the Internet is relatively young. With its potential for exponential growth, and despite all the negative foresight, we can start to see the benefits of the Internet when data is used to advance technology, and humanity as a whole.

Cybersecurity projects dedicated to the analysis, development, and research of vulnerabilities now work alongside industry leaders and corporations such as Cisco Talos, Google, and IBM with the intent of purposefully exposing design flaws. The effort to intentionally break software may appear malicious in nature, but these deliberate attacks provide transparency and promote hardening against potential threats. In practice, it’s better that the good guys find a flaw before the bad guys exploit it. Zero-day vulnerabilities are disclosed to vendors prior to public disclosure, giving developers the opportunity to implement a patch. The idea is to work together: corporations such as Google partner with free software efforts such as the GNU Project, providing a platform that open source projects can improve upon.

Using Analytical Data to Protect Users

Open source projects are largely community driven, and many are a product of member development and research contributions. The Open Web Application Security Project (OWASP) is a not-for-profit organization dedicated to web application security. By providing web app security guidance and analytical data, this open source community has a more direct effect at the server level. While larger corporations like Cisco, Google, and IBM operate on the cutting edge, OWASP has compiled a Top 10 list of security risks in web applications using data gathered in 2017.

Top Cyber Security Risks

  1. Injection: SQL, XML Parser, OS commands, SMTP headers
    Injection-type attacks increased significantly—up 37 percent in 2017 from 2016. Code injection attacks can compromise an entire system, taking full control. SQL injection breaches the database, querying the most vital component, which often houses personal information.
  2. Authentication: Brute force, Dictionary, Session Management attacks
    Weak passwords grow more susceptible to dictionary attacks as word lists continue to inflate. Refrain from setting special-character limits and maximum-length values that discourage password complexity. Successful authentications should generate random session IDs with an idle timeout.
  3. Security Misconfiguration: Unpatched flaws, default accounts, unprotected files/dirs
    Errors were at the heart of almost one in five breaches.
  4. XML External Entities: DDoS, XML uploads, URI evaluation
    CMSes that use XML-RPC, including WordPress and Drupal, are vulnerable to remote intrusion. There have been many instances of pingback attacks used to send DoS/DDoS traffic. In most cases, the XML-RPC files can be removed completely. XML processors can evaluate URIs, which can be exploited to upload malicious content.
  5. Insufficient Logging & Monitoring
    Preventing irreparable data leaks requires awareness. 68% of breaches took months or longer to discover. Logging and monitoring alerts are essential for recording irregularities.

Future of Cyber Security

Knowledge of the risks is the best defense. Preparedness for the seemingly inevitable attack is the greatest asset in a world of networks crawling with vulnerabilities. It’s no question that security starts with the individual; the majority of IT professionals agree that security-related courses should be a requirement. Vulnerabilities will occur as technology progresses, and as a community, we can see the importance of data and analytics in innovation.

Explore GigeNET’s DDoS Protection services or chat with our experts now to create a custom solution.

Turning Weak Links Into a Strong Line of Defense

Security experts can only do so much. Imagine the sophisticated systems at global banks, research facilities, and Las Vegas casinos (“Ocean’s Eleven,” anyone?) — an excess of cameras, guards, motion detectors, weight sensors, lasers, and failsafes.

But what happens if someone leaves the vault door open?

Similarly, server and network security measures can only go so far. Attackers don’t need to engineer a complex and highly technical method to infiltrate your business’s infrastructure: They just need to entice a somewhat gullible or distracted employee into clicking on a link or opening an attachment.

Whether an employee is acting intentionally or is unaware and careless, 60% of all attacks come from within. A vulnerability can be exposed by an accountant, a systems administrator, or a C-level executive, and the results can cost a company millions in downtime, lost sales, and damaged brand reputation.

IT teams can take all the modern precautions to shore up any potential vulnerabilities by following industry best practices with onsite hardware, applications, and websites. Employing a trusted hosting provider like GigeNET adds even stronger protections in the form of high-touch, individualized managed services and state-of-the-art DDoS protection.

But that may not be enough to protect your organization from well-meaning employees who fall for intricate phishing schemes or ransomware attacks. So, in the spirit of Cyber Security Month at GigeNET, here are a handful of ways businesses can turn their weak links into a strong line of defense.

Enforce Strong Passwords

This one seems like it’d be an obvious one — and relatively easy to control. But even in 2016, nearly two-thirds of data breaches involved exploiting weak, stolen, or default passwords. As the first line of defense against attacks, ensuring your employees follow stringent authentication practices is key to protecting your company’s sensitive data.

Educate employees on what constitutes a strong password and enforce the standards you implement. Passwords should be unique and lengthy combinations of upper- and lower-case letters, numbers, and symbols, and you can ban users from using easily guessed information like their first or last name, the company’s name, or even careless passwords such as ‘password’ or ‘1234.’

Once stronger password rules are in effect, require employees to update and change critical passwords periodically. You can encourage users to employ a password manager program to help them stay on top of their access rights.

Password management gets a little more complicated when there are different levels of employees who require various levels of access to certain applications and software. Regularly evaluate user permissions and make sure access is granted only to those who truly need it. Of course, proactively manage login permissions and shared passwords when employees leave the company — even if the parting is on good terms.

Educate and Test Employees on Phishing

We’re long past the days of the unjustly exiled Nigerian prince offering his family fortune to those willing to front him a little money for his escape. Email phishing is the attempt to obtain sensitive information — think usernames, passwords, credit card numbers, and other types of personal data — by sending fraudulent emails and typically impersonating reputable companies or people the intended victim knows.

Through the years, phishing attacks have become more subtle and harder to detect, even for the filters and safeguards employed by Office 365 and G Suite. Attackers will customize messages to exploit specific weaknesses in email clients and popular online platforms. Email phishing has scored some high-profile victories in recent years, enabling leaked emails from Sony Pictures and Hillary Clinton’s 2016 presidential campaign. In fact, the latter attempt even fooled the campaign’s computer help desk.

Attackers are more frequently targeting businesses and organizations instead of random individuals and often use the infiltration to start a ransomware attack. Personalized emails, shortened links, and fake login forms all serve to trick users into sharing sensitive login information or network access.

Train employees on modern phishing scams and how to spot them. Implement processes that enable employees to report possibly harmful messages, and consider deploying a service that runs phishing simulations or uses artificial intelligence or machine learning to detect spoofed senders, malicious code, or strange character sets.

Protect Against Human Error

Of course, no one is perfect. Mistakes happen, and there often isn’t a shred of malice behind an insider’s misstep. Given employees’ access to sensitive data, however, the slightest error can have disastrous results.

The threat of simple, bone-headed errors plagues businesses large and small. Even Amazon blamed an employee for inadvertently causing a major outage to Amazon Web Services in 2017. Several years earlier, an Apple software engineer mistakenly left a prototype of the highly anticipated iPhone 4 at a bar.

Whether your employees are handling important data or devices, training and awareness are critical to promoting stable and secure operations. An organization is only as strong as its weakest link, and one simple slip up can have major consequences.

Protect your organization by implementing rigorous coding standards, quality assurance checks, and backups. Take a critical look at user permissions and access to prevent employees from inadvertently making system changes or accidentally downloading or installing unauthorized software. Consider how company devices and sensitive data are handled across the organization, and prepare for worst-case scenarios.

Stay Vigilant and Rely on the Experts

Although a rare weak password or unused admin account may not pose an immediate threat to your company, any security oversight can lead to disastrous results at a moment’s notice. Act holistically when it comes to protecting your business infrastructure, devices, and data — inside and out.

GigeNET will gladly secure and monitor your systems to proactively diagnose and patch vulnerabilities before they become exploits, but comprehensive security extends beyond our server hardening, managed backups, and scalable DDoS protection service. Security is a team sport, so huddle up and let us draw up your organization’s security game plan.


The Importance of Patching Your Linux Servers

While working as a sysadmin over the years, you truly start to understand the importance of security patches. On a semi-daily basis I see compromised servers that have landed in an unfortunate situation due to lack of security patching or insecure program execution (e.g. running a program as root unnecessarily). In this blog post I’ll be focusing on the importance of patching your Linux servers.

As you may know, there have been many high severity Linux kernel and general CPU vulnerabilities these past few years. For example, the Dirty COW Linux kernel vulnerability and the CPU speculative execution vulnerabilities all require patching. If you’re not taking security patching seriously, now is the time to start. Something as simple as subscribing to your Linux distribution’s security mailing list and applying patches as needed could prevent a compromise. Most that are concerned with security have learned the hard way and have had their servers compromised. But who wants to learn the hard way? There is a lot more attention that needs to go into securing your server, but patching is the first line of defense.

Top Linux server security practices: 

  1. Subscribe to your Linux distribution’s security announcements mailing list — for example, the CentOS-announce or debian-security-announce mailing lists. These will notify you when packages are updated that contain security patches, and they’ll also go over which vulnerabilities the patch covers.
  2. Read security-related news! It’s important to keep up with the latest news on security topics. I’ve discovered the need to patch software many times just by reading the news.
  3. Check whether you actually need the patch, and how it applies to your environment. It’s best not to blindly patch everything in the name of security; the vulnerability may not affect you in any way. I see this a lot with Linux kernel vulnerability patches. There are generally a lot of them, but most are not severe, and skipping an irrelevant one can save you yet another reboot.
  4. If you delay patches due to worries about downtime, implement redundancy into what you’re doing. It’s important that critical vulnerabilities get patched, but it’s also important that your production server remains up and accessible. The best option, even if difficult, is to build a redundant, highly available way of doing the things you do, so patching doesn’t require downtime (see the sketch after this list).
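
As a minimal sketch of what applying security patches looks like in practice (flags vary by distribution, and on CentOS the security metadata can be incomplete, so a plain update is often the pragmatic default):

# RHEL/CentOS: list and apply security errata
yum updateinfo list security
yum update --security

# Debian/Ubuntu: apply pending updates; the unattended-upgrades
# package can automate security patches
apt-get update && apt-get upgrade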

Patching is probably the easiest part of maintaining a secure environment. So there’s no excuse to neglect your system! It also prevents a headache for your future self.

How can GigeNET keep your business secure? Chat with our experts now.

Top SSH Security Best Practices

SSH is a common system administration utility for Linux servers.  Whether you’re running CentOS, Debian, Ubuntu, or anything in between, if you’ve logged into a Linux server before, you have likely at least heard of it.

The acronym SSH stands for “Secure Shell”, and as the name implies, the protocol is built with security in mind.  Many server administrators assume that SSH is pretty secure out of the box, and for the most part, they’d be correct. SSH has fantastic security features by default, like encryption of the communication to prevent man-in-the-middle attacks, and host key verification to alert the user if the identity of the server has changed since they last logged in.

Still, there are a large number of servers on the Internet running SSH, and attackers like to find attack vectors that could potentially affect many of them.  With security, convenience tends to be sacrificed, so many server administrators intentionally, or without much thought, leave their servers running default SSH installations.  For most of them this isn’t an issue, but there are some steps you can take to be ahead of the curve. After all, I believe that being a little bit ahead of the curve is one of the best security practices to aim for; that way, your server avoids being one of the lower-hanging fruit that tempt attackers.

With that in mind, here are some techniques that you may want to consider for your Linux server to help improve your SSH security.

Brute Force Protection

One of the most common techniques for improving SSH security is brute force protection.  This is because one of the most common security concerns faced by server administrators running SSH services is brute force attacks from automated bots.  Bots will try to guess usernames and passwords on the server, but brute force protection can automatically ban their IP address after a set number of failures.

A few common open source brute force protection solutions are ConfigServer Firewall (CSF) and Fail2Ban.  CSF is most common on cPanel servers, since it has a WHM plugin.
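
As an example, a minimal Fail2Ban jail for SSH might look like this (a sketch assuming Fail2Ban’s stock sshd filter; tune the thresholds to your needs):

# /etc/fail2ban/jail.local
# Ban an IP for 1 hour after 5 failed logins within 10 minutes
[sshd]
enabled = true
port = ssh
maxretry = 5
findtime = 600
bantime = 3600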

Pros and Cons of Brute Force Protection

Pros

  • Will help to cut down on failed logins from bots by automatically banning them, making it much less likely that a bot will have the opportunity to guess the login details for one of your SSH accounts.
  • Very easy to implement with no changes to the SSH configuration required.

Cons

  • These brute force programs have no way to tell bots apart from you and your users.  If you fail to log in too many times by accident, you could lock yourself out. Make sure that you have a reliable means to get onto the server if this happens, such as whitelisting your own IP address and having a KVM or IPMI console available as a last-resort measure.

Changing The SSH Port Number

One of the most common techniques that I see is changing the SSH port number to something other than the default port, 22/tcp.  

This change is relatively simple to make. For example, if you wanted to change your SSH port from 22 to 2222, you would simply update the Port line of your sshd_config file like so:

Port 2222

By the way, port 2222 is a pretty common “alternate” port, so some of the brute force bots may still try it.  It would be better to choose something more random, like 2452. It doesn’t even have to contain a 2; your SSH port could be 6543 if you wanted it to be.  Any port number up to 65535 that is not used by another program on the server is fair game.
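
After editing sshd_config, the change takes effect when sshd restarts. On a CentOS/RHEL system with SELinux and firewalld, you’d also need to label and open the new port first (a sketch; adjust to your distribution):

# Tell SELinux that sshd may listen on 2222, open the firewall, then restart
semanage port -a -t ssh_port_t -p tcp 2222
firewall-cmd --permanent --add-port=2222/tcp && firewall-cmd --reload
systemctl restart sshd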

Pros and Cons of Changing The SSH Port Number

Pros

  • This technique is usually pretty effective at cutting down automated bot attacks.  Most of these are unintelligent scripts and will only be looking for servers running on port 22.

Cons

  • This technique amounts to “security by obscurity”.  A bot that is trying alternate ports, or any human equipped with a port scanning tool like nmap will have no problem finding your server’s new port in just a few minutes.
  • This technique can make the SSH server a bit more inconvenient to access, as you will now need to specify the port number when connecting instead of just the IP.

Disabling Root Login via SSH

Another common technique is to disable the root user account from logging in via SSH altogether, or to allow it only with an authorized SSH key.  You can still gain root access by granting “sudo” privileges to one of your limited users, or by using the “su” command to switch to the root account with a password.

This can be configured by adjusting the “PermitRootLogin” setting in your sshd_config file.

To allow root login with SSH key only, you would change the line to:

PermitRootLogin without-password

To completely disallow root login via SSH, you would change the line to:

PermitRootLogin no

Pros and Cons of Disabling Root Login via SSH

Pros

  • This technique is somewhat helpful, since the username “root” is common to most Linux servers (like “Administrator” on Windows servers), so it is easy to guess.  Disabling this account from logging in means that an attacker must also guess a username correctly to gain access.
  • If you are not using sudo, this technique puts root access behind a second password, requiring an attacker to know or guess two passwords correctly before having full access to the server.  (Sudo can diminish this benefit somewhat as usually it is configured to allow root access with the same password that the user used to login.)

Cons

  • This method may increase your risk of getting locked out of the server, in the event that something goes wrong with your sudo configuration.  It is still a good idea in this method to have an alternate way to access the server if you become locked out of root, such as a remote console.

Disabling Password Authentication in Favor of Key Authentication

The first thing that everyone tells you about passwords is to make them long, difficult to guess, and not based on dictionary words.  An SSH key can replace password authentication with authentication by a key file.

SSH keys are very secure compared to a password, as they contain a large amount of random data.  If you have ever seen an SSL certificate or key file, an SSH key looks similar: a very large string of seemingly random characters.

Instead of typing a password to login to the SSH server, you will authenticate using this key file, in much the same way that SSL certificates on websites work.
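
Setting up key authentication is quick. Here is a minimal sketch using a reasonably recent OpenSSH (the key type, comment, and server address are illustrative):

# On your workstation: generate a key pair (a passphrase is recommended)
ssh-keygen -t ed25519 -C "you@example.com"

# Install the public key into the server account's authorized_keys
ssh-copy-id -i ~/.ssh/id_ed25519.pub user@203.0.113.5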

If you would like to disable password authentication, you can do so by modifying the “PasswordAuthentication” setting in the sshd_config file, like so:

PasswordAuthentication no

Pros and Cons of Disabling Password Authentication in Favor of Key Authentication

Pros

  • This method strongly decreases the likelihood that a brute force attempt against your SSH server will be successful.
    • Most brute force bots only try passwords to begin with; they will be using the completely wrong authentication method to try to break in, so those bots will never succeed.
    • Even if someone was doing a targeted attack, SSH keys are so much longer than passwords that guessing one correctly is orders of magnitude harder, simply because there’s so much entropy and potential combinations.

Cons

  • This technique can make it less convenient to access the server.  If you don’t have the key file handy, you won’t be able to SSH in.
  • Due to the above, you are also increasing risk of getting locked out of SSH, for example if you lose the key file.  So, it’s a good idea to have an alternative way to access the server if you need to let yourself back in, like a remote console.

In the event that someone gets ahold of your key file, just like a password, they will now be able to login as you.  But, unlike passwords, keys can be easily expired and new keys created, and the new key will operate the same way.

Another interesting quirk about the SSH keys method is you can authorize multiple SSH keys on a single account, whereas an account can typically only have one password.

It’s worth noting that you can use SSH keys to access accounts even if password authentication is turned on.  By default, SSH keys will work as an authentication method if you authorize a key.

Allow Whitelisted IPs Only

A very effective security technique is only allowing whitelisted IP addresses to connect to the SSH server.  This can be accomplished through firewall rules, only opening the SSH port to authorized IP addresses.

This can be impractical for home users or shared web hosting providers, since it can be difficult to know which IP addresses will need access, and home IP addresses tend to be dynamic, so your IP address might change.  But, for situations where you are using a VPN or mostly accessing from a static IP address, it can be a low maintenance and extremely secure solution.
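
As an illustration, the firewall rules can be as simple as this iptables sketch (203.0.113.5 is a placeholder for your whitelisted address; adapt for nftables, CSF, or your firewall of choice):

# Allow SSH from the whitelisted address, then drop all other SSH traffic
iptables -A INPUT -p tcp --dport 22 -s 203.0.113.5 -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j DROP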

Pros and Cons of Allowing Whitelisted IPs Only

Pros

  • This method provides very strong security, since attackers would need to have access to one of your whitelisted IPs already in order to try to SSH in.
  • Arguably, this method can supersede the need for other security methods like brute force protection or disabling password authentication, since the threat of brute force attacks is now essentially nullified.

Cons

  • This method increases your chances of getting locked out of the server, especially if you are in a location where your IP address may change, like a residential Internet connection.
  • The convenience of access is also reduced, since you will be unable to access the server from locations that you haven’t whitelisted ahead of time.
  • There is some effort that goes into this, since you now have to maintain your IP address whitelist by adding and removing IPs as the needs change.

On my own personal servers, this is usually the technique that I use.  This way I can still have the convenience of authenticating with a password and using the normal SSH port, while having strong security.  I also change my servers frequently, creating new ones when needed, and I find that implementing this whitelist is the fastest way to make a new server secure without messing with other configurations; I can simply copy my whitelist from another server.

A Hybrid Approach: Allow passwords from a list of IPs, but allow keys from all IPs.

If you want to get fancy, there are a number of “hybrid” approaches that you can implement that combine one or more of these security techniques.

I ran into a situation once with one of our customers at GigeNET where they wanted to provide staff with password access, so that they could leave a password on file with us, but they wanted to only use key authentication themselves and not have password authentication open to the Internet.

This was actually very simple to implement, and it provides most of the security of disabling password authentication, while still allowing the convenience of password authentication in most cases.

To do this, you would want to add the following lines to your sshd_config:

# Global setting to disable password authentication
PasswordAuthentication no

[...]

# Override the global setting for the IP whitelist.
# Place this section at the -end- of your sshd_config file.
Match address 1.2.3.4/32
    PasswordAuthentication yes

For the above, 1.2.3.4 is the whitelisted IP address.  You can repeat that section of the configuration to whitelist multiple IPs, and you can change the /32 to another IPv4 CIDR such as /28, /27, etc in order to whitelist a range of IPs.

Remember that the Match address block should be placed at the very end of your sshd_config file.

Pros and cons of a hybrid approach

Pros

  • This technique can provide the security of key authentication by preventing passwords from working for most of the Internet, but allowing the convenience of password authentication from frequent access locations.  So, it allows you to reduce some of the drawbacks while keeping most of the security.
  • If your IP address changes and you are no longer whitelisted, you can still SSH in with the key file so long as you have it saved locally.

Cons

  • Like the IP whitelist firewall method, this method takes some maintenance, since you have to update your SSH configuration if your IP address changes or you need to whitelist other locations. Unlike other methods, though, updating the whitelist here is less critical, since you can still get in with the key method even if you’re not whitelisted.

Ultimately, you will have to choose what’s best for your use case.  

Hopefully this list of techniques and examples provides some food for thought that you can use when you are securing your servers: what the risks are and what techniques exist to mitigate them.

Based upon how important you think the security of the server is, and the practicality of implementing the various security solutions toward mitigating the risks you’re concerned about, you can choose one or more techniques to move forward.

At the end of the day, I always remind everyone that security is relative.  You will never have anything that is fully impenetrable, and the main thing is to keep yourself at least one step ahead of everyone else.  Even if you implement just one of these security practices, you are more secure as a result than a large number of Linux servers out there that are running with the default settings and SSH wide open to anyone that wants to try to login.


How can GigeNET keep your business secure? Chat with our experts now.

Don’t be afraid to use IPv6; it’s not a whole lot different from IPv4. Let’s look at the IPv6 specification here: https://www.ietf.org/rfc/rfc2460.txt.

IPv6 Basics

Taking a first look at IPv6 can be overwhelming, but in reality, the addressing scheme is fundamentally the same as IPv4. For example, it would be possible to write an IPv4 address as FFFF:FFFF, which would equate to 255.255.255.255. Conversely, we could write an IPv6 address as 255.255.255.255.255.255.255.255.255.255.255.255.255.255.255.255, which would be FFFF:FFFF:FFFF:FFFF:FFFF:FFFF:FFFF:FFFF.

That’s 128 bits of address space versus the 32 bits in IPv4. This equates to an unprecedented number of addresses: over 250 for every observable star in the known universe. So it’s going to take a while to use them all, unless we waste them needlessly. I can’t envision us using all this address space until we populate other planets, even if we gave every grain of sand in the world an IP address.

So why is IPv6 in hexadecimal format? A quick search shows a few different answers, but to me, it’s easier to compress and easier to read than a long string of decimal digits. For example, if a large run of the address is zeroes, it can be compressed: 1234::5678:1. Granted, this can only be done one time; 1234::1::45 is invalid. Just like in IPv4, leading zeros can be omitted; however, I find it easier to write it all out:

2001:1850:1:0:104::8a is also 2001:1850:0001:0000:0104:0000:0000:008a

Which also can be 2001:1850:1:0:104:0:0:8a

This looks confusing, so writing it all out (putting in ALL the zeros) when taking notes or preparing policies will help you understand it better.

IPv6 is another address family and another protocol, meaning it has a completely separate set of routing and adjacency tables and even its own Ethernet frame type. In practical terms, IPv6 is totally independent of IPv4 and doesn’t even know IPv4 exists. On an existing IPv4 network, a new IPv6 network will be created on every device as if setting up a completely new network installation. Think about this for a server: an IPv4 default gateway will NOT work for IPv6, even though it might be the same MAC address; the IPv6 gateway has to be set separately.

Other than the addressing scheme and hexadecimal notation, IPv6 is exactly the same as IPv4 for subnetting and routing purposes. A subnet is still a subnet; a /24 in IPv4 is simply a /120 in IPv6, since both leave 8 host bits, or the same number of IP addresses. Under the hood, IPv6 does have some technical changes that increase routing performance, such as a much simpler header format.

But wait! You read about IPv6 and it says the smallest subnet is supposed to be a /64? This is true but not true, just like the smallest subnet in IPv4 was a “Class C” before it went classless (CIDR). There is a reason, and an RFC to back it up, why a /64 was selected: certain features of IPv6 require a /64 at the moment, though they may not in the future.

Question and Answer Omnibus

So, no NAT in IPv6?

Well, while it’s entirely possible to DO address translation, there’s no need for it due to the number of addresses available. A stateful firewall is all that is needed.

How about neighbor resolution?

In IPv4 we know this as ARP. Here is a fundamental difference in the way IPv6 works versus IPv4 underneath it all. While this makes no functional difference in how the protocol is used (e.g., with TCP, UDP, ICMP, etc.), it does change how adjacencies/neighbors are formed. ARP does not exist in IPv6; instead, the process is called neighbor discovery, and it uses ICMP. Many of us are probably used to filtering ICMP by now, but it plays an important role in IPv6 neighbor discovery as well as in the actual operation of the IPv6 protocol itself.

For example, fragmentation is ONLY performed by endpoints in IPv6 (the hosts talking to each other), and not by any router in between. ICMP is used to determine whether packets need to be fragmented; this is an ICMPv6 Type 2 (“Packet Too Big”) packet. Neighbor discovery is entirely done with ICMP via multicast and unicast.

IPv6 does not use broadcasts! There is no ‘broadcast’ IP or ‘network’ IP address in an IPv6 subnet. The last IP is usable, unlike in IPv4.

Link local IPs?

Wait, we saw these 169.254.x.x IPs in IPv4, but they were only ever used in extremely rare instances. How are they used in IPv6? This is a major difference from IPv4! It’s also an annoying difference: it changes how things operate and what filters need to be put in place. Link-local IPs are in the FE80::/10 range and are, unless otherwise specified, automatically configured by devices on their interfaces. This range is specified as unroutable on all routing equipment and should not be forwarded, hence the name link-local, or LAN-only. This means that every IPv6 interface with connectivity outside of the LAN will have at least two IP addresses configured on it. You may have noticed a link-local IP on a server where IPv6 is enabled but no address has been configured yet; this is normal.

QoS?

IPv6 QoS works exactly as IPv4, with the exception that IPv6 has a new flow label field added into the header to help with marking flows and traffic class designation. Since this is widely unused at the moment it isn’t worth discussing here, but noted anyway as a difference.

Security in IPv6?

No real difference here from IPv4. Although IPv6 has built-in support for IPsec, that can’t be counted on in all circumstances (e.g., neighbor discovery still uses ICMP, and ICMP messages still need to be sent to hosts unencrypted), and IPsec is also available for IPv4. IPv6 neighbor discovery is, to some, less secure than ARP; while it is a lot more complicated to filter, the security differences are negligible.

What does this mean for system administrators and firewall managers?

For IPv6 on a server, the main difference is neighbor resolution. Certain ICMPv6 types (133-137) need to be allowed in the firewall for neighbor resolution to work, and FE80::/10 should be allowed as a source for these ICMP messages. You cannot simply filter everything except traffic destined for the server’s IPv6 address; link-local traffic must be allowed as well.

If you are wondering why IPv6 seems broken when you add it to a server, check the firewall.

Firewall admins should allow at least types 1-4 (error messages) and 128-129 (echoes) to permit proper operation and ping testing.
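
Putting that together, an ip6tables sketch of the minimum ICMPv6 rules might look like this (assumes a default-drop INPUT policy; adjust to your environment):

# Neighbor discovery (types 133-137), which may arrive from link-local (FE80::/10) sources
ip6tables -A INPUT -p icmpv6 --icmpv6-type 133 -j ACCEPT   # router solicitation
ip6tables -A INPUT -p icmpv6 --icmpv6-type 134 -j ACCEPT   # router advertisement
ip6tables -A INPUT -p icmpv6 --icmpv6-type 135 -j ACCEPT   # neighbor solicitation
ip6tables -A INPUT -p icmpv6 --icmpv6-type 136 -j ACCEPT   # neighbor advertisement
ip6tables -A INPUT -p icmpv6 --icmpv6-type 137 -j ACCEPT   # redirect

# Error messages (1-4) and echo request/reply (128-129) for PMTUD and ping
for t in 1 2 3 4 128 129; do ip6tables -A INPUT -p icmpv6 --icmpv6-type $t -j ACCEPT; done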

In the next blog, we will talk about DHCPv6, DHCP-PD, mobility and privacy extensions, an IPv6 header breakdown, multicasting, neighbor discovery in depth, SLAAC, SEND, and IPsec.

TL;DR: Differences Between IPv6 and IPv4:

  • 128-bit address versus 32-bit address
  • Different Ethernet frame type (0x86DD) [IPv4 is 0x0800]
  • No broadcast or network address
  • Hex instead of decimal notation
  • No ARP; IPv6 uses ICMPv6 neighbor solicitation with multicast
  • Uses link-local IP addresses (which are UNROUTABLE), auto-assigned from a hardware ID (derived from the MAC address), for neighbor discovery and autoconfiguration
  • Built-in multicasting
  • IPv6 does not require IPv4 to operate, nor does it interfere with IPv4 operation, and should be treated as such: meaning, on servers, IPv6 will have its own address, gateway, mask, etc.
  • You cannot NAT directly IPv4 to IPv6 or IPv6 to IPv4 although it can be proxied **
  • For DNS, IPv6 uses AAAA records instead of A, and reverse DNS is under ip6.arpa (see DNS section below)
  • Jumbo JUMBO JUMBO datagrams, did I mention jumbograms? A 32-bit payload length field allows datagrams up to 4 GB!
  • ICMP replies from routers for MTU error responses
  • Header checksum is removed from the top IP level (deemed unnecessary, but I disagree)
  • Mobility and privacy extensions
  • DHCPv6 with DHCP-PD (prefix delegation)


** There are some options to NAT (port translation) and NAT64 between IPv4/IPv6 but it isn’t a direct 1 to 1 mapping

Wikipedia also has a wonderful page on IPv6: https://en.wikipedia.org/wiki/IPv6.

Google Sheets Advanced Functions

Many of us work with spreadsheets every day. It’s what allows us to deal with multiple projects at once, each with reams of data. A spreadsheet helps us tame this data — and the better the spreadsheet is laid out and designed, the more it can help us be efficient when processing this data. 

Since being promoted to Manager of our support team, I find my nose buried in a spreadsheet far more often than in a system log file. In addition, I’ve found that I can be much more productive by creating well-designed spreadsheets than I could be by turning a screwdriver or tuning some PHP parameters.

There is a basic level of skill that most of us have regarding spreadsheets, but this is just barely tapping the potential of what they can do. Surprisingly, it only takes mastering a few skills and functions to greatly up your spreadsheet game — taking you to the next level in productivity, and wowing your coworkers (which is, admittedly, the real goal here).

My examples use Google Sheets, mostly because it’s what I use daily, but also because everyone using it is using the same version. Almost all of the concepts I discuss can be done in Microsoft Excel, as well, but not only does the method sometimes differ from Google Sheets, it also differs between versions of Excel.

Top 10 Google Sheets Skills and Functions

  1. Drop-Down Lists with Data Validation
  2. Conditional Formatting
  3. Freeze
  4. Referencing Cells
  5. VLookup()
  6. Autofill
  7. Clean Presentation
  8. Unique()
  9. CountIf()
  10. IfError()

Skill: Drop-Down Lists with Data Validation

Ever wonder how to add a drop-down list to a cell? This is done through Data Validation. Typically, on any spreadsheet I make, I create a “Data” sheet (tab) to hold the various tables that are needed to enable this and other functions, without cluttering up the main workbook. 

To add a drop-down list to a cell in Google Sheets (as seen in fig. 1):

  1. Create a column in the Data tab with a list of all the options you want for the list (see fig. 2).
  2. Back on the Main tab, right-click on the cell getting the drop-down list.
  3. Select Data Validation from the bottom of the right-click menu.
    • A new window with several options will show up. Don’t be alarmed.
  4. Click on the Criteria text box, ensuring your cursor is blinking in the box (see fig. 3).
  5. Now, change tabs to the Data tab and highlight the block of list items.
    • Move the Data Validation window if it’s in your way, but don’t close it.
    • The Data Validation window will change to a “What data?” window.
    • This will display the range of the block you have chosen (see fig. 4).
      • i.e. Data!B3:B5
      • This means Data tab, from B3 through B5.
    • Click OK on the “What data?” window.
    • The range you selected will now be in the Criteria field on the Data Validation window that has returned.
  6. Click Save.

…And that’s it! It really is that simple. Once you’ve done this a couple times, it will become second nature, and then your problem will be restraining yourself from adding too many drop-down lists.

Fig. 1 Add a drop-down list to a cell in Google Sheets
Fig. 2 Create a column in the Data tab
Fig. 3 Click on the Criteria text box
Fig. 4 Display the range of the block you have chosen

Skill: Conditional Formatting

Conditional Formatting allows you to let the spreadsheet do some of the thinking for you. It formats data in a way that makes it easier to visibly digest by helping you see trends and highlighting specific data points to be better noticed.

For example, I typically use conditional formatting to highlight duplicate items in long, unsorted lists of parts. Many of the part names are similar, so offloading the task of identifying them to the spreadsheet not only helps, but it reduces the chance of human error as well.
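
For example, a custom-formula rule like the one below, applied to a range such as A2:A500, highlights every part name that appears more than once (a sketch; the column and range are illustrative):

=COUNTIF($A$2:$A$500, A2) > 1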

There are a number of conditions you can use to configure Conditional Formatting. On the shift schedule I maintain, I use conditional formatting to automatically change a cell color to an employee’s assigned color when it detects their name in the cell. All my shifts show up as cornflower blue, Kirk’s shifts are orange and Zach’s are green.

You can format based on dates — if a person’s membership is expired, mark it red. If it’s due soon, mark it orange, etc. It’s really only limited by your imagination.

In Google Sheets, Conditional Formatting is accessed from the main menu, under Format. Select the range you want the rule to apply to, then select the rule (or use a custom formula), and finally set the format to apply if the rule’s conditions are met.

Figure 5 shows some sample sales data with three Conditional Formatting rules set up for column G. The first rule identifies cells in column G with a value over 100,000 by changing the cell color to green. Next, we identify those cells with a value of over 10,000 with the color orange. Finally, anything equal to 10,000 or less is red.

If you’re paying attention, you might be wondering why values over 100,000 are green, not orange, since these cells meet the conditions of two different rules. It works because the rules are processed in order: rules higher on the list trump rules further down. When you mouse over a rule, four horizontal dots show up on the left side and a trashcan on the right. Grab the rule by the four dots to drag it up or down, changing its position. I’m going to let you figure out what the trashcan does — I know you can do it!

In Figure 6, you can see what happens when the rules are out of order. I moved the green rule down (you can see the four dots on the left of the rule that are used to drag the rule up and down), below the orange rule. As you can see, all the previously green cells are now orange, and the green rule has been made useless simply by changing the order.

Fig. 5 Proper Conditional Formatting
Fig. 6 Out-of-Order Conditional Formatting

Skill: Freeze

When working with large sets of data, it’s easy to get lost, especially when the data in several columns is similar. One way to fight this is to freeze the header row. By doing so, the top couple of rows that contain the header are always at the top of your view, and as you scroll down, the rows scroll by, but the header is frozen at the top.

Figure 7 shows the previous example with a frozen header row. Notice how the row numbers skip from two to twenty. No matter how far down you scroll, the header will always be visible.

You can also freeze columns to the left of the sheet, and if you’re feeling adventurous you can freeze both rows and columns.

Fig. 7 Adding a frozen header row to previous example

Skill: Referencing Cells

Most spreadsheet users are somewhat familiar with how to reference cells, called A1 Notation. In this system, a cell is referenced by its column letter, followed by its row number — “B7”, for example. A range of cells is referenced by listing the upper-left cell of the range, followed by the lower-right cell, separated by a colon — “B7:D15”, for example. This example range would be three columns wide and nine rows tall. When we want to duplicate the contents of one cell in another cell (say, we want the contents of B7 to show up in C12), the formula for C12 would simply be “=B7”, where the equal sign indicates a formula to follow, and the formula is simply the cell reference.

Where things get a bit more complex is when we start dealing with relative and absolute cell references. Relative references are what allow the spreadsheet to change cell references when a formula is copied to another cell. For example, E3 is the sum of B3 through D3. The formula for E3 would be:

=Sum(B3:D3)

Because we are using relative cell references, this allows you to copy the formula from E3 down to E4, E5, E6, and so on, with the formulas automatically changing in each new row to add up the elements of that row, and not the original row, row 3. Relative referencing increments the row portion of the cell reference by one when the formula is pasted one row down, by five when pasted five rows down, and by -1 when pasted one row above the original. Without this automatic adjustment, when the formula in E3 is copied to E4, rather than add up the elements in the fourth row, it shows the same total as E3 because the formula tells it to display the sum of B3 through D3, rather than B4 through D4. This is true for moving from column to column, as well as row to row.
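To make that concrete, here is how the same formula reads as it is copied down the column:

=Sum(B3:D3) (original, in E3)

=Sum(B4:D4) (copied to E4)

=Sum(B5:D5) (copied to E5)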

This feature saves immeasurable time entering formulas into spreadsheets because you can simply set up the formula for one row or column, and copy it to work without modification on your other rows or columns.

In most cases, relative referencing is what you want — but there are situations where you don’t want the reference to change when copying the formula from cell to cell. To demonstrate this, imagine we’re adding tax to subtotals to get the final totals. For this example, we’ll use a mix of relative and absolute cell references. We’ll use an absolute reference to pull the tax rate from its cell, D2, and use it to calculate the tax by multiplying the tax rate by a relative reference to the subtotal. The relative reference to the subtotal will allow us to copy the formula from row to row, using the appropriate subtotal each time, while the tax rate stays fixed. To do this, we add dollar signs to the cell reference to tell the spreadsheet not to change the reference, even if the formula is pasted elsewhere. In this case, our tax line for C5 looks like:

=B5*$D$2

Notice how B5 has no dollar signs, while the D2 reference does? The first dollar sign locks down the column part of the reference, while the second locks down the row. The column lock isn’t necessary in this case, but it doesn’t hurt, either. With the column locked as well, we can use the same formula to generate tax on the cell to its left anywhere on the sheet, without losing the tax rate.

If we didn’t use an absolute reference for the tax rate, the first formula we put in would still work, but if we copied it to another row, it would try to pull the tax rate from another row as well. Say we copied the formula from C5 to C6. The tax rate would be blank because the relative reference would now point at D3. Row 7 would be even worse — the tax rate would be “Total” — and I thought my tax rate was bad…

Anytime you want to lock down the row on a cell or range reference, put a dollar sign in front of the row designator. Do the same with the column designator to lock down the column reference. You’ll find that many of your errors with spreadsheets are caused by incorrectly referencing a cell.
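Putting that together, a reference to D2 can take four forms:

=D2 (column and row both relative)

=$D2 (column locked, row relative)

=D$2 (column relative, row locked)

=$D$2 (column and row both locked)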

In addition to these different ways to reference a cell, you can also reference cells on different tabs (sheets). I use a Data sheet to hide a lot of my reference tables away from sight. To reference B7 on my Data sheet from another sheet, I use “Data!B7” to reference the cell. The Data! prefix tells the spreadsheet which sheet the cell can be found on.
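This works for ranges, too. For example, a formula like the following (using a hypothetical range) would sum B3 through B20 on the Data tab from any other sheet:

=Sum(Data!B3:B20)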

Function: VLookup()

The Vertical Lookup function, VLookup() is a powerful feature that will help propel your spreadsheets to the next level. This function allows you to populate a cell based on information pulled from another table (often hidden on another tab, or even on another spreadsheet, but the latter would require the ImportRange() function, which you can investigate if you’re feeling adventurous).

Fig. 8 Final Example of VLookup(), GoogleTranslate(), and Data Validation.

This can be a bit confusing until you see it in action — so let’s start with an example, a simple translator using GoogleTranslate() (I know I haven’t explained this function yet, but trust me, it’s pretty simple), VLookup() and Data Validation. See figure 8 for the final result, which has 3 instances of the translator.

The language drop-down list is generated using Data Validation with the first column of the language list on the Data tab.

The translation cell is generated by embedding a VLookup() function within the GoogleTranslate() function.

The GoogleTranslate() function has three elements: the source text, the source language abbreviation, and the target language abbreviation, resulting in the following formula structure:

=GoogleTranslate(<source text>,<source language>,<target language>)
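Before wiring in the dropdown and VLookup(), it can help to see GoogleTranslate() on its own. A standalone sketch, translating a fixed string from English to Swedish:

=GoogleTranslate("Hello","en","sv")

This should display “Hej” in the cell.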

Fig. 9 Data Tab for Data Validation

Because GoogleTranslate() uses an abbreviation to represent languages, I copied a table from their documentation and pasted it to my Data tab (see figure 9 to see a subset of this table). This table has the full Language name in the first column and the abbreviation in the second.

The final translation formula looks like:

=GoogleTranslate(D3,"en",VLookup(F3,Data!B3:C66,2,True))

In this example, D3 is the source text, “Hello my friend!” The source language abbreviation is “en” for English, and VLookup() is used to convert the Language name chosen in the dropdown, F3, to its abbreviation.

The VLookup() function has four elements: the search key, the range, the index, and the optional is_sorted boolean (boolean is just a fancy word meaning it is either TRUE or FALSE). Using the VLookup() function looks like:

=VLookup(<search key>, <range>, <index>,[is_sorted])

In our example, the function is embedded within the GoogleTranslate() function, with the VLookup() portion looking like:

VLookup(F3,Data!B3:C66,2,True)

Here, the full name of the language chosen in the dropdown list, cell F3, is used as the search key. In the first translator, I have “Swedish” selected as this key.

The next field is the range, Data!B3:C66. As we learned in the Referencing Cells section, Data! references the Data tab, and B3:C66 refers to columns B and C on the Data tab, from row 3 through row 66. Figure 9 shows this table of the possible languages listed by full name in column B and the associated abbreviation in column C. Make sure not to include the table headers in the range.

The index is the column number of the result we want to return. Note that this index is in relation to the range chosen in the second element. In our example, I use “2” to reference the second column in the range, column C. What is happening in simple terms is the spreadsheet looks at our table on the Data tab and looks down the first column for an entry that matches our key, “Swedish.” Once this key is found, it looks across to the index column, the second column in the range — column C — and finds the abbreviation associated with “Swedish” — “sv” (for Svenska, Swedish for “Swedish”).  The VLookup() function then returns “sv” to the GoogleTranslate() function, which allows the translation to happen.

To show a more complicated example, I recently used VLookup() to help fill out a spreadsheet dealing with access to various security doors in our facility. We have four doors secured with a badge reader and five levels of access, with no access represented by a blank entry. Figure 10 shows my lookup table on the Data tab and figure 11 shows the result of several VLookup() functions at work on a lookup table that is more than just two columns.

Fig. 10 Data tab lookup table
Fig. 11 VLookup() function results

In figure 11, each of the four doors has a slight variation of the VLookup() function. They end up looking like:

Front 1 =VLookup(F4,Tables!G$4:K$8,2,TRUE)

Front 2 =VLookup(F4,Tables!G$4:K$8,3,TRUE)

DC        =VLookup(F4,Tables!G$4:K$8,4,TRUE)

Dock    =VLookup(F4,Tables!G$4:K$8,5,TRUE)

One key detail to note is the addition of dollar signs “$” to the row element of the range in the VLookup() functions. This is important because if I just copied the functions from one row to the next without them, the range would automatically increment, causing errors after a few rows. This also happens when Autofill is used to replicate the rows quickly (Autofill will be discussed next), but again this can be avoided by using dollar signs to lock down the range of the VLookup().

You’ll notice the only difference between the entries for each door is the third element, the index. This tells the spreadsheet from which column within the lookup range to retrieve the value.

Skill: Autofill

Google Sheets has an autofill feature that can be used to quickly duplicate cells, or continue sequences and patterns, saving you a lot of time that would be wasted on tedious data entry. If you haven’t been using this feature, you certainly will be once you learn how it works.

The simplest Autofill feature is duplication. Say you have a list of questions in one column and answers in the next — but you don’t have any answers yet. Rather than leaving the Answers fields blank, you want to pre-populate the field with a placeholder, “<unanswered>.” Simply fill in the first Answers cell with the text you want and click on the cell to highlight it. The cell should be framed in blue, and you should see a blue, square dot in the lower right-hand corner of the cell. Grab that dot and pull down, releasing when you’ve highlighted all the answer cells. You will see that all the cells were autofilled with a duplicate of the first cell.

You can use this to fill down, or right. You can do both, but you have to do one at a time — Fill down, let go and grab the dot again, this time with the whole row still highlighted and fill right. The reverse order works as well (right, then down).

In addition, you can start with more than one cell. Say you have a column next to the Answers column that shows Answered by whom. Enter “<unanswered>” in the top Answers cell, and “<no one>” in the top Answered by whom cell right next to it. With both cells highlighted, grab the blue dot and pull down. Both columns will be filled with the appropriate text.

Granted, the times you’ll need to use this to duplicate text are likely somewhat limited — but it becomes much more useful when you realize you can duplicate formulas, not just text or numbers. When duplicating formulas, keep in mind what we learned about absolute and relative cell references — especially the use of the dollar sign. This will lock the references so they don’t increment from row to row. Some situations will call for this, and others won’t. I find I often end up with a mix of relative and absolute references in my functions (like the door access example in the VLookup() section).

The real power of Autofill is shown by its ability to iterate sequences. Say you want to number your questions 1 through 15. Simply fill in the first two numbers, highlight both cells and pull the blue dot down until you’ve highlighted 15 cells. When you let go, you’ll see the sequence continued into the area highlighted, leaving you with the numbers 1 through 15.

Now let’s try something a bit trickier — you want to count by twos. If you want odd numbers, leave the 1 in the first cell and replace the 2 with a 3. Highlight the first two cells again and drag down — you’ll see the numbers 1 through 15 have now been replaced with the odd numbers 1 through 29.

You don’t have to start with 1, either. Start at 23 and count upwards (or downwards), if that’s what you need. Do you double-space? Start with 1 (or another number) and select that cell and the empty cell below it. Drag down, and you’re numbering every other row (see figure 12).

Google Sheets will also detect patterns in your cell for Autofill. Start with “word1” and “word2” and you can use Autofill to further the sequence with “word3,” “word4,” etc. In my role as a support manager, I use this to populate lists of IP addresses (see figure 13) frequently.

If the spreadsheet doesn’t detect a number in the highlighted fields, in most cases it will simply repeat the sequence of highlighted fields over and over again. With a single field, this simplifies to basic duplication, but if you enter “duck,” “duck,” “duck,” and “goose” into four cells and Autofill them (see figure 14), you will see that pattern repeated.

In that last example, I said “in most cases” because there are a few exceptions. Google Sheets used to be able to reference a Google Labs feature called Google Sets. Google Sets allowed you to start listing items from a set and Autofill cells with additional members of the set. You could then use a function called GoogleLookup() to pull information about the items in the set. An example I saw used elements as the set (starting with Hydrogen, Helium, and Lithium). Additional columns were filled in by referencing data from Google Sets (similar to a VLookup() function). These additional columns displayed each element’s Atomic Weight, Atomic Number, and Melting Point. To be honest, I’m not sure if I’d get much use out of that feature — but it was great for showing off! Unfortunately, this feature was removed several years ago.

Why bring up an outdated feature? Well, imagining how Autofill used to work with sets will help you understand how Autofill works with dates and times. Type the name of a month of the year, highlight it and drag down. You will see that instead of repeating that word, it filled in the rest of the months, in order. If you go past 12 cells, it will start repeating. You can do the same with the days of the week (see figure 15).

Dates can be Autofilled using a variety of formats (any format recognized by the spreadsheet as a date). By default, if you start on a specific date and use the one cell to Autofill, it will increment each cell by one day. If you want to increment monthly, or weekly, enter the first two elements of the sequence and select those two cells to Autofill from (see figure 16).

Times Autofill in a similar way. By default, the spreadsheet will use the 24-hour format, but adding “AM” or “PM” will force it to the 12-hour format for our delicate American sensibilities. Start with “12:00 PM” in a cell, and it will Autofill times in one-hour increments, using the 12-hour format. Want to count in 15-minute increments? Fill in the second cell with “12:15 PM” and start your Autofill with the first two cells to achieve this (see figure 17).

Fig. 12 Autofill Patterns Auto Numbering
Fig. 13 Autofill Patterns Auto Numbering Advanced
Fig. 14 Autofill Patterns for Duck Duck Duck Goose
Fig. 15 Autofill Patterns for days of the week
Fig. 16 Autofill Patterns for weekly
Fig. 17 Autofill Patterns for time formats

Skill: Clean Presentation

This last skill is much more general than the previous skills discussed here. It is more of a collection of ideas, any number of which you can choose to incorporate in your own spreadsheets in order to clean up the presentation.

By default, spreadsheets can be daunting blocks of raw data — and it doesn’t help that many of us have picked up some bad habits along the way. It’s amazing to see the difference a few small changes make to the look and feel of a spreadsheet.

  1. Provide a buffer.
    • The first thing I tend to do with a new, blank spreadsheet is resize the first column to the same width as the height of each row (21 pixels). I then start my spreadsheet from B2, which provides a nice buffer around my tables, keeping them from running into the edges, while not giving up too much valuable work area.
    • Size your columns so your data has breathing room, without losing valuable space.
  2. Align your data.
    • Right-justify numbers, left-justify everything else.
    • Fight the urge to always center your data.
    • This is not an absolute rule. Sometimes the presentation looks better with different justification — use your best judgment.
    • One big exception to the no-center suggestion is table titles. Merge the top row of cells above the header row into one and center the title in a large font.
  3. Choose your colors wisely.
    • Try to limit yourself to two or three colors on a page. 
    • Use muted colors, unless you’re trying to highlight something to make it more noticeable.
    • In the example below, I’ve changed the Conditional Formatting of the sales numbers to change the text color, not the cell color. I prefer more subtle indicators, but sometimes bold is what’s needed. Format accordingly.
    • Choose complementary colors — see the colors used in Format Alternating Colors for examples of muted colors that go well together.
      • Format Alternating Colors can be used to highlight the header row and put a light color on alternating rows below the header to help follow a line across the page.
    • I tend to use light greys and light blues as my go-to colors. If I need to venture beyond those colors, I choose a color and match it with a color just above or below it in the color-picker. Go up and down, not left and right — unless you’re working with greys.
  4. Avoid overuse of borders.
    • I will often limit borders to a single line separating the header row from the data — however, this is unnecessary when you use a bolder cell color to highlight the header or freeze the header row.
    • I also tend to frame the table when I have more than one small table on the same tab, often with a two or three pixel wide border.

Compare the before and after spreadsheets. It’s amazing what a few small changes can do.

Fig. 18 Before Cleaning
Fig. 19 After Cleaning

Which one would you work with? Which one would you like to present to a large audience?

Function: Unique()

The Unique() function takes a range of data and returns a list with all duplicates removed. Elements are returned in the order they are encountered, so I often embed this function within a Sort() function to return a sorted list.

I frequently use this function for inventory tracking. Say there is a list of components for 40 servers, one per row, with column F listing each server’s CPU. If we want a list of the different types of CPUs in use by these 40 servers, our function would look something like:

=Unique(F3:F42)

In this example, F3:F42 is the range covering the list of CPUs for the 40 servers. In this case, let’s say there were only four different types of CPU in use. The first type encountered would be displayed in the cell with the above formula. The following unique CPUs would fill out the three cells below.

If you want a sorted list, the Unique() function could be placed in a Sort() function:

=Sort(Unique(F3:F42), 1, TRUE)

To cover how Sort() works in simple cases like this: the first element is the range to sort. The second element is the column within that range to sort by — in our case, there is only one column, so we use “1.” Finally, the third element is a boolean value indicating whether we want the list sorted in ascending order. Sort() has a few more options to play with if you want, but those are the basics.

Function: CountIf()

The function CountIf() counts the number of cells that meet a definable condition within a range of cells. I find I often use it in conjunction with Unique() to make a table summarizing how many of each part are in use. Using the example from the Unique() description, once a list of unique CPUs is generated, I use CountIf() to count how many servers have each CPU type.

Say N5 is where we start our unique CPU list, generated from the range F3:F42. Since there are four different CPUs, these are listed in N5 through N8. I use the adjacent column, M, to show counts of each of these CPUs. The formula for M5 would be:

=CountIf(F$3:F$42, N5)

Where the first element is the range in which to count (notice the absolute row references), and the second element is the condition that triggers a count if it is met. In our example, the range is again F3:F42 — the CPU list for the 40 servers — and the condition is N5, the first CPU on the unique list.

You can use more advanced criteria than simply matching, too. If I want to count how many values in a list are greater than 20, I’d use “>20” (including the quotation marks) as the criteria for CountIf().
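For example, counting how many values in a hypothetical range B2:B50 are greater than 20 would look like:

=CountIf(B2:B50, ">20")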

Function: IfError()

IfError() is a simple function that helps clean up expected errors in a spreadsheet. Expected errors often turn up when some data has yet to be entered, and a function referencing that empty cell throws an error because it needs some data to process — or it can be something as simple as dividing by zero.

Some common errors you’ll see will be #DIV/0!, #VALUE!, #REF!, and #NUM!. Wrapping your function in IfError() can suppress these errors. The function:

=IfError(B2/B3, "Oops!")

will return whatever the formula in the first element, here B2/B3, would normally return, unless the return value is an error. In most cases, it would return the result of B2 divided by B3. However, if B3 happens to be a zero, the formula would return the error #DIV/0!. IfError() detects the error, suppresses it, and returns the second (optional) element — in this case, “Oops!” If you leave out the second element, the error is still suppressed, but nothing is returned and the cell is left blank.
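Dropping the second element gives the blank-cell variant described above:

=IfError(B2/B3)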

While this is a useful tool to keep errors from mucking up your nice, neat spreadsheet, it can make troubleshooting mistakes difficult. Keep that in mind when putting a spreadsheet together, and maybe add the IfError() wrappers after you’re confident in your work.

business case for fully managed dedicated servers

Although you may salivate at the thought of a fine-dining experience and indulging in a perfectly seared, dry-aged steak, chances are you’re not about to purchase a tract of land and some cattle.

The same can be said for dedicated servers — even though your business requires high-end hosting, you might not want to take on the expenses and headaches of maintaining your own architecture.

That’s where managed services (and five-star restaurants) come in: marrying tantalizing, decadent ingredients with the careful touch of experts well-versed in delivering superb experiences to patrons. When catered by top-notch hosting experts, fully managed dedicated servers deliver premium computing power, performance, and server administration backed by the industry’s best practices.

The high-touch service may seem overly indulgent, but investing in fully managed dedicated servers can greatly streamline your company’s operations and reduce operating expenses. As hosting becomes more complex — and pivotal to an organization’s success — managed services empower companies to focus on mission-critical objectives without spending financial or human resources on tedious daily tasks.

1. Enjoy More Predictable, Lower Costs

The benefits of managed services loosely resemble all the perks and add-ons that new website owners encounter when signing up for a shared hosting plan: standard system management and updates, security protection, 24/7 support, and guaranteed uptime rates. The differences between shared and dedicated servers are certainly numerous, but the inexpensive hosting tier introduces a lot of the same concepts found with managed servers — including a standard monthly rate.

Sure, there may be a little sticker shock when comparing fully managed dedicated server packages, but be sure to take a holistic view of what you’re getting: modern hardware tailored to your specific needs, datacenter space, a professionally built and maintained network, and a team of experts invested in your company’s security and success. In most cases, you can customize your managed services plan to cover the aspects and systems that will have the greatest impact on your bottom line, while staying within your budget. Even if unforeseen issues arise, you’re locked in to a standard monthly payment your company can build around.

2. Maximize Employee Productivity

By moving to a fully managed infrastructure, your company sheds several expenses: replacing outdated equipment, purchasing software licenses, hiring consultants, and cross-training employees. Recruiting, hiring, training, and retaining top-level systems administrators can be expensive; however, with the right service provider managing your dedicated server, you can rely on its support and engineering teams, datacenters, network, and infrastructure instead.

Interestingly, increased productivity typically represents the most significant financial benefit of managed server hosting. According to an IBM white paper, surveyed companies optimized their IT staff resources by 42% when using managed services for mundane day-to-day infrastructure maintenance tasks. Freeing up developers, engineers, and IT administrators from tedious chores enables them to concentrate on projects and objectives that drive business forward.

More importantly, the improved employee output extends beyond the server room and IT offices. Upgraded infrastructure and maintenance will typically speed up various business processes and reduce unplanned downtime or outages — boosting momentum in all corners of the office.

3. Rely on Modern Expertise and Hardware

Budgetary constraints often lead to companies operating with legacy systems and old hardware that’s more likely to fail. Switching to fully managed dedicated servers hosted in a service provider’s datacenter reduces the amount of infrastructure your business needs in-house, as well as the energy consumption and office space required to operate the machines.

Instead of spending staff time and resources researching and procuring high-performance hardware, you can consult with hosting infrastructure experts as part of your onboarding process with managed hosting. The account managers and technicians will help you determine exactly what customizations and configurations will optimize your company’s specific hosting needs, then build and deploy the infrastructure in their datacenter.

Because managed service providers are responsible for powering the online success of several businesses, their employees are always plugged in to industry best practices, new technologies, and the day-to-day operations of maintaining healthy hardware. They know how to best design dedicated servers for the host’s special blend of proprietary solutions and network optimizations built for maximizing speed, security, and scalability.

4. Protect Your Data With Cutting-Edge Security

It may be comforting to be able to walk directly up to the server housing your data and applications and know the exact people responsible for keeping things running smoothly — but wouldn’t you rather know your server is one of hundreds being maintained in a state-of-the-art facility run by specialists who have seen it all? On-prem servers are typically more prone to outages, inefficient operations, and security risks; trusting a managed hosting expert with your dedicated server ensures the equipment is located in a leading datacenter that frequently includes redundant power and cooling systems, along with 24/7 security personnel and multi-factor authentication for server access.

Beyond physical security measures, managed hosting providers are responsible for monitoring server health and performance, running frequent backups, implementing disaster recovery and business continuity solutions, and shielding your infrastructure from attacks. Additionally, GigeNET includes a one-of-a-kind automated system for DDoS protection with most managed server plans.

5. Rest Easy With Around-the-Clock, Customized Support

Upgrading to managed hosting means you’re no longer just an account number — you have entire teams of people who care about your business and its online success. On-site support teams are on standby 24/7 to respond to any alerts or issues that arise with your infrastructure. In addition to handling your equipment and backend systems, managed service providers help oversee your company’s capacity for growth and enable you to focus on your business passions instead of daily chores.

At GigeNET, that includes periodic, casual conversations that help us learn how we can continue to improve our services to you. To learn more about the cost, performance, and security optimizations your company can achieve with fully managed dedicated servers from GigeNET, feel free to chat with our specialists or read more about our managed hosting options.

mysql basics

Introduction to MySQL

MySQL Replication using Binary Log File Position, as opposed to Global Transaction Identifiers (GTID), uses binary logs, relay logs, and index files to track the progress of events between the master and slave databases. GTID can be used in conjunction with binary/relay logs; however, starting with an understanding of binary log file position is beneficial. Shown here are the steps to set up new master and slave servers, including how to record the master log position to use with the slave configuration, resulting in consistent data between the master and slave servers.

This is an overview of the MySQL Replication setup process using Binary Log File Position, intended as a simplified guide. It references the configuration steps documented at https://dev.mysql.com/doc/refman/5.7/en/replication-configuration.html

Operating system and MySQL versions

CentOS 7

MySQL 5.7

MySQL Definitions

Keywords/filenames used with MySQL Replication

  • Master – the primary database server that data is copied from
  • Slave – one or more database servers that data is copied to
  • Binary log file – contains database updates and changes written as events
  • Relay log file – contains database events read from the master’s binary log and written by the slave I/O thread
  • Index file – contains the names of all used binary log or relay log files
  • Master log info file – contains master configuration information, including user, host, password, log file, and master log position. Found on the slave
  • Relay log info file – contains replication status information. Found on the slave
  • Global Transaction Identifiers (GTID) – an alternative method for tracking replication position that does not require binary logs enabled on the slave (not used with binary log file position)

1. Setup MySQL

The latest repository (MySQL 8.0) includes previous versions of MySQL. Once the repository is added, use yum-config-manager to disable mysql80-community and enable mysql57-community, or edit /etc/yum.repos.d/mysql-community.repo directly.

  • Add MySQL Yum Repository

shell> sudo rpm -Uvh mysql80-community-release-el7-1.noarch.rpm

  • Install MySQL 5.7

shell> sudo yum-config-manager --disable mysql80-community

shell> sudo yum-config-manager --enable mysql57-community

shell> sudo yum install mysql-community-server

shell> sudo systemctl start mysqld.service

  • Reset MySQL root password

shell> sudo grep 'temporary password' /var/log/mysqld.log

shell> mysql -uroot -p

mysql> ALTER USER 'root'@'localhost' IDENTIFIED BY 'MyNewPass4!';

2. Setup Master Server

  • Add the following to the [mysqld] section of /etc/my.cnf

[mysqld]

log-bin=mysql-bin

server-id=1

log-bin – The binary log file base name, stored by default in the MySQL data directory, /var/lib/mysql.

server-id=1 – Unique identifier for the server. Defaults to 0 if not declared. If set to 0, the master will refuse connections from slave servers.

Restart MySQL

shell> sudo systemctl restart mysqld.service
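To confirm the settings took effect after the restart, you can check the running values from the MySQL prompt:

mysql> SHOW VARIABLES LIKE 'server_id';

mysql> SHOW VARIABLES LIKE 'log_bin';

On this master, server_id should report 1, and log_bin should report ON.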

  • Create MySQL Replication User

mysql> CREATE USER 'replication'@'%.example.com' IDENTIFIED BY 'password';

mysql> GRANT REPLICATION SLAVE ON *.* TO 'replication'@'%.example.com';

  • Record the Master binary log position for slave configuration

mysql> FLUSH TABLES WITH READ LOCK;

mysql> SHOW MASTER STATUS;

mysql> UNLOCK TABLES;
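The SHOW MASTER STATUS output includes the current binary log file name and position. Record both values for the slave configuration in step 3; a hypothetical example:

File: mysql-bin.000001

Position: 154

If the master already holds data you plan to replicate, take your backup while the read lock is still held, before running UNLOCK TABLES.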


3. Setup Slave Server

  • Add the following to the [mysqld] section of /etc/my.cnf

[mysqld]

log-bin=mysql-bin

server-id=2

log-bin – The binary log file base name, stored by default in the MySQL data directory, /var/lib/mysql.

server-id=2 – Unique identifier for the server. Defaults to 0 if not declared. If set to 0, the slave will refuse to connect to the master.

  • Configure the slave using the master server replication position information recorded in step 2.

mysql> CHANGE MASTER TO

   ->   MASTER_HOST='master_host_name',

   ->   MASTER_USER='replication_user_name',

   ->   MASTER_PASSWORD='replication_password',

   ->   MASTER_LOG_FILE='recorded_log_file_name',

   ->   MASTER_LOG_POS=recorded_log_position;

  • Start the Slave replication process and view its status.

mysql> START SLAVE;

mysql> SHOW SLAVE STATUS\G
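In the SHOW SLAVE STATUS output, the first fields to check are Slave_IO_Running and Slave_SQL_Running; both should read Yes once replication is running, and Seconds_Behind_Master indicates replication lag. A hypothetical healthy excerpt:

Slave_IO_Running: Yes

Slave_SQL_Running: Yes

Seconds_Behind_Master: 0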

Conclusion

After following these steps, the slave server should be synced with the master log position. This can be verified in the SHOW SLAVE STATUS\G output, which will be discussed in upcoming blog posts, along with MySQL Replication GTID setup, variable configurations, and maintenance.


Not quite ready to handle it yourself? Let us handle your server maintenance with GigeNET’s fully managed services.
