
From that presentation you were up all night working on to important tax information, it’s second nature for us to save copies of our personal data, whether on a thumb drive or a service like Dropbox. Yet drives fail, coffee spills, and humans make mistakes. And the fact is, the stakes of losing your presentation are a lot lower than those of losing massive amounts of critical business data. How can entire organizations keep their business up and running? They need robust solutions to prevent the loss of critical data and to minimize the cost of downtime.

While each business has unique needs and requirements deserving of a custom solution, the bare minimum is to back up critical data offsite. Data backups involve taking a snapshot of your data and storing it offsite. From protecting against disasters, both natural and otherwise, to keeping an archive of records, data backups play a vital role in keeping your business up and running.

Data Loss and Business Continuity

A business continuity plan is a set of protocols and procedures in place to prevent loss of critical data in the event of an unplanned incident and keep essential business functions running. A good business continuity plan will account for unplanned downtime resulting from natural disasters, network disruption, and human error.

While the cost of downtime varies based on a broad spectrum of factors, this survey found that 98% of organizations reported a single hour of downtime would cost them over $100,000. This may include anything from replacing failed hardware and paying your staff overtime to fix the issue, to the loss of sales and critical data. The name of the game is: how fast can you get back up and running after unplanned downtime? The company that recovers fastest gains the competitive advantage.

In the event of a disaster like flooding or a local power outage, an offsite replica of critical data gives your systems something to restore from, saving you from unnecessarily rebuilding your business. The latest backup prior to the incident can be retrieved and restored to get your data back and your business running again.

Archival

While backups are most commonly discussed when building a business continuity plan, they also play a large role in storing key data for archival purposes. With backups, the data is not saved on top of itself but rather alongside the prior data. In other words, with backups, you’re creating separate versions of your data and routinely assembling a chronological archive for your business.

This archive serves as a beneficial tool in the event your organization undergoes a routine audit. Your team can simply sort through the history of backups and pull the relevant version of the data needed. With offsite backups, you have access to this comprehensive archive without the burden of storing the data on premises.

The second reason archives make backups worthwhile is to safeguard against ransomware. With any virus or malware, once it has infiltrated your systems, the process of recovering data can be tricky at best and impossible without the right precautions in place. With the archive provided by data backups, you can simply retrieve a version of your data from before the ransomware took hold and restore your systems to that point in time.

Build it right the first time.

While all this sounds a bit scary, it’s important to note that building it right the first time is possible. Most providers want to assure you they follow best practices, just as you would if you had the chance. Yet you’re often still left scrambling in an emergency. Instead of doing double the work, either recovering from downtime or panicking to find archives, back up your data consistently. We’ve made it easy for you. Choose from prepackaged solutions with R1Soft backups, or turn our storage solution, GigeVAULT, into a data backup system that fits any budget.

Privacy and security can be difficult to achieve, especially across an entire organization. They involve many factors and can be difficult to manage from the top. While you may not want to, or may not be able to, manage every aspect of your organization’s members’ work, there are some things you can do. One of the most important and sensitive factors is how your organization’s members communicate about internal matters. While talking face to face is one of the more common ways, it is not always possible. More people than ever work remotely, especially in the IT industry, so there is an obvious need for remote communication methods.

Instant messaging is probably one of the more popular ways to communicate. There are many platforms, like Skype, Slack, and WhatsApp, that simplify this. While some of them may boast client-to-server or even end-to-end encryption, you’re still transferring trust to a third party and their code. If this worries you, it may be best to run your own instant messaging server. Commonly, organizations and individuals with this concern have set up XMPP servers (formerly known as Jabber). While that arguably isn’t a bad solution, XMPP can be tricky to work with compared to more modern alternatives.

One of the most notable competitors to the XMPP protocol is Matrix. Matrix, like XMPP, can be decentralized (federated), but you can tweak it to your organization’s needs. For example, you can disable public registration, use LDAP for authentication, and disable federation. Just like XMPP, there are many implementations of the Matrix protocol. In this tutorial we will go over how to set up your own Matrix Synapse server on GigeNET Cloud and cover the basics of running your own Matrix server. If you don’t have a GigeNET Cloud account, head over here and check out our plans. Synapse is the server created by the Matrix developers and can be found here.

First, we’ll need to create a GigeNET Cloud machine. Once you’re logged in, it’ll look like this.

Click on “Create Cloud Lite”

Set a proper hostname for your new machine, then select the desired location, zone, and OS. For this tutorial we’ll be using Debian 9 (Stretch). You’ll then need to pick a plan that fits your needs. Matrix Synapse recommends at least 1GB of memory, so we’ll go with GCL Core 2. After you’ve set everything to what you want, press “Create VM”.

Now your cloud VM has begun spinning up on one of our hypervisors. It may take a bit, but you can ping the VM’s public IP until you see that it’s up. This page will show all of the details you’ll need to know to login.

Once the VM is up, you can SSH in with your favorite SSH client. I use Linux, so I’ll be using openssh-client. We’ll want to perform a full upgrade of all packages on Debian, so you’ll need to run this.

root@matrix-test:~# apt update && apt dist-upgrade

Once that has finished, reboot your VM.

root@matrix-test:~# reboot

Once you’re back in after the reboot, let’s take a look at the available Matrix servers. There are quite a few, but as mentioned, we’ll be using Synapse. Click Synapse.

If you’re interested in learning more about Matrix Synapse I highly recommend that you check out their GitHub repository.

Before you grab their repo key you’ll need to install apt-transport-https. This is required to use HTTPS with the apt package manager.

root@matrix-test:~# apt install apt-transport-https

When that finishes you can then grab their repo key, import it and add the repository into your sources file with the following commands.

root@matrix-test:~# wget -qO - https://matrix.org/packages/debian/repo-key.asc | apt-key add -

root@matrix-test:~# echo deb https://matrix.org/packages/debian/ stretch main | tee -a /etc/apt/sources.list.d/matrix-synapse.list

root@matrix-test:~# apt update

If everything checks out you’re now ready to install Matrix Synapse! We’ll also install a few extras.

  • Certbot (to get a free Let’s Encrypt certificate) 
  • Haveged (to speed up entropy collection)  

root@matrix-test:~# apt install matrix-synapse certbot haveged

You’ll get an ncurses interface during the installation asking for a few configuration parameters. Make sure to set your FQDN here.

It’s up to you whether you want to send anonymous statistics. I chose not to.

If you have your own certificate, you can simply copy the certificate and private key over in the same way as shown below. Now let’s get our Let’s Encrypt certificate!

A few more things to note.

  • You’ll need to ensure your domain or subdomain points to your new server via a DNS A record (or AAAA record if you want to use IPv6); a quick way to verify this is shown after this list.
  • You’ll need to enter an email address to receive certificate expiry notices.
  • You’ll need to agree to the Let’s Encrypt terms and conditions.
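To confirm the DNS record is in place before requesting the certificate, a quick lookup with dig (available on Debian via the dnsutils package, shown here with our example hostname) should return your VM’s public IP:

root@matrix-test:~# dig +short A matrix-test.gigenet.com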

root@matrix-test:~# certbot certonly --standalone -d matrix-test.gigenet.com

Once we have our certificate and private key, we need to copy them over to /etc/matrix-synapse like so (change the directory name to match your FQDN):

cp /etc/letsencrypt/live/matrix-test.gigenet.com/fullchain.pem /etc/matrix-synapse/fullchain.pem

cp /etc/letsencrypt/live/matrix-test.gigenet.com/privkey.pem /etc/matrix-synapse/privkey.pem

Next, we’ll need to generate a registration secret. Anyone who has this secret will be able to register an account, so you want to keep it safe.

root@matrix-test:~# cat /dev/random | tr -dc 'a-zA-Z0-9' | fold -w 64 | head -n 1

Output should be a random string of 64 characters like: TDfdIXPBWDOqaVsR5erVJLKdqPqIAsrvfvEtgHfY8oZ06F5NMYnhdbHhVbneDiTF

Now we need to edit the config. You can use nano or your favorite text editor.

root@matrix-test:~# nano /etc/matrix-synapse/homeserver.yaml

Search for the parameter in nano with CTRL + W and enter registration_shared_secret.

Ensure that the line looks like this:

registration_shared_secret: "TDfdIXPBWDOqaVsR5erVJLKdqPqIAsrvfvEtgHfY8oZ06F5NMYnhdbHhVbneDiTF"

We’ll also need to enable TLS support for the web client and add the paths for our certificate and private key.

Make sure the web_client line looks like this:

web_client: True

Now we’ll add our certificate and private key to the config. The lines should look something like this.

tls_certificate_path: "/etc/matrix-synapse/fullchain.pem"

tls_private_key_path: "/etc/matrix-synapse/privkey.pem"

Save and exit your text editor after you’ve followed the steps above. We can now enable matrix-synapse to start on boot, and start the service!

systemctl enable matrix-synapse

systemctl start matrix-synapse

If everything checks out, the service should have started successfully. If not, you can check its status to see why it failed with:

systemctl status matrix-synapse
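Since Debian 9 uses systemd, you can also review the service’s log output in the journal for more detail:

root@matrix-test:~# journalctl -u matrix-synapse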

Now we’re ready to set up our first user. This command will allow you to register a user and make it the administrator. You can also use this command to register normal users. By default, Matrix Synapse is not configured to allow public registration.

register_new_matrix_user -c /etc/matrix-synapse/homeserver.yaml https://localhost:8448

We’ve got our first user, now we’re going to have to pick a Matrix chat client. You can see a list of clients here but in this tutorial we’ll be using Riot on a Windows VM. It has very good support and is cross-platform. Chats can also be end-to-end encrypted with Riot! Go here to download it. 

Once you have it installed for your platform of choice, launch it. You’ll be greeted with a window similar to the one below. Click “Login”.

You’ll then need to enter your server’s details along with the credentials you set for your administrator account.

After you’ve signed in you’ll be greeted with a similar interface. Let’s create our first room by pressing the + button on the bottom left of the window.

We’ll just name it “Admin Room” for this test.

Now we’ve got our own room that we can invite other users to!

Need to know how to do more with Riot? They have a great FAQ with a few video tutorials on how to perform some basic tasks. 

While administering a Matrix server might involve a bit of a learning curve, it’s worth it if you value having control of your own data. If you want to dive deeper into how to set up other Matrix Synapse features, I highly recommend heading over to their GitHub page.


What is Syncthing?

Syncthing is a decentralized file synchronization tool. It shares similarities with commercial cloud storage products you may be familiar with, like Dropbox or Google Drive, but unlike these cloud storage products, it does not require you to upload your data to a public cloud. It also shares similarities with self-hosted cloud storage platforms like ownCloud or NextCloud, but unlike those products, it does not require a central server of any kind.

Syncthing works off of a peer-to-peer architecture rather than a client-server architecture. Computers attached to your Syncthing network each retain copies of the files in your shared folders and push new content and changes to each other through peer-to-peer connections. Unlike other peer-to-peer software you may be familiar with, like file sharing applications, Syncthing uses a private sharing model, and only devices specifically authorized with each other can share files. All communication between the peers is encrypted to protect your private data from man-in-the-middle attacks.

My Use Case

In my case, I have a library of almost 3TB of data consisting of over 250,000 files in over 20,000 directories. Most of these files average between 5MB and 100MB in size. There are currently 4 people working on the project who need access to the files. We each need the ability to add, remove, and edit items in the library with the changes synchronizing out to everyone else.

Faced with the challenge of mirroring this rather considerable amount of data between multiple computers, we have gone through a variety of solutions to decide what will work best for us.

The original setup was a central server where we all pulled backups via rsync. This had the advantage of simplifying change synchronization, since we always trusted everything on the central server to be the latest copy of the data. However, it made it more difficult to make updates since we would have to login to FTP to update the data, even though we all had a local copy. The real Achilles heel of this method, for us, was the need for a central server which raised the cost of our project, especially considering the amount of data we were hosting.

We looked into alternative cloud sync tools such as ownCloud and NextCloud, but again these do require a central server. We could have made one of our home servers the central server, but that would have consumed a lot of bandwidth for one of us. We looked at cloud storage solutions as well, but due to security concerns and the sheer cost of hosting 3TB on the cloud at the time, this didn’t seem practical for us either.

Enter Syncthing – a peer-to-peer file synchronization tool without the need for a central server. This solution was set up to be the most cost effective and simple way for us to manage our collections. Once set up, our computers would propagate changes to each other, still using a bit more bandwidth than we did when we had the central server, but at least that burden was distributed equally. It seemed like a great solution since we all had a copy of the files anyway and wanted it to stay that way, and since it allowed us to also begin editing the files locally on our own machines rather than going through a process to get them on the central server.

With all of these benefits, we decided to give it a try, and started using it day to day. It was impressive how we were each able to connect our existing folders (since they were rsynced with each other up to this point, they had the same contents). So we didn’t even have to go through a painful initial sync process. Once Syncthing was set up on all of our machines, it scanned the files and communicated with the other peers to make sure everyone had the same content. Once that was complete, everything was in sync and we were ready to go.


How Syncthing Works

Syncthing enables the sharing of folders on your computer in a peer-to-peer manner. There is no central server or authority to manage the files, and you authorize peers in your client to allow them to connect and begin sharing the folder with you.

Peers connect directly to each other over the Internet in order to share data. This is the fastest and most secure method offered by Syncthing, since the data goes directly from one peer computer to the other with no central server or middle man handling the data. This method does require a firewall port to be opened on your network in order to communicate with peers that aren’t on the same network. By default, Syncthing uses TCP port 22000 for this purpose.

If you are syncing between servers or other Internet connections that have a static IP address, you could easily lock down your firewall to only allow connections to this port from the known IP addresses of your other peers, for additional security if that is a concern for you.
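As an illustration (the peer address 203.0.113.25 here is just a placeholder), an iptables rule pair like the following would accept the Syncthing port only from a known peer and drop it from everyone else:

# Allow Syncthing sync traffic from a known peer, drop it from anyone else
iptables -A INPUT -p tcp --dport 22000 -s 203.0.113.25 -j ACCEPT
iptables -A INPUT -p tcp --dport 22000 -j DROP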

In some cases, direct peer connectivity is simply not possible, such as if you are behind a corporate or school network’s firewall or carrier NAT where you do not have access to the router to ask for a port to be forwarded. In these cases, Syncthing still is able to work, but it will adjust its connection strategy.

If connectivity is not possible directly between peers for any reason, Syncthing will fall back to using a relay server. In this case, you are adding a middle man to your connection, which generally does result in reduced performance. However, since Syncthing uses end-to-end encryption, these relay servers should not be able to see what data you are relaying through them.

The public relay servers used by default are operated for free by members of the community, and anyone can run a Syncthing relay. Relay servers do not store any data; they simply act as a proxy between peers that are unable to connect directly. So, you do not need a server with a lot of disk space to run a relay, but they can use a lot of bandwidth.

In some cases, you may need to use the relay functionality but do not want to rely on public relays out of security concerns, or maybe you simply want to have better performance by running your own private relay. Syncthing makes this possible as well through private relay pools. This still does create a centralized point for your Syncthing environment, but it is only used if the peer-to-peer connection is not possible. If you set up your Syncthing relay on a high speed server provider, like GigeNET, you can rest assured that your relay will operate in a fast and secure manner while you continue using Syncthing to enhance your project.

If you are interested in running a relay, be it a public relay for the good of the community or a private relay for your own project using Syncthing, the official documentation on the process can be found here.

Great, So How Do I Install Syncthing?

A typical Syncthing installation will simply use the Syncthing core application, which provides a command line tool and a Web UI. You can download the version of Syncthing for your operating system; there are pre-built packages for most Linux distributions, Windows, macOS, and other popular operating systems.

The exact procedure for installing may vary from system to system, but for most Linux platforms, you simply need to download and extract a tar.gz archive, then run the Syncthing binary to launch the program.
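As a rough sketch on a 64-bit Linux machine (the version number below is only an example; grab the current release archive from the downloads page):

# Download and extract the release archive (substitute the current version)
wget https://github.com/syncthing/syncthing/releases/download/v1.23.0/syncthing-linux-amd64-v1.23.0.tar.gz
tar -xzf syncthing-linux-amd64-v1.23.0.tar.gz
cd syncthing-linux-amd64-v1.23.0
# Run the binary; the Web UI starts along with it
./syncthing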

By default, the Web UI is available at http://localhost:8384/ while Syncthing is running. You can access the Web UI on the local computer through a web browser, or by setting up an SSH tunnel if Syncthing is running on a remote server.
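For example, one way to reach the Web UI of a remote Syncthing instance is an SSH tunnel that forwards a local port to the server’s localhost-only listener (the user and hostname here are placeholders):

ssh -L 8384:localhost:8384 user@your-remote-server

With the tunnel open, browsing to http://localhost:8384/ on your local machine reaches the remote Web UI.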

Additionally, you can configure Syncthing’s Web UI to listen on other IPs besides localhost if the need arises. Further documentation on this process is available here.

Connecting To Your First Peers

Connecting another peer to a shared folder for the first time is a very straightforward process. You will need to know their Device ID, which you can obtain by going to Actions > Show ID in the upper right corner of the web UI. The Device ID is an alphanumeric string that looks similar to a product license key.

To add the peer, click on the “Add Remote Device” button, which you’ll find toward the bottom left corner of the web UI. On this dialog, enter the device ID provided by your peer who you wish to connect.

You can enter anything you want for the Device Name; it is for your reference only, so you know who the peer is. Generally, you can leave the address setting as “dynamic”, which will allow Syncthing to autodiscover the remote address for you.

If you would like the new peer to be able to add other devices to your shared folder, you can add them as an “Introducer” by checking that checkbox. This way, if your peer authorizes a new device on the folder, that peer will be introduced to you and you will begin sharing with them directly without any other steps required.

If you would like the peer to be able to create new shared folders and add them to your Syncthing easily, you can check the “Auto Accept” checkbox which will allow them to do just that.

Lastly, you simply need to check any checkboxes next to folders that you want to share with this peer. Once all of these steps are completed, simply click save, and allow Syncthing some time to connect to the peer. You should be on your way to syncing!

Is Syncthing Perfect?

No, of course not. Syncthing is a free open source application, and it’s not without its imperfections, but it works pretty well and development continues on the project every day. I still plan to use it for a long time to come, despite its imperfections.

I’ve found that with my massive library of files, the default rescan interval is too frequent for me and creates excessive server load. If you are sharing a very large library (say, hundreds of thousands of files), you too may want to increase your scan interval. Keep in mind that this will increase the time between a change being made and Syncthing propagating that change out to your peers. If you want to change this setting, you can do so by clicking the Edit button on the specific shared folder in the web UI and adjusting the “rescan interval” value under advanced settings. I set mine to 36000 seconds (10 hours) to keep my server load down, since I don’t add files that often. Even with this scan interval, if I want to push changes out right away, I can simply go to the web UI and click the rescan button to initiate an immediate scan.

Another pet peeve of mine is I’d like to see better support for the propagation of deletion events. I’ve found that if I delete a file while a peer is disconnected from Syncthing, when that peer eventually reconnects, they will sync back my deleted file to me. This can get really annoying, and sometimes causes me to hold off on making changes if one of my peers is offline for some reason. I would like to see some kind of global “deletion event roster” so that these delete events are not ignored by reconnecting peers, but it seems that Syncthing isn’t doing that yet.

I do sometimes have trust issues with Syncthing, because I’ve encountered some glitches in the web UI that make it seem like there could be a problem, but most of these concerns have been unfounded and Syncthing has done a great job managing my data. I’ve had some instances where the web UI will say that I am hundreds of gigabytes out of sync with my peers, and it appears to be actually syncing data, but not really using any bandwidth. Glitches like this reduce my confidence, but after using it safely for some time, I have learned to trust it even when the web UI is acting bizarrely.

Conclusion

Overall, what Syncthing accomplishes is a challenging task to pull off, and it does a pretty good job of it. I would love to see further development on the project, and I’ve seen new functionality and better interface polishing introduced in the timeframe that I’ve been using it. I think it will only continue to improve with the passage of time, and I definitely think it’s worth a serious look for your file synchronization needs.


During day-to-day server administration, there are a variety of important system metrics to analyze in order to assess the performance of the server and diagnose any issues. A few of the most important hardware metrics for a system administrator to monitor are CPU usage, memory usage, and disk I/O. Log data from applications themselves can be equally important when it comes to diagnosing problems with specific programs or websites running on a server.

With that in mind, I am outlining some basic tools I use as a system administrator which are either commonly bundled with Linux distributions, or easily installable from software repositories, and can greatly aid in diagnosing server issues or checking up on the health of the server day to day.

1. atop

If you are a Linux administrator or a system administrator familiar with the command line, you have probably heard of the “top” utility for monitoring system resources and running programs. It is similar to the Task Manager utility on a Windows system.

Atop is a utility similar to top that provides a more detailed look into important server metrics, making it an even more helpful tool for identifying performance issues.

Atop provides a detailed breakdown of system resource usage such as:

  • CPU usage, both overall and by process ID.
  • System load average.
  • Breakdown of memory usage, overall and by process ID.
  • Disk I/O statistics per physical disk, as well as per LVM volume if you use LVM.
  • Network usage statistics, broken down by network interface.
  • A top-style process list breaking down programs running and sortable by resource usage.

Atop Service

Atop can also be run as a service on the machine. When running as a service, atop will record a snapshot of its statistics every few minutes and record the data to a log file. These log files can then be played back later using the atop utility to review the historic data. This can be incredibly useful in cases where the server is going down for an unknown reason. You can then go back to the historic atop logs and see if a program began consuming a lot of resources just before the server went down.

In order to launch Atop as a service and configure it to run at boot time, you can simply use the following commands:

chkconfig atop on
service atop start
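On systemd-based distributions such as CentOS 7, the equivalent systemctl commands are:

systemctl enable atop
systemctl start atop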

Reviewing Historic Logs

By default, atop logs are stored in daily log files located in /var/log/atop/ with files rotated and renamed based on the date of the log.

To load a historic atop log and view its contents, you can simply open the log with “atop -r”, for example:

atop -r /var/log/atop/atop_20180315

Installing Atop

Atop is not included by default in most Linux distributions. You can check with your specific distribution resources to see where it is available for you.

On CentOS 7, the package is available in EPEL and can be installed in this way:

# Add the EPEL repository to your system
yum install epel-release
# Install the atop package
yum install atop

2. mysqltuner.pl

The “mysqltuner.pl” utility is a third party Perl script which provides fantastic insights into MySQL performance and tuning needs.

I have used this utility on many occasions to optimize poorly performing database servers in order to alleviate high load conditions without requiring hardware upgrades or even any changes to the usage pattern of the databases. Often, MySQL performance can be greatly improved by simply tweaking some basic options in the configuration.

Note: In order to get the best results from this tool, you should wait to run it if you have recently restarted MySQL. Some of the recommendations will not be accurate until MySQL has been running for some time (at least 24 hours) under normal activity.

A few examples of helpful information provided by the mysqltuner.pl utility include:

  • Memory usage information (maximum reached memory usage, maximum possible memory usage given the configuration values)
  • Statistics on amount of slow queries
  • Statistics on server connection usage
  • Statistics on table locks
  • Statistics specific to MyISAM, such as key buffer usage and hit rate.
  • Statistics specific to InnoDB, such as buffer pool use and efficiency.

Additionally, the tool provides recommendations for adjustments to common configuration options in the /etc/my.cnf configuration file. It may recommend adjustments to settings pertaining to things like the query cache size, temporary table size, InnoDB buffer pool size, and other settings.

As with any tool, it is important to exercise your experience as well as common sense when handling its recommendations. Some recommendations made by the tool could result in introducing instability to the MySQL server. For example, if your server is running low on memory already, increasing cache sizes dramatically can cause MySQL to exhaust the rest of your server’s available memory rapidly.

For this reason, I personally always go back and run the tool a second time after applying new settings, paying special attention to the statistic on maximum possible memory usage. That statistic will be accurate even when running MySQL immediately after a restart, although some other statistics provided by the tool may not be. The permissible range here can vary depending on the use case of your server. Safe values could be as high as 90% or higher on a dedicated MySQL server with very little other software running, but on a server with a lot of other programs running such as a cPanel server, allowing MySQL to use this much memory could exhaust the memory needed for other resources.

Obtaining & Running This Tool

The mysqltuner.pl tool is not usually packaged with a Linux distribution or with MySQL. The creator provides it for download on Github. It can be obtained here. The creator also maintains a short link domain to the tool: https://mysqltuner.pl
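For example, one common way to fetch it from the command line (assuming the short link still serves the current script) is:

wget http://mysqltuner.pl/ -O mysqltuner.pl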

Once you’ve downloaded the tool, you can execute it by running this command:

perl mysqltuner.pl

3. ss

ss is a command line utility that can be used to gain insight into network connections and open sockets on your server. The tool is included in the iproute2 package and is intended as a substitute for netstat. It is also notably faster than netstat.

A common use for ss is to check open TCP or UDP ports on the server. This can be useful for creating firewall rules or checking whether a service is really listening on the port you have configured it to listen on.

The commands to run for these types of uses would include:

# Show listening TCP ports on the server
ss -lt
# Show listening UDP ports on the server
ss -lu
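If you also want numeric port numbers and the process that owns each socket, you can add the -n and -p flags (run as root so -p can show every process):

# Show listening TCP ports with numeric ports and owning processes
ss -ltnp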

Another common use would be checking open connections to the server, which can be helpful for determining the connection volume or whether a connection is open between your server and another IP address.

The commands to run for these types of uses would include:

# Show open TCP connections
ss -t
# Show open UDP connections
ss -u

4. grep

Grep is a very helpful tool for “finding a needle in a haystack.” If you have a lot of text you need to sort through, such as log files or a folder full of configuration files, grep can greatly simplify the task.

A common use for grep is finding log data pertaining to some event, such as sifting through Apache log data to find access attempts matching specific criteria. For example, if your Apache log file is stored in /var/log/httpd/access.log, you could use commands like these to find relevant log lines.

A few examples:
cat /var/log/httpd/access.log | grep "the text you are searching"
cat /var/log/httpd/access.log | grep index.html
cat /var/log/httpd/access.log | grep 127.0.0.1
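Note that grep can also read files directly, so the cat is optional; the last example above is equivalent to:

grep 127.0.0.1 /var/log/httpd/access.log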

Grep is also useful for sorting through the output of other commands, such as the “ss” command covered earlier.

For example, if you are looking for established TCP connections, you could run “ss -t” and pass it to grep like so:
ss -t | grep ESTAB

If you are looking for TCP connections to/from a specific IP, you can find that too!
ss -t | grep 127.0.0.1

A more advanced use of Grep is searching through files to find files containing a string of text. This can be useful if you are searching through multiple configuration files for a setting with a known value, but unknown location.

A few examples of searching folders with grep:
grep -r "text you want" /path/to/search/
grep -r "mysql" /home/user/public_html/
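Two flags that often help when searching directories like this are -i for a case-insensitive match and -l to print only the names of matching files. For example, to list the files under a site's document root that mention mysql in any case:

grep -ril "mysql" /home/user/public_html/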

5. nc

Nc is a command line utility for establishing connections to servers and interacting with the service running on a given port. It is an alternative to an older command line utility called Telnet. It is useful for testing connectivity and responses from services on a server.

You can use nc to see if a TCP connection is working, which can help in diagnosing service issues like a firewall blocking a port. The tool can connect to any TCP socket service, including protocols such as HTTP, XMPP, MySQL, or even Memcached.

In order to use the tool to interact with a specific service, beyond testing connectivity, you may need to know some specifics of the protocol so that you know what to “say” to the server in order to get a response.

Test Connectivity to HTTP

It is very simple to use nc to test an HTTP web server; you would run this command:
nc server.address.com 80

After connecting, you would use this command on the prompt to request a URL from the web server:
GET /
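Note that many modern web servers expect a full HTTP/1.1 request with a Host header, so if a bare GET / gets no response, try the following instead, then press Enter on an empty line to complete the request:

GET / HTTP/1.1
Host: server.address.com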

Test Connectivity to SMTP

Testing connectivity to an SMTP server is a slightly more advanced process, but still very straightforward. Sometimes these steps are recommended by email blacklist RBLs to test connectivity to a mail server and check any errors that are encountered.

To connect to an SMTP server, you would use this command:
nc server.address.com 25

Once connected, you can use these SMTP commands to send a test email:

HELO yourdomain.com
MAIL FROM:sender@address.com
RCPT TO:recipient@address.com
DATA
Type the email message data, then end with a line containing only a period
.
QUIT

SSL Alternative

Nc is not designed to connect to services that are SSL enabled. If you are using an SSL service, it is better to use the OpenSSL command line utility. Other than the commands to connect, the process is the same.

The basic command format is: openssl s_client -connect server.address.com:port

So, to connect to an HTTPS server, you could run the following command:

openssl s_client -connect website.com:443
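If the server hosts multiple HTTPS sites on one IP address, you may also need the -servername option so that SNI selects the right certificate:

openssl s_client -connect website.com:443 -servername website.com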

Once the client is connected, you can run protocol commands in exactly the same manner as with nc. This way you can perform the same tests or commands to the SSL enabled service.

Conclusion

While no short blog post can comprehensively cover all of the tools needed in the day-to-day life of a Linux administrator, and many very common, well-known tools are not covered here, hopefully these insights provoked some new thought, and these simple tools will send you down a path toward discovering more in-depth information about your Linux systems.

Sound like a hassle? Let us manage your systems.

What is Ansible?

Ansible is a world-leading automation and configuration management tool. At GigeNET we invest heavily in Ansible as our automation backbone. We use it to manage deployments across a large array of our platforms, from our public cloud’s 1-Click applications to the desired configuration of our own internal systems.

Have you recently asked yourself questions like “Why should we invest our resources in Ansible?”, “How can Ansible simplify our application deployments?” or “How can Ansible streamline our production environments?” In this blog post I will demonstrate how easy it is to kick-start Ansible development and how simple it is to start building your desired infrastructure state.

The Basics.

The information technology industry likes to develop new terminology. Ansible is no exception and has coined its own terminology for its toolkit.

Key Ansible terms:

Playbooks: A set of instructions for Ansible to follow. This normally includes a target to run these instruction sets on, a collection of variables for the roles you execute, and the roles themselves that will be executed.

Inventory: A group or collection of systems to execute your playbooks against.

Tasks: A task is a YAML statement that tells Ansible what actions to perform. These statements often involve calling modules.

Roles: A series of tasks that work together to execute the desired state you set out to design.

Modules: A prebuilt script that Ansible uses to run actions on a target. A full list of built-in modules is documented on Ansible’s website here.

The Workstation Setup.

The recommended setup for Ansible is to have a centralized server setup for your “Workstation.” The workstation is where you will keep all of your playbooks, your roles, and manage your system inventory. The install process of Ansible is pretty relaxed and only has a single requirement: You must have python installed.

How to set up your workstation on CentOS 7:

The first thing we will need is to ensure we have Python and the python-pip extension installed. On CentOS 7, python-pip is provided by the EPEL repository, so enable that first.

[ansible@TheWorkstation ~]$ sudo yum install epel-release -y
[ansible@TheWorkstation ~]$ sudo yum install python-pip -y

With python-pip installed, we will install the Ansible tools through pip. Most operating systems have a system package for Ansible, but it has too many limitations for my taste: you must wait for the package maintainer to update the Ansible version, and often they are behind what is considered stable. Pip is a package manager, but it only manages Python packages. In this case we will use pip to perform the install, then configure the Ansible configuration file manually to suit our needs.

[ansible@TheWorkstation ~]$ sudo pip install ansible==2.4.0.0
[ansible@TheWorkstation ~]$ mkdir Playbooks Inventory roles modules

Place this configuration file into the Playbooks directory. We will use these configuration flags to prevent host key problems and to set the paths of our roles and modules directories. In a production environment you will want to keep host key checking enabled due to the security implications. You can read more about the configuration options here.

[ansible@TheWorkstation ~]$ cat << EOF > Playbooks/ansible.cfg
> [defaults]
> host_key_checking = False
> library = /home/ansible/modules
> roles_path = /home/ansible/roles
> EOF
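You can confirm that the pip-installed release is the one on your path by checking the version:

[ansible@TheWorkstation ~]$ ansible --version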

The Inquisition.

Let’s get our hands dirty and dive into the actual development of an Ansible role. It’s best to think of roles as a set of instructions for Ansible to follow. The initial creation of a role involves building out its recommended directory structure. We will build a small role in the Playbooks directory that updates the system and installs Nginx. Let’s get started!

[ansible@TheWorkstation Playbooks]$ mkdir -p roles/MyRole/tasks roles/MyRole/handlers roles/MyRole/files roles/MyRole/templates roles/MyRole/defaults
[ansible@TheWorkstation Playbooks]$ touch roles/MyRole/tasks/main.yaml roles/MyRole/templates/main.yaml roles/MyRole/defaults/main.yaml

Before we start building the Ansible tasks, you’ll need to have a desired configuration goal in mind. My natural first step is to determine what I want accomplished and what state I want the system to be in. For this example, our goal is to build a simple Nginx role that leaves the system with Nginx installed and a simple website displayed. To get to this desired system state, I normally spin up a virtual machine on VirtualBox or on a cloud instance provider like GigeNET. Once I have a temporary work environment, I document each command used to get to my stated goal.

These are the manual commands required to get a simple Nginx configuration on CentOS:
[ansible@TaskBuilder ~]# sudo yum update -y
[ansible@TaskBuilder ~]# sudo yum install epel-release -y
[ansible@TaskBuilder ~]# sudo yum install nginx -y
[ansible@TaskBuilder ~]# sudo service nginx start
[ansible@TaskBuilder ~]# sudo chkconfig nginx on

You should now be able to view in your browser a “Welcome to Nginx” website on the temporary environment.

Now that I know the tasks required to build the role, I can start translating these commands into Ansible modules. I start by researching the modules in the link listed previously in this blog post. We used “yum” in our manual adventure, so I’ll look for the “yum” module in the website listing. Below is a screenshot that documents the module; you can click on it for a more detailed summary.

With the module documentation in hand, we can start translating our commands into Ansible tasks. We will use two parameters of the yum module: name and state. The yum module’s page documents these parameters and how to use them.

Name: Package name, or package specifier with version.
State: Whether to install (present or installed, latest), or remove (absent or removed) a package.

Now that we have the correct module information let’s translate it to something usable on our workstation. Ansible looks for the main.yaml file under the tasks directory to initiate the role.

Here is one of the main files we touched earlier:
[ansible@TheWorkstation ~]$ cat << EOF > roles/MyRole/tasks/main.yaml
> - name: Upgrade all packages
>   yum:
>     name: '*'
>     state: latest
>
> - name: Install the EPEL repository
>   yum:
>     name: epel-release
>     state: latest
>
> - name: Install Nginx
>   yum:
>     name: nginx
>     state: latest
> EOF
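The manual steps also started Nginx and enabled it at boot, which the tasks above don’t yet do. A minimal sketch of the corresponding task, using Ansible’s service module, could be appended to the same main.yaml:

- name: Start and enable Nginx
  service:
    name: nginx
    state: started
    enabled: yes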

The Inventory File.

The Ansible inventory file is a configuration file where you designate your host groups and list each host under said group. With larger inventories it can get quite complex, but we are only working toward launching a basic role at this time. In our inventory file we create a group named target, and we set the IP address of the host we want our playbook to run the role against.

[root@TheWorkstation ansible]# cat << EOF > Inventory/hosts
> [target]
> 199.168.117.102
> EOF

The Playbook.

Now that we have a very basic role designed, we need a method to call it. This is where the Ansible playbook comes in. You can view the role as a single play in an NFL coach’s arsenal and the inventory as the actual team. The playbook is the coach, and the coach decides which plays the team runs on the field. Previously we built an inventory file with a group named target. In the playbook we designate that our hosts will be every system under the target group. We then tell the playbook to use our role, MyRole.

[root@TheWorkstation ansible]# cat << EOF > Playbooks/MyPlay.yaml
> ---
> - hosts: target
>   roles:
>     - MyRole
> EOF

The Launch.

Now that we have the very basics finalized, it’s time to launch our very first Ansible playbook. To launch a playbook, you simply run ansible-playbook with the inventory file and the playbook we configured earlier.

[ansible@TheWorkstation Playbooks]$ ansible-playbook -i ../Inventory/hosts MyPlay.yaml -k
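If you’d like to validate the playbook before making any changes, ansible-playbook also supports a syntax check and a dry run:

[ansible@TheWorkstation Playbooks]$ ansible-playbook -i ../Inventory/hosts MyPlay.yaml --syntax-check
[ansible@TheWorkstation Playbooks]$ ansible-playbook -i ../Inventory/hosts MyPlay.yaml -k --check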

If everything worked out, you will see Ansible run through each task and finish with a play recap showing zero failed tasks.

What is Cloud

Cloud computing is an operations model, and a cloud server (or VM) is the productization of that operations model.

When it comes to the actual cloud server, there is no cloud layer or cloud software. Software called a hypervisor is used to abstract resources through virtualization. Cloud itself is not a technology, but a bundle of technologies and procedures.

With your average physical server, the physical hardware along with its operating system defines the minimum and maximum resources available. However, because cloud servers are abstractions, they are not bound by these predefined limits. Instead, cloud servers started out as user-defined: every time you spun up a cloud server, you defined its resource allocation. That is all well and good, but abstraction can lead to so much more.

Cloud servers eventually moved from user-defined to software-defined. In other words, instead of a user directly orchestrating cloud server deployments, the user creates a list of rules, and cloud servers are deployed and defined by those rules. These rules create an “elastic” server: a server that can both expand and contract based on rules. The rules also create “cattle” servers: servers that start, stop, and transfer data all based on rules. We call them cattle because they are servers we are not attached to (whereas I named every one of my personal computers after computer villains; true story, GLaDOS is my current one).

The next technology that folds into cloud is API (Application Programming Interface). Let’s say you have a piece of software and I have a piece of software. My software creates cloud servers through a set of rules; your software installs software packages onto servers. Through the use of an API, your software can connect to mine and take the system requirements of an application and use that to provision a server. Now when a user clicks Create a WordPress site on your system, your API tells my system to provision a cloud server with the appropriate resources and to install PHP, Apache (I would say NGINX, but today I feel like saying Apache), MySQL, and WordPress. This process happens in seconds.

Lastly, it is not just servers that can be provisioned with the cloud operations model, but network gear as well. Going back to the previous example, instead of one cloud server, we can have four cloud servers spun up along with two load balancers and two firewalls. Of the four servers, two run WordPress and two run MySQL. Each component is orchestrated automatically and at speed to create a complex solution.

And all of this leads to software-defined datacenters. Full orchestrations of company IT infrastructure using a set of rules.

Now the inherent strengths or flaws in the cloud system depend on the service provider. The base server hardware of the service provider’s cloud carries tremendous weight in the overall system health of the customer’s cloud servers. Internal network connection speed is also vital in determining overall efficiency and speed. Lastly, the often ignored, physical proximity of the servers that make up your cloud can also have a lasting effect on your system’s overall health.

GigeNET standardized our cloud servers on the Xeon-D platform. The Xeon-D family was based on a partnership between Facebook and Intel (for more information on that partnership, check Facebook’s blog) in a play to increase computing power while reducing power consumption. The Xeon-D allows us to increase the capacity of our cloud and provide resources at impressive amounts of scale.

And if the public cloud is not to your liking, we are currently working on automated orchestrations of private cloud infrastructure to turn a months-long project into just a handful of hours.

Our customers talked to us about what bothers them most about cloud, and not only did we listen, we are making the solution a reality.

There is a lot of noise these days about this soon-to-be-implemented EU regulation, the GDPR (General Data Protection Regulation), making the topic hard to miss — but how much do you understand about the GDPR, and to what extent can it impact your U.S.-based business?

What is this GDPR thing, and why should you care?

Adopted by the European Union on April 27th, 2016, and scheduled to become enforceable on May 25th, 2018, the GDPR is a regulation designed to greatly strengthen an EU citizen’s control over their own personal data. In addition, the regulation is meant to unify the myriad of regulations dealing with data protection and data privacy across member states. Finally, its reach also extends to the use and storage of data by entities outside of the EU (Spoiler Alert! This is the part that affects us).

Enforcement of the provisions within the GDPR is done via severe penalties for non-compliance, with fines up to €20 million, or 4% of worldwide annual revenue (whichever is greater). Now, as a non-EU entity, you may think that your company won’t be subject to these fines, but that is incorrect. If you are a U.S. company that collects or processes the personal data of EU citizens, EU regulators have the authority and jurisdiction, with the aid of international law, to levy fines against you for non-compliance.

In addition, your EU-based clients can be held accountable for providing personal information to a non-compliant third party (your company). This is a strong incentive for EU-based citizens and companies to insist on working only with GDPR-compliant third parties, potentially costing your company all of its EU-based business.

As you will soon realize, the GDPR is a vast set of regulations, with a large scope and sharp teeth. I cannot possibly go into enough detail in a blog post to map out a roadmap towards compliance, and neither is that my goal. If that is what you are looking for in a blog post, well, maybe you shouldn’t be responsible for anyone’s personal data….

No, my intent here is to demonstrate the importance of the GDPR, hopefully convince you to take it seriously and start down the road to compliance, and finally to point you in the right direction to start your journey.

The expanding scope

The GDPR expands the definition of personal data in order to widen the scope of its protections, aiming to establish data protection as a right of all EU citizens.  

The following types of data are examples of what will be considered personal data under the GDPR:

Does your company collect, store, use or process anything the GDPR considers personal data related to an EU citizen? If you have any EU clients or customers, or even just market to anyone in the EU, it is unlikely you can avoid being subject to the GDPR.

The EU is seeking to make data privacy for individuals a fundamental right, broken down into several more-precise rights:

  • The right to be informed
      • A key transparency issue of the GDPR
      • Upon request, individuals must be informed about:
        • The purpose for processing their personal data
        • Retention periods for their personal data
        • All 3rd parties with which the data is to be shared
      • Privacy information must be provided at the time of collection
        • Data collected from a source other than the individual extends this requirement to within one month
      • Information must be provided in a clear and concise manner.
  • The right of access
      • Grants access to all personal data and supplementary information
      • Includes confirmation that their data is being processed
  • The right to rectification
      • Grants the right to correct inaccurate or incomplete information
  • The right to erasure
      • Also known as “the right to be forgotten”
      • Allows an individual to request the deletion of personal data when:
        • The data is no longer needed under the reason it was originally collected
        • Consent is withdrawn
        • The data was unlawfully collected or processed
  • The right to restrict processing
      • This blocks processing of information, but still allows for its retention
  • The right to data portability
      • Allows an individual’s data to be moved, copied or transferred between IT environments in a safe and secure manner.
      • Aimed at allowing consumers access to services which can find better values, better understand spending habits, etc.
  • The right to object
      • Allows an individual to opt-out of various uses of their personal data, including:
        • Direct marketing
        • Processing for the purpose of research or statistics
  • Rights in relation to automated decision making and profiling
      • Limits the use of automated decision making and profiling using collected data

Sprechen Sie GDPR?

Before diving deeper, it is important to understand some key terms used by the regulation.

The GDPR applies to what it calls “controllers” and “processors.”  These terms are further defined as Data Controllers (DCs) and Data Processors (DPs).  The GDPR applies differently in some areas to entities based upon their classification as either a DC or as a DP.

  • A Controller is an entity which determines the purpose and means of processing personal data.
  • A Processor is an entity which processes personal data on behalf of a controller.

What does it mean to process data?  In this scope, it means:

  • Obtaining, recording or holding data
  • Carrying out any operation on the data, including:
    • Organization, adaptation or alteration of the data
    • Retrieval, consultation or use of the data
    • Transfer of data to other parties
    • Sorting, combining or removal of the data

The Data Protection Officer, or DPO, is a role set up by the GDPR to:

  • Inform and advise the organization about the steps needed to be in compliance
  • Monitor the organization’s compliance with the regulations
  • Be the primary point of contact for supervisory authorities
  • Be an independent, adequately resourced expert in data protection
  • Report to the highest level of management, yet not be a part of the management team.

The GDPR requires a DPO to be appointed to any organization that is a public authority, or one that carries out certain types of processing activities, such as processing data relating to criminal convictions and offences.

Even if the appointment of a DPO for your organization is not deemed necessary by the GDPR, you may still elect to appoint one anyway.  The DPO plays a key role in achieving and monitoring compliance, as well as following through on accountability obligations.

The Nitty Gritty

In addition to expanding the definition of personal data and providing individuals broad rights governing the use of that data, the GDPR provides a number of requirements for organizations, requiring that data shall be:

“a) processed lawfully, fairly and in a transparent manner in relation to individuals;

b) collected for specified, explicit and legitimate purposes and not further processed in a manner that is incompatible with those purposes; further processing for archiving purposes in the public interest, scientific or historical research purposes or statistical purposes shall not be considered to be incompatible with the initial purposes;

c) adequate, relevant and limited to what is necessary in relation to the purposes for which they are processed;

d) accurate and, where necessary, kept up to date; every reasonable step must be taken to ensure that personal data that are inaccurate, having regard to the purposes for which they are processed, are erased or rectified without delay;

e) kept in a form which permits identification of data subjects for no longer than is necessary for the purposes for which the personal data are processed; personal data may be stored for longer periods insofar as the personal data will be processed solely for archiving purposes in the public interest, scientific or historical research purposes or statistical purposes subject to implementation of the appropriate technical and organisational measures required by the GDPR in order to safeguard the rights and freedoms of individuals; and

f) processed in a manner that ensures appropriate security of the personal data, including protection against unauthorised or unlawful processing and against accidental loss, destruction or damage, using appropriate technical or organisational measures.” 

— GDPR, Article 5 

 

Additionally, Article 5 (2) states:

“the controller shall be responsible for, and be able to demonstrate, compliance with the principles.”

This last piece, known as the accountability principle, states that it is your responsibility to demonstrate compliance.  To do so, you must:

  • Demonstrate relevant policies.
    • Staff Training, Internal Audits, etc.
  • Maintain documentation on processing activities
  • Implement policies that support the protection of data
    • Data minimisation
      • A policy of analyzing what data is actually needed for processing, collecting only that data, and removing any excess
    • Pseudonymisation
      • A process that renders data neither anonymous nor directly identifying
      • Achieved by separating data from direct identifiers, making linkage to an identity impossible without additional data that is stored separately.
    • Transparency
      • Demonstration that personal data is processed in a transparent manner in relation to the data subject
      • This obligation begins at data collection, and applies throughout the life cycle of processing that data
    • Allow for the evolution of security features going forward.
      • Security cannot be static when faced with a constantly evolving threat environment.
      • Policies must be flexible enough to protect against not just today’s and yesterday’s threats, but tomorrow’s as well.

The best laid plans…

Despite adherence to these new regulations and the implementation of tight security policies, there is no guarantee that the data you are responsible for keeping safe will remain absolutely secure.  Data breaches are more or less inevitable. With this in mind, the GDPR includes provisions for reporting data breaches when they do happen.

Not sure how to navigate these waters with your current infrastructure? We can help.

A data breach is a broader term than one may think.  The deliberate or accidental release of data to an outside party (say, a hacker) is certainly a breach, but many other incidents qualify as well.

All of the following examples constitute a data breach:

  • Access by an unauthorized third party
  • Loss or theft of storage devices containing personal data
  • Sending personal data to an incorrect recipient, whether intended or not
  • Alteration of personal data without prior authorization
  • Loss of availability, or corruption of personal data

Data breaches must be reported to the relevant supervisory authority within 72 hours of becoming aware of them. Should the breach be likely to result in a high risk to an individual’s rights and freedoms, that individual must also be notified without undue delay. All breaches, reported or not, must be documented.

Bit off more than you can chew?

This may seem like a lot to take in, and it should be.  The GDPR was designed to expand the privacy rights of all EU citizens, as well as replace the existing regulations of all member states with one, uniform set of regulations.

The good news is that, as a U.S. company, you don’t have to take every step toward compliance alone.

The U.S. government, working with the EU, developed a framework to provide adequate protections for the transfer of EU personal data to the United States. This framework, called Privacy Shield, was adopted by the EU in 2016 and has passed its first annual review.

In order to participate in the Privacy Shield program, U.S. companies must:

  • Self-certify compliance with the program
  • Commit to processing data only in accordance with the Privacy Shield guidelines
  • Be subject to the enforcement authority of either:
    • The U.S. Federal Trade Commission
    • The U.S. Department of Transportation

To learn more about Privacy Shield, visit www.privacyshield.gov

How I learned to stop worrying and love the GDPR

Getting compliant with the GDPR may seem like a huge P.I.T.A., but there are real benefits to following this path that extend beyond retaining EU contracts and avoiding hefty fines.  Data privacy is a huge issue worldwide, and complying with one of the strictest sets of regulations will help reassure clients and customers from all corners of the globe. Even if you don’t have any interaction with EU citizens or organizations, becoming GDPR compliant may still be a great idea.

Compliance forces you to evaluate your systems and processes, ensuring that they are secure and robust enough to survive in the ever-changing landscape in which data privacy resides.  This transforms compliance from a tedious duty to a strong selling point.

Click Here to find out how GigeNET can help you!

Securing Memcached Services

Over the past few weeks, a new DDoS attack vector that abuses Memcached has become prevalent. Memcached is an object caching system originally created in 2003 to speed up LiveJournal’s dynamic websites. It does this by caching data in RAM instead of reading it from disk, thus reducing costly disk operations.

Deeper analysis of the security issues:

Memcached was designed to give the fastest possible cache access, so it is not recommended to leave it open on a public network interface. The recent attacks utilizing Memcached take advantage of the UDP protocol and an attack method known as UDP reflection.

An attacker sends a UDP request to a server with a spoofed source address, causing the server to reply to the spoofed address instead of the original sender. On top of reflecting requests toward a victim, attackers can easily add to the cache itself: because Memcached was designed to sit locally on a server, it was never built with any form of authentication, so attackers can connect and plant large values in the cache to amplify the magnitude of the attack.
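To make the amplification concrete, it helps to look at the Memcached ASCII protocol itself. The key name and sizes below are purely illustrative, not taken from any real attack we observed, and the # annotations are explanatory rather than part of the protocol: an attacker who can reach an open instance first plants a large value (up to the default 1 MB item size), then requests it with a tiny command. Sent over UDP with a spoofed source address, that request of a few dozen bytes triggers a response up to a megabyte in size, all of it aimed at the victim.

set junk 0 3600 1000000    # plant roughly 1 MB of attacker-supplied data under the hypothetical key "junk"
(1,000,000 bytes of payload)
STORED
get junk                   # a request of a few dozen bytes; the reply is the full ~1 MB value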

The initial release of Memcached was in May of 2003. Since then, its uses have expanded greatly, but the underlying technology, and its security features in particular, have remained largely unchanged.

Below is a sample packet we captured from a server participating in one of these reflection attacks. This is the layer 3 information of the packet; the source IP is spoofed to point to a victim’s server:

[Screenshot: layer 3 packet headers from the capture]

This is the layer 4 information; Memcached listens on port 11211:

[Screenshot: layer 4 packet headers from the capture]
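The original capture screenshots are not reproduced here, so as a purely hypothetical illustration (using documentation addresses, with 192.0.2.50 standing in for the victim and 203.0.113.7 for the abused Memcached server), a tcpdump line for one of these request packets would look roughly like this: the spoofed source is the victim, and the destination is UDP port 11211 on the open instance.

01:23:45.678901 IP 192.0.2.50.80 > 203.0.113.7.11211: UDP, length 15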

In addition to being usable as a reflector, an exposed Memcached instance lets attackers extract highly sensitive data from the cache because of its lack of authentication. All of the data within the cache has a TTL (Time To Live) value before it is removed, but that does little to stop someone from pulling information out in the meantime.

Below is an example of how easy it is for an attacker to alter the cache on an unsecured server. We simply connected on port 11211 over telnet and were immediately able to make changes to the cache:

[Screenshot: telnet session making changes to the cache]
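That session screenshot is likewise not reproduced here, but the exchange looks roughly like the sketch below, with a hypothetical server IP and key name; the lines after each command are the server’s replies, and the # annotations are explanatory rather than part of the protocol. Note that nothing here asks for a password: the cache accepts reads and writes from anyone who can reach the port.

telnet 203.0.113.7 11211   # connect to the exposed instance (hypothetical IP)
set demo 0 900 5           # create key "demo" with a 900-second TTL and a 5-byte value
hello
STORED
get demo                   # read the value straight back out
VALUE demo 0 5
hello
END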

Solution Overview

In order to decide how to best secure Memcached on your server, you must first determine how your services use it. Memcached was originally designed to run locally on the same machine as the web server.

A: If you don’t require remote access, it is best to bind Memcached only to the local loopback interface and disable UDP entirely.

B: If you require remote access, it is recommended to whitelist the source IPs of the machines that need to access it. This way you control exactly which machines can read from and write to the cache.

Solution Instructions:

If remote access is not required, it is advised to ensure Memcached binds only to localhost (127.0.0.1) on startup.

Ubuntu-based servers:

sudo nano /etc/memcached.conf

Ensure the following two lines are present in your configuration:

-l 127.0.0.1

This binds Memcached to the local loopback interface, preventing any remote access.

-U 0

This disables UDP for Memcached, preventing it from being used as a reflector.

Then restart the service to apply the settings:

sudo service memcached restart

CentOS-based servers:

nano /etc/sysconfig/memcached

Add the following to the OPTIONS line:

OPTIONS="-l 127.0.0.1 -U 0"

Restart the service to apply the settings:

service memcached restart
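On either distribution, you can sanity-check the result once the service has restarted. Assuming the iproute2 ss utility is available, listing the listening sockets should show Memcached bound only to 127.0.0.1:11211 over TCP, with no UDP listener at all:

sudo ss -lntup | grep 11211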

If Memcached needs to be accessed remotely, whitelisting the IPs that are allowed to connect will best secure your server.

Using iptables:

sudo iptables -A INPUT -i lo -j ACCEPT

sudo iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

sudo iptables -A INPUT -p tcp -s IP_OF_REMOTE_SERVER/32 --dport 11211 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT

sudo iptables -P INPUT DROP

Specifying a /32 in the above command limits access to a single server. If multiple servers in a range require access, the CIDR notation for that range can be used instead:

sudo iptables -A INPUT -p tcp -s IP_RANGE/XX --dport 11211 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT
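Two practical cautions about the rules above, based on common setups rather than anything specific to this guide: the default DROP policy blocks every inbound connection you have not explicitly allowed, so make sure rules for SSH and any other management traffic are in place before you apply it, and plain iptables rules do not survive a reboot on their own. Depending on which persistence tooling your distribution uses, saving them might look like one of the following:

sudo netfilter-persistent save    # Debian/Ubuntu, assuming the iptables-persistent package is installed
service iptables save             # CentOS 6/7, assuming the iptables-services package is installed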

Using CSF:

nano /etc/csf/csf.allow

Add the following line to whitelist IPs:

tcp|in|d=11211|s=x.x.x.x

You can also specify a range using CIDR:

tcp|in|d=11211|s=x.x.x.x/xx

tcp = the protocol that will be used to access Memcached
in = direction of traffic
d = destination port number
s = source IP address or CIDR range

Save the file and then reload the firewall rules:

csf -r
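If you want to confirm that CSF picked up the entry, csf can grep the live iptables rules for a given address. Substitute one of the IPs you whitelisted above:

csf -g x.x.x.x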

After whitelisting the IPs allowed to access Memcached, rebind the service to the interface it should communicate on.

On Ubuntu-based servers:

sudo nano /etc/memcached.conf

Change the IP on this line to represent the IP of the interface on your server:

-l x.x.x.x

Then restart the service to apply the settings:

sudo service memcached restart

On CentOS-based servers:

nano /etc/sysconfig/memcached

Change the IP following the -l flag to that of your server’s interface:

OPTIONS="-l x.x.x.x -U 0"

Restart the service to apply the settings:

service memcached restart
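With the firewall rules and the new binding in place, a quick way to verify the behaviour from the outside, assuming a netcat variant that supports the -z scan flag, is to probe the port from both a whitelisted and a non-whitelisted machine; only the former should report the port as open:

nc -vz x.x.x.x 11211    # from a whitelisted IP: the connection succeeds
nc -vz x.x.x.x 11211    # from any other IP: the attempt is dropped and times out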

Conclusion

The best way to secure your server from these vulnerabilities is to prevent Memcached from listening on anything other than the local loopback interface. If the service must be accessed remotely, be sure to secure it adequately with your server’s firewall. Doing so will not only prevent your server from being used in malicious DDoS attacks, but also ensure that confidential data isn’t compromised. Taking the above actions helps the community as a whole and prevents unwanted bandwidth overages.

GigeNET Cloud Adds Package Based Public Cloud for Easier Consumption

GigeNET, an innovator in resilient managed cloud and dedicated hosting, announced, through their subsidiary GigeNET Cloud, a new cloud server offering, GigeNET Cloud Lite.

“Our classic cloud product allowed for extreme levels of granular control over system resources with the ability to scale memory by the megabyte. However, this form of control is no longer a common use case. Instead, customers have grown accustomed to consuming public clouds in a package form, and our Lite Cloud product reflects this reality,” said Ameen Pishdadi, founder and president of GigeNET Cloud.

GigeNET’s Lite cloud product offers self-healing SAN, high availability, live migrations, seamless failover, fastest route optimization, and much more.

Cloud Lite comes in three forms: Balanced, Core, and Max.

Cloud Lite Balanced offers a standard mix of both memory and compute power allocation. These cloud servers provide a best-of-breed experience for a wide range of applications.

Cloud Lite Core offers an increased amount of compute necessary for applications that are CPU intensive. The Core family is suited for high computational scientific workloads, high-trafficked websites, and online gaming.

Cloud Lite Max offers increased memory for high-performance applications. The Max family is excellent for dev environments, performance-intensive databases, and the like. In the next iteration of Max packages, memory will scale up to 64GB, with further increases likely.

“GigeNET Cloud Lite may not have the same level of control as our classic offering, but, like our classic cloud product, it does score consistently higher in both performance and reliability tests when compared to other clouds. We wanted to make sure we kept the innovations GigeNET is known for without compromise and I think the team did just that,” said Joe De Paolo, SVP of Products at GigeNET.

GigeNET’s Cloud Lite offering runs on their standardized KVM platform backed by Xeon-D hardware for consistently high performance and availability.

To learn more about GigeNET and GigeNET Cloud, please visit https://www.GigeNET.com and https://www.GigeNETCloud.com/ respectively.

For a full list of features offered by the new GigeNET Cloud Lite, please visit https://www.gigenetcloud.com/about-us/compare/.

About GigeNET
GigeNET is your Hosting Partner for Life. We are your trusted collaborator for offloading your IT requirements and moving your business forward. As the first company to provide complete and fully automated DDoS protection, GigeNET is the leader in Resilient Cloud and Dedicated Managed Hosting. Businesses partner with GigeNET to solve their IT infrastructure problems with custom solutions that nurture continued growth.

Mitigating the Effects of Data Loss: RAID vs. Backups

GigeNET adds the last server building block to its server line-up, and completely removes legacy hardware

GigeNET, an innovator in resilient managed cloud and dedicated hosting, announced the release of the Dual E5-2630 v4 and the finalization of the GigeNET server line. This 20-core server has space for up to two terabytes of DDR4 RAM and as much as 64 TB of local storage. In preliminary tests, the Dual E5 completed network application workloads faster than any of the other servers we tested.

“The Dual E5 is a monster of a server and allows us to tackle many of the problems our customers saw with private clouds designed by other providers. In a cluster with failover, the memory ceiling is off the charts removing a critical barrier to private cloud adoption,” said Joe De Paola, SVP of Products at GigeNET.

Intel built the E5-2600 v4 series to be the foundation for software-defined enterprise data centers. According to Intel’s benchmark numbers, the dual E5 can support up to 240% more VMs while reducing operation costs by 58%.

“Like the Xeon-D family, the E5-2600 v4 series is a model of efficiency and raw power and augments our recent rollouts. We have made an environmentally conscious decision by standardizing on both of these servers without compromising performance for our customers. We are looking to greatly increase our datacenters’ server density by as much as 10x while using far less power. This is a huge win,” said De Paola.

GigeNET’s Xeon E5-2630 v4 servers start with 128GB of DDR4 and can quickly scale to 2 TB. Rounding out the configuration, every Xeon E5-2630 v4 comes with a 960GB enterprise SSD or an 8 TB enterprise 7200 RPM SATA drive standard. The Xeon E5-2630 v4 can scale up to 8 hard disks in total. Order yours today.
