Web Security


National Cyber Security Awareness Month

Our team has been laser-focused on security-related topics for National Cyber Security Awareness Month this October.  If there is any big takeaway from this exercise, it’s seeing how pervasive cyber ...

Linux Authentication Through Active Directory

Until recently, Linux authentication through a centralized identity service such as IPA, Samba Active Directory, or Microsoft Active Directory was overly complicated. It generally required you to manually join a server or workstation to a company's domain through a mixture of Samba winbind tools and Kerberos krb5 utilities. These tools are not known for their ease of use and can lead to hours of troubleshooting. When Kerberos was not applicable due to a networking limitation, an administrator had to resort to an even more complicated set of configurations with OpenLDAP. This can be frustrating to deal with, and it has led some to deploy custom scripts for user management. I have seen administrators utilize Puppet, Chef, and Ansible to roll out user management. At GigeNET, we are guilty of this with our Solaris systems. The bulk of our architecture is Linux based, and we now manage its authentication through Microsoft Active Directory.

The complexity of joining a domain has since been greatly reduced. The Linux community understood these tools were not ideal to manage and came up with a new solution: the System Security Services Daemon (SSSD). SSSD is a core project that provides a set of daemons to manage remote authentication mechanisms, user directory creation, sudo rule access, SSH integration, and more. Even so, the SSSD configuration can be quite complex on its own, and each component requires you to understand the underlying utilities I brought up in the introduction. While it's good to understand each of these components, it's not strictly necessary, as the Linux community banded together once again to build a few tools that wrap around SSSD. In earlier Linux distributions the tool was called adcli; on most distributions, the integration process is now managed by realmd. You can do most basic domain administration with the realm command. I have added a snippet of how one can easily join a domain:

[root@server001 ~]# realm join -v -U administrator gigenet.local.
* Resolving: _ldap._tcp.gigenet.local.
* Performing LDAP DSE lookup on: 192.168.0.10
* Performing LDAP DSE lookup on: 192.168.0.11
* Successfully discovered: gigenet.local
realm: server001 has joined the domain

As the snippet above shows, realm looks up the domain in DNS and then attempts the join, building out a basic SSSD configuration file in the process. (On the back end, this utilizes Samba's net join command.) If the join was successful, we should now be able to use any user account within our domain. I normally perform a quick SSH login with my domain username. If it succeeds, you should find yourself logged in, with a home directory created for the user account.
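Before moving on, a quick sanity check is worthwhile. A minimal test, assuming a hypothetical domain user named jdoe:

# Confirm the domain user resolves through SSSD
id jdoe

# Log in over SSH; a home directory should be created on first login
ssh jdoe@server001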


While researching SSSD, I didn't find any full-blown examples of a working SSSD configuration file. I believe in transparency, so I have included a templated example of our internal configuration file. It's very basic in design and works on a few hundred internal systems without complaint. Please note we have substituted gigenet.local for our real domain in this example.

[sssd]
domains = gigenet.local
config_file_version = 2
services = nss, pam, ssh, sudo

[ssh]
debug_level = 0

[sudo]
debug_level = 0

[domain/gigenet.local]
debug_level = 0
ad_domain = gigenet.local
krb5_realm = GIGENET.LOCAL
realmd_tags = manages-system joined-with-samba
cache_credentials = True
id_provider = ad
krb5_store_password_if_offline = True
default_shell = /bin/bash
ldap_id_mapping = True
use_fully_qualified_names = False
fallback_homedir = /home/%u
access_provider = simple
simple_allow_groups = Operations
sudo_provider = ad
ldap_sudo_search_base = CN=Sudors,OU=Accounts,OU=North America,DC=gigenet,DC=local
ldap_sudo_full_refresh_interval=800
ldap_sudo_smart_refresh_interval=800

With basic user authentication working, I want to focus on a small feature you most likely noticed in the SSSD configuration template above: the sudo integration. Documentation on sudo integration is sparse on the internet and often conflicting; it usually amounts to a few commands shown without any explanation of what they do. It took me hours to piece the information together from guides, blog posts, and Linux man pages. Hopefully the information I have detailed below doesn't follow that pattern. I still remember the hours of going through SSSD sudo log files line by line as if it were yesterday.

To utilize sudo this way, we have to add the sudo schema to our Active Directory domain. This requires small modifications to the global Microsoft Active Directory schema. Before you perform the adjustments, I strongly recommend a full domain backup; touching the global schema tends to make some administrators very uncomfortable. Our domain is not very large and doesn't have whole teams managing it as in some companies, so we decided the benefits of centralized user authentication with centralized sudo configurations were worth the small adjustment. To my surprise, every guide I have found on the internet omits the location of the actual schema file. To spare you a few hours of research: the files ship with the sudo package under /usr/share. We have also uploaded them to our Git repository (https://github.com/gigenet-projects/blog-data/tree/master/sssdblog1).
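If your distribution places the file somewhere slightly different, a quick search will turn it up (the command below simply walks /usr/share looking for it):

# Locate the Active Directory sudo schema shipped with the sudo package
find /usr/share -name 'schema.ActiveDirectory' 2>/dev/null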

Let's dive into applying the schema file. Please ensure the ldifde command is available on your Windows Active Directory domain controller. To apply the schema, you will also have to copy the schema file named "schema.ActiveDirectory" to the domain controller you're working on. Start up a PowerShell prompt and enter the command in the snippet below. Don't forget to substitute your own domain for the gigenet.local LDAP base.

Windows PowerShell
Copyright (C) 2016 Microsoft Corporation. All rights reserved.
PS C:\Users\Administrator\Desktop> ldifde -i -f schema.ActiveDirectory -c dc=X DC=gigenet,DC=local

With the schema applied, we can build out our first rule set. Don't fret: I'll follow along with a series of pictures showing how to design a sudo rule. Before we begin, you will need to create a group or container within your domain where the rules will be stored; this is what the ldap_sudo_search_base line in the SSSD configuration file points to. Once the group has been created, we will build our rules with the adsiedit.msc tool. Run adsiedit.msc within PowerShell to open it. With the tool open, traverse your domain tree to the group you created to store the sudo rules. To build out our sudoRole object, start by right-clicking within that group. Follow the pictures as a general guideline. Our guide will be adding the ping command as a sudo rule. This sudo rule has a few configuration options that we spent many hours exploring on our end. Shall we begin?


From the right-click menu, we select New -> Object. This brings up a second window with a hundred different types of objects; in our case, we select sudoRole and move on to the next field. This role matches most of the options one would find in the /etc/sudoers file. This leads to a section where we name the actual sudo rule, followed by the attributes we can assign to the rule we just named.


The next three images tell the story of the sudo rule we want to create. The attributes section has dozens of options to tailor rules to your own design, but we will go through the three simple attributes you would commonly see in a sudoers configuration file. Our rule will allow us to run the ping command as the root user account and without a password. In the first prompt, you will notice we specify the user account, and the second prompts for which commands we want to run. Pro tip: watch for extra whitespace, as it will break your rules and can lead to a few extra hours of troubleshooting. In the last image, we add a lesser-known option to run the command without a sudo password. This hidden gem took me about a day to figure out, as the internet had almost no documentation on the feature.

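For reference, the object built in the screenshots corresponds roughly to the LDIF below. Treat it as an illustrative sketch rather than an export: the DN reuses the search base from our SSSD template, and the sudoUser value assumes the Operations group from earlier.

dn: CN=ping,CN=Sudors,OU=Accounts,OU=North America,DC=gigenet,DC=local
objectClass: top
objectClass: sudoRole
cn: ping
sudoHost: ALL
sudoUser: %Operations
sudoRunAsUser: root
sudoCommand: /usr/bin/ping
sudoOption: !authenticate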

With the basics completed, we can now save the rule. Take a little time to explore all the additional options that can be set as attributes; it's worth about an hour. To get this rule working on the actual Linux host, we go back to our SSSD configuration: under the [sssd] section, the services value gains the sudo option, and a few sudo options are applied under the domain section. Let me explain each configuration option as defined in the configuration above:

  • sudo_provider: The provider we utilize to pull in the sudo rules. We are utilizing the Active Directory provider in this configuration.
  • ldap_sudo_search_base: The Active Directory base group where we dumped the ping object. This base search will pull in every rule within this domain group, or container.
  • ldap_sudo_full_refresh_interval: The interval at which SSSD performs a full lookup and pulls all rules into the live sudoers configuration. This keeps rules updated live so you don't need to manually clear the SSSD cache and restart.
  • ldap_sudo_smart_refresh_interval: The interval at which SSSD checks only for rules added or modified since the last refresh, a cheaper operation that runs between full refreshes.

The last configuration required to get the sudo rules working is a small adjustment to the system's NSS configuration file. Please edit "/etc/nsswitch.conf" to include the line "sudoers: files sss". The output below was taken from a live system:

passwd:     files sss
shadow:     files sss
group:      files sss
sudoers:    files sss
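Once SSSD picks up the changes, the integration can be verified end to end from a domain user's shell (sss_cache forces SSSD to drop its caches if a freshly added rule hasn't appeared yet):

# Flush all SSSD caches (run as root)
sss_cache -E

# As the domain user, list the sudo rules that apply to you
sudo -l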

That about wraps up our entry-level introduction to Linux authentication through Active Directory. This practice lets you maintain centralized identity services so you don't have to constantly push new users to each host or clean up suspended user accounts, and the passwd and group files on the Linux systems stay clean. We will publish a follow-up post in the future that walks through the painful details of adding a public SSH key to each account and storing the key within Active Directory.

 

Cybersecurity: Data on Vulnerabilities in Web Applications

At the speed that information travels, it's easy to forget that the Internet is relatively young. For all its potential for abuse, we can start to see the benefits of the Internet when data is used to advance technology, and humanity as a whole.

Cybersecurity projects dedicated to the analysis, development, and research of vulnerabilities now work alongside industry leaders and corporations such as Cisco Talos, Google, and IBM with the intent to purposefully expose design flaws. The effort to intentionally break software may appear malicious in nature, but these deliberate attacks provide transparency and promote hardening against potential threats. In practice, it's better that the good guys find a flaw before the bad guys exploit it. Zero-day vulnerabilities are reported to vendors prior to public disclosure, giving developers the opportunity to implement a patch. The idea is to work together, as when corporations such as Google partner with free software efforts such as the GNU Project, which provides a platform for open source projects to improve upon.

Using Analytical Data to Protect Users

Open source projects are largely community driven, and many are a product of member development and research contributions. The Open Web Application Security Project (OWASP) is a not-for-profit organization dedicated to web application security. Providing web app security guidance and analytical data, this open source community has a more direct effect at the server level. While larger corporations like Cisco, Google, and IBM operate on the cutting edge, OWASP has compiled its Top 10 Security Risks in Web Applications using data gathered in 2017; a selection appears below.

Top Cyber Security Risks

  1. Injection: SQL, XML Parser, OS commands, SMTP headers
    Injection-type attacks increased significantly, up 37 percent in 2017 from 2016. Code injection attacks can compromise an entire system, taking full control. SQL injection breaches the database, querying the most vital component, which often houses personal information.
  2. Authentication: Brute force, Dictionary, Session Management attacks
    Weak passwords grow more susceptible to dictionary attacks as word lists continue to inflate. Refrain from setting special-character limits and maximum-length values that discourage password complexity. Successful authentications should generate random session IDs with an idle timeout.
  3. Security Misconfiguration: Unpatched flaws, default accounts, unprotected files/dirs
    Errors were at the heart of almost one in five breaches.
  4. XML External Entities: DDoS, XML uploads, URI evaluation
    CMSs that use XML-RPC, including WordPress and Drupal, are vulnerable to remote intrusion. There have been many instances of pingback attacks used to send DoS/DDoS traffic. In most cases, the XML-RPC files can be removed completely. XML processors can evaluate URIs, which can be exploited to upload malicious content.
  5. Insufficient Logging & Monitoring
    Preventing irreparable data leaks requires awareness. 68% of breaches took months or longer to discover. Logging and monitoring alerts are essential for recording irregularities.

Future of Cyber Security

Knowledge of the risks is the best defense. Preparedness for the seemingly inevitable attack is the greatest asset in a world network crawling with vulnerabilities. It's no question that security starts with the individual; the majority of IT professionals agree that related courses should be a requirement. Vulnerabilities will occur as technology progresses, and as a community we can see the importance of data and analytics in innovation.

Explore GigeNET’s DDoS Protection services or chat with our experts now to create a custom solution.


Security experts can only do so much. Imagine the sophisticated systems at global banks, research facilities, and Las Vegas casinos (“Ocean’s Eleven,” anyone?) — an excess of cameras, guards, motion detectors, weight sensors, lasers, and failsafes.

But what happens if someone leaves the vault door open?

Similarly, server and network security measures can only go so far. Attackers don’t need to engineer a complex and highly technical method to infiltrate your business’s infrastructure: They just need to entice a somewhat gullible or distracted employee into clicking on a link or opening an attachment.

Whether an employee is acting intentionally or is unaware and careless, 60% of all attacks come from within. A vulnerability can be exposed by an accountant, a systems administrator, or a C-level executive, and the results can cost a company millions in downtime, lost sales, and damaged brand reputation.

IT teams can take all the modern precautions to shore up any potential vulnerabilities by following industry best practices with onsite hardware, applications, and websites. Employing a trusted hosting provider like GigeNET adds even stronger protections in the form of high-touch, individualized managed services and state-of-the-art DDoS protection.

But that may not be enough to protect your organization from well-meaning employees who fall for intricate phishing schemes or ransomware attacks. So, in the spirit of Cyber Security Month at GigeNET, here are a handful of ways businesses can turn their weak links into a strong line of defense.

Enforce Strong Passwords

This one seems like it’d be an obvious one — and relatively easy to control. But even in 2016, nearly two-thirds of data breaches involved exploiting weak, stolen, or default passwords. As the first line of defense against attacks, ensuring your employees follow stringent authentication practices is key to protecting your company’s sensitive data.

Educate employees on what constitutes a strong password and enforce the standards you implement. Passwords should be unique and lengthy combinations of upper- and lower-case letters, numbers, and symbols, and you can ban users from using easily guessed information like their first or last name, the company’s name, or even careless passwords such as ‘password’ or ‘1234.’

Once stronger password rules are in effect, require employees to update and change critical passwords periodically. You can encourage users to employ a password manager program to help them stay on top of their access rights.

Password management gets a little more complicated when there are different levels of employees who require various levels of access to certain applications and software. Regularly evaluate user permissions and make sure access is granted only to those who truly need it. Of course, proactively manage login permissions and shared passwords when employees leave the company — even if the parting is on good terms.

Educate and Test Employees on Phishing

We’re long past the days of the unjustly exiled Nigerian prince offering his family fortune to those willing to front him a little money for his escape. Email phishing is the attempt to obtain sensitive information — think usernames, passwords, credit card numbers, and other types of personal data — by sending fraudulent emails and typically impersonating reputable companies or people the intended victim knows.

Through the years, phishing attacks have become more subtle and harder to detect, even for the filters and safeguards employed by Office 365 and G Suite. Attackers will customize messages to exploit specific weaknesses in email clients and popular online platforms. Email phishing has scored some high-profile victories in recent years, enabling the leaks of emails from Sony Pictures and from Hillary Clinton's 2016 presidential campaign. In fact, the latter attempt even fooled the campaign's computer help desk.

Attackers are more frequently targeting businesses and organizations instead of random individuals and often use the infiltration to start a ransomware attack. Personalized emails, shortened links, and fake login forms all serve to trick users into sharing sensitive login information or network access.

Train employees on modern phishing scams and how to spot them. Implement processes that enable employees to report possibly harmful messages, and consider deploying a service that runs phishing simulations or uses artificial intelligence or machine learning to detect spoofed senders, malicious code, or strange character sets.

Protect Against Human Error

Of course, no one is perfect. Mistakes happen, and there often isn't a shred of malice behind an insider's misstep. Given employees' access to sensitive data, however, the slightest error can have disastrous results.

The threat of simple, bone-headed errors plagues businesses large and small. Even Amazon blamed an employee for inadvertently causing a major outage to Amazon Web Services in 2017. Several years earlier, an Apple software engineer mistakenly left a prototype of the highly anticipated iPhone 4 at a bar.

Whether your employees are handling important data or devices, training and awareness are critical to promoting stable and secure operations. An organization is only as strong as its weakest link, and one simple slip up can have major consequences.

Protect your organization by implementing rigorous coding standards, quality assurance checks, and backups. Take a critical look at user permissions and access to prevent employees from inadvertently making system changes or accidentally downloading or installing unauthorized software. Consider how company devices and sensitive data are handled across the organization, and prepare for worst-case scenarios.

Stay Vigilant and Rely on the Experts

Although a rare weak password or unused admin account may not pose an immediate threat to your company, any security oversight can lead to disastrous results at a moment’s notice. Act holistically when it comes to protecting your business infrastructure, devices, and data — inside and out.

GigeNET will gladly secure and monitor your systems to proactively diagnose and patch vulnerabilities before they become exploits, but comprehensive security extends beyond our server hardening, managed backups, and scalable DDoS protection service. Security is a team sport, so huddle up and let us draw up your organization’s security game plan.

 

The Importance of Patching Your Linux Servers

While working as a sysadmin over the years, you truly start to understand the importance of security patches. On a semi-daily basis I see compromised servers that have landed in an unfortunate situation due to a lack of security patching or insecure program execution (e.g., running a program as root unnecessarily). In this blog post I'll be focusing on the importance of patching your Linux servers.

As you may know, there have been many high-severity Linux kernel and general CPU vulnerabilities these past few years; the Dirty COW kernel vulnerability and the CPU speculative execution vulnerabilities, for example, all required patching. If you're not taking security patching seriously, now is the time to start. Something as simple as subscribing to your Linux distribution's security mailing list and applying patches as needed could prevent a compromise. Many of those who are most concerned with security learned the hard way by having their servers compromised. But who wants to learn the hard way? A lot more attention needs to go into securing your server, but patching is the first line of defense.

Top Linux server security practices: 

  1. Subscribe to your Linux distribution's security announcements mailing list, for example the CentOS-announce or debian-security-announce lists. These will notify you when updated packages contain security patches, and they'll also go over which vulnerabilities each patch covers.
  2. Read security related news! It’s important to keep up with the latest news on security topics. I’ve discovered the need to patch software many times by just reading news.
  3. Check whether you actually need the patch, and how it applies to your environment. It's best not to blindly patch everything in the name of security; the vulnerability may not affect you in any way. I see this a lot with Linux kernel vulnerability patches: there are generally a lot of them, but most are not severe, and skipping an irrelevant one saves you yet another reboot.
  4. If you delay patches due to worries about downtime, build redundancy into what you're doing. It's important that critical vulnerabilities get patched, but it's also important that your production service remains up and accessible. The best option, even if difficult, is to architect what you do for redundancy and high availability.
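For reference, applying the patches themselves is usually a one-liner once you've decided an update applies to you (on RHEL, yum can narrow the run to security errata):

# Debian/Ubuntu
apt update && apt upgrade

# CentOS/RHEL ("yum update --security" limits this to security errata where
# updateinfo metadata is available)
yum update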

Patching is probably the easiest part of maintaining a secure environment. So there’s no excuse to neglect your system! It also prevents a headache for your future self.

How can GigeNET keep your business secure? Chat with our experts now.

Top SSH Security Best Practices

SSH is a common system administration utility for Linux servers.  Whether you’re running CentOS, Debian, Ubuntu, or anything in between; if you’ve logged into a Linux server before, you likely have at least heard of it.

The acronym SSH stands for "Secure Shell," and as the name implies, the protocol is built with security in mind. Many server administrators assume that SSH is pretty secure out of the box, and for the most part, they'd be correct. SSH has fantastic security features by default, like encryption of communications to prevent man-in-the-middle attacks, and host key verification to alert the user if the identity of the server has changed since they last logged in.

Still, there are a large number of servers on the Internet running SSH, and attackers like to find attack vectors that could potentially affect many of them. With security, convenience tends to be sacrificed, so many server administrators, intentionally or without much thought, leave their servers running default SSH installations. For most of them this isn't an issue, but there are steps you can take to stay ahead of the curve. After all, being a little ahead of the curve is one of the best security practices to reach for: it keeps your server from being the low-hanging fruit that tempts attackers.

With that in mind, here are some techniques that you may want to consider for your Linux server to help improve your SSH security.

Brute Force Protection

One of the most common techniques for improving SSH security is brute force protection, because one of the most common security concerns for administrators running SSH services is brute force attacks from automated bots. Bots will try to guess usernames and passwords on the server, but brute force protection can automatically ban their IP address after a set number of failures.

A few common open source brute force protection solutions are ConfigServer Firewall (CSF) and Fail2Ban.  CSF is most common on cPanel servers, since it has a WHM plugin.
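To give a sense of how little setup this takes, here is a minimal Fail2Ban jail for SSH. The values are illustrative and would go in /etc/fail2ban/jail.local:

[sshd]
# Ban an IP for 1 hour after 5 failed logins within 10 minutes
enabled  = true
port     = ssh
maxretry = 5
findtime = 600
bantime  = 3600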

Pros and Cons of Brute Force Protection

Pros

  • Will help to cut down on failed logins from bots by automatically banning them, making it much less likely that a bot will have the opportunity to guess the login details for one of your SSH accounts.
  • Very easy to implement with no changes to the SSH configuration required.

Cons

  • These brute force programs have no way to tell bots apart from you and your users.  If you fail login too many times by accident, you could lock yourself out. Make sure that you have a reliable means to get on to the server if this happens, such as whitelisting your own IP address, and having a KVM or IPMI console available as a last resort measure.

Changing The SSH Port Number

One of the most common techniques that I see is changing the SSH port number to something other than the default port, 22/tcp.  

This change is relatively simple to make, for example, if you wanted to change your SSH port from 22 to 2222, you would simply need to update the Port line of your sshd_config file like so:

Port 2222

By the way, port 2222 is a pretty common "alternate" port, so some brute force bots may still try it. It would be better to choose something more random, like 2452. It doesn't even have to contain a 2; your SSH port could be 6543 if you wanted. Any port number up to 65535 that is not used by another program on the server is fair game.

Pros and Cons of Changing The SSH Port Number

Pros

  • This technique is usually pretty effective at cutting down automated bot attacks.  Most of these are unintelligent scripts and will only be looking for servers running on port 22.

Cons

  • This technique amounts to “security by obscurity”.  A bot that is trying alternate ports, or any human equipped with a port scanning tool like nmap will have no problem finding your server’s new port in just a few minutes.
  • This technique can make the SSH server a bit more inconvenient to access, as you will now need to specify the port number when connecting instead of just the IP (see the example below).
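For example, with the port from earlier, connecting would look like this (user and host are placeholders):

ssh -p 2452 user@your-server-ip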

Disabling Root Login via SSH

Another common technique is to disable the root user account from logging in via SSH altogether, or without an authorized SSH key.  You can still have root access via SSH by granting “sudo” privileges to one of your limited users, or using the “su” command to switch to the root account with a password.

This can be configured by adjusting the “PermitRootLogin” setting in your sshd_config file.

To allow root login with an SSH key only, you would change the line to the following (newer OpenSSH versions also accept the clearer synonym "prohibit-password"):

PermitRootLogin without-password

To completely disallow root login via SSH, you would change the line to:

PermitRootLogin no

Pros and Cons of Disabling Root Login via SSH

Pros

  • This technique is somewhat helpful, since the username "root" is common to most Linux servers (like "Administrator" on Windows servers), so it is easy to guess. Disabling this account from logging in means that an attacker must also guess a username correctly to gain access.
  • If you are not using sudo, this technique puts root access behind a second password, requiring an attacker to know or guess two passwords correctly before having full access to the server.  (Sudo can diminish this benefit somewhat as usually it is configured to allow root access with the same password that the user used to login.)

Cons

  • This method may increase your risk of getting locked out of the server, in the event that something goes wrong with your sudo configuration.  It is still a good idea in this method to have an alternate way to access the server if you become locked out of root, such as a remote console.

Disabling Password Authentication, in favor of key authentication.

The first thing that everyone tells you about passwords is to make them long, difficult to guess, and not based on dictionary words.  An SSH key can replace password authentication with authentication by a key file.

SSH keys are very secure compared to a password, as they contain a large amount of random data. If you have ever seen an SSL certificate or key file, an SSH key looks similar: a very large string of random characters.

Instead of typing a password to login to the SSH server, you will authenticate using this key file, in much the same way that SSL certificates on websites work.
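The switch itself is quick. A sketch with user and host as placeholders (on older servers, -t rsa works in place of ed25519):

# On your workstation: generate a new key pair
ssh-keygen -t ed25519

# Install the public key into the server's authorized_keys
ssh-copy-id user@your-server-ip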

If you would like to disable password authentication, you can do so by modifying the “PasswordAuthentication” setting in the sshd_config file, like so:

PasswordAuthentication no

Pros and Cons of Disabling Password Authentication, in favor of key authentication.

Pros

  • This method strongly decreases the likelihood that a brute force attempt against your SSH server will be successful.
    • Most brute force bots are only trying passwords to begin with, so they will be using the completely wrong authentication method to break in; those bots will never succeed.
    • Even if someone was doing a targeted attack, SSH keys are so much longer than passwords that guessing one correctly is orders of magnitude harder, simply because there’s so much entropy and potential combinations.

Cons

  • This technique can make it less convenient to access the server.  If you don’t have the key file handy, you won’t be able to SSH in.
  • Due to the above, you are also increasing risk of getting locked out of SSH, for example if you lose the key file.  So, it’s a good idea to have an alternative way to access the server if you need to let yourself back in, like a remote console.

In the event that someone gets ahold of your key file, just like a password, they will be able to log in as you. But, unlike passwords, keys can be easily expired and new keys created, and the new key will operate the same way.

Another interesting quirk about the SSH keys method is you can authorize multiple SSH keys on a single account, whereas an account can typically only have one password.

It’s worth noting that you can use SSH keys to access accounts even if password authentication is turned on.  By default, SSH keys will work as an authentication method if you authorize a key.

Allow Whitelisted IPs Only

A very effective security technique is only allowing whitelisted IP addresses to connect to the SSH server.  This can be accomplished through firewall rules, only opening the SSH port to authorized IP addresses.

This can be impractical for home users or shared web hosting providers, since it can be difficult to know which IP addresses will need access, and home IP addresses tend to be dynamic, so your IP address might change.  But, for situations where you are using a VPN or mostly accessing from a static IP address, it can be a low maintenance and extremely secure solution.
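With plain iptables, a minimal version of this whitelist might look like the following, where 203.0.113.5 stands in for your own static IP:

# Allow SSH only from the whitelisted address, drop everyone else
iptables -A INPUT -p tcp --dport 22 -s 203.0.113.5 -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j DROP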

Pros and Cons of Allowing Whitelisted IPs Only

Pros

  • This method provides very strong security, since attackers would need to have access to one of your whitelisted IPs already in order to try to SSH in.
  • Arguably, this method can supersede the need for other security methods like brute force protection or disabling password authentication, since the threat of brute force attacks is now basically nullified.

Cons

  • This method increases your chances of getting locked out of the server, especially if you are in a location where your IP address may change, like a residential Internet connection.
  • The convenience of access is also reduced, since you will be unable to access the server from locations that you haven’t whitelisted ahead of time.
  • There is some effort that goes into this, since you now have to maintain your IP address whitelist by adding and removing IPs as the needs change.

On my own personal servers, this is usually the technique I use. This way I can still have the convenience of authenticating with a password on the normal SSH port, while keeping strong security. I also change my servers frequently, creating new ones when needed, and I find that implementing this whitelist is the fastest way to make a new server secure without touching other configurations: I can simply copy my whitelist from another server.

A Hybrid Approach: Allow passwords from a list of IPs, but allow keys from all IPs.

If you want to get fancy, there are a number of “hybrid” approaches that you can implement that combine one or more of these security techniques.

I ran into a situation once with one of our customers at GigeNET where they wanted to provide staff with password access, so that they could leave a password on file with us, but they wanted to only use key authentication themselves and not have password authentication open to the Internet.

This was actually very simple to implement, and it provides most of the security of disabling password authentication, while still allowing the convenience of password authentication in most cases.

To do this, you would want to add the following lines to your sshd_config:

# Global setting to disable password authentication
PasswordAuthentication no

[...]

# Override the global settings for an IP whitelist.
# Place this section at the -end- of your sshd_config file.
Match address 1.2.3.4/32
    PasswordAuthentication yes

For the above, 1.2.3.4 is the whitelisted IP address. You can repeat that section of the configuration to whitelist multiple IPs, and you can change the /32 to another IPv4 CIDR such as /28, /27, etc., in order to whitelist a range of IPs.

Remember that the Match address block should be placed at the very end of your sshd_config file.

Pros and cons of a hybrid approach

Pros

  • This technique can provide the security of key authentication by preventing passwords from working for most of the Internet, but allowing the convenience of password authentication from frequent access locations.  So, it allows you to reduce some of the drawbacks while keeping most of the security.
  • If your IP address changes and you are no longer whitelisted, you can still SSH in with the key file so long as you have it saved locally.

Cons

  • Like the IP whitelist firewall method, this method takes some maintenance, since you have to update your SSH configuration if your IP address changes or you need to whitelist other locations. Unlike the other methods, though, updating the whitelist here is less critical, since you can still get in with your key even when you're not whitelisted.

Ultimately, you will have to choose what’s best for your use case.  

Hopefully this list of techniques and examples provides some food for thought that you can use when you are securing your servers: what the risks are and what techniques exist to mitigate them.

Based upon how important you think the security of the server is, and the practicality of implementing the various security solutions toward mitigating the risks you’re concerned about, you can choose one or more techniques to move forward.

At the end of the day, I always remind everyone that security is relative.  You will never have anything that is fully impenetrable, and the main thing is to keep yourself at least one step ahead of everyone else.  Even if you implement just one of these security practices, you are more secure as a result than a large number of Linux servers out there that are running with the default settings and SSH wide open to anyone that wants to try to login.

 

How can GigeNET keep your business secure? Chat with our experts now.


Why is a firewall so important?

Security on an open Internet becomes more important each day. Along with the growth of the Internet and Internet literacy, the benefits of a dedicated/virtual server can now be felt around the world.

For many, personal data and web service accessibility have become an integral part of daily life. Having the benefit of accessibility means that the service is public facing, making it susceptible to undesirable and seemingly random connections.

Often conducted using bots and spoofed IP addresses, it’s not uncommon on the open Internet to experience login attempts, port scans, and other intrusive activity.

There are basic security and firewall practices that can help prevent these activities from turning into a more alarming issue.

Without a firewall, your open ports look like this:

First off, to help grasp the motive behind these connections, a newly installed server was used to log incoming connections over 2 days. With no firewall blocking connections to the server, the log data can be analyzed to pinpoint areas of concentration.

Technical Information

OS: CentOS 7 + cPanel

(cPHulk disabled)

– iptables was used to log connections, with rsyslog writing them to a dedicated file via the following configuration –

/etc/rsyslog.d/my_iptables.conf

:msg,contains,"[netfilter] " /var/log/iptables.log

The following iptables rule was used to log new (state NEW) inbound packets on eth0:

iptables -A INPUT -i eth0 -m state --state NEW -j LOG --log-prefix='[netfilter] '

Example Log entry 

Jun 15 08:02:27 gigenet kernel: [netfilter] IN=eth0 OUT= MAC=d6:f4:8e:aa:a7:94:00:25:90:0a:ad:1c:08:00 SRC=<remote IP> DST=<server IP> LEN=40 TOS=0x00 PREC=0x00 TTL=244 ID=24288 PROTO=TCP SPT=54102 DPT=1433 WINDOW=1024 RES=0x00 SYN URGP=0

(IP addresses have been removed)

SRC – Source IP address

DST – Destination IP address

SPT – Source Port

DPT – Destination Port

PROTO – Internet Protocol

A script was created to analyze and format the log data:

[root@gigenet ~]# ./analyze-iptableslog.sh

Log File: iptables-1.log

Log Date

# awk 'NR==1{print "Start Date: " $1, $2, $3;}; END{print "End Date: " $1, $2, $3;}' iptables-1.log

Start Date: Jun 13 08:02:21

End Date: Jun 15 08:02:27

Total Number of New Connections Logged

# wc -l iptables-1.log
 16299 iptables-1.log

Number of Connections per Protocol

# awk '{for (i=1;i<=NF;i++) if ($i ~ /PROTO=/) print $i}' iptables-1.log | sort | uniq -c | sort -rn
15900 PROTO=TCP
366 PROTO=UDP
33 PROTO=ICMP

Number of Unique SRC IP Addresses

# awk '{for (i=1;i<=NF;i++) if ($i ~ /SRC=/) print $i}' iptables-1.log | sort -n | uniq | wc -l
2886 IP Addresses

Number of Entries with a DPT (total minus ICMP)

# awk '{for (i=1;i<=NF;i++) if ($i ~ /DPT=/) print $i}' iptables-1.log | wc -l
16266 DPT Connections

Number of Unique DPTs Hit

# awk '{for (i=1;i<=NF;i++) if ($i ~ /DPT=/) print $i}' iptables-1.log | sort -n | uniq | wc -l
1531 Unique DPT

Number of Connections per DPT, List Top 15

# awk '{for (i=1;i<=NF;i++) if ($i ~ /DPT=/) print $i}' iptables-1.log | sort -n | uniq -c | sort -rn | head -n 15
9595 DPT=22
1309 DPT=80
885 DPT=445
742 DPT=23
188 DPT=8000
157 DPT=1433
153 DPT=5060
111 DPT=8080
90 DPT=8545
90 DPT=3389
83 DPT=81
80 DPT=3306
73 DPT=443
67 DPT=2323
44 DPT=8888

How to use a firewall to mitigate exposed ports

The data shows the primary destination ports being contacted. As expected, the ports with the largest number of connections are common Linux and Windows web service ports.

Port 22 – Secure Shell (SSH)
Port 23 – Telnet
Port 80 – HTTP
Port 445 – SMB (Windows network file sharing)
Port 1433 – MSSQL
Port 3306 – MySQL
Port 3389 – RDP

Depending on the services being run, these ports may need to be available to remote services. The ports of note are SSH port 22, telnet port 23, and RDP port 3389.

Ideally, these connections should be restricted by the system firewall to specific IP addresses only. In addition, bots are typically programmed to target default ports. Thus, changing the default SSH and RDP port will help prevent intrusion.

Changing the SSH port (Linux, FreeBSD)

SSH configuration file:
    /etc/ssh/sshd_config

Find the default Port line and change it to an uncommon port (1-65535):

  • Port 22

Restart SSHD:

  • CentOS: service sshd restart
  • Debian: service ssh restart
  • FreeBSD: /etc/rc.d/sshd restart

Change RDP port (Windows)

  • Windows RDP should never be open to the public. If necessary, the RDP port should be changed to minimize anonymous connections.

Open Registry Editor

  • Locate the following registry subkey:
    HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp\PortNumber

Modify the Decimal value to an unused port, click OK. Reboot.
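If you prefer to script the change, the same value can be set from an elevated PowerShell prompt; 3390 below is just an example port, and a reboot is still required:

Set-ItemProperty -Path 'HKLM:\System\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp' -Name PortNumber -Value 3390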

Basic Firewall Setup

There are a number of firewall services that can serve as a primary mode of security. Provided are a few basic rule commands to help get started.

1. Adding iptables Rules

iptables is the most common and familiar Linux firewall. As the default firewall for CentOS 6 and earlier, iptables is often used as the baseline Linux firewall.

Basic rules

  • Allow Established Connections: iptables -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
  • Set INPUT Policy to Accept: iptables -P INPUT ACCEPT
  • Allow IP: iptables -A INPUT -s 120.0.0.1/32 -j ACCEPT
  • Allow IP/Port: iptables -A INPUT -s 120.0.0.1/32 -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
  • Allow lo (localhost) interface: iptables -A INPUT -i lo -j ACCEPT
  • Allow Ping: iptables -A INPUT -p icmp -j ACCEPT
  • Allow Port: iptables -A INPUT -p tcp --dport 22 -j ACCEPT
  • Insert Allow IP (pos. 5): iptables -I INPUT 5 -s 120.0.0.1/32 -j ACCEPT
  • Insert Allow IP/multiport: iptables -I INPUT 5 -s 127.0.0.1/32 -p tcp -m state --state NEW -m multiport --dports port#1,port#2 -j ACCEPT

(Alternatively, substitute “ACCEPT” with “DROP” to deny)
Remove an Existing Rule using the -D option:

  •  Remove Allow IP: iptables -D INPUT -s 120.0.0.1/32 -j ACCEPT

Reject the rest: this rejects (blocks) all connections not matched by the previous rules.

  • iptables -A INPUT -j REJECT --reject-with icmp-host-prohibited

Flush rules

  • iptables -F
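Note that rules added this way are lost on reboot. Persistence varies by distribution; on CentOS 6, for example:

# Save the running ruleset (CentOS 6 iptables service)
service iptables save

# Or write it out by hand
iptables-save > /etc/sysconfig/iptables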

2. Basic firewalld Commands

Firewalld features prominently in CentOS 7. Firewalld essentially provides more human-readable commands for committing iptables rules.

Operational, print current state information

  • State: firewall-cmd --state
  • Start/Stop: systemctl start/stop firewalld.service
  • Start On Boot: systemctl enable firewalld

Zone Information, print zone parameters

  • Default Zone: firewall-cmd --get-default-zone
  • Default Zone Info: firewall-cmd --list-all
  • List Zones: firewall-cmd --get-zones
  • Zone Info: firewall-cmd --zone=public --list-all

Modify Zone

  • Create New Zone: firewall-cmd --permanent --new-zone=new_zone
  • Change Default Zone: firewall-cmd --set-default-zone=public
  • Change Interface: firewall-cmd --zone=public --change-interface=eth0

Modify Rules, a subset of a zone's configuration

  • Allow Service: firewall-cmd --zone=public --add-service=http
  • Allow Port: firewall-cmd --zone=public --add-port=22/tcp
  • List Services: firewall-cmd --get-services
  • List Services Allowed: firewall-cmd --zone=public --list-services
  • List Ports Allowed: firewall-cmd --list-ports
  • Allow IP/Port/Proto using a rich rule (explicit rules):
    firewall-cmd --permanent --zone=public --add-rich-rule='rule family="ipv4" source address="127.0.0.1/32" port protocol="tcp" port="22" accept'

(Use the --permanent option to create rules that persist across reboots)

3. Basic ufw rules

Introduced as ufw (Uncomplicated Firewall) and supported in Ubuntu 8.04+, it ships as the default firewall for Ubuntu systems.

Operational

  • Enable/Disable: ufw enable/disable
  • Print Rules: ufw status verbose

Allow Rules

  • Allow Port: ufw allow 22
  • Allow IP: ufw allow from 127.0.0.1
  • Allow IP/Port/TCP: ufw allow from 127.0.0.1 to any port 22 proto tcp
  • (Alternatively, substitute "deny" for "allow" to create deny rules)

Delete Existing Rules

  • ufw delete allow from 127.0.0.1

4. Windows Firewall (Windows Server 2008 and newer)

Control Panel >> Windows Firewall >> Advanced Settings >> Inbound/Outbound >> New Rule

Bonus: cPanel Tools – cPHulk

As a test case, WHM's cPHulk Brute Force Protection was enabled with default settings. During the 24 hours logged, there were significantly fewer new connections recorded by iptables.

[root@gigenet ~]# ./analyze-iptableslog.sh

Log File: iptables-cphulk.log

Log Date

# awk 'NR==1{print "Start Date: " $1, $2, $3;}; END{print "End Date: " $1, $2, $3;}' iptables-cphulk.log

Start Date: Jun 19 04:31:43

End Date: Jun 20 04:53:53

Total Number of New Connections Logged

# wc -l iptables-cphulk.log
3223 iptables-cphulk.log

Number of Connections per Protocol

# awk '{for (i=1;i<=NF;i++) if ($i ~ /PROTO=/) print $i}' iptables-cphulk.log | sort | uniq -c | sort -rn
2974 PROTO=TCP
213 PROTO=UDP
36 PROTO=ICMP

Number of Unique SRC IP Addresses

# awk '{for (i=1;i<=NF;i++) if ($i ~ /SRC=/) print $i}' iptables-cphulk.log | sort -n | uniq | wc -l
1432 IP Addresses

Number of Entries with a DPT (total minus ICMP)

# awk '{for (i=1;i<=NF;i++) if ($i ~ /DPT=/) print $i}' iptables-cphulk.log | wc -l
3187 DPT Connections

Number of Unique DPTs Hit

# awk '{for (i=1;i<=NF;i++) if ($i ~ /DPT=/) print $i}' iptables-cphulk.log | sort -n | uniq | wc -l
943 Unique DPT

Number of Connections per DPT, List Top 15

# awk '{for (i=1;i<=NF;i++) if ($i ~ /DPT=/) print $i}' iptables-cphulk.log | sort -n | uniq -c | sort -rn | head -n 15
415 DPT=445
270 DPT=23
257 DPT=22
233 DPT=80
97 DPT=5060
72 DPT=1433
59 DPT=8545
53 DPT=8000
50 DPT=81
49 DPT=8080
46 DPT=443
41 DPT=3389
34 DPT=25
33 DPT=3306
27 DPT=2323

How to Secure Your Chats with Matrix

Privacy and security can be difficult to achieve, especially across an entire organization. They involve many factors and can be difficult to manage from the top. While you may not want to, or may not be able to, manage every aspect of your organization's members' behavior, there are some things you can do. One of the most important and sensitive factors is how your organization's members communicate about internal matters. Talking face to face is one of the more common ways, but it is not always possible; more people than ever work remotely, especially in the IT industry, so there is an obvious need for remote communication methods.

Instant messaging is probably one of the more popular ways to communicate, and many platforms like Skype, Slack, and WhatsApp simplify this. While some of them may boast client-to-server or even end-to-end encryption, you're still transferring trust to a third party and their code. If this worries you, it may be best to run your own instant messaging server. Commonly, organizations and individuals concerned about this have set up XMPP servers (the protocol formerly known as Jabber). While that arguably isn't a bad solution, XMPP can be tricky to work with compared to more modern alternatives.

One of the most notable competitors to the XMPP protocol is Matrix. Matrix, like XMPP, is decentralized (federated), but you can tweak it to your organization's needs. For example, you can disable public registration, use LDAP for authentication, and disable federation. Just like XMPP, there are many implementations of the Matrix protocol.

Matrix Tutorial

In this tutorial we will go over how to set up your own Matrix Synapse server on GigeNET Cloud. This will show you the basics of how to run your own Matrix server. If you don't have a GigeNET Cloud account, head over here and check out our plans. Synapse is the server created by the Matrix developers and can be found here.

First, we’ll need to create a GigeNET Cloud machine. Once you’re logged in, it’ll look like this.

Click on "Create Cloud Lite".

Set a proper hostname for your new machine, and select the desired location, zone, and OS. For this tutorial we'll be using Debian 9 (Stretch). You'll then need to pick a plan that fits your needs; Matrix Synapse recommends at least 1GB of memory, so we'll go with GCL Core 2. After you've set everything to what you want, press "Create VM".

Now your cloud VM will begin spinning up on one of our hypervisors. It may take a bit, but you can ping the VM's public IP until you see that it's up. This page shows all of the details you'll need to log in.

Once the VM is up, you can SSH in with your favorite SSH client. I use Linux, so I'll be using openssh-client. We'll want to perform a full upgrade of all packages on Debian, so run this:

root@matrix-test:~# apt update && apt dist-upgrade

Once that has finished, reboot your VM.

root@matrix-test:~# reboot

Once you're back in after the reboot, let's take a look at the available Matrix servers. There are quite a few, but as mentioned, we'll be using Synapse. Click Synapse.

If you're interested in learning more about Matrix Synapse, I highly recommend that you check out their GitHub repository.

Before you grab their repo key, you'll need to install apt-transport-https, which is required to use HTTPS repositories with the apt package manager.

root@matrix-test:~# apt install apt-transport-https

When that finishes you can then grab their repo key, import it and add the repository into your sources file with the following commands.

root@matrix-test:~# wget -qO - https://matrix.org/packages/debian/repo-key.asc | apt-key add -

root@matrix-test:~# echo deb https://matrix.org/packages/debian/ stretch main | tee -a /etc/apt/sources.list.d/matrix-synapse.list

root@matrix-test:~# apt update

If everything checks out you’re now ready to install Matrix Synapse! We’ll also install a few extras.

  • Certbot (to get a free Let’s Encrypt certificate) 
  • Haveged (to speed up entropy collection)  

root@matrix-test:~# apt install matrix-synapse certbot haveged

You’ll get an ncurses interface during the installation asking for a few configuration parameters. Make sure to set your FQDN here.

It's up to you whether you want to send anonymous statistics. I chose not to.

If you have your own certificate, you can simply copy the certificate and private key over in the same way. Now let's get our Let's Encrypt certificate!

A few more things to note.

  • You’ll need to ensure your domain or subdomain points to your new server via a DNS A record or AAAA record if you want to use IPv6.
  • You’ll need to enter an email address to receive certificate expiry notices.
  • You’ll need to agree to the Let’s Encrypt terms and conditions.

root@matrix-test:~# certbot certonly --standalone -d matrix-test.gigenet.com


Once we have our certificate and private key, we need to copy them over to /etc/matrix-synapse like so (change the directory to match your FQDN).

cp /etc/letsencrypt/live/matrix-test.gigenet.com/fullchain.pem /etc/matrix-synapse/fullchain.pem

cp /etc/letsencrypt/live/matrix-test.gigenet.com/privkey.pem /etc/matrix-synapse/privkey.pem

Next, we'll need to generate a registration secret. Anyone who has this secret will be able to register an account, so you want to keep it safe.

root@matrix-test:~# cat /dev/random | tr -dc 'a-zA-Z0-9' | fold -w 64 | head -n 1

Output should be a random string of 64 characters like: TDfdIXPBWDOqaVsR5erVJLKdqPqIAsrvfvEtgHfY8oZ06F5NMYnhdbHhVbneDiTF

Now we need to edit the config. You can use nano or your favorite text editor.

root@matrix-test:~# nano /etc/matrix-synapse/homeserver.yaml

When you're in nano, search for the parameter with CTRL + W and enter registration_shared_secret.

Ensure that the line looks like this:

registration_shared_secret: "TDfdIXPBWDOqaVsR5erVJLKdqPqIAsrvfvEtgHfY8oZ06F5NMYnhdbHhVbneDiTF"

We’ll also need to enable TLS support for the web client and add the paths for our certificate and private key.

Make sure the web_client line looks like this:

web_client: True

Now we’ll add our certificate and private key to the config. The lines should look something like this.

tls_certificate_path: "/etc/matrix-synapse/fullchain.pem"

tls_private_key_path: "/etc/matrix-synapse/privkey.pem"

Save and exit your text editor after you’ve followed the steps above. We can now enable matrix-synapse to start on boot, and start the service!

systemctl enable matrix-synapse

systemctl start matrix-synapse

If everything checks out, the service should have started successfully. If not, you can check its status to see why it failed with:

systemctl status matrix-synapse

Now we're ready to set up our first user. This command will allow you to register a user and make it the administrator; you can also use it to register normal users. By default, Matrix Synapse is not configured to allow public registration.

register_new_matrix_user -c /etc/matrix-synapse/homeserver.yaml https://localhost:8448
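Incidentally, that registration default also lives in homeserver.yaml. If you later want to lock the server down further, as discussed at the top of this post, the relevant options look roughly like this (a sketch; confirm the option names against your Synapse version's documentation):

# Keep open sign-ups disabled; create accounts with register_new_matrix_user instead
enable_registration: false

# An empty whitelist effectively disables federation with outside servers
federation_domain_whitelist: []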


We've got our first user; now we need to pick a Matrix chat client. You can see a list of clients here, but in this tutorial we'll be using Riot on a Windows VM. It has very good support and is cross-platform. Chats can also be end-to-end encrypted with Riot! Go here to download it.

Once you have it installed for your platform of choice, launch it. You'll be greeted with a window similar to the one below. Click "Login".


You’ll then need to enter your server’s details along with the credentials you set for your administrator account.


After you’ve signed in you’ll be greeted with a similar interface. Let’s create our first room by pressing the + button on the bottom left of the window.

We'll just name it "Admin Room" for this test.


Now we’ve got our own room that we can invite other users to!

Need to know how to do more with Riot? They have a great FAQ with a few video tutorials on how to perform some basic tasks.

While administering a Matrix server might involve a bit of a learning curve, it's worth it if you value having control of your own data. If you want to dive deeper into setting up other Matrix Synapse features, I highly recommend that you head over to their GitHub page.

Sound like a bit too much? Let our team of experts manage your systems.

 

There is a lot of noise these days about the soon-to-be-implemented EU regulation, the GDPR (General Data Protection Regulation), making the topic hard to miss. But how much do you understand about the GDPR, and to what extent can it impact your U.S.-based business?

What is this GDPR thing, and why should you care?

Adopted by the European Union on April 27th, 2016, and scheduled to become enforceable on May 25th, 2018, the GDPR is a regulation designed to greatly strengthen an EU citizen’s control over their own personal data. In addition, the regulation is meant to unify the myriad of regulations dealing with data protection and data privacy across member states. Finally, its reach also extends to the use and storage of data by entities outside of the EU (Spoiler Alert! This is the part that affects us).

Enforcement of the provisions within GDPR is done via severe penalties for non-compliance, with fines up to €20 million, or 4% of the worldwide annual revenue (whichever is greater). Now, as a non-EU entity, you may think that your company won’t be subject to these fines, but that is incorrect. If your U.S. company collects or processes the personal data of EU citizens, EU regulators have the authority and jurisdiction, with the aid of international law, to levy fines for non-compliance.

In addition, your EU-based clients can be held accountable for providing personal information to a non-compliant 3rd party (your company). This is a strong incentive for EU-based citizens and companies to insist on working only with GDPR-compliant 3rd parties, potentially costing your company all of its EU-based business.

As you will soon realize, the GDPR is a vast set of regulations, with a large scope and sharp teeth. I cannot possibly go into enough detail in a blog post to map out a roadmap towards compliance, and neither is that my goal. If that is what you are looking for in a blog post, well, maybe you shouldn’t be responsible for anyone’s personal data….

No, my intent here is to demonstrate the importance of the GDPR, hopefully convince you to take it seriously and start down the road to compliance, and finally to point you in the right direction to start your journey.

The expanding scope

The GDPR expands the definition of personal data in order to widen the scope of its protections, aiming to establish data protection as a right of all EU citizens.  

The following types of data are examples of what will be considered personal data under the GDPR:

A person’s name, identification numbers, location data, online identifiers such as IP addresses and cookies, and factors specific to their physical, genetic, mental, economic, cultural, or social identity are all covered.

Does your company collect, store, use or process anything considered personal data related to an EU citizen by the GDPR?  If you have any EU clients, customers, or even just market to anyone in the EU, it is unlikely you could avoid being subject to GDPR.

The EU is seeking to make data privacy for individuals a fundamental right, broken down into several more-precise rights:

  • The right to be informed
      • A key transparency issue of the GDPR
      • Upon request, individuals must be informed about:
        • The purpose for processing their personal data
        • Retention periods for their personal data
        • All 3rd parties with which the data is to be shared
      • Privacy information must be provided at the time of collection
        • Data collected from a source other than the individual extends this requirement to within one month
      • Information must be provided in a clear and concise manner.
  • The right of access
      • Grants access to all personal data and supplementary information
      • Includes confirmation that their data is being processed
  • The right to rectification
      • Grants the right to correct inaccurate or incomplete information
  • The right to erasure
      • Also known as “the right to be forgotten”
      • Allows an individual to request the deletion of personal data when:
        • The data is no longer needed under the reason it was originally collected
        • Consent is withdrawn
        • The data was unlawfully collected or processed
  • The right to restrict processing
      • This blocks processing of information, but still allows for its retention
  • The right to data portability
      • Allows an individual’s data to be moved, copied or transferred between IT environments in a safe and secure manner.
      • Aimed at allowing consumers access to services that can find them better value, better understand their spending habits, etc.
  • The right to object
      • Allows an individual to opt-out of various uses of their personal data, including:
        • Direct marketing
        • Processing for the purpose of research or statistics
  • Rights in relation to automated decision making and profiling
    • Limits the use of automated decision making and profiling using collected data


Sprechen Sie GDPR?

Before diving deeper, it is important to understand some key terms used by the regulation.

The GDPR applies to what it calls “controllers” and “processors.”  These terms are further defined as Data Controllers (DCs) and Data Processors (DPs).  The GDPR applies differently in some areas to entities based upon their classification as either a DC or as a DP.

  • A Controller is an entity which determines the purpose and means of processing personal data.
  • A Processor is an entity which processes personal data on behalf of a controller.

What does it mean to process data?  In this scope, it means:

  • Obtaining, recording or holding data
  • Carrying out any operation on the data, including:
    • Organization, adaptation or alteration of the data
    • Retrieval, consultation or use of the data
    • Transfer of data to other parties
    • Sorting, combining or removal of the data

The Data Protection Officer, or DPO, is a role set up by the GDPR to:

  • Inform and advise the organization about the steps needed to be in compliance
  • Monitor the organization’s compliance with the regulations
  • Be the primary point of contact for supervisory authorities
  • Be an independent, adequately resourced expert in data protection
  • Report to the highest level of management, yet not be a part of the management team

The GDPR requires a DPO to be appointed to any organization that is a public authority, or one that carries out certain types of processing activities, such as processing data relating to criminal convictions and offences.

Even if the appointment of a DPO for your organization is not deemed necessary by the GDPR, you may still elect to appoint one anyway.  The DPO plays a key role in achieving and monitoring compliance, as well as following through on accountability obligations.

The Nitty Gritty

In addition to expanding the definition of personal data and providing individuals broad rights governing the use of that data, the GDPR sets out a number of requirements for organizations, mandating that personal data shall be:

“a) processed lawfully, fairly and in a transparent manner in relation to individuals;

b) collected for specified, explicit and legitimate purposes and not further processed in a manner that is incompatible with those purposes; further processing for archiving purposes in the public interest, scientific or historical research purposes or statistical purposes shall not be considered to be incompatible with the initial purposes;

c) adequate, relevant and limited to what is necessary in relation to the purposes for which they are processed;

d) accurate and, where necessary, kept up to date; every reasonable step must be taken to ensure that personal data that are inaccurate, having regard to the purposes for which they are processed, are erased or rectified without delay;

e) kept in a form which permits identification of data subjects for no longer than is necessary for the purposes for which the personal data are processed; personal data may be stored for longer periods insofar as the personal data will be processed solely for archiving purposes in the public interest, scientific or historical research purposes or statistical purposes subject to implementation of the appropriate technical and organisational measures required by the GDPR in order to safeguard the rights and freedoms of individuals; and

f) processed in a manner that ensures appropriate security of the personal data, including protection against unauthorised or unlawful processing and against accidental loss, destruction or damage, using appropriate technical or organisational measures.” 

— GDPR, Article 5 

 

Additionally, Article 5 (2) states:

“the controller shall be responsible for, and be able to demonstrate, compliance with the principles.”

This last piece, known as the accountability principle, states that it is your responsibility to demonstrate compliance.  To do so, you must:

  • Demonstrate relevant policies.
    • Staff Training, Internal Audits, etc.
  • Maintain documentation on processing activities
  • Implement policies that support the protection of data
    • Data minimisation
      • A policy that encourages analysis of what data is needed for processing, and the removal of any excess data, or simply collecting only what is needed, and no more
    • Pseudonymisation
      • A process to make data neither anonymous, nor directly identifying
      • Achieved by separating data from direct identifiers, making linkage to an identity impossible without additional data that is stored separately.
    • Transparency
      • Demonstration that personal data is processed in a transparent manner in relation to the data subject
      • This obligation begins at data collection, and applies throughout the life cycle of processing that data
    • Allow for the evolution of security features going forward.
      • Security cannot be static when faced with a constantly evolving threat environment.
      • Policies must be flexible enough to protect not just from today’s and yesterday’s threats, but from tomorrow’s.

The best laid plans…

Despite one’s adherence to these new policies, and implementation of tight security practices, there is no guarantee the data you are responsible for keeping safe will be absolutely secure.  Data breaches are more or less inevitable. Being aware of this, the GDPR has provisions regarding the reporting of data breaches should (when) they happen.

Not sure how to navigate these waters with your current infrastructure? We can help.

A data breach is a broader term than one may think.  Typically, one thinks of the deliberate or accidental release of data to an outside party (say, a hacker) — and that is indeed a breach — but much more can be considered a breach.

All of the following examples constitute a data breach:

  • Access by an unauthorized third party
  • Loss or theft of storage devices containing personal data
  • Sending personal data to an incorrect recipient, whether intended or not
  • Alteration of personal data without prior authorization
  • Loss of availability, or corruption of personal data

Data breaches must be reported to the relevant supervisory authority within 72 hours of first detection. Should the breach be likely to result in risk to an individual, that individual must also be notified without delay. All breaches, reported or not, must be documented.

Bit off more than you can chew?

This may seem like a lot to take in, and it should be.  The GDPR was designed to expand the privacy rights of all EU citizens, as well as replace the existing regulations of all member states with one, uniform set of regulations.

The good news is, as a U.S. company, you don’t have to take every step towards compliance alone.

The U.S. government, working with the EU, developed a framework to provide adequate protections for the transfer of EU personal data to the United States. This framework, called Privacy Shield, was adopted by the EU in 2016 and has passed its first annual review.

In order to participate in the Privacy Shield program, U.S. companies must:

  • Self-certify compliance with the program
  • Commit to process data only in accordance to the guidelines of Privacy Shield
  • Be subject to the enforcement authority of either:
    • The U.S. Federal Trade Commission
    • The U.S. Department of Transportation

To learn more about Privacy Shield, visit www.privacyshield.gov

How I learned to stop worrying and love the GDPR

Getting compliant with the GDPR may seem like a huge P.I.T.A., but there are real benefits to following this path that extend beyond retaining EU contracts and avoiding hefty fines.  Data privacy is a huge issue world-wide, and being compliant with one of the strictest sets of regulations will help appease clients and customers from all corners of the globe. Even if you don’t have any interaction with EU citizens or organizations, becoming GDPR compliant may still be a great idea.

Compliance forces you to evaluate your systems and processes, ensuring that they are secure and robust enough to survive in the ever-changing landscape in which data privacy resides.  This transforms compliance from a tedious duty to a strong selling point.

Click Here to find out how GigeNET can help you!

Securing Memcached Services

Over the past few weeks, a new DDoS attack vector abusing memcached has become prevalent. Memcached is an object caching system originally built to speed up the dynamic website LiveJournal back in 2003. It does this by caching data in RAM instead of reading it from a hard drive, thus reducing costly disk operations.

Deeper analysis of the security issues:

Memcached was designed to give the fastest possible cache access, and as such it was never meant to be left open on a public network interface. The recent attacks utilizing Memcached take advantage of the UDP protocol and an attack method known as UDP reflection.

An attacker is able to send a UDP request to a server with a spoofed source address, thus causing the server to reply to the spoofed source address instead of the original sender. On top of sending requests towards a server with the intent of “reflecting” them towards another server, attackers are able to easily add to the cache. Because memcached was designed to sit locally on a server, it was never created with any form of authentication. Attackers can connect and add to the cache in order to amplify the magnitude of the attack.
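To get a sense of the amplification involved, note how little the attacker has to send: a memcached UDP request is just an 8-byte frame header followed by a plain-text command, while the response to something like stats can run to kilobytes. Purely for illustration, a minimal probe of that kind (using netcat against a placeholder host you control) would look like this:

printf '\x00\x00\x00\x00\x00\x01\x00\x00stats\r\n' | nc -u -w1 memcached-test.example.com 11211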

The initial release of Memcached was in May of 2003. Since then, the uses of it have expanded greatly, but the original technology has remained the same. While its uses have been expanded, its security features have not.

Below is a sample packet we captured from a server participating in one of these reflection attacks. This is the layer 3 information of the packet; the source IP is spoofed to point to a victim’s server:

[packet capture screenshot: layer 3 headers showing the spoofed source IP]

This is the layer 4 information; Memcached listens on port 11211:

[packet capture screenshot: layer 4 headers showing destination port 11211]

In addition to being usable as a reflector, an unsecured Memcached instance also lets attackers extract highly sensitive data from the cache because of its lack of authentication. All of the data within the cache has a TTL (Time To Live) value before it is removed, but it still isn’t difficult to pull information from.

Below is an example of how easy it is for an attacker to alter the cache on an unsecured server. We simply connected on port 11211 over telnet and were immediately able to make changes to the cache:

[screenshot: telnet session on port 11211 making changes to the cache]
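For illustration, a session of that kind looks roughly like the following (the hostname is a placeholder; set stores a five-byte value with a 900-second TTL, and get reads it back, all without any authentication):

telnet memcached-test.example.com 11211
set pwned 0 900 5
hello
STORED
get pwned
VALUE pwned 0 5
hello
END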

Solution Overview

In order to decide how to best secure Memcached on your server, you must first determine how your services use it. Memcached was originally designed to run locally on the same machine as the web server.

A: If you don’t require remote access, it is best to prevent Memcached from communicating over the network entirely.

B: If you require remote access, it is recommended to whitelist the source IPs of the hosts that need to access it. This way you control exactly which machines can make changes and read from it.

Solution Instructions:

In the case that remote access is not required, it is advised to ensure Memcached binds only to the local loopback address, 127.0.0.1, on startup.

Ubuntu based servers:

sudo nano /etc/memcached.conf

Ensure the following two lines are present in your configuration:

-l 127.0.0.1

This will bind Memcached to your local loopback interface preventing access from anything remote.

-U 0

This will disable UDP for Memcached thus preventing it from being used as a reflector.

Then restart the service to apply the settings:

sudo service memcached restart

CentOS based servers:

nano /etc/sysconfig/memcached

Add the following to the OPTIONS line:

OPTIONS="-l 127.0.0.1 -U 0"

Restart the service to apply the settings:

service memcached restart
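On either distribution, you can confirm the new binding after the restart. Memcached should now show up listening only on 127.0.0.1, and only over TCP:

ss -lnp | grep 11211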

If Memcached needs to be accessed remotely, whitelisting the IPs that are allowed to connect will best secure your server.

Using iptables:

sudo iptables -A INPUT -i lo -j ACCEPT

sudo iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

sudo iptables -A INPUT -p tcp -s IP_OF_REMOTE_SERVER/32 --dport 11211 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT

sudo iptables -P INPUT DROP

Defining a /32 in the above commands specifies a single server that will be allowed access. If multiple servers in a range require access, the CIDR notation of the range can be input instead:

sudo iptables -A INPUT -p tcp -s IP_RANGE/XX --dport 11211 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT
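Keep in mind that iptables rules added from the command line do not survive a reboot on their own. How you persist them varies by distribution; assuming the iptables-persistent package on Ubuntu or the iptables-services package on CentOS, the rules can be saved with:

sudo netfilter-persistent save   # Ubuntu with iptables-persistent

service iptables save   # CentOS with iptables-services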

Using CSF:

nano /etc/csf/csf.allow

Add the following line to whitelist IPs:

tcp|in|d=11211|s=x.x.x.x

You can also specify a range using CIDR:

tcp|in|d=11211|s=x.x.x.x/xx

tcp = the protocol that will be used to access Memcached
in = direction of traffic
d = destination port number
s = source IP address or IP range

Save the file and then restart the service:

csf -r

After whitelisting the IPs allowed to access Memcached, we must rebind the service to use the interface we wish for it to communicate on.

On Ubuntu based servers:

sudo nano /etc/memcached.conf

Change the IP on this line to represent the IP of the interface on your server:

-l x.x.x.x

Then restart the service to apply the settings:

sudo service memcached restart

On CentOS based servers:

nano /etc/sysconfig/memcached

Change the IP following the -l flag to that of your server’s interface:

OPTIONS="-l x.x.x.x -U 0"

Restart the service to apply the settings:

service memcached restart

Conclusion

The best way to secure your server from these vulnerabilities is to prevent Memcached from talking on anything other than the local loopback interface. If the service must be accessed remotely, be sure to adequately secure it using your server’s firewall. Securing your server will not only prevent it from being used in malicious DDoS attacks, but will also ensure that confidential data isn’t compromised. Taking the above actions will help the community as a whole and prevent unwanted bandwidth overages.

Linux Encryption and Backup Tools

If the data you store on your server or other service is important to you, you’d likely prefer that it not end up in the hands of others. If so, you should use the power of cryptography. There are many options to choose from whether you’re running Windows, Linux or BSD, but we’ll be focusing on my favorite Linux-based tools for now. You can choose from encrypting parts of your filesystem to encrypting an entire block device. Whichever you prefer, it’s relatively easy to do if you’re even a little familiar with Linux and can follow tutorials. It doesn’t require you to be a mathematician or cryptography expert.

As a sysadmin, here are my top Linux encryption and backup tools:

EncFS

One of my personal favorites for filesystem encryption is EncFS. It allows you to easily set up encrypted directories, which is incredibly useful for storing off-site backups on systems that you don’t necessarily trust.

For example, you could have plain-text website backups dumped to /backups and then set up EncFS to encrypt that data to /encrypted-backups. You’d then be able to use tools like rsync or rclone to move the data somewhere else, even onto a system that you don’t trust.
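A minimal sketch of that workflow, using EncFS’s reverse mode (which presents an on-the-fly encrypted view of an existing plain-text directory; the remote host here is a placeholder):

encfs --reverse /backups /encrypted-backups

rsync -a /encrypted-backups/ user@offsite.example.com:/srv/backups/

The remote end only ever sees ciphertext, so a compromise there exposes nothing readable as long as your password stays secret.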

Keep in mind, if you don’t have a complex/strong password, your encrypted data is likely unsafe. In the event that you lose data on your local system, you could rsync/rclone the data back from /encrypted-backups and mount it again via FUSE, as long as you have the original password you encrypted the data with.

Duplicity

If you’re familiar with GPG, Duplicity is a great tool to use for encrypted remote/local compressed backups with many features. It’s meant to be a tool for backing up specified directories in increments to save space, but can also be used to perform full backups each time.

With Duplicity you’ll need to create a GPG key and protect it with a strong password. You can then use that key with Duplicity to encrypt and sign the backups. Just like with EncFS, you can use rsync, rclone or another tool to transfer the encrypted backups off-site. The best implementation of Duplicity that I’ve found is backupninja, allowing you to create multiple backup actions with an easy-to-use configuration.
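As a rough sketch of what that looks like (the GPG key ID and SFTP URL below are placeholders), an incremental encrypted backup and a restore with Duplicity might be:

duplicity --encrypt-key ABCD1234 --sign-key ABCD1234 /var/www sftp://user@offsite.example.com/backups/www

duplicity restore sftp://user@offsite.example.com/backups/www /tmp/restore

Run unchanged, the first command creates a full backup on its first run and space-saving incrementals on subsequent runs.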

dm-crypt

Another option is to encrypt your entire block device with dm-crypt + LUKS. Using this tool, all of the data on the block device is encrypted and even someone with local access cannot decipher it.

There are a few exceptions to this. For instance, if the attacker has your root password or can read from memory via a cold boot attack while the system is powered on, then it would be possible to either simply log in or grab the encryption keys from memory. What’s neat about dm-crypt + LUKS is that you can also set it up remotely on your server if you have access to IPMI and boot a recovery image.

Once set up, you can make it prompt you via SSH for a password when the server boots instead of having to type it in locally. LUKS only protects you completely from unauthorized local access while your system is powered off, which is likely the state an attacker with physical access must force. It’s unlikely that your data can be deciphered if you have a strong password. If someone were to compromise your system while the encrypted volume is mounted, however, you are in trouble.
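A bare-bones dm-crypt + LUKS setup on a spare block device looks like this (/dev/sdb1 is a placeholder, and luksFormat will destroy any data already on it):

cryptsetup luksFormat /dev/sdb1

cryptsetup luksOpen /dev/sdb1 securedata

mkfs.ext4 /dev/mapper/securedata

mount /dev/mapper/securedata /mnt/secure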

Remember that encryption doesn’t protect you from a lack of safe security practices on your part!

Be sure to also read my blog, How to secure your chats with Matrix.

Already have enough on your plate? Explore GigeNET’s managed services.

What is the value of a server? In this world of virtual machines and dedicated servers, our customers are becoming more and more removed from the physical components that comprise a server. Everything is easily replaceable — everything except the data contained within the servers.

Countless work hours have gone into making each server unique, with custom set-ups, modified WordPress templates, blog posts going back years, etc.

This is where the value of a server lies, in the data. What is this data worth to you? How do you even begin to measure that?

The data is stored on the server’s hard drives.

And guess what? This is, by far, the most common part of a server to experience failure. So it’s absolutely necessary to create a backup strategy.

So how do you to create the best backup strategy? Where do you begin?

There are two ways to prevent against the effects of data loss from drive failure – prevention and recovery.

On the prevention vector, we focus on RAID: Redundant Array of Independent Disks.

Various RAID configurations can be implemented to allow your data to withstand the loss of one, two, or more drives. There are several different configurations that can be tailored to your specific needs, essentially finding the sweet spot between performance, resilience, and cost that is right for your environment. RAID uses two or more drives to store your data in ways that can not only survive drive loss but can also improve performance.
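As a point of reference, on Linux a software RAID 10 array can be assembled with mdadm in a single command (the device names are placeholders, and creating the array destroys any existing data on those drives):

mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1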

However, even RAID can only protect from so many simultaneous failures. While it certainly helps prevent data loss in most cases, it doesn’t reduce the risk to statistical nothingness. Servers are susceptible to multi-drive failure, which is more common than one would expect.

When setting up a server with multiple drives, often these drives are all from the same batch, if they are installed new. If one drive has a flaw, it is likely that this flaw is shared by the other drives in that batch, making the loss of 2 or more drives in a short time more likely than one would expect. In addition to often coming from the same batch, drives in a server are exposed to the same environmental factors, as well.

Furthermore, in addition to the fear of multiple drive failures, files and filesystems can become corrupt — either accidentally or maliciously. In this case, no RAID level will help you out of this jam.

This is where our second vector comes into play – recovery.

Backing up your data to an external system and keeping multiple recovery points is one of the best ways to mitigate the effect of unpreventable data loss. No matter how robust your data storage plan is, it can fail.

Our R1Soft backup service is set up to take daily incremental backups of your data.

Your data got corrupted on Tuesday? No Problem! Just restore the data from Monday.

You lost the “wrong two” drives in your RAID 10 array? Easy! We’ll simply replace the bad drives, and any others that don’t test 100% stable, then perform a bare metal recovery of your OS and data.

Every client with the service has full control and visibility of their backups with the ability to review, edit and download their backups using a personalized interface.

So, should you choose RAID or R1Soft backups?

While RAID alone is a great option that can prevent the downtime associated with recovery on single or even multiple-drive failures (as long as they are the “right” drives that fail, but that is another topic), it is not failproof.

On the other hand, backups alone can require lengthy downtime for recovery to occur, and the backup is only as current as the last recovery point.

This is why, for protecting your cannot-lose data, we recommend the dual-vectored approach of prevention and recovery. Significantly reduce the need to recover from backups by using a robust RAID, but have those backups on hand for when you do need them.

If you would like to add RAID or R1Soft backups (or better yet, both) to your current setup, chat with our specialists.


Three banks plundered with DDoS distraction.

Criminals have recently hijacked the wire payment switch at several US banks to steal millions from accounts, a security analyst says.

Gartner vice president Avivah Litan said at least three banks were struck in the past few months using “low-powered” distributed denial-of-service (DDoS) attacks meant to divert the attention and resources of banks away from fraudulent wire transfers simultaneously occurring.

The losses “added up to millions [lost] across the three banks”, she said.

“It was a stealth, low-powered DDoS attack, meaning it wasn’t something that knocked their website down for hours.”

The attack against the wire payment switch — a system that manages and executes wire transfers at banks — could have resulted in even far greater losses, Litan said.

It differed from traditional attacks which typically took aim at customer computers to steal banking credentials such as login information and card numbers.

While it was unclear how the attackers gained access to the wire payment switch, fraudsters could have targeted bank staff with phishing emails to plant malware on bank computers.

RSA researcher Limor Kessem said she had not seen the wire payment switch attacks in the wild, but the company had received reports of the attacks from customers.

“The service portal is down, the bank is losing money and reliability, and the security team is juggling the priorities of what to fix first,” she said.

“That’s when the switch attack – which is very rare because those systems are not easily compromised [and require] high-privilege level in a more advanced persistent threat style case – takes place.”

Litan declined to name the victim banks but said that the attacks did not appear linked to recent hacktivist-launched DDoS attacks against US banks since they were entirely financially driven.

Researchers at Dell SecureWorks in April detailed how DDoS attacks were used as a cover for fraudulent attacks against banks.

The researchers said fraudsters were using Dirt Jumper, a $200 crimeware kit that launches DDoS attacks, to draw bank employees’ attention away from fraudulent wire and ACH transactions ranging from $180,000 to $2.1 million in attempted transfers.

Last September, the FBI, Financial Services Information Sharing and Analysis Center, and the Internet Crime Complaint Center, issued a joint alert about the Dirt Jumper crimeware kit being used to prevent bank staff from identifying fraudulent transactions.

In the alert, the organisations said criminals used phishing emails to lure bank employees into installing remote access trojans and keystroke loggers that stole their credentials.

In some incidents, attackers who gained the credentials of multiple employees were able to obtain privileged access rights and “handle all aspects of a wire transaction, including the approval,” the alert said – a feat that sounds daringly similar to recent attacks on the wire hub at banks.

“In at least one instance, actors browsed through multiple accounts, apparently selecting the accounts with the largest balance.”

Litan suggested that financial institutions “slow down” their money transfer system when experiencing DDoS attacks in order to minimise the impact of such threats.

This article originally appeared at scmagazineus.com


Izz ad-Din al-Qassam Cyber Fighters, the group behind three phases of distributed-denial-of-service attacks against banks since last September, now says more attacks against U.S. banks are on the way. The group made its announcement in a July 23 posting on the open forum Pastebin.

al-Qassam Cyber Fighters hasn’t attacked since the first week of May, when it announced it was halting attacks for the week, in honor of Anonymous’ Operation USA. But the group has remained quiet since then, apparently bringing to a close its third phase of attacks, which began March 5 (see New Wave of DDoS Attacks Launched).

Experts who’ve been following the group’s DDoS attacks say this fourth phase was expected and likely will follow the pattern of earlier phases.

“The QCF always start out a phase of Operation Ababil with something new,” says Mike Smith of online security provider Akamai Technologies. “It might be new targets, a larger botnet, new techniques, etc. This is how they try to evade the protections that the targets have deployed. They’ve also demonstrated a bit of showmanship in the past with announcing the attack before they resumed hostilities, and this could be another tactic to generate more press buzz.”

‘A Bit Different’
In its most recent post, al-Qassam Cyber Fighters says: “Planning the new phase will be a bit different and you’ll feel this in the coming days.”

John LaCour, CEO of cyber-intelligence firm PhishLabs, says the group’s plans for different attacks are in response to banking institutions’ heightened DDoS-mitigation strategies. “Major banks had improved their defenses prior to the quiet period,” he says. “If new types of attacks appear, then banks will need to be prepared to respond quickly to prevent significant impact to their online services.”

Based on the impact of the first three phases of DDoS attacks, LaCour notes: “Today’s announcement should put financial organizations on high alert for future attacks seeking to disrupt their online operations.”

In its post, al-Qassam also says, “The break’s over and it’s now time to pay off. After a chance given to banks to rest awhile, now the Cyber Fighters of Izz ad-Din al-Qassam will once again take hold of their destiny.”

Brobot’s Growth
So far, the only activity DDoS experts have noted is growth and maintenance of the botnet, known as Brobot, used in the previous three phases. No attack activity against banking institutions was apparent as of the afternoon of July 23.

Although experts did not directly link PDF download attacks waged in late June against two mid-tier banks to al-Qassam, some speculated those may have been a test for the next phase of attacks (see Another Version of DDoS Hits Banks).

LaCour told Information Security Media Group in early July that new code files linked to Brobot had been identified on compromised web servers the hacktivists had taken over. “The new code we see on these web servers is one of the strong indicators that the botnet is being rebuilt,” he pointed out.

The code behind the malware had changed and included configurations not seen in the first three phases, LaCour said.

Multiple Phases
Phase three of the attacks, which ran for eight weeks, lasted longer than the earlier phases. The first campaign, which began Sept. 18, lasted six weeks. The second campaign, which kicked off Dec. 10, lasted only seven weeks.

Experts won’t speculate about how long this fourth phase might last, although al-Qassam does include a complex formula in its July 23 post to hint at how long the attacks could drag on.

But financial fraud expert Avivah Litan, an analyst with the consultancy Gartner Inc., says the timing of this latest announcement is not surprising, given that she believes there’s little doubt these attacks are backed by Iran.


Numericable is a cable TV company operating in France, Belgium and Luxembourg. Rex Mundi claimed to have stolen customer data and demanded €22,000 for its return. Numericable declined, and denied that the hackers had the data.

Rex Mundi (king of the world) is a hacker group that makes a habit of hacking for extortion. Last week, Numericable Belgium‘s IT manager received an email saying that the hackers had accessed a database of 6000 new customers, demanding a €22,000 ransom for the data.

Numericable’s response was threefold. It refused to pay the ransom, denied that the hackers could obtain the customer data, and referred the matter to the police. “Hackers have managed to get the data requests for information through our website, but have failed to obtain the data from our customers for the reason that we all separated and the data were not available via the site” (Google translation), Martial Foucart, CIO at Numericable, told RTL.

Rex Mundi responded first on Twitter. “So, Numericable claims that we didn’t steal any data… Our dump tomorrow will be rather humiliating for them then.”

According to Softpedia, Rex Mundi followed up by posting the database to dpaste.de (it has since been ‘removed’). An accompanying note apparently laid the blame on Numericable. “In life, when someone makes a mistake, especially a mistake that could potentially have grave consequences for other people, you would expect that person to man up and own up to it. But not Numericable.”

In Rex Mundi’s logic, Numericable made the mistake (in not securing the data) and then refused to ‘man up’ – and pay the price.

Direct extortion is a growing motivation for cybercriminals. Ransomware, or the ‘police trojan,’ is used to extort money directly from users. The threat of a DDoS attack is used to extort money from both large and small companies. And the threat of data leaks, such as in this case, is simple blackmail. On Tuesday this week, Rex Mundi separately announced that it had breached a Belgian recruitment agency.

However, “More often than not these blackmail threats go unreported,” commented Ashley Stephenson, CEO of Corero. We only tend to hear about them, he added, “when a threat is received and a decision taken to ignore it.”

Meanwhile, Numericable is facing a separate concern: the European Commission has launched an investigation into whether it received unfair aid from France in receiving the French cable infrastructure. “The Commission has doubts that such aid could be found compatible with EU rules,” said an EC statement.

In September 2012, six major American banks came under attack by hackers, and customers could not gain access to their accounts or pay bills online. The attacks did not affect customer bank accounts, but the rash of so-called distributed denial-of-service, or DDOS, attacks such as these against major financial institutions has forced them to step up their game in combating such threats.

DDOS attacks are becoming more frequent and sophisticated, according to the 2013 annual report of the Financial Stability Oversight Council. The council and cybersecurity experts have outlined a number of ways the financial service industry can mitigate the risk. They also say consumers need to be better educated about cybersecurity.

Danny Miller, national practice leader for cybersecurity and privacy at Grant Thornton LLP, worries that at some point, cyberattackers will begin to disrupt the ability of targeted banks to conduct business.

“They don’t really have to shut down a bank’s website for a long period of time,” Miller says. “What they could do — and what it appears their strategy is — is to do it using guerilla tactics. In other words, they’re doing small, concentrated attacks that make it look to the rest of the world that the banks are not able to control their infrastructure and protect themselves.”

Sneaky hackers

Miller says hackers have developed sneakier methods for doing their worst damage. For example, they’ll use insiders to steal information from one department at a bank while security experts are distracted by a cyberattack on another department.

Individual consumers and investors add to the problem with risky behavior such as accessing their personal banking information via unsecured Wi-Fi connections and inadvertently leaving clues about their passwords — think birthdays and pet names — on social media sites, says Jerry Irvine, a member of the National Cyber Security Task Force.

A joint effort of the Department of Homeland Security and the U.S. Chamber of Commerce, the task force involves members of the public and private sectors sharing information about security risks and prevention strategies, says Irvine, who is chief information officer of Prescient Solutions, an information technology outsourcing firm in the Chicago area.

The Financial Stability Oversight Council report encourages these types of public-private partnerships, along with better cooperation with the banking sector and 15 other industries to help decrease cyberthreats.

Cybersecurity legislation needed

In his May 2013 testimony before the Senate Committee on Banking, Housing and Urban Affairs, Treasury Secretary Jacob Lew called for a bipartisan effort to pass comprehensive cybersecurity legislation that would enhance the sharing of information among banks.

Todd McClelland, an attorney with Alston and Bird LLP in Atlanta, advises financial institutions, retailers, payment processors and other clients on information security issues. His firm represents several clients who have a stake in proposed cybersecurity legislation.

“It seems that there’s always some bill pending in front of Congress legislating additional cybersecurity standards, additional risk assessments or the like,” McClelland says.

A February 2013 presidential executive order tasked the National Institute of Standards and Technology — an agency of the U.S. Department of Commerce — with producing a new framework to improve cybersecurity for the nation’s critical infrastructure. One of the agency’s goals is to standardize the measures financial institutions use to control cybersecurity risks. The NIST aims to have the final framework for guidelines ready to roll out by February 2014.

Miller says each bank needs to first identify its most important information and then focus on securing that information from both external and internal threats. As a consultant, Miller advises banks to securely delete any customer information they don’t need to store, while tailoring their security policies to fit each category of data they decide to keep.

As for consumers, Miller says, “If you don’t need to share information … don’t.”

Password tips

Make sure you understand how the financial institution is using your information, who it is sharing it with and how long it plans to keep it in its database, Miller says. And if you’re able to opt out of having your information stored, you should.

“The longer they keep it, the more likely it is going to be stolen and exposed,” Miller says.

Irvine adds these tips:

  • Use a complex password of 10 or more characters. It should be alphanumeric, uppercase and lowercase, and have special characters.
  • Be wise about selecting and answering security questions. If a site asks for your mother’s maiden name, which a hacker might easily discover by checking out your Facebook page, use another one. Pick someone you haven’t seen since elementary school. You can lie on your security questions — just remember them.
  • Don’t create the same password for all of the sites you need to access.

“If you use the same password on Facebook and LinkedIn and other social networking sites and then you use it on your banking site, you might as well just be taking the money out and giving it to the hackers yourself,” Irvine says.

Copyright 2013, Bankrate Inc.

Zimbabweans were knocked offline and saw data wiped because of a slew of cyber attacks last week during the elections, TechWeekEurope learns.

Cyber Repression: In the weeks leading up to and following Zimbabwe’s election of last Thursday, Zimbabweans were hit by significant Internet-based attacks. In some, they could have just been the victims of collateral damage. In others, they were targeted directly.

Two massive distributed denial of service (DDoS) attacks against hosting providers took place this weekend. They took a slew of sites offline, a number of which were reporting heavily on the hugely controversial Zimbabwean election, TechWeekEurope has learned.

One of the hosting providers, GreenNet, which describes itself as an ethical hoster and ISP, with Privacy International and Fair Trade Africa amongst its customers, suspects it may have been hit because of goings on in Zimbabwe. One of its clients is the Zimbabwe Human Rights Forum, which told TechWeekEurope it believes it may have been the subject of a separate hack earlier in the week.

Human rights group hit

The coordinator of the international office of the Zimbabwe Human Rights Forum said he was alerted to the DDoS by an employee of the Congressional Research Service in Washington DC, who had been looking at the ZHRF’s election “situation-room”, a live feed updating users on the political situation in the African nation.

At 6pm Wednesday, just before the DDoS started, the coordinator noticed all the information on that feed had mysteriously been wiped. “I lost information I had gathered for eight hours,” he said. “All of the information I had recorded on 30 July in the evening through to lunchtime the next day had been wiped.

“Even our website designer and engineer couldn’t really explain what happened. Then, whilst we were still talking about the wiping, we realised the site wasn’t working.

“It is curious because we have never had this problem before in the past 10 years.”

He claimed he was putting out the most comprehensive feed on the election, drawing from a variety of sources for users, and that’s why he could have been a target.

Zimbabweans have set up numerous sites, to draw attention to fears of rigging, violent repression and threats that had blighted the 2008 election.

One, electionride.com, has been taken offline. On its Facebook page on election day, it claimed to have been compromised.

Last month, Kubatana.net, which has been disseminating information via various electronic means, said it had been blocked from sending bulk text messages. Its mobile provider Econet Wireless had been told by the government’s telecoms regulator to enforce the block, it was claimed.

“Kubatana.net views the interference in our work as obstructive, repressive and hostile. It is our opinion that as we approach the July poll the Zimbabwean authorities are increasing their control of the media,” the organisation said on its website on 25 July.

This election has proven just as controversial as 2008’s, with the two main parties at loggerheads over the result, which went strongly in favour of President Robert Mugabe. Opposition leader Morgan Tsvangirai, of the Movement for Democratic Change (MDC) party, has claimed the vote was rigged, whilst the official figures indicate Mugabe won with a significant majority.

MDC members have now claimed they were the victims of physical attacks by Mugabe supporters. Zanu-PF, Mugabe’s party, has denied the claims.

GreenNet taken out

GreenNet is only just recovering today, with some customer websites still down, having reported the strike on Thursday morning, the day after Zimbabweans headed to the polls. It appeared to be a powerful attack – TechWeek understands it was at the 100Gbps level – aimed at GreenNet’s co-location data centre provider. Its upstream provider Level 3 subsequently did not let GreenNet route through its infrastructure. Level 3 was not available for comment.

Cedric Knight, technical consultant at GreenNet, said the company suspected the massive attack, which knocked all its 3,000 customers offline, with email also disrupted, could have been launched because of the Zimbabwean organisations running off its infrastructure.

However, it could not be certain, saying only that it was one GreenNet customer that was targeted. Many of its customers from environmental, gender equality and human rights groups have powerful enemies.

It believes a government entity or a private organisation was responsible. A tweet from GreenNet earlier this week read: “The nature and magnitude of this attack does suggest corporate or governmental sponsors, likely a very unsavoury one.”

The DDoS that hit GreenNet was not a crude attack using a botnet to fire traffic straight at a target port, but a DNS reflection attack using UDP packets, which can generate considerable power. DNS reflection sees the attacker spoof their IP address to pretend to be the target and send queries to a DNS server, which then sends back large amounts of traffic to the victim.

HostGator, a huge hosting provider in the US, also suffered a big DDoS hit over the weekend. That took out popular Zimbabwean news service Nehanda Radio, amongst many others. Lance Guma, managing editor of the organisation’s website, said he was not sure what exactly had happened. But he has become used to attempted cyber attacks.

“Every time you have a big story, it depends whether the government want people to read it or not,” he said, admitting it was sometimes hard to tell if a story had just been hugely popular, causing the server to crash, or if it was a genuine attack.

Nehanda Radio also receives plenty of threats via email: “We received a lot of those this last week. Obviously we never open any,” Guma added.

“We’ve been receiving a lot of election reports and then there’s a link you’re meant to click, but we never click anything because you can tell the subject matter is dodgy.

“They try all that… we normally just open emails from trusted sources.”

Guma said Mugabe’s government is fairly useless when it came to anything to do with technology, but China is believed to be assisting the nation’s cyber police. “You can just outsource this stuff now,” he added.

This article is part of TechWeek’s Cyber Repression Series – check out the first article on attacks stemming from China on spiritual activists and military bodies and the second on IP tracking in Bahrain.

Turkish security researcher claims to have found flaw in system, which has been offline since Thursday as company ‘rebuilds and strengthens’ security around databases

Apple says its Developer portal has been hacked and that some information about its 275,000 registered third-party developers who use it may have been stolen.

The portal at developer.apple.com had been offline since Thursday without explanation, raising speculation among developers first that it had suffered a disastrous database crash, and then that it had been hacked.

A Turkish security researcher, Ibrahim Balic, claims that he was behind the “hack” but insisted that his intention was to demonstrate that Apple’s system was leaking user information. He posted a video on YouTube which appears to show that the site was vulnerable to an attack, adding: “I have reported all the bugs I found to the company and waited for approval.” A screenshot in the video showed a bug filed on 19 July – the same day the site was taken down – saying “Data leaks user information. I think you should fix it as soon as possible.”

The video appears to show developer names and IDs. However, a number of the emails belong to long-deprecated services, including Demon, Freeserve and Mindspring. The Guardian is trying to contact the alleged owners of the emails.

Balic told the Guardian: “My intention was not attacking. In total I found 13 bugs and reported [them] directly one by one to Apple straight away. Just after my reporting [the] dev center got closed. I have not heard anything from them, and they announced that they got attacked. My aim was to report bugs and collect the datas [sic] for the purpose of seeing how deep I can go with it.”

Apple said in an email to developers late on Sunday night that “an intruder attempted to secure personal information of our registered developers… [and] we have not been able to rule out the possibility that some developers’ names, mailing addresses and/or email addresses may have been accessed.”

It didn’t give any indication of who carried out the attack, or what their purpose might have been. Apple said it is “completely overhauling our developer systems, updating our server software, and rebuilding our entire database [of developer information].”

Some people reported that they had received password resets against their Apple ID – used by developers to access the portal – suggesting that the hacker or hackers had managed to copy some key details and were trying to exploit them.

If they managed to successfully break into a developer’s ID, they might be able to upload malicious apps to the App Store. Apple said however that the hack did not lead to access to developer code.

The breach is the first known against any of Apple’s web services. It has hundreds of millions of users of its iTunes and App Store e-commerce systems. Those systems do not appear to have been affected: Apple says that they are completely separate and remained safe.

Apple’s Developer portal lets developers download new versions of the Mac OS X and iOS 7 betas, set up new devices so they can run the beta software and access forums to discuss problems. A related service for developers using the same user emails and passwords, iTunes Connect, lets developers upload new versions of apps to the App Store. While developers could log into that service, they could not find or update apps and could not communicate with Apple.

But if the hack provided access to developer IDs which could then be exploited through phishing attacks, there would be a danger that apps could be compromised. Apps are uploaded to the App Store in a completed form – so hackers could not download “pieces” of an existing app – and undergo a review before being made publicly available.

High-profile companies are increasingly the target of ever more skilful hackers. In April 2011, Sony abruptly shut down its PlayStation Network used by 77 million users and kept it offline for seven days so that it could carry out forensic security testing, after being hit by hackers – who have never been identified.

It has also become a risk of business for larger companies and small ones alike. On Saturday, the Ubuntu forums were hacked, and all of the passwords for the thousands of users stolen – although they were encrypted. On Sunday, the hacking collective Anonymous said that it hacked the Nauruan government’s website.

On Sunday, the Apple Store, used to sell its physical products, was briefly unavailable – reinforcing suspicions that the company was carrying out a wide-ranging security check. The company has not commented on the reasons for the store going down.

Marco Arment, a high-profile app developer, noted on his blog before Apple confirmed the hack that “I don’t know anything about [Apple’s] infrastructure, but for a web service to be down this long with so little communication, most ‘maintenance’ or migration theories become very unlikely.”

He suggested that the problem could either be “severe data loss” in which restoring from backups has failed – but added that the downtime “is pretty long even for backup-restoring troubles” – or else “a security breach, followed by cleanup and increased defenses”.

Of the downtime, he said “the longer it goes, especially with no statements to the contrary, the more this [hacking hypothesis] becomes the most likely explanation.”

About Graeme Caldwell — Graeme works as an inbound marketer for InterWorx, a revolutionary web hosting control panel for hosts who need scalability and reliability. Follow InterWorx on Twitter at @interworx, Like them on Facebook and check out their blog, http://www.interworx.com/community.

An extremely hard-to-find backdoor that exposes web users to malware infection has been discovered in the wild by security researchers. The Linux/Cdorked.A backdoor uses a number of advanced methods to avoid detection by the techniques normally employed by system administrators, and is estimated to be present on hundreds of machines.

The most recent of a series of serious Apache exploits discovered over the last few weeks, Linux/Cdorked.A is particularly pernicious because, in addition to providing a platform from which the Blackhole toolkit can be used against target machines, it makes almost no easily detectable changes to infected systems. The usual remediation techniques employed by system administrators are likely to simply destroy evidence of infection.

The backdoor stores none of its configuration files on disk, instead using shared memory to store its instructions and configuration. The only evidence on the filesystems of infected machines is a modified HTTP daemon binary. The backdoor receives its instructions via obfuscated URLs that Apache does not log and is capable of receiving 70 different instructions, indicating a comprehensive and fine grained control capability.

In addition to control via URL, the modified server binary also contains a reverse connect backdoor that can be triggered by a URL containing hostname and port data to connect to a shell session that the attacker controls.

Linux/Cdorked.A redirects clients to machines that contain malware payloads, but makes itself even more difficult to detect by avoiding redirecting clients that meet conditions indicating that the connecting machine may be used by a site’s administrators. For example, it won’t redirect if the URL or hostname contains strings like “support” or “adm”. An administrator visiting an infected site is likely to see no evidence of the site having been exploited. Additionally, the backdoor sets a cookie on clients it redirects and won’t redirect the same client again, making it more difficult still to determine the source of infection.

If an administrator suspects that their server has been infected, they can use a tool created by ESET, whose researchers made the initial discovery, to dump the shared memory used by the backdoor for analysis.

It’s not clear how servers become infected initially, but all system administrators should employ industry best practices to ensure that their sites are not easily exploited, including having the most recent version of the Apache server installed and verifying that users with SSH access to servers are using secure passwords, as there is some indication that brute force attacks on SSH servers may be responsible.

Our CTO here at GigeNET, Ameen Pishdadi, was recently interviewed by Net-Security.org. In this interview he discusses the various types of DDoS attacks, tells us who is at risk, tackles information gathering during attacks, lays out the lessons that he’s learned when he mitigated large DDoS attacks, and more.

Read the full article on the Net-Security.org website

Attacks on computer systems are on the rise. If a hacker gets into a system and steals credit card numbers, customer data, or Social Security numbers, it can be financially devastating for a company. Businesses can lose most of their customers when those customers no longer trust them with their personal and financial information. For this reason, it is vital that a business stays ahead of the web criminals. The question is: how much will you pay for security?

The Costs are Greater if you do Nothing

If you do not acquire effective website security and your server is breached, you can pay immense costs, which include losing customers and suffering serious loss of sales. If you have an online business, customer data and financial information could be stolen. The result can be lawsuits and loss of reputation, which could be financially devastating. Deciding on the security measures that you will implement will depend on the type of website you have, such as a large corporation website or a small online store offering select products or services.

Generally, you have to consider measures such as security penetration tests, virus scanners, firewalls, intrusion prevention technology, routine security assessments, phishing and malware protection, anti-virus protection, and anti-DDoS software. You also have to make sure these security systems are kept up to date, and you will need to implement an office security policy for your employees.

Lessening the Risks

Security prevention means reducing the risks. When deciding what to include in your hosting security plan, consider the following: regulatory compliance, your security breach history, industry standards, and the size of your network and systems. In addition, weigh the risks to your infrastructure, code, and applications, and how susceptible your system is to URL manipulation, SQL injection, and cross-site scripting.

The impact of a security breach can be devastating to a business, so it is essential to budget for a quality, all-inclusive security plan. Implementing an effective security system can be expensive; however, the cost of a breach can destroy a business. A good security system also gives you peace of mind, knowing that your systems and data are protected at all times.

Due to the increasing number of DDoS attacks, it is vital that businesses implement a diverse number of security measures in order to protect their websites and data from a wide range of security threats.

Five ways to protect against DDoS attacks:

  1. Vulnerability Scanning and Penetration Testing: Prevention is the key to website security, and vulnerability scanning is an effective prevention tool. A vulnerability scanner checks a site for security vulnerabilities, and the results allow administrators to shore up the weak spots, for example by improving firewall rules. Penetration testing likewise helps to identify weaknesses in areas such as application code and browser scripts (a minimal scan sketch follows this list).
  2. DDoS Protection Software: A DDoS (Distributed Denial of Service) attack takes place when a server is so overwhelmed with tasks and requests that it can no longer function properly. A DDoS attack exhausts a resource such as storage capacity, bandwidth, or processing power, leaving none of that resource for legitimate traffic. DDoS protection software runs on existing hardware and analyzes incoming traffic; when it detects malicious packets, it filters them out, stopping a traffic-flood attack (a crude filtering sketch also follows this list).
  3. Application Firewalls: A web application firewall sits between the client browser and the web server, where it analyzes HTTP traffic, prevents data leaks, and blocks web attacks. It is an effective way of stopping attacks at the application layer.
  4. Browser Security Tools: Make sure your browser has protections such as a built-in XSS filter to minimize the risk of XSS attacks.
  5. Application Whitelists: Implement a policy of approved applications through the use of application whitelists.
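
As a concrete example of the scanning described in item 1, freely available tools can show you roughly what an attacker sees. A minimal sketch using nmap and nikto; the hostname is a placeholder, and you should only scan systems you own or are authorized to test:

# Service/version scan of the common port range on your own server.
nmap -sV -p 1-1024 server001.example.com

# Basic web server checks for outdated versions and risky default files.
nikto -h http://server001.example.com/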
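
And as a crude illustration of the per-source filtering described in item 2, even the Linux firewall alone can brake a simple connection flood. A minimal sketch using the iptables "recent" module; the 20-connections-in-60-seconds threshold is an arbitrary example, not a recommendation:

# Record each new TCP connection to port 80 by source address...
iptables -A INPUT -p tcp --dport 80 -m state --state NEW -m recent --set
# ...and drop any source that opens more than 20 new connections in 60 seconds.
iptables -A INPUT -p tcp --dport 80 -m state --state NEW \
    -m recent --update --seconds 60 --hitcount 20 -j DROP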

When securing your website, prioritize security tools that are affordable and still provide an effective level of protection. A security program that both detects and prevents a diverse range of attacks will go a long way toward stopping DDoS attacks against your website.


DDoS Protected Hosting

Distributed Denial of Service (DDoS) attacks have become more prevalent and are now considered among the most serious attacks against a web server. DDoS attacks have not just taken websites temporarily offline, but have shut websites down for days. Because of malicious campaigns such as ‘Operation Payback,’ more enterprises are taking these attacks seriously and looking for effective anti-DDoS technology. Efficient anti-DDoS technologies are now available to safeguard web servers, and DDoS protected hosting is an efficient and affordable solution for preventing malicious DDoS attacks.

DDoS protected hosting protects your website from DDoS attacks, with the objective of responding to an attack using DDoS prevention measures. DDoS attacks normally operate by driving an overwhelming amount of web traffic at a targeted server until it can no longer function properly and stops working, and the authentic traffic is lost. A sudden spike in unfiltered IP traffic is usually an indication that a DDoS attack is making its way into the network. Anti-DDoS software starts filtering the traffic immediately and keeps filtering until it slows to normal levels, which indicates that the attack has been mitigated; legitimate traffic continues to flow undisrupted while the filtering remains in place.
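
What such a spike looks like from the server side can be approximated with standard tools. A minimal sketch that counts half-open TCP connections per remote address, one rough indicator of a SYN-flood in progress:

# Count half-open (SYN_RECV) connections per remote address; a sudden jump
# concentrated on a few sources is a classic flood signature.
netstat -ant | awk '/SYN_RECV/ {print $5}' | cut -d: -f1 | sort | uniq -c | sort -rn | head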

By placing a website behind a ProxyShield mitigation system, DDoS attacks that would otherwise cause extended periods of downtime are effectively stopped. Businesses benefit from complete protection for their website IP address and automatic detection and filtering of DoS/DDoS attacks. DDoS protected hosting gives clients the most current technology to ensure their websites are protected from malevolent actors. When choosing DDoS protected hosting, it is important to understand the level of protection the service provides, to ensure that you have the most reliable DDoS protection available.

Website downtime caused by Distributed Denial of Service (DDoS) attacks can cost your business hundreds of thousands or even millions of dollars. Today, DDoS attacks aimed at shutting down websites have become one of the most costly computer crimes. If you run a growing e-commerce site, it is essential that you have complete DDoS attack protection, and DDoS protected hosting is the most efficient and most affordable DDoS security solution.