Web Security


There is a lot of noise these days about this soon-to-be-enforced EU regulation, the GDPR (General Data Protection Regulation), making the topic hard to miss — but how much do you understand about the GDPR, and to what extent can it impact your U.S.-based business?

What is this GDPR thing, and why should you care?

Adopted by the European Union on April 27th, 2016, and scheduled to become enforceable on May 25th, 2018, the GDPR is a regulation designed to greatly strengthen an EU citizen’s control over their own personal data. In addition, the regulation is meant to unify the myriad of regulations dealing with data protection and data privacy across member states. Finally, its reach also extends to the use and storage of data by entities outside of the EU (Spoiler Alert! This is the part that affects us).

Enforcement of the provisions within GDPR is done via severe penalties for non-compliance, with fines of up to €20 million or 4% of worldwide annual revenue (whichever is greater). Now, as a non-EU entity, you may think that your company won’t be subject to these fines, but that is incorrect. As a U.S. company that collects or processes the personal data of EU citizens, you fall under the authority and jurisdiction of EU regulators, who can, with the aid of international law, levy fines for non-compliance.

In addition, your EU-based clients can be held accountable for providing personal information to a non-compliant third party (your company). This is a strong incentive for EU-based citizens and companies to insist on working only with GDPR-compliant third parties, which could cost your company all of its EU-based business.

As you will soon realize, the GDPR is a vast set of regulations, with a large scope and sharp teeth. I cannot possibly go into enough detail in a blog post to map out a roadmap towards compliance, nor is that my goal. If that is what you are looking for in a blog post, well, maybe you shouldn’t be responsible for anyone’s personal data….

No, my intent here is to demonstrate the importance of the GDPR, hopefully convince you to take it seriously and start down the road to compliance, and finally to point you in the right direction to start your journey.

The expanding scope

The GDPR expands the definition of personal data in order to widen the scope of its protections, aiming to establish data protection as a right of all EU citizens.  

The following types of data are examples of what will be considered personal data under the GDPR:

(Image: examples of personal data under the GDPR)

Does your company collect, store, use, or process anything the GDPR considers personal data relating to an EU citizen? If you have any EU clients or customers, or even just market to anyone in the EU, it is unlikely you can avoid being subject to the GDPR.

The EU is seeking to make data privacy for individuals a fundamental right, broken down into several more-precise rights:

  • The right to be informed
      • A key transparency issue of the GDPR
      • Upon request, individuals must be informed about:
        • The purpose for processing their personal data
        • Retention periods for their personal data
        • All 3rd parties with which the data is to be shared
      • Privacy information must be provided at the time of collection
        • Data collected from a source other than the individual extends this requirement to within one month
      • Information must be provided in a clear and concise manner.
  • The right of access
      • Grants access to all personal data and supplementary information
      • Includes confirmation that their data is being processed
  • The right to rectification
      • Grants the right to correct inaccurate or incomplete information
  • The right to erasure
      • Also known as “the right to be forgotten”
      • Allows an individual to request the deletion of personal data when:
        • The data is no longer needed for the purpose for which it was originally collected
        • Consent is withdrawn
        • The data was unlawfully collected or processed
  • The right to restrict processing
      • This blocks processing of information, but still allows for its retention
  • The right to data portability
      • Allows an individual’s data to be moved, copied or transferred between IT environments in a safe and secure manner.
      • Aimed at giving consumers access to services that can find better deals, help them better understand their spending habits, etc.
  • The right to object
      • Allows an individual to opt-out of various uses of their personal data, including:
        • Direct marketing
        • Processing for the purpose of research or statistics
  • Rights in relation to automated decision making and profiling
    • Limits the use of automated decision making and profiling using collected data

(Image: GDPR data privacy rights)

Sprechen Sie GDPR?

Before diving deeper, it is important to understand some key terms used by the regulation.

The GDPR applies to what it calls “controllers” and “processors.”  These terms are further defined as Data Controllers (DCs) and Data Processors (DPs).  The GDPR applies differently in some areas to entities based upon their classification as either a DC or as a DP.

  • A Controller is an entity which determines the purpose and means of processing personal data.
  • A Processor is an entity which processes personal data on behalf of a controller.

What does it mean to process data?  In this scope, it means:

  • Obtaining, recording or holding data
  • Carrying out any operation on the data, including:
    • Organization, adaptation or alteration of the data
    • Retrieval, consultation or use of the data
    • Transfer of data to other parties
    • Sorting, combining or removal of the data

The Data Protection Officer, or DPO, is a role set up by the GDPR to:

  • Inform and advise the organization about the steps needed to be in compliance
  • Monitor the organization’s compliance with the regulations
  • Be the primary point of contact for supervisory authorities
  • Be an independent, adequately resourced expert in data protection
  • Report to the highest level of management, while not being part of the management team

The GDPR requires any organization that is a public authority, or one that carries out certain types of processing activities (such as processing data relating to criminal convictions and offences), to appoint a DPO.

Even if the appointment of a DPO for your organization is not deemed necessary by the GDPR, you may still elect to appoint one anyway.  The DPO plays a key role in achieving and monitoring compliance, as well as following through on accountability obligations.

The Nitty Gritty

In addition to expanding the definition of personal data and granting individuals broad rights governing the use of that data, the GDPR imposes a number of requirements on organizations, mandating that personal data shall be:

“a) processed lawfully, fairly and in a transparent manner in relation to individuals;

b) collected for specified, explicit and legitimate purposes and not further processed in a manner that is incompatible with those purposes; further processing for archiving purposes in the public interest, scientific or historical research purposes or statistical purposes shall not be considered to be incompatible with the initial purposes;

c) adequate, relevant and limited to what is necessary in relation to the purposes for which they are processed;

d) accurate and, where necessary, kept up to date; every reasonable step must be taken to ensure that personal data that are inaccurate, having regard to the purposes for which they are processed, are erased or rectified without delay;

e) kept in a form which permits identification of data subjects for no longer than is necessary for the purposes for which the personal data are processed; personal data may be stored for longer periods insofar as the personal data will be processed solely for archiving purposes in the public interest, scientific or historical research purposes or statistical purposes subject to implementation of the appropriate technical and organisational measures required by the GDPR in order to safeguard the rights and freedoms of individuals; and

f) processed in a manner that ensures appropriate security of the personal data, including protection against unauthorised or unlawful processing and against accidental loss, destruction or damage, using appropriate technical or organisational measures.” 

— GDPR, Article 5 


Additionally, Article 5 (2) states:

“the controller shall be responsible for, and be able to demonstrate, compliance with the principles.”

This last piece, known as the accountability principle, states that it is your responsibility to demonstrate compliance.  To do so, you must:

  • Demonstrate relevant policies.
    • Staff Training, Internal Audits, etc.
  • Maintain documentation on processing activities
  • Implement policies that support the protection of data
    • Data minimisation
      • A policy of collecting only the data needed for processing, and removing any excess data already held
    • Pseudonymisation
      • A process that renders data neither anonymous nor directly identifying
      • Achieved by separating data from direct identifiers, making linkage to an identity impossible without additional data that is stored separately.
    • Transparency
      • Demonstration that personal data is processed in a transparent manner in relation to the data subject
      • This obligation begins at data collection, and applies throughout the life cycle of processing that data
    • Allow security features to evolve going forward.
      • Security cannot be static in a constantly evolving threat environment.
      • Policies must be flexible enough to protect not just from today’s and yesterday’s threats, but from tomorrow’s.
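Of the policies above, pseudonymisation is the most mechanical to demonstrate. A minimal sketch: replace the identifying column of a CSV with a keyed HMAC token, keeping the key in a separate store so the tokens cannot be linked back to an identity without it (file names and layout are illustrative assumptions, not a GDPR-endorsed recipe):

```shell
# Sample data, and a secret key that would live in a separate, secured store:
printf 'alice@example.com,1200\nbob@example.com,800\n' > users.csv
printf 'replace-with-a-random-secret' > pseudo.key

# Replace each email with an HMAC-SHA256 token derived from it:
KEY=$(cat pseudo.key)
while IFS=, read -r email rest; do
  token=$(printf '%s' "$email" | openssl dgst -sha256 -hmac "$KEY" -r | cut -d' ' -f1)
  printf '%s,%s\n' "$token" "$rest"
done < users.csv > users_pseudonymised.csv
```

With the key held elsewhere, the pseudonymised file can be processed or shared without exposing identities; re-identification requires both the file and the key.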

The best laid plans…

Despite adherence to these new policies and the implementation of tight security practices, there is no guarantee that the data you are responsible for keeping safe will be absolutely secure. Data breaches are more or less inevitable. Being aware of this, the GDPR has provisions regarding the reporting of data breaches should (read: when) they happen.

Not sure how to navigate these waters with your current infrastructure? We can help.

A data breach is a broader term than one may think.  Typically, the deliberate or accidental release of data to an outside party (say, a hacker) would be what one would consider a breach — and you would be right, it is a breach — but there is much more that can be considered a breach.

All of the following examples constitute a data breach:

  • Access by an unauthorized third party
  • Loss or theft of storage devices containing personal data
  • Sending personal data to an incorrect recipient, whether intended or not
  • Alteration of personal data without prior authorization
  • Loss of availability, or corruption of personal data

Data breaches must be reported to the relevant supervisory authority within 72 hours of first detection. Should the breach be likely to result in risk to an individual, that individual must also be notified without delay. All breaches, reported or not, must be documented.

Bit off more than you can chew?

This may seem like a lot to take in, and it should be.  The GDPR was designed to expand the privacy rights of all EU citizens, as well as replace the existing regulations of all member states with one, uniform set of regulations.

The good news is, as a U.S. company, you don’t have to take every step towards compliance alone.

The U.S. government, working with the EU, developed a framework to provide adequate protections for the transfer of EU personal data to the United States. This framework, called Privacy Shield, was adopted by the EU in 2016 and has passed its first annual review.

In order to participate in the Privacy Shield program, U.S. companies must:

  • Self-certify compliance with the program
  • Commit to process data only in accordance with the guidelines of Privacy Shield
  • Be subject to the enforcement authority of either:
    • The U.S. Federal Trade Commission
    • The U.S. Department of Transportation

To learn more about Privacy Shield, visit www.privacyshield.gov

How I learned to stop worrying and love the GDPR

Getting compliant with the GDPR may seem like a huge P.I.T.A., but there are real benefits to following this path that extend beyond retaining EU contracts and avoiding hefty fines.  Data privacy is a huge issue world-wide, and being compliant with one of the strictest sets of regulations will help appease clients and customers from all corners of the globe. Even if you don’t have any interaction with EU citizens or organizations, becoming GDPR compliant may still be a great idea.

Compliance forces you to evaluate your systems and processes, ensuring that they are secure and robust enough to survive in the ever-changing landscape in which data privacy resides.  This transforms compliance from a tedious duty to a strong selling point.

Click Here to find out how GigeNET can help you!

Securing Memcached Services

Over the past few weeks, a new DDoS attack vector abusing memcached has become prevalent. Memcached is an object caching system originally created in 2003 to speed up dynamic websites at LiveJournal. It does this by caching data in RAM instead of reading it from a hard drive, thus reducing costly disk operations.

Deeper analysis of the security issues:

Memcached was designed to give the fastest possible cache access, which is why it should never be left open on a public network interface. The recent attacks utilizing Memcached take advantage of the UDP protocol and an attack method known as UDP reflection.

An attacker sends a UDP request to a server with a spoofed source address, causing the server to reply to the spoofed address instead of the original sender. Worse, because memcached was designed to sit locally on a server, it was never built with any form of authentication, so attackers can also connect and insert large values into the cache, amplifying the magnitude of the attack “reflected” toward the victim.

The initial release of Memcached was in May of 2003. Since then, the uses of it have expanded greatly, but the original technology has remained the same. While its uses have been expanded, its security features have not.

Below is a sample packet we captured from a server participating in one of these reflection attacks. This is the layer 3 information of the packet; the source IP is spoofed to point to a victim’s server:


This is the layer 4 information; Memcached listens on port 11211:


Because of this lack of authentication, attackers can not only use a server as a reflector but can also extract highly sensitive data from within the cache. All of the data within the cache has a TTL (Time To Live) value before it is removed, but it still isn’t difficult to pull information from.

Below is an example of how easy it is for an attacker to alter the cache on an unsecured server. We simply connected on port 11211 over telnet and were immediately able to make changes to the cache:
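A session of this kind looks roughly like the following hypothetical reconstruction of the memcached text protocol (the IP, key name, and value are placeholders):

```
$ telnet 203.0.113.10 11211
Trying 203.0.113.10...
Connected to 203.0.113.10.
set injected 0 900 5
hello
STORED
get injected
VALUE injected 0 5
hello
END
```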


Solution Overview

In order to decide how to best secure Memcached on your server, you must first determine how your services use it. Memcached was originally designed to run locally on the same machine as the web server.

A: If you don’t require remote access, it is best to prevent Memcached from listening on any external network interface at all.

B: If you require remote access, it is recommended to whitelist the source IPs of what needs to access it. This way you control exactly what machines can make changes and read from it.

Solution Instructions:

In the case that remote access is not required, it is advised to ensure Memcached can only speak to localhost on startup.

Ubuntu based servers:

sudo nano /etc/memcached.conf

Ensure the following two lines are present in your configuration:
-l 127.0.0.1


This will bind Memcached to your local loopback interface preventing access from anything remote.

-U 0

This will disable UDP for Memcached thus preventing it from being used as a reflector.

Then restart the service to apply the settings:

sudo service memcached restart

CentOS based servers:

nano /etc/sysconfig/memcached

Add the following to the OPTIONS line:

OPTIONS="-l 127.0.0.1 -U 0"

Restart the service to apply the settings:

service memcached restart

If Memcached needs to be accessed remotely, whitelisting the IPs that are allowed to connect will best secure your server.

Using iptables:

sudo iptables -A INPUT -i lo -j ACCEPT

sudo iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

sudo iptables -A INPUT -p tcp -s IP_OF_REMOTE_SERVER/32 --dport 11211 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT

sudo iptables -P INPUT DROP

Defining a /32 in the above commands specifies a single server that will be allowed access. If multiple servers in a range require access, the CIDR notation of the range can be input instead:

sudo iptables -A INPUT -p tcp -s IP_RANGE/XX --dport 11211 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT
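Note that rules added with iptables this way live only in the running kernel and are lost on reboot. One common way to persist them (a sketch assuming a Debian/Ubuntu system with the iptables-persistent package, which restores /etc/iptables/rules.v4 at boot):

```
sudo apt-get install iptables-persistent
sudo sh -c 'iptables-save > /etc/iptables/rules.v4'
```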

Using CSF:

nano /etc/csf/csf.allow

Add the following line to whitelist IPs:
tcp|in|d=11211|s=IP_OF_REMOTE_SERVER


You can also specify a range using CIDR:
tcp|in|d=11211|s=IP_RANGE/XX


tcp = the protocol that will be used to access Memcached
in = the direction of traffic
d = the destination port number
s = the source IP address or IP range

Save the file and then restart the service:

csf -r

After whitelisting the IPs allowed to access Memcached, we must rebind the service to use the interface we wish for it to communicate on.

On Ubuntu based servers:

sudo nano /etc/memcached.conf

Change the IP on this line to represent the IP of the interface on your server:

-l x.x.x.x

Then restart the service to apply the settings:

sudo service memcached restart

On CentOS based servers:

nano /etc/sysconfig/memcached

Change the IP following the -l flag to that of your server’s interface:

OPTIONS="-l x.x.x.x -U 0"

Restart the service to apply the settings:

service memcached restart


The best way to secure your server from these vulnerabilities is to prevent Memcached from talking on anything other than localhost. If the service must be accessed remotely, be sure to adequately secure it using your server’s firewall. Securing your server will not only prevent it from being used in malicious DDoS attacks, but will also ensure that confidential data isn’t compromised. Taking the above actions will help the community as a whole and prevent unwanted bandwidth overages.

Linux Encryption and Backup Tools

If the data you store on your server or other service is important to you, you’d likely prefer that it not end up in the hands of others. If so, you should use the power of cryptography. There are many options to choose from whether you’re running Windows, Linux or BSD, but we’ll be focusing on my favorite Linux-based tools for now. You can choose from encrypting parts of your filesystem to encrypting an entire block device. Either way, it’s relatively easy to do if you’re even a little familiar with Linux and can follow tutorials. It doesn’t require you to be a mathematician or cryptography expert.
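Before looking at dedicated tools, note that even a single backup file can be encrypted from the command line. A minimal sketch using OpenSSL’s enc subcommand (not one of the tools covered below; the file names and passphrase are placeholders, and in practice a passphrase should never be hard-coded in a script):

```shell
# Encrypt a dump with AES-256-CBC, deriving the key via PBKDF2:
printf 'dump of important data\n' > site.sql
openssl enc -aes-256-cbc -pbkdf2 -salt \
    -in site.sql -out site.sql.enc -pass pass:example-passphrase

# Decrypt to verify the round trip:
openssl enc -aes-256-cbc -pbkdf2 -d \
    -in site.sql.enc -out roundtrip.sql -pass pass:example-passphrase
```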

As a sysadmin, here are my favorite Linux encryption and backup tools:


One of my personal favorites for filesystem encryption is EncFS. It allows you to easily set up encrypted directories, which is incredibly useful for storing off-site backups on systems that you don’t necessarily trust.

For example, you could have plain-text website backups dumped to /backups and then setup EncFS to encrypt that data to /encrypted-backups. You’d then be able to use tools like rsync or rclone to move the data somewhere else, even onto a system that you don’t trust.

Keep in mind, if you don’t have a complex, strong password, your encrypted data is likely unsafe. In the event that you lose data on your local system, you could rsync/rclone the data back from /encrypted-backups and mount it again via FUSE, as long as you have the original password you encrypted the data with.
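Using the example paths above, the flow can be sketched as follows (a hypothetical sequence assuming the encfs package is installed and an off-site host you control; EncFS prompts for a password on first use):

```
# Present an on-the-fly encrypted view of the existing plain-text backups:
encfs --reverse /backups /encrypted-backups

# Ship only the encrypted view off-site:
rsync -a /encrypted-backups/ user@offsite:/backups/
```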


If you’re familiar with GPG, Duplicity is a great, feature-rich tool for encrypted, compressed local or remote backups. It’s meant to be a tool for backing up specified directories in increments to save space, but it can also be used to perform full backups each time.

With Duplicity you’ll need to create a GPG key and protect it with a strong password. You can then use that key with Duplicity to encrypt and sign the backups. Just like with EncFS, you can use rsync, rclone or another tool to transfer the encrypted backups off-site. The best implementation of Duplicity that I’ve found is backupninja, which allows you to create multiple backup actions with an easy-to-use configuration.
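A minimal Duplicity invocation might look like this (the GPG key ID, paths, and host are placeholders):

```
# Incremental, GPG-encrypted backup of a web root to a remote host:
duplicity --encrypt-key ABCD1234 /var/www sftp://user@backuphost//backups/www

# Restore the most recent backup to a scratch directory:
duplicity restore sftp://user@backuphost//backups/www /tmp/www-restore
```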


Another option is to encrypt your entire block device with dm-crypt + LUKS. With this approach, all of the data on the block device is encrypted, and even someone with physical access to the drive cannot decipher it.

There are a few exceptions to this. For instance, if the attacker has your root password, or can read from memory via a cold boot attack while the system is powered on, then it would be possible to simply log in or grab the encryption keys from memory. What’s neat about dm-crypt + LUKS is that you can also set it up remotely on your server, provided you have access to IPMI and can boot a recovery image.

Once set up, you can have the server prompt you via SSH for a password when it boots, instead of having to type it in locally. Keep in mind that LUKS only fully protects you from unauthorized local access while the system is powered off; with a strong password, it is unlikely your data can be deciphered. If someone compromises your system while the encrypted volume is mounted, however, you are in trouble.
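For reference, a basic dm-crypt + LUKS setup on a spare partition can be sketched as follows (the device name is a placeholder, and luksFormat destroys any existing data on it):

```
cryptsetup luksFormat /dev/sdX1          # initialize LUKS and set the passphrase
cryptsetup open /dev/sdX1 cryptdata      # unlock; creates /dev/mapper/cryptdata
mkfs.ext4 /dev/mapper/cryptdata          # put a filesystem inside
mount /dev/mapper/cryptdata /mnt/data    # from here it behaves like any volume
```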

Remember that encryption doesn’t protect you from unsafe security practices on your part!

Be sure to also read my blog, How to secure your chats with Matrix.

What is the value of a server? In this world of virtual machines and dedicated servers, our customers are becoming more and more removed from the physical components that comprise a server. Everything is easily replaceable — everything except the data contained within the servers.

Countless work hours have gone into making each server unique, with custom set-ups, modified WordPress templates, blog posts going back years, etc.

This is where the value of a server lies, in the data. What is this data worth to you? How do you even begin to measure that?

The data is stored on the server’s hard drives.

And guess what? This is, by far, the most common part of a server to experience failure. So it’s absolutely necessary to create a backup strategy.

So how do you create the best backup strategy? Where do you begin?

There are two ways to guard against the effects of data loss from drive failure: prevention and recovery.

On the prevention vector, we focus on RAID: Redundant Array of Independent Disks.

Various RAID configurations can be implemented to allow your data to withstand the loss of one, two, or more drives. There are several different configurations that can be tailored to your specific needs, essentially finding the sweet spot between performance, resilience, and cost that is right for your environment. RAID uses two or more drives to store your data in ways that can not only survive drive loss but can also improve performance.
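On Linux, a software RAID of this kind can be sketched with mdadm (device names are placeholders for four identical drives):

```
# RAID 10: striped mirrors -- survives the loss of one drive per mirror pair:
mdadm --create /dev/md0 --level=10 --raid-devices=4 \
    /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
```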

However, even RAID can only protect against so many simultaneous failures. While it certainly helps prevent data loss in most cases, it doesn’t reduce the risk to statistical nothingness. Servers are susceptible to multi-drive failure, which is more common than one would expect.

When setting up a server with multiple drives, often these drives are all from the same batch, if they are installed new. If one drive has a flaw, it is likely that this flaw is shared by the other drives in that batch, making the loss of 2 or more drives in a short time more likely than one would expect. In addition to often coming from the same batch, drives in a server are exposed to the same environmental factors, as well.

Furthermore, in addition to the fear of multiple drive failures, files and filesystems can become corrupt — either accidentally or maliciously. In this case, no RAID level will help you out of this jam.

This is where our second vector comes into play – recovery.

Backing up your data to an external system and keeping multiple recovery points is one of the best ways to mitigate the effect of unpreventable data loss. No matter how robust your data storage plan is, it can fail.

Our R1Soft backup service is set up to take daily incremental backups of your data.

Your data got corrupted on Tuesday? No Problem! Just restore the data from Monday.

You lost the “wrong two” drives in your RAID 10 array? Easy! We’ll simply replace the bad drives, and any others that don’t test 100% stable, then perform a bare metal recovery of your OS and data.

Every client with the service has full control and visibility of their backups with the ability to review, edit and download their backups using a personalized interface.

So, should you choose RAID or R1Soft backups?

While RAID alone is a great option that can prevent the downtime associated with recovery on single or even multiple-drive failures (as long as they are the “right” drives that fail, but that is another topic), it is not failproof.

On the other hand, backups alone can require lengthy downtime for recovery to occur, and the backup is only as current as the last recovery point.

This is why, for protecting your cannot-lose data, we recommend the dual-vectored approach of prevention and recovery. Significantly reduce the need to recover from backups by using a robust RAID, but have those backups on hand for when you do need them.

If you would like to add RAID or R1Soft backups (or better yet, both) to your current setup, chat with our specialists.


Three banks plundered with DDoS distraction.

Criminals have recently hijacked the wire payment switch at several US banks to steal millions from accounts, a security analyst says.

Gartner vice president Avivah Litan said at least three banks were struck in the past few months using “low-powered” distributed denial-of-service (DDoS) attacks meant to divert the attention and resources of banks away from fraudulent wire transfers simultaneously occurring.

The losses “added up to millions [lost] across the three banks”, she said.

“It was a stealth, low-powered DDoS attack, meaning it wasn’t something that knocked their website down for hours.”

The attack against the wire payment switch — a system that manages and executes wire transfers at banks — could have resulted in even greater losses, Litan said.

It differed from traditional attacks which typically took aim at customer computers to steal banking credentials such as login information and card numbers.

While it was unclear how the attackers gained access to the wire payment switch, fraudsters could have targeted bank staff with phishing emails to plant malware on bank computers.

RSA researcher Limor Kessem said she had not seen the wire payment switch attacks in the wild, but the company had received reports of the attacks from customers.

“The service portal is down, the bank is losing money and reliability, and the security team is juggling the priorities of what to fix first,” she said.

“That’s when the switch attack – which is very rare because those systems are not easily compromised [and require] high-privilege level in a more advanced persistent threat style case – takes place.”

Litan declined to name the victim banks but said that the attacks did not appear linked to recent hacktivist-launched DDoS attacks against US banks since they were entirely financially driven.

Researchers at Dell SecureWorks in April detailed how DDoS attacks were used as a cover for fraudulent attacks against banks.

The researchers said fraudsters were using Dirt Jumper, a $200 crimeware kit that launches DDoS attacks, to draw bank employees’ attention away from fraudulent wire and ACH transactions ranging from $180,000 to $2.1 million in attempted transfers.

Last September, the FBI, Financial Services Information Sharing and Analysis Center, and the Internet Crime Complaint Center, issued a joint alert about the Dirt Jumper crimeware kit being used to prevent bank staff from identifying fraudulent transactions.

In the alert, the organisations said criminals used phishing emails to lure bank employees into installing remote access trojans and keystroke loggers that stole their credentials.

In some incidents, attackers who gained the credentials of multiple employees were able to obtain privileged access rights and “handle all aspects of a wire transaction, including the approval,” the alert said – a feat that sounds daringly similar to recent attacks on the wire hub at banks.

“In at least one instance, actors browsed through multiple accounts, apparently selecting the accounts with the largest balance.”

Litan suggested that financial institutions “slow down” their money transfer system when experiencing DDoS attacks in order to minimise the impact of such threats.

This article originally appeared at scmagazineus.com

DDoS Protected Hosting

Izz ad-Din al-Qassam Cyber Fighters, the group behind three phases of distributed-denial-of-service attacks against banks since last September, now says more attacks against U.S. banks are on the way. The group made its announcement in a July 23 posting on the open forum Pastebin.

al-Qassam Cyber Fighters hasn’t attacked since the first week of May, when it announced it was halting attacks for the week, in honor of Anonymous’ Operation USA. But the group has remained quiet since then, apparently bringing to a close its third phase of attacks, which began March 5 (see New Wave of DDoS Attacks Launched).

Experts who’ve been following the group’s DDoS attacks say this fourth phase was expected and likely will follow the pattern of earlier phases.

“The QCF always start out a phase of Operation Ababil with something new,” says Mike Smith of online security provider Akamai Technologies. “It might be new targets, a larger botnet, new techniques, etc. This is how they try to evade the protections that the targets have deployed. They’ve also demonstrated a bit of showmanship in the past with announcing the attack before they resumed hostilities, and this could be another tactic to generate more press buzz.”

‘A Bit Different’
In its most recent post, al-Qassam Cyber Fighters says: “Planning the new phase will be a bit different and you’ll feel this in the coming days.”

John LaCour, CEO of cyber-intelligence firm PhishLabs, says the group’s plans for different attacks are in response to banking institutions’ heightened DDoS-mitigation strategies. “Major banks had improved their defenses prior to the quiet period,” he says. “If new types of attacks appear, then banks will need to be prepared to respond quickly to prevent significant impact to their online services.”

Based on the impact of the first three phases of DDoS attacks, LaCour notes: “Today’s announcement should put financial organizations on high alert for future attacks seeking to disrupt their online operations.”

In its post, al-Qassam also says, “The break’s over and it’s now time to pay off. After a chance given to banks to rest awhile, now the Cyber Fighters of Izz ad-Din al-Qassam will once again take hold of their destiny.”

Brobot’s Growth
So far, the only activity DDoS experts have noted is growth and maintenance of the botnet, known as Brobot, used in the previous three phases. No attack activity against banking institutions was apparent as of the afternoon of July 23.

Although experts did not directly link PDF download attacks waged in late June against two mid-tier banks to al-Qassam, some speculated those may have been a test for the next phase of attacks (see Another Version of DDoS Hits Banks).

LaCour told Information Security Media Group in early July that new code files linked to Brobot had been identified on compromised web servers the hacktivists had taken over. “The new code we see on these web servers is one of the strong indicators that the botnet is being rebuilt,” he pointed out.

The code behind the malware had changed and included configurations not seen in the first three phases, LaCour said.

Multiple Phases
Phase three of the attacks, which ran for eight weeks, lasted longer than the earlier phases. The first campaign, which began Sept. 18, lasted six weeks; the second, which kicked off Dec. 10, lasted seven.

Experts won’t speculate about how long this fourth phase might last, although al-Qassam does include a complex formula in its July 23 post to hint at how long the attacks could drag on.

But financial fraud expert Avivah Litan, an analyst with the consultancy Gartner Inc., says the timing of this latest announcement is not surprising, given that she believes there’s little doubt these attacks are backed by Iran.

Numericable is a cable TV company operating in France, Belgium and Luxembourg. Rex Mundi claimed to have stolen customer data and demanded €22,000 for its return. Numericable declined, and denied that the hackers had the data.

Rex Mundi (“king of the world”) is a hacker group that makes a habit of hacking for extortion. Last week, Numericable Belgium’s IT manager received an email saying that the hackers had accessed a database of 6,000 new customers, demanding a €22,000 ransom for the data.

Numericable’s response was threefold. It refused to pay the ransom, denied that the hackers could obtain the customer data, and referred the matter to the police. “Hackers managed to obtain the information requests made through our website, but failed to obtain our customers’ data, because the two are kept separate and that data was not accessible via the site” (translated), Martial Foucart, CIO at Numericable, told RTL.

Rex Mundi responded first on Twitter. “So, Numericable claims that we didn’t steal any data… Our dump tomorrow will be rather humiliating for them then.”

According to Softpedia, Rex Mundi followed up by posting the database to dpaste.de (it has since been ‘removed’). An accompanying note apparently laid the blame on Numericable. “In life, when someone makes a mistake, especially a mistake that could potentially have grave consequences for other people, you would expect that person to man up and own up to it. But not Numericable.”

In Rex Mundi’s logic, Numericable made the mistake (in not securing the data) and then refused to ‘man up’ – and pay the price.

Direct extortion is a growing motivation for cybercriminals. Ransomware, or the ‘police trojan,’ is used to extort money directly from users. The threat of a DDoS attack is used to extort money from both large and small companies. And the threat of data leaks, such as in this case, is simple blackmail. On Tuesday this week, Rex Mundi separately announced that it had breached a Belgian recruitment agency.

However, “More often than not these blackmail threats go unreported,” commented Ashley Stephenson, CEO of Corero. We only tend to hear about them, he added, “when a threat is received and a decision taken to ignore it.”

Meanwhile, Numericable is facing a separate concern: the European Commission has launched an investigation into whether it received unfair aid from France in receiving the French cable infrastructure. “The Commission has doubts that such aid could be found compatible with EU rules,” said an EC statement.

In September 2012, six major American banks came under attack by hackers, and customers could not gain access to their accounts or pay bills online. The attacks did not affect customer bank accounts, but the rash of so-called distributed denial-of-service, or DDoS, attacks against major financial institutions has forced them to step up their game in combating such threats.

DDoS attacks are becoming more frequent and sophisticated, according to the 2013 annual report of the Financial Stability Oversight Council. The council and cybersecurity experts have outlined a number of ways the financial service industry can mitigate the risk. They also say consumers need to be better educated about cybersecurity.

Danny Miller, national practice leader for cybersecurity and privacy at Grant Thornton LLP, worries that at some point, cyberattackers will begin to disrupt the ability of targeted banks to conduct business.

“They don’t really have to shut down a bank’s website for a long period of time,” Miller says. “What they could do — and what it appears their strategy is — is to do it using guerilla tactics. In other words, they’re doing small, concentrated attacks that make it look to the rest of the world that the banks are not able to control their infrastructure and protect themselves.”

Sneaky hackers

Miller says hackers have developed sneakier methods for doing their worst damage. For example, they’ll use insiders to steal information from one department at a bank while security experts are distracted by a cyberattack on another department.

Individual consumers and investors add to the problem with risky behavior such as accessing their personal banking information via unsecured Wi-Fi connections and inadvertently leaving clues about their passwords — think birthdays and pet names — on social media sites, says Jerry Irvine, a member of the National Cyber Security Task Force.

A joint effort of the Department of Homeland Security and the U.S. Chamber of Commerce, the task force involves members of the public and private sectors sharing information about security risks and prevention strategies, says Irvine, who is chief information officer of Prescient Solutions, an information technology outsourcing firm in the Chicago area.

The Financial Stability Oversight Council report encourages these types of public-private partnerships, along with better cooperation with the banking sector and 15 other industries to help decrease cyberthreats.

Cybersecurity legislation needed

In his May 2013 testimony before the Senate Committee on Banking, Housing and Urban Affairs, Treasury Secretary Jacob Lew called for a bipartisan effort to pass comprehensive cybersecurity legislation that would enhance the sharing of information among banks.

Todd McClelland, an attorney with Alston and Bird LLP in Atlanta, advises financial institutions, retailers, payment processors and other clients on information security issues. His firm represents several clients who have a stake in proposed cybersecurity legislation.

“It seems that there’s always some bill pending in front of Congress legislating additional cybersecurity standards, additional risk assessments or the like,” McClelland says.

A February 2013 presidential executive order tasked the National Institute of Standards and Technology — an agency of the U.S. Department of Commerce — with producing a new framework to improve cybersecurity for the nation’s critical infrastructure. One of the agency’s goals is to standardize the measures financial institutions use to control cybersecurity risks. The NIST aims to have the final framework for guidelines ready to roll out by February 2014.

Miller says each bank needs to first identify its most important information and then focus on securing that information from both external and internal threats. As a consultant, Miller advises banks to securely delete any customer information they don’t need to store, while tailoring their security policies to fit each category of data they decide to keep.

As for consumers, Miller says, “If you don’t need to share information … don’t.”

Password tips

Make sure you understand how the financial institution is using your information, who it is sharing it with and how long it plans to keep it in its database, Miller says. And if you’re able to opt out of having your information stored, you should.

“The longer they keep it, the more likely it is going to be stolen and exposed,” Miller says.

Irvine adds these tips:

  • Use a complex password of 10 or more characters. It should mix numbers with uppercase and lowercase letters, and include special characters.
  • Be wise about selecting and answering security questions. If a site asks for your mother’s maiden name, which a hacker might easily discover by checking out your Facebook page, use a different answer, such as the name of someone you haven’t seen since elementary school. You can lie on your security questions; just remember your answers.
  • Don’t use the same password for all of the sites you need to access.

“If you use the same password on Facebook and LinkedIn and other social networking sites and then you use it on your banking site, you might as well just be taking the money out and giving it to the hackers yourself,” Irvine says.
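Irvine’s first tip can be expressed as a simple check. The sketch below is illustrative only; the function name and exact rules are our own, not from any particular banking site:

```python
import re

def is_strong_password(password: str) -> bool:
    """Check a password against the advice above: 10+ characters,
    mixed case, digits, and at least one special character."""
    if len(password) < 10:
        return False
    checks = [
        re.search(r"[a-z]", password),         # lowercase letter
        re.search(r"[A-Z]", password),         # uppercase letter
        re.search(r"[0-9]", password),         # digit
        re.search(r"[^a-zA-Z0-9]", password),  # special character
    ]
    return all(checks)

print(is_strong_password("Tr0ub4dor&3x"))  # meets all four rules -> True
print(is_strong_password("password123"))   # no uppercase or special -> False
```

A real site would also reject passwords found in breach dictionaries, which a length-and-character-class check alone cannot do.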

Copyright 2013, Bankrate Inc.

Zimbabweans were knocked offline and saw data wiped in a slew of cyber attacks during last week’s elections, TechWeekEurope learns.

Cyber Repression: In the weeks leading up to and following Zimbabwe’s election last Thursday, Zimbabweans were hit by significant Internet-based attacks. In some, they may simply have been victims of collateral damage. In others, they were targeted directly.

Two massive distributed denial of service (DDoS) attacks against hosting providers took place this weekend. They took a slew of sites offline, a number of which were reporting heavily on the hugely controversial Zimbabwean election, TechWeekEurope has learned.

One of the hosting providers, GreenNet, which describes itself as an ethical hoster and ISP, with Privacy International and Fair Trade Africa amongst its customers, suspects it may have been hit because of goings on in Zimbabwe. One of its clients is the Zimbabwe Human Rights Forum, which told TechWeekEurope it believes it may have been the subject of a separate hack earlier in the week.

Human rights group hit

The coordinator of the international office of the Zimbabwe Human Rights Forum said he was alerted to the DDoS by an employee of the Congressional Research Service in Washington DC, who had been looking at the ZHRF’s election “situation-room”, a live feed updating users on the political situation in the African nation.

At 6pm Wednesday, just before the DDoS started, the coordinator noticed all the information on that feed had mysteriously been wiped. “I lost information I had gathered for eight hours,” he said. “All of the information I had recorded on 30 July in the evening through to lunchtime the next day had been wiped.

“Even our website designer and engineer couldn’t really explain what happened. Then, whilst we were still talking about the wiping, we realised the site wasn’t working.

“It is curious because we have never had this problem before in the past 10 years.”

He claimed he was putting out the most comprehensive feed on the election, drawing from a variety of sources for users, and that’s why he could have been a target.

Zimbabweans have set up numerous sites to draw attention to fears of the rigging, violent repression and threats that blighted the 2008 election.

One, electionride.com, has been taken offline. On its Facebook page on election day, it claimed to have been compromised.

Last month, Kubatana.net, which has been disseminating information via various electronic means, said it had been blocked from sending bulk text messages. Its mobile provider Econet Wireless had been told by the government’s telecoms regulator to enforce the block, it was claimed.

“Kubatana.net views the interference in our work as obstructive, repressive and hostile. It is our opinion that as we approach the July poll the Zimbabwean authorities are increasing their control of the media,” the organisation said on its website on 25 July.

This election has proven just as controversial as 2008’s, with the two main parties at loggerheads over the result, which went strongly in favour of President Robert Mugabe. Opposition leader Morgan Tsvangirai, of the Movement for Democratic Change (MDC) party, has claimed the vote was rigged, whilst the official figures indicate Mugabe won with a significant majority.

MDC members have now claimed they were the victims of physical attacks by Mugabe supporters. Zanu-PF, Mugabe’s party, has denied the claims.

GreenNet taken out

GreenNet is only just recovering today, with some customer websites still down, having reported the strike on Thursday morning, the day after Zimbabweans headed to the polls. It appeared to be a powerful attack – TechWeek understands it was at the 100Gbps level – aimed at GreenNet’s co-location data centre provider. Its upstream provider Level 3 subsequently did not let GreenNet route through its infrastructure. Level 3 was not available for comment.

Cedric Knight, technical consultant at GreenNet, said the company suspected the massive attack, which knocked all 3,000 of its customers offline and also disrupted email, could have been launched because of the Zimbabwean organisations running on its infrastructure.

However, it could not be certain, saying only that it was one GreenNet customer that was targeted. Many of its customers from environmental, gender equality and human rights groups have powerful enemies.

It believes a government entity or a private organisation was responsible. A tweet from GreenNet earlier this week read: “The nature and magnitude of this attack does suggest corporate or governmental sponsors, likely a very unsavoury one.”

The DDoS that hit GreenNet was not a crude attack using a botnet to fire traffic straight at a target port, but a DNS reflection attack using UDP packets, which can generate considerable power. In DNS reflection, the attacker spoofs the victim’s IP address as the source of small DNS queries sent to DNS servers, which then return much larger responses to the victim.
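The power of a reflection attack comes from the size asymmetry between query and response. A minimal sketch of that arithmetic, with illustrative (not measured) byte counts:

```python
def amplification_factor(query_bytes: int, response_bytes: int) -> float:
    """Ratio of response size to query size: how much traffic is
    'reflected' at the victim per byte of spoofed query sent."""
    return response_bytes / query_bytes

# A small ~64-byte DNS query can trigger a multi-kilobyte response;
# the figures below are illustrative, not measured values.
factor = amplification_factor(query_bytes=64, response_bytes=3200)
print(f"Amplification: {factor:.0f}x")  # Amplification: 50x
```

This asymmetry is why a modest botnet sending spoofed queries can generate attack traffic at the 100Gbps level described above.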

HostGator, a huge hosting provider in the US, also suffered a big DDoS hit over the weekend. That took out popular Zimbabwean news service Nehanda Radio, amongst many others. Lance Guma, managing editor of the organisation’s website, said he was not sure what exactly had happened. But he has become used to attempted cyber attacks.

“Every time you have a big story, it depends whether the government want people to read it or not,” he said, admitting it was sometimes hard to tell if a story had just been hugely popular, causing the server to crash, or if it was a genuine attack.

Nehanda Radio also receives plenty of threats via email: “We received a lot of those this last week. Obviously we never open any,” Guma added.

“We’ve been receiving a lot of election reports and then there’s a link you’re meant to click, but we never click anything because you can tell the subject matter is dodgy.

“They try all that… we normally just open emails from trusted sources.”

Guma said Mugabe’s government is fairly useless when it comes to anything to do with technology, but China is believed to be assisting the nation’s cyber police. “You can just outsource this stuff now,” he added.

This article is part of TechWeek’s Cyber Repression Series – check out the first article on attacks stemming from China on spiritual activists and military bodies and the second on IP tracking in Bahrain.

Turkish security researcher claims to have found flaw in system, which has been offline since Thursday as company ‘rebuilds and strengthens’ security around databases

Apple says its Developer portal has been hacked and that some information about its 275,000 registered third-party developers who use it may have been stolen.

The portal at developer.apple.com had been offline since Thursday without explanation, raising speculation among developers first that it had suffered a disastrous database crash, and then that it had been hacked.

A Turkish security researcher, Ibrahim Balic, claims that he was behind the “hack” but insisted that his intention was to demonstrate that Apple’s system was leaking user information. He posted a video on YouTube which appears to show that the site was vulnerable to an attack, adding: “I have reported all the bugs I found to the company and waited for approval.” A screenshot in the video showed a bug report filed on 19 July – the same day the site was taken down – saying: “Data leaks user information. I think you should fix it as soon as possible.”

The video appears to show developer names and IDs. However, a number of the emails belong to long-deprecated services, including Demon, Freeserve and Mindspring. The Guardian is trying to contact the alleged owners of the emails.

Balic told the Guardian: “My intention was not attacking. In total I found 13 bugs and reported [them] directly one by one to Apple straight away. Just after my reporting [the] dev center got closed. I have not heard anything from them, and they announced that they got attacked. My aim was to report bugs and collect the datas [sic] for the purpose of seeing how deep I can go with it.”

Apple said in an email to developers late on Sunday night that “an intruder attempted to secure personal information of our registered developers… [and] we have not been able to rule out the possibility that some developers’ names, mailing addresses and/or email addresses may have been accessed.”

It didn’t give any indication of who carried out the attack, or what their purpose might have been. Apple said it is “completely overhauling our developer systems, updating our server software, and rebuilding our entire database [of developer information].”

Some people reported that they had received password resets against their Apple ID – used by developers to access the portal – suggesting that the hacker or hackers had managed to copy some key details and were trying to exploit them.

If they managed to successfully break into a developer’s ID, they might be able to upload malicious apps to the App Store. Apple said however that the hack did not lead to access to developer code.

The breach is the first known against any of Apple’s web services. It has hundreds of millions of users of its iTunes and App Store e-commerce systems. Those systems do not appear to have been affected: Apple says that they are completely separate and remained safe.

Apple’s Developer portal lets developers download new versions of the Mac OS X and iOS 7 betas, set up new devices so they can run the beta software and access forums to discuss problems. A related service for developers using the same user emails and passwords, iTunes Connect, lets developers upload new versions of apps to the App Store. While developers could log into that service, they could not find or update apps and could not communicate with Apple.

But if the hack provided access to developer IDs which could then be exploited through phishing attacks, there would be a danger that apps could be compromised. Apps are uploaded to the App Store in a completed form – so hackers could not download “pieces” of an existing app – and undergo a review before being made publicly available.

High-profile companies are increasingly the target of skilful hackers. In April 2011, Sony abruptly shut down its PlayStation Network used by 77 million users and kept it offline for seven days so that it could carry out forensic security testing, after being hit by hackers – who have never been identified.

Hacking has also become a business risk for large companies and small ones alike. On Saturday, the Ubuntu forums were hacked and the passwords of thousands of users were stolen – although they were encrypted. On Sunday, the hacking collective Anonymous said that it had hacked the Nauruan government’s website.

On Sunday, the Apple Store, used to sell its physical products, was briefly unavailable – reinforcing suspicions that the company was carrying out a wide-ranging security check. The company has not commented on the reasons for the store going down.

Marco Arment, a high-profile app developer, noted on his blog before Apple confirmed the hack that “I don’t know anything about [Apple’s] infrastructure, but for a web service to be down this long with so little communication, most ‘maintenance’ or migration theories become very unlikely.”

He suggested that the problem could either be “severe data loss” in which restoring from backups has failed – but added that the downtime “is pretty long even for backup-restoring troubles” – or else “a security breach, followed by cleanup and increased defenses”.

Of the downtime, he said “the longer it goes, especially with no statements to the contrary, the more this [hacking hypothesis] becomes the most likely explanation.”

About Graeme Caldwell — Graeme works as an inbound marketer for InterWorx, a revolutionary web hosting control panel for hosts who need scalability and reliability. Follow InterWorx on Twitter at @interworx, Like them on Facebook and check out their blog, http://www.interworx.com/community.

An extremely hard-to-find backdoor that exposes web users to malware infection has been discovered in the wild by security researchers. The Linux/Cdorked.A backdoor uses a number of advanced methods to avoid detection by the techniques normally employed by system administrators, and is estimated to be present on hundreds of machines.

The most recent of a series of serious Apache exploits discovered over the last few weeks, Linux/Cdorked.A is particularly pernicious because, in addition to providing a platform from which the Blackhole toolkit can be used against target machines, it makes almost no easily detectable changes to infected systems. The usual remediation techniques employed by system administrators are likely to simply destroy evidence of infection.

The backdoor stores none of its configuration files on disk, instead using shared memory to store its instructions and configuration. The only evidence on the filesystems of infected machines is a modified HTTP daemon binary. The backdoor receives its instructions via obfuscated URLs that Apache does not log and is capable of receiving 70 different instructions, indicating a comprehensive and fine-grained control capability.

In addition to control via URL, the modified server binary also contains a reverse connect backdoor that can be triggered by a URL containing hostname and port data to connect to a shell session that the attacker controls.

Linux/Cdorked.A redirects clients to machines that contain malware payloads, but makes itself even more difficult to detect by avoiding redirecting clients that meet conditions indicating the connecting machine may be used by a site’s administrators. For example, it won’t redirect if the URL or hostname contains strings like “support” or “adm”. An administrator visiting an infected site is likely to see no evidence of the site having been exploited. Additionally, the backdoor sets a cookie on clients it redirects and won’t redirect the same client again, making it even more difficult to determine the source of infection.
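The evasion behaviour described above can be sketched as follows. This is an illustrative reconstruction based on the researchers’ description, not the actual Linux/Cdorked.A code; the cookie name is hypothetical:

```python
def should_redirect(url: str, hostname: str, cookies: dict) -> bool:
    """Illustrative sketch of the evasion logic described above,
    NOT the actual backdoor code. The backdoor reportedly skips
    likely-administrator requests and clients it has already
    redirected once."""
    admin_markers = ("adm", "support")  # strings reported by researchers
    haystack = (url + hostname).lower()
    if any(marker in haystack for marker in admin_markers):
        return False                    # likely an administrator: stay hidden
    if "redirected" in cookies:         # hypothetical cookie name
        return False                    # never redirect the same client twice
    return True

print(should_redirect("/index.php", "www.example.com", {}))               # True
print(should_redirect("/support/faq", "www.example.com", {}))             # False
print(should_redirect("/index.php", "example.com", {"redirected": "1"}))  # False
```

The same logic explains why an administrator browsing an infected site sees nothing unusual: their own requests match the skip conditions.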

If an administrator suspects that their server has been infected they can use a tool created by ESET, whose researchers made the initial discovery, to dump the shared memory used by the backdoor for analysis.

It’s not clear how servers become infected initially, but all system administrators should employ industry best practices to ensure that their sites are not easily exploited, including having the most recent version of the Apache server installed and verifying that users with SSH access to servers are using secure passwords, as there is some indication that brute-force attacks on SSH servers may be responsible.
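Because the only on-disk evidence is a modified HTTP daemon binary, one basic check an administrator can run is to compare the installed binary’s hash against a known-good reference. A minimal sketch, assuming you already have a trusted reference hash from a clean install or your package manager:

```python
import hashlib

def sha256_of(path: str) -> str:
    """Hash a file in chunks so large binaries aren't read into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def binary_unmodified(path: str, known_good_sha256: str) -> bool:
    """Compare an installed binary (e.g. the httpd executable)
    against a trusted reference hash."""
    return sha256_of(path) == known_good_sha256
```

On RPM-based systems, `rpm -V httpd` performs a similar integrity check against the package database; note that neither approach helps if the attacker also controls the reference you compare against.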

Our CTO here at GigeNET, Ameen Pishdadi, was recently interviewed by Net-Security.org. In this interview he discusses the various types of DDoS attacks, tells us who is at risk, tackles information gathering during attacks, lays out the lessons that he’s learned when he mitigated large DDoS attacks, and more.

Read the full article on the Net-Security.org website

Attacks on computer systems are on the rise. If a hacker gets into a system and steals credit card numbers, customer data or Social Security numbers, it can be financially devastating for a company. Businesses can lose most of their customers when those customers no longer trust them with their personal and financial information. For this reason, it is vital that a business stays ahead of the web criminals. The question is: how much will you pay for security?

The Costs are Greater if you do Nothing

If you do not acquire effective website security and your server is breached, you can pay immense costs, which include losing customers and suffering a serious loss of sales. If you have an online business, customer data and financial information could be stolen. The result can be lawsuits and loss of reputation, which could be financially devastating. The security measures you implement will depend on the type of website you have, such as a large corporate site or a small online store offering select products or services.

Generally, you have to consider measures such as security penetration tests, virus scanners, firewalls, anti-hacking technology, routine security assessments, phishing and malware protection, anti-virus protection, and anti-DDoS software. You also have to make sure you upgrade these security systems on a regular basis, and you will have to implement an office security policy for your employees.

Lessening the Risks

Security prevention means you must reduce the risks. When considering what to include in your hosting security plan, you should weigh the following: regulatory compliance, security breach history, industry standards, and the size of your network and systems. In addition, you need to consider risks to infrastructure, code, and applications, and how susceptible your system is to URL manipulation, SQL injection, and cross-site scripting.

The impact of a security breach can be devastating to a business, so it is essential to budget for a quality, all-inclusive security plan. Implementing an effective security system can be expensive; however, the cost of a breach can destroy a business. A good security system can give you peace of mind, knowing that your system and data are protected at all times.

Due to the increasing number of DDoS attacks, it is vital that businesses implement a diverse number of security measures in order to protect their websites and data from a wide range of security threats.

Five ways to protect against DDoS attacks:

  1. Vulnerability Scanning and Penetration Testing: Prevention is the key to website security, and vulnerability scanning is an effective prevention tool. A vulnerability scanner is a tool that scans a site for security vulnerabilities. The results of the scan allow administrators to secure the vulnerable spots, for example by improving firewall rules. Penetration testing is another tool that helps identify weaknesses in areas such as application code and browser scripts.
  2. DDoS Protection Software: A DDoS (Distributed Denial of Service) attack takes place when a server is overwhelmed with tasks and requests and can no longer function properly. A DDoS attack causes a server to use up a resource such as storage capacity, bandwidth, or processing power, leaving none of that resource for legitimate traffic. DDoS protection software runs on existing hardware and analyzes incoming traffic; when it detects malicious packets, it filters them out, which effectively stops a traffic flood attack.
  3. Application Firewalls: A web application firewall is a tool that sits between a client browser and the web server. It analyzes HTTP traffic, blocks web attacks, and prevents data leaks.
  4. Browser Security Tools: Make sure your browser has tools such as built-in XSS filter to minimize the risk of XSS attacks.
  5. Application Whitelists: Implement a policy of approved applications through the use of application whitelists.
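The traffic-filtering idea in item 2 can be sketched as a per-source rate limit. Real DDoS mitigation is far more sophisticated (behavioural analysis, challenge-response, upstream scrubbing); the class name and threshold below are arbitrary illustrations:

```python
from collections import defaultdict

class RateFilter:
    """Minimal sketch of the filtering idea in item 2: drop requests
    from any source that exceeds a per-window budget."""

    def __init__(self, max_per_window: int = 100):
        self.max_per_window = max_per_window
        self.counts = defaultdict(int)

    def allow(self, source_ip: str) -> bool:
        """Return True if this request is within the source's budget."""
        self.counts[source_ip] += 1
        return self.counts[source_ip] <= self.max_per_window

    def reset_window(self) -> None:
        """Call once per time window (e.g. every second)."""
        self.counts.clear()

f = RateFilter(max_per_window=3)
results = [f.allow("203.0.113.9") for _ in range(5)]
print(results)  # [True, True, True, False, False]
```

A scheme this simple fails against a distributed attack, where each bot stays under the per-source budget, which is exactly why dedicated protection software goes well beyond per-IP counting.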

When securing your website, prioritize and choose security tools that are affordable and provide an effective level of security. A security program that can both detect and prevent a diverse range of attacks will go a long way toward protecting your website from DDoS attacks.


DDoS Protected Hosting

Distributed Denial of Service (DDoS) attacks have become more prevalent and are now considered among the most serious threats to a web server. DDoS attacks have resulted not just in taking websites temporarily offline, but also in shutting websites down for days. Because of malicious attacks such as the pro-Wikileaks ‘Operation Payback,’ more enterprises are taking these attacks seriously and looking for effective anti-DDoS technology. Efficient anti-DDoS technologies are now available to safeguard web servers, and DDoS protected hosting is an efficient and affordable solution for preventing malicious DDoS attacks.

DDoS protected hosting protects your website from DDoS attacks by responding to an attack with DDoS prevention measures. DDoS attacks normally operate by driving an overwhelming amount of web traffic to a targeted server until the server can no longer function properly and stops responding; legitimate traffic is then lost. A sudden spike in unfiltered IP traffic is usually an indication that a DDoS attack is making its way into the network. Anti-DDoS software starts filtering the traffic immediately until it slows to normal levels, an indication that the attack has been mitigated. A few minutes later, traffic flows in at normal levels while remaining filtered, and legitimate traffic continues undisrupted.
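The “sudden spike” heuristic described above can be sketched as a moving-average threshold. The window size and multiplier below are arbitrary illustrations, not values from any real mitigation product:

```python
def detect_spike(samples, window=5, factor=3.0):
    """Flag indices where traffic exceeds `factor` times the average
    of the previous `window` samples: a toy version of the 'sudden
    spike' heuristic described above."""
    alerts = []
    for i in range(window, len(samples)):
        baseline = sum(samples[i - window:i]) / window
        if baseline > 0 and samples[i] > factor * baseline:
            alerts.append(i)
    return alerts

# Requests per second, with a simulated flood starting at index 6.
traffic = [100, 110, 95, 105, 100, 98, 2000, 1800, 120]
print(detect_spike(traffic))  # [6, 7]
```

Production systems combine many such signals (packet rates, protocol mix, source diversity) rather than relying on a single volume threshold.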

By placing a website behind a ProxyShield mitigation system, DDoS attacks that would otherwise have caused extended periods of downtime are effectively stopped. Businesses benefit from complete protection of their website IP address and automatic detection and filtering of DoS/DDoS attacks. DDoS protected hosting provides clients with the most current technology to ensure their websites are protected from malevolent elements. When choosing DDoS protected hosting, it is important to understand the level of protection the service provides to ensure that you have the most reliable DDoS protection available.

Website downtime caused by Distributed Denial of Service (DDoS) attacks can cost your business hundreds of thousands and even millions of dollars. Today, DDoS attacks aimed at shutting down websites have become one of the most costly computer crimes. If you have a growing e-commerce site, it is essential that you have complete DDoS attack protection. DDoS protected hosting is the most efficient and most affordable DDoS security solution.