Heartbleed: SSL Zero-Day Vulnerability

An OpenSSL vulnerability known as the Heartbleed Bug has been revealed, affecting nearly two thirds of the Internet’s servers. The bug allows attackers to exploit a critical programming flaw in OpenSSL’s implementation of the SSL/TLS encryption protocol. Since Secure Sockets Layer (SSL) and Transport Layer Security (TLS) are at the heart of Internet security, this security hole is very serious.


The vulnerability causes servers to leak stored data, revealing not just personal data such as emails, instant messages, and files, but also primary and secondary SSL key material, along with usernames and passwords. This data could then, in theory, be used as a skeleton key to bypass secure servers without leaving any trace that a site had been hacked.
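Conceptually, the flaw was a missing bounds check in OpenSSL's TLS heartbeat handler: the server echoed back as many bytes as the client claimed to have sent, not as many as it actually sent. The sketch below is a simplified Python model of that logic; the function names and the "server memory" contents are illustrative, not OpenSSL code:

```python
# Simplified model of the Heartbleed over-read (not real OpenSSL code).
SERVER_MEMORY = b"user=alice&pass=s3cret;-----BEGIN PRIVATE KEY-----"

def vulnerable_heartbeat(payload: bytes, claimed_len: int) -> bytes:
    # Bug: trusts the client-supplied length, so the reply can run past
    # the payload into whatever sits next to it in server memory.
    buffer = payload + SERVER_MEMORY   # model of adjacent heap contents
    return buffer[:claimed_len]

def patched_heartbeat(payload: bytes, claimed_len: int) -> bytes:
    # Fix: drop any request whose claimed length exceeds the real payload,
    # as the actual OpenSSL patch does.
    if claimed_len > len(payload):
        return b""
    return payload[:claimed_len]

leaked = vulnerable_heartbeat(b"ping", 30)
# leaked is b"ping" followed by 26 bytes of "server memory", credentials included
```

Repeating the oversized request harvests fresh 64 KB slices of memory each time, which is how private keys and passwords were extracted in practice.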


Please rest assured that all Network Box systems, which have been updated to the latest security patch levels, are NOT affected by the Heartbleed SSL Zero-Day Vulnerability.

Zero-Day Vulnerability: Microsoft Internet Explorer

Microsoft has announced a new zero-day vulnerability that affects all versions of Internet Explorer (IE): 6, 7, 8, 9, 10 and 11. As 26% of all computers use IE for web browsing, this is a serious issue. The vulnerability, tracked as CVE-2014-1776, exists in the way IE accesses an object in memory that has been deleted or has not been properly allocated.


A hacker could host a website designed to exploit this vulnerability and lure in users browsing with IE. Once a user has accessed the website, the hacker gains the same user rights as the current user. This gives the hacker complete control of the victim’s computer: the hacker can remotely execute code, install malware, and view, change or delete data.
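To illustrate the class of bug involved, a "use-after-free", here is a toy Python model (an illustration of the concept, not the actual IE exploit): a memory slot is freed, re-allocated with attacker-controlled data, and then used again through the stale reference:

```python
# Toy model of a use-after-free: a freed heap slot is reused by the
# attacker, and the stale reference now points at attacker data.
heap = {}          # slot id -> contents
free_slots = []    # slots marked reusable

def alloc(data):
    slot = free_slots.pop() if free_slots else len(heap)
    heap[slot] = data
    return slot

def free(slot):
    free_slots.append(slot)   # contents linger until the slot is reused

obj = alloc({"vtable": "legit_handler"})
free(obj)                                   # object deleted...
attacker = alloc({"vtable": "shellcode"})   # ...slot reused by the attacker
# Dangling use: code still holding `obj` now sees the attacker's data.
print(heap[obj]["vtable"])   # prints "shellcode"
```

In the real vulnerability the reused object holds function pointers, so the stale dereference hands control of execution to the attacker.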


Network Box Security Response, in partnership with the Microsoft Active Protections Program (MAPP), acted in real-time and PUSHed out NBIDPS (IDS & IPS) signatures to all Network Box systems running NBIDPS. These signatures detect exploits of the zero-day vulnerability and provide protection at the gateway. NBIDPS is designed to protect networks against known exploit vectors.

Zero Day Flaw Revealed – CVE-2014-1776

A new zero-day vulnerability, which affects all versions of Internet Explorer (IE) 6, 7, 8, 9, 10 and 11, has been announced by Microsoft and is being tracked as CVE-2014-1776.

As 26% of all computers use IE for web browsing, this is a serious issue. The vulnerability exists in the way IE accesses an object in memory that has been deleted or has not been properly allocated.

A hacker could host a website designed to exploit this vulnerability and lure in users browsing with IE. Once a user has accessed the website, the hacker gains the same user rights as the current user, giving him complete control of the victim’s computer and the ability to remotely execute code, install malware, and view, change or delete data.

Network Box Security Response, in partnership with the Microsoft Active Protections Program (MAPP), was quick to react and PUSHed out NBIDPS (IDS & IPS) signatures, in real-time, to all Network Box systems running NBIDPS. Already released, these signatures detect exploits of CVE-2014-1776 and provide protection at the gateway, ensuring that all customers using the NBIDPS system are safeguarded against known exploit vectors.

In addition, we are continuing to work with our anti-virus labs to provide HTTP anti-virus protection signatures for known exploits wherever possible.

Network Box: Windows XP Support


As of April 8th, 2014, support and updates for Windows XP are no longer available from Microsoft. This has prompted many people to contact our Security Operations Centers around the world, asking whether Network Box will also stop protecting computers running Windows XP. The following are the most common questions asked about this issue:

Will Network Box support Windows XP?

The short answer is that Network Box will continue to provide cyber security for customers running Windows XP.

What about the recent Microsoft Internet Explorer vulnerability?

As a Microsoft Active Protections Program (MAPP) partner, Network Box is given access to information on the latest zero-day threats and vulnerabilities, and provides virtual patching at the gateway before the public is even aware of any security issues.

This was illustrated by the recent zero-day vulnerability affecting all versions of Internet Explorer 6, 7, 8, 9, 10 and 11. Working in partnership with MAPP, Network Box Security Response PUSHed out NBIDPS (IDS and IPS) signatures, in real-time, to Network Box systems running NBIDPS. These signatures provided virtual patching at the gateway and kept client networks from being compromised.

Does Network Box secure other systems?

Network Box not only partners with Microsoft for threat intelligence gathering, but also works with more than seventy relevant companies and organizations to keep client systems as secure as possible.

An OpenSSL vulnerability known as the Heartbleed Bug affected nearly two thirds of the Internet’s servers. This vulnerability causes servers to leak stored data, revealing not just personal data such as emails, instant messages, and files, but also primary and secondary SSL key material, along with usernames and passwords. Once again, Network Box systems were NOT affected by the Heartbleed SSL Zero-Day Vulnerability, due to real-time virtual patching.

Should I upgrade?

At Network Box, we provide comprehensive 24x7x365 cyber protection for our customers, regardless of the operating systems or applications being used. For years, Network Box has been providing real-time protection at the gateway using patented PUSH technology, which automatically updates and installs security patches on every Network Box system across the globe in an average time of less than 45 seconds.


However, this does not mean Network Box recommends never patching or upgrading the operating systems and applications protected by Network Box’s Managed Security Services. As Microsoft themselves point out, Windows XP is over twelve years old. Technology marches ever forwards, and twelve years is a very long time in the world of computing.

Cyber Attacks Hit Businesses in the Middle East, Internet Security in the Headlines Again

Businesses in the region are already challenged by instability in the Middle East. Adding to this dilemma, cyber-attacks are becoming a chronic headache for businesses and individuals alike. While Internet security has in the past been viewed as an extra cost, businesses and individuals in the Middle East should reconsider that position, especially after several hacking incidents in the region. The following examples show the increasing trend of such attacks, and how harmful they are to businesses.

The competitive advantage of the Middle East as a region is its richness in energy resources, so it is vital to protect energy-sector businesses in order to keep their operations running productively. According to DNV KEMA, a global provider of risk-management services to a range of high-risk industries, the energy sector in the Middle East is more vulnerable to cyber-attacks than any other power market in the world. In 2012, a devastating cyber-attack hit Aramco, the state-owned Saudi Arabian oil company, aimed at disrupting production at the world’s largest exporter of crude oil. About 30,000 computers were affected by a “spear phishing” attack. Another important story comes from Qatar: RasGas, one of the world’s largest producers of liquefied natural gas, was infected by a virus that shut down its website and e-mail servers.

An example of a cyber-attack hitting financial institutions is the incident involving the National Bank of Ras Al Khaimah (RAKBANK) and Bank Muscat in May 2013. The two Gulf banks lost more than $45 million to an international credit-card hacking gang. The gang hacked into the systems of several credit-card processing companies, and the stolen card data were used to create counterfeit credit cards. The gang then withdrew the money from ATMs in about 27 countries.

Government institutions are also vulnerable to such attacks. In May 2013, a series of Saudi government websites, including that of the Interior Ministry, were hit by cyber-attacks. The Interior Ministry’s website crashed under a huge volume of service requests.

The entertainment industry has also had its share of such attacks. On June 4, 2013, a cyber-attack hit the Version radio website in the United Arab Emirates, disabling the Dubai-based site. The hacker claimed that he had downloaded the website’s database.

These incidents have brought IT security to attention, and it has become a high priority in the Middle East. An important event in the region was the Gulf Information Security Expo and Conference in Dubai, where Kevin Mitnick, a world-famous former hacker who spent time in jail and now works as a security expert, spoke to the audience about social engineering techniques that exploit human nature. He described these techniques as a “low-risk, cheap, and high return method of attack”.

As mentioned above, all kinds of businesses are vulnerable to cyber-attacks. Governments, large enterprises, financial institutions, and even small and medium businesses in the Middle East have to form a security strategy. Even if political and economic instability prevails in large parts of the region, these organizations should change their view of IT security spending. With the increasing number and sophistication of cyber-attacks, securing business networks is no longer a luxury; it is essential to the livelihood of the business.

In upcoming blog posts, we shall demonstrate how Network Box, through its products and managed IT security services, can vitally help enhance business network security and provide security for the Internet.

How to Hook a Hacker

January 25th, 2013

In 2012, we all became pretty familiar with hackers, getting to know groups like Anonymous, LulzSec, and others like them.  The activities of these high-profile hackers have come to the attention of international authorities, who are now increasingly working co-operatively across national boundaries to try to prosecute them.  Hooking a hacker is, however, easier said than done.

The obvious fact is that hackers use the Internet to obscure their identity.  Let’s start with the basics – the IP address that uniquely identifies a system on the Internet is dynamically allocated by the Internet Service Provider (ISP), so the only way of finding out who had that address, at that time, is to hope that the ISP has a record for this.  Not all ISPs do.  In the UK, ISPs do keep records of IP addresses allocated, but they don’t hold this information forever, so time is against the investigators.  The trouble is that any request for this information requires proof of illegal activity to generate a warrant, which can take time.

Even if IP address information can be retrieved, the source of the IP might well be in another country which raises political and legal barriers to any investigator.  Hackers know this and will deliberately attack targets outside their own countries.  Some hackers who have been identified have only finally been caught because they were arrogant enough to attack an institution in their own country.  This happened to Victor Faur from Romania who attacked NASA from the safety of his country.  He seemed immune to prosecution, as Romania did not recognise the crime, but Faur then decided to attack computers in Romania, at which point the Romanian authorities arrested him.

Knowing that they cannot always hide safely behind a dynamic IP address, hackers moved on.  The next step was for them to use a proxy or, more likely, several proxies.  Examples of proxies are:

  • Facilities provided by individuals or companies usually with the intent of making it possible for people in repressive regimes to have their say anonymously;
  • Systems that have been compromised without the victim’s knowledge.  Many hackers will tell of forgotten servers in some foreign country that they have hacked into and now route their traffic through; others will talk of an army of computers turned into their obedient servants, or robots, making a network of robots, or botnet.

Proxies make the investigators’ life a little harder as they may now have multiple ISPs in multiple countries to work with.  The result is more time, greater complexity and less certainty in the results.  And like chasing all prey, it requires patience, determination and a good deal of silence.

In the meantime, some hackers have moved on to use onion routing, a technique for anonymous communication over a computer network.  Tor is now the obfuscator of choice, though there are others.  These routing protocols manage to obscure the source, the destination and the actual body of the data, making the life of the investigator extremely difficult.  There are ways of discovering more information, but they require more time and considerable access to parts of the onion routers’ network, like the exit node of a Tor network for instance.
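The layering idea behind onion routing can be sketched as follows. This is a deliberately simplified model (single-byte XOR stands in for real cryptography, and the relay names are made up): the sender wraps the message in one encryption layer per relay, and each relay can peel off only its own layer, learning just the next hop:

```python
# Simplified onion-routing sketch: XOR is a stand-in for real encryption.
def xor(data: bytes, key: int) -> bytes:
    return bytes(b ^ key for b in data)

# Hypothetical circuit: entry relay first, exit relay last.
relays = [("relay_A", 0x17), ("relay_B", 0x2B), ("relay_C", 0x5C)]

def wrap(message: bytes) -> bytes:
    # The sender adds the exit relay's layer first, the entry relay's last,
    # so layers come off in circuit order.
    for _, key in reversed(relays):
        message = xor(message, key)
    return message

onion = wrap(b"GET / HTTP/1.1")   # unreadable until all layers are peeled
for name, key in relays:          # each relay removes exactly one layer
    onion = xor(onion, key)
print(onion)                      # b"GET / HTTP/1.1" only after the exit relay
```

No single relay sees both the cleartext and the true origin, which is what leaves investigators needing access to multiple points of the network at once.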

Sometimes, ironically, investigators are helped by the hackers themselves, who need to communicate either with their collaborators or, in the case of hacktivists, by the need to make people understand why they are taking the actions they are taking.  So the frequent posts made require that the hacker always takes precautions.  One Anonymous member, Sabu, was apparently caught because he failed to use Tor once when logging onto his IRC feed.  This allowed the FBI to see his IP and hence allowed him to be traced.  Another member, Nerdo, kept his childhood ‘handle’ so whilst as a hacker he was cautious, it was possible for investigators to associate this name with a real world name by tracing it back to a time when he was less careful, as he had less reason to be.

The job will get harder; hackers will learn how others were caught and will take precautions, and investigators will have to look for flaws in those new precautions.  By the nature of the game, investigators are reactive, waiting for a compromise and then having to chase on limited information.  Victims can help by having good security and improving logs, which help traceability by providing more information.  Good monitoring is also key, as the less the time between hack and detection, the less data that is stolen and the hotter the trail.

Simon Heron, http://blog.redscan.com/

Tackling Modern Malware

July 26th, 2012, by Simon Heron http://blog.redscan.com/?p=676

With new unique pieces of malware emerging daily and ever-increasing access requirements from a host of new endpoints, the challenge posed by malware detection has changed.  Zero-day threats pose an increasing risk as, by definition, nobody has a signature for them, and in many cases heuristics can be bypassed.

User habits are changing too; the vast majority of applications are now downloaded and installed over the internet.  Users need to connect to the internet to do anything useful; time off-line is usually brief, and increasingly rare and unproductive.  This, though, provides a new way of delivering security that can keep users safe and up to date instantly.  Webroot have used this in their Secure Anywhere (WSA) product to provide a new concept that changes the anti-malware game.

WSA doesn’t download vast databases of signatures onto an end user’s device, which is a boon for the increasing army of endpoints in use.  This saves bandwidth and time: installation times drop dramatically, making it very easy to install, whereas some anti-malware solutions download vast quantities of data every day in updates.  Instead, Webroot’s system stores a vast database in the cloud (over 400 TBytes and growing), which is updated continuously (around 200 GBytes of new data a day).  Any file that can be executed is first ‘hashed’, and the hash is sent up to this vast store and categorised as:

  • Known good software – the hash uniquely identifies the code as a known piece of software that has been tested and known to be safe to run.
  • Known bad software – the hash uniquely identifies the files as a known piece of malware that will be blocked from running and either quarantined or removed from the endpoint.
  • Unknown – this is where the clever stuff happens and the fact that Webroot’s database defines known as well as unknown makes this category very useful:

The graphic below illustrates the communication flow between the agent and cloud.

  • If the Webroot Intelligence Network (WIN) responds with an unknown classification, the file is executed in a virtual sandbox environment. This allows the behaviour of the file to be monitored. This behaviour is then packaged and sent up to the Webroot Intelligence Network where it’s compared to thousands of behavioural rules.
  • In the diagram, you can see the behaviour is classified as Good. This means that Webroot haven’t observed any malicious behaviour at this stage.
  • Because the behaviour is good (so far), the file is allowed to execute on the endpoint, but it is placed in monitor mode. While in monitor mode, the behaviour is watched to see if it changes. As soon as it starts to behave maliciously, or as soon as Webroot’s Threat Research team identifies the threat, the malware is quarantined or removed and, more importantly, remediated.
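The three-way classification described above can be sketched in a few lines of Python. The file contents and the reputation sets here are hypothetical stand-ins; Webroot's real cloud service and record formats are not public:

```python
# Sketch of hash-based cloud classification: good / bad / unknown.
import hashlib

# Stand-ins for the cloud-side reputation database.
KNOWN_GOOD = {hashlib.sha256(b"notepad.exe contents").hexdigest()}
KNOWN_BAD  = {hashlib.sha256(b"dropper.exe contents").hexdigest()}

def classify(file_bytes: bytes) -> str:
    digest = hashlib.sha256(file_bytes).hexdigest()
    if digest in KNOWN_GOOD:
        return "known good"    # allowed to run normally
    if digest in KNOWN_BAD:
        return "known bad"     # blocked, quarantined or removed
    return "unknown"           # sandboxed and behaviour-monitored

print(classify(b"notepad.exe contents"))   # known good
print(classify(b"brand-new-sample"))       # unknown
```

The key point is that only the hash travels to the cloud, so the lookup is tiny compared with shipping full signature databases to every endpoint.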

While in monitor mode, every single change the file makes to the endpoint is recorded in a local change-journal database. So if a file is found to be malicious, remediation means not just quarantining or deleting the malware, it means that all changes that the file made to the endpoint can be reversed, providing a perfect clean-up routine.
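The change-journal idea can be modelled simply: record the prior state before every change a monitored file makes, then replay the journal in reverse to remediate. This is a simplified sketch of the concept, not Webroot's implementation:

```python
# Sketch of a change-journal: journal every change, undo newest-first.
filesystem = {"hosts": "127.0.0.1 localhost"}   # toy model of the endpoint
journal = []   # (path, previous_value) pairs, newest last

def monitored_write(path, new_value):
    journal.append((path, filesystem.get(path)))  # remember the old state
    filesystem[path] = new_value

def remediate():
    # Reverse every journaled change, newest first.
    while journal:
        path, old = journal.pop()
        if old is None:
            del filesystem[path]      # the file didn't exist before
        else:
            filesystem[path] = old

monitored_write("hosts", "127.0.0.1 bank.example evil-proxy")
monitored_write("autorun.inf", "run=malware.exe")
remediate()
# filesystem is back to {"hosts": "127.0.0.1 localhost"}
```

Undoing in reverse order matters: it restores each item to the state it had before the earliest journaled change, even when the same file was modified several times.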


In addition to the monitoring functionality, there is also a powerful Identity & Privacy shield that protects data from information-stealing malware, meaning that even if the initial infection tries to make changes, the endpoint and the user’s data will still be protected.

The other major benefit this solution brings to companies is that it can be run from an interface in the cloud, allowing administrators to manage the system from wherever they are, without the time and expense of maintaining a locally hosted server.  This administration interface provides a wide range of features, allowing administrators to perform all the usual administration tasks, whitelist and blacklist applications, and even execute commands on end users’ systems if required.

The other thing to consider is what happens when the endpoint is not connected to the internet.  If a brand-new piece of software is introduced when the endpoint is completely offline, and it has no relationship with any existing software on the endpoint, then WSA automatically applies special offline heuristics blocking many threats automatically. If a threat gets past this logic, it is run in monitoring mode which ensures any threats that do execute cannot do lasting damage.

The suspicious program is monitored to see precisely what files, registry keys, and memory locations are changed by the software program, while remembering the “before and after” picture of each change. If the software is subsequently found to be malicious, WSA proceeds to clean up the threat when it is online again. The important thing here is that WSA doesn’t just simply delete the main file—it removes every change that the threat made and returns the endpoint to its previously known good state. If at any point a suspicious program tries to modify the system in such a way that WSA cannot automatically undo it, the user is notified and that change is automatically blocked.

With conventional antivirus products, their signature bases are never completely up to date. When a brand-new infection emerges, and the antivirus software hasn’t applied the latest update or there isn’t a signature written for that specific threat, the infection simply roams freely across all endpoints, deleting, modifying, and moving files at will. As a result, it doesn’t really matter if a device is online or offline—the malware infection has succeeded in compromising the endpoint.

When a traditional AV product comes back online, it applies any updates and if configured to do so, runs a time-consuming scan—it might then be able to remove the infection. But it will not be able to completely reverse the changes the infection made, so the user or administrator will have to activate the System Restore function. More likely, the endpoint will need to be re-imaged because it’s so unstable—a major further drain on time and productivity.

Conversely, WSA leverages behavioural monitoring to pick up infections when the Internet is inactive or the endpoint is offline and it isn’t sure whether a file is malicious or not. This process provides uniformly strong protection against the damaging effects of malware.

The effectiveness of the approach was highlighted when it recognised Flame in 2007, protecting customers a good number of years before other manufacturers even knew of its existence.  A conversation between computer security companies reveals the effectiveness of Webroot’s approach: http://www.npr.org/2012/05/30/153970997/computer-security-companies-debate-flames-origins.

This is a really clever use of the internet: a large database of signatures and heuristics in the cloud, while keeping the footprint on the endpoint very light, giving the best of both worlds.  There is also a mechanism for catching and remediating zero-day threats, which shows a degree of pragmatism rarely seen in other products.



Top Myths of Endpoint Security

Posted on Wed, Jun 12, 2013 by http://blog.networkboxusa.com/

This hits right home with Network Box, as we do perimeter security, and we cringe every time we hear AV companies making the case that moat defenses are a thing of the past. Indeed, if anyone is suffering in the fight against malware today, it is those very companies owning the endpoint. Yes, the same ones telling you to ditch your gateway defenses.

I always like to draw a parallel with the real world; think of the perimeter defense as being the stone fort and moat surrounding your castle. Now, would you remove those and let your enemies through? Wouldn’t you rather stop them at the gate? Or, as far away from your gate as possible, for that matter?


From a stricter technical standpoint, several considerations must be made:

1- Endpoint security is often at the mercy of the end user, because it’s possible for an end user to turn it off or reconfigure it incorrectly. The IT department needs to set things up very carefully to avoid this, and even then, some users still manage to circumvent such measures.

2- As with every other security system, endpoint security requires updates. If updates aren’t being correctly downloaded and installed, there is often little feedback (will the end user call IT when they get that pop-up?).

3- The endpoint security solution runs on the same platform it’s trying to defend and, consequently, suffers from identical vulnerabilities; it is, in itself, vulnerable. In fact, the first thing many Trojans do when they start working is take down the endpoint security, disarm it, and render it useless (assuming it was ever useful to begin with, if it allowed the Trojan in). Any Trojan that doesn’t do this is unlikely to be that dangerous anyway.

4- Performance is very often an issue (which irks end users). These endpoint systems require a large amount of CPU to run, sometimes slowing down the user experience so much that the user ends up doing anything he can to disable the system. They are, in a word, intrusive!

5- Attacks are complex.

In general, we talk about blended threats. Therefore, trying to lay claim that one single technology can defend a network against all threats is pretentious, to say the least. For instance, if your users are browsing websites that are dangerous, the best defense is to stop them from doing so in the first place. This is called content (or web) filtering, and it’s best done centrally, at the gateway, as a proxy system, and not on every single workstation because the settings would be a nightmare to maintain.
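A minimal sketch of that centralised filtering idea, with hypothetical domains and categories: one blocklist checked at the gateway proxy protects every workstation behind it, with nothing to configure per machine:

```python
# Sketch of gateway content filtering: one central category blocklist.
BLOCKED_CATEGORIES = {"malware", "phishing"}

URL_CATEGORIES = {            # stand-in for a gateway's URL database
    "evil.example": "malware",
    "login-paypa1.example": "phishing",
    "news.example": "news",
}

def gateway_allows(host: str) -> bool:
    # Every workstation's traffic passes through this one check.
    category = URL_CATEGORIES.get(host, "uncategorised")
    return category not in BLOCKED_CATEGORIES

print(gateway_allows("news.example"))   # True
print(gateway_allows("evil.example"))   # False, blocked at the gate
```

Updating `BLOCKED_CATEGORIES` or the URL database in one place changes policy for the whole network at once, which is exactly what per-workstation settings cannot do cleanly.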

IPS technology checks things inside a TCP/IP packet which AVs do not look for, as AVs scan/analyze entire files, and not single packets. Indeed, there are threats called network worms, which aren’t detectable by end point technologies because they are, essentially, threats of a different nature.
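The distinction can be sketched as follows: an IPS matches byte patterns inline, per packet, before anything ever reaches a workstation or is assembled into a file. The signature below is purely illustrative, not a real NBIDPS rule:

```python
# Sketch of per-packet IPS signature matching (illustrative pattern only).
WORM_SIGNATURE = b"\x90\x90\x90\x90/system32/cmd.exe"  # hypothetical

def ips_inspect_packet(payload: bytes) -> str:
    # Operates on a single packet, in-line at the gateway, before any
    # file exists for an AV engine to scan.
    if WORM_SIGNATURE in payload:
        return "drop"       # block the packet
    return "forward"

packet = b"...TCP data..." + WORM_SIGNATURE + b"..."
print(ips_inspect_packet(packet))              # drop
print(ips_inspect_packet(b"GET / HTTP/1.1"))   # forward
```

A network worm spreading via a raw exploit packet never becomes a file on disk, which is why a file-oriented AV engine has nothing to scan and a packet-level IPS does.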

6- At Network Box, we witness an average of 500,000 new threats every day. To put it clearly and bluntly, AV companies simply cannot keep up.

The sheer amount of data they must analyze to create protection is impossible to filter. New techniques are emerging, all of which can be categorized under the umbrella of real time behavior analysis. These are mostly cloud based mechanisms that attempt to correlate the presence of malware in traps with the same fingerprints seen on the workstations. As far as we can tell, no single AV company in the marketplace at the moment is actually able to keep up with the amount of threats they’re having to deal with.

So, the logical question is: how can they possibly claim that perimeter protection is dead? If anything, perimeter defense is more alive than ever, because it ensures updates are done, management is centralized under the strict control of the IT department, and the operating systems deployed are (most of the time) Linux-based or proprietary and, as a result, not quite as vulnerable as Windows-based systems (or at least their vulnerabilities are not as well known and exploited). There are many more advantages.

To show how strongly I feel about this, I’m personally not running endpoint security on my computer. None. Why? Because I consider it useless and intrusive.

I’m always behind one of our devices, and I know my traffic is very well scanned ~ to the point that if our device misses something, I’m fairly confident my EP would miss it as well.