Another high-profile security company has been breached. Bit9, a leading provider of application whitelisting technology, has admitted to having been attacked by a malicious external third party that illegally gained access to one of its digital code-signing certificates. The attackers did not waste time: the compromised certificate was immediately used to sign malware that infiltrated, according to the company’s investigation, the networks of three customers.
The news was initially revealed by Brian Krebs in a blog post and later confirmed by the security vendor, which also gave additional (scant) details, including the fact that the attackers were able to infiltrate a portion of its internal network that was not protected by its own product.
“We simply did not follow the best practices we recommend to our customers by making certain our product was on all physical and virtual machines within Bit9.”
At first glance the attack has many points in common with the infamous RSA breach of 2011, including the possibility that the real target was not the company itself but the protected networks of its customers. On the other hand, if it is true, as the company claims, that Bit9 was the only security vendor capable of stopping both the Flame malware and the RSA breach attack, then the attackers had no choice but to attack the source of that technology in order to reach their real targets.
It is the latest demonstration, if one were needed, that attacks are becoming more and more aggressive and sophisticated, and that protection is not only a matter of technology but also of good procedures and best practices, and not only for the potential victims…
(But in some cases they may remain unknown for up to 2.5 years.) A couple of days ago, two Symantec researchers published an interesting article (“Before We Knew It: An Empirical Study of Zero-Day Attacks In The Real World”) reporting their study of 0-day attacks between 2008 and 2011. They analyzed 300 million files collected from 11 million hosts (a representative subset of the hosts running Symantec products) between March 2008 and February 2011.
These files were extracted from the WINE environment (Worldwide Intelligence Network Environment, a platform for repeatable data-intensive experiments aimed at sharing comprehensive field data with the research community) and correlated with three additional sources: the Open Source Vulnerability Database (OSVDB), Symantec’s Threat Explorer (the company’s database of known malware samples) and an additional Symantec data set with dynamic analysis results for malware samples.
The purpose of the research was to perform a sort of automated forensic analysis, going back in time to look for 0-day attacks carried out during the analyzed period. The results are disarming.
The researchers were able to find 18 vulnerabilities exploited before disclosure, of which 11 were not previously known to have been used in 0-day attacks. Based on the data, a typical zero-day attack lasts 312 days on average, but in some cases may remain unknown for up to 2.5 years (think of what it means to have the enemy inside the gates for such a long time).
Just to confirm that 0-days are the cradle of targeted attacks, the data show that most zero-day attacks affect only a few hosts, with the exception of a few high-profile attacks (do you remember Stuxnet?). Moreover, after vulnerabilities are disclosed, the volume of attacks exploiting them increases by up to 5 orders of magnitude (while the number of variants increases “only” by up to 2 orders).
And this is not a mere coincidence: cybercriminals apparently watch the vulnerability landscape closely, as exploits for 42% of all the vulnerabilities involved were detected in field data within 30 days of the disclosure date.
A terribly worrying landscape, even allowing for a theoretical weak point of the research: the sample is self-referential, covering only malware strains collected from Symantec customers.
Here we are with the statistics from the Cyber Attack Timelines for the first and second half of July 2012. The sample included 76 attacks, which have been analyzed according to the three familiar parameters: Motivations behind attacks, Distribution of attack techniques and Distribution of targets.
Again, I will never get tired of repeating that the data must be taken very carefully, since they refer only to discovered attacks (the so-called tip of the iceberg), and hence do not pretend to be exhaustive but only aim to provide a high-level overview of the “cyber landscape” of the period considered. Moreover, remember that the most dangerous threats are the invisible ones.
As far as the Motivations Behind Attacks are concerned, July confirmed the predominance of Cyber Crime, although it dropped to 55% from the previous month’s 72%. It is interesting to notice the corresponding growth of Hacktivism, from 18% in June to 32% in July. Although the number of (discovered) attacks motivated by Cyber Espionage remains low, this month their occurrences nearly doubled as a consequence of the events in the Middle East, which is confirmed as a “hot area” of the cyber arena. Cyber Warfare sits at the bottom of the chart with a “poor” 4% of occurrences.
The Distribution of Attack Techniques chart confirms that it is getting harder and harder to recognize what the cyber crooks have leveraged to reach their goal. The percentage of unknown attacks grew from 36% in June to 45% in July. In any case, among the recognized attacks, SQL Injection ranks number one with 28% of occurrences. DDoS confirmed its decreasing trend, from 16% in June to 9% in July. Maybe the potential victims are learning to defend themselves effectively?
The Distribution of Targets chart confirms that industry targets remain at the top of the cyber crooks’ preferences with 32% of occurrences, well above last month’s 21%. Government targets confirmed their second place with 15% of occurrences (down from 18% in June), followed by Online Services with 10%. It is interesting to notice the low number of incidents targeting Law Enforcement Agencies and Military Institutions. Maybe, after the high number of cyber attacks suffered, they are learning to enforce adequate countermeasures.
If you want to have an idea of how fragile our data are in cyberspace, have a look at the timelines of the main Cyber Attacks in 2011 and 2012 (regularly updated), and follow @paulsparrows on Twitter for the latest updates.
Also, feel free to submit remarkable incidents that in your opinion deserve to be included in the timelines (and charts).
A few days ago I discovered that the city I live in (Rome) ranks number two in the world for the number of bot infections, at least according to the Symantec Internet Security Threat Report, Edition XVII.
Of course reports must be taken with caution, but it is undeniable that bot infections are becoming a huge problem for the information security community (a modern biblical plague), so huge as to deserve the attention of the Federal Communications Commission. As a matter of fact, in March 2012 the FCC, working with communications companies including Verizon, Cox, and Comcast, passed a voluntary code that delineates the steps ISPs should take to combat botnets. As you probably know, botnets may be used by cyber crooks to make money from different criminal activities, ranging from information theft to the execution of DDoS attacks: have a look at this interview with a botnet operator to get an idea (and to discover that botnets are also used to counterfeit virtual currency).
This plague is pushing a major change to the traditional security paradigm, a change that can be summarized in a few words: if yesterday the refrain for system administrators was “Beware of what enters your network” (so all the security warfare was focused on checking ingress traffic), today it is becoming “Beware of what leaves your network”.
This is nothing else than a consequence of the fact that traditional endpoint technologies are proving not so effective against bots, so a new approach is needed, one that aims to control the egress traffic generated by compromised endpoints as it leaves the organization. The effectiveness of traditional endpoint technologies is not optimal because new variants (capable of evading antivirus controls) come out much faster than the related signatures developed by vendors: have a look at the average antivirus detection rate against Zeus (the god of bots), and you will probably be disappointed to notice that it is stable at a poor 38%. On the other hand, recognizing the communication patterns at the perimeter is a more profitable strategy, since the different variants generally do not deeply change the communication protocols with the C&C server (unless a P2P protocol is used, see below).
The strategy to mitigate botnets relies on the fact that each botnet has (in theory) a single point of failure: the C&C server, at which cyber hunters and law enforcement agencies direct their takeover attempts, either to take the botnet down definitively or to turn it into a sinkhole for studying the exact morphology and extent of the infection. Depending on the botnet configuration, each infected endpoint polls the C&C server for new instructions at a given time interval, and that is the point of the process at which the good guys may act: detecting (and blocking) that traffic allows infected machines to be identified (and my experience indicates that too often those machines are equipped with an up-to-date yet blind antivirus).
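That periodic polling is exactly what egress monitoring can latch onto: a compromised host contacts the same external destination at suspiciously regular intervals. As a rough illustration (hypothetical log format and thresholds, not any vendor’s actual detection logic), beaconing candidates can be extracted from connection logs like this:

```python
from statistics import mean, pstdev
from collections import defaultdict

def find_beacons(events, min_hits=5, max_jitter=0.1):
    """Flag (src, dst) pairs whose egress connections recur at
    near-constant intervals, the typical C&C polling pattern.
    `events` is an iterable of (timestamp, src_ip, dst_ip) tuples."""
    by_pair = defaultdict(list)
    for ts, src, dst in events:
        by_pair[(src, dst)].append(ts)

    suspects = []
    for pair, stamps in by_pair.items():
        if len(stamps) < min_hits:
            continue
        stamps.sort()
        gaps = [b - a for a, b in zip(stamps, stamps[1:])]
        avg = mean(gaps)
        # Low relative jitter suggests machine-generated, periodic traffic
        if avg > 0 and pstdev(gaps) / avg < max_jitter:
            suspects.append((pair, avg))
    return suspects
```

Fed with proxy or netflow records, a host polling its C&C every 60 seconds stands out immediately, while human-driven browsing, with its irregular timing, does not.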
For the record, the C&C server is only a theoretical single point of failure, since C&C servers are generally highly volatile and dynamic, so it is not easy to intercept and block them (the only way to take down a botnet); hence, in my opinion, it would be more correct to say that a botnet has many single points of failure (an information security oxymoron!).
As if that were not enough, in order to make life harder for the good guys, next-generation botnets are deploying P2P protocols to decentralize the C&C function and make their takedown even tougher.
But the good guys have a further weapon in this cat-and-mouse game: cloud intelligence. Even if I am not a cloud enthusiast, I must confess that this technology is proving to be a crucial element in thwarting botnets, since it makes it possible to collect real-time information about new threats and to centralize the “intelligence” needed to classify them dynamically (and quickly). Real-time information is collected directly from the enforcement points placed at the perimeter, which analyze the egress traffic of organizations containing compromised machines. Of course, after successful analysis and classification, the new patterns can be shared among enforcement points across the five continents to provide real-time detection of (and hence protection against) new threats. This approach is clearly much more efficient than endpoint-based enforcement (which would need to share the information among a far larger number of devices), provided the enforcement points are positioned adequately, that is, they are capable of monitoring all the egress traffic.
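The sharing loop can be sketched in a few lines: an enforcement point reports a newly observed egress pattern to the central service, which classifies it and pushes the verdict back to every subscriber. This is only a toy model of the flow (all class and method names are hypothetical, and real products obviously do far more):

```python
class CloudIntelligence:
    """Toy central service: classifies egress patterns reported by
    perimeter enforcement points and pushes verdicts to all of them."""

    def __init__(self):
        self.verdicts = {}   # pattern -> "malicious" / "benign"
        self.points = []     # subscribed enforcement points

    def subscribe(self, point):
        self.points.append(point)

    def report(self, pattern, classify):
        """Called by any single enforcement point that sees a new
        pattern; the resulting verdict is shared with every point."""
        if pattern not in self.verdicts:
            self.verdicts[pattern] = classify(pattern)
            if self.verdicts[pattern] == "malicious":
                for point in self.points:
                    point.blocklist.add(pattern)


class EnforcementPoint:
    """Perimeter device filtering egress traffic with a shared blocklist."""

    def __init__(self, cloud):
        self.blocklist = set()
        cloud.subscribe(self)

    def allows(self, pattern):
        return pattern not in self.blocklist
```

The point of centralizing the intelligence is visible even in this sketch: a pattern reported and classified at one site is immediately blocked at every other site.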
The combination of egress traffic analysis and cloud intelligence is a good starting point for mitigating the effects of botnets (it is certainly necessary for identifying infected machines) but, as usual, do not forget that the user is the first barrier, so a good level of education is a key factor, together with consolidated processes and procedures to handle infections.
What is a Cyber Weapon? At first glance this seems an easy question to answer, but anyone who tries to analyze the meaning of the term more deeply will probably be quite surprised and disappointed to discover that the answer is not so immediate, since an exact definition has not been given (at least so far).
A real paradox in the very days in which the Pentagon, following the Japanese example, has unveiled its new strategy aimed at dramatically accelerating the development of new cyber weapons. And do not think these are isolated, fashion-driven examples (other nations are adopting the same strategy); rather, consider them real needs of the post-Stuxnet age, an age in which more and more governments are moving their armies into the fifth domain of war [you will probably remember the (in)famous episode in which F-Secure discovered the Chinese government launching online attacks against unidentified U.S. targets].
Recently Stefano Mele, a friend and colleague from the Italian Security Professional Group, tried to answer this question in his paper (so far only in Italian, but it will soon be translated into English), where he analyzes cyber weapons from a legal and strategic perspective.
As he points out, “Correctly defining the concept of Cyber Weapon, thus giving a definition also in law, is an urgent and unavoidable task, for being able to assess both the level of threat deriving from a cyber attack, and the consequent political and legal responsibilities attributable to those who performed it”. Maybe this sentence captures the reason why a coherent definition has not been given so far: a cyber weapon is not only a technological concept, but hides behind it complex juridical implications.
With this in mind, according to Stefano’s definition, a cyber weapon is:
A device or any set of computer instructions intended to unlawfully damage a system acting as a critical infrastructure, its information, the data or programs therein contained or thereto relevant, or even intended to facilitate the interruption, total or partial, or alteration of its operation.
The above definition implies that cyber weapons may in theory span a wide range of possibilities: from (D)DoS attacks (which typically have a low level of penetration, since they only target the “surface” of their victims) to “tailored” malware like Stuxnet, characterized by high intrusiveness and a low rate of collateral damage.
One could probably argue about whether a cyber weapon must necessarily generate physical damage, in which case Stuxnet would probably be the only one, so far, to meet all the requirements. In any case, from my point of view, the effects of a cyber weapon should be evaluated within its domain of relevance, the cyberspace, with the possibility of crossing the virtual boundaries and extending into the real world (Stuxnet is a clear example of this, since it inflicted serious damage on Iranian nuclear plants, with the potential for large-scale accidents and loss of lives).
With this idea in mind, I tried to build a model to classify cyber weapons according to four parameters: Precision (the capability to hit only the specific objective, reducing collateral damage), Intrusion (the level of penetration inside the target), Visibility (the capability to remain undetected), and Easiness to Implement (a measure of the resources needed to develop the specific cyber weapon). The results, ranging from paintball pistols to smart bombs, are summarized in the above chart.
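The four parameters can be made concrete with a small sketch. The scores below are purely illustrative placements on a hypothetical 0–10 scale (nothing here is measured data), just to show how the two extremes of the model compare:

```python
from dataclasses import dataclass

@dataclass
class CyberWeapon:
    """Hypothetical 0-10 scores for the four classification parameters.
    For visibility, a higher score means harder to detect; for easiness,
    a higher score means cheaper to build."""
    name: str
    precision: int   # capability to hit only the intended objective
    intrusion: int   # depth of penetration into the target
    visibility: int  # capability to remain undetected
    easiness: int    # inverse of the resources needed to develop it

    def sophistication(self):
        # Toy aggregate: precise, deep and stealthy weapons score high,
        # while cheap, off-the-shelf ones score low.
        return self.precision + self.intrusion + self.visibility - self.easiness

# Illustrative placements of the two extremes of the spectrum
ddos = CyberWeapon("DDoS (paintball pistol)", precision=3, intrusion=1,
                   visibility=1, easiness=9)
apt = CyberWeapon("Tailored APT (smart bomb)", precision=9, intrusion=9,
                  visibility=8, easiness=1)
```

Under this toy scoring, the tailored APT lands far above the DDoS attack, which mirrors the paintball-pistol-to-smart-bomb axis of the chart.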
As you may notice, in these terms a DDoS attack is closer to a paintball pistol: the latter has a low level of penetration and its effects are more perceived than real (it shows the holder’s intention to harm the victim rather than constituting a real danger), nevertheless it may be used to threaten someone or, worse, to commit a robbery. The same is true for a DDoS: it is often used to threaten the target, its action stops at the surface, and usually the effects are more relevant in terms of the victim’s reputation than in terms of damage done. Nevertheless, for the targets, it may lead to an interruption of service (albeit with no physical damage) and to monetary losses.
On the opposite side there are specific “surgical” APTs: they have a high level of penetration with reduced collateral damage, and they are able to remain hidden for a long time, but they require huge investments to develop, which ultimately makes their adoption not so easy.
Of course, in between there is a broad gray area where the other cyber weapons reside, depending on their positioning according to the four classification parameters identified… So, in the end, what do you think? Do you agree with this classification?
Several days ago I tweeted a request for help to my tweeps for building an India-Bangladesh Cyber War timeline.
Unexpectedly, the day after, I received a complete and detailed timeline from Catherine Anita. I nearly risked losing it, since my Gmail account had marked the message as spam. As I do not completely trust antispam engines and regularly check my Spam folder, I was fortunately able to retrieve her e-mail and its precious content.
Catherine did a complete review, which I formatted according to my consolidated model, adding a couple of events from the last few days.
Since the timeline of the mutual cyber attacks is quite long, I decided to divide it into two parts, the first covering the cyber attacks carried out by pro-Bangladesh hacking crews against India (listed in strict alphabetical order). Expect the update with attacks carried out from the other side tomorrow.
As usual, you will find all the references after the jump.