This week in information security has offered many points of interest: the brand-new CRIME attack against SSL/TLS; the release of BlackHole Exploit Kit 2.0, which promises new stealth vectors for drive-by download infections; Microsoft's takedown of the emerging Nitol botnet; and, last but not least, the first (?) known example of a new generation of C&C servers leveraging the anonymization granted by the Tor service.
The latter is, in my opinion, the news with the most important consequences for the information security community, since it delineates the next step of botnet evolution, after the common, consolidated C&C communication schema and its natural successor, Peer-to-Peer (P2P) communication.
The first (I wonder if it is really the first) discovery of a botnet command server hidden in Tor, using the IRC protocol to communicate with its zombies, has been announced in a blog post by G-Data. The advantages of such a communication schema are quite simple: the botnet may use the anonymity granted by the Deep Web to prevent the identification, and likely takedown, of the server, and the encryption of the Tor protocol to make traffic identification by traditional layers of defense harder. Two advantages that greatly outweigh Tor's latency, which represents the main weakness of this schema.
Maybe it was only a matter of time; in any case it is not a coincidence that in the same weeks researchers have discovered BlackHole 2.0 and the first (maybe) C&C infrastructure hidden inside the Deep Web: cyber criminals are continuously developing increasingly sophisticated methods to elude law enforcement agencies and to evade the security controls of the traditional bastions, and botnets are proving more than ever to be the modern biblical plague of the Web…
And even if every now and then the good guys are able to score a victory (as with the Nitol takedown), the war is far from over.
Targeted attacks exploiting endpoint vulnerabilities are becoming more and more common and increasingly aggressive.
For this reason I could not help but notice the latest report from NSS Labs dealing with the capability of 13 consumer-grade AV products to protect against two critical Microsoft vulnerabilities (CVE-2012-1875 and CVE-2012-1889). Successful exploitation of these critical vulnerabilities could result in arbitrary remote code execution by the attacker, leading to very harmful consequences for the victim, such as the victim's machine becoming part of a botnet. Unfortunately, a very common scenario in these troubled days.
Even if these vulnerabilities are a couple of months old (and patched), the resulting report is not so encouraging, and renews the dramatic question: are endpoint protection technologies, on their own, capable of offering adequate protection in the current cyber landscape?
Probably not, considering the findings, which are quite frustrating:
- Only 4 of the 13 products blocked all attacks: exploit prevention remains a challenge for most products;
- More than half of the products failed to protect against attacks over HTTPS that were blocked over HTTP, a serious deficiency for a desktop AV / host intrusion prevention system (HIPS);
- Researchers are not the only ones testing security products: criminal organizations also have sophisticated testing processes in order to determine which product detects which malware, and how the various products can be evaded. Some crimeware kits already include various one-click buttons to “Bypass VendorX,” for example.
Ok, you might argue that only consumer-grade AV products were tested, so enterprise organizations are not so exposed to exploit attacks. Mmh… do not jump to conclusions: I believe the reality is quite different, and enterprise organizations are even more exposed, for the following reasons:
- More and more organizations are adopting a BYOD policy in which users are free to use their own devices. Even worse, too often these devices are equipped with outdated EPPs (how many organizations enforce NAC policies to check the integrity of the endpoint?).
- Most of all… if cyber criminals have sophisticated testing processes in place, aimed at testing the detection capabilities of the various products, why should they use them only against consumer products and not (also) against the even more appealing enterprise market?
Yes, I definitely believe that endpoint protection technologies, on their own, do not offer adequate protection for exploit prevention, and that the time has come for Advanced Threat Detection/Prevention technologies (like Lastline :-)).
February 2012 brings a new domain for my blog (it's just a hackmageddon) and confirms the trend of January, with a constant and unprecedented increase in the number and complexity of events. Driven by the echo of the anti-ACTA movement, Anonymous has performed a massive wave of attacks, resuming its old habit of targeting law enforcement agencies. From this point of view, this month has registered several remarkable events, among which the hacking of a conference call between the FBI and Scotland Yard and the takedown of the Homeland Security and CIA web sites.
The hacktivism front has been very hot as well, with attacks in Europe and Syria (where the presidential e-mail was hacked), and even against the United Nations (once again) and the NASDAQ stock exchange.
Scroll down the list and discover the (too) many illustrious victims, including Intel, Microsoft, Foxconn and Philips. After the jump you will find all the references, and do not forget to follow @paulsparrows for the latest updates. Also have a look at the Middle East Cyberwar Timeline, and the master indexes for the 2011 and 2012 Cyber Attacks.
Addendum: of course it is impossible to keep count of the huge number of sites attacked or defaced in the aftermath of the anti-ACTA movement. In any case, I suggest a couple of links that may be really helpful:
- List of all vulnerable websites attacked by anonymous Part II (updated daily) (via cylaw.info)
- List of Websites Hacked, Defaced & Taken Down By Anonymous (via valuewalk.com)
This infamous 2011 is nearly gone, and here is the last post of the year concerning the 2011 Cyber Attacks Timeline. As you will soon see, from an infosec perspective this month has been characterized by two main events: the LulzXmas with its terrible Stratfor hack (whose effects are still ongoing with the recent release of 860,000 accounts), and an unprecedented wave of breaches in China which led to the dump of nearly 88 million user records, for a theoretical cost of nearly $19 million (yes, the Sony breach comes close). For the rest: an endless cyberwar between India and Pakistan, some hacktivism, and (unfortunately) the usual amount of “minor” breaches and defacements. After the page break you will find all the references.
Last but not least… This post is my very personal way to wish you a happy new infosec year.
It looks like the approach of Christmas is not stopping hackers, who have targeted a growing number of organizations, including several security firms (Kaspersky, NOD32 and Bitdefender), even if only on secondary domains and with “simple” defacements.
Cyber chronicles report the breach of Gemnet, another Certification Authority in Holland (the 12th security incident targeting CAs in 2011), and several massive data breaches targeting Finland (the fifth this year, affecting 16,000 users), online gambling (UB.com, affecting 3.5 million users), telcos (Telstra, affecting 70,000 users), and gaming which, after the well-known attacks on Sony, Sega and Nintendo, saw Square Enix suffer a huge attack compromising 1,800,000 users (even if it looks like no personal data were affected).
Online payment services were also targeted by cybercrooks: a Visa East European processor was hit by a security breach, and four Romanian homegrown hackers were arrested for a massive credit card fraud affecting 200 restaurants, with a total of 80,000 customers having their data stolen.
As usual, hacktivism was one of the main trends in the first half of the month, which started with the resounding hack of a web server belonging to ACNUR (the United Nations Refugee Agency), leaking more than 200 credentials, including one belonging to President Barack Obama.
But from a pure hacktivism perspective, the elections in Russia were the main trigger, as they indirectly generated several cyber events: not only on election day, when three web sites (a watchdog and two independent news agencies) were taken down by DDoS attacks, but also in the days immediately following, when a botnet flooded Twitter with pro-Kremlin hashtags and an independent forum was taken down by a further DDoS attack. A trail of events which sets a very dangerous precedent.
Besides the ACNUR hack, Anonymous was also in the spotlight (a quite common occurrence this year) with some sparse attacks targeting several governments, in particular Brazil, as part of what is called #OpAmazonia.
Even if not confirmed, it looks like Anonymous Finland might somehow be related to the above-mentioned breach in Finland.
Other interesting events occurred in the first two weeks of December: the 0-day vulnerability affecting Adobe products, immediately exploited by hackers to carry out tailored phishing campaigns and, most of all, a targeted attack against a contractor, Lockheed Martin; but also another occurrence of DNS cache poisoning, targeting the Republic of Congo domains of Google, Microsoft, Samsung and others.
Last but not least, the controversial GPS spoofing which allegedly allowed Iran to capture a U.S. drone, even if GPS spoofing on its own does not completely solve the mystery of the capture.
Other victims of the month include Norwich Airport, Coca-Cola, and another law enforcement agency (clearusa.org), which is currently unavailable.
As usual, after the page break you will find all the references.
We have not yet completely assimilated the BEAST vulnerability, and here comes, from Bochum, Germany, another serious flaw involving encryption, or better, XML Encryption.
XML Encryption is a W3C standard widely used to securely transmit information in application-to-application web service connections. It was believed to be a robust standard mechanism to protect data exchanged by a wide class of applications using web services, deployed in different sectors: business, e-commerce, financial, healthcare, governmental and military applications. For the typical user, a common scenario is the encryption of credit card information for a payment within an XML-based purchase order.
Unfortunately, it looks like the mechanism is not as robust as it was supposed to be. The discovery comes from Juraj Somorovsky and Tibor Jager, two researchers at the Ruhr University of Bochum (RUB), who succeeded in cracking parts of the XML encryption process used in web services, making it possible to decrypt encrypted data. They demonstrated their work at the ACM Conference on Computer and Communications Security in Chicago this week.
As far as the attack technique is concerned, once again CBC (Cipher-Block Chaining) is implicated, since the newly discovered vulnerability, as in the case of the BEAST attack, is exploitable only in this encryption mode.
The attack strategy seems very similar to the one behind the BEAST attack: it decrypts data encrypted in AES-CBC by sending modified ciphertexts to the server and gathering information from the resulting error messages. It exploits a cryptographic weakness of CBC mode, in particular the fact that, by conveniently manipulating the IV, a ciphertext encrypted in CBC mode can be modified so that the resulting plaintext is related to the original plaintext in a predictable way (see the description of the BEAST attack for an example).
So, by choosing a given ciphertext, the attacker is able to recover the entire plaintext. The only prerequisite is the availability of what the researchers call an “oracle”: a side channel telling the attacker whether a given ciphertext contains a “correctly formed” plaintext, that is, a valid encoding (e.g. in UTF-8 or ASCII) of a message. Even worse, XML Signature is not able to mitigate the attack.
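The CBC malleability the attack builds on can be shown in a few lines. The sketch below uses a toy Feistel cipher (NOT AES, just a self-contained stand-in for a generic block cipher), but the CBC property it demonstrates is the real one: XORing a chosen difference into the IV XORs the same difference into the first decrypted block, with no knowledge of the key.

```python
import hashlib

BLOCK = 16

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def _f(half, key, rnd):
    # Round function of the toy Feistel cipher (a stand-in, not AES).
    return hashlib.sha256(key + bytes([rnd]) + half).digest()[:8]

def encrypt_block(block, key):
    l, r = block[:8], block[8:]
    for rnd in range(4):
        l, r = r, xor(l, _f(r, key, rnd))
    return l + r

def decrypt_block(block, key):
    l, r = block[:8], block[8:]
    for rnd in reversed(range(4)):
        l, r = xor(r, _f(l, key, rnd)), l
    return l + r

def cbc_encrypt(pt, key, iv):
    out, prev = b"", iv
    for i in range(0, len(pt), BLOCK):
        prev = encrypt_block(xor(pt[i:i + BLOCK], prev), key)
        out += prev
    return out

def cbc_decrypt(ct, key, iv):
    out, prev = b"", iv
    for i in range(0, len(ct), BLOCK):
        out += xor(decrypt_block(ct[i:i + BLOCK], key), prev)
        prev = ct[i:i + BLOCK]
    return out

key, iv = b"demo key", bytes(range(16))
pt = b"PAY 100 EUR NOW."                      # exactly one 16-byte block
ct = cbc_encrypt(pt, key, iv)

# The attacker knows (or guesses) pt and never touches the key:
# a difference XORed into the IV reappears in the first plaintext block.
evil_iv = xor(iv, xor(pt, b"PAY 999 EUR NOW."))
tampered = cbc_decrypt(ct, key, evil_iv)
print(tampered)                               # b'PAY 999 EUR NOW.'
```

The real attack goes further: it uses this relation together with the format oracle to recover unknown plaintext byte by byte, rather than to forge a known one.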
In their paper the authors showed that a moderately optimized implementation of the attack was able to decrypt 160 bytes of encrypted data within 10 seconds by issuing 2,137 queries to the web service. Moreover, the complexity of the attack grows only linearly with the ciphertext size: recovering a larger plaintext of 1,600 bytes takes about 100 seconds and 23,000 queries.
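The quoted figures can be sanity-checked against the linearity claim with trivial arithmetic (numbers taken from the paper as cited above):

```python
# Figures quoted above: 160 bytes -> 10 s / 2,137 queries;
# 1,600 bytes -> 100 s / ~23,000 queries.
small_bytes, small_secs, small_queries = 160, 10, 2137
large_bytes, large_secs, large_queries = 1600, 100, 23000

print(small_queries / small_bytes)   # ~13.4 oracle queries per plaintext byte
print(large_queries / large_bytes)   # ~14.4: roughly constant, i.e. linear cost
print(large_secs / small_secs)       # 10.0, matching the 10x size increase
```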
The proof of concept was performed against a web service based on the Apache Axis2 XML framework and verified on JBoss, but many other vendors are affected, which is why the two researchers reported the vulnerability to the W3C XML Encryption Working Group in February 2011. Affected vendors include the above-mentioned Apache Software Foundation (Apache Axis2) and Red Hat (JBoss), but also IBM and Microsoft.
Unfortunately, fixing the flaw will not be that easy: the only suitable way is to replace CBC mode with a symmetric cryptographic primitive providing both confidentiality and integrity, which means changing the XML Encryption standard!
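In practice, “a primitive providing confidentiality and integrity” means authenticated encryption: either a mode like AES-GCM, or the generic encrypt-then-MAC composition sketched below. The keystream here is a hypothetical SHA-256 counter construction, chosen only to keep the sketch dependency-free; the important part is the order of operations: the MAC over the ciphertext is verified first, and tampered input is rejected before decryption, so no format-error oracle ever exists.

```python
import hashlib, hmac

def keystream(key, nonce, n):
    # Hypothetical SHA-256 counter-mode keystream (stand-in for a real cipher).
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:n]

def seal(k_enc, k_mac, nonce, plaintext):
    ct = bytes(a ^ b for a, b in zip(plaintext, keystream(k_enc, nonce, len(plaintext))))
    tag = hmac.new(k_mac, nonce + ct, hashlib.sha256).digest()
    return ct + tag

def unseal(k_enc, k_mac, nonce, sealed):
    ct, tag = sealed[:-32], sealed[-32:]
    # Integrity first: tampered ciphertext never reaches the decryption step.
    if not hmac.compare_digest(tag, hmac.new(k_mac, nonce + ct, hashlib.sha256).digest()):
        raise ValueError("MAC check failed")
    return bytes(a ^ b for a, b in zip(ct, keystream(k_enc, nonce, len(ct))))

k_enc, k_mac, nonce = b"enc-key", b"mac-key", b"order-0001"
msg = b"<CardNumber>4111 1111</CardNumber>"
sealed = seal(k_enc, k_mac, nonce, msg)
assert unseal(k_enc, k_mac, nonce, sealed) == msg

rejected = False
try:
    unseal(k_enc, k_mac, nonce, bytes([sealed[0] ^ 1]) + sealed[1:])
except ValueError:
    rejected = True
print(rejected)   # True: the flipped bit is caught before any decryption
```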
Advanced Persistent Threats are probably the most remarkable events for information security in 2011, since they are redefining the infosec landscape from both a technology and a market perspective.
I consider the recent shopping spree in the SIEM arena by IBM and McAfee a sign of the times and a demonstration of this trend. This is not a coincidence: as a matter of fact, the only way to stop an APT before it reaches its goal (the organization's data) is accurate analysis and correlation of the data collected by security devices. An APT attack deploys different stages with different tactics, different techniques and different timeframes, which moreover affect different portions of the infrastructure. As a consequence, a holistic view and holistic information management are needed in order to correlate pieces of information spread across different parts of the network and collected by different, somewhat heterogeneous and apparently unrelated, security devices.
Consider for instance the typical cycle of an attack carried out by an APT:
Of course the picture does not take into consideration the user, who is the greatest vulnerability (but unfortunately a user does not generate logs, except in a verbal format not so easy for a SIEM to analyze). Moreover, the model should be multiplied by the number of victims, since it is “unlikely” that such an attack would be performed on a single user at a time.
In the end, however, it is clear that an APT affects different components of the information security infrastructure at different times with different threat vectors:
- Usually stage 1 of an APT attack involves a spear-phishing e-mail with an appealing subject and argument and a malicious payload in the form of an attachment or a link. In both cases the e-mail AV or antispam engine is impacted on the ingress stream (and is supposed to detect the attack; am I naive to suggest that a DNS lookup could have avoided attacks like this?). The impacted security device produces some logs (even if they are not straightforward to interpret when the malicious e-mail was not flagged as a possible threat, or was flagged with a low confidence score). In this stage of the attack, the interval between the receipt of the e-mail and its reading can range from a few minutes up to several hours.
- The next stage involves user interaction. Unfortunately there is no human firewall so far (it is something we are working on), only user education (a very rare gift). As a consequence the victim is lured into following the malicious link or clicking on the malicious attachment. In the first scenario the user is directed to a compromised (or purpose-built) web site, where he downloads and installs malware (or enters credentials which are then used to steal his identity, for instance for a remote access login). In the second scenario the user clicks on the attached file, which exploits a 0-day vulnerability to install a Remote Administration Tool (RAT). The interval between reading the malicious e-mail and installing the RAT is likely a matter of seconds. In any case, endpoint security tools may help to block surfing to the malicious site or, if they leverage behavioral analysis, to detect an anomalous pattern from an application (a 0-day is always a 0-day, and they are often released only after making reasonably sure they will not be detected by traditional AV). Hopefully, in both cases, some suspicious logs are generated by the endpoint.
- RAT control is the next stage: after installation, the malware uses the HTTP protocol to fetch commands from a remote C&C server. Of course the malicious traffic is forged so that it can hide inside legitimate traffic. In any case, the traffic passes through firewalls and NIDS at the perimeter (matching allowed rules), and both kinds of devices can be expected to produce related logs;
- Once the attacker is in full control, the compromised machine is used as a hop to reach other hosts (now he is inside) or to sweep the internal network looking for the target data. In this case a NIDS/anomaly detector should be able to detect the attack by monitoring, for instance, the number of attempted authentications or failed logins: that is how Lockheed Martin prevented an attack perpetrated by means of compromised RSA seeds, and also how, during the infamous breach, RSA itself detected the attack, using anomaly detection technology from NetWitness (acquired by EMC, RSA's parent company, immediately after the event).
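The failed-login detection logic mentioned in the last stage can be sketched in a few lines. The log format, window and threshold below are invented for the example; in a real deployment this logic would run on the SIEM's normalized event stream.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical normalized auth events: (timestamp, source host, outcome).
events = [("2011-12-01 10:00:0%d" % i, "10.0.0.7", "FAIL") for i in range(6)] + [
    ("2011-12-01 10:00:01", "10.0.0.9", "FAIL"),
    ("2011-12-01 10:05:00", "10.0.0.9", "FAIL"),
    ("2011-12-01 10:00:03", "10.0.0.9", "OK"),
]

def failed_login_spikes(events, window=timedelta(minutes=1), threshold=5):
    """Flag sources with >= threshold failed logins inside a sliding window."""
    fails = defaultdict(list)
    for ts, src, outcome in events:
        if outcome == "FAIL":
            fails[src].append(datetime.strptime(ts, "%Y-%m-%d %H:%M:%S"))
    alerts = set()
    for src, times in fails.items():
        times.sort()
        for i, start in enumerate(times):
            # Count failures falling within `window` of this one.
            if sum(1 for t in times[i:] if t - start <= window) >= threshold:
                alerts.add(src)
                break
    return alerts

print(failed_login_spikes(events))   # {'10.0.0.7'}
```

Host 10.0.0.7 bursts six failures in a few seconds and is flagged; the two scattered failures from 10.0.0.9 stay below the threshold, which is exactly the signal/noise separation a sweep detector needs.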
At this point it should be clear that this lethal blend of threats is pushing security firms to redefine their product strategies, since they face a double crucial challenge: to dramatically improve not only their 0-day detection ability, but also their capability to manage and correlate the data collected by their security solutions.
As far as 0-day detection is concerned, next-gen technologies will include processor-assisted endpoint security or a new class of network devices such as DNS firewalls (thanks to @nientenomi for reporting the article).
As far as data management and correlation are concerned, yes, of course a SIEM is a beautiful concept… until one needs to face the issue of correlation, which often means SIEM projects become useless because the correlation patterns are too complex and not straightforward. This is the reason why the leading vendors are rushing to include integrated SIEM technology in their product portfolios, in order to provide an out-of-the-box correlation engine optimized for their own products. The price to pay will probably be a segmentation and verticalization of the SIEM market, in which leading vendors have their own solutions (not so optimized for competitors' technologies) at the expense of generalist SIEM vendors.
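A minimal illustration of what an out-of-the-box correlation rule actually does: join two heterogeneous log sources on host and time window. The sources, field names and window below are invented for the sketch; a real engine would work on thousands of normalized events per second.

```python
from datetime import datetime, timedelta

# Two hypothetical normalized log sources (field names invented for the sketch).
mail_alerts = [  # e-mail gateway: suspicious attachment delivered to a host
    {"host": "pc-042", "time": datetime(2011, 12, 1, 9, 0)},
]
fw_events = [    # perimeter firewall: outbound HTTP to a rarely seen domain
    {"host": "pc-042", "time": datetime(2011, 12, 1, 9, 7)},
    {"host": "pc-099", "time": datetime(2011, 12, 1, 14, 0)},
]

def correlate(mail_alerts, fw_events, window=timedelta(minutes=30)):
    """Raise an incident when the same host appears in both sources within `window`."""
    incidents = []
    for m in mail_alerts:
        for f in fw_events:
            if f["host"] == m["host"] and timedelta(0) <= f["time"] - m["time"] <= window:
                incidents.append(m["host"])
    return incidents

print(correlate(mail_alerts, fw_events))   # ['pc-042']
```

Neither event is conclusive on its own; it is the cross-source join, the part each vendor optimizes for its own products, that turns two low-severity logs into one high-confidence incident.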
On the other hand, APTs are alive and kicking: they keep targeting US defense contractors (Raytheon is the latest) and are also learning to fly through the clouds. Moreover, they are well hidden, considering that, according to the Security Intelligence Report Volume 11 issued by Microsoft, less than one percent of exploits in the first half of 2011 were against zero-day vulnerabilities. That 1% makes the difference! And it is a big difference!