
Posts Tagged ‘Botnet’

16-28 February 2014 Cyber Attacks Timeline

Here is the list of the main cyber attacks that happened during the second half of February 2014 (Part I here). As you will soon discover, the activity in this second half of February shows a noticeable decrease, both in number and in size, compared with the previous months.

One of the few exceptions to this decreasing trend is the University of Maryland, which was the victim of a cyber attack that led to the compromise of more than 300,000 records.

Apparently even the hacktivists have slowed down their attacks, with few exceptions (the Syrian Electronic Army and some sporadic actions executed by hackers affiliated with Anonymous).

The cyber espionage front has been the most interesting, with a couple of remarkable events (a cyber attack against a French aerospace engine maker and the revelation of further details concerning the Iranian attack against a US Navy computer network).

As usual, if you want to have an idea of how fragile our data are in cyberspace, have a look at the timelines of the main cyber attacks in 2011, 2012 and now 2013 (regularly updated). You may also want to have a look at the Cyber Attack Statistics, and follow @paulsparrows on Twitter for the latest updates.

Also, feel free to submit remarkable incidents that in your opinion deserve to be included in the timelines (and charts).

16-28 Feb 2014 Cyber Attacks Timelines


A (Graphical) World of Botnets and Cyber Attacks

January 26, 2013

Update 3/12/2013: I should also mention the Deutsche Telekom Security Tachometer.

We live in a world made of botnets and cyber attacks! While I am typing these few words on my keyboard, other fingers somewhere else on the globe are moving quickly across their keys, firing streams of bits against their targets.

To thwart this malicious landscape and understand its evolving trends, more and more security companies and organizations collect data from their endpoint and network security devices spread all over the globe and send it to the cloud, where it is analyzed with big data algorithms. The purpose is to reduce the time between the release of a threat and the availability of an antidote. The same data can also be used to build spectacular maps that show the status of the Internet in real time, a quite impressive and worrisome spectacle! Here is a short list of resources:

HoneyMap

Probably the most impressive: the HoneyMap shows a real-time visualization of attacks detected by the Honeynet Project‘s sensors deployed around the world. The map shows “automated scans and attacks originating from infected end-user computers or hijacked server systems”. This also means that an “attack” on the HoneyMap is not necessarily conducted by a single malicious person but rather by a computer worm or other form of malicious program. Please notice that, as the creators of the project declare, many red dots mean there are many machines attacking their honeypots, but this does not necessarily imply that those countries are “very active in the cyberwar”.


Akamai Real-Time Web Monitor

Akamai monitors global Internet conditions around the clock. With this real-time data the company identifies the global regions with the greatest attack traffic, measuring attack traffic in real time across the Internet through its diverse network deployments. Data are collected on the number of attempted connections, the source IP address, the destination IP address, and the source and destination ports, all in real time. The packets captured generally come from automated scanning trojans and worms looking to infect new computers by scanning randomly generated IP addresses. Values are measured in attacks per 24 hours (attacks/24hrs).


Securelist Statistics (Kaspersky Lab)

The information collected by the Kaspersky Security Network is shown in the Securelist Statistics section. In the corresponding navigable map, the user can select Local Infections, Online Threats, Network Attacks and Vulnerabilities, in Map, Diagram or Rating format, over a time scale of 24 hours, one week or one month.


Trend Micro Global Botnet Map

Trend Micro continuously monitors malicious network activities to identify command-and-control (C&C) servers, making the ability to rapidly identify and correlate bot activity critical. The real-time map indicates the locations of C&C servers and victimized computers that have been discovered in the previous six hours.


Shadowserver

The Shadowserver Foundation, run by volunteer security professionals, gathers intelligence from the Internet via honeyclients, honeypots, and IDS/IPS systems. The maps are made by converting the IP addresses of the aggressor, the command and control server and the target of the DDoS attack into coordinates, and placing those points on a map. The maps are updated once a day and are available for DDoS activity and botnet C&Cs.
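
The pipeline lends itself to a compact illustration. Here is a minimal Python sketch of the IP-to-coordinates step (the IP addresses are made up, and the geoip2 package with a locally downloaded GeoLite2-City.mmdb database is assumed; Shadowserver's actual tooling is not public):

```python
# Minimal sketch of the IP -> coordinates step behind such maps.
# Assumptions: the geoip2 package is installed, a GeoLite2-City.mmdb
# database has been downloaded locally, and the IPs below are made up.
import geoip2.database

attack_ips = ["198.51.100.7", "203.0.113.80"]  # aggressor / C&C / DDoS target

with geoip2.database.Reader("GeoLite2-City.mmdb") as reader:
    for ip in attack_ips:
        loc = reader.city(ip).location  # may raise AddressNotFoundError for unknown IPs
        print(f"{ip} -> lat {loc.latitude}, lon {loc.longitude}")  # one dot on the map
```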


Arbor’s Threat Level Analysis System (ATLAS)

Through its relationships with several worldwide service providers and global network operators, Arbor provides insight into global DDoS attack activity, Internet security and traffic trends. The Global Activity Map shows data in terms of scan sources, attack sources, phishing websites, botnet IRC servers and fast-flux bots.


BotClouds Still Hard to Detect (And Mitigate)

October 31, 2012

This morning, during my usual virtual promenade through my feeds, I came across a really interesting post from Stratsec, a subsidiary of BAE Systems.

The post unveils the details of an unprecedented experiment aimed at verifying how easy and cheap it is to set up a botCloud, and how hard it is for cloud providers to detect one (and consequently advise the victims).

As the name suggests, a botCloud is defined as a group of cloud instances that are commanded and controlled by a malicious entity to initiate cyber attacks.

The research was carried out by subscribing to five common cloud providers and setting up 10 cloud instances targeting a victim host, protected by traditional technologies such as an IDS, which was flooded with several common attack techniques (malformed traffic, non-RFC-compliant packets, port scanning, malware traffic, denial of service, brute force, shellcode and web application attacks) in four scenarios:

  1. Victim host placed in a typical network scenario with a public IP, a firewall and an IDS;
  2. Victim host set up as a cloud instance inside the same cloud service provider as the attackers;
  3. Victim host set up as a cloud instance inside a different cloud service provider from the attackers;
  4. Same scenario as test 1, but with a longer duration (48 hours) to verify the impact of duration on the experiment.

The findings are not so encouraging and confirm that the security posture of the cloud providers needs to be improved:

  • No connection reset or connection termination on the outbound or inbound network traffic was observed;
  • No connection reset or termination against the internal malicious traffic was observed;
  • No traffic was throttled or rate limited;
  • No warning emails, alerts, or phone calls were generated by the Cloud providers, with no temporary or permanent account suspensions;
  • Only one cloud provider blocked inbound and outbound traffic on SSH, FTP and SMTP; however, this limitation was bypassed by running the above services on non-default ports (see the sketch below).
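
This evasion works because the blocking keyed on port numbers rather than on the traffic itself. Here is a minimal Python sketch (hypothetical host and port, not the researchers' tooling) of how a service betrays its protocol through its banner regardless of the port it listens on:

```python
# Minimal sketch: identify a service by its banner, not by its port.
# The host and port below are hypothetical placeholders.
import socket

def grab_banner(host: str, port: int, timeout: float = 3.0) -> str:
    """Connect and return whatever the service announces first."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        return s.recv(128).decode(errors="replace")

banner = grab_banner("203.0.113.10", 2222)  # SSH moved off its default port 22
if banner.startswith("SSH-"):               # RFC 4253 identification string
    print("SSH detected on a non-default port")
```

A provider that wanted to enforce such a block would therefore need protocol detection (or throttling) on the traffic itself, not a simple port filter.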

The other face of the coin is how relatively easy it is to set up an army of cloud-hidden zombie machines that can leverage the advantages of a cloud infrastructure. In fact, a botCloud:

  • Is relatively easy to set up and use;
  • Needs significantly less time to build;
  • Is highly reliable and scalable;
  • Is more effective;
  • Has a low cost.

Cloud service providers (and their customers) are advised…

Even Botnets Go on Holidays!

September 30, 2012

The attack model based on botnet-generated distributed denials of service is opportunistic. The botmaster selects a target, gathers as many resources as possible from his army of zombie machines and, once he realizes he has achieved enough firepower, simply pushes the attack button. After this, the target is inevitably flooded by packets generated by the bots, while the unaware owners of the zombie machines carry on their normal work or leisure activities with their infected computers.

In theory there is only one weak point in this opportunistic model: the botmaster controls the compromised machines but cannot control their availability. Simply put, if a user has not switched on his computer when the botmaster needs that machine for a DDoS attack, the machine is not available, and there is not much to be done about it.

Of course this is an essentially theoretical weak point since, quoting a famous phrase, we could say that on the botnet empire the sun never sets: botnets are so huge and widespread that they are in practice always available (they span different continents and different time zones) and constantly grant enough firepower.
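
A toy Python model (the numbers are invented for illustration) makes the point: with bots spread uniformly across time zones, the diurnal on/off cycle of individual users averages out and the available firepower stays roughly constant around the clock.

```python
# Toy model (invented numbers): why a botnet spanning all time zones never sleeps.
import math

def online_fraction(local_hour: float) -> float:
    """Assume a machine is most likely powered on around 20:00 local time."""
    return 0.3 + 0.25 * math.cos((local_hour - 20) * math.pi / 12)

bots_per_zone = {utc_offset: 10_000 for utc_offset in range(-11, 13)}  # uniform spread

for utc_hour in (0, 6, 12, 18):
    online = sum(n * online_fraction((utc_hour + tz) % 24)
                 for tz, n in bots_per_zone.items())
    print(f"{utc_hour:02d}:00 UTC -> ~{online:,.0f} bots online")  # roughly flat all day
```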

But what happens if some global event or festivity causes a larger number of computers to be turned off? Quite simple, apparently: the firepower of the botnet is hugely affected and the number of DDoS attacks drops. This is one of the paradoxical conclusions that can be drawn from a paper presented a few days ago at the Virus Bulletin 2012 International Conference by Cloudflare (probably the main CDN company in the world), about which an interesting post by Naked Security has provided additional details.

Apparently the number of IP addresses used to execute Layer 7 DDoS attacks in 2012 reached its minimum values precisely on those days of events or vacations that kept users away from their personal computers: days such as Mardi Gras; Earth Day (the 22nd of April, when one billion people around the world chose to keep their computers turned off to reduce energy consumption); the Memorial Day weekend (the 29th of May); and the 28th of June, just before US Independence Day celebrations.

Quoting the Naked Security article, the conclusion is quite amusing: if everyone turned off their computers each night, it might not just be good for the environment because of the lower levels of energy being consumed… it could also mean a reduction in botnet attacks.

Value Added Distributors of Botnets

September 22, 2012

Cyber crime, and in particular botmasters, never cease to amaze. If you were (not so much) surprised to discover the compromised supply chain behind the Nitol botnet (which allowed Chinese manufacturers to sell compromised computers pre-installed with the botnet), you’d better have a look at the ZeroAccess botnet, which has recently been analyzed by Sophos.

ZeroAccess has some impressive “state-of-the-art” features such as:

  • Pure user-mode operation on 32-bit Windows platforms;
  • A peer-to-peer protocol for communicating with other members of the botnet to receive updates and download plugins;
  • A modular architecture (via plugins) that allows the botnet owners to generate revenue in different ways: click fraud or Bitcoin mining (revenue that the security firm estimates at USD 100,000 per day with the botnet at full power);
  • A compromised population of over 9 million infected PCs.

Really impressive features indeed, even if I must confess they were not the ones that impressed me most.

One of the challenges for a “successful” botnet is the capability to spread as quickly as possible, infecting and enrolling in the botnet the largest number of hosts in the shortest possible time.

Cyber criminals are becoming increasingly aware of this and have hence developed a lucrative pay-per-install affiliate scheme to distribute the dropper. This affiliate scheme (I like to call it a partnership program) provides well-paid revenues for affiliates who are able to execute successful installations of the dropper. This is exactly what happens in the case of ZeroAccess, and it is the reason for its large-scale extent.

The scheme is typically advertised on underground forums and, in the case of ZeroAccess, the revenues are differentiated based on the country (US victims are probably the most lucrative, since the US gets paid the most, followed by the UK, Canada and Australia) and on the access rights of the infected user (admin gets paid more).

After the discovery of compromised supply chains and of programs that provide revenues for botnet distributors, do you still have doubts that cyber crime is really becoming an industry?

The Botnet Factory

September 16, 2012

Probably there’s something more to the next step of botnets besides the BlackHole 2.0 and Tor C&C developments mentioned in my previous post. There I listed the takedown of the Nitol botnet by Microsoft as one of the most important infosec events of the last week, but I forgot one important aspect related to this event: the malware supply chain.

As a matter of fact, in the case of Nitol, Microsoft discovered a real botnet factory, that is, a compromised supply chain based in China that allowed new computers (to be sold to unaware consumers) to come pre-installed with malware embedded in counterfeit versions of a Microsoft OS.

A step forward for the cyber crime industry, with the advantage for cyber crooks of setting up an “army” of zombie machines without resorting to time-consuming drive-by attacks or spam campaigns. I used the term army since the main feature of Nitol is the capability to execute on-demand DDoS attacks (besides offering cyber criminals a backdoor for taking control of the infected machines).

Unfortunately, what’s especially disturbing, according to Microsoft, is that the counterfeit software embedded with malware could have infiltrated the chain at any point.

If you still have doubts that cyber crime has become a real industry, there’s no better example to demonstrate it. Moreover, I cannot help but think that, once upon a time, new computers came with antivirus software pre-installed; today they are sold directly with malware.

The Next Step of Botnets

September 15, 2012

A BlackHole (Exploit Kit) absorbing an Onion (Ring), the future of Botnets?

This information security week has offered many interesting points: the brand new CRIME attack against SSL/TLS, the release of BlackHole Exploit Kit 2.0, which promises new stealth vectors for drive-by download infections, the takedown of the emerging Nitol botnet by Microsoft and, last but not least, the first (?) known example of a new generation of C&C servers leveraging the anonymization granted by the Tor service.

The latter is, in my opinion, the news with the most important consequences for the information security community, since it delineates the next step of botnets’ evolution after the common, consolidated C&C communication schema and its natural evolution, Peer-to-Peer (P2P) communication.

The first (I wonder if it is really the first) discovery of a botnet command server hidden in Tor, using the IRC protocol to communicate with its zombies, was announced in a blog post by G-Data. Of course the advantages of such a communication schema are quite simple: the botnet may use the anonymity granted by the Deep Web to prevent the identification (and likely takedown) of the server, and the encryption of the Tor protocol to make traffic identification by traditional layers of defense harder. Two advantages that greatly outweigh Tor’s latency, which represents the weakness of this communication schema.
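
To make the schema concrete, here is a minimal Python sketch of IRC-over-Tor (the .onion address, nickname and channel are hypothetical placeholders; a local Tor client listening on its default SOCKS5 port and the PySocks library are assumed; this illustrates the communication schema, not G-Data’s reconstruction of the actual bot):

```python
# Minimal sketch of the IRC-over-Tor communication schema described above.
# Assumptions: a local Tor client (SOCKS5 on 127.0.0.1:9050) and PySocks
# ("pip install PySocks"); the .onion host, nick and channel are placeholders.
import socks

s = socks.socksocket()
s.set_proxy(socks.SOCKS5, "127.0.0.1", 9050)  # route everything through Tor
s.connect(("exampleonionaddr.onion", 6667))   # hidden-service IRC port

# Plain IRC registration: nothing exotic at the protocol layer; the anonymity
# and the encryption come entirely from the Tor transport underneath.
s.sendall(b"NICK bot123\r\nUSER bot123 0 * :bot123\r\n")
s.sendall(b"JOIN #commands\r\n")
print(s.recv(512).decode(errors="replace"))   # server greeting / instructions
s.close()
```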

Maybe it was only a matter of time; in any case it is not a coincidence that in the same weeks researchers discovered BlackHole 2.0 and the first (maybe) C&C infrastructure hidden inside the Deep Web: cyber criminals are continuously developing increasingly sophisticated methods to elude law enforcement agencies and to evade the security controls of the traditional bastions, and botnets are confirming themselves more than ever to be the modern biblical plague of the Web…

And even if every now and then the good guys are able to obtain a victory (such as the Nitol takedown), the war is far from over.

Botnets, ISPs, and The Role of The Cloud

One interesting comment on my previous post on botnets gave me a cue for another consideration concerning the role of the cloud in the fight against botnets.

The fact that ISPs are evaluating an Anti-Botnet Conduct Code means they are feeling responsible for what resides inside (and leaves) their networks, and hence are expected to take technical, organizational and educational countermeasures.

Anyway, in order to be effective, anti-bot controls should be enforced inside the customers’ networks, or at least before any source NAT is performed; otherwise the IP addresses of the infected machines would be hidden, making it impossible to detect and block them directly. A huge task for an ISP, unless it were able to centralize the security enforcement point where the traffic is monitored and compromised endpoints belonging to a botnet are detected.

In a few words, I believe ISPs will soon offer advanced anti-malware (read anti-bot) services in the cloud by routing (or better, switching) the customers’ traffic through their data centers, where it is checked, notifying the customers in real time about the presence of bots inside their networks. You may think of the same approach used for cloud URL filtering services, with the difference that in this scenario the clients should arrive at the ISP’s data center with their original IP address, or a statically NATed address, so that it is always possible to recognize the original source. Another difference is that in this scenario the purpose is not only to protect the customers’ networks from the external world but also (and maybe most of all) to protect the external world from the customers’ (dirty) networks.
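
A toy Python example (made-up addresses) of the source NAT problem mentioned above: after overload NAT, flows from distinct infected hosts collapse into a single public IP and can no longer be told apart by an external observer.

```python
# Toy example (made-up addresses): why detection must happen before source NAT.
CC_SERVER = "198.51.100.7"   # a known-bad destination
PUBLIC_IP = "203.0.113.1"    # the whole customer site behind one NATed address

# (internal_src, dst, dst_port) as seen inside the customer network
flows = [
    ("192.168.1.23", "198.51.100.7", 6667),  # bot #1 talking to the C&C
    ("192.168.1.57", "198.51.100.7", 6667),  # bot #2, same C&C
    ("192.168.1.99", "203.0.113.80", 443),   # clean host, normal browsing
]

pre_nat  = {src for src, dst, _ in flows if dst == CC_SERVER}
post_nat = {PUBLIC_IP for src, dst, _ in flows if dst == CC_SERVER}

print("infected hosts visible before NAT:", pre_nat)   # two distinct bots
print("infected hosts visible after NAT: ", post_nat)  # one opaque address
```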

Another contribution of the cloud against Botnets that I forgot to mention in the original post.

I, BOT (Coming To A C&C Server Near You)

May 22, 2012

A few days ago I discovered that the city I live in (Rome) ranks number two in the world for the number of bot infections, at least according to the Symantec Internet Security Threat Report, Edition XVII.

Of course reports must be taken with caution, but it is undoubted that bot infections are becoming a huge problem for the information security community (a modern biblical plague), so huge as to deserve the attention of the Federal Communications Commission. As a matter of fact, in March 2012 the FCC, working with communications companies including Verizon, Cox and Comcast, passed a voluntary code that delineates the steps ISPs must take to combat botnets. As you probably know, botnets may be used by cyber crooks to make money for different criminal purposes, ranging from information theft to the execution of DDoS attacks: have a look at this interview with a botnet operator to get an idea (and to discover that botnets are also used to counterfeit virtual currency).

Such a plague is pushing a major change to the traditional security paradigm, a change that can be summarized in a few words: if yesterday the refrain for system administrators was “Beware of what enters your network” (so all the security warfare was focused on checking the ingress traffic), today it is becoming “Beware of what leaves your network”.

This is nothing else than a consequence of the fact that traditional endpoint technologies are proving not to be so effective against bots, so a new approach is needed, one which aims to control the egress traffic generated by compromised endpoints and leaving the organization. The effectiveness of traditional endpoint technologies is not optimal since new variants (capable of evading antivirus controls) come out much faster than the related signatures developed by vendors: have a look at the average antivirus detection rate against Zeus (the god of bots), and you will probably be disappointed to notice that it is stable at a poor 38%. On the other hand, recognizing the communication patterns at the perimeter is a more profitable strategy, since the different variants generally do not change the communication protocols with the C&C server very deeply (unless a P2P protocol is used, see below).

The strategy to mitigate botnets relies on the fact that each botnet has (in theory) a single point of failure: the C&C server, to which cyber hunters and law enforcement agencies address their takeover attempts, either to take it down definitively or to turn it into a sinkhole for studying the exact morphology and extension of the infection. Depending on the botnet configuration, each infected endpoint polls the C&C server for new instructions at a given time interval, and that is the point of the process where the good guys may act: detecting (and blocking) that traffic allows them to identify infected machines (and my experience indicates that too often those machines are equipped with an up-to-date yet blind antivirus).
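
As an illustration of that detection opportunity, here is a minimal Python sketch (the flow-log data is hypothetical) that flags hosts whose outbound connections to a given destination recur at suspiciously regular intervals, the typical signature of C&C polling:

```python
# Minimal sketch (hypothetical flow data): C&C polling shows up in egress logs
# as outbound connections with suspiciously regular inter-arrival times.
from statistics import mean, pstdev

# (timestamp_seconds, internal_src, external_dst) from egress flow logs
flows = [(t, "10.0.0.42", "198.51.100.7") for t in range(0, 3600, 300)]         # every 5 min
flows += [(t, "10.0.0.17", "203.0.113.80") for t in (12, 95, 410, 1130, 2007)]  # human browsing

by_pair = {}
for t, src, dst in flows:
    by_pair.setdefault((src, dst), []).append(t)

for (src, dst), times in by_pair.items():
    gaps = [b - a for a, b in zip(times, times[1:])]
    if len(gaps) >= 3 and pstdev(gaps) < 0.1 * mean(gaps):  # near-constant period
        print(f"possible C&C polling: {src} -> {dst} every ~{mean(gaps):.0f}s")
```

In practice the same idea can be applied to proxy and DNS logs, with whitelisting for legitimate periodic traffic such as software updates.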

For the record, the C&C server is only a theoretical single point of failure, since C&C servers are generally highly volatile and dynamic, so it is not so easy to intercept and block them (the only way to take down a botnet); hence, in my opinion, it would be more correct to say that a botnet has many single points of failure (an information security oxymoron!).

As if that were not enough, in order to make life harder for the good guys, the next generation of botnets is deploying P2P protocols to decentralize the C&C function and make takedowns even tougher.

But the good guys have a further weapon in this cat-and-mouse game: cloud intelligence. Even if I am not a cloud enthusiast, I must confess that this technology is proving to be a crucial element in thwarting botnets, since it allows real-time information about new threats to be collected and the “intelligence” needed to classify them dynamically (and quickly) to be centralized. Real-time information is collected directly from the enforcement points placed at the perimeter, which analyze the egress traffic of organizations containing compromised machines. Of course, after successful analysis and classification, the new patterns may be shared among the enforcement points all over the five continents in order to provide real-time detection of (and hence protection against) new threats. This approach is clearly much more efficient than endpoint-based enforcement (which would need to share the information among a far larger number of devices), provided the enforcement points are positioned adequately, that is, they are capable of monitoring all the egress traffic.

The combination of egress traffic analysis and cloud intelligence is a good starting point for mitigating the effects of botnets (it is certainly necessary for identifying infected machines) but, as usual, do not forget that the user is the first barrier, so a good level of education is a key factor, together with consolidated processes and procedures to handle the infections.
