
Posts Tagged ‘Cloud computing’

BotClouds Still Hard to Detect (And Mitigate)

October 31, 2012

This morning, during my usual virtual promenade through my feeds, I came across a really interesting post from Stratsec, a subsidiary of BAE Systems.

The post unveils the details of an unprecedented experiment aimed at verifying how easy and cheap it is to set up a botCloud, and how hard it is for Cloud providers to detect one (and consequently notify the victims).

As the name suggests, a botCloud is a group of Cloud instances commanded and controlled by a malicious entity to launch cyber-attacks.

The research was carried out by subscribing to five common Cloud providers and setting up 10 Cloud instances targeting a victim host. The victim, protected by traditional technologies such as an IDS, was flooded with several common attack techniques (malformed traffic, non-RFC-compliant packets, port scanning, malware traffic, denial of service, brute force, shellcode and web application attacks) in four scenarios:

  1. Victim host placed in a typical network scenario with a public IP, firewall and IDS;
  2. Victim host set up as a cloud instance inside the same cloud service provider as the attackers;
  3. Victim host set up as a cloud instance inside a different cloud service provider from the attackers;
  4. Same scenario as test 1 but with a longer duration (48 hours), to verify the impact of duration on the experiment.

The findings are not encouraging, and confirm that the security posture of the cloud providers needs to be improved (a sketch of one missing control follows the list):

  • No connection reset or termination of the outbound or inbound network traffic was observed;
  • No connection reset or termination of the internal malicious traffic was observed;
  • No traffic was throttled or rate-limited;
  • No warning emails, alerts, or phone calls were generated by the Cloud providers, and no temporary or permanent account suspensions occurred;
  • Only one Cloud provider blocked inbound and outbound traffic on SSH, FTP and SMTP; however, this limitation was bypassed by running the above services on non-default ports.
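For illustration only, here is a minimal sketch, in Python, of the kind of egress anomaly check the tested providers apparently lacked; the FlowRecord structure and both thresholds are hypothetical assumptions of mine, not any real provider's API:

    # Sketch of an outbound-traffic check a Cloud provider could run against
    # per-instance flow counters. All names and thresholds are hypothetical.
    from collections import defaultdict
    from dataclasses import dataclass

    @dataclass
    class FlowRecord:
        instance_id: str   # which cloud instance emitted the traffic
        dst_ip: str        # destination of the outbound connection
        packets: int       # packets seen in this aggregation window

    MAX_DISTINCT_DSTS = 500    # scan-like behavior: too many distinct hosts
    MAX_PACKETS = 1_000_000    # flood-like behavior: too many packets

    def flag_suspect_instances(flows):
        dsts, pkts = defaultdict(set), defaultdict(int)
        for f in flows:
            dsts[f.instance_id].add(f.dst_ip)
            pkts[f.instance_id] += f.packets
        return {i for i in dsts
                if len(dsts[i]) > MAX_DISTINCT_DSTS or pkts[i] > MAX_PACKETS}

Even a crude threshold of this kind would likely have flagged the port scanning and denial-of-service traffic generated during the experiment.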

The other side of the coin is the moderate ease with which one can set up an army of cloud-hidden zombie machines that leverage the advantages of a Cloud infrastructure. In fact, a botCloud:

  • Is relatively easy to set up and use;
  • Needs significantly less time to build;
  • Is highly reliable and scalable;
  • Is more effective;
  • Has a low cost.

Cloud Service Providers (and their customers) are advised…


Botnets, ISPs, and The Role of The Cloud

One interesting comment on my previous post on botnets gave me a cue for another consideration concerning the role of the cloud in the fight against botnets.

The fact that ISPs are evaluating an Anti-Botnet Conduct Code means they are feeling responsible for what resides inside (and leaves) their networks, and hence are supposed to take technical, organizational and educational countermeasures.

Anyway, in order to be effective, anti-bot controls should be enforced inside the customers' networks, or at least before any source NAT is performed; otherwise the IP addresses of the infected machines would be hidden, making it impossible to detect and block them directly. This is a huge task for an ISP, unless one were able to centralize the security enforcement point where traffic is monitored and compromised endpoints belonging to a botnet are detected.

Said in a few words, I believe that ISPs will soon offer advanced anti-malware (read: anti-bot) services in the cloud, by routing (or better, switching) the customers' traffic to their data centers, where it is checked and the customers are notified in real time about the presence of bots inside their networks. Think of the same approach used for cloud-based URL filtering services, with the difference that in this scenario the clients should arrive at the ISP's data center with their original IP address, or a statically NATed address, so that it is always possible to recognize the original source. Another difference is that in this scenario the purpose is not only to protect the customers' networks from the external world, but also (and maybe most of all) to protect the external world from the customers' (dirty) networks.
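Purely as an illustration, here is a minimal sketch, in Python, of the kind of check such an enforcement point could perform; the indicator list, the flow format and the customer lookup are hypothetical examples of mine, not any ISP's actual system:

    # Flows arrive at the ISP's data center with the customer's original (or
    # statically NATed) source IP, are matched against known command-and-control
    # indicators, and the owning customer is notified. Illustrative only.
    KNOWN_CC_IPS = {"198.51.100.23", "203.0.113.77"}   # example addresses

    def notify(customer, src_ip, dst_ip):
        print(f"[ALERT] {customer}: host {src_ip} contacted suspected C&C {dst_ip}")

    def check_flow(src_ip, dst_ip, customer_of):
        # customer_of maps a source IP to the subscribing customer; this is
        # possible only because the source address is not hidden behind a
        # shared NAT, which is exactly the point made above.
        if dst_ip in KNOWN_CC_IPS:
            notify(customer_of(src_ip), src_ip, dst_ip)

    # Example: a statically NATed address still identifies the infected endpoint.
    check_flow("192.0.2.10", "198.51.100.23", customer_of=lambda ip: "ACME Corp")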

This is another contribution of the cloud to the fight against botnets that I forgot to mention in the original post.

Drones used as Proxies to get around ISP blocking and law enforcement: Predators to add server payload?

Cross Posted from TheAviationist.

Almost simultaneously with the breaking news that a judge in New Zealand’s High Court has declared the order used to seize Kim Dotcom’s assets “null and void”, writing another page in the endless MegaUpload saga, The Pirate Bay, one of the world’s largest BitTorrent sites, made another clamorous announcement. Tired of countering the blocking attempts that forced it, last month, to switch its top-level domain, possibly to avoid seizure by U.S. authorities, and, in October 2011, to set up a new domain to get around ISP blocking in Belgium, the infamous BitTorrent site is considering turning GPS-controlled aircraft drones into proxies, in order to avoid Law Enforcement controls (and censorship) and hence evade the authorities who are looking to shut the site down.

A Predator drone carries a few servers…as tin cans would trail a newly married couple’s car

The drones, controlled by GPS and equipped with cheap radio equipment and small computers (such as the Raspberry Pi), would act as proxies redirecting users’ traffic to a “secret location”. An unprecedented form of (literally) “Cloud Computing”, or better said “Computing in the Clouds”, capable, thanks to modern radio transmitters, of transferring more than 100 Mbps at over 50 kilometers, more than enough for a proxy system.
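To give an idea of how modest the computing requirements actually are, here is a minimal sketch, in Python, of a TCP relay of the kind a Raspberry-Pi-class computer could run; the addresses are placeholders of mine, not anything taken from the announcement:

    # A bare-bones TCP relay: the airborne node only shuttles bytes between
    # the client and a hidden upstream server, which stays elsewhere.
    import socket
    import threading

    LISTEN_ADDR = ("0.0.0.0", 8080)      # where the "drone" accepts clients
    UPSTREAM_ADDR = ("192.0.2.1", 80)    # the hidden "secret location"

    def pipe(src, dst):
        try:
            while (chunk := src.recv(4096)):
                dst.sendall(chunk)
        except OSError:
            pass
        finally:
            src.close()
            dst.close()

    def serve():
        srv = socket.socket()
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(LISTEN_ADDR)
        srv.listen()
        while True:
            client, _ = srv.accept()
            upstream = socket.create_connection(UPSTREAM_ADDR)
            # Relay both directions concurrently.
            threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
            threading.Thread(target=pipe, args=(upstream, client), daemon=True).start()

    if __name__ == "__main__":
        serve()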

This is essentially what MrSpock, one of the site’s administrators, stated in a Sunday blog post (apparently unavailable at the moment). Curiously, the drones are called “Low Orbit Server Stations”, a name not surprisingly very similar to the “Low Orbit Ion Cannon”, the DDoS weapon used by the Anonymous collective, and capable of evoking very familiar hacktivism echoes.

Actually, this is not the first time that hackers have tried to use airborne communication to circumvent Law Enforcement controls. At the beginning of the year, a group of hackers unveiled their project to take the internet beyond the reach of censors by putting their own communication satellites into orbit.

What raised some doubts (at first glance this announcement looks like an anticipated April Fools’ joke) is not the use of Low Orbit Server Stations, but the assumption that moving into an airspace would be enough to prevent Law Enforcement controls (and reactions).

Drones are subject to specific rules and restrictions and can only fly along reserved corridors to deconflict them from civilian and military air traffic. And they have to land every now and then, unless someone thinks these pirate robots can be air-to-air refueled.

As a commenter on The Hacker News correctly pointed out: “There seems to be a lot of misunderstanding about who “owns” the airspace of a given country“: definitely a drone flying too high would be classified as a threat and forcibly removed by an air force, a drone tethered to the ground would be subject to local zoning laws, while a drone broadcasting from an “intermediate” height would probably violate a number of existing laws and be forced to shut down.

In the end, it is better to turn back to “Ground Computing” as opposed to “Cloud Drones”. As a matter of fact, “it’s probably a lot easier to find a friendly government and host a normal server in that country“.

If you want to have an idea of how fragile our data are in cyberspace, have a look at the timelines of the main Cyber Attacks in 2011 and 2012 (regularly updated) at hackmageddon.com. And follow the author of this article @pausparrows on Twitter for the latest updates.

Attacks Raining Down from the Clouds

November 22, 2011

Update November 24: New EU directive to feature cloud ‘bridge’. The Binding Safe Processor Rules (BSPR) will ask cloud service providers to prove their security and agree to become legally liable for any data offences.

In my humble opinion there is a strange misconception regarding cloud security. For sure, cloud security is one of the main trends of 2011, a trend likely destined to be confirmed during 2012 in parallel with the growing diffusion of cloud-based services. Nevertheless, I cannot help but notice that when talking about cloud security, attention is focused solely on attacks towards cloud resources. Although this is an important side of the problem, it is not the only one.

If you were in a cybercrook’s shoes, eager to spread havoc on the Internet (unfortunately this hobby seems to be very common in recent times), would you choose static, discrete resources to carry out your attacks, or would you rather prefer dynamic, continuous, always-on and practically unlimited resources to reach your malicious goals?

An unlimited cyber-arsenal ready to fire at a simple snap of your fingers? The answer seems pretty obvious!

Swap your perspective, move to the other side of the cloud, and you will discover that security in the cloud is a multidimensional issue, which embraces legal and technological aspects: not only for cloud service providers but also for cloud service subscribers eager to move their platforms, infrastructures and applications there.

In fact, if a cloud service provider must grant the needed security to all of its customers (but what does the adjective “needed” mean if there is no related Service Level Agreement in the contract?) in terms of (logical) separation, analogously cloud service subscribers must ensure that their applications do not offer welcoming doors to cybercrooks because of vulnerabilities due to weak patching or code flaws.

In this scenario, in what way are the two parties responsible to each other? Simply said, could a cloud service provider be charged in case an attacker is able to illegitimately enter the cloud and carry out attacks exploiting infrastructure vulnerabilities and leveraging the resources of the other cloud service subscribers? Or could an organization be charged in case an attacker, exploiting an application vulnerability, is capable of (once again) illegitimately entering the cloud and using its resources to carry out malicious attacks, eventually leveraging (and compromising) resources from other customers as well? And again, in this latter case, could the cloud service provider be somehow responsible, since it did not perform enough controls or was not able to detect the malicious activity coming from its resources? And how should it behave in case of events such as seizures?

Unfortunately, it looks like these questions are still waiting for a resolutive answer from Cloud Service Providers. As far as I know there are no clauses covering these kinds of events in cloud service contracts, creating a dangerous gap between technology and regulations; on the other hand, several examples show that similar events are not so far from reality:

Is it a coincidence that today TOR turned to Amazon’s EC2 cloud service to make it easier for volunteers to donate bandwidth to the anonymity network (and, according to Imperva, to make it easier to create more places, and better places, to hide)?

I do believe that the cloud security perspective will need to be moved to the other side of the cloud during 2012.

Consumerization Of Warfare 2.0

June 21, 2011

It looks like the consumerization of warfare is unstoppable and getting more and more mobile. After our first post of June the 16th, today I stumbled upon a couple of articles indicating the growing military interest in consumer technologies.

Network World reports that the National Security Agency is evaluating the use of COTS (Commercial Off-The-Shelf) products for military purposes and is testing several different commercially available smartphones and tablets, properly hardened and secured. The final goal is to have four main devices, plus a couple of infrastructure support services. Meanwhile, trying to anticipate the NSA certification process, the U.S. Marines are willing to verify the benefits of a military use of smartphones and have consequently issued a Request For Information for trusted handheld platforms.

In both cases, the new technologies (smartphones and tablets) are preferred since they are able to provide, in a small size and weight, the capability to rapidly access information in different domains (e.g., internet, intranet, secret), geolocation capabilities which are useful in situational awareness contexts, and, last but not least, the capability to connect with different media (e.g., personal area network [PAN], wireless local area network [WLAN], wide area network [WAN]).

Nevertheless, in a certain manner, the two approaches, albeit aiming at the same objective, are slightly different. The NSA is evaluating the possibility of hardening COTS devices in order to make them suitable for military use; but since this process of hardening, certification and accreditation may take up to a couple of years, which is typically the life cycle of a commercial smartphone or tablet (and even that sounds quite optimistic, since one year is an eternity for this kind of device), the RFI issued by the Marine Corps solicits system architectures and business partnerships that facilitate low-cost and high-assurance handhelds, where high-assurance means at least meeting the Common Criteria Evaluation Assurance Level (EAL) 5+ or above. From this point of view the Marines’ approach seems closer to (and hence follows) the approach taken by the U.S. Army, which is already testing iPhones, Android devices and tablets for use in war (a total of 85 apps, whose development took about $4.2 million; we could nearly speak of a Military iTunes or a Military Android Market!).

But the adoption of consumer technologies does not stop here, and will probably soon involve the use of technologies closely resembling the Cloud. As a matter of fact, the NSA plans to develop in the near future a secure mobile capability, referred to as the “Mobile Virtual Network Operator”, which will be able to establish a way to provide sensitive content to the military and intelligence “in a way that roughly emulates what Amazon does with Kindle”, as stated by Debora Plunkett, director of the NSA’s Information Assurance Directorate, speaking at the Gartner Security and Risk Management Summit 2011 (but the NSA will not be the first to pilot this kind of technology, since NATO is already adopting Cloud Computing).

Probably this is only one side of the coin. I’m willing to bet that the consumerization of warfare will soon “infect” armies belonging to different countries, and consequently the next step will be the development of weapons (read: mobile military malware) targeted at disrupting the normal behavior of military smartphones and tablets. On the other hand, the Pentagon has already developed a list of cyber-weapons, including malware, that can sabotage an adversary’s critical networks, so it is likely that these kinds of weapons will soon affect mobile devices…

Driving Through The Clouds

April 8, 2011

How many times, stuck in traffic on a hot August day, have we hoped for a pair of wings to fly through the clouds, free from the wreckage of burning metal?

Unfortunately, at least for me (even if my surname in English would sound exactly like Sparrows), no wing has so far miraculously popped up to save me. Nevertheless, I am quite confident that, in a quite near future, I will be saved by the clouds even if I will not be able to fly; or, better said, I will be saved by cloud technologies that will help me, and the other poor drivers bottled up between the asphalt and the hot metal under the ruthless August sun, to avoid unnecessary, endless traffic jams on Friday afternoons.

Some giants of Information Technology (Cisco and IBM in primis) are exploring and experimenting with solutions aimed at providing car drivers with real-time information about traffic and congestion in order to suggest the optimal route. In this way they will provide a benefit to the individual, sparing him a further amount of unnecessary stress, and to the community as well, contributing to fighting pollution and making the whole environment more livable and enjoyable.

The main ingredients of this miraculous technological recipe are mobile technologies and cloud technologies, and the reasons are apparently easy to understand: everybody always carries a smartphone, which is an incommensurable real-time probe, a source of the precious data necessary to model a traffic jam (assuming that it will ever be possible to model a traffic jam in the middle of the Big Ring of Rome): as a matter of fact, a smartphone can provide real-time traffic information correlated with important parameters such as GPS position, average speed, etc.

Cloud technologies provide the engine to correlate the information coming from mobile devices (and embedded devices) belonging to many different vehicles, providing the computational (dynamically allocated) resources needed to aggregate, and make coherent, data from many moving sources in different points of the same city or of different interconnected cities. Cloud technologies may act as a single, independent point of collection for the data gathered on each device, dynamically allocating resources on demand (let us suppose we have, at the same moment, two different jams, one of which is growing at an exponential rate and progressively requires more and more computational resources), providing to the individual (and to the city administrators) a real-time comprehensive framework, coherent and updated (nobody would hope to be led, by his navigator, to a diversion with a traffic jam much worse than the original one which caused the diversion).
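To make the idea concrete, here is a minimal sketch, in Python, of the kind of correlation such an engine performs; the report format, the segment identifiers and the speed threshold are assumptions of mine:

    # Probe reports from smartphones are grouped per road segment; segments
    # whose average speed collapses are flagged as jams. Illustrative only.
    from collections import defaultdict
    from statistics import mean

    JAM_SPEED_KMH = 15  # hypothetical: below this average, flag a congestion

    def detect_jams(reports):
        # reports: iterable of (segment_id, speed_kmh) tuples sent by devices.
        speeds = defaultdict(list)
        for segment, speed in reports:
            speeds[segment].append(speed)
        return {seg: mean(v) for seg, v in speeds.items()
                if mean(v) < JAM_SPEED_KMH}

    # Example: probes crawling on segment "GRA-12" vs. free flow on "A91-3".
    print(detect_jams([("GRA-12", 7.0), ("GRA-12", 11.5), ("A91-3", 92.0)]))

In a real deployment the interesting part is elastic scaling: as reports for a jammed segment grow, more workers can be allocated to that segment’s partition on demand, which is precisely the dynamic resource allocation described above.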

Of course, many consumer navigation devices already offer real-time traffic information; however, the large-scale adoption of cloud technologies will offer an unprecedented level of flexibility, together with the possibility of dealing with a huge amount of data and of correlating the collected information with other data sources (for instance V2V, Vehicle-to-Vehicle, and V2I, Vehicle-to-Infrastructure). From the city administrations’ perspective, the collected data will be invaluable for identifying the most congested points (and driving the subsequent, properly targeted corrective actions), and moreover for supporting a coherent and sustainable development of the city.

Cisco and IBM are working hard to make this dream come true in a few years, with different approaches converging on the cloud: Cisco is leveraging network intelligence for a pilot project in the Korean city of Busan (3.6 million inhabitants). Cisco’s vision aims, in the third phase of the project scheduled before the end of 2014, to provide the citizens with many different mobile services in the cloud, with a Software-as-a-Service approach. Those services are dedicated to improving urban mobility, distance, energy management and safety. A similar project has recently been announced for the Spanish city of Barcelona.

The IBM project, more focused on applications, is called The Smarter City, and aims to integrate all the aspects of city management (traffic, safety, public services, etc.) within a common IT infrastructure. A few days ago came the announcement that some major cities of the globe, for instance Washington and Waterloo (Ontario), will participate in the initiative.

Even if the cloud provides computing power, dynamism, flexibility and the ability to aggregate heterogeneous data sources at an abstract layer, a considerable doubt remains, represented by the security issues of the infrastructure… Apart from return-on-investment considerations (for which there are not yet consistent statistics, because of the relative youth of the case studies depicted above), similar initiatives will succeed in their purpose only if supported by a robust security and privacy model. I have already described in several posts the threats related to mobile devices, but in this case the cloud definitely makes the picture even worse, because of the centralization of the information (which, paradoxically, may also be an advantage if one is able to protect it well) and the coexistence of heterogeneous data, even though logically separated, on the same infrastructure. As a consequence, compromising the single point that contains all the data coming from the heterogeneous sources that govern the physiological processes of a city could have devastating impacts, since the system would be affected at different levels and the users across different services. Not to mention, in case of wider use of these technologies, the ambitions of cyberterrorists who could, with a single computer attack, cripple the major cities around the globe.
