Posts Tagged ‘Cisco’

16-30 June 2014 Cyber Attacks Timeline

I do not know whether to be happy about it or not, but it looks like the second half of June (the first timeline, covering 1-15 June, is here) has seen a sharp inversion of the decreasing trend recorded over the last few months. I have registered an increase in the number of attacks, with a particular focus on targeted attacks.

The cyber crime front offered several noticeable events targeting, just to mention the most devastating cases: AT&T, Evernote, the State of Montana (1.3 million individuals potentially affected), and Butler University.

Moving to hacktivism, the cyber temperature is still high in Brazil, where the hacktivists concentrated their unwelcome attentions. Other points of interest involve Pakistan and the US.

Last but not least, this period recorded an unusual number of targeted attacks spotted in the news. The list includes (but is not limited to): the British Government Secure Intranet, a US hedge fund, the Vietnamese Ministry of Natural Resources and Environment, ICS vendors in the US and Europe, and a government agency in Taiwan.

As usual, if you want to have an idea of how fragile our data are in cyberspace, have a look at the timelines of the main cyber attacks in 2011, 2012, 2013 and now 2014 (regularly updated). You may also want to have a look at the Cyber Attack Statistics, and follow @paulsparrows on Twitter for the latest updates.

Also, feel free to submit remarkable incidents that in your opinion deserve to be included in the timelines (and charts).

16-30 June 2014 Cyber Attacks Timeline rev2


TCP Split Handshake: Why Cisco ASA is not susceptible

May 12, 2011

As I said yesterday, I was not very satisfied with the updated NSS remediation guide concerning the TCP Split Handshake issue, published after the second round of testing on Cisco and Fortinet devices.

In particular, in the case of Cisco, in my opinion the report was poor on details, considering Cisco's ACL approach suboptimal and ultimately coming to the discouraging conclusion that:

Our original results are unchanged, and ultimately Cisco did offer some mitigation steps.

This is clearly in contrast with what is stated in the official Cisco post, which declares the Cisco ASA firewall not susceptible to the issue. In my opinion, though, the most disappointing aspect of the story is that no further detail is provided in the NSS document, leaving many unresolved questions about the real nature of the issue and the level of vulnerability of Cisco devices.

Since I was really curious to discover where the truth resides, I decided to ask Cisco engineers to provide more details on the testing results, and after a few hours that is exactly what they kindly did, with an accurate and detailed description of the events posted by Joe Karpenko and Omar Santos, the two engineers who took part in the joint session with NSS Labs.

There are two connection establishment handshakes associated with this topic:
* Split Handshake (primary concern/issue)
* Simultaneous Open

By default, the Cisco ASA accelerated security path (asp) prevents both the "Split Handshake" and "Simultaneous Open" using the "tcp-dual-open" connection check. The Cisco ASA firewall drops the TCP SYN segment sent from the server (e.g. fakestack.rb) when there is an embryonic TCP connection already open between the two endpoints.
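To make the described behaviour concrete, here is a minimal sketch of an embryonic-connection check of this kind, written as pure Python logic. This is purely illustrative and is in no way Cisco's implementation; the class and flag names are mine:

```python
# Hypothetical sketch of a "dual open" style check: drop a SYN arriving
# on a flow that already has an embryonic (half-open) connection.
# Illustrative logic only, not Cisco code.

class DualOpenGuard:
    def __init__(self):
        self.embryonic = set()  # flows with a SYN seen but no completed handshake

    def on_segment(self, src, dst, flags):
        """Return 'pass' or 'drop' for a TCP segment on the (src, dst) flow."""
        flow = frozenset((src, dst))  # direction-agnostic flow key
        if flags == "SYN":
            if flow in self.embryonic:
                # A second SYN (e.g. from fakestack.rb on the server side)
                # while the connection is still embryonic: drop it.
                return "drop"
            self.embryonic.add(flow)
            return "pass"
        if flags == "ACK" and flow in self.embryonic:
            self.embryonic.discard(flow)  # handshake completed
        return "pass"

guard = DualOpenGuard()
print(guard.on_segment("client", "server", "SYN"))  # pass: first SYN
print(guard.on_segment("server", "client", "SYN"))  # drop: dual open attempt
```

With such a check in place, both the split handshake and the simultaneous open are stopped at the firewall, which is consistent with the default behaviour the Cisco engineers describe.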

However, NSS created and demonstrated a brand new test-case which deviates from the two connection establishment handshakes mentioned above, along with the most commonly used 3-way handshake. This new test-case is not compliant with the TCP connection establishment requirements defined in RFC 793.

For the "Split Handshake", the first TCP segment sent by the server (fakestack.rb) in response to the client's TCP SYN segment is a TCP ACK segment (also described in the paper, The TCP Split Handshake: Practical Effects on Modern Network Equipment, pg. 200). However, for the new test-case a TCP RST/ACK segment is sent instead. At this point the client would be in a state called SYN_SENT and the server in the SYN_RCVD state.

The protocol specifications for TCP (defined in RFC 793) define how to process TCP segments received in certain states. When an endpoint is in a SYN_SENT state and it receives a TCP RST/ACK segment the endpoint aborts and closes the connection.

During our testing, the client ignores the TCP RST/ACK segment sent by the server (fakestack.rb) and does not abort the connection. Upon seeing the TCP RST/ACK segment sent by the server (fakestack.rb), the Cisco ASA firewall tears down the connection slot. Immediately following the TCP RST/ACK segment, the server (fakestack.rb) sends a TCP SYN segment which initiates a *new* connection establishment and completes a 3-way handshake that complies with the TCP specifications defined in RFC 793.
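The segment sequence of this new test-case, and the connection-slot lifecycle a firewall that tears down state on RST/ACK would see, can be sketched as a small simulation (a hypothetical model I wrote to follow the description, not vendor code):

```python
# Sketch of the NSS test-case segment sequence and how a firewall that
# tears down its connection slot on RST/ACK tracks it (illustrative only).

def firewall_trace(segments):
    """Track connection-slot state across the observed (sender, flags) segments."""
    slot, log = None, []
    for sender, flags in segments:
        if flags == "SYN" and slot is None:
            slot = "embryonic"       # new slot for a connection attempt
        elif flags == "RST/ACK":
            slot = None              # tear down the slot, as described for the ASA
        elif flags == "SYN/ACK" and slot == "embryonic":
            slot = "handshake"
        elif flags == "ACK" and slot == "handshake":
            slot = "established"     # RFC 793 compliant 3-way handshake completes
        log.append((sender, flags, slot))
    return log

trace = firewall_trace([
    ("client", "SYN"),      # client opens the connection
    ("server", "RST/ACK"),  # fakestack.rb aborts: slot torn down
    ("server", "SYN"),      # server initiates a *new* connection
    ("client", "SYN/ACK"),
    ("server", "ACK"),
])
for entry in trace:
    print(entry)
```

The trace ends with a fully established session built from a new, RFC-compliant three-way handshake, which matches the engineers' account of why the ASA's behaviour is correct.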

For the new test-case, access control list rules can be applied using an access-group and used as additional countermeasures to mitigate and prevent unsolicited connection attempts between the endpoints for a TCP conversation when the client does not abort the connection as defined in the RFC protocol specification for TCP.

Given this description of the events, I completely agree with Cisco's interpretation and I definitely believe there is nothing strange about the behaviour of the ASA firewall, since it immediately tears down the connection slot upon receiving a TCP RST/ACK (as it should), and immediately allocates a new connection after receiving the new TCP SYN from the server.

Moreover, in the testing scenario the client behaviour does not fit the TCP RFC. As a matter of fact, page 32 of RFC 793 states that:

The principle reason for the three-way handshake is to prevent old duplicate connection initiations from causing confusion. To deal with this, a special control message, reset, has been devised. If the receiving TCP is in a non-synchronized state (i.e., SYN-SENT, SYN-RECEIVED), it returns to LISTEN on receiving an acceptable reset. If the TCP is in one of the synchronized states (ESTABLISHED, FIN-WAIT-1, FIN-WAIT-2, CLOSE-WAIT, CLOSING, LAST-ACK, TIME-WAIT), it aborts the connection and informs its user. We discuss this latter case under "half-open" connections below.

Not only that: page 37 of the same RFC, in the paragraph "Reset Processing", states that:

In the SYN-SENT state (a RST received in response to an initial SYN), the RST is acceptable if the ACK field acknowledges the SYN.

Which is exactly the occurrence in the above scenario when the client receives the TCP RST/ACK.

Taken together, the two assertions mean that the tested scenario is probably not fully compliant with RFC 793 since, as stated by the Cisco engineers, upon receiving the TCP RST/ACK from the server, the client should reset the connection, free the socket and revert to the LISTEN state, which corresponds, according to RFC 793, to a state waiting for a new connection request from any remote TCP and port.
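The reset-acceptability rule quoted above reduces to a one-line check: a RST received in SYN-SENT is acceptable only if its ACK field acknowledges our SYN, i.e. equals our initial sequence number plus one. A minimal sketch (function and variable names are mine, not from the RFC):

```python
# RFC 793 reset processing in SYN-SENT, as a predicate (illustrative sketch).

def rst_acceptable_in_syn_sent(iss, rst_has_ack, rst_ack):
    """True if a RST received in SYN-SENT is acceptable and should abort the attempt."""
    return rst_has_ack and rst_ack == iss + 1

iss = 1000  # client's initial sequence number (ISS) sent in its SYN
print(rst_acceptable_in_syn_sent(iss, True, 1001))  # True: client should abort
print(rst_acceptable_in_syn_sent(iss, True, 5000))  # False: RST is ignored
```

In the NSS scenario the server's RST/ACK did acknowledge the client's SYN, so per this rule the client should have aborted; the fact that it did not is exactly the non-compliance the Cisco engineers point out.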

And even if it could be acceptable to perform a test in similar conditions not covered by RFC 793, I likewise do not find anything strange or suboptimal in deploying ACLs to prevent unsolicited connection attempts between the endpoints. As I said yesterday, a firewall should protect critical assets from unsolicited connections independently of the risk of a TCP Split Handshake attack…

Again, I would like to thank the Cisco engineers for their kindness and the transparency with which they satisfied my curiosity.

P.S.: A final thought from my youth

Now the big picture is clear! A few years ago, when I was younger and at the end of my short and shining system engineer life, I stumbled upon the curious case of a custom application which suddenly stopped working after a firewall upgrade. Deeper analysis showed that each session of the application used the same TCP port for source and destination (the port number was used to identify the customer, sigh!). Moreover, the server used to terminate the connection with a TCP RST/ACK and to immediately open a new connection with a SYN packet using the same source and destination port numbers as the previous session. Does it sound familiar after reading this post? Yes, it does! At that time we spent many hours insulting the dangerous mind of the programmer and his strange interpretation of the TCP/IP RFCs (shouldn't the client port be allocated from the random ephemeral port range, 1024 to 65535?). After many years I finally got it: he was a precursor of the TCP Split Handshake attack.

You may be wondering which firewall proved, back then, not to be susceptible to the TCP Split Handshake… I will never say it! Not even under torture. I may only say that, in order to fix the problem, we had to perform a very unlikely tuning of the timeout parameters of the firewall queues…

Other Considerations On TCP Split Handshake

April 21, 2011

The storm unleashed by the NSS Labs test for the TCP Split Handshake attack, which affected five firewall vendors, is far from quieting down.

During these days I enjoyed speaking with many colleagues about the results of the tests and, I must confess, firewalls were not the only entities unaware of the TCP Split Handshake: as a matter of fact, none of the professionals I discussed it with (of course including me, the first time I read about it) was familiar with this method of establishing TCP connections.

Nevertheless, the show must go on: professionals must study to stay up-to-date (and learn what the TCP Split Handshake is), and firewalls (if susceptible to the attack) must be fixed so that the TCP Split Handshake is correctly handled.

After the surprising findings of the test, vendors are running for cover, so I spent half an hour checking the state of the art after some communications from NSS Labs (unfortunately I was not able to attend today's webinar) and some rumors in the infosec arena.

Among the manufacturers found susceptible to the TCP Split Handshake attack during the first round, Palo Alto Networks has released an update (4.0.2) to fix the TCP Split Handshake evasion; after the fix, the manufacturer was able to pass the test.

As far as Juniper Networks is concerned, an e-mail sent today by NSS Labs indicated that this vendor is working on a fix as well: a configuration setting which will be enabled by default for new customers.

But probably the most interesting piece of news is that today some Cisco representatives went to NSS Labs to participate in the vulnerability assessment on site and sort out any issues directly. Cisco refused to accept the results of the tests since it was not able to reproduce the issue on any tested platform (ASA, IOS Firewall, IPS appliances). An updated blog post about the findings is expected later today. NSS Labs also expects to publish updated findings on which of the tested firewalls have completed remediation to protect against the TCP Split Handshake attack.

Just for fun…

(But not only!) I took a look, individually, at other vendors not involved in the tests to see if they had analyzed the behavior of their technologies on this issue.

Some McAfee representatives told me that their Enterprise Firewall platform is not prone to the TCP Split Handshake attack. I looked for some information and I found this post from the vendor. It would be interesting if the security manufacturer from Santa Clara could release more detailed documentation (maybe they already have, but I did not find it :-)).

Stonesoft issued a blog post with the results of the test performed individually on its Stonegate devices with the same BreakingPoint method pointed out in the original document describing the attack. The finding is that, with only the firewall function enabled, the security device is not vulnerable if "strict mode" is enabled in the advanced properties of the node. In normal or loose mode the traffic is permitted (even if Stonesoft indicates that the firewall does not get spoofed, that is, it correctly recognizes the origin of the session). With the antivirus function enabled, the firewall is not vulnerable in any mode.

From Astaro, nothing except some tweets indicating that the technology is not vulnerable. It would be interesting, also in this case, if the vendor could release some detailed documentation on the configurations needed to avoid the spoofing (or whether they are enabled by default).

I was nearly forgetting Microsoft, for which there is no official document. Anyway, I found an independent test in this blog which seems to confirm that the Microsoft platform is not vulnerable.

At this point I look forward to reading the results of the Cisco/NSS joint tests…

SCADA Security: Bridge the Gap (Updated)

April 19, 2011

In the same hours in which I was writing the original article concerning the growing attention of utilities and security vendors to SCADA security holes, an anonymous hacker put the lesson into practice and broke into wind turbine systems. He was able to break into a 200-megawatt wind turbine system owned by NextEra Energy Resources, a subsidiary of Florida Power & Light, claiming revenge for an "illegitimate firing". That said, it is not yet known whether or not it is a hoax (Wind power company sees no evidence of reported hack): the data was posted anonymously to the Full Disclosure security mailing list on Saturday by someone using the name "Bgr R". In the post, the author of the hack wrote:

Here comes my revenge for illegitimate firing from Florida Power & Light Company (FPL)

   … ain’t nothing you can do with it, since your electricity is turned off !!!

Secure you SCADA better! Leaked files are attached …

In an e-mail interview, Bgr R said he’s a former employee who discovered a vulnerability in the company’s Cisco security management software. He used that vulnerability to hack into the SCADA (supervisory control and data acquisition) systems used to control the turbines.

Even if the screenshots of the wind turbine management interface look legitimate, there are some big question marks. In his interview, Bgr R didn't say much about how he broke into the SCADA systems themselves, and he didn't demonstrate much insider knowledge of Florida Power & Light (FPL) systems.

Hoax or not, this event renews the attention on SCADA Security Issues… For my part I promise I will no longer write down Security Predictions :-)

TCP Split Handshake Attack Explained

April 17, 2011

Update May 12: TCP Split Handshake: Why Cisco ASA is not susceptible

Update May 11: The Never Ending Story

Update April 21: Other Considerations on TCP Split Handshake

A few days ago, the independent security research and testing firm NSS Labs issued a comparative report on six network security technologies. The controversial results created an understandable turmoil among the security vendors involved in the tests and, more in general, across the infosec landscape. As a matter of fact, it turned out that five of the six tested platforms were susceptible to the TCP Split Handshake attack.

As a security professional, I am pretty much involved with at least five of the six tested technologies; consequently, although I had never heard of the TCP Split Handshake before, I must confess I was really curious to learn which was the only platform capable of surviving the test (the answer is indirectly provided by the vendor – Checkpoint – missing from the list contained in the remediation report subsequently released). Fortunately, the scientific side of me took over and, instead of making judgments and drawing conclusions about the results, I decided to learn more about the TCP Split Handshake and the reasons why a security device may be vulnerable.

TCP Split Handshake in RFC 793

Since TCP is a connection-oriented protocol, every connection begins with a "handshake" defined in RFC 793. The handshake consists of three well-defined steps and for this reason it is called the "TCP Three-Way Handshake".

The host initiating the connection, referred to as the client, sends its peer, referred to as the server, a synchronization packet, or SYN. In order to correctly identify the beginning (and the subsequent "state") of the session, the SYN packet contains an Initial Sequence Number (ISN), which corresponds to a pseudo-random number.

Upon reception of the SYN packet, the server acknowledges it and generates its own SYN. This "SYN/ACK" packet contains both the server's Initial Sequence Number and an acknowledgment number equal to the client's sequence number plus 1. The fact that the server sends a single packet to initiate the connection on its side and to acknowledge the initial SYN sent by the client is known as piggy-backing and, as explained later, is the fundamental aspect in which the TCP Split Handshake differs from the Three-Way Handshake.

At this point, in order to establish the session, the client concludes the Three-Way Handshake and acknowledges the server's SYN/ACK, sending a packet with its own ISN incremented by one and an acknowledgement number equal to the server's ISN plus 1.
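As a side note, these three steps are exactly what the operating system performs under the hood whenever a TCP socket connects. A minimal, self-contained loopback example in Python (illustrative only; the SYN, SYN/ACK and ACK are sent by the kernel, not by this code):

```python
# A loopback TCP connection: connect()/accept() drive the three-way handshake.
import socket
import threading

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))        # bind to an ephemeral port
server.listen(1)
port = server.getsockname()[1]

def accept_one():
    conn, _ = server.accept()        # returns once the handshake has completed
    conn.sendall(b"hello")
    conn.close()

t = threading.Thread(target=accept_one)
t.start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))  # kernel: SYN -> SYN/ACK -> ACK
data = client.recv(5)                # session established, data flows
print(data)                          # b'hello'

client.close()
t.join()
server.close()
```

Capturing this exchange with a packet sniffer on the loopback interface would show precisely the SYN, SYN/ACK and ACK sequence described above.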

As mentioned above, in the second phase of the handshake, the piggy-backing allows the server to use a single packet to send its own SYN and to acknowledge the SYN packet received from the client (ACK). However, let us assume that the server could decide to split the second phase of the handshake and send a dedicated ACK packet to acknowledge the client's SYN, and a further dedicated packet with its own SYN. This is exactly what is stated in section 3.3, page 27, of RFC 793, which introduces an intriguing four-step process:

1) A --> B  SYN my sequence number is X
2) A <-- B  ACK your sequence number is X
3) A <-- B  SYN my sequence number is Y
4) A --> B  ACK your sequence number is Y

As a consequence, one might expect that an RFC 793 perfectly compliant client be capable of silently accepting packet two, explicitly ACKing packet three, and hence completing the handshake more or less normally. At least in theory…
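The four-step exchange above can be written out with explicit sequence-number bookkeeping; a simplified sketch (the representation is mine, a model of the exchange rather than real packets):

```python
# The RFC 793 split (four-step) handshake with explicit sequence numbers.

def split_handshake(x, y):
    """Return the segments of the split handshake for client ISN x, server ISN y."""
    return [
        ("A->B", "SYN", {"seq": x}),
        ("A<-B", "ACK", {"ack": x + 1}),  # bare ACK: no piggy-backed SYN
        ("A<-B", "SYN", {"seq": y}),      # server's SYN sent as a separate packet
        ("A->B", "ACK", {"ack": y + 1}),  # client completes the handshake
    ]

for segment in split_handshake(x=100, y=300):
    print(segment)
```

Comparing this with the three-way handshake, the only difference is that the server's single SYN/ACK has been split into packets two and three; the acknowledgment numbers are unchanged.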

In reality, in such circumstances, the NSS tests have shown that some network security devices, with only the firewall function enabled, get confused and behave in a stateless manner. In a few words, even if the client behaves as stated in the RFC, that is, it is able to correctly establish the session even when it receives separate ACK and SYN packets from the server, the network security device, on receiving the SYN from the server (packet 3), loses awareness of the session and lets the traffic flow without enforcing any security control, as if it belonged to an uncontrolled session (in theory an unknown or out-of-state session should be blocked). This means that a malicious payload conveyed through a session initiated with a TCP Split Handshake could go through the firewall. As a consequence, an attack scenario is quite straightforward: an attacker could use a server-side malicious script to establish the session by means of a TCP Split Handshake and consequently install an executable on the client (a very fashionable event in the last days), for instance by means of an ActiveX buffer overflow in the target client's browser.

The bad news is that this kind of attack is not new: a similar attack scenario was reported for the first time approximately one year ago (with different behaviours reported for clients and security devices). The strange side of the story lies in the fact that this behaviour may not be considered a real vulnerability, but rather an occurrence covered by the RFC that is not correctly implemented, or not enabled in the default configuration, by security vendors (please consider that RFC 793 also includes a further method for establishing a TCP connection, dubbed "TCP Simultaneous Open", in which two TCP hosts simultaneously attempt to open a connection to each other via a SYN packet).

Last but not least…

For the record, as previously stated, NSS Labs released a remediation report containing the indications needed to mitigate (where necessary) the occurrence of the TCP Split Handshake for the affected technologies. Moreover, two vendors (Cisco and Fortinet) added some indications, as reported below:

  • According to an official blog post, Cisco was not able to reproduce the issue that occurred in the NSS Labs test and is further investigating the TCP Split Handshake attack on its devices;
  • According to an official response in a blog post, Fortinet is not susceptible to the TCP Split Handshake attack if IPS and antivirus protections are enabled. A special IPS signature has been developed and a firmware update is scheduled for May in order to block the TCP Split Handshake attack with only the firewall enabled;
  • For Juniper devices, the line "set security flow tcp-session strict-syn-check" must be inserted into the configuration (this option affects all the traffic, so it must be set with caution);
  • Palo Alto is working to release an official fix between mid-April and early May;
  • For SonicWall devices, the option "Enforce Strict TCP Compliance" must be enabled (also in this case, this option affects all the traffic and must be set with caution).

Driving Through The Clouds

April 8, 2011

How many times, stuck in traffic on a hot August day, have we hoped for a pair of wings to fly through the clouds, free from the wreckage of burning metal.

Unfortunately, at least for me (even if my last name in English would sound exactly like Sparrows), no wing has so far miraculously popped up to save me. Nevertheless, I am quite confident that, in the near future, I will be saved by the clouds even if I will not be able to fly. Better said, I will be saved by cloud technologies, which will help me, and the other poor drivers bottled up between the asphalt and the hot metal under the ruthless August sun, to avoid unnecessary endless traffic jams on Friday afternoons.

Some giants of information technology (Cisco and IBM in primis) are exploring and experimenting with solutions aimed at providing drivers with real-time information about traffic and congestion in order to suggest the optimal route. In this way they will provide a benefit to the individual, sparing him a further amount of unnecessary stress, and to the community as well, contributing to fighting pollution and making the whole environment more livable and enjoyable.

The main ingredients of this miraculous technological recipe are mobile technologies and cloud technologies, and the reasons are easy to understand: everybody always carries a smartphone, which is an invaluable real-time probe, a source of the precious data necessary to model a traffic jam (assuming it will ever be possible to model a traffic jam in the middle of the Big Ring of Rome). As a matter of fact, a smartphone can provide real-time traffic information correlated with important parameters such as GPS position, average speed, etc.

Cloud technologies provide the engine to correlate information coming from the mobile devices (and embedded devices) belonging to many different vehicles, providing the (dynamically allocated) computational resources needed to aggregate, and make coherent, data from many moving sources in different points of the same city or of different interconnected cities. Cloud technologies may act as a single, independent point of collection for the data gathered on each device, dynamically allocating resources on demand (let us suppose we have, at the same moment, two different jams, one of which is growing at an exponential rate and progressively requires more and more computational resources), providing the individual (and the city administrators) with a real-time comprehensive framework, coherent and updated (nobody would hope to be led by his navigator to a diversion with a traffic jam much worse than the one which caused the diversion).
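The kind of aggregation such a cloud back end would perform can be sketched in a few lines: averaging speed reports per road segment and flagging the congested ones. All names and thresholds here are hypothetical, chosen purely for illustration:

```python
# Illustrative sketch: flag congested road segments from many phones' reports.
from collections import defaultdict

def congested_segments(reports, threshold_kmh=15.0):
    """reports: iterable of (segment_id, speed_kmh) samples from many vehicles."""
    sums = defaultdict(lambda: [0.0, 0])      # segment -> [total speed, count]
    for segment, speed in reports:
        sums[segment][0] += speed
        sums[segment][1] += 1
    # A segment is congested when its average reported speed is below threshold.
    return sorted(seg for seg, (total, n) in sums.items() if total / n < threshold_kmh)

reports = [("GRA-km12", 8.0), ("GRA-km12", 11.0), ("A24-km3", 70.0)]
print(congested_segments(reports))  # ['GRA-km12']
```

The real challenge the paragraph describes is not this arithmetic but running it elastically over millions of moving probes, which is exactly what dynamically allocated cloud resources are meant to provide.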

Of course, many consumer navigators already offer real-time traffic information; anyway, the wide adoption of cloud technologies will offer an unprecedented level of flexibility, together with the possibility of dealing with a huge amount of data and of correlating the collected information with other data sources (for instance V2V, Vehicle-to-Vehicle, and V2I, Vehicle-to-Infrastructure). From the city administration's perspective, the collected data will be invaluable for identifying the most congested points (and driving the subsequent properly targeted corrective actions), and moreover for supporting a coherent and sustainable development of the city.

Cisco and IBM are working hard to make this dream come true in a few years, with different approaches converging on the cloud: Cisco is leveraging network intelligence for a pilot project in the Korean city of Busan (3.6 million inhabitants). Cisco's vision aims, in the third phase of the project scheduled before the end of 2014, to provide citizens with many different mobile services in the cloud, with a Software-as-a-Service approach. Those services are dedicated to improving urban mobility, the impact of distances, energy management and safety. A similar project has recently been announced for the Spanish city of Barcelona as well.

The IBM project, more focused on applications, is called The Smarter City and aims to integrate all the aspects of city management (traffic, safety, public services, etc.) within a common IT infrastructure. A few days ago came the announcement that some major cities of the globe, for instance Washington and Waterloo (Ontario), will participate in the initiative.

Even if the cloud provides computing power, dynamism, flexibility and the ability to aggregate heterogeneous data sources at an abstract layer, a significant doubt remains, represented by the security issues of the infrastructure… Apart from return-on-investment considerations (for which there are not yet consistent statistics because of the relative youth of the case studies depicted above), similar initiatives will succeed in their purpose only if supported by a robust security and privacy model. I have already described in several posts the threats related to mobile devices, but in this case the cloud definitely makes the picture even worse because of the centralization of the information (though paradoxically this may also be an advantage, if one is able to protect it well) and the coexistence of heterogeneous data, even though logically separated, on the same infrastructure. As a consequence, compromising the single point that contains all the data coming from the heterogeneous sources that govern the physiological processes of a city could have devastating impacts, since the system would be affected at different levels and the users across different services. Not to mention, moreover, in case of wider use of these technologies, the ambitions of cyberterrorists who could, with a single computer attack, cripple the major cities around the globe.

Application Security: What’s Next?

April 6, 2011

In the wake of the infamous LizaMoon, which has flooded an impressive number of databases all over the world with SQL injection, infecting more than 1,500,000 URLs according to Google Search, the next frontier of information security to which security vendors are likely to move is the branch of application security. The latest vendor, in order of time, to make an acquisition (just a couple of days before LizaMoon was detected) was Intel's McAfee, which decided to enter the database security market (estimated at more than $600 million in 2012) by acquiring Sentrigo, a Santa Clara based company focused on database security, a former member of the SIA Technology Partnership Program (McAfee Security Innovation Alliance) and previously linked to McAfee by an OEM partnership.
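LizaMoon-style SQL injection succeeds when user input is concatenated directly into a query string; the basic application-level countermeasure is the parameterized query. A minimal sqlite3 sketch (the table and values are hypothetical, for illustration only):

```python
# SQL injection vs. parameterized queries, in miniature.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT)")
db.execute("INSERT INTO users VALUES ('alice')")

malicious = "x' OR '1'='1"

# Vulnerable: string concatenation lets the payload rewrite the WHERE clause.
rows = db.execute("SELECT name FROM users WHERE name = '" + malicious + "'").fetchall()
print(len(rows))  # 1 -> the injected OR '1'='1' matched every row

# Safe: the placeholder treats the payload as a literal value, not as SQL.
rows = db.execute("SELECT name FROM users WHERE name = ?", (malicious,)).fetchall()
print(len(rows))  # 0 -> no user is literally named "x' OR '1'='1"
```

This is exactly the class of flaw that the code-analysis and database-security layers discussed below are meant to catch when developers do not.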

The red infosec colossus of Santa Clara is just the latest player to enter this market, following the example of IBM, which has offered a complete application security solution since 2009 thanks to the acquisitions (in strict chronological order) of DataPower (web application/XML security), Ounce Labs (code analysis) and Guardium (database security), a set of solutions which form, respectively, the WebSphere, Rational and InfoSphere Guardium security solutions.

McAfee and IBM are accompanied by Fortinet, another important security player which has been active in this field for some years. Fortinet has been investing in database and application security since 2006 and, even if it lacks a code analysis solution, it offers a portfolio which extends to the database (scanning and monitoring) level, through the acquisition of IP-Locks, and to the XML/application firewall level, thanks to its offering of FortiWeb appliances.

As you may notice, the three examples above are particularly indicative of how security is now converging towards application security. Although McAfee, Fortinet and IBM have very different backgrounds, they are converging on a comparable application security offering: McAfee approached the problem from endpoint security, which is its historical strength; IBM from content security, since its adventure in security started with the acquisition of ISS; and finally Fortinet from network security, well represented by its FortiGate appliances.

According to my personal model, the complete cycle of application security moves on four levels: training of developers is the first level and the necessary foundation upon which an application security project is built. Where the ability (and security awareness) of developers does not reach, vulnerability assessments/penetration tests (the second level) may be enforced to check the security level of the applications. If we move to a more "technological" plane, there are two more levels: they consist, respectively, of code analysis (a preventive measure) and XML/application/database security solutions implemented with dedicated software or appliances (an infrastructural measure). Please consider (an aspect which is not secondary) that these kinds of solutions are also driven by increasingly stringent regulations, such as PCI-DSS, and emerging "de facto" standards such as OWASP (Open Web Application Security Project).

While IBM is currently the only vendor to cover the three areas of application security (code analysis, XML/web application security and database security), in addition to McAfee and Fortinet there are other vendors at the door looking at this market with great interest, starting with Cisco Systems, endowed with a great ability to execute but currently focused primarily on its network-centric approach by means of its ACE family of XML firewalls, and HP, which, even if it currently lacks an XML/WAF or database security solution, is approaching the world of application security starting from code analysis, thanks to the acquisition of Fortify, considered by many observers the market leader in this field.

Actually, alongside these vendors there are more players which, even if more focused on network security, nevertheless look carefully at this market by offering niche solutions, as is the case, for instance, with Checkpoint, which combines its traditional firewall modules (or software blades, according to the current terminology) with web security functions specifically tailored for application threats, or Citrix, which approaches the problem from the application acceleration/distribution perspective.

It is likely that the market value and the emotional drive of LizaMoon will soon bring further earthquakes in this market. In my honest opinion, the next to fall will be two more important partners in the McAfee SIA focused on application security: Imperva (web application/XML firewall) and VeraCode (code analysis) are well advised…

The (Green) City in the Clouds

The title of this article apparently recalls the romantic Cloud City, the city among the clouds that marks the deeds of Han Solo and Lando Calrissian in Episode V of Star Wars. In reality, one only needs to add an ecological adjective to Cloud City to create the term Green Cloud City, which identifies the project on which three giants of the sector, Cisco, IBM and Microsoft (in strict alphabetical order), are working along three parallel tracks, with the intention of building the city of the future, where mobility, green ICT and cloud technologies come together to guarantee advanced services to citizens.

Cisco's project is part of a broader initiative called Smart+Connected Communities, which aims to provide citizens with collaboration services and advanced connectivity thanks to the intelligence supplied by the network. The pilot project is currently under way in the Korean city of Busan (3.6 million inhabitants) and aims, by 2014, to offer citizens a series of shared mobile services, hosted in the cloud, designed to improve urban mobility, the impact of distance, energy management and overall safety conditions.

The initiative rests on two pillars: the cloud and mobile technologies. The cloud provides the infrastructure, of the platform-as-a-service (PaaS) type in the first phase, needed to develop the mobile applications, and of the software-as-a-service (SaaS) type in phase 2, scheduled for 2012, when the first services (for example, document management and billing automation) will become available to all citizens. Mobile, as one can easily guess, are the terminals that will use the applications from phase 2 onward, an application access that will reach full completion in phase 3, scheduled for 2014, when the applications will be accessible to everyone. The reason for this pairing in an urban setting is quickly explained: mobile terminals are the citizens' access point to the services, can provide strong authentication and at the same time supply the real-time information needed to run the ecosystem, while the cloud provides the infrastructure that guarantees the computing power, dynamism and flexibility needed to manage such a huge amount of data, toward which the technologies of Cisco's partners in this project are migrating.

IBM's project is called The Smarter City and aims to integrate all aspects of managing a city (traffic, security, citizen services, etc.) within a common IT infrastructure. A few days ago came the announcement that several important cities around the globe, such as Washington and Waterloo (Ontario), will take part in the initiative. The Washington case in particular is significant because it shows how the unified management of a city's parameters through a common intelligence (and a common software infrastructure) can benefit users on multiple levels. Just think, for example, of a utility that can optimize consumption thanks to smart meters (intelligent bidirectional consumption sensors equipped with an IP stack for communications) and at the same time optimize maintenance work thanks to real-time traffic information and other mobile services (billing, consumption checks, etc.) supported by mobile technologies. For this initiative too, virtual clouds loom on the horizon: the project can be integrated with a city's IP infrastructure, but it can also be hosted in the cloud to allow, especially for smaller cities, the logically separated coexistence of several entities on the same technological infrastructure.

Microsoft too has climbed onto the city-scale (and beyond) green ICT bandwagon, through its SERA (Smart Energy Reference Architecture) initiative, dedicated to creating a common framework for the interoperability of different smart metering devices. The most recent project is the one carried out with the French energy giant Alstom. Here as well, technological clouds gather in Microsoft's Azure sky, in the sense that the project makes substantial use of cloud technologies.

Obviously, in all the cases highlighted the cloud provides computing power, dynamism, flexibility and the ability to aggregate such heterogeneous data sources at an abstract level, but one (substantial) doubt remains: security. Leaving aside the return on investment (for which there is not yet a consistent track record, given the relative youth of the cases mentioned above), such initiatives can succeed only if they are supported by a solid security and privacy model. As I have already discussed in the case of smart grids, opening systems to the IP world exposes platforms not natively designed to be open to new kinds of threats. The cloud certainly makes the picture even more delicate, because it centralizes the point where information is processed (which, paradoxically, could even be an advantage) and allows heterogeneous data, albeit logically separated, to cohabit on the same infrastructure. Compromising the single point that holds all the data, coming from the heterogeneous sources that govern a city's vital processes, would have extremely damaging impacts, since it would affect the system at different levels and users across different services. Not to mention the ambitions of cyberterrorists, who could, with a single cyber attack (and therefore with a high chance of staying hidden), bring the world's major cities to their knees.

In any case, don't worry: some time will pass before cities reach such an advanced model. Cars will certainly get there first. Yet, for some reason, that doesn't reassure me at all. And one doubt remains: in your opinion, among Cisco, IBM and Microsoft, which will be the first to be "breached"? Who knows why, but I fear I already know the answer…

The Smartphone? It Went BOT!

February 23, 2011 2 comments

An interesting article by Georgia Weidman has just been published on the concept of a botnet of smartphones controlled via SMS. The work, announced at the end of January 2011 and presented at ShmooCon in Washington, had immediately caught my attention because, some time earlier, I had hypothesized that the combination of factors such as the growing computing power of mobile devices and their exponential diffusion would soon lead to the birth of possible armies of remotely controlled Androids (or Apples) capable of carrying out their master's will.

The hypothesized mobile bot model (for which a proof of concept was developed on several platforms) is very refined: the compromised terminals are controlled by a command-and-control (C&C) server through SMS messages (with a hierarchical control structure), which are intercepted by a malicious layer placed between the GSM driver and the application layer. SMS was chosen as the transmission medium (in this control model, phone numbers take on the role of IP addresses) out of the need to make the control mechanism as transparent as possible to users and operators (the alternative, control via a data connection, would soon attract the user's attention because of a suspicious increase in battery consumption, of which there is never enough for thirsty Androids and iPhones). Naturally the malicious layer is completely transparent to the user and entirely harmless in processing legitimate data and SMS, passing them on correctly to the application layer without raising suspicion.

Georgia Weidman has overlooked nothing: her model envisions a three-level hierarchical structure:

  • The first level consists of the master bots, controlled directly by the bot herders. The master bots are not necessarily handsets (let alone compromised ones); since they issue orders via SMS, they can be any device equipped with a modem;
  • The second level consists of the sentinel bots, which act as proxies between the masters and the army of compromised terminals. The sentinels must be "trusted" devices, i.e. devices under the bot herders' direct control, or members of the botnet for long enough to suggest that the infection has gone unnoticed by the owner, making them worthy of promotion to the sentinel role;
  • The third level consists of the slave bots: the actual soldiers of the army of compromised terminals, which receive instructions from the sentinels and carry out the boss's will.
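The three-tier structure described above can be sketched as a toy model. This is purely illustrative: all class names, phone numbers and the command string are invented here, not taken from Weidman's PoC; the point is only to show the routing property that slaves receive commands exclusively through their own sentinel.

```python
# Hypothetical sketch of the master -> sentinel -> slave SMS hierarchy.
# Names and commands are invented for illustration.

class SlaveBot:
    """Compromised handset: knows only its sentinel's number."""
    def __init__(self, number: str):
        self.number = number
        self.received: list[str] = []

    def on_sms(self, command: str) -> None:
        self.received.append(command)


class SentinelBot:
    """Trusted proxy: relays master commands to its own slave segment only."""
    def __init__(self, number: str):
        self.number = number
        self.slaves: list[SlaveBot] = []

    def on_sms(self, command: str) -> None:
        for slave in self.slaves:  # fan-out limited to this segment
            slave.on_sms(command)


class MasterBot:
    """Any SMS-capable modem; the only node that knows the full topology."""
    def __init__(self):
        self.sentinels: list[SentinelBot] = []

    def broadcast(self, command: str) -> None:
        for sentinel in self.sentinels:
            sentinel.on_sms(command)


# Wiring: one master, two sentinels, each shielding three slaves.
master = MasterBot()
for num in ("555-0001", "555-0002"):
    sentinel = SentinelBot(num)
    sentinel.slaves = [SlaveBot(f"{num}-slave{i}") for i in range(3)]
    master.sentinels.append(sentinel)

master.broadcast("exfiltrate:sdcard")

# Every slave got the command via its sentinel, never from the master.
assert all(sl.received == ["exfiltrate:sdcard"]
           for st in master.sentinels for sl in st.slaves)
```

Note how the topology lives only in `MasterBot`: seizing any slave reveals nothing beyond its own sentinel's number.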

Note that this hierarchical model applies the "divide and conquer" paradigm: the compromised slave terminals never communicate directly with the master and, moreover, only the master knows the structure of the entire botnet. The use of SMS also allows the master to change phone number when needed and thus elude the forces of good, i.e. any bot hunters.

Obviously, all communications take place via encrypted (with an asymmetric-key encryption algorithm) and authenticated SMS; furthermore, the discovery of an infected phone does not compromise the entire network of compromised terminals, but only the segment controlled by the corresponding sentinel (and the master can always change number).
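The authentication step can be sketched in a few lines. A caveat: the PoC described above uses asymmetric-key cryptography, while this hypothetical, stdlib-only sketch substitutes an HMAC tag to stay self-contained; the key, message format and command strings are all invented. The principle it illustrates is the same, though: a bot silently drops any SMS whose tag does not verify, passing it on to the normal SMS stack.

```python
# Illustrative only: HMAC stands in for the PoC's asymmetric-key scheme.
import hashlib
import hmac

SEGMENT_KEY = b"per-sentinel-secret"  # invented key, one per segment


def tag(command: str) -> str:
    """Authentication tag over the command body."""
    return hmac.new(SEGMENT_KEY, command.encode(), hashlib.sha256).hexdigest()


def accept(sms_body: str):
    """Return the command if the tag verifies, else None (drop silently)."""
    try:
        command, received_tag = sms_body.rsplit("|", 1)
    except ValueError:
        return None  # no tag at all: a normal, legitimate SMS
    if hmac.compare_digest(received_tag, tag(command)):
        return command
    return None  # forged or tampered


legit = f"ddos:203.0.113.7|{tag('ddos:203.0.113.7')}"
assert accept(legit) == "ddos:203.0.113.7"
assert accept("ddos:203.0.113.7|deadbeef") is None  # forged tag rejected
```

With a per-segment key like this, a captured slave can only forge traffic for its own sentinel's segment, which matches the containment property claimed above.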

What could a botnet structured this way be used for? Stealing information, of course, for personal gain or any other purpose (political, economic, etc.). Unfortunately, for this class of devices, which are increasingly finding their way into the upper levels of an organization, exploits and bugs are the order of the day, so in theory the contents of the SD card could be stolen with a simple SMS. And unfortunately it does not end there: given their computing power (we now carry a PC in our pocket), these devices can easily be used as traffic generators, i.e. sources of distributed denial-of-service (DDoS) attacks, especially over Wi-Fi connections backed by a fixed-line operator, which offer more bandwidth and are therefore better suited to a DDoS attack. This fits perfectly with the dynamism of an SMS-based botnet (in which the master can change number to hide) and with the infrastructures of mobile operators (or fixed operators offering Wi-Fi services), which might not be fully ready to face this kind of event (as also highlighted by the recent Arbor Networks report). Another nefarious application could be spam, especially over SMS. Also interesting is the combination with GPS, which could lead to the total blocking of GSM communications in specific space-time circumstances (it sounds like political fiction, but it is theoretically possible).

That leaves the last point that had remained open when I first discussed this topic: I had asked myself the fundamental question of whether the malicious bot software needs root privileges. The answer is yes, but this does not lessen the severity of the proof of concept; rather, it reaffirms a fundamental principle: the user is always the foundation of security, and the user's control overrides even the operating system's security mechanisms. Not only does this once again dramatically highlight the danger of "tinkering" with one's own devices (with potentially even more serious consequences if the terminal is used for professional purposes), but it also means users must become aware of the required security model and pay attention to the applications they install.

On the operator side, there is an urgent need to ensure that security updates always reach the devices as soon as they are released. In the wake of Arbor Networks' statements, I would also add possible infrastructure investments for detecting anomalous events within their own perimeters.

At this point, the fact that security vendors have, almost unanimously, placed the mobile world at the center of security concerns for 2011 removes any doubt that this might be a passing fad, and those same vendors are presumably already taking action: adding extra security layers on top of the operating system's intrinsic mechanisms with the help of DLP technologies (as indicated in Cisco's report for 2011) and virtualization, and integrating more and more security technologies into the devices themselves. The latest announcement? The one from Intel's McAfee, which proves, once again, to be very active in the mobile sector.

Cisco 4Q 2010 Report: Web Malware Has Gone Bot(net)

February 20, 2011 Leave a comment

After McAfee and Symantec, it is Cisco's turn: the giant of routers and perimeter security has just published its Cisco 4Q10 Global Threat Report, which reflects global security trends from October to December 2010.

Cisco's report differs slightly from the documents cited above because it comes from a security vendor focused on network solutions, and it is based on traffic data collected from its network of Intrusion Prevention System (IPS) sensors, its IronPort email and web security appliances, its Remote Management Services (RMS), and finally its ScanSafe cloud-based security services.

Malware Peak in October

In the period under review, enterprise users experienced on average 135 new malware encounters per month, with a peak of 250 events per month in October, the month that also saw the highest number of detected hosts serving web malware, at 16,905. In total, 38,811 web events were detected over the period, amounting to 127,622 URLs.

Search-engine-related traffic accounted for about 8% of web malware, with the largest share, 3.84%, coming from Google, down markedly from the 7% recorded for the same type of traffic in the third quarter. Webmail traffic stood at 1%.

The Gumblar malware (notable for redirecting searches) compromised on average 2% of searches in Q4 2010, again down sharply from the 17% peak reached in May 2010.

As for application exploits, Java ruled the roost: Sun's (now Oracle's) creature swept aside the competition at 6.5%, a share almost four times higher than that of PDF-related vulnerabilities.

The vertical sectors most at risk turned out to be pharmaceuticals, chemicals and energy (oil and gas); for the latter, the Night Dragon malware probably also played a part.

Botnet Activity

The analyses made possible by the data collected through the IPS sensors and managed services allowed botnet activity to be tracked over the period under review. The data showed a slight increase in botnet traffic, especially for Rustock, the most widespread network of compromised machines, which peaked sharply at the end of the year.

Among the most frequently detected attack signatures, generic SQL injection took first place, confirming what many vendors have indicated: in 2011, traditional vulnerabilities will be exploited in a more structured way for broader purposes (information theft, hacktivism, etc.).

Interestingly, viral relics such as Conficker, MyDoom and Slammer were still being detected in 2011. Conversely, according to the San Jose-based vendor, older types of viruses, such as boot-sector infections and DOS file infectors, are on their way to extinction (ironically, the report had barely been released when a new infection targeting the Master Boot Record was detected, attracting considerable attention in the community).

Also interesting is the impact of world events on the quality and quantity of traffic: Cisco's sensor network detected a peak of peer-to-peer traffic (BitTorrent in particular) in the last part of the year, coinciding with the WikiLeaks disclosures, which, given the containment measures attempted by the US authorities, drove users to seek alternative channels to get their hands on the documents.

Less Spam for Everyone!

Security vendors rarely agree with one another; in the case of spam, however, the San Jose giant's findings substantially match McAfee's. The fourth quarter of 2010 recorded a considerable drop in unwanted email, most likely attributable to the large-scale cleanup operations carried out at the beginning of last year against the big botnets Lethic, Waledac, Mariposa and Zeus, and later in the same year against Pushdo, Bredolab and Koobface.
