

Posts Tagged ‘IBM’

Looking Back…

January 13, 2012 Leave a comment

Actually this post is nearly a couple of weeks late (last week I was skiing in the Italian Dolomites!). (Un)fortunately, now that I am back home (and back to work), I have chosen this Friday the 13th, while preparing my traditional Cyber Attacks Master Index for the first half of January 2012, to take a quick look at the past year of blogging activity and find out which posts collected the most views (more than 60,000 in total), excluding, of course, the home page.

As you will easily notice, the articles related to cyber attack statistics dominate the Top 10. It is surely no coincidence that some of them were also quoted by leading security firms such as Kaspersky and IBM. For a correct interpretation of the chart you should also consider the period of the year in which each article was written (the earlier an article was published, the larger its pool of potential readers) and the fact that the master index is continuously updated.

Date | Title | Views
Aug 11, 2011 | One Year Of Android Malware (Full List) | 16,737
Dec 31, 2011 | 2011 Cyber Attacks Timeline Master Index | 3,668
Aug 16, 2011 | Antisec hacks another Defense Contractor | 2,406
Apr 17, 2011 | TCP Split Handshake Attack Explained | 2,110
Jun 22, 2011 | 2011 CyberAttacks Timeline | 1,535
Jun 28, 2011 | 2011 Cyber Attacks (and Cyber Costs) Timeline (Updated) | 1,195
Dec 15, 2011 | One Year Of Lulz (Part I) | 1,090
Sep 15, 2011 | Anatomy Of A Twitter Scam | 938
May 1, 2011 | Social Espionage | 696
Sep 2, 2011 | August 2011 Cyber Attacks Timeline | 590

Yes, the post dedicated to Android malware undoubtedly ranked at number 1 (it even deserved a mention on Engadget), but the Cyber Attacks Master Index also “performed well”, even if at a great distance (it was destined for a more professional audience), being quoted in many information security forums.

At number 3 there is a summer post dedicated to cyber attacks targeting contractors (it is only updated to August and hence could not include STRATFOR), which, to my genuine surprise, gained unexpected attention during the Dog Days (a prolific period for blogging).

Clearly my readers have shown a great interest in security statistics, since to find a more technical article we have to browse the chart down to number 4, my post dedicated to the TCP Split Handshake. In that circumstance I forced myself to investigate the question because, when I first stumbled upon it after the NSS report (and the consequent turmoil), I must confess I had never heard of it.

Statistics again at numbers 5, 6, and 7, down to number 8, which is held by a post dedicated to a scam targeting Twitter and mobile users. At that time the scam lured so many victims that they “googled” the phrase “This made me laugh so hard when i saw this about you lol” (the symptom of the scam) and were hence redirected to that article.

A particular mention is also deserved by Social Espionage at number 9, dealing with the threats hidden behind social networks: Social Network Poisoning seen from the perspective of several resounding examples such as Primoris Era and Robin Sage.

In any case, forgive me if I could not do it before: I really would like to say thank you to all the information security professionals who inspired my work (and whom I decided to quote in a very special manner)…

But most of all I want to say thank you to all the readers who stumbled upon my blog and decided to keep on reading (and retweeting) the articles regularly. I hope they will find in 2012 the same level of interest shown in the past year. Since it is not so easy to reconcile my professional and personal life with my blogging activity (thanks to my wife Romina for her patience), their appreciation is the purpose of my work and a crucial driver to keep improving its quality…

XML Encryption Cracked!

October 23, 2011 3 comments

We have not yet completely assimilated the BEAST vulnerability, and here comes, from Bochum, Germany, another serious flaw involving encryption, or better, involving XML Encryption.

XML Encryption is a W3C standard widely used to securely transmit information within application-to-application web service connections. It was believed to be a robust standard mechanism to protect the data exchanged between a wide class of applications using web services, deployed in sectors such as business, e-commerce, finance, healthcare, government and the military. For the generic user a typical scenario involves, for example, the encryption of credit card information for a payment within an XML-based purchase order.

Unfortunately it looks like the mechanism is not as robust as it was supposed to be, and the discovery comes from Juraj Somorovsky and Tibor Jager, two researchers at the Ruhr University of Bochum (RUB), who succeeded in cracking parts of the XML Encryption process used in web services, thus making it possible to decrypt the protected data. They demonstrated their work at the ACM Conference on Computer and Communications Security in Chicago this week.

As far as the attack technique is concerned, once again CBC (Cipher Block Chaining) is indicted, since the newly discovered vulnerability, as in the case of the BEAST attack, is exploitable only in this encryption mode.

The attack strategy seems very similar to the one behind the BEAST attack: it allows data encrypted in AES-CBC to be decrypted by sending modified ciphertexts to the server and gathering information from the resulting error messages. It exploits the cryptographic weakness of the CBC mode, in particular the fact that, by conveniently manipulating the IV, a ciphertext encrypted in CBC mode can be modified so that the plaintext obtained upon decryption is related to the original one in a predictable way (see the description of the BEAST attack for an example).
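
To make the CBC weakness concrete, here is a minimal, illustrative sketch (my own example, not the researchers’ code) using the Python cryptography package: flipping a bit in the IV flips exactly the same bit in the first recovered plaintext block, which is the controlled malleability the attack builds on.

```python
# A minimal sketch of CBC malleability (illustrative only, not the actual attack).
# Requires the third-party "cryptography" package.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(16)              # AES-128 key
iv = os.urandom(16)               # initialization vector
plaintext = b"TOP-SECRET-BLOCK"   # exactly one 16-byte block

# Encrypt one block in AES-CBC.
encryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
ciphertext = encryptor.update(plaintext) + encryptor.finalize()

# An attacker who controls the IV flips one bit in it...
tampered_iv = bytearray(iv)
tampered_iv[0] ^= 0x01

# ...and the same bit flips in the recovered first plaintext block,
# because P1 = AES_decrypt(C1) XOR IV.
decryptor = Cipher(algorithms.AES(key), modes.CBC(bytes(tampered_iv))).decryptor()
recovered = decryptor.update(ciphertext) + decryptor.finalize()

print(plaintext)   # b'TOP-SECRET-BLOCK'
print(recovered)   # b'UOP-SECRET-BLOCK'  (first byte changed predictably)
```

The real attack chains this malleability with the server’s error messages (the “oracle” described below) to recover the plaintext byte by byte.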

So, by choosing a given ciphertext, the attacker is able to recover the entire plaintext. The only prerequisite is the availability of what the researchers call an “oracle”, that is, a way of telling the attacker whether a given ciphertext decrypts to a “correctly formed” plaintext, i.e. a valid encoding (e.g. in UTF-8 or ASCII) of a message. Even worse, XML Signature is not able to mitigate the attack.

In their paper the authors showed that a moderately optimized implementation of the attack was able to decrypt 160 bytes of encrypted data within 10 seconds by issuing 2,137 queries to the web service. Moreover, the complexity of the attack grows only linearly with the ciphertext size, so that recovering a larger plaintext of 1,600 bytes takes about 100 seconds and 23,000 queries.

The proof of concept was performed on a web service based on the Apache Axis2 XML framework and verified on JBoss, but many other vendors are affected, which is why the two researchers reported the vulnerability to the W3C XML Encryption Working Group in February 2011. Affected vendors include the above-mentioned Apache Software Foundation (Apache Axis2) and Red Hat (JBoss), but also IBM and Microsoft.

Unfortunately, fixing the flaw will not be that easy: the only suitable way is to replace CBC mode with a symmetric cryptographic primitive providing both confidentiality and integrity, which means changing the XML Encryption standard!
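
For what it is worth, “confidentiality and integrity” in practice points to authenticated encryption. The sketch below is my own illustration (not part of the standard or of the paper) of how an AEAD mode such as AES-GCM behaves with the Python cryptography package: tampering with the ciphertext is detected outright, instead of yielding a related plaintext that can be probed through an oracle.

```python
# Illustrative sketch: authenticated encryption detects tampering (AES-GCM).
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.exceptions import InvalidTag

key = AESGCM.generate_key(bit_length=128)
aesgcm = AESGCM(key)
nonce = os.urandom(12)

ciphertext = aesgcm.encrypt(nonce, b"<CreditCard>4111...</CreditCard>", None)

# Flip one bit, as the CBC attack does...
tampered = bytearray(ciphertext)
tampered[0] ^= 0x01

try:
    aesgcm.decrypt(nonce, bytes(tampered), None)
except InvalidTag:
    # ...and decryption fails outright: no malleable plaintext, no oracle.
    print("Tampering detected, ciphertext rejected")
```

Whether and how such a primitive can be retrofitted into the specification is, of course, exactly the standardization problem the researchers point out.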


Advanced Persistent Threats and Security Information Management

October 13, 2011 3 comments

Advanced Persistent Threats are probably the most remarkable events for information security in 2011, since they are redefining the infosec landscape from both the technology and the market perspective.

I consider the recent shopping spree in the SIEM arena by IBM and McAfee a sign of the times and a demonstration of this trend. It is not a coincidence: as a matter of fact, the only way to stop an APT before it reaches its goal (the organization’s data) is an accurate analysis and correlation of the data collected by security devices. An APT attack deploys different stages with different tactics, different techniques and different timeframes, which moreover affect different portions of the infrastructure. As a consequence, a holistic view and holistic information management are needed in order to correlate pieces of information spread across different parts of the network and collected by different, somewhat heterogeneous and apparently unrelated, security devices.

Consider, for instance, the typical cycle of an attack carried out by an APT:

Of course the picture does not take into consideration the user, who is the greatest vulnerability (but unfortunately a user does not generate logs, except in a verbal format that is not so easy for a SIEM to analyze). Moreover, the model should be multiplied by the number of victims, since it is “unlikely” that such an attack would be performed on a single user at a time.

In the end, however, it is clear that an APT affects different components of the information security infrastructure at different times with different threat vectors:

  • Usually stage 1 of an APT attack involves a spear phishing e-mail with an appealing subject and body, and a malicious payload in the form of an attachment or a link. In both cases the e-mail AV or antispam is impacted in the ingress stream (and is supposed to detect the attack; am I naive if I suggest that a DNS lookup could have avoided attacks like this?). The impacted security device produces some logs (which are not straightforward to interpret if the malicious e-mail has not been detected as a possible threat, or has been detected only with a low confidence threshold). In this stage of the attack, the interval between the receipt of the e-mail and its reading can range from a few minutes up to several hours.
  • The following stage involves user interaction. Unfortunately there is no human firewall so far (it is something we are working on), only user education (a very rare gift). As a consequence the victim is lured into following the malicious link or clicking on the malicious attachment. In the first scenario the user is directed to a compromised (or crafted) web site where he downloads and installs malware (or enters some credentials, which are then used to steal his identity, for instance for a remote access login). In the second scenario the user clicks on the attached file, which exploits a 0-day vulnerability to install a Remote Administration Tool. The interval between reading the malicious e-mail and installing the RAT likely takes several seconds. In any case, endpoint security tools may help to avoid surfing to the malicious site or, if they leverage behavioral analysis, to detect anomalous patterns from an application (a 0-day is always a 0-day, and they are often released only after making reasonably sure they will not be detected by traditional AV). Hopefully, in both cases some suspicious logs are generated by the endpoint.
  • RAT control is the following stage: after installation, the malware uses the HTTP protocol to fetch commands from a remote C&C server. Of course the malicious traffic is forged so that it can hide inside legitimate traffic. In any case the traffic passes through firewalls and NIDS at the perimeter (matching allowed rules), and both kinds of devices are supposed to produce related logs;
  • Once the attacker is in full control, the compromised machine is used as a hop to reach other hosts (now he is inside) or to sweep the internal network looking for the target data. In this case a NIDS/anomaly detector should be able to detect the attack by monitoring, for instance, the number of attempted authentications or wrong logins: that is the way Lockheed Martin prevented an attack perpetrated by means of compromised RSA seeds, and also the way RSA itself, during its infamous breach, detected the attack, using anomaly detection technology from NetWitness (acquired by EMC, its parent company, immediately after the event).
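
To give a flavor of what correlating these stages means in practice, here is a deliberately simplified, hypothetical sketch (event names, sources and thresholds are my own illustration, not any vendor’s rule language): it flags a host for which a mail gateway alert, an endpoint alert and an unusual outbound connection are observed within a short time window.

```python
# Hypothetical, minimal correlation sketch: flag hosts that show events from
# several different security sources within a short time window.
from collections import defaultdict
from datetime import datetime, timedelta

# Normalized events as they might arrive from heterogeneous devices.
events = [
    {"time": datetime(2011, 10, 13, 9, 0),  "source": "mail_gateway", "host": "pc-042", "type": "suspicious_attachment"},
    {"time": datetime(2011, 10, 13, 9, 7),  "source": "endpoint_av",  "host": "pc-042", "type": "anomalous_process"},
    {"time": datetime(2011, 10, 13, 9, 9),  "source": "firewall",     "host": "pc-042", "type": "outbound_http_unusual"},
    {"time": datetime(2011, 10, 13, 9, 30), "source": "firewall",     "host": "pc-099", "type": "outbound_http_unusual"},
]

WINDOW = timedelta(minutes=30)
MIN_DISTINCT_SOURCES = 3  # mail gateway + endpoint + network, as in the stages above

def correlate(events):
    by_host = defaultdict(list)
    for e in sorted(events, key=lambda e: e["time"]):
        by_host[e["host"]].append(e)

    alerts = []
    for host, evts in by_host.items():
        for i, first in enumerate(evts):
            window = [e for e in evts[i:] if e["time"] - first["time"] <= WINDOW]
            if len({e["source"] for e in window}) >= MIN_DISTINCT_SOURCES:
                alerts.append((host, first["time"]))
                break
    return alerts

print(correlate(events))  # [('pc-042', datetime.datetime(2011, 10, 13, 9, 0))]
```

A real SIEM obviously works on normalized logs from the actual devices and on far richer rules; the point is only that no single source in the list above sees the whole attack.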

At this point it should be clear that this lethal blend of threats is pushing security firms to redefine their product strategies, since they face a double crucial challenge: to dramatically improve not only their 0-day detection ability, but also their capability to manage and correlate the data collected by their security solutions.

As far as 0-day detection is concerned, next-gen technologies will include processor-assisted endpoint security or a new class of network devices such as DNS firewalls (thanks to @nientenomi for reporting the article).

As far as data management and correlation are concerned, yes, of course a SIEM is a beautiful concept… until one has to face the issue of correlation, which is the reason why SIEM projects often become useless: the correlation patterns are too complex and far from straightforward. This is why the leading vendors are rushing to include an integrated SIEM technology in their product portfolios, in order to provide an out-of-the-box correlation engine optimized for their own products. The price to pay will probably be a segmentation and verticalization of the SIEM market, in which the leading vendors will each have their own solution (not so optimized for competitor technologies) at the expense of generalist SIEM vendors.

On the other hand, APTs are alive and kicking, keep on targeting US defense contractors (Raytheon is the latest) and are also learning to fly through the clouds. Moreover, they are well hidden, considering that, according to the Security Intelligence Report Volume 11 issued by Microsoft, less than one per cent of exploits in the first half of 2011 were against zero-day vulnerabilities. That 1% makes the difference! And it is a big difference!

Information, The Next Battlefield

October 5, 2011 1 comment

Today the information security arena has been shaken by two separate, although similar, events: IBM and McAfee, two giants in this troubled market, have separately decided to make a decisive move into the Security Information and Event Management (SIEM) market by acquiring two privately held leading companies in this sector.

As a matter of fact, almost simultaneously, IBM officially announced the acquisition of Q1 Labs while McAfee declared its intent to acquire the privately owned company NitroSecurity.

Although they are part of different tactics, the two moves follow, in my opinion, the same strategy, which aims to build a unified and self-consistent security model: a complete security framework must not only provide information but also the intelligence to manage it. Information is power, and security is no exception to this rule.

But in order to be a real power, information must be structured, and here comes the key point. Both vendors are leading providers of network and host intrusion prevention solutions, a heritage of the acquisitions of ISS by IBM and IntruShield by McAfee, and hence have the ability to capture security events from endpoints and networks: they definitely have the ability to provide the information, but they lacked the adequate intelligence to correlate and manage it in order to make it structured.

This is completely true for McAfee, which (at least until today) lacked a SIEM solution in its portfolio and needed to rely on its SIA-certified SIEM partners (NitroSecurity was, of course, certified as a Sales Teaming Partner, the highest level). But in part it is also true for IBM which, despite the Micromuse acquisition and its troubled integration with Tivoli, was never able to become a credible player in this market, confined to the boundaries of the various (magic) quadrants.

Now they can make a decisive change to their positioning and also leverage a powerful Trojan horse (information management) to push their technologies towards new customers and market segments.

Is it merely a coincidence that another leading provider of SIEM solutions (ArcSight) belongs to a company (HP) which also has in its portfolio TippingPoint (part of the 3Com acquisition), a leading provider of network IPS?

Event detection and event correlation (and management) are converging in the new unified security model; generalist SIEM vendors are advised…

Some Random Thoughts On The Security Market

May 10, 2011 1 comment

The intention of UK-headquartered Sophos to acquire Astaro, the privately held security company co-headquartered in Karlsruhe, Germany and Wilmington, Massachusetts (USA), is simply the latest effect of the process of vendor consolidation at work in the information security market. It is also the trigger for some random thoughts…

In the last two years a profound transformation of the market has taken place, with the birth (and subsequent growth) of several giant security vendors capable of gathering the different areas of information security under their protective wings.

The security model is rapidly converging toward a framework which tends to collect, under a unified management function, the different domains of information security, which, according to my personal end-to-end model, may be summarized as follows:

  • Endpoint Security, including the functions of antivirus, personal firewall/network access control, host IPS, DLP and encryption. This component of the model is rapidly converging toward a single concept of endpoint that includes all types of devices: server, desktop, laptop and mobile;
  • Network & Content Security, including the functions of firewall, IPS, and web and e-mail protection;
  • Application Security, including the areas of web/XML/database firewall and (why not) proactive code analysis;
  • Compliance, including the functions of assessment and verification of the security posture of devices and applications;
  • Identity & Access Management, including the functions of authentication and secure data access;
  • Management, including the capability to manage all the above domains from a single location, with an RBAC model.

All the major players are moving quickly toward such a unified model, starting from their traditional battlefields: some vendors, such as McAfee and Symantec, initially moved from the endpoint domain, which is their traditional strong point. Other vendors, such as Check Point, Fortinet, Cisco and Juniper, moved from the network, filling all the domains of the model either directly with their own technology or by means of dedicated acquisitions and tailored strategic alliances. A third category is composed of “generalist” vendors which were not initially focused on information security but became so by means of specific acquisitions. This is the case of HP, IBM and Microsoft (in rigorous alphabetical order), which come from a different technological culture but are trying to become key players through strategic acquisitions.

It is clear that in such a complicated market the position and the role of the smaller, vertical players is becoming harder and harder. They may “hope” to become prey for bigger fish, or make acquisitions themselves in order to reach the “critical mass” necessary to survive.

The acquisition of Astaro by Sophos should be viewed in this scenario: from a strategic perspective Sophos sits permanently among the leaders of the Gartner Magic Quadrant, but two of its three companions there (Symantec and McAfee; the third is Trend Micro) are rapidly expanding toward the other domains (and McAfee, meanwhile, has been acquired by Intel). In any case all these competitors are significantly larger than Sophos, which is reflected in revenues: in FY 2010 they made respectively 6.05, 2.06 and 1.04 B$, against Sophos’ approximately 260 M$, about one fourth of the smallest of the three (Trend Micro, which is, like Sophos, a privately owned company).

In perspective, the acquisition may be even more appealing and interesting for Astaro, which is considered one of the most visionary players in the UTM arena, with a primary role in the European market. Its position with respect to the competition is also more complicated, since its main competitors are firms such as Fortinet, Check Point and SonicWall, all of which are much larger (as an example, Check Point’s revenues were about 1.13 B$ in FY 2010, which sounds impressive compared with the 56 M$ made by Astaro in the same fiscal year).

In this scenario, the combined company aims for $500 million in 2012.

Last but not least, both companies are based in Europe (in England and Germany respectively) and can rely on a US headquarters in Massachusetts.

From a technological perspective, the two vendors are complementary, and the strategy behind the acquisition is well summarized by the following passage from the acquisition FAQ:

Our strategy is to provide complete data and threat protection for IT, regardless of device type, user location, or network boundaries. Today, we [Sophos] offer solutions for endpoint security, data protection, and email and web security gateways. The combination of Sophos and Astaro can deliver a next generation set of endpoint and network security solutions to better meet growing customer needs […]. With the addition of Astaro’s network security, we will be the first provider to deliver truly coordinated threat protection, data protection and policy from any endpoint to any network boundary.

Sophos lacks a network security solution in its portfolio, and the technology from Astaro could easily fill the gap. On the other hand, Astaro does not own a home-built antivirus technology for its products (so far it has used the ClamAV and Avira engines to offer a double layer of protection), and the adoption of Sophos technology (considered one of the best OEM antivirus engines) could be ideal for its portfolio of UTM solutions.

Moreover, the two technologies fit well together to build an end-to-end security model: as a matter of fact, information security is breaking the boundary between endpoint and network (as the threats already did). Forced to adapt to the new blended threats, which often use old traditional methods to exploit 0-day vulnerabilities on the endpoint, technologies like intrusion prevention, DLP and network access control typically cut across different elements of the infrastructure, and this explains the rush of several players (as Sophos did in this circumstance) to enrich their security portfolios with solutions capable of covering all the information security domains.

Just to get an idea, have a look at some of the acquisitions made by the main security players in the last few years (sorry for the Italian comments). Meanwhile, the other lonely dancers (that is, the companies currently facing the market on their own) are advised…

Cisco

HP

IBM

Intel

Microsoft

Check Point

Fortinet

Juniper Networks

Symantec

Driving Through The Clouds

April 8, 2011 1 comment

How many times, stuck in traffic on a hot August day, have we hoped to have a pair of wings to fly through the clouds, free from the wreckage of burning metal.

Unfortunately, at least for me (even if my surname, translated into English, would sound exactly like Sparrows), no wing has miraculously popped up to save me so far. Nevertheless I am quite confident that, in the near future, I will be saved by the clouds even if I will not be able to fly; or, better said, I will be saved by cloud technologies, which will help me, and the other poor drivers bottled up between the asphalt and the hot metal under the ruthless August sun, to avoid endless and unnecessary traffic jams on Friday afternoons.

Some giants of information technology (Cisco and IBM in primis) are exploring and experimenting with such solutions, aimed at providing drivers with real-time information about traffic and congestion in order to suggest the optimal route. In this way they will provide a benefit to the individual, sparing him a further amount of unnecessary stress, and to the community as well, helping fight pollution and making the whole environment more livable and enjoyable.

The main ingredients of this miraculous technological recipe are mobile technologies and cloud technologies, and the reasons are easy to understand: everybody always carries a smartphone, which is an invaluable real-time probe and a source of the precious data needed to model a traffic jam (assuming it will ever be possible to model a traffic jam in the middle of the Big Ring of Rome). As a matter of fact, a smartphone can provide real-time traffic information correlated with important parameters such as GPS position, average speed, etc.

Cloud technologies provide the engine to correlate information coming from mobile devices (and embedded devices) belonging to many different vehicles, providing the dynamically allocated computational resources needed to aggregate, and make coherent, data from many moving sources in different points of the same city or of different interconnected cities. Cloud technologies may act as a single, independent point of collection for the data gathered on each device, dynamically allocating resources on demand (suppose there are, at the same moment, two different jams, one of which is growing at an exponential rate and progressively requires more and more computational resources), and providing to the individual (and to the city administrators) a comprehensive real-time picture, coherent and up to date (nobody would want to be led by his navigator to a diversion with a traffic jam much worse than the one which caused the diversion in the first place).
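
Purely as a hypothetical illustration of the kind of aggregation such a cloud back end would perform (the road segment names, the sample format and the congestion threshold are my own assumptions, not taken from the Cisco or IBM projects), here is a minimal sketch that groups smartphone probe samples by road segment and flags the congested ones:

```python
# Hypothetical sketch: aggregate smartphone probe data per road segment
# and flag congestion when the average speed drops below a threshold.
from collections import defaultdict

# Each sample: (road_segment, speed_km_h) as it might be reported by a phone.
samples = [
    ("GRA-exit-24", 12.0), ("GRA-exit-24", 8.5), ("GRA-exit-24", 15.0),
    ("A24-km-10",   95.0), ("A24-km-10",   88.0),
]

CONGESTION_THRESHOLD_KMH = 30.0  # assumed threshold for this illustration

def congestion_report(samples):
    speeds = defaultdict(list)
    for segment, speed in samples:
        speeds[segment].append(speed)
    report = {}
    for segment, values in speeds.items():
        avg = sum(values) / len(values)
        report[segment] = {"avg_speed": round(avg, 1),
                           "congested": avg < CONGESTION_THRESHOLD_KMH}
    return report

print(congestion_report(samples))
# {'GRA-exit-24': {'avg_speed': 11.8, 'congested': True},
#  'A24-km-10': {'avg_speed': 91.5, 'congested': False}}
```

In the real projects this aggregation would of course run continuously, at city scale and with elastically allocated resources, which is precisely where the cloud comes in.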

Of course, many consumer navigation systems already offer real-time traffic information; however, the wide adoption of cloud technologies will offer an unprecedented level of flexibility, together with the possibility of dealing with a huge amount of data and of correlating the collected information with other data sources (for instance V2V, Vehicle-to-Vehicle, and V2I, Vehicle-to-Infrastructure). From the city administration’s perspective, the collected data will be invaluable for identifying the most congested points (and driving the subsequent targeted corrective actions), and for supporting a coherent and sustainable development of the city.

Cisco and IBM are working hard to make this dream come true within a few years, with different approaches converging on the cloud: Cisco is leveraging network intelligence for a pilot project in the Korean city of Busan (3.6 million inhabitants). Cisco’s vision aims, in the third phase of the project scheduled before the end of 2014, to provide citizens with many different mobile services in the cloud, with a Software-as-a-Service approach. Those services are dedicated to improving urban mobility, the impact of distance, energy management and safety. A similar project has recently been announced for the Spanish city of Barcelona.

The IBM project, more focused on applications, is called The Smarter City and aims to integrate all the aspects of city management (traffic, safety, public services, etc.) within a common IT infrastructure. A few days ago it was announced that some major cities of the globe, for instance Washington and Waterloo (Ontario), will participate in the initiative.

Even if the cloud provides computing power, dynamism, flexibility and the ability to aggregate heterogeneous data sources at an abstract layer, a consistent doubt remains, represented by the security issues of the infrastructure… Apart from return-on-investment considerations (for which there are not yet consistent statistics, because of the relative youth of the case studies depicted above), similar initiatives will succeed in their purpose only if supported by a robust security and privacy model. I have already described in several posts the threats related to mobile devices, but in this case the cloud definitely makes the picture even worse, because of the centralization of the information (although paradoxically this may also be an advantage, if one is able to protect it well) and the coexistence of heterogeneous data, even though logically separated, on the same infrastructure. As a consequence, compromising the single point that contains all the data coming from the heterogeneous sources that govern the physiological processes of a city could have devastating impacts, since the system would be affected at different levels and the users across different services. Not to mention, in case of wider adoption of these technologies, the ambitions of cyberterrorists, who could, with a single cyber attack, cripple the major cities around the globe.

Application Security: What’s Next?

April 6, 2011 3 comments

In the wake of the infamous LizaMoon, which has flooded an impressive number of databases all over the world with SQL injection, infecting more than 1,500,000 URLs according to Google Search, the next frontier of information security to which security vendors are likely to move is the branch of application security. The latest vendor, in order of time, to make an acquisition (just a couple of days before LizaMoon was detected) was Intel’s McAfee, which decided to enter the database security market (estimated at more than $600 million in 2012) by acquiring Sentrigo, a Santa Clara based company focused on database security, a former member of the SIA Technology Partnership Program (McAfee Security Innovation Alliance) and already linked to McAfee by an OEM partnership.
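
For readers unfamiliar with the class of flaw LizaMoon exploited, here is a minimal, generic sketch (my own illustration, unrelated to the actual LizaMoon code): building a query by string concatenation lets attacker-controlled input rewrite the SQL statement, while a parameterized query keeps it as plain data.

```python
# Generic SQL injection illustration (not LizaMoon itself), using sqlite3.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

malicious_input = "nobody' OR '1'='1"

# Vulnerable: user input is concatenated straight into the statement,
# so the OR '1'='1' clause returns every row.
vulnerable = f"SELECT * FROM users WHERE name = '{malicious_input}'"
print(conn.execute(vulnerable).fetchall())   # [('alice', 'admin'), ('bob', 'user')]

# Safe: a parameterized query treats the whole string as a literal value.
safe = "SELECT * FROM users WHERE name = ?"
print(conn.execute(safe, (malicious_input,)).fetchall())  # []
```

Web application firewalls and database monitoring solutions of the kind discussed below address exactly this class of attack from the infrastructure side.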

The red infosec colossus of Santa Clara is just the latest player to enter this market, following the example of IBM, which has offered a complete application security solution since 2009 thanks to the acquisitions (in rigorous chronological order) of DataPower (web application/XML security), Ounce Labs (code analysis) and Guardium (database security), a set of solutions which form respectively the WebSphere, Rational and InfoSphere Guardium security offerings.

McAfee and IBM are joined by Fortinet, another important security player which has been active in this field for some years. Fortinet has been investing in database and application security since 2006 and, even if it lacks a code analysis solution, it offers a portfolio which extends to the database (scanning and monitoring) level through the acquisition of IPLocks, and to the XML/application firewall level thanks to its FortiWeb appliances.

As you may notice, the three examples above show particularly clearly how security is now converging towards application security. Although McAfee, Fortinet and IBM have very different backgrounds, they are converging on a comparable application security offering: McAfee approached the problem from endpoint security, which is its historical strength, IBM from content security, since its adventure in security started with the acquisition of ISS, and finally Fortinet from network security, well represented by its FortiGate appliances.

According to my personal model, the complete cycle of application security moves on four levels: training of developers is the first level and the necessary foundation upon which an application security project is built. Where the ability (and security awareness) of developers does not reach, vulnerability assessment/penetration testing (the second level) may be enforced to check the security level of the applications. Moving to a more “technological” plane, there are two more levels: they consist respectively of code analysis (a preventive measure) and XML/application/database security solutions implemented with dedicated software or appliances (an infrastructural measure). Please also consider that (a not so secondary aspect) these kinds of solutions are also driven by increasingly stringent regulations, such as PCI-DSS, and emerging “de facto” standards such as OWASP (the Open Web Application Security Project).

While IBM is currently the only vendor to cover all three technological areas of application security (code analysis, XML/web application security and database security), in addition to McAfee and Fortinet there are other vendors at the door, looking at this market with great interest. They start with Cisco Systems, endowed with a great ability to execute but currently focused primarily on its network-centric approach by means of its ACE family of XML firewalls, and HP which, even if it currently lacks an XML/WAF or database security solution, is approaching the world of application security starting from code analysis, thanks to the acquisition of Fortify, considered by many observers the market leader in this field.

Actually, alongside these vendors there are more players which, even if more focused on network security, nevertheless look carefully at this market by offering niche solutions. This is the case, for instance, of Check Point, which combines its traditional firewall modules (or software blades, according to the current terminology) with web security functions specifically tailored to application threats, or of Citrix, which approaches the problem from the application acceleration/distribution perspective.

It is likely that the market value and the emotional drive of LizaMoon will soon bring further earthquakes to this market. In my honest opinion, the next to fall will be two more important partners of the McAfee SIA focused on application security: Imperva (web application/XML firewall) and Veracode (code analysis) are well advised…

The (Green) City In The Clouds

The title of this post apparently recalls the romantic Cloud City, the city in the clouds that marks the deeds of Han Solo and Lando Calrissian in Episode V of Star Wars. Actually, you only need to add an ecological adjective to Cloud City to create the term Green Cloud City, which identifies the project that three giants of the industry, Cisco, IBM and Microsoft (in rigorous alphabetical order), are working on along three parallel tracks, with the intention of building the city of the future, where mobility, green ICT and cloud technologies come together to provide advanced services to citizens.

Cisco’s project is part of a wider initiative called Smart+Connected Communities, which aims to provide citizens with collaboration services and advanced connectivity thanks to the intelligence of the network. The pilot project is currently under way in the Korean city of Busan (3.6 million inhabitants) and aims, by 2014, to provide citizens with a set of mobile, shared services hosted in the cloud, intended to improve urban mobility, the impact of distance, energy management and overall safety.

The pillars of the initiative are the cloud and mobile technologies: the former provides the infrastructure, of the platform-as-a-service (PaaS) type in the first phase, needed to develop the mobile applications, and of the software-as-a-service (SaaS) type in phase 2, planned for 2012, when the first services (for example document management and billing automation) will be available to all citizens. As you may easily guess, mobile are instead the terminals that will be able to use the applications starting from phase 2, an application access that will reach its full accomplishment in phase 3, planned for 2014, when the applications will be accessible to everyone. The reason for this combination in an urban context is quickly explained: mobile terminals are the citizens’ access point to the services; they can provide strong authentication and, at the same time, supply the real-time information needed for the ecosystem to work. The cloud, on the other hand, provides the infrastructure that guarantees the computing power, the dynamism and the flexibility needed to manage such a huge amount of data, and towards which the technologies of Cisco’s partners in this project are migrating.

The IBM project is called The Smarter City and aims to integrate all the aspects of managing a city (traffic, safety, citizen services, etc.) within a common IT infrastructure. A few days ago it was announced that some major cities of the globe, such as Washington and Waterloo (Ontario), will join the initiative. The case of Washington is particularly significant because it shows how the unified management of a city’s parameters through a common intelligence (and a common software infrastructure) can bring benefits to users on multiple levels. Just think, for example, of a utility that is able to optimize consumption thanks to smart meters (intelligent bidirectional consumption meters equipped with an IP stack for communications) and, at the same time, to optimize maintenance interventions thanks to the possibility of knowing traffic conditions in real time, as well as other mobile services (billing, consumption checks, etc.) thanks to the support of mobile technologies. For this initiative too, virtual clouds loom on the horizon, since the project can be integrated with a city’s IP infrastructure but can also be hosted in the cloud to allow, especially for smaller cities, the logically separated coexistence of different entities on the same technological infrastructure.

Microsoft has also jumped on the bandwagon of green ICT on a city scale (and beyond), through its SERA (Smart Energy Reference Architecture) initiative, dedicated to the creation of a common framework for the interoperability of different smart metering devices. The most recent project is the one carried out with the French energy giant Alstom. In this case as well, technological clouds stand out in Microsoft’s Azure sky, in the sense that the project makes significant use of cloud technologies.

Obviously, in all the cases highlighted above the cloud provides computing power, dynamism, flexibility and the ability to aggregate such heterogeneous data sources at an abstract layer, but one (rather substantial) doubt remains… Security… Regardless of the return on investment (for which there is not yet a consistent body of evidence, given the relative youth of the cases mentioned above), similar initiatives will succeed in their purpose only if supported by a solid security and privacy model. As I have already had occasion to explore in the case of smart grids, opening up to the IP world in itself exposes systems that were not natively conceived to be open to new kinds of threats. The cloud certainly makes the picture even more delicate, because it centralizes the point where information is processed (although paradoxically this could also be an advantage) and allows heterogeneous data, even though logically separated, to coexist on the same infrastructure. By compromising the single point that contains all the data, coming from the heterogeneous sources that govern the physiological processes of a city, the impacts would be extremely damaging, since they would affect the system at different levels and the users across different services. Not to mention the ambitions of cyberterrorists, who could, with a single cyber attack (and therefore with a high chance of hiding themselves), bring the major cities of the globe to their knees.

Anyway, don’t worry: it will still take some time before cities reach such an advanced model. Cars will certainly get there first. But for some reason that does not reassure me at all. And one doubt remains: in your opinion, among Cisco, IBM and Microsoft, which one will be the first to be breached? I don’t know why, but I am afraid I already know the answer…

The Cloud? It Has Just Been Born (at NATO)!

December 24, 2010 Leave a comment

2011 will be the year of the definitive consecration of the cloud (above all for its security implications). Confirming this comes the news that NATO will adopt IBM cloud technology for its headquarters in Norfolk.

Although a massive roll-out across the whole Atlantic Alliance is foreseen, it is better not to get carried away by easy enthusiasm: at the beginning the system will be used by a few users within a restricted scope (and isolated from the Internet).

The cloud makes it possible to consolidate, standardize (and abstract) the different technologies and data sources that today make up the NATO infrastructure; moreover, it fits perfectly the logistic and process requirements of the Atlantic Alliance, an organization which, however distributed and fragmented, must follow common processes. In the words of Johan Goossens, Technology Manager of the Norfolk headquarters:

Let’s say a Dutch soldier is wounded on the battlefield. He has to be evacuated by a German helicopter and taken to an American hospital. It gets very complex in international scenarios. We hope to create a pipeline where all this information can flow.

The cloud, therefore, seems to be the solution not only to the fragmentation of infrastructures but also to the fragmentation of processes, since it makes it possible to create an infrastructure in which information can flow end to end, regardless of the nature of the data and of its geographical location.

And perhaps in this case the security problem is definitively solved… with physical separation!
