Do you remember the intrepid Jeff Goldblum injecting malicious code into the alien mothership in one of the most famous scenes of Independence Day? Relax, no alien invasion is underway; a similar event simply occurred when US drones were targeted by a common "civil" keylogger malware.
Of course, no foreign power plugged any malicious ship into US facilities; what really happened was much simpler and more mundane: a hard drive accidentally infected the Ground Control System at Creech Air Force Base in Nevada.
This does not surprise me, since I have written several posts about the growing use of consumer technologies for military purposes (although I should have included consumer anti-malware software as well); moreover, I also predicted malware specifically targeting military planes. Although this is not exactly what happened, there are several points in common with my prediction, essentially the fact that consumer technologies (such as ordinary PCs) open security doors inside sophisticated military weapons.
So, at this point, it should not be surprising that, as Wired reports, a computer virus has infected Predator and Reaper drones, logging pilots' keystrokes during their missions over Afghanistan and other warzones.
The virus was detected nearly two weeks ago at the Ground Control System (GCS) at Creech Air Force Base in Nevada and has not prevented the drones from flying their missions. Nevertheless, it has shown unexpected resilience, so that multiple efforts were necessary to remove it from Creech's computers, Wired reports.
Although Fox News quotes a senior Air Force source saying that Wired's story is "blown out of proportion" and "vastly overwritten", this event highlights the risks of using standard technologies to control sophisticated military weapons that play a central role in both America's conventional and shadow wars, allowing U.S. forces to attack targets and spy on foes without risking human lives.
Beyond their native security holes (for instance, the footage is transmitted in the clear), drones are just computers, after all, controlled by standard PCs that may get sick like any other civilian machine.
Although the malware seemed benign, it is still not clear how it made its way inside the systems and, above all, since it affected both classified and unclassified systems, whether it was able to leak information to a remote source. After all, a keylogger can steal whatever information is typed on the keyboard used to control the drone. As the famous aviation expert David Cenciotti said:
Do you want to know what a keylogger can grab fm a Predator control station? Think to your keyboard inputs when playing w/ Flight Simulator.
Maybe the virus spread accidentally: the Ground Control Stations handling the more exotic operations are top secret, and none of the remote cockpits are supposed to be connected to the public internet, which should make them immune to viruses and other network security threats.
Unfortunately, hard disks and pen drives may build bridges between public and classified networks, and this is likely what happened at Creech, since the Predator and Reaper crews use removable hard drives to load map updates and transport mission videos from one computer to another. The same hard drives could have spread the malware and, as a consequence, drone units at other Air Force bases worldwide have now been ordered to stop using them.
This is not the first time an infection has spread through a hard drive: in late 2008, for example, removable drives helped introduce the agent.btz worm to hundreds of thousands of Defense Department computers. It looks like the Pentagon is still disinfecting machines, three years later.
Curiously, the virus proved very resistant to digital vaccines: after several attempts to remove it with standard procedures (following removal instructions posted on the website of the Kaspersky security firm), the only safe method was to wipe the infected hard drives and rebuild them from scratch, a time-consuming operation. In other words, sophisticated military weapons and technologies suffer the same issues as civilian users (how many Windows reinstallations from scratch after a malware infection?). On the other hand, the drone virus was detected by the military's Host-Based Security System (HBSS), a flexible, commercial-off-the-shelf (COTS)-based application. If you look carefully at the HBSS web site, you will also be able to identify the commercial security technology that lies behind it.
Is it time for drones to be natively equipped with anti-malware software?
- Exclusive: Computer Virus Hits U.S. Drone Fleet (wired.com)
As you probably know, my birthday post for Android malware earned a mention from Engadget and Wired. Predictably (though not to me), the Engadget link was flooded with comments from Android supporters and adversaries, with possible troll infiltrations, to the point that the editorial staff decided to disable comments on the article. The effect was so surprising that someone even insinuated, among other things, that I had been paid to talk s**t about Android.
Now let me take some rest from this August Italian sun and try to explain why I decided to celebrate this strange malware birthday for Android.
First of all, I want to make one thing clear: I currently own an Android device and have convinced, where possible, all my relatives and friends to jump on Android. Moreover, I consider the Google platform an inseparable companion for my professional and personal life.
So what's wrong? If you scroll the malware list, you may easily notice that the malware samples always require explicit consent from the user; so, at first glance, the real risk is the extreme trust that users place in their mobile devices, which are not considered "simple" phones (even if smart) but real extensions of their personal and professional lives.
You might say that this also happens with traditional devices (such as laptops), but with mobile devices there is a huge social and cultural difference: users are not aware that they carry dual-core (very soon quad-core) mini-PCs in their pockets, and are not used to applying the same attention they reserve for their old-world traditional devices. The small display size also makes these devices particularly vulnerable to phishing (consider, for instance, the Android.GGTracker malware).
If we focus on technology instead of culture (not limiting the landscape to mobile), it is easy to see that developing malware (which nowadays is essentially a cybercrime activity) is a trade-off between factors affecting the potential target, including at least its level of diffusion and its value for the attacker (in a mobile scenario, the value corresponds to the value of the information stored on the device). The intrinsic security model of the target is, at least in my opinion, a secondary factor, since the effort to overcome it is simply commensurate with the value of the potential plunder.
What does this mean in simple words? It means that Android devices are growing exponentially in market share and are increasingly being used for business too. As a consequence, there is a larger audience for attackers and a greater value for the information stored (belonging to the owner's personal and professional spheres), and the sum of these factors is inevitably attracting cybercrooks to the platform.
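The trade-off described above can be sketched as a toy model. This is purely my own illustration (the function name and the numbers are invented for the example, not a formal metric): a platform's appeal to a malware author grows with its diffusion and the value of its data, and shrinks with the effort needed to defeat its security model.

```python
# Toy model (an illustration, not a formal metric) of the attacker's trade-off:
# appeal grows with diffusion and data value, shrinks with the cost of
# overcoming the platform's security model.

def attractiveness(market_share: float, data_value: float, security_effort: float) -> float:
    """Relative appeal of a platform to a malware author.

    market_share    -- diffusion of the platform (0..1)
    data_value      -- worth of the information stored on the device
    security_effort -- cost of defeating the platform's security model
    """
    return (market_share * data_value) / security_effort

# A platform whose market share and data value both grow becomes more
# attractive even if its security model is unchanged:
early = attractiveness(market_share=0.10, data_value=1.0, security_effort=2.0)
later = attractiveness(market_share=0.40, data_value=3.0, security_effort=2.0)
print(later > early)  # True
```

The point the numbers make is the one in the text: the security model is a divisor, not a veto, so rising share and rising data value can outweigh it.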
Have a look at the chart plotting Google OS market share in the U.S. (comScore data) against the number of malware samples over the last year (market share data for June and July are currently not available):
So far the impact of the threats is low, but what makes the Google platform so prone to malware? Certainly not vulnerabilities: everything with a line of code is vulnerable and, at least for the moment, a recent study from Symantec found only 18 vulnerabilities in the Google OS against 300 found in iOS (please do not question the different ages of the two OSes; I only want to show that vulnerabilities are common and that, in this respect, Android is comparable with its main competitor).
Going back to the initial question, there are at least three factors that make Android different:
- The application permission model relies too heavily on the user,
- The security policy for the Android Market has proven to be weak,
- The platform makes it too easy to install applications from untrusted sources via the sideloading feature.
As far as the first point is concerned: some commenters correctly noticed that apps do not install themselves on their own, but need, at least on first installation, the explicit consent of the user. Well, I wonder: how many "casual users", in your opinion, regularly check permissions during application installation? And, even worse, what about business users, the likely targets of cybercrime, who consider the device a mere work tool: do you really think they check app permissions during installation? Of course, a serious organization should mitigate the associated risks with a firm device-management policy before considering a wide deployment of such devices, most of all among CxOs; but unfortunately we live in an imperfect world, too often fashion and trends move faster (and push harder) than security policies, and the device ends up being used mainly for things other than its primary business role, hugely increasing the risks.
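To make the point concrete, here is a hypothetical AndroidManifest.xml excerpt of the kind of permission list a user is asked to approve with a single tap at install time. The package name and the permission set are my own invention for illustration, not taken from any specific malware sample:

```xml
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
          package="com.example.freegame"> <!-- hypothetical package name -->
    <!-- Plausible for a free game: -->
    <uses-permission android:name="android.permission.INTERNET" />
    <!-- Far less plausible for a game, yet routinely waved through: -->
    <uses-permission android:name="android.permission.READ_CONTACTS" />
    <uses-permission android:name="android.permission.SEND_SMS" />
    <uses-permission android:name="android.permission.READ_SMS" />
    <uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />
</manifest>
```

A premium-SMS trojan of the Android.GGTracker family relies precisely on the user approving SMS-related permissions without reading them: once granted, nothing else stands between the app and the victim's phone bill.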
This point is a serious security concern; as a matter of fact, many security vendors (in my opinion, the security industry is lagging in this area) offer device-management solutions aimed at complementing the native application access control model. Besides, it is no coincidence that some rumors claim Google is going to modify (enhance) the app permission security process.
As far as the second point is concerned (the Android Market security policy): after the DroidDream affair (and the subsequent fake security update), it is clear that the Android Market publishing (and security) model needs to be revised, making it more similar to the App Store's. There are several proposals in this regard; it is not my intention to discuss them here, only to stress that the issue is real.
Last but not least, sideloading is something that makes Android very different from other platforms (read: Apple): Apple devices do not allow installing untrusted apps unless you jailbreak them, while Android simply requires the user to flag an option. (By the way, many vendors are opening their Android devices to rooting or alternate ROMs: consider, for instance, LG, which in Italy does not void the warranty for rooted devices, or HTC, which stated on May 27 that it will no longer lock the bootloaders on its devices.)
So, definitively, the three factors above (together with growing market share) make Android more appealing to malware developers, and this is due not to an intrinsic weakness of the platform but to a security model that is mainly driven by the user rather than locked down by the manufacturer, as happens in Cupertino's case.
Apple and Android (almost) never agree on anything, but the issue of location tracking has worked a miracle: if there is one point that Cupertino and Mountain View have in common, it is the bad habit of tracking the user's position without his or her knowledge.
After the well-known issue of the iPhone's hidden (so to speak) location tracking, Wired was able to discover why Apple devices collect this kind of data, unearthing a 13-page letter sent by Apple's general counsel Bruce Sewell in July 2010 explaining its location-data-collection techniques. The letter was written in response to a request from Congressmen Joe Barton and Edward Markey asking Apple to disclose such practices (incidentally, Markey authored the "Do Not Track" bill to stop online companies from tracking children).
Although no comment has arrived from Apple so far, I was disappointed to discover, from a Cisco blog post dealing with the same topic, that a similar bad habit has been detected in Google's Android (at least Android requires root permission to grab the data).
In both cases, the alleged main purpose of this data collection is to provide better location services. My feeling, instead, is that the main beneficiary is not the user, but the marketing and/or advertising agencies that could come into possession of the data.
Interestingly, the iPhone 3GS Software License Agreement states that:
By using any location-based services on your iPhone, you agree and consent to Apple’s and its partners’ licensees’ transmission, collection, maintenance, processing and use of your location data to provide such products and services.
Google's Privacy Policy, for its part, states:

Location data – Google offers location-enabled services, such as Google Maps and Latitude. If you use those services, Google may receive information about your actual location (such as GPS signals sent by a mobile device) or information that can be used to approximate a location (such as a cell ID).
Until now, nothing special, except for the fact that Latitude asks for the user's consent to share the data with others, which, if I am not wrong, does not occur for Google Maps. But the interesting point comes a few lines below:
In addition to the above, we may use the information we collect to:
- Provide, maintain, protect, and improve our services (including advertising services) and develop new services; and
- Protect the rights or property of Google or our users.
Meanwhile, Minnesota Senator Al Franken and the Attorney General of Illinois are separately pressing Apple and Google to provide more information about the location data they collect about their end users…
- Lawmakers quiz Apple, Google about location tracking (infoworld.com)
- Grab Your Data? There’s An App For That! (paulsparrows.wordpress.com)
- IPhone Stored Location Even if Disabled (online.wsj.com)
- Apple, Google Collect User Data (online.wsj.com)
- iPhone Location Tracking: Important, Even if it Doesn’t Matter to You (blogs.cisco.com)